
London Internet Exchange Point Update
Keith Mitchell, Executive Chairman
NANOG 15 Meeting, Denver, Jan 1999

LINX Update
• LINX now has 63 members
• Second site now in use
• New Gigabit backbone in place
• Renumbered IXP LAN
• Some things we have learned!
• Statistics
• What's coming in 1999

What is the LINX?
• UK National IXP
• Not-for-profit co-operative of ISPs
• Main aim to keep UK domestic Internet traffic in UK
• Increasingly keeping EU traffic in EU

LINX Status
• Established Oct 94 by 5 member ISPs
• Now has 7 FTE dedicated staff
• Sub-contracts co-location to 2 neutral sites in London Docklands:
  • Telehouse
  • TeleCity
• Traffic doubling every 4-6 months!

LINX membership
• Now totals 63
• +10 since Oct 98
• Recent UK members: RedNet, XTML, Mistral, ICLnet, Dialnet
• Recent non-UK members: Carrier1, GTE, AboveNet, Telecom Eireann, Level 3

LINX Members by Country

Second Site
• Existing Telehouse site full until 99Q3 extension ready
• TeleCity is new dedicated co-lo facility, 3 miles from Telehouse
• Awarded LINX contract by open tender (8 submissions)
• LINX has 16-rack suite
• Space for 800 racks

Second Site
• LINX has diverse dark fibre between sites (5 km)
• Same switch configuration as Telehouse site
• Will have machines to act as hot backups for the servers in Telehouse
• Will have a K.root server behind a transit router soon

LINX Traffic Issues
• Bottleneck was inter-switch link between Catalyst 5000s
• Cisco FDDI could no longer cope
• 100baseT nearly full
• Needed to upgrade to Gigabit backbone within existing site 98Q3

Gigabit Switch Options
• Looked at 6 vendors:
  • Cabletron/Digital, Cisco, Extreme, Foundry, Packet Engines, Plaintree
• Some highly cost-effective options available
• But needed non-blocking, modular, future-proof equipment, not workgroup boxes

Old LINX Infrastructure
• 5 Cisco switches:
  • 2 x Catalyst 5000, 3 x Catalyst 1200
• 2 Plaintree switches:
  • 2 x WaveSwitch 4800
• FDDI backbone
• Switched FDDI ports
• 10baseT & 100baseT ports
• Media convertors for fibre ether (>100 m)

Old LINX Topology

New Infrastructure
• Catalyst and Plaintree switches no longer in use
• Catalyst 5000s appeared to have broadcast scaling issues regardless of Supervisor Engine
• Plaintree switches had proven too unstable and unmanageable
• Catalyst 1200s at end of useful life

New Infrastructure
• Packet Engines PR-5200:
  • Chassis-based 16-slot switch
  • Non-blocking 52 Gbps backplane
  • Used for our core, primary switches
  • One in Telehouse, one in TeleCity
  • Will need a second one in Telehouse within this quarter
  • Supports 1000LX, 1000SX, FDDI and 10/100 ethernet

New Infrastructure
• Packet Engines PR-1000:
  • Small version of PR-5200
  • 1U switch; 2 x SX and 20 x 10/100
  • Same chipset as 5200
• Extreme Summit 48:
  • Used for second connections
  • Gives vendor resiliency
  • Excellent edge switch: low cost per port, 2 x Gigabit and 48 x 10/100 ethernet ports

New Infrastructure
• Topology changes:
  • Aim to be able to have a major failure in one switch without affecting member connectivity
  • Aim to have major failures on inter-switch links without affecting connectivity
  • Ensure that inter-switch connections are not bottlenecks

New backbone
• All primary inter-switch links are now gigabit
• New kit on order to ensure that all inter-switch links are gigabit
• Inter-switch traffic minimised by keeping all primary and all backup traffic on their own switches

IXP Switch Futures
• Vendor claims of 1000base proprietary 50 km+ range are interesting
• Need abuse prevention tools:
  • port filtering, RMON
• Need traffic control tools:
  • member/member bandwidth limiting and measurement
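The measurement half of this can be approximated with plain SNMP interface counters; the sketch below polls one switch port's ifInOctets twice and reports the rate. It is only an illustration: the pysnmp library, the community string, the switch address and the ifIndex are all assumptions, and a genuine member-to-member matrix would need the RMON matrix group rather than per-port IF-MIB counters.

    import time
    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, getCmd)

    def if_in_octets(host, if_index, community="public"):
        """Read IF-MIB::ifInOctets for one switch port (counter wrap ignored)."""
        err, status, _, varbinds = next(getCmd(
            SnmpEngine(), CommunityData(community),
            UdpTransportTarget((host, 161)), ContextData(),
            ObjectType(ObjectIdentity("IF-MIB", "ifInOctets", if_index))))
        if err or status:
            raise RuntimeError(err or status.prettyPrint())
        return int(varbinds[0][1])

    SWITCH, PORT, INTERVAL = "192.0.2.10", 3, 30    # hypothetical switch, port, seconds
    before = if_in_octets(SWITCH, PORT)
    time.sleep(INTERVAL)
    after = if_in_octets(SWITCH, PORT)
    print("~%.1f Mbit/s into port %d" % ((after - before) * 8 / INTERVAL / 1e6, PORT))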

Address Transition
• Old IXP LAN was 194.68.130/24
• New allocation 195.66.224/19
• New IXP LAN 195.66.224/23
• "Striped" allocation on new LAN (see sketch below):
  • 2 addresses per member, same last octet
• About 100 routers involved
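A minimal sketch of what the "striped" allocation implies, assuming (the slide does not spell this out) that a member in slot N gets the same last octet in each /24 half of the /23, i.e. 195.66.224.N and 195.66.225.N; the slot number used below is hypothetical:

    import ipaddress

    # New IXP LAN from the slide, carved out of the 195.66.224/19 allocation.
    ixp_lan = ipaddress.ip_network("195.66.224.0/23")

    def striped_pair(member_slot):
        """Return the two 'striped' addresses for one member slot.

        Assumption: slot N maps to the same last octet in each /24 half
        of the /23, i.e. 195.66.224.N and 195.66.225.N.
        """
        lower, upper = ixp_lan.subnets(new_prefix=24)   # the two /24 halves
        return (lower.network_address + member_slot,
                upper.network_address + member_slot)

    # Hypothetical member slot 42:
    a, b = striped_pair(42)
    print(a, b)                          # 195.66.224.42 195.66.225.42
    assert a in ixp_lan and b in ixp_lan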

Address Migration Plan
• Configured new address(es) as secondaries
• Brought up peerings with their new addresses
• When all peers are peering on new addresses, stopped old peerings (see sketch below)
• Swapped over the secondary to the primary IP address
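A toy version of the "all peers are peering on new addresses" check that gates dropping the old peerings; the session table, peer names and states below are illustrative only, not taken from the LINX collector:

    # Hypothetical BGP session table: (peer, peering address, session state).
    sessions = [
        ("exampleISP-A", "194.68.130.17", "Established"),   # old-LAN session
        ("exampleISP-A", "195.66.224.17", "Established"),   # new-LAN session
        ("exampleISP-B", "194.68.130.23", "Established"),
        ("exampleISP-B", "195.66.224.23", "Active"),        # not yet up on new LAN
    ]

    NEW_LAN_PREFIXES = ("195.66.224.", "195.66.225.")

    def safe_to_drop_old(peer):
        """True once this peer has an Established session on a new-LAN address,
        so its old 194.68.130/24 peering can be shut down."""
        return any(addr.startswith(NEW_LAN_PREFIXES) and state == "Established"
                   for name, addr, state in sessions if name == peer)

    for peer in sorted({name for name, _, _ in sessions}):
        print(peer, "-> drop old peering" if safe_to_drop_old(peer) else "-> keep old peering")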

Address Migration Plan
• Collector dropped peerings with old 194.68.130.0/24 addresses
• Anyone not migrated at this stage lost direct peering with AS5459
• Eventually, old addresses no longer in use

What we have learned
• ...the hard way!
• Problems after renumbering:
  • Some routers still using /24 netmask
  • Some members treating the /23 network as two /24s (see sketch below)
  • Big problem if proxy ARP is involved!
• Broadcast traffic bad for health:
  • We have seen >50 ARP requests per second at worst times
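The /23-versus-/24 netmask problem is easy to demonstrate with the standard library; the member and peer addresses below are hypothetical:

    import ipaddress

    good_if = ipaddress.ip_interface("195.66.224.42/23")   # correct new-LAN config
    stale_if = ipaddress.ip_interface("195.66.224.42/24")  # router still using a /24 mask
    peer = ipaddress.ip_address("195.66.225.17")           # a peer in the "other" /24 half

    print(peer in good_if.network)    # True  - peer is on-link, resolved directly with ARP
    print(peer in stale_if.network)   # False - router believes the peer is off-link and
                                      # looks for a next hop; if something answers proxy
                                      # ARP on its behalf, traffic is silently misdirected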

ARP Scaling Issues
• Renumbering led to lots of ARP requests for unused IP addresses
• ARP no-reply retransmit timer has a fixed time-out
• Maintenance work led to groups of routers going down/up together
  ⇒ Synchronised "waves" of ARP requests
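A toy model (not LINX measurement data) of why fixed retransmit timers plus routers restarting together produce synchronised waves, and how per-retry jitter would spread them out; the router count, timer and jitter values are assumptions:

    import random
    from collections import Counter

    ROUTERS = 100          # roughly "about 100 routers involved"
    RETRY_INTERVAL = 10    # hypothetical fixed ARP no-reply retransmit time-out, seconds
    DURATION = 60          # seconds of simulated time

    def worst_second(jitter):
        """Largest number of ARP retransmits landing in any one second.

        jitter=0 models the fixed time-out: every router retries on the same
        schedule, so requests arrive in synchronised waves.  A small random
        delay per retry de-synchronises them.
        """
        per_second = Counter()
        for _ in range(ROUTERS):
            t = random.uniform(0, jitter)            # routers come back up together
            while t < DURATION:
                per_second[int(t)] += 1
                t += RETRY_INTERVAL + random.uniform(0, jitter)
        return max(per_second.values())

    print("fixed timer :", worst_second(0.0), "requests in the worst second")
    print("2s jitter   :", worst_second(2.0), "requests in the worst second")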

New MoU Prohibitions
• Proxy ARP
• ICMP redirects
• Directed broadcasts
• Spanning Tree
• IGP broadcasts
• All non-ARP MAC layer broadcasts
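Most of these can be policed passively from a port mirror. The sketch below is a present-day illustration rather than LINX tooling: it assumes the scapy library and a placeholder interface name, and it flags Spanning Tree BPDUs, ICMP redirects and non-ARP MAC-layer broadcasts (which also catches directed broadcasts and IGP hellos sent to the broadcast address); proxy ARP needs topology knowledge a single sniffer lacks, so it is not covered:

    from scapy.all import sniff, Ether, ARP, ICMP, STP

    BROADCAST = "ff:ff:ff:ff:ff:ff"

    def flag_prohibited(pkt):
        """Print one line per frame belonging to a prohibited traffic class."""
        if pkt.haslayer(STP):
            print("Spanning Tree BPDU from", pkt.src)
        elif pkt.haslayer(ICMP) and pkt[ICMP].type == 5:
            print("ICMP redirect from", pkt.src)
        elif pkt.haslayer(Ether) and pkt[Ether].dst == BROADCAST and not pkt.haslayer(ARP):
            # directed broadcasts, IGP broadcasts and other non-ARP MAC broadcasts
            print("non-ARP MAC broadcast from", pkt.src)

    sniff(iface="eth0", prn=flag_prohibited, store=False)   # "eth0" is a placeholder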

Statistics
• LINX total traffic:
  • 300 Mbit/sec avg, 405 Mbit/sec peak
• Routing table:
  • 9,200 out of 55,000 routes
• k.root-servers:
  • 2.2 Mbit/sec out, 640 Kbit/sec in
• nic.uk:
  • 150 Kbit/sec out, 60 Kbit/sec in

Statistics and looking glass at http://www2.linx.net/

Things planned for '99
• Infrastructure spanning tree implementation
• Completion of Stratum-1 NTP server
• Work on an ARP server
• Implementation of route server
• Implementation of RIPE NCC test traffic box