Technician Conference – Network Overview and Proposed Enhancement 2008 – 2012
Technician Conference – Network overview and proposed enhancement
2008 – 2012
17th March 2008
Presented by: Stuart Tilley – Network & Systems
Overview
• Introduction
• Current Network Overview
• Proposed Technology Refresh
– Core Network
– Access Network
– Access bandwidth
– URL filtering
– Edge CPE
• Summary
Introduction
• Current network implemented in April 2002
• Designed and built by Synetrix, a key LGfL service provider
• Emerging technology (MPLS) and vendor choice have provided a platform for:
– Delivery of highly available and scalable broadband services
– A secure and safe educational environment
– New service development and delivery
– A shared community network (LPSN)
• Network refresh – keeping pace with technology to 2012 and beyond
The London Network – Physical Topology
[Figure: physical topology map – three core network nodes (Telehouse, Earls Court and Park Royal) linked by 10Gbps core links; aggregation points (APs) at Croydon, Purley, Merton, Bromley, Bexleyheath, Welling, Lewisham, Richmond, Hayes, Harrow, Romford, Barnet, Haringey, Newham, Waltham Forest, Enfield, Lambeth and Camden, connected by 100Mbps and 1Gbps nodal loops.]
The London Network
Physical Network Topology
• 3 core locations and 21 Aggregation Points serving 33 London authorities
• Resilient dark fibre connecting core locations (10Gb/sec – OC192 SDH)
• APs connected to the core by resilient nodal loops, currently 1Gb or 100Mb capacity
• Resilient service hosting – SLB
• Resilient Tier 1 ISPs (Thus, Abovenet, UKERNA, BBC) – total Internet capacity 6Gbps
• All broadband services delivered over fibre (scalable bandwidth)
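The resilience claim above (every AP reaches the core over a loop, so no single link failure isolates it) can be sanity-checked with a small graph model. This is an illustrative sketch, not operational tooling, and the subset of loops shown is an assumption drawn from the topology map:

```python
# Model a few nodal loops and verify each AP survives any single link failure.
# Node and link names are assumptions for illustration only.
from collections import defaultdict, deque

CORES = {"Telehouse", "EarlsCourt", "ParkRoyal"}

# A tiny subset of the nodal loops (illustrative, not the full 21 APs)
links = [
    ("Telehouse", "Romford"), ("Romford", "BexleyHeath"),
    ("BexleyHeath", "Welling"), ("Welling", "Telehouse"),
    ("EarlsCourt", "Merton"), ("Merton", "Croydon"),
    ("Croydon", "EarlsCourt"),
]

def reaches_core(node, adj):
    """Breadth-first search: can this node still reach any core location?"""
    seen, queue = {node}, deque([node])
    while queue:
        n = queue.popleft()
        if n in CORES:
            return True
        for m in adj[n]:
            if m not in seen:
                seen.add(m)
                queue.append(m)
    return False

def survives_single_link_failure(links):
    """Fail each link in turn; every AP must still reach a core node."""
    aps = {n for link in links for n in link} - CORES
    for failed in links:
        adj = defaultdict(set)
        for a, b in links:
            if (a, b) != failed:
                adj[a].add(b)
                adj[b].add(a)
        if not all(reaches_core(ap, adj) for ap in aps):
            return False
    return True

print(survives_single_link_failure(links))  # True: every AP is dual-pathed
```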
The London Network – Logical
[Diagram: logical topology (Stuart Tilley, 25/01/2006). Three 160Gbps core routers at Earls Court, Park Royal and Telehouse are linked by 10Gbps dark-fibre OC192 MPLS; Camden and other APs attach via SHDS WES 100Mbps/1Gbps or 100M–2.4Gbps dark-fibre MPLS links. Each core hosts gigabit and virtual firewalls, URL and virus filtering, email & web services and SLB on 1–2Gbps uplinks, participating in the same L2 broadcast domains as Earls Court and Park Royal. Resilient BGP4 peerings (2Gbps each) serve the Internet, UKERNA and the BBC, plus native IPv6 peering to the 6Bone. MPLS IP VPNs (VPN1–VPN3, one per LEA) span the whole core; edge sites connect at 2, 5, 10 & 100Mbps Ethernet, are configured into the appropriate VPN at any AP, and access core services via the resilient MPLS core/access network with QoS applied dependent on application.]
The London Network
Logical Network
• MPLS core network
• Dedicated RFC 2547bis Layer 3 VPNs
– Provides fully routed virtual WANs per ‘customer’ (LEA or LA)
– Totally autonomous routing policy and access control per virtual WAN – WMS v1 & v2
– Virtual WANs distributed across the complete physical network
• QoS support
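For readers unfamiliar with RFC 2547bis, a per-customer virtual WAN is typically expressed in Junos as a VRF routing instance. The fragment below is a generic, illustrative sketch – every name and number in it is hypothetical, not taken from the LGfL configuration:

```
routing-instances {
    LEA1 {                          /* hypothetical virtual WAN for one LEA */
        instance-type vrf;
        interface ge-0/0/1.100;     /* edge site placed into this VPN */
        route-distinguisher 65000:1;
        vrf-target target:65000:1;  /* governs VPN route import/export */
    }
}
```

Because each instance keeps its own routing table, each LEA gets the autonomous routing policy and access control described above while sharing the same physical core.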
Network Statistics
• Total edge bandwidth purchased: 23Gbps
• Total traffic transiting the network: 3Gbps (average)
• Total capacity of the Juniper access layer: 228Gbps
• Total capacity of the Juniper core: 480Gbps
• Total Internet bandwidth: 30Mbps (Sept 2002); today averaging over 2Gbps
• HTTP traffic via the URL service: 1.5Gbps
• Requests served from cache: 400Mbps
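A quick back-of-envelope check on the figures above: 23Gbps of purchased edge bandwidth against a 3Gbps average transit load implies a healthy statistical-multiplexing ratio. A minimal sketch of the arithmetic (the ratio is a derived number, not one quoted in the slides):

```python
# Figures taken from the statistics slide above.
edge_purchased_gbps = 23.0    # total edge bandwidth purchased
average_transit_gbps = 3.0    # average traffic transiting the network

mux_ratio = edge_purchased_gbps / average_transit_gbps
print(f"{mux_ratio:.1f}:1")   # 7.7:1 - headroom for per-site bursts
```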
Proposed Core Technology upgrade
• Upgrade existing Juniper M160 with next-generation MX960
• Fully resilient chassis (redundant hardware), such as:
– Power supplies
– Cooling fans
– Routing Engines (RE)
– Switch Control Board
• Fully resilient design/configuration
– Dual Dense Port Concentrators (DPCs), 10G + 1G
– Support resilient backbone and core switching
• JUNOS code – leading standards development
• Low-risk migration
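The value of the redundant hardware listed above can be shown with a simple availability model. The 0.999 per-unit figure is an assumption for illustration only, not a vendor specification:

```python
# Two redundant units in parallel: service is lost only if BOTH fail.
def parallel_availability(a1: float, a2: float) -> float:
    return 1 - (1 - a1) * (1 - a2)

single_re = 0.999   # assumed availability of one Routing Engine
dual_re = parallel_availability(single_re, single_re)
print(dual_re)      # ~0.999999 - redundancy cuts unavailability 1000-fold
```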
Proposed Core Technology Upgrade
Proposed MX960 core build
[Diagram: proposed MX960 core build. Three fully redundant Juniper MX960 chassis (dual Routing Engines, SCBs, PEMs and fans; 4x10GE and 40xGE DPCs) at Earls Court, Park Royal and Telehouse, interconnected by 10Gbps core links. Stacked Extreme Summit X450e-48p switches at Earls Court and Park Royal act as virtual switches providing server aggregation, with aggregated 10Gbps uplinks supporting L2 & L3 services and 1Gbps server-facing ports.]
Proposed Access Technology Upgrade
• Replace existing M10 with Juniper M10i
• Fully resilient chassis (redundant hardware), such as:
– Power supplies
– Cooling fans
– Routing Engine (RE)
– Forwarding Engine Board (FEB)
• Fully resilient design/configuration
– 2 x 1Gbps nodal loop interfaces
– 2 x 1Gbps virtual switch uplinks (initial deployment)
Proposed Access Technology Upgrade
• Replace existing Extreme S48i aggregation switch with Juniper EX4200
• Redundant power supply
• Virtual Chassis configuration (max 10)
• 48-port 10/100/1000 capability
• Architecture based on Juniper's high-end core routing products
– Packet Forwarding Engine
– Routing Engine
Proposed Access Technology Upgrade
• Fully resilient design/configuration
– Virtual chassis deployment
– Multiple 1Gbps uplinks (resilience)
[Diagram: sample AP configuration – existing vs proposed design.
Existing: a Juniper M10 AP router with Extreme Summit48si aggregation switches on resilient 200Mbps capacity links and a 100Mbps nodal loop; edge sites are served at 2, 5, 10 & 100Mbps over BT LES point-to-point fibre delivered via the ‘A’ end and ‘B’ end BT serving exchanges.
Proposed: a fully resilient Juniper M10i (redundant PSUs, routing and forwarding engines) with an EX4200 virtual switch stack (48-port 10/100/1000 switches, max 10 per stack) on a 2Gbps aggregated uplink and 1Gbps nodal loops; edge sites are served at 2, 5, 10, 100 & 1000Mbps.]
Access Bandwidth Upgrade
• All current 100Mbps nodal loops upgraded to 1Gbps:
– Merton – Croydon
– Merton – Earls Court
– Bromley – Croydon
– Bromley – Welling
– Lewisham – Welling
– Welling – Bexleyheath
– Romford – Bexleyheath
– Romford – Telehouse
– Waltham Forest – Camden
– Haringey – Camden
– Haringey – Barnet
– Hayes – Harrow
• Prevent degradation of service in the event of primary loop failure
• Enhanced traffic engineering capability
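The "prevent degradation" point can be made concrete: on a loop with one 1Gbps leg and one 100Mbps leg, losing the faster leg leaves the AP on the slow one. A minimal sketch (the two-leg loop model is an assumption, not the operational failover logic):

```python
def surviving_capacity_mbps(leg_a: int, leg_b: int) -> int:
    """Worst-case capacity after losing either leg of a nodal loop:
    whichever single leg fails, traffic re-routes over the other,
    so the worst case is the slower of the two legs."""
    return min(leg_a, leg_b)

print(surviving_capacity_mbps(1000, 100))   # 100  - mixed loop today
print(surviving_capacity_mbps(1000, 1000))  # 1000 - after the 1Gbps upgrade
```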
Access Bandwidth Upgrade
[Figure: topology map after the upgrade – all nodal loops at 1Gbps. Core network nodes at Telehouse, Earls Court and Park Royal remain linked by 10Gb core links; APs shown at Purley, Lambeth, Richmond, Harrow, Hayes, Merton, Barnet, Enfield, Camden, Haringey, Newham, Waltham Forest, Croydon, Welling, Romford, Bexleyheath, Bromley and Lewisham.]
URL Filtering Platform Enhancements
• Evaluation exercise underway: “Squid MkII” vs Bluecoat 8100
• Scaled to 2.5Gbps (N+1 resilience, 5Gbps total)
• Additional active/passive F5s deployed to scale beyond 2.5Gbps
• Current total filtered traffic: 1.5Gbps
• Expect a 500Mbps year-on-year increase
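The scaling figures above can be projected forward. This uses a straight-line growth assumption of mine, not a forecast from the presentation:

```python
current_gbps = 1.5          # filtered traffic today
growth_per_year_gbps = 0.5  # stated year-on-year increase
platform_gbps = 2.5         # filtering capacity (5Gbps total with N+1)

# Count years until the projected load first exceeds the platform capacity.
years = 0
while current_gbps + years * growth_per_year_gbps <= platform_gbps:
    years += 1
print(years)  # 3 - first year the projected load exceeds 2.5Gbps
```

On this assumption the extra F5 scale-out would be needed around year three, i.e. within the 2008–2012 refresh window.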
URL Filtering Platform Enhancements
• Disks 1–8: operating system on RAID1 (mirrored, hot-swappable disks); cached objects on RAID5 (hot-swappable) – a 4x performance benefit over current hardware
• EXT3 filesystem for the operating system; XFS filesystem for the cache, supporting stripe-aligned storage blocks for better RAID performance
• Balanced trees for fast i-node lookups – ideal for many small files (typically 25KB)
• XFS allocation groups allow concurrent (multi-threaded) access to stored objects
• SQUID on 2x 4-core CPUs, allowing 8 concurrent execution threads/processes to handle user requests and cache lookups and drive the high-performance XFS filesystem – a minimum 8x performance benefit over current hardware
• 2x 1Gbps copper Ethernet interfaces, one facing the Internet and the other facing the user – a 10x performance improvement over current hardware
• 32GB RAM for super-fast access to the most frequently accessed cached objects – a 16x performance benefit over current hardware
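Combining the cache hardware above with the statistics slide (1.5Gbps of HTTP via the URL service, 400Mbps served from cache) gives a rough sense of its effectiveness. Illustrative arithmetic only:

```python
http_mbps = 1500.0        # HTTP traffic through the URL service
from_cache_mbps = 400.0   # requests served from cache

byte_hit_ratio = from_cache_mbps / http_mbps
print(f"{byte_hit_ratio:.0%}")  # 27% of HTTP bandwidth never leaves the cache
```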
Replacement CPE
• Extreme 24e3/S200 replaced with Juniper J2320
• Features:
– Forwarding performance: 400Mbps (IMIX)
– 3DES performance: 170Mbps
– 4 onboard 10/100 ports
– 3 Physical Interface Module (PIM) slots
• ES code – combines session state information with next-hop forwarding
• MPLS support with fast reroute (resilient fibre services)
Summary
• High-availability, scalable, future-proof infrastructure
• Low-risk implementation/migration
• Continued delivery of existing network-centric services, such as:
– Securestore
– Desktop Content Control (DCC)
– Campus Monitoring Protection (CMP)
– High Definition Video Conferencing (HDVC)
– Secure Remote Access (SRA)
– Broadband Resilience Service (BRS)
• Enhanced distributed functionality – enabling new service developments such as:
– Virtual Private LAN Services (VPLS)
– Broadcast video
– High-capacity resilient broadband services
– Security services