Provisioning Janet
TRANSCRIPT
2
Where are we today?
»Janet6 in operation since 2013
›Transmission and IP layers both managed by the Janet NOC
›“No hidden extras” for additional capacity
»Currently undergoing a ‘mid-term upgrade’
›Core links being increased
›Backbone routers being replaced
–Greater 100GE density
–Existing T4000 platform has no further development
›Increase in capacity to regions
19/10/2016 Campus network engineering for data intensive science
3
The Janet6 backbone
[Map of the Janet6 backbone: nodes at Telehouse North, Telehouse West, Lowdham, Leeds, Equinix HEX, Equinix PG, Equinix Manchester, Erdington, Glasgow and Bradley Stoke, plus shared DCs in Slough and Leeds, connected by 4x100GE, 2x100GE and 1x100GE links.]
6
Plumbing the tubes
»34Mbit/s: 2.64 inches
»155Mbit/s: 5.64 inches
»622Mbit/s: 11.4 inches
»2.5Gbit/s: 1.9ft
»10Gbit/s: 3.8ft
»40Gbit/s: 7.6ft
»100Gbit/s: 12ft
»200Gbit/s: 17ft
»400Gbit/s: 24ft
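The diameters above follow a square-root scaling: if the pipe's cross-sectional area stands in for bit rate, diameter grows with the square root of capacity. A minimal sketch, anchored at the slide's 34Mbit/s reference point (the function name is my own; the slide gives only the numbers):

```python
import math

def pipe_diameter_inches(rate_mbps, ref_rate_mbps=34.0, ref_diameter_in=2.64):
    """Diameter of a water pipe whose cross-sectional area scales with bit
    rate, anchored at the slide's 34 Mbit/s = 2.64 inch reference point."""
    return ref_diameter_in * math.sqrt(rate_mbps / ref_rate_mbps)

for rate in (155, 2_500, 100_000, 400_000):
    d = pipe_diameter_inches(rate)
    print(f"{rate} Mbit/s -> {d:.1f} in ({d / 12:.1f} ft)")
```

Plugging in the slide's rates reproduces its figures to within rounding (e.g. 155Mbit/s gives 5.64 inches, 400Gbit/s gives roughly 24 feet).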
7
If Janet were a sewer…
Lee Tunnel, 24 feet in diameter. Photo © Thames Water
8
What is it all?
»Much of this is ‘commodity’ IP
»Top graph: traffic between Janet and GEANT
›Most external R&E traffic (except dedicated circuits)
»One instance of an overlay we’ll learn about later is LHCONE
»Bottom graph: traffic between Janet and LHCONE
9
That’s just the backbone
»Janet6 ensured we installed transmission nodes into the RNEPs
›Or what used to be called RNEPs
»Regions still have a mix of fibre and leased lines
›Have tried to move towards fibre where we know there’s a need
»Aiming to rationalise the architecture between backbone and regions
10
Site access
»Aim for access circuits to be contention-free
›Sometimes usage surprises us
–Upgrades have lead times, even if we have fibre to the door
›Account managers are the interface between Jisc and members
»QoS is not a bag of worms that we want to open
11
Performance problems
»As far as the NOC is concerned, a performance problem is ‘just another form of network fault’
›Report in the usual way via the Janet Service Desk
»Realising that some forms of fault require more specialist knowledge
›Hence the end-to-end performance initiative
12
Options
»Can we continue to scale?
›Related: can we afford to continue to scale?
›Is big data anything more than feeding the current exponential growth?
»Should we offload more traffic?
›Separate DTZ interfaces rather than sharing IP capacity?
»Should we encourage more ‘off-peak’ data transfers?
»Notice the question marks – topics to discuss!
13
Flattening the demand curve?
[Diurnal traffic graph: capacity is provisioned based on peak demand (“Provision based on this”), while the headroom above off-peak traffic is never used (“This is never used”).]
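To make the flattening argument concrete, here is a small sketch of peak-vs-mean utilisation; the sinusoidal diurnal curve and its numbers are entirely hypothetical, not Janet measurements:

```python
import math

def utilisation(demand):
    """Fraction of peak-provisioned capacity that is used on average."""
    return sum(demand) / len(demand) / max(demand)

# Hypothetical diurnal demand in Gbit/s: 50 Gbit/s mean, +/-40 Gbit/s swing.
demand = [50 + 40 * math.sin(2 * math.pi * h / 24) for h in range(24)]

print(f"peak: {max(demand):.0f} Gbit/s, mean: {sum(demand) / len(demand):.0f} Gbit/s")
print(f"utilisation of peak-provisioned capacity: {utilisation(demand):.0%}")
```

With this made-up curve, provisioning for the 90 Gbit/s peak leaves the link only about 56% utilised on average; shifting bulk transfers off-peak raises that figure without adding capacity.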
14
Where next?
»More flexible provisioning of end-to-end 10G paths
›Primary user of the transmission layer was expected to be the Janet IP layer
›Did not foresee the scale of requirement; expected them to be offloaded to Lightpaths / Netpaths
›Optical layer is designed for 40/100G+
–10G doesn’t use spectrum efficiently
15
Where next?
»Could be done on the transmission layer
›OTN
»Could be done on an Ethernet layer
›SDN, EoMPLS
»Does capacity need to be guaranteed or just segregated?
»Smarter services rather than just bandwidth?
›On-net storage
jisc.ac.uk
16
Questions and discussion
Rob Evans
Chief network architect
[email protected]@jisc.ac.uk