Introduction to NETS
Marla Meehl, NETS Manager
SCD Network Engineering and Technology Section (NETS)
December 8, 1998
Supercomputing • Communications • Data
NCAR Scientific Computing Division
Basic Contextual Information
Role of NETS in UCAR
• NETS is responsible for almost all of UCAR networking
– Historical evolution led to SCD managing all UCAR networking
– Important for NETS to remain in SCD (periodic discussion of moving NETS to the UCAR administrative domain)
» http://www.scd.ucar.edu/nets/Introducing/organizationlocation.html
• NETS has additional SCD networking responsibilities
– Discussed later
• NETS advised by NCAB
– NCAB: Network Coordination and Advisory Board
– Reports to SCD Director
– Technical representatives from all parts of UCAR
– A successful paradigm that ITC has proposed replicating for other UCAR-wide functions managed within an NCAR Division
NETS Responsibilities
• Types of networking supported for UCAR & SCD
– All LANs
– All MANs
– All WANs
• Levels of networking supported for UCAR & SCD
– Layer 1: all physical cabling plant for UCAR/SCD
– Layer 2: all logical networking (VLANs/ELANs, etc.) for UCAR/SCD
– Layer 3: all routing (99.9% IP) for UCAR/SCD
– Layer 4 & above: a little support for UCAR; a lot for SCD (see the sketch after this list)
» More details later
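For quick reference, the same layer-by-layer scope can be written down as a small lookup table; this is a minimal illustrative sketch, not NETS tooling:

```python
# The layer-by-layer NETS scope from the list above, as a lookup table.
SCOPE = {
    1: ("physical cabling plant", "all of UCAR/SCD"),
    2: ("logical networking: VLANs/ELANs, etc.", "all of UCAR/SCD"),
    3: ("routing (99.9% IP)", "all of UCAR/SCD"),
    4: ("transport & above", "a little for UCAR; a lot for SCD"),
}

for layer, (what, scope) in sorted(SCOPE.items()):
    print(f"Layer {layer}: {what} -- {scope}")
```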
What NETS Doesn’t Do
• “NETS responsibility ends at the wallplate”
– “wallplate” means “telecommunications outlet” and is the point at which building infrastructure network leaf-node cabling terminates
– Other Divisions are responsible “past the wallplate”
» This mainly means they do the host-networking part
» NETS does consult on configuration, performance, etc.
» “Private” networking beyond the wallplate isn’t forbidden
– For SCD, NETS is involved with all aspects of networking:
» Supercomputer networking
» Host-based networking: routing, configuration, etc.
» Special networking research projects
• National Laboratory for Advanced Network Research (NLANR) Engineering
• Hosting NLANR/CAIDA Web Cache Research Project
What NETS Doesn’t Do (cont.)
• NETS doesn’t do DNS, email, security policy, etc.
– NETS does implement security perimeters based on CSAC recommendations
• NETS doesn’t do MSS networking: HiPPI, FC, etc.
– These use non-IP channel-extension protocols
• NETS doesn’t do telephones and PBXs
– NETS does install the telephone cabling
– And we do inter-site tie-lines
• NETS doesn’t do first-level NOC/operations
– Handled by Computer Room Operators
– They determine which Network Engineer to call
– We will visit the network monitor station later
How Networking is Paid For
• UCAR networking funding mechanisms
– Space tax: all UCAR programs (including SCD) pay for networking via an annual “tax” based upon the square footage occupied by the program (see the sketch after this list)
– Space tax pays for “standard service” as defined by NCAB
» Includes all LAN, MAN, and WAN networking necessary for, and benefiting, UCAR as a whole
» Includes all UCAR cabling and core networking to the “wallplate”
» Includes 10-Mbps service to the office
» Includes telephone wiring and inter-site telephone tie-lines
– NETS charges back for anything beyond standard service
» Host connects greater than 10 Mbps
» “Rush” jobs (less than 1 week advance notice)
» “Special” networking (e.g., satellite hookups)
• SCD networking funding mechanism
– Line item in SCD budget
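To make the space-tax arithmetic concrete, here is a minimal sketch of the mechanism; the per-square-foot rate and the program footages are hypothetical, invented for illustration, since the actual NCAB-set figures are not given on this slide:

```python
# Hypothetical illustration of the space-tax funding mechanism:
# each program's annual networking charge is proportional to the
# square footage it occupies. All numbers here are invented.
ANNUAL_RATE_PER_SQFT = 2.00  # hypothetical $/sq ft/year

programs = {  # program -> occupied square footage (hypothetical)
    "SCD": 40_000,
    "ATD": 25_000,
    "COMET": 10_000,
}

for name, sqft in programs.items():
    tax = sqft * ANNUAL_RATE_PER_SQFT
    print(f"{name}: {sqft:,} sq ft -> ${tax:,.2f}/year space tax")
```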
Magnitude of NETS Work
• NETS supports ~1,136 UCAR employees
– Located in 9 buildings at 4 different sites
• NETS supports ~3,000 network-attached devices
• NETS supports ~114 IP subnetworks
• 46 dialup lines (via 2 all-digital PRI T1 links)
• ~100 pieces of network equipment
– routers, switches, monitorable repeaters, etc.
• Building cabling
– 920 standard “wallplates” installed
– 1,360 “wallplates” to install by end of FY2000
• NETS consults with 63 UCAR member universities
– Involves 700 users of SCD facilities alone, with 345 projects involving 90 university facilities
Networking “Fun” Facts
• Total number of Ethernet switch ports available: 1,950
• Total number of Ethernet switch ports used: 1,750
• Total number of feet of backbone cable: 27,000 feet
• Total number of feet of wallplate cable:
– Fiber: 17,000 feet
– CAT5: 240,000 feet
– 10BaseT: 230,000 feet
– Telephone: 300,000 feet
– Total: 787,000 feet (checked below)
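As a quick sanity check, the four wallplate-cable categories really do sum to the quoted total:

```python
# Verify the wallplate-cable total quoted above.
feet = {"fiber": 17_000, "CAT5": 240_000, "10BaseT": 230_000, "telephone": 300_000}
assert sum(feet.values()) == 787_000
print(f"{sum(feet.values()):,} feet of wallplate cable in total")
```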
Resources Available to NETS
• NETS budget (FY1999)
– $2,341,100 UCAR funding to NETS
– $261,769 SCD funding to NETS
• Total NETS staff: 15 people
– Types of staff:
» 8 Network Engineers
• Perform design, operation, tuning, troubleshooting, etc.
» 4 Network Technicians
• Mainly Layer 1 (cabling) construction
» 3 Administrative/Support Staff
• Sources of staff funding
– 12 UCAR-funded staff
– 2 SCD-funded staff
– 1 staff member funded externally (NSF NLANR Program)
Overview of UCAR LANs, MANs, and WANs
LANs
LAN Cabling
• Standard “wallplate” to each workspace
– Connects the workspace to the nearest telecommunications closet via:
» 4 Cat5 cables
» 2 fiber cable pairs (62.5-micron multimode)
» 2 Cat3 cables (mainly for telephone)
– Only 40% of space meets this standard (920 wallplates)
– 1,360 new wallplates must be installed by end of FY2000 (see the cable-count sketch after this list)
» Required to support Fast Ethernet (100BaseX)
» $2,000,000 project (approved by UCAR management)
• Closets connect to a root closet with fiber bundles
– ML root closet is in the SCD machine room (ML 29)
– FL root closet is in the SCD machine room (FL2 3095)
• Network equipment goes in closets (~35 closets)
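A minimal sketch of what the standard-wallplate spec above implies for the FY2000 recabling work; the helper is hypothetical, but the per-plate counts and the 1,360-plate target come from this slide:

```python
# The standard wallplate spec from this slide, as a simple data
# structure, and the raw cable runs implied by the 1,360 wallplates
# still to be installed by end of FY2000.
WALLPLATE_SPEC = {
    "Cat5 cables": 4,
    "fiber pairs (62.5-micron multimode)": 2,
    "Cat3 cables (mainly telephone)": 2,
}

NEW_WALLPLATES = 1_360

for cable, per_plate in WALLPLATE_SPEC.items():
    print(f"{cable}: {per_plate * NEW_WALLPLATES:,} runs needed")
```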
LAN Design & Equipment
• Backbone UCAR LAN network is largely ATM
– OC-3 (155-Mbps) so far; some OC-12 testing
– Uses ATM ELANs in the core: one per VLAN
– 3 Cisco ATM switches (model 1010)
• Rest of network is mainly switched Ethernet
– VLAN-based (one VLAN per IP subnet; see the sketch after this list)
– 10BaseX and 100BaseX technology
– 23 Cisco 5500 Ethernet packet switches
• Routing
– 4 Cisco 7507 routers
– 1 Cisco 4700 router
– 1 Cisco 2500 router
• UCAR is essentially an all-Cisco shop
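The one-VLAN-per-IP-subnet design gives a strict one-to-one mapping between Layer 2 broadcast domains and Layer 3 subnets. A minimal sketch of that mapping, using hypothetical VLAN IDs and RFC 5737 documentation prefixes rather than UCAR’s actual addressing:

```python
# Hypothetical illustration of the one-VLAN-per-IP-subnet design:
# each VLAN (a Layer 2 broadcast domain) corresponds to exactly one
# Layer 3 subnet, so a host's address determines its VLAN.
import ipaddress

vlan_to_subnet = {
    101: ipaddress.ip_network("192.0.2.0/24"),     # documentation prefix
    102: ipaddress.ip_network("198.51.100.0/24"),  # documentation prefix
}

def vlan_for_host(addr: str) -> int:
    """Return the VLAN a host belongs to under the mapping above."""
    ip = ipaddress.ip_address(addr)
    for vlan, subnet in vlan_to_subnet.items():
        if ip in subnet:
            return vlan
    raise LookupError(f"{addr} is not in any known subnet")

print(vlan_for_host("192.0.2.42"))  # -> 101
```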
Important LAN Projects
• FY1999
– FUN Recabling Project (FL4 Uniform Network)
– ATD, MIS, COMET Computer Room Recabling
– FL1 South Atrium Recabling
– Y2K engineering
• FY2000
– Year 2000 Recabling Project
– 100BaseX standard-service implementation/expansion
– Y2K troubleshooting
MANs
Basic MAN Networking
• Inter-site connectivity
– ML-FL OC-3 (155-Mbps) ATM link
» Also carries two virtual T1 voice tie-lines
– 10-Mbps link to Jeffco site
– T1 (1.5-Mbps) link to Marshall site
– UCAR-owned fiber between all FL campus buildings
• Home dial-up to NCAR
– 2 PRI T1 lines (23 B channels each, for 46 56-Kbps/ISDN lines)
– Cisco 5300 Remote Access Server
• OC-3 ATM atmospheric laser link to NOAA, Boulder (owned and operated by NOAA)
The BRAN MAN
BRAN: Boulder Research and Administration Network
– “Fiber for a healthy community”
• Consortium to build a private fiber loop in Boulder
– City of Boulder
– University of Colorado-Boulder
– National Institute of Standards and Technology (NIST) - Boulder
– National Oceanic and Atmospheric Administration (NOAA) - Boulder
– NCAR/UCAR
• Connects partners’ facilities plus the US West & ICG POPs
– Includes ML-FL link of ~20 fiber pairs
• Construction estimated at $350,000 per partner
• Essentially provides unlimited free bandwidth
• Bypasses US West
– Provides competition between US West & ICG
WANs
UCAR WAN Connections
• Commodity Internet connection
– DS-3 (45-Mbps) Cable & Wireless service
– Cost-shared with local gigapop partners (more later)
– Steady 50% utilization; 85% peaks (5-minute averages; see the sketch after this list)
• OC-3 (155-Mbps) connection to NSF’s vBNS
• Planned OC-3 connection to UCAID’s Abilene Internet2 network
• All UCAR WAN connections are part of the Front Range GigaPop (FRGP), operated by NETS (details later)
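A minimal sketch (not NETS’s actual monitoring setup) of how a 5-minute average utilization figure like the one above is typically derived from two readings of an interface byte counter, e.g., polled via SNMP; the counter values are invented:

```python
# Hypothetical illustration: deriving 5-minute average utilization of
# the DS-3 from two readings of an interface octet (byte) counter,
# the way SNMP-polling monitors typically do. Counter values invented.
LINK_BPS = 45_000_000  # DS-3 capacity in bits per second
INTERVAL_S = 300       # 5-minute polling interval

octets_t0 = 1_200_000_000  # counter at start of interval
octets_t1 = 2_050_000_000  # counter five minutes later

avg_bps = (octets_t1 - octets_t0) * 8 / INTERVAL_S
utilization = avg_bps / LINK_BPS

print(f"5-min average: {avg_bps / 1e6:.1f} Mbps ({utilization:.0%} of DS-3)")
```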
NSF’s vBNS: very-high-speed Backbone Network Service
vBNS: History
• vBNS goals
– Jumpstart the use of high-performance networking for advanced research while advancing research itself with high-performance networking
– Supplement the Commodity Internet, which has been inadequate for universities since NSFnet was decommissioned
• vBNS started about 3 years ago with the 5 NSF supercomputing centers
• vBNS started adding universities about 2 years ago
• NSF funding for vBNS ends March 2000
vBNS: The Network
• Operated by MCI WorldCom
• ATM-based network carrying mainly IP
• OC-12 (622-Mbps) backbone
• OC-12 (622-Mbps), OC-3 (155-Mbps) & DS-3 (45-Mbps) to institutions
• 73 institutions currently connected
• 131 institutions approved for connection to vBNS
vBNS and NCAR
• NCAR was an original vBNS node
• 43 of 63 UCAR member universities are approved for vBNS (as of last check, 8/1998)
• 28 UCAR members currently connected
• Major benefit for UCAR and its members
– greatly superior to the Commodity Internet
– example: more UNIDATA data possible
– example: terabyte data transfers possible
UCAID’s Abilene Internet2 Network
Abilene: History
• First called the Internet2 Project
• Then non-profit UCAID (University Corporation for Advanced Internet Development) was founded
– UCAID is patterned after the UCAR model
– UCAID currently has 130 members (mostly universities)
• Abilene is the name of UCAID’s first network
• Note: “Internet2” has been used to refer to:
– the organization, which is now called UCAID
– the actual network, which is now named Abilene
– the concept of a future network, soon to become reality in the form of Abilene
Abilene: Goals
• Goals: jumpstart use of high-performance networking for advanced research while advancing research itself with high-performance networking (same as vBNS)
• To be operated and managed by the members themselves (similar to the UCAR model)
• Provide an alternative when NSF support of the vBNS terminates in March 2000
Abilene: The Basic Network
• Uses a Qwest OC-48 (2.4-Gbps) fiber-optic backbone
– Will grow to an OC-192 (9.6-Gbps) fiber-optic backbone
– Qwest to donate $0.5 billion worth of fiber leases over 5 years
• Hardware provided by Cisco Systems and Nortel (Northern Telecom)
• Internet Protocol (IP) over SONET
– No ATM layer
• Uses 10 core router nodes at Qwest POPs
– Denver is one of these
Abilene: Status
• Abilene soon to be designated by NSF as an NSF-approved High-Performance Network (HPN)
– Puts Abilene on an equal basis with the vBNS
• Abilene reached a peering agreement with the vBNS, so NSF HPC (High Performance Connection) schools have equal access to each other regardless of whether they connect via vBNS or Abilene
• UCAID expects Abilene to come online 1/1999
– UCAID expects 50 universities online by 1/1999
– UCAID expects 13 gigapops online by 1/1999
• Abilene beta network now includes a half-dozen universities
– Plus exchanging routes with the vBNS
Abilene and NCAR
• 48 of 63 UCAR member universities are UCAID members (as of last check, 8/1998)
• NSF funding of vBNS terminates March 2000
• Same benefits for UCAR and its members as the vBNS
– greatly superior to the Commodity Internet
– example: more UNIDATA data possible
– example: terabyte data transfers possible
The GigaPop Concept
What Is A GigaPop?
• Multiple sites agree to aggregate at a central location and share high-speed access from there, instead of each maintaining direct links to multiple networks
• Share costs through shared infrastructure
• Share Commodity Internet expenses
• Essentially statistical multiplexing of expensive high-speed resources (see the sketch after this list)
– At any given time, much more bandwidth is available to each institution than any one could afford without sharing
• Share engineering and management expertise
• More clout with vendors
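A minimal Monte Carlo sketch of why the statistical-multiplexing argument works; the burst probabilities and rates are invented for illustration, not FRGP measurements. Because member sites’ bursts rarely coincide, one shared link far smaller than the sum of private links is almost always enough, yet each site can burst well beyond what it could afford alone:

```python
# Hypothetical Monte Carlo illustration of statistical multiplexing.
# Five institutions share one 45-Mbps link; alone, each might only
# afford ~10 Mbps. Each site bursts to 30 Mbps 5% of the time and
# idles at 2 Mbps otherwise. All numbers are invented.
import random

random.seed(1)
SITES = 5
SHARED_MBPS = 45.0  # one shared DS-3-class link
BURST_MBPS = 30.0   # demand while a site is bursting
IDLE_MBPS = 2.0     # background demand
P_BURST = 0.05      # fraction of time each site bursts
SAMPLES = 100_000

congested = 0
for _ in range(SAMPLES):
    demand = sum(BURST_MBPS if random.random() < P_BURST else IDLE_MBPS
                 for _ in range(SITES))
    congested += demand > SHARED_MBPS

print(f"Aggregate demand exceeds the shared link only "
      f"{congested / SAMPLES:.1%} of the time, yet each site can burst "
      f"to {BURST_MBPS:.0f} Mbps -- triple a ~10-Mbps private link.")
```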
Front Range GigaPop (FRGP)
FRGP: Current NCAR Services
• vBNS access
• Shared Commodity Internet access
• Intra-Gigapop access
• Web cache hosting
• 24 x 365 NOC (Network Operations Center)
• Engineering and management
FRGP+Abilene: What Should NCAR Do?
• Why should NCAR connect to Abilene?
– The fate of the vBNS is unknown after March 2000
– 48 of 63 UCAR members are also Internet2 members
• Why should NCAR join a joint FRGP/Abilene effort?
– A combined FRGP/Abilene effort saves NCAR money
– Provides excellent intra-gigapop connectivity
– Provides greater depth and redundancy of Commodity Internet access
FRGP: Why NCAR as GP Operator?
• NCAR already has considerable gigapop operational experience
• NCAR is already serving the FRGP members
– The Abilene connection is an incremental addition to the existing gigapop
– Doesn’t require a completely new effort from scratch
• NCAR already has a 24 x 365 NOC
• NCAR has an existing networking staff to team with the new FRGP engineer
• NCAR is university-neutral
FRGP: Membership Types
• “Full” members
– Both Commodity Internet access and Abilene and/or vBNS access
• Commodity-only members
– Just Commodity Internet access
FRGP: Full Members
• University of Colorado - Boulder
• Colorado State University
• University of Colorado - Denver
• NCAR/UCAR
• University of Wyoming
FRGP: Commodity-only Members
• Colorado School of Mines
• Denver University
• University of Northern Colorado
FRGP: Possible Future Members
• U of C System
• NOAA/Boulder
• NIST/Boulder
• NASA/Boulder
FRGP: But!!!
• This is far from a done deal at this time!
• Members still have funding issues
• No agreements have yet been reached
• Latest developments
– Qwest was asked to bid on the FRGP, but the bid was unacceptably expensive
FRGP: Why Add a Denver Gigapoint?
• Much cheaper for most members to backhaul to Denver than to the existing NCAR gigapoint
– U of Wyoming, Colorado State, UofC Denver
• UofC Denver has computer-room space two blocks from Denver’s telco hotel
• But we also don’t want to re-engineer the NCAR gigapoint
– Want to preserve the vBNS backhaul to NCAR
– Want to preserve the MCI Commodity Internet backhaul to NCAR
– Want to minimize changes to the existing gigapoint
• Incremental addition of a Denver gigapoint is the most cost-effective engineering option
FRGP: Abilene Implications for NCAR
• New annual expenses for NCAR
• New one-time share of startup costs
• NCAR employs & manages new FRGP engineer
• NCAR manages additional network equipment
– including new off-site equipment in Denver
• Increased engineering responsibilities for NCAR
• Increased administrative/accounting responsibilities for NCAR
Useful URLs
• http://www.scd.ucar.edu
• http://www.scd.ucar.edu/nets
• http://www.ucar.edu/ucargen/groups/ncab/
• http://www.vbns.net
• http://www.ucaid.net