TRANSCRIPT
Advanced Topics and Future Directions in MPLS
Santiago Álvarez, Distinguished Technical Marketing Engineer
BRKMPL-3101
Source: …d2zmdbbm9feqrf.cloudfront.net/2015/usa/pdf/brkmpl-3101.pdf
Agenda
• IETF Update
• Segment Routing
• SDN WAN Orchestration
• Ethernet VPN
• Conclusion
IETF Update
• Standards organization responsible for definition of MPLS specification
• Seven active working groups
• MPLS
• Source Packet Routing in Networking (SPRING)
• Traffic Engineering Architecture and Signaling (TEAS)
• Common Control and Measurement Plane (CCAMP)
• Path Computation Element (PCE)
• BGP Enabled Services (BESS)
• Pseudowire And LDP-enabled Services (PALS)
• Previous working groups (recently concluded/closed)
• Layer 3 Virtual Private Networks (L3VPN)
• Pseudowire Edge-to-Edge (PWE3)
• Layer 2 Virtual Private Networks (L2VPN)
• Most active developments in PCE, SPRING and BESS working groups
• MPLS related work also defined in IS-IS, OSPF and IDR (BGP) working groups
Internet Engineering Task Force
• Defined MPLS architecture and base protocols (LDP, RSVP-TE)
• Over 130 RFCs published to date
• Mature set of IP/MPLS specifications for both unicast and multicast
• Recent publications
• Updates to LDP for IPv6 (RFC 7552)
• Extended Administrative Groups in MPLS-TE (RFC 7308)
• Gap Analysis for Operating IPv6-Only MPLS Networks (RFC 7439)
• Current work on
• LSP Ping/Trace for Link Aggregation Group (LAG) Interfaces
• LSP and PW Ping/Trace over networks using Entropy Labels (EL)
• LSP Ping/Trace reply mode simplification
MPLS Working Group
• Source routing with forwarding state in packet
• No need for per-path state information at transit nodes
• Defined for MPLS and IPv6 data planes
• Extension to existing architectures and protocols carried in existing working groups (IS-IS, OSPF, IDR, PCE, etc.)
• Current work on
• Problem Statement and Requirements (draft-ietf-spring-problem-statement)
• Architecture (draft-ietf-spring-segment-routing)
• Segment Routing with MPLS data plane (draft-ietf-spring-segment-routing-mpls)
SPRING WG
• Defines path computation in large, multi-domain and multi-layer networks
• Initial focus on inter-area/inter-AS traffic engineering
• Current work addresses requirements for software defined networking and multi-layer network optimization
• Key areas of standardization:
• Stateful PCE (draft-ietf-pce-stateful-pce)
• PCE-initiated LSP (draft-ietf-pce-pce-initiated-lsp)
• PCEP Extensions for Segment Routing (draft-ietf-pce-segment-routing)
• Carrying Binding Label/Segment-ID in PCE-based Networks (draft-sivabalan-pce-binding-label-sid)
PCE WG
• Defines and extends network services based on BGP
• Focuses on L3 and L2 VPN services and their application to data center networking and service chaining
• Evolved from
• L3VPN and L2VPN working groups (both concluded)
• BGP extensions for NVO3
• Key areas of standardization:
• Ethernet VPN
• mVPN enhancements (extranet, bidir tunnels, PE-CE)
BESS WG
• MPLS includes a broad set of specifications
• Numerous extensions still being proposed (8+ working groups)
• Areas of most active standardization:
• Segment Routing (SPRING WG)
• Path computation element (PCE WG)
• Ethernet VPN, PBB-EVPN, VPWS EVPN (BESS WG)
IETF Summary
Segment Routing
• Source Routing
• the source chooses a path and encodes it in the packet header as an ordered list of segments
• the rest of the network executes the encoded instructions without any further per-flow state
• Segment: an identifier for any type of instruction
• forwarding or service
Segment Routing
• MPLS: an ordered list of segments is represented as a stack of labels
• SR re-uses MPLS data plane without any change
• IPv6: an ordered list of segments is represented as a routing extension header
Segment Routing
This session focuses on the MPLS data plane
[Figure: segment routing defined for both IPv6 and MPLS data planes under a common control plane]
• Shortest-path to the IGP prefix
• Global
• 16000 + Index
• Signaled by ISIS/OSPF
IGP Prefix Segment
[Figure: DC (BGP-SR) nodes 10–14 and WAN (IGP-SR) nodes 1–7 with a peer; prefix segment 16005 reaches node 5 via the shortest path]
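The global label rule on this slide (16000 + index) can be sketched directly; the SRGB base of 16000 comes from the slide, while the SRGB size below is an assumed example value:

```python
# Minimal sketch of the global prefix-SID rule above: the label is the
# SRGB base plus the prefix index, so every node in the domain installs
# the same label for the same prefix. SRGB_SIZE is an assumed example.
SRGB_BASE, SRGB_SIZE = 16000, 8000

def prefix_sid_label(index):
    if not 0 <= index < SRGB_SIZE:
        raise ValueError("prefix-SID index outside the SRGB")
    return SRGB_BASE + index

assert prefix_sid_label(5) == 16005   # node 5's prefix segment in the figure
```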
• Forward on the IGP adjacency
• Local
• 1XY
• X is the “from”
• Y is the “to”
• Signaled by ISIS/OSPF
IGP Adjacency Segment
[Figure: same DC/WAN topology; adjacency segment 124 forwards from node 2 onto its adjacency to node 4]
• Shortest-path to the BGP prefix
• Global
• 16000 + Index
• Signaled by BGP
BGP Prefix Segment
[Figure: same DC/WAN topology; BGP prefix segment 16001 reaches its BGP prefix via the shortest path]
• Forward to the BGP peer
• Local
• 1XY
• X is the “from”
• Y is the “to”
• Signaled by BGP-LS (topology information) to the controller
BGP Peering Segment
[Figure: same DC/WAN topology with two peering links — low latency/low bandwidth and high latency/high bandwidth; peering segment 147 forwards from node 4 to peer node 7]
• WAE collects via BGP-LS
• IGP segments
• BGP segments
• Topology
WAN Controller
[Figure: WAE controller receiving BGP-LS feeds from the DC (BGP-SR), WAN (IGP-SR) and peering domains]
• WAE computes that the green path can be encoded as
• 16001
• 16002
• 124
• 147
• WAE programs a single per-flow state to create an application-engineered end-to-end policy
An end-to-end path as a list of segments
[Figure: green path across DC, WAN and peer encoded as {16001,16002,124,147}; default ISIS cost metric 10, one link with metric 50; the controller programs the policy via PCEP, Netconf or BGP]
• LFIB populated using SID information
• Forwarding table remains constant (Nodes + Adjacencies) regardless of number of paths
• Other protocols (LDP, RSVP, BGP) can still program LFIB
LFIB with Segment Routing
[Figure: eight PEs connected through a single P node]

LFIB at the P node — forwarding table remains constant (nodes + adjacencies) regardless of number of paths:

In Label | Out Label | Out Interface
L1       | L1        | Intf1
L2       | L2        | Intf1
…        | …         | …
L8       | L8        | Intf4
L9       | L9        | Intf2
L10      | Pop       | Intf2
…        | …         | …
Ln       | Pop       | Intf5

(swap entries: node segment ids; pop entries: adjacency segment ids)
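The LFIB described above can be sketched in a few lines; the data structures are mine (illustrative, not a router implementation), but the behavior follows the slide — global prefix SIDs are swapped unchanged toward the IGP next hop, local adjacency SIDs are popped onto the adjacency:

```python
# Sketch (hypothetical structures): populating an LFIB from SID information.
# Table size is O(nodes + adjacencies), independent of how many paths
# traverse the node.
SRGB_BASE = 16000

def build_lfib(prefix_sids, adj_sids):
    """prefix_sids: {index: out_interface}; adj_sids: {label: out_interface}."""
    lfib = {}
    for index, intf in prefix_sids.items():
        label = SRGB_BASE + index
        lfib[label] = ("swap", label, intf)   # same global label domain-wide
    for label, intf in adj_sids.items():
        lfib[label] = ("pop", None, intf)     # adjacency SIDs are local
    return lfib

lfib = build_lfib({1: "Intf1", 2: "Intf1"}, {124: "Intf2"})
assert lfib[16001] == ("swap", 16001, "Intf1")
assert lfib[124] == ("pop", None, "Intf2")
```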
MPLS Control and Forwarding Operation with Segment Routing
[Figure: services (IPv4, IPv6, IPv4 VPN, IPv6 VPN, VPWS, VPLS) between PE1 and PE2 over packet transport (LDP, RSVP, BGP, static, IS-IS, OSPF) and MPLS forwarding]
• No changes to control or forwarding plane for services
• IGP label distribution replaces BGP / LDP; same forwarding plane
• IP-based FRR is guaranteed in any topology
• 2002, LFA FRR project at Cisco
• draft-bryant-ipfrr-tunnels-03.txt
• Directed LFA (DLFA) is guaranteed when metrics are symmetric
• No extra computation (RLFA)
• Simple repair stack
• node segment to P node
• adjacency segment from P to Q
Automated & Guaranteed FRR
[Figure: backbone with core nodes C1, C2 and edge nodes E1–E4; default metric 10, one link with metric 1000; repair pushes the node segment to the P node]
Topology Independent Loop-Free Alternate (TI-LFA)
Zero Segment, Single Segment and Double Segment cases
[Figure: source A reaches destination Z across R1–R5; default metric 10, protected link metric 1000]
• Zero segment: the post-convergence next hop is already in Q-space, so the packet to Z carries only prefix-SID(Z)
• Single segment: push prefix-SID(R4) to reach a node in both P-space and Q-space, then continue with prefix-SID(Z)
• Double segment: push prefix-SID(R4) plus adj-SID(R4-R3) to hop from the P-space node into Q-space, then continue with prefix-SID(Z)
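The zero/single/double-segment selection can be made concrete. This is my own runnable sketch of the idea, not the presentation's code: P-space holds nodes the point of local repair (PLR) reaches without the protected link, Q-space holds nodes that reach the destination without it, and metrics are assumed symmetric with unique shortest paths (matching the slide's DLFA guarantee):

```python
# TI-LFA repair-stack selection sketch over a dict-of-dicts graph.
import heapq

def dijkstra(g, src):
    """Shortest-path distances and predecessors from src."""
    dist, prev, pq = {src: 0}, {}, [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in g[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    return dist, prev

def repair_segments(g, plr, nbr, dest):
    """Segments to push ahead of prefix-SID(dest) to protect link plr-nbr."""
    h = {u: dict(vs) for u, vs in g.items()}     # topology minus the link
    h[plr].pop(nbr, None)
    h[nbr].pop(plr, None)
    d0, _ = dijkstra(g, plr)
    d1, prev = dijkstra(h, plr)
    p_space = {n for n in d1 if d1[n] == d0[n]}  # distance unchanged: link unused
    t0, _ = dijkstra(g, dest)                    # symmetric metrics: dist to dest
    t1, _ = dijkstra(h, dest)
    q_space = {n for n in t1 if t1[n] == t0[n]}
    path, n = [dest], dest                       # post-convergence path plr->dest
    while n != plr:
        n = prev[n]
        path.append(n)
    path.reverse()
    if path[1] in q_space:                       # zero segment: plain (D)LFA
        return []
    for n in path:                               # single: node in P- and Q-space
        if n in p_space and n in q_space:
            return ["prefix-SID(%s)" % n]
    for a, b in zip(path, path[1:]):             # double: P node + adj into Q
        if a in p_space and b in q_space:
            return ["prefix-SID(%s)" % a, "adj-SID(%s-%s)" % (a, b)]
    return None                                  # would need a longer repair list

# Hypothetical 5-node topology; protecting link A-B for traffic A -> Z
g = {"A": {"B": 10, "C": 1}, "B": {"A": 10, "Z": 10, "D": 10},
     "C": {"A": 1, "D": 30}, "D": {"C": 30, "B": 10, "Z": 100},
     "Z": {"B": 10, "D": 100}}
assert repair_segments(g, "A", "B", "Z") == ["prefix-SID(C)", "adj-SID(C-D)"]
```

Cheapening the C–D link to 10 pulls D into P-space and collapses the repair to the single segment `["prefix-SID(D)"]`.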
• Efficient packet networks leverage ecmp-aware shortest-path!
• node segment!
• Simplicity
• one less protocol to operate
• No complex LDP/ISIS synchronization to troubleshoot
Simple and Efficient Transport of MPLS services
[Figure: PE1 reaches PE2 over ECMP paths through A/B and M/N; all VPN services ride on the node segment to PE2]
Simple Disjointness
• Non-disjoint traffic: A sends traffic with [65] and gets classic ECMP “a la IP”
• Disjoint traffic: A sends traffic with [111, 65]; the packet is attracted into the blue plane and then uses classic ECMP “a la IP”
• ECMP-awareness!
• Tokyo to Brussels
• data: via US: cheap capacity
• VoIP: via Russia: low latency
• CoS-based TE with SR
• IGP metric set such that
• Tokyo to Russia: via Russia
• Tokyo to Brussels: via US
• Russia to Brussels: via Europe
• Anycast segment “Russia” advertised by Russia core routers
• Tokyo CoS-based policy
• Data and Brussels: push the node segment to Brussels
• VoIP and Brussels: push the anycast node segment to Russia, then the node segment to Brussels
CoS-based TE
[Figure: node segment to Brussels (via US) and anycast node segment to Russia]
Content producer engineers its WAN traffic to egress peers
[Figure: node A in an ISIS/SR-based WAN reaches prefix 9.9.9.9/32 in AS4 through nodes B–E across AS1–AS3; the best BGP and IGP path carries {PrefixSID(B)}, while a controller-installed TE policy pushes {PrefixSID(C), PeeringSID(E)} for the engineered path]
BGP may advertise
• PeerNode SID
• PeerAdj SID
• PeerSet SID
• Per-application flow engineering
• End-to-End
• DC, WAN, AGG, PEER
• Millions of flows
• No signaling
• No midpoint state
• No reclassification at boundaries
Application Engineered Routing
[Figure: low-latency policy for application A12 from DC (or AGG) node 10 to peer 7: the ingress pushes {16001, 200, 147}; at WAN node 1, binding SID 200 pops and pushes {16002, 16004}; peering SID 147 selects the low-latency, low-bandwidth peer link (a high-latency, high-bandwidth link also exists); default ISIS cost metric 10, default latency metric 10]
SDN WAN Orchestration
• Significant capital investment so operator wants to maximize ROI
• Often under-utilized, need to run “hotter” for faster TTM of new services while protecting SLA
• Should be a “link” in a chain of services offered and orchestrated across combined DC/Cloud/WAN networks
• Should be a monetizable resource
• No open, standard APIs for application interaction and automation
Current WAN Challenges
SDN WAN Orchestration
• Automation of WAN engineering and operations using centralized control
• WAN orchestration provides network analytics, optimization, calendaring and planning
• Relies on centralized software platform with
• Global network view
• Multiple southbound mechanisms to collect and deploy network information in real time
• Northbound APIs for application interaction with network providing abstraction and programmability
• Incorporates many aspects of Traffic Engineering
• MPLS, Optical and Segment Routing as key WAN technologies
[Figure: WAN orchestration platform exposing a northbound API to apps — plan, optimization & prediction, analytics, calendaring, orchestration — with collector and deployer functions toward the IPv4/IPv6/MPLS, segment routing and optical layers]
Use-Case: Bandwidth Scheduling
[Figure: SDN WAN orchestrator (collector/deployer, NB API, web portal) over a WAN R1–R2–R3 connecting Data Center #1 and Data Center #2; deployment via PCEP]
① Network conditions reported to collector consistently
② Customer requests DC #1 – DC #2 bandwidth at a future date
③ Demand admission request: <R1-R3, B/W, Future Date>
④ SDN W-O returns booking confirmation
⑤ On the future date, SDN W-O places the customer demand on an IGP or explicit path (TE tunnel) with the requested bandwidth
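The admission decision in step ③ is essentially an interval-overlap check. A minimal sketch (function name and data model are mine, not the product's API): accept a future-dated bandwidth request only if the path's capacity is never exceeded in any instant:

```python
# Bandwidth-calendaring admission check for a single path.
def admit(bookings, capacity, start, end, bw):
    """bookings: list of accepted (start, end, bw); intervals are half-open."""
    events = []
    for s, e, b in bookings + [(start, end, bw)]:
        events += [(s, b), (e, -b)]
    used = 0
    for _, delta in sorted(events):   # releases sort before same-time allocations
        used += delta
        if used > capacity:
            return False
    return True

# 100 Mb/s path with two existing bookings
bookings = [(10, 20, 40), (15, 30, 30)]
assert admit(bookings, 100, 12, 18, 30)        # fits: peak usage exactly 100
assert not admit(bookings, 100, 12, 18, 31)    # would exceed capacity
```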
Use-Case: Path Delegation and Optimization
[Figure: SDN WAN orchestrator (collector/deployer, NB API) over a WAN R1–R2–R3 serving customer DCs/clouds; Ops works through an NMS; deployment via PCEP]
① Network conditions reported to collector
② Ops configures TE paths on head-ends with delegation through NMS
③ Ops submits LSP optimization request
④ SDN W-O returns impact with changeover plan; Ops confirms
⑤ SDN W-O programs re-optimization into network
Use-Case: Data Center – WAN Orchestration
[Figure: DC-WAN orchestration app driving a DC controller (OpenFlow) and the SDN WAN orchestrator (PCEP) to map customer traffic onto a premium WAN path R1–R2]
① Network conditions reported to collector
② Customer app requests DC resource “spin-up” of service overlay
③ App requests corresponding WAN resource allocation
④ SDN W-O programs WAN path
⑤ SDN W-O programs classifier to map customer traffic onto WAN path
• An external PCE requires some form of topology acquisition
• A PCE may learn topology using BGP-LS, IGP, SNMP, etc.
• BGP-LS characteristics
• aggregates topology across one or more domains
• provides familiar operational model
• New BGP-LS attribute TLVs for SR
• IGP: links, nodes, prefixes
• BGP: peer node, peer adjacency, peer set
Topology Acquisition
[Figure: PCE builds its TED from BGP-LS feeds covering Domains 0–2, aggregated through a route reflector]
PCE Architecture Introduction
Addresses complex requirements for path computation in large, multi-domain and multi-layer networks
Path computation element (PCE) – Computes network paths based on network information (topology, paths, etc.)
– Stores TE topology database (synchronized with network)
– May reside on a network node or on out-of-network server
– May initiate path creation
– Stateful: stores path database including resources used (synchronized with network)
– Stateless - no knowledge of previously established paths
Path computation client (PCC)
– May send path computation requests to PCE
– May send path state updates to PCE
PCC and PCE communicate via Path Computation Element Protocol (PCEP)
[Figure: “Hello, my name is PCE” name tag]
• No knowledge of previously established paths
• Limited ability to optimize network resources
• Useful for inter-domain MPLS-TE in non-SDN deployments
• Knowledge of network topology and previously established paths (synchronized LSP database)
• More optimal centralized path computation
• Enables centralized path initiation and update control
• Well suited for SDN deployments
Stateless and Stateful PCE
[Figure: stateful PCE (TED + LSP DB) and stateless PCE (TED only), each speaking PCEP to its PCC]
• PCC initiates path setup
• PCC retains control on path updates
• PCE learns LSP state to optimize path computation
• PCC or PCE may initiate path setup
• PCC may delegate update control to PCE
• PCC may revoke delegation
• PCE may return delegation
Active and Passive Stateful PCE
[Figure: passive stateful PCE (PCC maintains update control over paths) vs. active stateful PCE (PCE has update control over delegated paths); both keep a TED and LSP DB and speak PCEP to a stateful PCC]
• Tighter integration with application demands
• PCE can be part of controller architecture determining what paths to set up and when
• PCC may initiate path setup based on distributed network state
• Can be used in conjunction with PCE-initiated paths
Active Stateful PCE
PCE-Initiated and PCC-Initiated LSPs
[Figure: with an active stateful PCE (TED + LSP DB), a PCC-initiated LSP is set up by the PCC, which delegates update control; a PCE-initiated LSP is set up by the PCE, which maintains update control]
Path Computation Element Protocol (PCEP)
Session establishment (capabilities), upkeep and closure
– PCE or PCC initiated (Open, Close, Keepalive messages)
Path computation
– PCC sends requests, PCE sends replies (Request, Reply messages)
Event notification (e.g. request cancelation, congestion)
– PCE or PCC originated (Notification messages)
Error announcement (e.g. protocol error)
– PCE or PCC originated (Error message)
LSP state synchronization and delegation
– PCC originated (Report message)
LSP update (delegated LSPs only)
– PCE originated (Update message)
LSP creation / initiation
– PCE originated (Create / Initiate message)
Protocol operates over TCP (port 4189)
[Figure: PCC–PCE message exchanges — Open/Close/Keepalive, Request/Reply, Notification, Error for both stateless and stateful PCE; Report, Update and Create/Initiate for stateful PCE only]
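Every message listed above is framed by the same PCEP common header (RFC 5440: 3-bit version, 5 flag bits, message type, total length including the header; the stateful types come from the later stateful extensions). A small encoding sketch:

```python
# PCEP common-header framing sketch (RFC 5440 section 6.1).
import struct

MSG_TYPES = {1: "Open", 2: "Keepalive", 3: "PCReq", 4: "PCRep",
             5: "PCNtf", 6: "PCErr", 7: "Close",
             10: "PCRpt", 11: "PCUpd", 12: "PCInitiate"}  # 10-12: stateful ext.

def pack_header(msg_type, body_len, version=1):
    # Message-Length counts the 4-byte header plus the message body
    return struct.pack("!BBH", version << 5, msg_type, 4 + body_len)

def unpack_header(data):
    ver_flags, msg_type, length = struct.unpack("!BBH", data[:4])
    return ver_flags >> 5, MSG_TYPES.get(msg_type, "?"), length

assert unpack_header(pack_header(2, 0)) == (1, "Keepalive", 4)
```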
PCE-Initiated LSP

Session Establishment
[Figure: PCC and PCE exchange Open messages carrying the stateful capability (U, I flags) and LSP cleanup timer, then Keepalives]
• PCE / PCC advertise stateful capability
• Session supports LSP updates (U flag)
• Session supports LSP creation/initiation (I flag)
• Cleanup timer for PCE LSPs
• Session established when both peers receive keepalive

State Synchronization
[Figure: PCC sends Report messages with the S flag (LSP id=X1, X2, …), ending with LSP id=0]
• Full synchronization of LSP state after session established
• PCC reports all PCE-initiated LSPs to synchronize (S flag) state with PCE
• PCC uses LSP id 0 to indicate end of synchronization

LSP Creation / Initiation
[Figure: PCE sends Create/Initiate (Name=“MyLSP”, attribute list); PCC answers with Report (Name=“MyLSP”, LSP id=X, D flag)]
• PCE requests LSP creation / initiation with attribute list
• PCC attempts LSP setup and reports back LSP id and state
• PCC automatically delegates LSP control (D flag) to PCE
• PCC may not revoke delegation
• PCE may return delegation

LSP Update
[Figure: PCE sends Update (attribute list, D flag); PCC answers with Report (Status=“Up”, D flag)]
• PCE updates LSP state and/or attributes
• Report with Status=“Up” sent to all stateful PCEs
LSP Reporting
[Figure: PCC sends Report (Status=“Up”, D flag) to the PCE]
• PCC reports LSP state as part of
• Initial state synchronization
• Control delegation
• Control revoking
• Deletion
• Signaling error
• State change
• Sent to all stateful PCEs
Binding Label/Segment-ID
• Associate a binding label to RSVP-TE LSP
• Associate binding SID to Segment Routed Traffic Engineering path
• Upstream nodes can use label/SID to steer traffic into appropriate TE path
• Steering control with P-to-P TE tunnels
• Shorter SID list for segment routing
[Figure: the PCE sends Initiate with a binding label/SID to the PE over PCEP; the PE reports the binding label/SID back]
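A binding SID acts as a one-label handle for a whole TE path: the node owning it pops the label and pushes the path's full segment list, so upstream nodes carry a shorter stack. A sketch with hypothetical data structures, reusing the earlier example where binding SID 200 expands to {16002, 16004}:

```python
# Binding-SID steering sketch: one LFIB lookup on the top label.
def forward(lfib, label_stack):
    """Apply the LFIB action for the top label; return the new stack."""
    top, rest = label_stack[0], label_stack[1:]
    action = lfib[top]
    if action["op"] == "bsid":              # pop BSID, push the bound path
        return action["push"] + rest
    if action["op"] == "swap":
        return [action["out"]] + rest
    if action["op"] == "pop":
        return rest

# BSID 200 bound to segment list {16002, 16004}, as in the earlier example
lfib = {200: {"op": "bsid", "push": [16002, 16004]}}
assert forward(lfib, [200, 147]) == [16002, 16004, 147]
```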
PCE Extensions for Segment Routing (SR)
Segment routing enables source routing based on segment ids distributed by IGP
PCE specifies path as list of segment ids
PCC forwards traffic by pushing segment id list on packets
No path signaling required
Minimal forwarding state
Maximum network forwarding virtualization
The state is no longer in the network but in the packet
Paths may be PCE- or PCC-initiated
[Figure: application path request to a stateful PCE (TED + LSP DB); the PCE sends the PCC a segment list (10, 20, 30, 40) over PCEP; the PCC's forwarding table (node and adjacency SIDs) remains constant]
draft-sivabalan-pce-segment-routing
PCEP Extensions for Segment Routing
• Segment Routing capability (when opening PCEP session)
• Existing ERO object with new Segment Routing Explicit Route Object (SR-ERO) sub-object
• Sub-objects include a segment id (SID) and/or an associated “Node or Adjacency Identifier” (NAI)
• A NAI can specify an
• IPv4 node
• IPv6 node
• IPv4 adjacency
• IPv6 adjacency
• Unnumbered adjacency with IPv4 node ids
• Request parameters indicate path type (SR or RSVP)
[Figure: ERO object containing SR-ERO sub-objects 1…n, each carrying a SID / NAI]
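The sub-object layout can be sketched from RFC 8664 (the published form of the draft cited above): sub-object type 36, the NAI type in the top 4 bits of a 16-bit NT/flags field, a 32-bit SID field (an MPLS label sits in its 20 high-order bits when the M flag is set), then the optional NAI:

```python
# SR-ERO sub-object encoding sketch (RFC 8664).
import struct

def sr_ero(label, nai=b"", nai_type=1):
    """Pack one SR-ERO sub-object carrying an MPLS label (M flag set)."""
    flags = 0x001                # M: the SID is an MPLS label stack entry
    if not nai:
        flags |= 0x008           # F: no NAI present
        nai_type = 0             # NT=0: NAI absent
    sid = label << 12            # label in the 20 high-order bits of the SID
    header = struct.pack("!BBHI", 36, 8 + len(nai), (nai_type << 12) | flags, sid)
    return header + nai
```

For example, `sr_ero(16001)` yields an 8-byte sub-object whose SID field decodes back to label 16001.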
Ethernet VPN
• Next generation solutions for Ethernet services
• BGP control-plane for Ethernet Segment and MAC distribution and learning over MPLS core
• Same principles and operational experience of IP VPNs
• No use of Pseudowires
• Uses MP2P tunnels for unicast
• Multi-destination frame delivery via ingress replication (via MP2P tunnels) or LSM
• Multi-vendor solutions under IETF standardization
Overview
[Figure: EVPN family — EVPN and PBB-EVPN for E-LAN, EVPN VPWS for E-LINE, EVPN E-TREE for E-TREE]
• Existing VPLS solutions do not offer an All-Active per-flow redundancy
• Looping of Traffic Flooded from PE
• Duplicate Frames from Floods from the Core
• MAC Flip-Flopping over Pseudowire
• E.g. port-channel load-balancing does not produce a consistent hash value for frames with the same source MAC (e.g. non-MAC-based hash schemes)
VPLS challenges for per-flow Redundancy
[Figure: three failure cases with CE1 dual-homed to PE1/PE2 and CE2 at PE3/PE4: flooded traffic echoing back to the source site, duplicate frames delivered from core floods, and MAC M1 flip-flopping between pseudowires at the remote PE]
• Leverages principles and operational experience of MP-BGP and IP VPNs
• VPN Route Distinguisher / Target
• BGP Multi-Pathing
• BGP Prefix-Independent Convergence
• BGP Local Repair
• RT Constrain
• Inter Autonomous Systems
• Multi-homing and Load-balancing
• Provisioning Simplicity
• Optimal Forwarding
• Flexible VPN forwarding policies
• MAC Address Scalability
• Fast Convergence
Benefits
• Core Auto-Discovery
• Redundancy Group AD
• Access auto-sensing
• Dynamic service carving
• Dynamic designated forwarder (DF) election
Provisioning
Simplicity
• Device and Network Multi-Homing (MHD/MHN)
• ALL-Active (per Flow) load-balancing
• Single-Active (per Vlan) LB
• PE-to-PE LB (BGP multi-pathing)
• Core multi-pathing
Multi-Homing
And Load
Balancing
• VPN scalability
• VPN forwarding policies
• MAC scale (via Provider Backbone Bridging)
• Fast Convergence (Link (mass withdrawal) / Node)
• MAC Moves
Other
Benefits
• Ethernet loop avoidance
• Access and Core split-horizon
• BUM traffic duplication avoidance (DF)
• BUM traffic forwarding (Ingress Replication or P2MP LSM)
Optimal
Forwarding
• Next generation solution for Ethernet multipoint (E-LAN) connectivity services
• Data-plane learning of local C-MACs
• PEs run Multi-Protocol BGP to advertise locally learnt customer MAC addresses (C-MACs) & learn remote C-MACs
• No pseudowire full-mesh required
• Under standardization at IETF – WG draft: draft-ietf-l2vpn-evpn
Ethernet VPN (E-LAN)
[Figure: CE1 sends a frame (VID 100, SMAC M1, DMAC FF:FF:FF) into PE1; PE1 learns M1 in the data plane from the access side and advertises it over the core in a BGP MAC advertisement route (EVPN NLRI: MAC M1 via PE1) to PE2–PE4]
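The advertisement in the figure is an EVPN MAC/IP Advertisement route (route type 2 in RFC 7432). A sketch of its fields — the class and the RD/label values are illustrative, not a full codec:

```python
# Fields of the BGP MAC/IP Advertisement route (EVPN route type 2).
from dataclasses import dataclass
from typing import Optional

@dataclass
class MacAdvRoute:
    rd: str                   # route distinguisher, e.g. "65000:100"
    esi: int                  # Ethernet segment identifier (0 = single-homed)
    eth_tag: int              # Ethernet tag id
    mac: str                  # C-MAC learnt in the data plane from the access
    label: int                # MPLS label for unicast traffic toward this MAC
    ip: Optional[str] = None  # optional IP, enables ARP/ND suppression

# PE1 advertising locally learnt MAC "M1" (the slide's placeholder name);
# RD and label values are made up for the example.
route = MacAdvRoute(rd="65000:100", esi=0, eth_tag=0, mac="M1", label=30001)
```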
• Next generation solution for Ethernet multipoint (E-LAN) services by combining Provider Backbone Bridging (PBB - IEEE 802.1ah) and Ethernet VPN
• Data-plane learning of local C-MACs and remote C-MAC to B-MAC binding
• PEs run Multi-Protocol BGP to advertise local Backbone MAC addresses (B-MACs) & learn remote B-MACs
• Takes advantage of PBB encapsulation to simplify BGP control plane operation – faster convergence
• Lowers BGP resource usage (CPU, memory) on deployed infrastructure (PEs and RRs)
• Under standardization at IETF – WG draft: draft-ietf-l2vpn-pbb-evpn
PBB Ethernet VPN (E-LAN)
[Figure: a PBB-EVPN PE combines a PBB backbone edge bridge with EVPN. PEs advertise their backbone MACs (B-M1, B-M2) in BGP MAC advertisement routes (EVPN NLRI). C-MACs MA and MB are learnt in the data plane: local C-MAC to local B-MAC binding from the access side, remote C-MAC to remote B-MAC binding from the core]
• Next generation solution for Ethernet point-to-point (E-LINE) connectivity services
• No use of Pseudowires
• No MAC learning performed by PE
• PEs run Multi-Protocol BGP to advertise local Ethernet Segment / AC identifiers
• Under standardization at IETF – draft-boutros-l2vpn-evpn-vpws
Ethernet VPN VPWS (E-LINE)
[Figure: CE1–PE1 on Ethernet segment ES1 and CE2–PE2 on ES2, connected over MPLS]
VPWS service config at PE1: EVI = 100, Local AC ID = ES1, Remote AC ID = ES2
VPWS service config at PE2: EVI = 100, Local AC ID = ES2, Remote AC ID = ES1
• PE1 advertises a BGP Ethernet Auto-Discovery route (EVPN NLRI): Ethernet segment ES1 reachable via PE1 using MPLS label X
• PE2 advertises a BGP Ethernet Auto-Discovery route (EVPN NLRI): Ethernet segment ES2 reachable via PE2 using MPLS label Y
Provisioning model: the VPWS service is configured to advertise a local AC ID (segment) and target a remote AC ID
Conclusion
• New MPLS enhancements focus on
• Segment Routing
• Path computation element
• Ethernet VPN
• Segment routing provides a simplified control plane alternative for increased network scalability and virtualization in MPLS networks
• PCE enables centralized path computation, creation and control (SDN model)
• Ethernet VPN provides a more optimal and scalable solution for Ethernet services (E-LAN, E-LINE, E-TREE)
Conclusion
Backup
Cisco PCE Models (Cisco IOS XR)

Inter-Area MPLS TE
[Figure: ABRs between Area 1, Area 0 and Area 2 act as stateless PCEs; PCC-initiated LSPs via PCEP]
• ABRs act as stateless PCEs
• ABRs implement backward-recursive PCE-based computation
• Introduced in IOS XR 3.5.2
• IOS XR 5.1.1 introduces PCEP RFC compliance

Stateless PCE server
[Figure: out-of-network stateless PCE with a TED fed by BGP-LS / SNMP / CLI; stateless PCCs in Area 1 / Area 2 initiate LSPs via PCEP]
• Out-of-network, stateless PCE server
• PCC initiates LSPs
• Introduced in IOS XR 3.5.2
• IOS XR 5.1.1 introduces PCEP RFC compliance

Stateful PCE server
[Figure: out-of-network stateful PCE (NS-OS, TED + LSP DB) under SDN WAN orchestration, fed by BGP-LS / SNMP / CLI; application path requests from the north; PCE-initiated LSPs to stateful PCCs via PCEP]
• Out-of-network, stateful PCE server
• PCE always initiates LSPs
• Introduced in IOS XR 5.1.1
Thank you