
Cisco ASR 9000 Enterprise L3VPN Design and Implementation Guide

Authors: Chris Lewis, Saurabh Chopra, Javed Asghar

July 2014

Building Architectures to Solve Business Problems


About Cisco Validated Design (CVD) Program

The CVD program consists of systems and solutions designed, tested, and documented to facilitate faster, more reliable, and more predictable customer deployments. For more information visit http://www.cisco.com/go/designzone.

ALL DESIGNS, SPECIFICATIONS, STATEMENTS, INFORMATION, AND RECOMMENDATIONS (COLLECTIVELY, "DESIGNS") IN THIS MANUAL ARE PRESENTED "AS IS," WITH ALL FAULTS. CISCO AND ITS SUPPLIERS DISCLAIM ALL WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE. IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THE DESIGNS, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

THE DESIGNS ARE SUBJECT TO CHANGE WITHOUT NOTICE. USERS ARE SOLELY RESPONSIBLE FOR THEIR APPLICATION OF THE DESIGNS. THE DESIGNS DO NOT CONSTITUTE THE TECHNICAL OR OTHER PROFESSIONAL ADVICE OF CISCO, ITS SUPPLIERS OR PARTNERS. USERS SHOULD CONSULT THEIR OWN TECHNICAL ADVISORS BEFORE IMPLEMENTING THE DESIGNS. RESULTS MAY VARY DEPENDING ON FACTORS NOT TESTED BY CISCO.

The Cisco implementation of TCP header compression is an adaptation of a program developed by the University of California, Berkeley (UCB) as part of UCB's public domain version of the UNIX operating system. All rights reserved. Copyright © 1981, Regents of the University of California.

Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of Cisco trademarks, go to this URL: www.cisco.com/go/trademarks. Third-party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (1110R)

Any Internet Protocol (IP) addresses and phone numbers used in this document are not intended to be actual addresses and phone numbers. Any examples, command display output, network topology diagrams, and other figures included in the document are shown for illustrative purposes only. Any use of actual IP addresses or phone numbers in illustrative content is unintentional and coincidental.

Cisco ASR 9000 Enterprise L3VPN Design and Implementation Guide

© 2014 Cisco Systems, Inc. All rights reserved.


Contents

Chapter 1 Introduction 1-1

Chapter 2 Overview 2-1

Terminology 2-2

Chapter 3 Enterprise Network Virtualization Design 3-1

Small Network Design and Implementation 3-1

PE Operation and Configuration 3-2

VRF Configuration 3-2

PE VRF Configuration 3-3

PE-CE Routing Protocol Configuration 3-4

PE eBGP Routing Configuration with CPE 3-4

Route Reflector Operation and Configuration 3-7

Route Reflector Configuration 3-7

PE and P Transport Configuration 3-8

Fast Failure Detection Using Bidirectional Forwarding Detection 3-8

Fast Convergence Using Remote Loop Free Alternate Fast Reroute 3-8

Fast Convergence Using BGP Prefix Independent Convergence 3-9

PE and P Transport Configuration 3-9

QoS Operation and Implementation in the Core Network 3-14

PE and P Core QoS Configuration 3-15

Large Scale Network Design and Implementation 3-16

Using Core Network Hierarchy to Improve Scale 3-17

Large Scale Hierarchical Core and Aggregation Networks with Hierarchy 3-18

PE Transport Configuration 3-19

ABR Transport Configuration 3-21

CORE RR Transport Configuration 3-25

Chapter 4 PE-to-CE Design Options 4-1

Inter-Chassis Communication Protocol 4-1

ICCP Configuration 4-2

Ethernet Access 4-2

Hub-and-Spoke Using MC-LAG Active/Standby 4-2


Hub-and-Spoke with VRRP IPv4 and IPv6 Active/Active 4-4

PE Configuration 4-5

Access Switch Configuration 4-7

CPE Configuration 4-8

G.8032 Ring Access with VRRP IPv4 and IPv6 4-8

PE Configuration 4-9

CE Configuration 4-14

Ethernet Access Node Configuration 4-14

nV (Network Virtualization) Access 4-16

nV Satellite Simple Rings 4-17

nV L1 Fabric Configuration 4-18

nV Satellite Layer 2 Fabric 4-20

nV L2 Fabric Configuration 4-21

nV Cluster 4-22

nV Cluster Configuration 4-24

Native IP-Connected Access 4-25

MPLS Access using Pseudowire Headend 4-28

Access Device Configuration 4-28

PE Configuration 4-29

CE Configuration 4-31

Chapter 5 PE UNI QoS 5-1

PE UNI QoS Configuration 5-2

PE UNI QoS Configuration with PWHE Access 5-4

Chapter 6 Performance and Scale 6-1

Internet Peering Application 6-2

100G Edge and Core-Facing Ports 6-5

Appendix A Related Documents A-1


Chapter 1

Introduction

Enterprise Layer 3 (L3) network virtualization enables one physical network to support multiple L3 virtual private networks (L3VPNs). To a group of end users, it appears as if each L3VPN is connected to a dedicated network with its own routing information, quality of service (QoS) parameters, and security and access policies.

This functionality has numerous applications, including:

• Requirements to separate departments and functions within an organization for security or compliance with statutes such as the Sarbanes-Oxley Act or Health Insurance Portability and Accountability Act (HIPAA).

• Mergers and acquisitions in which consolidating disparate networks into one physical infrastructure that supports existing IP address spaces and policies provides economic benefits.

• Airports in which multiple airlines each require an independent network with unique policies, but the airport operator provides only one network infrastructure.

• Requirements to separate guest networks from internal corporate networks.

For each use case requiring network separation, a L3VPN infrastructure offers the following key benefits over non-virtualized infrastructures or separate physical networks:

• Reduced costs—Multiple user groups sharing virtual networks on one infrastructure benefit from greater statistical multiplexing and higher utilization of expensive WAN links.

• A single network enables simpler management and operation of operations, administration, and maintenance (OAM) protocols.

• Security between virtual networks is built in without needing complex access control lists (ACLs) to restrict access for each user group.

• Consolidating network resources into one higher-scale virtualized infrastructure enables more options for improved high availability (HA), including device clustering and multi-homing.


Chapter 2

Overview

End-to-end virtualization of an enterprise network infrastructure relies upon the following primary components:

• Virtual routing instances in edge routers, delivering service to each group that uses a virtualized infrastructure instance

• Route-distinguishers, added to IPv4 addresses to support overlapping address spaces in the virtual infrastructure

• Label-based forwarding in the network core so that forwarding does not rely on IP addresses in a virtual network, which can overlap with other virtual networks

Figure 2-1 summarizes the three most common options used to virtualize enterprise Layer 3 (L3) WANs.

Figure 2-1 Transport Options for L3 WAN Virtualization

[Figure 2-1 depicts three transport options, each connecting CE routers at Sites 1, 2, and 3: (1) a self-deployed IP/MPLS backbone in which the customer deploys and manages the PE and P routers; (2) an SP-managed Ethernet service interconnecting customer-managed backbones; and (3) an SP-managed MPLS IP VPN service, with VRFs on the provider PEs and an IP routing peer (BGP, static, or IGP) toward the CE.]


This guide focuses on Option 1 in Figure 2-1, the enterprise-owned and operated Multiprotocol Label Switching (MPLS) L3VPN model.

Terminology

The following terminology is used in the MPLS L3VPN architecture:

• Virtual routing and forwarding instance (VRF)—This entity in a physical router enables the implementation of separate routing and control planes for each client network in the physical infrastructure.

• Label Distribution Protocol (LDP)—This protocol is used on each link in the MPLS L3VPN network to distribute labels associated with prefixes; labels are locally significant to each link.

• Multiprotocol BGP (MP-BGP)—This protocol appends route distinguisher values to ensure unique addressing in the virtualized infrastructure, and imports and exports routes to each VRF based on route target community values.

• P (provider) router—This type of router, also called a Label Switching Router (LSR), runs an Interior Gateway Protocol (IGP) and LDP.

• PE (provider edge) router—This type of router, also called an edge router, imposes and removes MPLS labels and runs IGP, LDP, and MP-BGP.

• CE (customer edge) router—This type of router is the demarcation device in a provider-managed VPN service. It is possible to connect a LAN to the PE directly. However, if multiple networks exist at a customer location, a CE router simplifies the task of connecting the networks to an L3VPN instance.

The PE router must import all client routes served by the associated CE router into the VRF associated with that virtual network instance. This enables the MPLS L3VPN to distribute route information and provide connectivity among branch, data center, and campus locations.

Figure 2-2 shows how the components combine to create an MPLS L3VPN service and support multiple L3VPNs on the physical infrastructure. In the figure, a P router connects two PE routers. The packet flow is from left to right.

Figure 2-2 Major MPLS L3VPN Components and Packet Flow

The PE on the left has three groups, each using its own virtual network. Each PE has three VRFs (red, green and blue); each VRF is for the exclusive use of one group using a virtual infrastructure.

[Figure 2-2 depicts PE and P routers forwarding a packet that carries a 4-byte IGP label and a 4-byte VPN label in front of the original packet.]


When an IP packet arrives at the PE router on the left, the PE imposes two labels on the packet. BGP assigns the inner (VPN) label, whose value remains constant as the packet traverses the network. The inner label value identifies the interface on the egress PE out of which the IP packet will be sent. LDP assigns the outer (IGP) label; its value changes at each hop as the packet traverses the network to the destination PE.

For more information about MPLS VPN configuration and operation, refer to “Configuring a Basic MPLS VPN” at:

• http://www.cisco.com/c/en/us/support/docs/multiprotocol-label-switching-mpls/mpls/13733-mpls-vpn-basic.html


Chapter 3

Enterprise Network Virtualization Design

This Cisco Validated Design (CVD) focuses on the role of Cisco ASR 9000 Series Aggregation Services Routers (ASR 9000) as P and PE devices in the Multiprotocol Label Switching (MPLS) L3VPN architecture described in Figure 2-2 on page 2-2. Providers can use this architecture to implement network infrastructures that connect virtual networks among data centers, branch offices, and campuses using all types of WAN connectivity.

In this architecture, data centers (branch or campus) are considered customer edge (CE) devices. The design considers provider (P) and provider edge (PE) router configuration with the following connectivity control and data plane options between PE and CE routers:

• Ethernet hub-and-spoke or ring

• IP

• Network virtualization (nV)

• Pseudowire Headend (PWHE) for MPLS CE routers

Two options are considered for the MPLS L3VPN infrastructure incorporating P and PE routers:

• A flat LDP domain option, which is appropriate for smaller MPLS VPN deployments (700-1000 devices).

• A hierarchical design using RFC 3107 labeled BGP to segment the P and PE domains into separate IGP domains, which helps scale the infrastructure well beyond 50,000 devices.

This chapter first examines topics common to small and large network implementations. These topics are discussed in the context of small network design. Later, it looks at additional technologies needed to enable small networks to support many more users. This chapter includes the following major topics:

• Small Network Design and Implementation, page 3-1

• Large Scale Network Design and Implementation, page 3-16

Small Network Design and Implementation

Figure 3-1 shows the small network deployment topology.


Figure 3-1 Small Deployment Topology

• Core and aggregation networks form one IGP and LDP domain.

– Scale target for this architecture is less than 700 IGP/LDP nodes

• All VPN configuration is on the PE nodes.

• Connectivity between the PE Node and the branch/campus router includes the following options:

– Ethernet hub-and-spoke or ring

– IP between PE and CE

– Network virtualization

– PWHE to collapse CE into PE as nV alternative

The domain of P and PE routers, which numbers no more than a few hundred nodes, can be implemented using single IGP and LDP instances. On the left is the data center, with the network extending across the WAN to branch and campus locations.

PE Operation and Configuration

PE routers must perform multiple tasks: separating the control and data planes of individual groups, and advertising routes between sites in the same VPN.

This functionality is achieved by creating VRF instances to provide separate data and control plane for the L3VPN. VRFs are configured with route distinguishers, which are unique for a particular VRF on the PE device. MP-BGP, which is configured on PEs, advertises and receives VRF prefixes appended with route distinguishers, which are also called VPNv4 prefixes.

Each VRF is also configured with a route target (RT), a BGP extended community representing a VPN, which is attached to VPNv4 prefixes when a route is advertised (exported) from the PE. Remote PEs selectively import into their VRF only those VPNv4 prefixes tagged with an RT that matches the VRF's configured import RT. The PE can use static routing or run a routing protocol with the CPE at each branch to learn prefixes. Unless there is a compelling reason to do otherwise in the design, route targets and route distinguishers are set to the same values to simplify configuration.

VRF Configuration

VRF configuration comprises the following major steps, which are described in detail in the subsequent sections:

• Defining a unique VRF name on the PE.

• Configuring a route distinguisher value for the VRF under router BGP so that VRF prefixes can be appended with RD value to make VPNv4 prefixes.


• Importing and exporting route targets corresponding to the VPN in the VRF configuration so that PE can advertise routes with the assigned export route target and download prefixes tagged with configured import route target into the VRF table.

• Applying the VRF on the corresponding interface connected to CPE.

PE VRF Configuration

Step 1 Configure a VRF named BUS-VPN2.

vrf BUS-VPN2

Step 2 Enter IPv4 address-family configuration mode for VRF.

address-family ipv4 unicast

Step 3 Configure the import route target to selectively import IPv4 routes into the VRF matching the route target.

import route-target 8000:8002

Step 4 Configure the export route target so that IPv4 routes are tagged with this route target when advertised to remote PE routers.

export route-target 8000:8002

Step 5 Enter IPv6 address-family configuration mode for VRF.

address-family ipv6 unicast

Step 6 Configure the import route target to selectively import IPv6 routes into the VRF matching the route target.

import route-target 8000:8002 !

Step 7 Configure the export route target so that IPv6 routes are tagged with this route target when advertised to remote PE routers.

export route-target 8000:8002 !!

Step 8 Enter router BGP configuration mode.

router bgp 101

Step 9 Enter VRF BGP configuration mode.

vrf BUS-VPN2

Step 10 Define the route distinguisher value for the VRF. The route distinguisher is unique for each VRF in each PE router.

rd 8000:8002

Step 11 Enter VRF IPv4 address-family configuration mode.

address-family ipv4 unicast


Step 12 Redistribute directly-connected IPv4 prefixes.

redistribute connected

Step 13 Enter VRF IPv6 address-family configuration mode.

address-family ipv6 unicast

Step 14 Redistribute directly-connected IPv6 prefixes.

redistribute connected

Step 15 Enter CPE-facing interface configuration mode.

interface GigabitEthernet0/0/1/7

Step 16 Configure VRF on the interface.

vrf BUS-VPN2
ipv4 address 100.192.30.1 255.255.255.0
ipv6 address 2001:100:192:30::1/64
!

At this stage, the L3 VRF and the route distinguisher are configured to append to routes coming into the VRF. The route distinguisher enables multiple VPN clients to use overlapping IP address spaces. The L3VPN core can differentiate overlapping addresses because each IP address is appended with a route distinguisher and therefore is globally unique. Combined client IP addresses and route distinguishers are referred to as VPNv4 addresses.
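Assembled from the steps above, the complete PE VRF configuration takes the following shape (a consolidated sketch using the same example values; verify command syntax against your IOS XR release):

vrf BUS-VPN2
 address-family ipv4 unicast
  import route-target 8000:8002
  export route-target 8000:8002
 !
 address-family ipv6 unicast
  import route-target 8000:8002
  export route-target 8000:8002
 !
!
router bgp 101
 vrf BUS-VPN2
  rd 8000:8002
  address-family ipv4 unicast
   redistribute connected
  !
  address-family ipv6 unicast
   redistribute connected
  !
 !
!
interface GigabitEthernet0/0/1/7
 vrf BUS-VPN2
 ipv4 address 100.192.30.1 255.255.255.0
 ipv6 address 2001:100:192:30::1/64
!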

To get routes from a client site at the CE (branch or campus router) into the VRF, either static routing or a routing protocol is used. Examples of the most common static routing and eBGP scenarios follow.

PE-CE Routing Protocol Configuration

This section describes how to configure PE-CE routing protocols.

PE eBGP Routing Configuration with CPE

PE is configured with an Exterior Border Gateway protocol (eBGP) session with CPE in the VRF under address-family IPv4 to exchange IPv4 prefixes with CPE. Routes learned from CPE are advertised to remote PEs using MP-BGP.

The following procedure illustrates the configuration.

Step 1 Enter router BGP configuration mode.

router bgp 101

Step 2 Enter VRF BGP configuration mode.

vrf BUS-VPN2

Step 3 Configure the CPE IP address as a BGP peer and its autonomous system (AS) as remote-as.

neighbor 100.192.30.3
 remote-as 65002

Step 4 Enter VRF IPv4 address-family configuration mode for BGP.


address-family ipv4 unicast!
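Put together in the IOS XR configuration hierarchy, the PE-CE eBGP configuration above looks approximately as follows (a sketch; the per-neighbor IPv4 address family activates prefix exchange with the CPE):

router bgp 101
 vrf BUS-VPN2
  rd 8000:8002
  address-family ipv4 unicast
  !
  neighbor 100.192.30.3
   remote-as 65002
   address-family ipv4 unicast
  !
 !
!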

PE Static Routing Configuration with CPE

PE is configured with static routes in the VRF, with the CPE address as the next hop. The configuration uses the IPv4 address family to configure IPv4 static routes. The static routes are then advertised to remote PEs by redistributing them under BGP.

The following procedure illustrates the configuration.

Step 1 Enter router static configuration mode for the VRF.

router static
 vrf BUS-VPN2

Step 2 Enter VRF IPv4 address-family configuration mode for static.

address-family ipv4 unicast

Step 3 Configure a static route to 100.192.194.0/24 with next hop 100.192.40.3.

100.192.194.0/24 100.192.40.3

router bgp 101
<snip>
vrf BUS-VPN2
 rd 8000:8002
 address-family ipv4 unicast

Step 4 Redistribute Static Prefixes under BGP VRF address-family IPv4 so that they are advertised to remote PEs.

redistribute static
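A consolidated view of the static routing option, combining the router static and BGP redistribution steps above (a sketch using the same example prefix and next hop):

router static
 vrf BUS-VPN2
  address-family ipv4 unicast
   100.192.194.0/24 100.192.40.3
  !
 !
!
router bgp 101
 vrf BUS-VPN2
  address-family ipv4 unicast
   redistribute static
  !
 !
!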

After routes from the branch or campus router are in the client VRF, the routes must be advertised to other sites in the L3VPN to enable reachability. Reachability is delivered using MP-BGP to advertise VPNv4 addresses, associated with the VRF at the branch location, to members of the same VPN.

PE MP-BGP Configuration

MP-BGP configuration comprises BGP peering with route reflector for VPNv4 and VPNv6 address families to advertise and receive VPNv4 and VPNv6 prefixes. MP-BGP uses session-group to configure address-family independent (global) parameters; peers requiring the same parameters can inherit its configuration.

Session-group includes update-source, which specifies the interface whose address is used for BGP communication, and remote-as, which specifies the AS number of the iBGP peer (here, the route reflector). Neighbor-group is configured to import the session-group for address-family independent parameters, and to configure address-family dependent parameters, such as next-hop-self, under the corresponding address family.

The following procedure illustrates MP-BGP configuration on PE.

Step 1 Enter router BGP configuration mode.


router bgp 101

Step 2 Configure the BGP router ID.

bgp router-id 100.111.11.2

Step 3 Configure the VPNv4 unicast address-family to exchange VPNv4 prefixes.

address-family vpnv4 unicast!

Step 4 Configure the VPNv6 unicast address-family to exchange VPNv6 prefixes.

address-family vpnv6 unicast!

Step 5 Configure session-group to define address-family independent parameters.

session-group ibgp

Step 6 Specify remote-as as the route reflector AS number.

remote-as 101

Step 7 Specify update-source as Loopback0 for BGP communication.

update-source Loopback0!

Step 8 Enter neighbor-group configuration mode.

neighbor-group rr

Step 9 Import session-group address-family independent parameters.

use session-group ibgp

Step 10 Enable the vpnv4 address-family for the neighbor group and configure address-family dependent parameters under the VPNv4 address-family.

address-family vpnv4 unicast!

Step 11 Enable the vpnv6 address-family for the neighbor group and configure address-family dependent parameters under the VPNv6 address-family.

address-family vpnv6 unicast!

Step 12 Configure the route-reflector address as a neighbor and import the neighbor-group rr to make it a VPNv4 and VPNv6 peer.

neighbor 100.111.4.3
 use neighbor-group rr
!
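Taken together, the PE MP-BGP configuration toward the route reflector from the steps above is approximately the following (a consolidated sketch):

router bgp 101
 bgp router-id 100.111.11.2
 address-family vpnv4 unicast
 !
 address-family vpnv6 unicast
 !
 session-group ibgp
  remote-as 101
  update-source Loopback0
 !
 neighbor-group rr
  use session-group ibgp
  address-family vpnv4 unicast
  !
  address-family vpnv6 unicast
  !
 !
 neighbor 100.111.4.3
  use neighbor-group rr
 !
!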

The preceding sections described how to configure virtual networks on a PE router. The network can have hundreds of PE routers connecting to campus/branch routers and data centers. A PE router in one location learns the VRF prefixes of remote locations using Multiprotocol IBGP. PEs cannot advertise a VPNv4 prefix received from one IBGP peer to another because of the IBGP split-horizon rule, so IBGP requires a full mesh between all IBGP-speaking PEs. This causes scalability and overhead issues: each PE router must maintain IBGP sessions with all remote PEs and send updates to all IBGP peers, which duplicates effort. To address this issue, route reflectors can be deployed, as explained below.


Route Reflector Operation and Configuration

A route reflector (RR) addresses the scalability and overhead issues of requiring a full mesh of IBGP sessions because of the IBGP split-horizon rule. When a device is assigned as an RR and PE devices are assigned as its clients, the split-horizon rule is relaxed on the RR, enabling the route reflector to reflect prefixes received from one client PE to another client PE. PEs must maintain an IBGP session only with the RR to send and receive updates. The RR reflects updates received from one PE to the other PEs in the network, eliminating the requirement for an IBGP full mesh.

By default, an RR does not change the next-hop or any other prefix attributes. Prefixes received by PEs still have the remote PEs, not the RR, as the next-hop, so PEs can send traffic directly to remote PEs. This eliminates the requirement to have the RR in the data path; the RR can be dedicated to the route reflection function.

Route Reflector Configuration

This section describes ASR 1000 RR configuration, which includes configuring a peer-group for router BGP. PEs having the same update policies (such as update-group, remote-as) can be grouped into the same peer group, which simplifies peer configuration and enables more efficient updating. The peer-group is made a RR client so that the RR can reflect routes received from a client PE to other client PEs.

Step 1 Loopback interface for IBGP session.

interface Loopback0
 ip address 100.111.4.3 255.255.255.255

Step 2 Enter Router BGP configuration mode.

router bgp 101
 bgp router-id 100.111.4.3

Step 3 Define Peer-group rr-client.

neighbor rr-client peer-group

Step 4 Specify Update-source as Loopback0 for BGP communication.

neighbor rr-client update-source Loopback0

Step 5 Specify remote-as as AS number of PE.

neighbor rr-client remote-as 101

Step 6 Configure PE router as Peer-group member.

neighbor 100.111.11.2 peer-group rr-client

Step 7 Enter VPNv4 address-family mode.

address-family vpnv4

Step 8 Make peer-group members RR client.

neighbor rr-client route-reflector-client

Step 9 Configure the RR to send both standard and extended communities (including the RT) to peer-group members.

neighbor rr-client send-community both

Step 10 Activate the PE as peer for VPNv4 peering under VPNv4 address-family.


neighbor 100.111.11.2 activate
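Assembled from the steps above, the ASR 1000 route reflector configuration looks approximately as follows (classic IOS syntax; a VPNv6 address family would be added in the same way if IPv6 VPN prefixes are reflected):

interface Loopback0
 ip address 100.111.4.3 255.255.255.255
!
router bgp 101
 bgp router-id 100.111.4.3
 neighbor rr-client peer-group
 neighbor rr-client remote-as 101
 neighbor rr-client update-source Loopback0
 neighbor 100.111.11.2 peer-group rr-client
 !
 address-family vpnv4
  neighbor rr-client route-reflector-client
  neighbor rr-client send-community both
  neighbor 100.111.11.2 activate
 exit-address-family
!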

After configuring the PE with the required virtual network configuration described above, transport must be set up to carry virtual network traffic from one location to another. The next section describes how to implement transport and optimize it with fast detection and convergence for seamless service delivery.

PE and P Transport Configuration

Transport networks, comprising PE and P routers, carry traffic from multiple L3VPNs from one location to another. To achieve seamless communication across virtual networks, transport networks require reachability and label-based forwarding across the transport domain, along with fast failure detection and convergence. Bidirectional Forwarding Detection (BFD) is used for fast failure detection. Fast convergence uses Remote Loop Free Alternate Fast Reroute (rLFA FRR) and BGP Prefix Independent Convergence (PIC). These methods are described in subsequent sections.

Transport implementation requires PE, P, and RR devices configured using IGP for reachability. These devices also use LDP to exchange labels for prefixes advertised and learned from IGP. The devices maintain a Label Forwarding Information Base (LFIB) to make forwarding decisions.

When sending VRF traffic from a branch or campus router to a remote location, PE encapsulates traffic in MPLS headers, using a label corresponding to the BGP next-hop (remote PE) for the traffic. Intermediate devices, such as P devices, examine the top label on the MPLS header, perform label swapping, and use LFIB to forward traffic toward the remote PE. P devices can ignore the VRF traffic and forward packets using only labels. This enables the establishment and use of labeled-switched paths (LSPs) when a PE device forwards VPN traffic to another location.
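As a minimal illustration of these building blocks, enabling reachability and label exchange on a core-facing interface requires only the IGP and LDP; the tuned configurations in the following sections add BFD, rLFA FRR, and BGP PIC on top of this base (a sketch using the same interface and addresses as the PE transport configuration below):

router isis core
 net 49.0100.1001.1101.1001.00
 address-family ipv4 unicast
  metric-style wide
 !
 interface Loopback0
  passive
  address-family ipv4 unicast
 !
 interface TenGigE0/0/0/0
  address-family ipv4 unicast
 !
!
mpls ldp
 router-id 100.111.11.1
 interface TenGigE0/0/0/0
 !
!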

Fast Failure Detection Using Bidirectional Forwarding Detection

Link failure detection in the core normally occurs through loss of signal on the interface. This is not sufficient for BGP, however, because BGP neighbors are typically not on the same segment. A link failure (signal loss) at a BGP peer can remain undetected by another BGP peer. Absent some other failure detection method, reconvergence occurs only when BGP timers expire, which is too slow. BFD is a lightweight, fast hello protocol that speeds remote link failure detection.

PE and P devices use BFD as a failure detection mechanism on the core interfaces; BFD informs the IGP about link or node failures within milliseconds (ms). BFD peers send BFD control packets to each other on the BFD-enabled interfaces at negotiated intervals. If a BFD peer does not receive a control packet and the configured dead timer (in ms) expires, the BFD session is torn down and the IGP is rapidly informed about the failure. The IGP immediately tears down the adjacency with the neighbor and switches traffic to an alternate path. This enables failure detection within milliseconds.
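For example, BFD is enabled per core-facing interface under the IGP; with the 15 ms transmit interval and multiplier of 3 used in this design, a failure is declared after roughly 45 ms of missed control packets (a sketch matching the transport configuration later in this section):

router isis core
 interface TenGigE0/0/0/0
  bfd minimum-interval 15
  bfd multiplier 3
  bfd fast-detect ipv4
 !
!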

Fast Convergence Using Remote Loop Free Alternate Fast Reroute

After BFD detects a failure, the next step is to "fast converge" the network to an alternate path. For IGP prefixes, loop-free alternates (LFAs) enable fast convergence. The type of LFA depends on the network topology. The first type, called simply LFA, is suitable for hub-and-spoke topologies. The second type, called remote LFA (rLFA), is suitable for ring topologies.


• LFA FRR calculates the backup path for each prefix in the IGP routing table; if a failure is detected, the router immediately switches to the appropriate backup path in about 50 ms. Only loop-free paths are candidates for backup paths.

• rLFA FRR works differently because it is designed for cases with a physical path, but no loop-free alternate paths. In the rLFA case, automatic LDP tunnels are set up to provide LFAs for all network nodes.

Without LFA or rLFA FRR, a router calculates the alternate path after a failure is detected, which results in delayed convergence. However, LFA FRR calculates the alternate paths in advance to enable faster convergence. P and PE devices have alternate paths calculated for all prefixes in the IGP table, and use rLFA FRR to fast reroute in case of failure in a primary path.
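The corresponding IS-IS interface configuration, shown in full in the transport sections that follow, enables per-prefix LFA for Level 2 prefixes and adds a remote-LFA LDP tunnel as the repair path (a sketch):

router isis core
 interface TenGigE0/0/0/0
  address-family ipv4 unicast
   fast-reroute per-prefix level 2
   fast-reroute per-prefix remote-lfa tunnel mpls-ldp
  !
 !
!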

Fast Convergence Using BGP Prefix Independent Convergence

For BGP prefixes, fast convergence is achieved using BGP PIC, in which BGP calculates an alternate best path and primary best path and installs both paths in the routing table as primary and backup paths. This functionality is similar to rLFA FRR, which is described in the preceding section. If the BGP next-hop remote PE becomes unreachable, BGP immediately switches to the alternate path using BGP PIC instead of recalculating the path after the failure. If the BGP next-hop remote PE is alive but there is a path failure, IGP rLFA FRR handles fast reconvergence to the alternate path and BGP updates the IGP next-hop for the remote PE.
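On the PE, BGP PIC is enabled with the additional-paths capability and a path-selection policy that installs one backup path, as configured later in this section (a sketch):

router bgp 101
 address-family vpnv4 unicast
  additional-paths receive
  additional-paths send
  additional-paths selection route-policy add-path-to-ibgp
 !
!
route-policy add-path-to-ibgp
  set path-selection backup 1 install
end-policy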

PE and P Transport Configuration

This section describes how to configure PE and P transport to support fast failure detection and fast convergence.

PE Transport Configuration

PE configuration includes enabling IGP (IS-IS or OSPF can be used) to exchange core and aggregation reachability, and enabling LDP to exchange labels on core facing interfaces. A loopback interface is also advertised in IGP as the BGP VPNv4 session is created, using update-source Loopback0 as mentioned in PE Operation and Configuration, page 3-2. Using the loopback address to source updates and target updates to remote peers improves reliability; the loopback interface is always up when the router is up, unlike physical interfaces that can have link failures.

BFD is configured on core-facing interfaces using a 15 ms hello interval and multiplier 3 to enable fast failure detection in the transport network. rLFA FRR is used under IS-IS level 2 for fast convergence if a transport network failure occurs. BGP PIC is configured under VPNv4 address-family for fast convergence of VPNv4 Prefixes if a remote PE becomes unreachable.

The following procedure describes PE transport configuration.

Step 1 Loopback interface for the BGP VPNv4 neighborship.

interface Loopback0
 ipv4 address 100.111.11.1 255.255.255.255
 ipv6 address 2001:100:111:11::1/128
!

Step 2 Core interface.

interface TenGigE0/0/0/0

ipv4 address 10.11.1.0 255.255.255.254!


Step 3 Enter Router IS-IS configuration.

router isis core

Step 4 Assign NET address to the IS-IS process.

net 49.0100.1001.1101.1001.00

Step 5 Enter IPv4 address-family for IS-IS.

address-family ipv4 unicast

Step 6 Metric style Wide generates new-style TLV with wider metric fields for IPv4.

metric-style wide!

Step 7 Enter IPv6 address-family for IS-IS.

address-family ipv6 unicast

Step 8 Metric-style Wide generates new-style TLV with wider metric fields for IPv6.

metric-style wide !

Step 9 Configure IS-IS for Loopback interface.

interface Loopback0

Step 10 Make loopback passive to avoid sending unnecessary hellos on it.

passive

Step 11 Enter IPv4 Address-family for Loopback.

address-family ipv4 unicast !

Step 12 Enter IPv6 Address-family for Loopback.

address-family ipv6 unicast !!

Step 13 Configure IS-IS for TenGigE0/0/0/0 interface.

interface TenGigE0/0/0/0

Step 14 Configure IS-IS Circuit-Type on the interface.

circuit-type level-2-only

Step 15 Configure Minimum Interval between sending BFD hello packets to the neighbor.

bfd minimum-interval 15

Step 16 Configure BFD multiplier.

bfd multiplier 3

Step 17 Enable BFD to detect failures in the path between adjacent forwarding engines.

bfd fast-detect ipv4

Step 18 Enter the IPv4 Address-family for TenGig interface.

address-family ipv4 unicast

Step 19 Configure IS-IS metric for Interface.

metric 10


Step 20 Enable per prefix FRR for Level 2 prefixes.

fast-reroute per-prefix level 2

Step 21 Configure an FRR path that redirects traffic to a remote LFA tunnel.

fast-reroute per-prefix remote-lfa tunnel mpls-ldp

Step 22 Enable MPLS LDP sync to ensure LDP comes up on the link before the link is used for forwarding, to avoid packet loss.

mpls ldp sync !!

Step 23 Enter MPLS LDP configuration mode.

mpls ldp

log graceful-restart!

Step 24 Configure router-id for LDP.

router-id 100.111.11.1!

Step 25 Enable LDP on TenGig0/0/0/0.

interface TenGigE0/0/0/0
 address-family ipv4
!

Step 26 Enter BGP configuration mode.

router bgp 101

Step 27 Enter VPNv4 address-family mode.

address-family vpnv4 unicast

Step 28 Configure receive capability of multiple paths for a prefix to the capable peers.

additional-paths receive

Step 29 Configure send capability of multiple paths for a prefix to the capable peers.

additional-paths send

Step 30 Enable BGP PIC functionality with appropriate route-policy to calculate back up paths.

additional-paths selection route-policy add-path-to-ibgp!

Step 31 Configure route-policy used in BGP PIC.

route-policy add-path-to-ibgp

Step 32 Configure to install 1 backup path.

set path-selection backup 1 install
end-policy


P Transport Configuration

P transport configuration includes enabling an IGP (IS-IS or OSPF) to exchange core and aggregation reachability, and enabling LDP to exchange labels on core-facing interfaces. P routers do not run MP-BGP because no VRFs are configured on them, so they do not need VPNv4 and VPNv6 prefixes. P routers know only the core and aggregation prefixes in the transport network and do not need to know prefixes belonging to VPNs. P routers swap labels based on the top label of the packet, which corresponds to the remote PE, and use the LFIB to build the PE-to-PE LSP. rLFA FRR is used under IS-IS level 2 for fast convergence if a transport network failure occurs.

Step 1 Core Interface connecting to PE.

interface TenGigE0/0/0/0

ipv4 address 10.11.1.1 255.255.255.254!

Step 2 Core Interface connecting to Core MPLS network.

interface TenGigE0/0/0/1
 ipv4 address 10.2.1.4 255.255.255.254
!

Step 3 Enter Router IS-IS configuration.

router isis core

Step 4 Assign NET address to the IS-IS process.

net 49.0100.1001.1100.2001.00

Step 5 Enter IPv4 address-family for IS-IS.

address-family ipv4 unicast

Step 6 Metric-style Wide generates new-style TLV with wider metric fields for IPv4.

metric-style wide !

Step 7 Configure IS-IS for Loopback interface.

interface Loopback0

Step 8 Make loopback passive to avoid sending unnecessary hellos on it.

passive

Step 9 Enter IPv4 Address-family for Loopback.

address-family ipv4 unicast !!

Step 10 Configure IS-IS for TenGigE0/0/0/0 interface.

interface TenGigE0/0/0/0

Step 11 Configure IS-IS Circuit-Type on the interface.

circuit-type level-2-only

Step 12 Configure Minimum Interval between sending BFD hello packets to the neighbor.

bfd minimum-interval 15

Step 13 Configure BFD multiplier.

bfd multiplier 3


Step 14 Enable BFD to detect failures in the path between adjacent forwarding engines.

bfd fast-detect ipv4

Step 15 Enter the IPv4 Address-family for TenGig interface.

address-family ipv4 unicast

Step 16 Configure IS-IS metric for Interface.

metric 10

Step 17 Enable MPLS LDP sync to ensure LDP comes up on the link before the link is used for forwarding, to avoid packet loss.

mpls ldp sync !

!

Step 18 Configure IS-IS for TenGigE0/0/0/1 interface.

interface TenGigE0/0/0/1

Step 19 Configure IS-IS Circuit-Type on the interface.

circuit-type level-2-only

Step 20 Configure Minimum Interval between sending BFD hello packets to the neighbor.

bfd minimum-interval 15

Step 21 Configure BFD multiplier.

bfd multiplier 3

Step 22 Enable BFD to detect failures in the path between adjacent forwarding engines.

bfd fast-detect ipv4

Step 23 Enter the IPv4 Address-family for TenGig interface.

address-family ipv4 unicast

Step 24 Configure IS-IS metric for Interface.

metric 10

Step 25 Enable per prefix FRR for Level 2 prefixes.

fast-reroute per-prefix level 2

Step 26 Configure an FRR path that redirects traffic to a remote LFA tunnel.

fast-reroute per-prefix remote-lfa tunnel mpls-ldp

Step 27 Enable MPLS LDP sync to ensure LDP comes up on the link before the link is used for forwarding, to avoid packet loss.

mpls ldp sync !!

Step 28 Enter MPLS LDP configuration mode.

mpls ldp
 log
  neighbor
  graceful-restart


Step 29 Configure router-id for LDP.

router-id 100.111.2.1

Step 30 Enable LDP on TenGig0/0/0/0.

interface TenGigE0/0/0/0!

Step 31 Enable LDP on TenGig0/0/0/1.

interface TenGigE0/0/0/1!

QoS Operation and Implementation in the Core Network

Enterprise virtual networks carry traffic types that include voice, video, critical application traffic, and end-user web traffic. These traffic types require different priorities and treatment based on their characteristics and their criticality to the business. In the MPLS core network, QoS ensures proper treatment of the virtual network traffic being transported, as described in this section.

As discussed in previous sections, an MPLS header is imposed on enterprise virtual network traffic as it enters the MPLS network at the PE. When this labeled traffic is transported in the core network, QoS uses the 3-bit MPLS EXP field (values 0-7) in the MPLS header to apply the proper treatment. The DiffServ per-hop behaviors (PHBs), which define the packet-forwarding properties associated with different traffic classes, are divided into the following:

• Expedited Forwarding (EF)—Used for traffic requiring low loss, low latency, low jitter, and assured bandwidth.

• Assured Forwarding (AF)—Allows four classes, each with defined buffer and bandwidth allocations.

• Best Effort (BE)—Best effort forwarding.

This guide focuses on the MPLS Uniform QoS model, in which the DSCP marking of traffic received from the branch or campus router at the PE is mapped to the corresponding MPLS EXP bits. Table 3-1 shows the mapping of traffic classes to DSCP and MPLS EXP values.

The QoS configuration includes class maps for the traffic classes listed above, each matching the corresponding MPLS EXP value. In the policy map, the real-time traffic class CMAP-RT-EXP is configured with the highest priority (level 1) for low-latency expedited forwarding (EF) and is also policed. The remaining classes are assigned their required bandwidth. WRED is used as the congestion avoidance mechanism for EXP 1 and EXP 2 traffic in the Enterprise Critical class CMAP-EC-EXP. The policy map is applied to the PE and P core interfaces in the egress direction across the MPLS network.

Table 3-1 Traffic Class Mapping

Traffic Class                          PHB   DSCP   MPLS EXP
Network Management                     AF    56     7
Network Control Protocols              AF    48     6
Enterprise Voice and Real-time         EF    46     5
Enterprise Video Distribution          AF    32     4
Enterprise Telepresence                AF    24     3
Enterprise Critical: In Contract       AF    16     2
Enterprise Critical: Out of Contract   AF    8      1
Enterprise Best Effort                 BE    0      0



PE and P Core QoS Configuration

Step 1 Class-map for the Enterprise critical traffic.

class-map match-any CMAP-EC-EXP

Step 2 Matching MPLS experimental 1 OR 2 from traffic topmost MPLS header.

match mpls experimental topmost 1 2
end-class-map
!

Step 3 Class map for Enterprise Telepresence traffic.

class-map match-any CMAP-ENT-Tele-EXP

Step 4 Matching MPLS experimental 3 from traffic topmost MPLS header.

match mpls experimental topmost 3
end-class-map
!

Step 5 Class-map for video traffic.

class-map match-any CMAP-Video-EXP

Step 6 Matching MPLS experimental 4 from traffic topmost MPLS header.

match mpls experimental topmost 4
end-class-map
!

Step 7 Class-map for real-time traffic.

class-map match-any CMAP-RT-EXP

Step 8 Match MPLS experimental 5 from traffic topmost MPLS header.

match mpls experimental topmost 5

end-class-map!

Step 9 Class-map for control traffic.

class-map match-any CMAP-CTRL-EXP

Step 10 Match MPLS experimental 6 from traffic topmost MPLS header.

match mpls experimental topmost 6
end-class-map
!

Step 11 Class-map for Network Management traffic.

class-map match-any CMAP-NMgmt-EXP

Step 12 Match MPLS experimental 7 from traffic topmost MPLS header.

match mpls experimental topmost 7
end-class-map
!

Step 13 Policy-map configuration for the 10-Gigabit link.


policy-map PMAP-NNI-E

Step 14 Match the RT class.

class CMAP-RT-EXP

Step 15 Define top priority 1 for the class for low-latency queuing.

priority level 1

Step 16 Police the priority class.

police rate 1 gbps
!
!
class CMAP-CTRL-EXP

Step 17 Assign the desired bandwidth to the class.

bandwidth 200 mbps
!
class CMAP-NMgmt-EXP
 bandwidth 500 mbps
!
class CMAP-Video-EXP
 bandwidth 2 gbps
!
class CMAP-EC-EXP
 bandwidth 1 gbps
!

Step 18 Use WRED for Enterprise critical class for both Exp 1 and 2 for congestion avoidance. Experimental 1 will be dropped early.

random-detect exp 2 80 ms 100 ms
random-detect exp 1 40 ms 50 ms
!
class CMAP-ENT-Tele-EXP
 bandwidth 2 gbps
!
class class-default
!
end-policy-map
!

Step 19 Core interface on P or PE.

interface TenGigE0/0/0/0

Step 20 Egress service policy on the interface.

service-policy output PMAP-NNI-E

Large Scale Network Design and Implementation

When an MPLS network comprises more than 1000 devices, implementing a hierarchical network design is recommended. In this guide, the hierarchical network design uses labeled BGP, as defined in RFC 3107. Figure 3-2 shows a network with hierarchy.


Figure 3-2 Large Network, Ethernet/SDH/nV Branch Connectivity

• The core and aggregation networks add hierarchy with 3107 ABR at border of core and aggregation.

• The core and aggregation networks are organized as independent IGP/LDP domains.

• The network domains are interconnected with hierarchical LSPs based on RFC 3107, BGP IPv4+labels. Intra-domain connectivity is based on LDP LSPs.

• Topologies between the PE Node and branch router can be Ethernet hub-and-spoke, IP, Ethernet ring, or nV.

Using Core Network Hierarchy to Improve Scale

The main challenges of large network implementation result from network size: the routing and forwarding tables in individual P and PE devices grow with the large number of network nodes, and running all nodes in one IGP/LDP domain becomes difficult. In an MPLS environment, unlike in an all-IP environment, every service node needs a /32 network address as a node identifier. /32 addresses, however, cannot be summarized, so link state databases grow linearly as devices are added to the MPLS network.

The labeled BGP mechanism, defined in RFC 3107, can be used so that link state databases in core network devices do not have to learn the /32 addresses of all MPLS routers in the access and aggregation domains. The mechanism effectively moves prefixes from the IGP link state database into the BGP table. Labeled BGP, implemented in the MPLS transport network, introduces hierarchy in the network to provide better scalability and convergence. Labeled BGP ensures all devices receive only the information they need to provide end-to-end transport.

Large-scale MPLS transport networks used to carry virtual network traffic can be divided into two IGP domains: the core network is configured as IS-IS Level 2 (or the OSPF backbone area), and the aggregation network is configured as IS-IS Level 1 (or an OSPF non-backbone area). Another option is to run different IGP processes in the core and aggregation networks. No redistribution occurs between the core and aggregation IGP levels/areas/processes, which reduces the size of the routing and forwarding tables of the routers in each domain and provides better scalability and faster convergence. Running an IGP in each area enables intra-area reachability, and LDP is used to build intra-area LSPs.

Because route information is not redistributed between the different IGP levels/areas, PE devices need a mechanism to reach PE loopbacks in other areas/levels and send VPN traffic. Labeled BGP enables inter-area reachability and establishes end-to-end LSPs between PEs. Devices that are connected to both aggregation and core domains are called Area Border Routers (ABRs). ABRs run labeled Interior BGP (IBGP) sessions with PEs in their local aggregation domain and serve as route reflectors for those PEs. PEs advertise their loopback addresses (used for VPNv4 peering) and their corresponding labels to the local route reflector ABRs using labeled IBGP. ABRs run labeled IBGP sessions with an RR device in the core domain, which reflects PE loopback addresses and labels learned from one ABR client to the other ABR clients without changing the next-hop or other attributes.


ABRs learn PE loopback addresses and labels from other aggregation domains and advertise them to PEs in their local aggregation domain. ABRs use next-hop-self when advertising routes to PEs in the local aggregation domain and to RRs in the core domain.

As a result, PEs learn remote PE loopback addresses and labels with the local ABR as the BGP next-hop, and ABRs learn remote PE loopback addresses with the remote ABR as the BGP next-hop. PEs use two transport labels when sending labeled VPN traffic into the MPLS cloud: one label for the remote PE and another for its BGP next-hop (the local ABR). The top label, for the BGP next-hop local ABR, is learned from the local IGP/LDP. The label below it, for the remote PE, is learned through labeled IBGP with the local ABR. Intermediate devices across the different domains perform label swapping based on the top label of received MPLS packets. This achieves an end-to-end hierarchical LSP without running the entire network in a single IGP/LDP domain. Devices learn only the necessary information, such as prefixes in their local domains and remote PE loopback addresses, which makes labeled BGP scalable for large networks.
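To illustrate the mechanism on the ABR side, a labeled IBGP session toward the local PEs might look like the following sketch. This is illustrative only: the neighbor-group name and the allocate-label policy are assumptions, and the validated ABR configuration is covered in ABR Transport Configuration below.

router bgp 101
 address-family ipv4 unicast
  ! advertise IPv4 prefixes with labels (RFC 3107)
  allocate-label all
 !
 neighbor-group agg-pe
  remote-as 101
  update-source Loopback0
  address-family ipv4 labeled-unicast
   ! the ABR acts as RR for its local PEs and sets itself as next hop
   route-reflector-client
   next-hop-self
  !
 !
!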

Figure 3-3 Large Network Control and Data Plane

• Aggregation domains run ISIS level-1/OSPF non-backbone area and core domain runs ISIS level-2/backbone area.

• ABR connects to both aggregation and core domains.

• ABR runs Labeled iBGP with PEs in local aggregation domain and core RR in core domain.

• ABR uses next-hop-self while advertising routes to PEs and core RR.

Large Scale Hierarchical Core and Aggregation Networks with Hierarchy

To implement ABR, PE, and core RR transport configuration for large-scale MPLS VPNs, PE routers are configured in IS-IS Level 1 (OSPF non-backbone area). ABR aggregation-facing interfaces are configured in IS-IS Level 1 (OSPF non-backbone area) and core-facing interfaces in IS-IS Level 2 (OSPF backbone area). Core RR interfaces remain in IS-IS Level 2 (or the OSPF backbone area). PEs and the local ABR are configured with a labeled IBGP session, with the ABR acting as RR. The core RR is configured with labeled BGP peering to all ABRs. LDP is configured in a similar way to the smaller network. The ABR is configured with next-hop-self for both PE and core labeled BGP peers to achieve hierarchical LSPs. BFD is used on all interfaces as a fast failure detection mechanism. BGP PIC is configured for fast convergence of IPv4 prefixes learned through labeled IBGP. rLFA FRR is configured under IS-IS to provide fast convergence of IGP-learned prefixes.


ABR's loopbacks are required in both aggregation and core domains since their loopbacks are used for labeled BGP peering with PEs in local aggregation domain as well as RR in the core domain. To achieve this, ABR loopbacks are kept in the IS-IS Level-1-2 or OSPF backbone area.

PE Transport Configuration

Step 1 Enter router IS-IS configuration for PE.

router isis agg-acc

Step 2 Define NET address.

net 49.0100.1001.1100.7008.00

Step 3 Define is-type as level 1 for the PE in aggregation domain.

is-type level-1

Step 4 Enter IPv4 address-family for IS-IS.

address-family ipv4 unicast

Step 5 Metric-style Wide generates new-style TLV with wider metric fields for IPv4.

metric-style wide!

Step 6 Configure IS-IS for Loopback interface.

interface Loopback0

Step 7 Make loopback passive to avoid sending unnecessary hellos on it.

passive
point-to-point

Step 8 Enter IPv4 Address-family for Loopback.

address-family ipv4 unicast !

Step 9 Configure IS-IS for TenGigE0/2/0/0 interface.

interface TenGigE0/2/0/0

Step 10 Configure minimum interval between sending BFD hello packets to the neighbor.

bfd minimum-interval 15

Step 11 Configure BFD multiplier.

bfd multiplier 3

Step 12 Enable BFD to detect failures in the path between adjacent forwarding engines.

bfd fast-detect ipv4

Step 13 Configure point-to-point IS-IS interface.

point-to-point

Step 14 Enter the IPv4 Address-family for TenGig interface.

address-family ipv4 unicast

Step 15 Enable per prefix FRR for Level 2 prefixes.


fast-reroute per-prefix level 2

Step 16 Configure an FRR path that redirects traffic to a remote LFA tunnel.

fast-reroute per-prefix remote-lfa tunnel mpls-ldp

Step 17 Configure IS-IS metric for Interface.

metric 10

Step 18 Enable MPLS LDP sync to ensure LDP comes up on the link before the link is used for forwarding, to avoid packet loss.

mpls ldp sync !!

Step 19 Enter router BGP configuration mode.

router bgp 101!

Step 20 Enter IPv4 address-family.

address-family ipv4 unicast

Step 21 Configure receive capability of multiple paths for a prefix to the capable peers.

additional-paths receive

Step 22 Configure send capability of multiple paths for a prefix to the capable peers.

additional-paths send

Step 23 Enable BGP PIC functionality with appropriate route-policy to calculate back up paths.

additional-paths selection route-policy add-path-to-ibgp!

Step 24 Configure session-group to define parameters that are address-family independent.

session-group intra-as

Step 25 Specify remote-as as AS number of RR.

remote-as 101

Step 26 Specify Update-source as Loopback0 for BGP communication.

update-source Loopback0!

Step 27 Enter neighbor-group configuration mode.

neighbor-group ABR

Step 28 Import Session-group AF-independent parameters.

use session-group intra-as

Step 29 Enable Labeled BGP address-family for neighbor group.

address-family ipv4 labeled-unicast !

Step 30 Configure ABR loopback as neighbor.

neighbor 100.111.3.1

Step 31 Inherit neighbor-group ABR parameters.

use neighbor-group ABR!


!

Step 32 Configure route-policy used in BGP PIC.

route-policy add-path-to-ibgp

Step 33 Configure to install 1 backup path.

set path-selection backup 1 install
end-policy

Step 34 Enter MPLS LDP configuration mode.

mpls ldp
 log
  neighbor
  graceful-restart

Step 35 Configure router-id for LDP.

router-id 100.111.7.8

Step 36 Enable LDP on TenGig0/2/0/0.

interface TenGigE0/2/0/0
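The fragments above fit together as follows; this is a consolidated sketch of the PE transport configuration assembled from Steps 1 through 36, using the NET, AS number, neighbor address, and interface of this example:

router isis agg-acc
 net 49.0100.1001.1100.7008.00
 is-type level-1
 address-family ipv4 unicast
  metric-style wide
 !
 interface Loopback0
  passive
  address-family ipv4 unicast
 !
 interface TenGigE0/2/0/0
  bfd minimum-interval 15
  bfd multiplier 3
  bfd fast-detect ipv4
  point-to-point
  address-family ipv4 unicast
   fast-reroute per-prefix level 2
   fast-reroute per-prefix remote-lfa tunnel mpls-ldp
   metric 10
   mpls ldp sync
!
router bgp 101
 address-family ipv4 unicast
  additional-paths receive
  additional-paths send
  additional-paths selection route-policy add-path-to-ibgp
 !
 session-group intra-as
  remote-as 101
  update-source Loopback0
 !
 neighbor-group ABR
  use session-group intra-as
  address-family ipv4 labeled-unicast
 !
 neighbor 100.111.3.1
  use neighbor-group ABR
!
route-policy add-path-to-ibgp
  set path-selection backup 1 install
end-policy
!
mpls ldp
 log
  neighbor
  graceful-restart
 !
 router-id 100.111.7.8
 interface TenGigE0/2/0/0
!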

ABR Transport Configuration

Step 1 Enter router IS-IS configuration for the ABR.

router isis agg-acc

Step 2 Define NET address.

net 49.0100.1001.1100.3001.00

Step 3 Enter IPv4 address-family for IS-IS.

address-family ipv4 unicast

Step 4 Metric-style Wide generates new-style TLV with wider metric fields for IPv4.

metric-style wide!

Step 5 Configure IS-IS for Loopback interface.

interface Loopback0

Step 6 Make loopback passive to avoid sending unnecessary hellos on it.

passive point-to-point

Step 7 Enter IPv4 address-family for Loopback.

address-family ipv4 unicast !

Step 8 Configure IS-IS for TenGigE0/2/0/0 interface.

interface TenGigE0/2/0/0

Step 9 Configure aggregation-facing interface as IS-IS level-1 interface.


circuit-type level-1

Step 10 Configure minimum interval between sending BFD hello packets to the neighbor.

bfd minimum-interval 15

Step 11 Configure BFD multiplier

bfd multiplier 3

Step 12 Enable BFD to detect failures in the path between adjacent forwarding engines.

bfd fast-detect ipv4

Step 13 Configure point-to-point IS-IS interface.

point-to-point
address-family ipv4 unicast

Step 14 Enable per prefix FRR for Level 2 prefixes.

fast-reroute per-prefix level 2

Step 15 Configure an FRR path that redirects traffic to a remote LFA tunnel.

fast-reroute per-prefix remote-lfa tunnel mpls-ldp

Step 16 Configure IS-IS metric for Interface.

metric 10

Step 17 Enable MPLS LDP sync to ensure LDP comes up on link before link is used for forwarding to avoid packet loss.

mpls ldp sync !!

Step 18 Configure IS-IS for TenGigE0/2/0/1 interface.

interface TenGigE0/2/0/1

Step 19 Configure core-facing interface as IS-IS level-2 interface.

circuit-type level-2-only

Step 20 Configure minimum interval between sending BFD hello packets to the neighbor.

bfd minimum-interval 15

Step 21 Configure BFD multiplier.

bfd multiplier 3

Step 22 Enable BFD to detect failures in the path between adjacent forwarding engines.

bfd fast-detect ipv4

Step 23 Configure point-to-point IS-IS interface.

point-to-point
address-family ipv4 unicast

Step 24 Enable per prefix FRR for Level 2 prefixes.

fast-reroute per-prefix level 2

Step 25 Configure an FRR path that redirects traffic to a remote LFA tunnel.


fast-reroute per-prefix remote-lfa tunnel mpls-ldp

Step 26 Configure IS-IS metric for Interface.

metric 10

Step 27 Enable mpls LDP sync to ensure LDP comes up on link before link is used for forwarding to avoid packet loss.

mpls ldp sync !!

Step 28 Enter Router BGP configuration mode.

router bgp 101!

Step 29 Enter IPv4 address-family.

address-family ipv4 unicast

Step 30 Configure receive capability of multiple paths for a prefix to the capable peers.

additional-paths receive

Step 31 Configure send capability of multiple paths for a prefix to the capable peers.

additional-paths send

Step 32 Enable BGP PIC functionality with appropriate route-policy to calculate back up paths.

additional-paths selection route-policy add-path-to-ibgp!

Step 33 Configure session-group to define parameters that are address-family independent.

session-group intra-as

Step 34 Specify remote-as as AS number of RR.

remote-as 101

Step 35 Specify update-source as Loopback0 for BGP communication.

update-source Loopback0!

Step 36 Enter neighbor-group PE configuration mode.

neighbor-group PE

Step 37 Import session-group AF-independent parameters.

use session-group intra-as

Step 38 Enable labeled BGP address-family for neighbor group.

address-family ipv4 labeled-unicast

Step 39 Configure peer-group for PE as RR client.

route-reflector-client

Step 40 Set next-hop-self for advertised prefixes to PE.

next-hop-self !

Step 41 Enter neighbor-group core configuration mode.

neighbor-group CORE


Step 42 Import session-group AF-independent parameters.

use session-group intra-as

Step 43 Enable Labeled BGP address-family for neighbor-group.

address-family ipv4 labeled-unicast

Step 44 Set next-hop-self for advertised prefixes to CORE RR.

next-hop-self !

Step 45 Configure PE loopback as neighbor.

neighbor 100.111.7.8

Step 46 Inherit neighbor-group PE parameters.

use neighbor-group PE!

Step 47 Configure core RR loopback as neighbor.

neighbor 100.111.11.3

Step 48 Inherit neighbor-group core parameters.

use neighbor-group CORE! !

Step 49 Configure route-policy used in BGP PIC.

route-policy add-path-to-ibgp

Step 50 Configure to install 1 backup path

set path-selection backup 1 install
end-policy

Step 51 Enter MPLS LDP configuration mode.

mpls ldp
 log
  neighbor
  graceful-restart

Step 52 Configure router-id for LDP.

router-id 100.111.3.1

Step 53 Enable LDP on TenGigE0/2/0/0.

interface TenGigE0/2/0/0
!

Step 54 Enable LDP on TenGigE0/2/0/1.

interface TenGigE0/2/0/1!!
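Relative to the PE, the ABR's distinguishing pieces are the per-level circuit types and the BGP role as an inline route reflector that sets next-hop-self toward both its PE clients and the core RR. A consolidated sketch of those portions, assembled from the steps above:

router isis agg-acc
 interface TenGigE0/2/0/0
  circuit-type level-1
 !
 interface TenGigE0/2/0/1
  circuit-type level-2-only
!
router bgp 101
 session-group intra-as
  remote-as 101
  update-source Loopback0
 !
 neighbor-group PE
  use session-group intra-as
  address-family ipv4 labeled-unicast
   route-reflector-client
   next-hop-self
 !
 neighbor-group CORE
  use session-group intra-as
  address-family ipv4 labeled-unicast
   next-hop-self
 !
 neighbor 100.111.7.8
  use neighbor-group PE
 !
 neighbor 100.111.11.3
  use neighbor-group CORE
!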


CORE RR Transport Configuration

Step 1 Enter router IS-IS configuration for the core RR.

router isis agg-acc

Step 2 Define NET address.

net 49.0100.1001.1100.1103.00

Step 3 Enter IPv4 address-family for IS-IS.

address-family ipv4 unicast

Step 4 Metric-style Wide generates new-style TLV with wider metric fields for IPv4.

metric-style wide!

Step 5 Configure IS-IS for loopback interface.

interface Loopback0

Step 6 Make loopback passive to avoid sending unnecessary hellos on it.

passive point-to-point

Step 7 Enter IPv4 address-family for Loopback.

address-family ipv4 unicast !

Step 8 Configure IS-IS for TenGigE0/2/0/0 interface.

interface TenGigE0/2/0/0

Step 9 Configure core interface as IS-IS level-2 interface.

circuit-type level-2-only

Step 10 Configure minimum interval between sending BFD hello packets to the neighbor.

bfd minimum-interval 15

Step 11 Configure BFD multiplier.

bfd multiplier 3

Step 12 Enable BFD to detect failures in the path between adjacent forwarding engines.

bfd fast-detect ipv4

Step 13 Configure point-to-point IS-IS interface.

point-to-point
address-family ipv4 unicast

Step 14 Enable per-prefix FRR for Level 2 prefixes.

fast-reroute per-prefix level 2

Step 15 Configure an FRR path that redirects traffic to a remote LFA tunnel.

fast-reroute per-prefix remote-lfa tunnel mpls-ldp

Step 16 Configure IS-IS metric for interface.


metric 10

Step 17 Enable MPLS LDP sync to ensure LDP comes up on link before link is used for forwarding to avoid packet loss.

mpls ldp sync !!

Step 18 Enter router BGP configuration mode.

router bgp 101!

Step 19 Enter IPv4 address-family.

address-family ipv4 unicast

Step 20 Configure receive capability of multiple paths for a prefix to the capable peers.

additional-paths receive

Step 21 Configure send capability of multiple paths for a prefix to the capable peers.

additional-paths send

Step 22 Enable BGP PIC functionality with appropriate route-policy to calculate back-up paths.

additional-paths selection route-policy add-path-to-ibgp!

Step 23 Configure session-group to define parameters that are address-family independent.

session-group intra-as

Step 24 Specify remote-as as AS number of RR.

remote-as 101

Step 25 Specify update-source as Loopback0 for BGP communication.

update-source Loopback0!!

Step 26 Enter neighbor-group ABR configuration mode.

neighbor-group ABR

Step 27 Import session-group AF-independent parameters.

use session-group intra-as

Step 28 Enable labeled BGP address-family for neighbor group.

address-family ipv4 labeled-unicast

Step 29 Configure peer-group for ABR as RR client.

route-reflector-client !

Step 30 Configure ABR loopback as neighbor.

neighbor 100.111.11.3

Step 31 Inherit neighbor-group ABR parameters.

use neighbor-group ABR!!

Step 32 Enter MPLS LDP configuration mode.


mpls ldp
 log
  neighbor
  graceful-restart

Step 33 Configure router-id for LDP.

router-id 100.111.2.1

Step 34 Enable LDP on TenGigE0/2/0/0.

interface TenGigE0/2/0/0!
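Assembled from the steps above, the BGP portion of the core RR reduces to a labeled-unicast route reflector for the ABRs (the neighbor address is the one shown in the steps above):

router bgp 101
 address-family ipv4 unicast
  additional-paths receive
  additional-paths send
  additional-paths selection route-policy add-path-to-ibgp
 !
 session-group intra-as
  remote-as 101
  update-source Loopback0
 !
 neighbor-group ABR
  use session-group intra-as
  address-family ipv4 labeled-unicast
   route-reflector-client
 !
 neighbor 100.111.11.3
  use neighbor-group ABR
!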

This section described how to implement a hierarchical transport network using labeled BGP as a scalable solution for large networks, with fast failure detection and fast convergence mechanisms. This solution avoids unnecessary resource usage, simplifies network implementation, and achieves faster convergence for large networks.

Virtual network implementation on PE including VRF creation, MP BGP, BGP PIC, rLFA, VPNv4 RR, Transport QoS, and P configuration will remain the same in concept and configuration as described in Small Network Design and Implementation, page 3-1.


C H A P T E R 4

PE-to-CE Design Options

While the domain creating the MPLS L3 service, consisting of P and PE routers, remains the same regardless of access technology, the technologies and designs used to connect the PE to the CE device vary considerably based on technology preference, installed base, and operational expertise.

Common characteristics, however, exist for each of the options. Each design needs to consider the following:

• The topology implemented, either hub-and-spoke or rings

• How redundancy is configured

• The type of QoS implementation

Network availability is critical for enterprises because network outages often lead to loss of revenue. In order to improve network reliability, branch/Campus routers and data centers are multihomed on PE devices using one of the various access topologies to achieve PE node redundancy. Each topology should, however, be reliable and resilient to provide seamless connectivity. This is achieved as described in this chapter, which includes the following major topics:

• Inter-Chassis Communication Protocol, page 4-1

• Ethernet Access, page 4-2

• nV (Network Virtualization) Access, page 4-16

• Native IP-Connected Access, page 4-25

• MPLS Access using Pseudowire Headend, page 4-28

Inter-Chassis Communication Protocol

PE nodes connecting to a dual-homed CE work in an active/standby model, with the active PE forwarding traffic and the standby PE monitoring the active PE's status so it can take over forwarding if the active PE fails. The nodes require a mechanism to communicate local connectivity failure to the CE and to detect a peer node failure condition so that traffic can be moved to the standby PE. Inter-Chassis Communication Protocol (ICCP) provides the control channel to communicate this information.

ICCP allows active and standby PEs, connecting to dual-homed CPE, to exchange information regarding local link failure to CPE and detect peer node failure or its Core Isolation. This critical information helps to move forwarding from active to standby PE within milliseconds. PEs can be co-located or geo-redundant. ICCP communication between PEs occurs either using dedicated link between PEs or using the core network. ICCP configuration includes configuring redundancy group (RG) on both PEs with each other's address for ICCP communication. Using this information, PEs set up ICCP control


connection and different applications like Multichassis Link Aggregation Group (MC-LAG) and Network Virtualization (nV) described in the next sections use this control connection to share state information. ICCP is configured as described below.

ICCP Configuration

Step 1 Add an ICCP redundancy group with the mentioned group-id.

redundancy

iccp group group-id

Step 2 This is the ICCP peer for this redundancy group. Only one neighbor can be configured per redundancy group. The IP address is the LDP router-ID of the neighbor. This configuration is required for ICCP to function.

member neighbor neighbor-ip-address !

Step 3 Configure ICCP backbone interfaces to detect isolation from the network core and trigger switchover to the peer PE if core isolation occurs on the active PE. Multiple backbone interfaces can be configured for each redundancy group. When none of the backbone interfaces is up, this is an indication of core isolation.

backbone
 interface interface-type-id
!
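As an illustration, using the values that appear in the Ethernet access examples later in this chapter (ICCP group 222, peer PE 100.111.11.2, and two core-facing backbone interfaces), the ICCP configuration on one PE would look like this sketch:

redundancy
 iccp group 222
  member neighbor 100.111.11.2
  !
  backbone
   interface TenGigE0/0/0/0
   interface TenGigE0/0/0/2
  !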

ICCP thus provides the control channel between PEs for communicating state information, giving a resilient access infrastructure that can be used by different topologies. The next sections discuss the various access topologies that can be implemented between branch, campus, or data center devices and the Enterprise L3VPN network. Each topology provides redundancy and fast failure detection and convergence mechanisms for seamless last-mile connectivity.

Ethernet Access

Ethernet access can be implemented in hub-and-spoke or ring topologies, as described below.

Hub-and-Spoke Using MC-LAG Active/Standby

In hub-and-spoke access topology, the CE device is dual homed to PE devices in the MPLS VPN network. The MC-LAG feature provides an end-to-end interchassis redundancy solution for Enterprise. MC-LAG involves PE devices collaborating through ICCP connection to act as a single Link Aggregation Group (LAG) from the perspective of CE device, thus providing device-level and link-level redundancy. To achieve this, PE devices use ICCP connection to coordinate with each other to present a single LACP bundle (spanning the two devices) to the CE device. Only one of the PE devices forwards traffic at any one time, eliminating the risk of forwarding loops. L3VPN service is configured on this bundle interface or subinterface on PE. PE devices coordinate through the ICCP connection to perform a switchover while presenting an unchanged bundle interface to the CE for the following failure events:


• Link failure—A port or link between the CE and one of the PEs fails.

• Device failure—Meltdown or reload of one of the PEs, with total loss of connectivity to the CE, the core and the other PE.

• Core isolation—A PE loses its connectivity to the core network and therefore is of no value, being unable to forward traffic to or from the CE.

Figure 4-1 Hub-and-Spoke Access with MLACP

A loss of connectivity between the PEs may lead both devices to assume that the other has experienced device failure; this causes them to attempt to take on the active role, which causes a loop. CE can mitigate this situation by limiting the number of links so that only links connected to one PE are active at a time. Hub-and-spoke access configuration is described in Table 4-1.


Table 4-1 Hub-and-Spoke Access Configuration

PE1: redundancy
     iccp group 222
PE2: redundancy
     iccp group 222
Explanation: Adds redundancy configuration mode for ICCP group 222.

PE1: mlacp node 1
PE2: mlacp node 2
Explanation: Sets the mLACP node ID used in this ICCP group. Should be unique for each PE.

PE1: mlacp system mac 0000.000e.1100
PE2: mlacp system mac 0000.000e.1100
Explanation: Configures the LACP system ID to be used in this ICCP group. Should be the same on both PEs.

PE1: mlacp system priority 1
PE2: mlacp system priority 1
Explanation: Sets the LACP system priority to be used in this ICCP group. It is recommended to configure higher priority (lower value) on the PEs.

PE1: member neighbor 100.111.11.2
PE2: member neighbor 100.111.11.1
Explanation: Configures the neighbor PE for the redundancy group.

PE1 and PE2: backbone
              interface TenGigE0/0/0/0
              interface TenGigE0/0/0/2
Explanation: Configures ICCP backbone interfaces. When all backbone interfaces are down, this is an indication of core isolation. When one or more backbone interfaces are up, the POA is not isolated from the network core.

PE1 and PE2: interface Bundle-Ether222
             !
             interface GigE0/1/0/0
              bundle id 222 mode active
             !
Explanation: Configures the bundle interface.


Table 4-2 describes CE configuration.

MC-LAG provides interchassis redundancy based on the active/standby PE model. In order to achieve the active/active PE model for both load balancing and redundancy, we can use VRRP as described below.

Hub-and-Spoke with VRRP IPv4 and IPv6 Active/Active

In hub-and-spoke access topology, the CE device is dual homed to PE devices in the MPLS VPN network. VRRP is used to provide VLAN-based redundancy and load balancing between PEs by configuring VRRP groups for multiple data VLANs on PEs. Each PE acts as a VRRP master for a set of VLANs. CE uses VRRP address as the default gateway. Half of the VLAN's traffic uses one VRRP master PE and the other half uses the other VRRP master PE. If any link or node fails on a PE, all traffic is switched to the other PE and it takes over the role of VRRP master for all the VLANs. This way both load balancing and redundancy between PEs is achieved using VRRP. BFD can be used to fast detect the VRRP peer failure. In order to detect core isolation, VRRP can be configured with backbone interface tracking so that if the backbone interface goes down, PE will decrease its VRRP priority and the peer PE will take master ownership for all the VLANs and switchover the traffic.

The branch/campus CE router is configured so that each of its uplinks to the PEs forwards all local VLANs. The data-path forwarding scheme causes the CE to automatically learn which PE or interface is active for a given VLAN. This learning occurs at an individual destination MAC address level.


Table 4-2 CE Configuration

CE Configuration:
interface gig 0/10
 channel-group 1 mode active
!
interface gig 0/11
 channel-group 1 mode active
Explanation: Configures the CE interfaces towards the PEs in a port-channel.

CE Configuration:
interface port-channel 1
 lacp max-bundle 1
!
Explanation: Defines the maximum number of active bundled LACP ports allowed in the port channel. In our case, both PEs have one link each to the CPE and only one link remains active.


Hub-and-spoke with VRRP configuration includes configuring bundle interface on both PE devices on the links connecting to the CE. In this case, although bundle interfaces are used, in contrast to MC-LAG, they are not aggregated across the two PEs. On PE ASR9000s, bundle subinterfaces are configured to match data VLANs, and VRF are configured on them for L3VPN service. VRRP is configured on these L3 interfaces. For achieving ECMP, one PE is configured with a higher priority for one VLAN VRRP group and the other PE for another VLAN VRRP group. VRRP hello timers can be changed and set to a minimum available value of 100msec. BFD is configured for VRRP for fast failover and recovery. For core isolation tracking, VRRP is configured with backbone interface tracking for each group so that if all backbone interfaces go down, the overall VRRP priority will be lowered below peer PE VRRP priority and the peer PE can take the master ownership.

Figure 4-2 Hub-and-Spoke Access with VRRP

PE Configuration

Step 1 Enter VRRP Configuration Mode.

router vrrp

Step 2 Enter bundle subinterface VRRP Configuration mode.

interface Bundle-Ether1.12

Step 3 Enter VRRP IPv4 address family for bundle subinterface.

address-family ipv4

Step 4 Configure VRRP group 112.

vrrp 112

Step 5 Set the priority for VRRP group 112 to 254 so that this PE becomes VRRP active for the group, and allow preemption to be delayed for a configurable period so the router can populate its routing table before becoming the active router.

priority 254
preempt delay 15

Step 6 Configure VRRP address for the VRRP group.


address 112.1.1.1

Step 7 Configure millisecond timers for advertisement with force keyword to force the timers.

timer msec 100 force

Step 8 BFD enabled between PEs to detect fast failures.

bfd fast-detect peer ipv4 112.1.1.3

Step 9 Enable backbone tracking so that if one interface goes down, VRRP priority will be lowered by 100 and if two interfaces go down, (core isolation) priority will be lowered by 200; that will be lower than peer default priority and switchover will take place.

track interface TenGigE0/0/0/0 100
track interface TenGigE0/0/0/2 100
!

Step 10 Enter VRRP IPv6 address family for bundle subinterface.

address-family ipv6

Step 11 Configure VRRP group 112.

vrrp 112

Step 12 Make high priority for VRRP group 112 to 254 so that PE becomes VRRP active for this group.

priority 254

Step 13 Allow preemption to be delayed for a configurable time period, allowing the router to populate its routing table before becoming the active router.

preempt delay 15

Step 14 Configure VRRP address for the VRRP group.

address global 2001:112:1:1::1

Step 15 Configure millisecond timers for advertisement with force keyword to force the timers.

timer msec 100 force
address linklocal autoconfig

Step 16 Enter Bundle subinterface VRRP Configuration Mode.

interface Bundle-Ether1.13

Step 17 Enter VRRP IPv4 address family for bundle subinterface.

address-family ipv4

Step 18 Configure VRRP group 113. Default priority for VRRP group 113 so that other PE with 254 priority becomes VRRP active for this group.

vrrp 113

Step 19 Configure VRRP address for the VRRP group.

address 113.1.1.1

Step 20 Configure millisecond timers for advertisement with force keyword to force the timers.

timer msec 100 force

Step 21 BFD enabled between PEs to detect fast failures.


bfd fast-detect peer ipv4 113.1.1.3 ! !

Step 22 Enter VRRP IPv6 address family for bundle subinterface.

address-family ipv6

Step 23 Configure VRRP group 113. Default priority for VRRP group 113 so that other PE becomes VRRP active for this group.

vrrp 113

Step 24 Configure VRRP address for the VRRP group.

address global 2001:113:1:1::1

Step 25 Configure the IPv6 link-local address using autoconfiguration.

address linklocal autoconfig

Step 26 Configure millisecond timers for advertisement with force keyword to force the timers.

timer msec 100 force

Step 27 Configure physical interface with Bundle.

interface GigabitEthernet0/3/1/12
 bundle id 1 mode on
!
interface Bundle-Ether1.12

Step 28 Configure VRF under interface for L3VPN service.

vrf BUS-VPN2
ipv4 address 112.1.1.2 255.255.255.0
ipv6 address 2001:112:1:1::2/64
encapsulation dot1q 112
!
interface Bundle-Ether1.13

Step 29 Configure VRF under interface for L3VPN service.

vrf BUS-VPN2
ipv4 address 113.1.1.2 255.255.255.0
ipv6 address 2001:113:1:1::2/64
encapsulation dot1q 113
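Pulling the fragments together for one data VLAN, the PE1 side of VLAN 112 from the steps above looks like the following consolidated sketch; PE2 is configured the same way except that it keeps the default VRRP priority for group 112 and uses priority 254 for group 113. The second tracked interface is assumed to be the other core-facing link, TenGigE0/0/0/2, consistent with the ICCP backbone interfaces used in this chapter.

interface GigabitEthernet0/3/1/12
 bundle id 1 mode on
!
interface Bundle-Ether1.12
 vrf BUS-VPN2
 ipv4 address 112.1.1.2 255.255.255.0
 ipv6 address 2001:112:1:1::2/64
 encapsulation dot1q 112
!
router vrrp
 interface Bundle-Ether1.12
  address-family ipv4
   vrrp 112
    priority 254
    preempt delay 15
    address 112.1.1.1
    timer msec 100 force
    bfd fast-detect peer ipv4 112.1.1.3
    track interface TenGigE0/0/0/0 100
    track interface TenGigE0/0/0/2 100
   !
  !
  address-family ipv6
   vrrp 112
    priority 254
    preempt delay 15
    address global 2001:112:1:1::1
    address linklocal autoconfig
    timer msec 100 force
!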

Access switch is configured with data VLANs allowed on PE and CE-connecting interfaces. Spanning tree is disabled as Pseudo MLACP takes care of the loop prevention.

Access Switch Configuration

Step 1 Disable spanning tree for data VLANs used in Pseudo MLACP.

no spanning-tree vlan 112-113

Step 2 Trunk connecting to CE and PE has the same configuration allowing the data VLANs on trunks.

interface GigabitEthernet0/1
 switchport trunk allowed vlan 100-103,112,113


 switchport mode trunk
!
interface GigabitEthernet0/13
 switchport trunk allowed vlan 100-103,112,113
 switchport mode trunk
!
interface GigabitEthernet0/14
 switchport trunk allowed vlan 100-103,112,113
 switchport mode trunk

CPE Configuration

Step 1 SVI configuration.

interface Vlan112
 ip address 112.1.1.251 255.255.255.0
 ipv6 address 2001:112:1:1::251/64
!

Step 2 SVI configuration.

interface Vlan113
 ip address 113.1.1.251 255.255.255.0
 ipv6 address 2001:113:1:1::251/64
!

Step 3 IPv4 and IPv6 static routes configured with next hop as VRRP address. One PE is master for one VRRP address and the other PE is master for the other VRRP address.

ip route 112.2.1.0 255.255.255.0 112.1.1.1
ip route 113.2.1.0 255.255.255.0 113.1.1.1
ipv6 route 2001:112:2:1::/64 2001:112:1:1::1
ipv6 route 2001:113:2:1::/64 2001:113:1:1::1

G.8032 Ring Access with VRRP IPv4 and IPv6

In this access topology, PEs are connected to a G.8032 Ethernet ring formed by connecting Ethernet access nodes to each other in a ring form. The G.8032 Ethernet ring protection switching protocol elects a specific link to protect the entire ring from loops. Such a link, which is called the Ring Protection Link (RPL), is typically maintained in disabled state by the protocol to prevent loops. The device connecting to the RPL link is called the RPL owner responsible for blocking RPL link. Upon a node or a link failure in the ring, the RPL link is activated allowing forwarding to resume over the ring. G.8032 uses Ring Automatic Protection Switching (R-APS) messages to coordinate the activities of switching the RPL on and off using a specified VLAN for the APS channel.

The G.8032 protocol also allows superimposing multiple logical rings over the same physical topology by using different instances. Each instance contains an inclusion list of VLAN IDs and defines different RPL links. In this guide, we are using two G.8032 instances with odd-numbered and even-numbered VLANs. The ASR 9000 PEs also participate in the ring and act as RPL owners: one PE is the RPL owner for the instance carrying even-numbered VLANs and the other PE is the RPL owner for the instance carrying odd-numbered VLANs, so each PE keeps one instance's RPL in the blocking state. Load balancing and redundancy are thus achieved by using two RPLs, one per instance.


In the G.8032 configuration, PE devices, which are configured as RPL owner nodes for one of the two instances, are specified with the interface connected to the ring. Two instances are configured for odd and even VLANs. PEs are configured as RPL owner for one of the instances each to achieve load balancing and redundancy. Both instances are configured with dot1q subinterface for the respective APS channel communication.

PEs are configured with BVI interfaces for VLANs in both instances and VRF is configured on BVI interfaces for L3VPN service. CE interface connecting to G.8032 ring is configured with trunk allowing all VLANs on it and SVIs configured on CE for L3 communication. BVIs are configured with First Hop Redundancy Protocol (FHRP) and CE uses FHRP address as default gateway. In our example, we are using VRRP on PEs as FHRP although we can use any available FHRP protocol. PEs are configured with high VRRP priority for VLANs in the case for which they are not RPL owner. CE uses VRRP address as default gateway. Since VRRP communication between PEs will be blocked along the ring due to G.8032 loop prevention mechanism, a pseudowire configured between PEs exists that enables VRRP communication. In normal condition, CE sends traffic directly along the ring to VRRP active PE gateway. Two failure conditions exist:

• In the case of link failure in ring, both PEs will open their RPL links for both instances and retain their VRRP states as VRRP communication between them is still up using pseudowire. Due to the broken ring, CE will have direct connectivity to only one PE along the ring, depending on which section (right or left) of G8032 ring has failed. In that case, CE connectivity to other PE will use the path to reachable PE along the ring and then use pseudowire between PEs.

• In the case of PE node failure, pseudowire connectivity between the PEs goes down, causing VRRP communication to go down as well. The PE that remains up becomes VRRP active for all VLANs, and all traffic from the CE is sent to that PE.

Figure 4-3 Ethernet Access with G.8032 Ring

The PE's dot1q subinterface for data-VLAN communication with the CE, the pseudowire connecting the two PEs, and the BVI interface are all configured in the same bridge domain, which places both PEs and the CE in the same broadcast domain for that data VLAN. If a ring link fails, the CE can therefore still reach both PEs over the remaining path and the pseudowire.

PE Configuration

Step 1 Interface connecting to the G.8032 ring.


interface TenGigE0/3/0/0
!

Step 2 Subinterface for data VLAN 118.

interface TenGigE0/3/0/0.118 l2transport
 encapsulation dot1q 118
 rewrite ingress tag pop 1 symmetric
!

Step 3 Subinterface for data VLAN 119.

interface TenGigE0/3/0/0.119 l2transport
 encapsulation dot1q 119

Step 4 Symmetrically POP 1 tag while receiving the packet and PUSH 1 tag while sending the traffic from interface.

rewrite ingress tag pop 1 symmetric!

Step 5 Interface BVI configuration mode.

interface BVI118

Step 6 Configuring VRF under interface.

vrf BUS-VPN2
ipv4 address 118.1.1.2 255.255.255.0
ipv6 address 2001:118:1:1::2/64
!

Step 7 Interface BVI configuration mode.

interface BVI119

Step 8 Configuring VRF under interface.

vrf CE-VPN-RING-2
ipv4 address 119.1.1.2 255.255.255.0
ipv6 address 2001:119:1:1::2/64
!

Step 9 Enter VRRP Configuration Mode.

router vrrp

Step 10 Enter BVI interface VRRP configuration mode.

interface BVI118

Step 11 Enter VRRP IPv4 address family for bundle subinterface.

address-family ipv4

Step 12 Configure VRRP group 118.

vrrp 118

Step 13 Make high priority for VRRP group 118 to 254 so that PE becomes VRRP active for this group.

priority 254

Step 14 Allow preemption to be delayed for a configurable time period, allowing the router to popu-late its routing table before becoming the active router.


preempt delay 15

Step 15 Configure VRRP address for the VRRP group.

address 118.1.1.1

Step 16 Configure millisecond timers for advertisement with force keyword to force the timers.

timer msec 100 force

Step 17 BFD enabled between PEs to detect fast failures.

bfd fast-detect peer ipv4 118.1.1.3

Step 18 Enter VRRP IPv6 address family for bundle subinterface.

address-family ipv6

Step 19 Configure VRRP group 118.

vrrp 118

Step 20 Make high priority for VRRP group 118 to 254 so that PE becomes VRRP active for this group.

priority 254

Step 21 Allow preemption to be delayed for a configurable time period, allowing the router to popu-late its routing table before becoming the active router.

preempt delay 15

Step 22 Configure VRRP address for the VRRP group.

address global 2001:118:1:1::1
address linklocal autoconfig

Step 23 Configure millisecond timers for advertisement with force keyword to force the timers.

timer msec 100 force

Step 24 Enter BVI interface VRRP configuration mode.

interface BVI119

Step 25 Enter VRRP IPv4 address family for bundle subinterface.

address-family ipv4

Step 26 Configure VRRP group 119. Default priority for VRRP group 119 such that other PE with 254 priority becomes VRRP active for this group.

vrrp 119

Step 27 Configure VRRP address for the VRRP group.

address 119.1.1.1

Step 28 Configure millisecond timers for advertisement with force keyword to force the timers.

timer msec 100 force

Step 29 BFD enabled between PEs to detect fast failures.

bfd fast-detect peer ipv4 119.1.1.3

Step 30 Enter VRRP IPv6 address family for bundle subinterface.

address-family ipv6


Step 31 Configure VRRP group 119. Default priority is kept for VRRP group 119 so that the other PE, with priority 254, becomes VRRP active for this group.

vrrp 119

Step 32 Configure VRRP addresses for the VRRP group.

address global 2001:119:1:1::1
address linklocal autoconfig

Step 33 Configure millisecond timers for advertisement with force keyword to force the timers

timer msec 100 force!!

Step 34 Enter L2VPN Configuration mode.

l2vpn

Step 35 Configure bridge group named L2VPN.

bridge group L2VPN

Step 36 Configure Bridge-domain named CE-L3VPN-118.

bridge-domain CE-L3VPN-118

Step 37 Enable subinterface connected to ring towards CE under bridge domain CE-L3VPN-118.

interface TenGigE0/3/0/0.118

Step 38 Configure pseudowire to neighbor PE in the same bridge domain.

neighbor 100.111.3.2 pw-id 118

Step 39 Configure L3 interface BVI in the same bridge domain CE-L3VPN-118.

routed interface BVI118

Step 40 Configure another bridge domain CE-L3VPN-119.

bridge-domain CE-L3VPN-119

Step 41 Enable subinterface connected to ring towards CE under same bridge domain CE-L3VPN-119.

interface TenGigE0/3/0/0.119

Step 42 Configure pseudowire to neighbor PE in the same bridge domain CE-L3VPN-119.

neighbor 100.111.3.2 pw-id 119

Step 43 Configure L3 interface BVI in the same bridge domain CE-L3VPN-119.

routed interface BVI119 !

Step 44 Configure G.8032 ring named ring_test.

ethernet ring g8032 ring_test

Step 45 Configure port0 for g.8032 ring.

port0 interface TenGigE0/3/0/0 !

Step 46 Configure port1 as none and the G.8032 ring as an open ring.


port1 none
open-ring

Step 47 Enter instance 1 configuration.

instance 1

Step 48 Configure VLANs in the inclusion list of instance 1.

inclusion-list vlan-ids 99,106,108,118,500,64,604,1001-2000

Step 49 Enter APS channel configuration mode.

aps-channel

Step 50 Configure subinterface used for APS channel communication.

port0 interface TenGigE0/3/0/0.99
port1 none
!

Step 51 Enter instance 2 configuration.

instance 2

Step 52 Configure instance with ring profile.

profile ring_profile

Step 53 Configure PE as RPL owner on port0 for instance 2.

rpl port0 owner

Step 54 Configure VLANs in the inclusion list of instance 2.

inclusion-list vlan-ids 199,107,109,109,119,501,2001-3000

Step 55 Enter APS channel configuration mode.

aps-channel

Step 56 Configure subinterface used for APS channel communication.

port0 interface TenGigE0/3/0/0.199
port1 none

Step 57 Configure Ethernet Ring profile.

ethernet ring g8032 profile ring_profile

Step 58 Configure G.8032 WTR timer.

timer wtr 10

Step 59 Configure Guard timer.

timer guard 100

Step 60 Configure hold-off timer.

timer hold-off 0!
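For one even-numbered VLAN, the PE-side fragments above fit together as in the following sketch: the bridge domain ties the ring-facing dot1q subinterface, the inter-PE pseudowire, and the routed BVI into one broadcast domain, while the G.8032 instance protects the ring (VRRP on the BVI is configured as shown in the steps above and is omitted here):

interface TenGigE0/3/0/0.118 l2transport
 encapsulation dot1q 118
 rewrite ingress tag pop 1 symmetric
!
interface BVI118
 vrf BUS-VPN2
 ipv4 address 118.1.1.2 255.255.255.0
 ipv6 address 2001:118:1:1::2/64
!
l2vpn
 bridge group L2VPN
  bridge-domain CE-L3VPN-118
   interface TenGigE0/3/0/0.118
   !
   neighbor 100.111.3.2 pw-id 118
   !
   routed interface BVI118
!
ethernet ring g8032 ring_test
 port0 interface TenGigE0/3/0/0
 !
 port1 none
 open-ring
 instance 1
  inclusion-list vlan-ids 99,106,108,118,500,64,604,1001-2000
  aps-channel
   port0 interface TenGigE0/3/0/0.99
   port1 none
!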


CE Configuration

Step 1 Enable VLANs 118 and 119.

vlan 118,119 !

Step 2 Configure data SVI on CE.

interface Vlan118
 ip address 118.1.1.251 255.255.255.0
 ipv6 address 2001:118:1:1::251/64
!

Step 3 Configure data SVI on CE.

interface Vlan119
 ip address 119.1.1.251 255.255.255.0
 ipv6 address 2001:119:1:1::251/64
!

Step 4 Enable G.8032 ring facing trunk to allow data VLANs.

interface GigabitEthernet0/15
 switchport trunk allowed vlan 106-109,118,119
 switchport mode trunk
!

Step 5 Configure IPv4 Static route towards VRRP address for VLAN 118.

ip route 118.2.1.0 255.255.255.0 118.1.1.1

Step 6 Configure IPv4 Static route towards VRRP address for VLAN 119.

ip route 119.2.1.0 255.255.255.0 119.1.1.1

Step 7 Configure IPv6 Static route towards VRRP address for VLAN 118.

ipv6 route 2001:118:2:1::/64 2001:118:1:1::1

Step 8 Configure IPv6 Static route towards VRRP address for VLAN 119.

ipv6 route 2001:119:2:1::/64 2001:119:1:1::1

Ethernet Access Node Configuration

Step 1 Configure Ethernet Ring profile.

ethernet ring g8032 profile ring_profile

Step 2 Configures G.8032 WTR timer.

timer wtr 10

Step 3 Configure Guard timer.

timer guard 100!

Step 4 Configure G.8032 ring named ring_test.


ethernet ring g8032 ring_test

Step 5 Configure the G.8032 ring as an open ring.

open-ring

Step 6 Exclude VLAN 1000 from the ring.

exclusion-list vlan-ids 1000

Step 7 Configure port0 of the ring as TenGigabitEthernet0/0/0.

port0 interface TenGigabitEthernet0/0/0

Step 8 Configure port1 of the ring as TenGigabitEthernet0/1/0.

port1 interface TenGigabitEthernet0/1/0

Step 9 Configure Instance 1.

instance 1

Step 10 Configure instance with ring profile.

profile ring_profile

Step 11 Configure VLANs included in Instance 1.

inclusion-list vlan-ids 99,106,108,118,301-302,310-311,1001-2000

Step 12 Configure APS channel.

aps-channel

Step 13 Assign service instance for APS messages on port0 and Port 1.

port0 service instance 99
port1 service instance 99
!

Step 14 Configure Instance 2.

instance 2

Step 15 Configure instance with ring profile.

profile ring_profile

Step 16 Configure device interface as next neighbor to RPL link owner.

rpl port1 next-neighbor

Step 17 Configure VLANs included in Instance 2.

inclusion-list vlan-ids 107,109,119,199,351,2001-3000

Step 18 Configure APS channel.

aps-channel

Step 19 Assign service instance for APS messages on port0 and Port 1.

port0 service instance 199
port1 service instance 199
!


Step 20 Configure interface connected to ring.

interface TenGigabitEthernet0/0/0!

Step 21 Configure service instance used for APS messages on G.8032 ring for both instances.

service instance 99 ethernet
 encapsulation dot1q 99
 rewrite ingress tag pop 1 symmetric
 bridge-domain 99
!
service instance 199 ethernet
 encapsulation dot1q 199
 rewrite ingress tag pop 1 symmetric
 bridge-domain 199
!

Step 22 Configure interface connected to ring.

interface TenGigabitEthernet0/1/0

Step 23 Configure service instance used for APS messages on G.8032 ring for both instances.

service instance 99 ethernet
 encapsulation dot1q 99
 rewrite ingress tag pop 1 symmetric
 bridge-domain 99
!
service instance 199 ethernet
 encapsulation dot1q 199
 rewrite ingress tag pop 1 symmetric
 bridge-domain 199
!

nV (Network Virtualization) Access

nV Satellite enables a system-wide solution in which one or more remotely-located devices or "satellites" complement a pair of host PE devices to collectively realize a single virtual switching entity, with the satellites acting under the management and control of the host PE devices. Satellites and host PEs communicate using a Cisco proprietary protocol that offers discovery and remote management functions, thus turning the satellites from standalone devices into distributed logical line cards of the host.

The technology, therefore, allows Enterprises to virtualize the access devices on which branch or campus routers terminate, converting them into nV Satellite devices, and to manage them through PE nodes that operate as nV hosts. By doing so, the access devices transform from standalone devices with separate management and control planes into low-profile devices that simply move user traffic from a port connecting the branch or campus router toward a virtual counterpart at the host, where all network control plane protocols and advanced features are applied. The satellite only provides simple functions such as local connectivity and limited (and optional) local intelligence that includes ingress QoS, OAM, performance measurements, and timing synchronization.

The satellites and the hosts exchange data and control traffic over point-to-point virtual connections known as Fabric Links. Branch or Campus Ethernet traffic carried over the fabric links is specially encapsulated using 802.1ah. A per-Satellite-Access-Port derived ISID value is used to map a given


satellite node physical port to its virtual counterpart at the host for traffic flowing in the upstream and downstream direction. Satellite access ports are mapped as local ports at the host using the following naming convention:

<port type><Satellite-ID>/<satellite-slot>/<satellite-bay>/<satellite-port>

where:

• <port type> is GigabitEthernet for all existing satellite models

• <Satellite-ID> is the satellite number as defined at the Host

• <satellite-slot>/<satellite-bay>/<satellite-port> are the access port information as known at the satellite node.
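For example, with the satellite IDs used later in this section, access port 0/0/40 on satellite 100 appears at the host as the virtual interface below, and that is where the VRF and L3VPN service are applied:

interface GigabitEthernet100/0/0/40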

These satellite virtual interfaces on the Host PEs are configured with VRF to enable L3VPN service.

The satellite architecture encompasses multiple connectivity models between the host and the satellite nodes. The guide will discuss release support for:

• nV Satellite Simple Rings

• nV Satellite Layer 2 Fabric

In all nV access topologies, host nodes load share traffic on a per-satellite basis. The active/standby role of a host node for a specific satellite is determined by a locally-defined priority and negotiated between the hosts via ICCP.

ASR9000v and ASR901 are implemented as satellite devices:

• ASR9000v has four 10 GbE ports that can be used as ICL.

• ASR901 has two GbE ports that can be used as ICL, and ASR903 can have up to two 10 GbE ports that can be used as ICL.

nV Satellite Simple Rings

In this topology, satellite access nodes connecting branch or campus are connected in an open ring topology terminating at the PE host devices as shown in Figure 4-4.

Figure 4-4 nV with L1 Fabric Access


The PE device advertises multicast discovery messages periodically over a dedicated VLAN over fabric links. Each satellite access device in the ring listens for discovery messages on all its ports and dynamically detects the Fabric link port toward the host.

The satellite uses this auto-discovered port for the establishment of a management session and for the exchange of all the upstream and the downstream traffic with each of the hosts (data and control). At the host, incoming and outgoing traffic is associated to the corresponding satellite node using the satellite mac address, which was also dynamically learned during the discovery process. Discovery messages are propagated from one satellite node to another and from either side of the ring so that all nodes can establish a management session with both hosts. nV L1 fabric access configuration is described below.

nV L1 Fabric Configuration

Step 1 Interface acting as Fabric link connecting to nV ring.

interface TenGigE0/2/0/3
 ipv4 point-to-point
 ipv4 unnumbered Loopback10

Step 2 Enter nV configuration mode under interface.

nv

Step 3 Define fabric link connectivity to simple ring using keyword "Network".

satellite-fabric-link network

Step 4 Enter Redundancy configuration mode for ICP group 210.

redundancy
 iccp-group 210
!

Step 5 Define the Access ports of satellite ID 100.

satellite 100
 remote-ports GigabitEthernet 0/0/0-30,31-43
!

Step 6 Define the Access ports of satellite ID 101.

satellite 101
 remote-ports GigabitEthernet 0/0/0-43
!

Step 7 Define the Access ports of satellite ID 102.

satellite 102
 remote-ports GigabitEthernet 0/0/0-43
!

Step 8 Virtual Interface configuration corresponding to satellite 100. Interface is configured with the VRF for L3VPN service.

interface GigabitEthernet100/0/0/40
 negotiation auto
 load-interval 30
!
interface GigabitEthernet100/0/0/40.502 l2transport


vrf BUS-VPN2
ipv4 address 51.1.1.1 255.255.255.252
encapsulation dot1q 49
!

Step 9 Configure ICCP redundancy group 210 and define the peer PE address in the redundancy group.

redundancy
 iccp group 210
  member neighbor 100.111.11.2
  !

Step 10 Configure system mac for nV communication.

nv satellite
 system-mac cccc.cccc.cccc
!

Step 11 Enter nV configuration mode to define satellites.

nv

Step 12 Define the Satellite ID.

satellite 100

Step 13 Define ASR9000v device as satellite device.

type asr9000v

Step 14 Configure satellite address used for Communication.

ipv4 address 100.100.1.10
redundancy

Step 15 Define the priority for the Host PE.

host-priority 20
!

Step 16 Satellite chassis serial number to identify satellite.

serial-number CAT1729U3BF ! !

Step 17 Define the Satellite ID.

satellite 101

Step 18 Define ASR9000v device as satellite device.

type asr9000v

Step 19 Configure satellite address used for Communication.

ipv4 address 100.100.1.3
redundancy

Step 20 Define the priority for the Host PE


host-priority 20 !

Step 21 Satellite chassis serial number to identify satellite.

serial-number CAT1729U3BB!

Step 22 Define the Satellite ID.

satellite 102

Step 23 Define ASR9000v device as satellite device.

type asr9000v

Step 24 Configure satellite address used for Communication.

ipv4 address 100.100.1.20
redundancy

Step 25 Define the priority for the Host PE.

host-priority 20
!

Step 26 Satellite chassis serial number to identify satellite.

serial-number CAT1729U3AU!
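Assembled from the steps above, the host-side configuration for one satellite in the simple ring reduces to the following sketch. The satellite access subinterface at the end is shown as a routed subinterface carrying the VRF, following the L2 fabric example later in this section; its subinterface number is chosen here to match the VLAN purely for illustration.

interface TenGigE0/2/0/3
 ipv4 point-to-point
 ipv4 unnumbered Loopback10
 nv
  satellite-fabric-link network
   redundancy
    iccp-group 210
   !
   satellite 100
    remote-ports GigabitEthernet 0/0/0-30,31-43
!
redundancy
 iccp group 210
  member neighbor 100.111.11.2
  !
  nv satellite
   system-mac cccc.cccc.cccc
!
nv
 satellite 100
  type asr9000v
  ipv4 address 100.100.1.10
  redundancy
   host-priority 20
  !
  serial-number CAT1729U3BF
!
interface GigabitEthernet100/0/0/40.49
 vrf BUS-VPN2
 ipv4 address 51.1.1.1 255.255.255.252
 encapsulation dot1q 49
!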

nV Satellite Layer 2 Fabric

In this model, satellite nodes connecting to branch or campus are connected to the host(s) over any Layer 2 Ethernet network. Such a network can be implemented as a native or as an overlay Ethernet transport to fit Enterprise access network designs.

Figure 4-5 nV with L2 Fabric Access using Native or Overlay Transport


In the case of L2 Fabric, a unique VLAN is allocated for the point-to-point emulated connection between the Host and each Satellite device. The host uses such VLAN for the advertisement of multicast discovery messages.

Satellite devices listen for discovery messages on all the ports and dynamically create a subinterface based on the port and VLAN pair on which the discovery messages were received. VLAN configuration at the satellite is not required.

The satellite uses this auto-discovered subinterface for the establishment of a management session and for the exchange of all upstream and downstream traffic with each of the hosts (data and control). At the host, incoming and outgoing traffic is associated to the corresponding satellite node based on VLAN assignment. nV L2 fabric access configuration is described below.

nV L2 Fabric Configuration

Step 1 Interface acting as fabric link connecting to the L2 fabric.

interface TenGigE0/1/1/3
 load-interval 30
 transceiver permit pid all
!

Step 2 Subinterface acting as fabric link toward satellite 210.

interface TenGigE0/1/1/3.210
 ipv4 point-to-point
 ipv4 unnumbered Loopback200
 encapsulation dot1q 210

Step 3 Enter nV configuration mode under interface.

nv

Step 4 Define fabric link connectivity to satellite 210.

satellite-fabric-link satellite 210

Step 5 Configure Ethernet cfm to detect connectivity failure to the fabric link.

ethernet cfm continuity-check interval 10ms !

Step 6 Enter Redundancy configuration mode for ICP group 210.

redundancy
 iccp-group 210
!

Step 7 Define the Access ports of satellite ID 210.

remote-ports GigabitEthernet 0/0/0-9
!

Step 8 Virtual Interface configuration corresponding to satellite 210. Interface is configured with the VRF for L3VPN service.

interface GigabitEthernet210/0/0/0
 negotiation auto
 load-interval 30
!


interface GigabitEthernet210/0/0/0.49
 vrf BUS-VPN2
 ipv4 address 51.1.1.1 255.255.255.252
 encapsulation dot1q 49
!

Step 9 Configure ICCP redundancy group 210 and define the peer PE address in the redundancy group.

redundancy
 iccp group 210
  member neighbor 100.111.11.2
  !

Step 10 Configure system mac for nV communication.

nv satellite
 system-mac cccc.cccc.cccc
!

Step 11 Enter nV configuration mode to define satellites.

nv

Step 12 Define the Satellite ID 210 and type of platform ASR 901.

satellite 210
 type asr901
 ipv4 address 27.27.27.40
 redundancy

Step 13 Define the priority for the Host PE.

host-priority 17 !

Step 14 Satellite chassis serial number to identify satellite.

serial-number CAT1650U00D!!
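The host-side sketch for the L2 fabric case differs mainly in that the fabric link toward the satellite is a dedicated dot1q subinterface; the rest follows the same pattern as the simple ring (values as in the steps above):

interface TenGigE0/1/1/3.210
 ipv4 point-to-point
 ipv4 unnumbered Loopback200
 encapsulation dot1q 210
 nv
  satellite-fabric-link satellite 210
   ethernet cfm continuity-check interval 10ms
   !
   redundancy
    iccp-group 210
   !
   remote-ports GigabitEthernet 0/0/0-9
!
nv
 satellite 210
  type asr901
  ipv4 address 27.27.27.40
  redundancy
   host-priority 17
  !
  serial-number CAT1650U00D
!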

nV Cluster

An ASR 9000 nV cluster system is designed to simplify L3VPN, L2VPN, and Multicast dual-homing topologies and resiliency designs by making two ASR 9000 systems operate as one logical system. An nV cluster system has these properties and covers some of the use cases (partial list) described in Figure 4-6.

• Without an ASR9k cluster, a typical MPLS-VPN dual-homing scenario has a CE dual-homed to two PEs where each PE has its own BGP router ID, PE-CE peering, security policy, routing policy maps, QoS, and redundancy design, all of which can be quite complex from a design perspective.

• With an ASR9k cluster system, both PEs will share a single control plane, a single management plane, and a fully distributed data plane across two physical chassis, and support one universal solution for any service including L3VPN, L2VPN, MVPN, Multicast, etc. The two clustered PEs can be geographically redundant by connecting the cluster ports on the RSP440 faceplate, which


will extend the EOBC channel between rack 0 and rack 1 and operate as a single IOS XR ASR 9000 router. For L3VPN, the same L3VPN instance is configured on both rack 0 and rack 1, with a single BGP router ID used for peering with CEs and remote PEs.

Figure 4-6 ASR 9000 nV Cluster Use Cases for Universal Resiliency Scheme

In the topology depicted and described in Figure 4-7, we tested and measured L3VPN convergence time using a clustered system and compared it against VRRP/HSRP. We tested both cases with identical scale and configuration as shown in the table in Figure 4-7. We also measured access-to-core and core-to-access traffic convergence time separately for better convergence visibility.

Figure 4-7 L3VPN Cluster Convergence Test Topology

The convergence results of L3VPN cluster system versus VRRP/HSRP are summarized in Figure 4-8. We covered the five types of failure tests listed below.

Note We repeated each test three times and reported the worst-case numbers of three trials.

• IRL link failure

• EOBC link failure

• Power off Primary DSC failover

[Figure 4-6 shows the ASR 9000 nV cluster ("nV system"): two chassis joined by EOBC and IRL links operating as an always-on virtual chassis with a single control plane, a single management plane, and a fully distributed data plane across the two physical chassis, providing one universal resiliency solution for roles such as video distribution router, data center interconnect, enterprise WAN core/edge, cloud gateway router, Internet edge/peering, campus core PE, and business services PE.]

[Figure 4-7 shows the convergence test topology: an ASR 9006 cluster PE (rack 0 and rack 1 connected by EOBC and IRLs) between MC-LAG access and the Carrier Ethernet/MPLS core, with L3VPN scale of 3k IPv4 eBGP sessions, 500 IPv6 eBGP sessions, 3k VRF bundle subinterfaces, 1M advertised prefixes, and no multicast (S,G) state.]


• DSC RP redundancy switchover

• Process restart

Figure 4-8 L3VPN Cluster Convergence Results versus VRRP/HSRP

Failure Test                  #   Trigger                            Edge-to-Core  Core-to-Edge  VRRP/HSRP
IRL Link Failure              1   Rack 0: 1 IRL Down - Fiber Pull    4 msec        3 msec        N/A
                              2   Rack 1: 1 IRL Down - Fiber Pull    4 msec        1 msec        N/A
                              3   All IRLs Down - Fiber Pull         99 msec       205 msec      N/A
EOBC Link Failure             4   Rack 0: 1 EOBC Down - Fiber Pull   0             0             N/A
                              5   Rack 1: 1 EOBC Down - Fiber Pull   0             0             N/A
                              6   All EOBC Down - Fiber Pull         0             112 msec      N/A
Power Off: Primary DSC        7   Rack 0: Powered Down               4.6 sec       2.1 sec       10 sec / 10 sec
                              8   Rack 1: Powered Down               4.5 sec       2.1 sec       10 sec / 10 sec
DSC RP Redundancy Switchover  9   Rack 0: Primary DSC RP failover    0             0             10 sec / 10 sec
                              10  Rack 1: Primary DSC RP failover    0             0             10 sec / 10 sec
Process Restart               11  LDP                                0             0             0
                              12  BGP                                0             0             0
                              13  ISIS/OSPF                          0             0             0
                              14  L2VPN                              0             0             0

An nV cluster PE with L3VPN service can be implemented on ASR 9000 Rack 0 and Rack 1 as described below.

nV Cluster Configuration

Step 1 Configure Rack ID 1 for rack 1 in ROMmon mode.

CLUSTER_RACK_ID = 1

Step 2 Configure Rack ID 0 for rack 0 in ROMmon mode.

CLUSTER_RACK_ID = 0

Step 3 Configure nV Edge in Admin mode. Required only on Rack 0.

nv

Step 4 Configure nV Edge control. Required only on Rack 0.

edge control

Step 5 Configure Serial Number of Rack 0.

serial FOX1435G0JR rack 0

Step 6 Configure Serial Number of Rack 1.

serial FOX1436H557 rack 1


 !
 data
  minimum 0
 !

Step 7 Configure Inter Rack Links (L1 links). Used for forwarding packets whose ingress and egress interfaces are on separate racks.

interface TenGigE0/3/0/1

Step 8 Configure the interface as nV Edge interface.

 nv
  edge
   interface
!

Step 9 Configure the mandatory LACP system MAC for bundle interfaces.

lacp system mac f866.f217.5d24
!

Step 10 Configure Bundle interface.

interface Bundle-Ether1

Step 11 Configure VRF service.

 vrf BUS-VPN2
 ipv4 address 40.1.1.1 255.255.255.0

Step 12 nV Edge requires manual configuration of the MAC address under the Bundle interface.

mac-address f866.f217.5d23
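Taken together, Steps 1 through 12 amount to the following consolidated sketch, split between ROMmon, admin configuration on Rack 0, and global configuration. The submode nesting (in particular, the placement of "data minimum" and "nv edge interface") is an approximation of the fragments above and may differ slightly by IOS XR release.

ROMmon (set on each rack before booting the cluster):
CLUSTER_RACK_ID = 0    (rack 0)
CLUSTER_RACK_ID = 1    (rack 1)

Admin configuration (Rack 0 only):
nv
 edge
  control
   serial FOX1435G0JR rack 0
   serial FOX1436H557 rack 1
  !
  data
   minimum 0
  !
 !
!

Global configuration:
lacp system mac f866.f217.5d24
!
interface TenGigE0/3/0/1
 nv
  edge
   interface
 !
!
interface Bundle-Ether1
 vrf BUS-VPN2
 ipv4 address 40.1.1.1 255.255.255.0
 mac-address f866.f217.5d23
!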

Native IP-Connected Access

In the native Ethernet access topology, the branch or campus router is dual-homed to the PEs, with redundancy and load balancing handled by the routing protocol configuration. The VRF service is configured on both PEs' interfaces connecting to the CPE. The CPE can be connected to the PEs using direct links or through a normal Ethernet access network. The configuration on the CPE determines which PE is used as the primary for sending traffic.

• If BGP is the routing protocol between PE and CE, a higher local preference is configured on the CE for the primary PE so that the best path is selected toward the primary PE.

• In the case of static routing, floating static routes are configured on the CPE such that the static route with the lower administrative distance (AD) points to the primary PE and the route with the higher AD points to the backup PE. BFD is used for fast detection of a failed BGP peer or static-route next hop, as shown in the sketch below.
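As a minimal illustration of the static-routing option, the following CPE sketch (Cisco IOS syntax) reuses the PE addresses from Table 4-3 and uses a default route purely as an example prefix; the BFD static-route association command shown is an assumption and varies by IOS release.

interface GigabitEthernet0/1
 ip address 100.192.30.3 255.255.255.0
 ! Enable BFD timers on the UNI toward the PEs
 bfd interval 50 min_rx 50 multiplier 3
 no bfd echo
!
! Associate the static route toward the primary PE with BFD session state
ip route static bfd GigabitEthernet0/1 100.192.30.1
! Primary path via PE1 (default administrative distance of 1)
ip route 0.0.0.0 0.0.0.0 100.192.30.1
! Floating static via PE2, used only if the primary route is removed (AD 250)
ip route 0.0.0.0 0.0.0.0 100.192.30.2 250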


Figure 4-9 Native IP-Connected Access

(The figure shows a CPE branch/campus router connected over an Ethernet network to two ASR 9000 PEs that attach to the MPLS network; PE-CE routing uses BGP or static routes with BFD on the Gigabit Ethernet UNI interfaces.)


Native IP-connected configuration is shown in Table 4-3.

Table 4-3 Native IP-connected Configuration

PE1 Config:

interface GigabitEthernet0/0/1/7
 vrf BUS-VPN2
 ipv4 address 100.192.30.1 255.255.255.0
 ipv6 address 2001:100:192:30::1/64
!
!***Configure eBGP peering with BFD***
router bgp 101
 <snip>
 vrf BUS-VPN2
  !***Setup eBGP peering to CE***
  neighbor 100.192.30.3
   remote-as 65002
   !***Enables BFD for BGP to neighbor for VRF***
   bfd fast-detect
   bfd multiplier 3
   bfd minimum-interval 50
   address-family ipv4 unicast
  !
 !
!
bfd
 interface GigabitEthernet0/0/1/7
  !***Disables BFD echo mode on interface***
  echo disable
 !
!

PE2 Config:

interface GigabitEthernet0/0/1/7
 vrf BUS-VPN2
 ipv4 address 100.192.30.2 255.255.255.0
 ipv6 address 2001:100:192:30::2/64
!
!***Configure eBGP peering with BFD***
router bgp 101
 <snip>
 vrf BUS-VPN2
  !***Setup eBGP peering to CE***
  neighbor 100.192.30.3
   remote-as 65002
   !***Enables BFD for BGP to neighbor for VRF***
   bfd fast-detect
   bfd multiplier 3
   bfd minimum-interval 50
   address-family ipv4 unicast
  !
 !
!
bfd
 interface GigabitEthernet0/0/1/7
  !***Disables BFD echo mode on interface***
  echo disable
 !
!

CPE Config:

!***UNI interface towards PE***
interface GigabitEthernet0/1
 ip address 100.192.30.3 255.255.255.0
 duplex auto
 speed auto
 !***Enable BFD on interface***
 bfd interval 50 min_rx 50 multiplier 3
 no bfd echo
!
!***eBGP peering with BFD***
router bgp 65002
 bgp router-id 100.111.10.11
 bgp log-neighbor-changes
 !***eBGP peering towards Primary PE***
 neighbor 100.192.30.1 remote-as 101
 !***Enable BFD to this BGP Peer***
 neighbor 100.192.30.1 fall-over bfd
 !***eBGP peering towards Backup PE***
 neighbor 100.192.30.2 remote-as 101
 !***Enable BFD to this BGP Peer***
 neighbor 100.192.30.2 fall-over bfd
 !
 address-family ipv4
  no synchronization
  redistribute connected
  !***Advertise prefix facing the LAN side of the CE router***
  network 100.192.193.0 mask 255.255.255.0
  neighbor 100.192.30.1 activate
  !***Prefer this neighbor PE1 as the primary PE***
  neighbor 100.192.30.1 weight 100
  neighbor 100.192.30.2 activate
  no auto-summary
 exit-address-family


MPLS Access using Pseudowire Headend

In MPLS access, Enterprise access devices are connected to the ASR 9000 PE devices across an MPLS-enabled network between the access devices and the PE devices. The branch or campus router is connected to the access device via an Ethernet 802.1Q-tagged interface. The access device is configured with a pseudowire terminating on the PE device on a Pseudowire Headend interface.

Pseudowire Headend (PWHE) is a technology that allows termination of access PWs into an L3 (VRF or global) domain, therefore eliminating the requirement of keeping separate interfaces for terminating pseudowire and L3VPN service. PWHE introduces the construct of a "pw-ether" interface on the PE device. This virtual pw-ether interface terminates the PWs carrying traffic from the CPE device and maps directly to an MPLS VPN VRF on the provider edge device. Any QoS and ACLs are applied to the pw-ether interface.

All traffic between CE and PE is tunneled in this pseudowire. The access network runs its own LDP/IGP domain along with labeled BGP, as described in Large Scale Network Design and Implementation, page 3-16, and learns the PE loopback address accordingly for PW connectivity. The access device can initiate this pseudowire using two methods:

• Per Access Port Method, in which the pseudowire is configured directly on the interface connecting to the CPE, or

• Per Access Node Method, in which the pseudowire is configured on the corresponding SVI, thereby carrying traffic from multiple ports in a single pseudowire.

This guide focuses on the Per Access Port topology.

The access device is configured with an xconnect on the interface connecting to the branch/campus router, with the xconnect peer set to the PE loopback address. On the PE, a PW-Ether interface is created on which the xconnect terminates. The same PW-Ether interface is also configured with the VRF, and the L3VPN service is configured on it. The PE and CE can use any routing protocol to exchange route information over the PW-Ether interface. BFD is used between PE and CE for fast failure detection.

PWHE configuration is described below.

Figure 4-10 depicts MPLS Access using PWHE.

Figure 4-10 MPLS Access using Pseudowire Headend

Access Device Configuration

Step 1 Configure PW class on the access device.

pseudowire-class BUS_PWHE
 encapsulation mpls
 control-word
!


Step 2 Enter Interface configuration of CE-connecting interface.

interface GigabitEthernet0/4

Step 3 Configure XConnect on the Access device towards PE with encapsulation MPLS and PW-class BUS_PWHE to inherit its parameters.

 xconnect 100.111.11.1 130901100 encapsulation mpls pw-class BUS_PWHE
 !
 mtu 1500
!
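The access-device fragments in Steps 1 through 3 can be read as one consolidated sketch; placing the MTU under the CE-facing interface is an assumption based on the fragment shown above.

pseudowire-class BUS_PWHE
 encapsulation mpls
 control-word
!
interface GigabitEthernet0/4
 mtu 1500
 xconnect 100.111.11.1 130901100 encapsulation mpls pw-class BUS_PWHE
!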

PE Configuration

Step 1 Configure PWHE interface.

interface PW-Ether100

Step 2 Configure VRF under PWHE interface.

 vrf BUS-VPN2
 ipv4 address 100.13.9.1 255.255.255.252
 ipv6 address 2001:13:9:1::1/64
 ipv6 enable
!

Step 3 Attach interface list to the PWHE interface.

 attach generic-interface-list BUS_PWHE
!

Step 4 Configure the generic interface list.

generic-interface-list BUS_PWHE

Step 5 Assign interfaces to the list.

 interface TenGigE0/0/0/0
 interface TenGigE0/0/0/3
!

Step 6 Configure BGP in AS 101.

router bgp 101

Step 7 Enter VRF configuration under BGP.

 vrf BUS-VPN2
  rd 8000:8002

Step 8 Configure the CE address as the BGP neighbor.

neighbor 100.13.9.10

Step 9 Configure remote AS as CE AS.

remote-as 105

Step 10 Enable BFD to detect failures in the path between adjacent forwarding engines.

bfd fast-detect


Step 11 Configure BFD multiplier.

bfd multiplier 3

Step 12 Configure Minimum Interval between sending BFD hello packets to the neighbor.

bfd minimum-interval 50

Step 13 Enter the IPv4 address family.

address-family ipv4 unicast

Step 14 Configure route-filter to permit all incoming routes.

route-policy pass-all in

Step 15 Configure route-filter to permit all outgoing routes.

   route-policy pass-all out
  !
  neighbor 2001:13:9:9::2
   remote-as 105
   bfd fast-detect
   bfd multiplier 3
   bfd minimum-interval 50
   address-family ipv6 unicast
    route-policy pass-all in
    route-policy pass-all out
!
!

Step 16 Enter L2VPN configuration mode.

l2vpn

Step 17 Configure pw-class.

 pw-class BUS_PWHE
  encapsulation mpls
   control-word
  !
 !

Step 18 Configure the xconnect on the PWHE interface PW-Ether100, specifying the access device as the neighbor.

 xconnect group BUS_PWHE100
  p2p PWHE-K1309-Static
   interface PW-Ether100
   neighbor 100.111.13.9
  !
 !

Step 19 Configure route-policy.

route-policy pass-all

Step 20 Pass all routes.

 pass
end-policy
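Steps 1 through 20 on the PE can be summarized in the following consolidated sketch; the BGP and L2VPN submode nesting is an approximation of the fragments above and may vary by IOS XR release.

interface PW-Ether100
 vrf BUS-VPN2
 ipv4 address 100.13.9.1 255.255.255.252
 ipv6 address 2001:13:9:1::1/64
 ipv6 enable
 attach generic-interface-list BUS_PWHE
!
generic-interface-list BUS_PWHE
 interface TenGigE0/0/0/0
 interface TenGigE0/0/0/3
!
router bgp 101
 vrf BUS-VPN2
  rd 8000:8002
  neighbor 100.13.9.10
   remote-as 105
   bfd fast-detect
   bfd multiplier 3
   bfd minimum-interval 50
   address-family ipv4 unicast
    route-policy pass-all in
    route-policy pass-all out
   !
  !
  neighbor 2001:13:9:9::2
   remote-as 105
   bfd fast-detect
   bfd multiplier 3
   bfd minimum-interval 50
   address-family ipv6 unicast
    route-policy pass-all in
    route-policy pass-all out
   !
  !
 !
!
l2vpn
 pw-class BUS_PWHE
  encapsulation mpls
   control-word
  !
 !
 xconnect group BUS_PWHE100
  p2p PWHE-K1309-Static
   interface PW-Ether100
   neighbor 100.111.13.9
  !
 !
!
route-policy pass-all
 pass
end-policy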


CE Configuration

Step 1 Configure the interface connecting to the access device.

interface GigabitEthernet0/2.110
 encapsulation dot1Q 110
 ip address 100.13.9.10 255.255.255.252
 ipv6 address 2001:13:9:9::2/64
 ipv6 enable

Step 2 Configure BFD for fast failure detection.

 bfd interval 50 min_rx 50 multiplier 3
 no bfd echo
!

Step 3 Configure router bgp.

router bgp 105
 bgp router-id 100.13.9.10
 bgp log-neighbor-changes

Step 4 Configure the IPv6 PE neighbor with remote AS 101.

 neighbor 2001:13:9:1::1 remote-as 101
 neighbor 2001:13:9:1::1 fall-over bfd

Step 5 Configure the IPv4 PE neighbor with remote AS 101.

 neighbor 100.13.9.1 remote-as 101
 neighbor 100.13.9.1 fall-over bfd
 !
 address-family ipv4
  no synchronization
  network 218.10.4.0 mask 255.255.255.252
  redistribute connected
  neighbor 100.13.9.1 activate
  no auto-summary
 exit-address-family
 !
 address-family ipv6
  redistribute connected
  no synchronization
  network 2001:10:4:1::/64
  neighbor 2001:13:9:1::1 activate
 exit-address-family

To achieve PE-level redundancy, another link can be used between the CPE and the access node; on that link, the access node can be configured with another pseudowire terminating on a second PE.


Chapter 5

PE UNI QoS

This chapter includes the following major topics:

• PE UNI QoS Configuration, page 5-2

• PE UNI QoS Configuration with PWHE Access, page 5-4

Enterprise virtual networks carry traffic types that include voice, video, critical application traffic, and end-user web traffic. These traffic types require different priorities and treatment based on their nature and how critical they are to the business. As traffic is sent and received between PE and CE, the QoS implementation on the ASR 9000 PE uses the DSCP field in the IP header to ensure that traffic is treated according to the priority defined by its DSCP. Two-level H-QoS is configured on the PE for both the ingress and egress policies. In nV access topologies, the ingress QoS function configured on the host for the virtual satellite access port is offloaded to the satellite so that only committed traffic enters the nV network and fabric link oversubscription is avoided.

The mapping shown in Table 5-1 is used for different traffic classes to DSCP.

PE configuration for QoS includes configuring class-maps for the respective traffic classes and mapping them to the appropriate DSCP values. Two-level ingress QoS polices traffic in the individual classes of the child policy. The parent policy is configured with the keyword "child-conform-aware" to prevent the parent policer from dropping any ingress traffic that conforms to the maximum rate specified in the child policer. In the egress policy map, the real-time traffic class CMAP-RT-dscp is configured with the highest priority (level 1) and is policed to ensure low-latency expedited forwarding. The remaining classes are assigned their respective required bandwidth. WRED is used as the congestion-avoidance mechanism for EXP 1 and 2 traffic in the Enterprise critical class CMAP-EC-EXP. Shaping is configured on the parent egress policy to ensure overall traffic does not exceed the committed bit rate (CBR). The ingress and egress policy maps are applied in the respective directions to the PE interface connecting to the CE.

Table 5-1 Mapping for Different Traffic Classes to DSCP

Traffic Class PHB DSCP

Enterprise Voice and Real-time EF 46

Enterprise Video Distribution AF 32

Enterprise Critical: In Contract AF 16

Enterprise Critical: Out of Contract AF 8

Enterprise Best Effort BE 0


PE UNI QoS Configuration

Step 1 Configure class-map for business-critical traffic.

class-map match-any CMAP-BC-dscp

Step 2 Match DSCP 8 and 16.

match dscp 8 16

Step 3 Configure class-map for video traffic.

class-map match-any CMAP-BC-video-dscp

Step 4 Match DSCP 32.

match dscp 32

Step 5 Configure class-map for real-time traffic.

class-map match-any CMAP-RT-dscp

Step 6 Match DSCP expedited forwarding.

match dscp ef

Step 7 Configure Child Egress policy-map.

policy-map PMAP-BUS-CE-Child-E

Step 8 Configure RT class-map under policy-map.

class CMAP-RT-dscp

Step 9 Configure priority level 1 for RT class.

priority level 1

Step 10 Police traffic in RT class.

 police rate 200 mbps
!

Step 11 Configure business-critical class under policy.

class CMAP-BC-dscp

Step 12 Assign Bandwidth to the class.

bandwidth percent 5

Step 13 Configure Video class under policy.

class CMAP-BC-video-dscp

Step 14 Assign Bandwidth to the class.

bandwidth percent 10

Step 15 Configure class-default for rest of the traffic.

class class-default
end-policy-map
!
!


Step 16 Configure parent egress policy-map.

policy-map PMAP-BUS-CE-Parent-E

Step 17 Configure class-default for the policy-map.

class class-default

Step 18 Configure child policy under class-default.

service-policy PMAP-BUS-CE-Child-E

Step 19 Configure shaping to ensure egress traffic does not exceed CBR.

shape average 500 mbps

Step 20 Configure bandwidth for the class.

 bandwidth 300 mbps
end-policy-map

Step 21 Configure ingress child policy-map.

policy-map PMAP-BUS-CE-Child-I

Step 22 Configure real-time class-map under policy-map.

class CMAP-RT-dscp

Step 23 Configure priority level 1 for real-time class.

priority level 1

Step 24 Police traffic in real-time class.

 police rate 50 mbps
!

Step 25 Configure video class-map under policy-map.

class CMAP-BC-video-dscp

Step 26 Configure priority level 2 for video class.

priority level 2

Step 27 Police traffic in video class.

 police rate 100 mbps
!

Step 28 Configure business-critical class-map under policy-map.

class CMAP-BC-dscp

Step 29 Police traffic in business-critical class.

 police rate 100 mbps peak-rate 200 mbps
  exceed-action transmit
  violate-action drop
!

Step 30 Configure class-default class-map under policy-map.

class class-default

Step 31 Police traffic in default class.


 police rate 50 mbps
  exceed-action transmit
end-policy-map
!
!

Step 32 Configure parent ingress policy-map.

policy-map PMAP-BUS-CE-Parent-I

Step 33 Configure class-default for the policy-map.

class class-default

Step 34 Configure the child policy under class-default.

service-policy PMAP-BUS-CE-Child-I

Step 35 Configure policing to ensure ingress traffic does not exceed CBR.

police rate 500 mbps

Step 36 Configure child-conform-aware under class.

  child-conform-aware
end-policy-map
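Putting Steps 1 through 36 together, the PE UNI QoS configuration can be summarized as the sketch below. The interface used for attaching the policies (GigabitEthernet0/0/1/7, borrowed from the native IP-connected example) is illustrative only; in practice the ingress and egress parent policies are applied to whichever PE interface faces the CE.

class-map match-any CMAP-BC-dscp
 match dscp 8 16
!
class-map match-any CMAP-BC-video-dscp
 match dscp 32
!
class-map match-any CMAP-RT-dscp
 match dscp ef
!
policy-map PMAP-BUS-CE-Child-I
 class CMAP-RT-dscp
  priority level 1
  police rate 50 mbps
 !
 class CMAP-BC-video-dscp
  priority level 2
  police rate 100 mbps
 !
 class CMAP-BC-dscp
  police rate 100 mbps peak-rate 200 mbps
   exceed-action transmit
   violate-action drop
 !
 class class-default
  police rate 50 mbps
   exceed-action transmit
 end-policy-map
!
policy-map PMAP-BUS-CE-Parent-I
 class class-default
  service-policy PMAP-BUS-CE-Child-I
  police rate 500 mbps
   child-conform-aware
 end-policy-map
!
policy-map PMAP-BUS-CE-Child-E
 class CMAP-RT-dscp
  priority level 1
  police rate 200 mbps
 !
 class CMAP-BC-dscp
  bandwidth percent 5
 !
 class CMAP-BC-video-dscp
  bandwidth percent 10
 !
 class class-default
 end-policy-map
!
policy-map PMAP-BUS-CE-Parent-E
 class class-default
  service-policy PMAP-BUS-CE-Child-E
  shape average 500 mbps
  bandwidth 300 mbps
 end-policy-map
!
interface GigabitEthernet0/0/1/7
 service-policy input PMAP-BUS-CE-Parent-I
 service-policy output PMAP-BUS-CE-Parent-E
!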

In the case of PWHE access, QoS is implemented on the PE based on MPLS EXP bits, because the received traffic is labeled.

PE UNI QoS Configuration with PWHE Access

Step 1 Configure business-critical class.

class-map match-any CMAP-BC-EXP

Step 2 Match MPLS EXP of topmost label as 1,2.

 match mpls experimental topmost 1 2
 end-class-map

Step 3 Configure real-time class.

class-map match-any CMAP-RT-EXP

Step 4 Match MPLS EXP of topmost label as 5.

 match mpls experimental topmost 5
 end-class-map

Step 5 Configure video class.

class-map match-any CMAP-BUS-video-EXP

Step 6 Match MPLS EXP of topmost label as 3.

 match mpls experimental topmost 3
 end-class-map

Step 7 Configure ingress child policy-map.


policy-map PMAP-PWHE-NNI-C-I

Step 8 Configure real-time class-map under policy-map.

class CMAP-RT-EXP

Step 9 Configure priority level 1 for real-time class.

priority level 1

Step 10 Police traffic in real-time class.

 police rate 50 mbps
!

Step 11 Configure video class-map under policy-map.

class CMAP-BUS-video-EXP

Step 12 Configure priority level 2 for video class.

priority level 2

Step 13 Police traffic in video class.

 police rate 100 mbps
!

Step 14 Configure business-critical class-map under policy-map.

class CMAP-BC-EXP

Step 15 Police traffic in business-critical class with committed and peak rates.

police rate 100 mbps peak-rate 200 mbps

Step 16 Configure the policer actions for the business-critical class.

  exceed-action transmit
  violate-action drop

Step 17 Configure class-default class-map under policy-map.

 class class-default
!

Step 18 Police traffic in default class.

 police rate 50 mbps
  exceed-action transmit
end-policy-map
!
!

Step 19 Configure parent ingress policy-map.

policy-map PMAP-PWHE-NNI-P-I

Step 20 Configure class-default for the policy-map.

class class-default

Step 21 Configure child policy under class-default.

service-policy PMAP-PWHE-NNI-C-I

Step 22 Configure policing to ensure ingress traffic does not exceed CBR.

police rate 500 mbps


Step 23 Configure child-conform-aware under class.

  child-conform-aware
end-policy-map

Step 24 Configure child egress policy-map.

policy-map PMAP-PWHE-NNI-C-E

Step 25 Configure real-time class-map under policy-map.

class CMAP-RT-EXP

Step 26 Configure priority level 1 for real-time class.

priority level 1

Step 27 Police traffic in real-time class.

 police rate 50 mbps
!

Step 28 Configure video class-map under policy-map.

class CMAP-BUS-video-EXP

Step 29 Configure priority level 2 for video class.

priority level 2

Step 30 Police traffic in video class.

police rate 100 mbps

Step 31 Configure WRED for congestion avoidance.

 random-detect discard-class 3 80 ms 100 ms
!

Step 32 Configure business-critical class-map under policy-map.

class CMAP-BC-EXP

Step 33 Configure bandwidth for business-critical class.

bandwidth remaining percent 60

Step 34 Configure WRED for congestion avoidance for discard-class 2.

random-detect discard-class 2 60 ms 70 ms

Step 35 Configure WRED for congestion avoidance for discard-class 1.

 random-detect discard-class 1 40 ms 50 ms
!
class class-default
end-policy-map
!

Step 36 Configure parent egress policy-map.

policy-map PMAP-PWHE-NNI-P-E

Step 37 Configure class-default for the policy-map.

class class-default


Step 38 Configure child policy under class-default.

service-policy PMAP-PWHE-NNI-C-E

Step 39 Configure shaping to ensure egress traffic does not exceed CBR.

  shape average 500000000 bps
end-policy-map

Step 40 Apply the service policies under the PW-Ether interface.

interface PW-Ether100
 service-policy input PMAP-PWHE-NNI-P-I
 service-policy output PMAP-PWHE-NNI-P-E
 vrf BUS-VPN2
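For the egress direction with PWHE access, Steps 24 through 39 can be read together as the following consolidated sketch of the child and parent policy-maps; it simply restates the fragments above in one place.

policy-map PMAP-PWHE-NNI-C-E
 class CMAP-RT-EXP
  priority level 1
  police rate 50 mbps
 !
 class CMAP-BUS-video-EXP
  priority level 2
  police rate 100 mbps
  random-detect discard-class 3 80 ms 100 ms
 !
 class CMAP-BC-EXP
  bandwidth remaining percent 60
  random-detect discard-class 2 60 ms 70 ms
  random-detect discard-class 1 40 ms 50 ms
 !
 class class-default
 end-policy-map
!
policy-map PMAP-PWHE-NNI-P-E
 class class-default
  service-policy PMAP-PWHE-NNI-C-E
  shape average 500000000 bps
 end-policy-map
!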


Chapter 6

Performance and Scale

This chapter includes the following major topics:

• Internet Peering Application, page 6-2

• 100G Edge and Core-Facing Ports, page 6-5

Two types of scalability numbers exist for L3VPN: 1-Dimensional (1D) and Multi-Dimensional (MD). The 1D scale numbers only show the scale of L3VPN as a single service running on the ASR 9000, which is not realistic from a deployment standpoint because an L3VPN PE in an Enterprise or service provider network usually runs mixed services and features; hence we tested and certified the MD scale profile for the L3VPN PE.

Table 6-1 captures the MD scale numbers of L3VPN PE Profile with all services and features enabled simultaneously on a PE in a realistic deployment environment.

Table 6-1 ASR9k L3VPN PE Profile Multi-Dimensional Scale Numbers

Feature Parameters Scale

L3 Interfaces Dot1q, QinQ, Ethernet 4k

ATM, POS, FR, CE, TDM, HDLC, etc. 6k

MPLS VPNv4 IPv4 VRF Sessions (2 to 3 interfaces per VRF) 4k

VPNv4 Prefixes 2M

PE-CE Routing

eBGP with NSR, MD5, and lower KA-HT 4k

OSPF with MD5 and sham links 1k

Staticv4 4750

EIGRPv4 250

MPLS VPNv6 IPv6 VRF Sessions (2 interfaces per VRF) 4k

VPNv6 Prefixes 500k

PE-CE Routing

eBGP with NSR, MD5, and lower KA-HT 4k

OSPF with MD5 and sham links 1k

Staticv6 4750

EIGRPv6 250

MVPN MVPN IPv4/IPv6 500

IPv4 Mroutes, IPv6 Mroutes 32k, 16k

P2MP-TE Headend LSP 1k

uRPF IPv4, IPv6 10k, 10k

IGMP Snooping BDs, Snooping Entries 1k, 32k

MLD Snooping BDs, Snooping Entries 1k, 32k

L2 Interfaces Ethernet (Phy, Bundle-Ether, BVI, PW-HE); POS and Serial

L2VPN AToM VPWS 1k

FRoMPLS 1k

FR to Eth IWoMPLS 1k

VPWS PWs 15k

VPWS ACs (1000 each on Eth, BE, PW-HE) 3k

VPLS PWs (w/ 5 neighboring PEs) 15k

VPLS ACs (1000 each on 10GigE, BE, PW-HE) 3k

VPLS PWs to Simulated PEs 34k

VPLS ACs for Simulated PEs (GigE, 10GigE) 2k

MAC address 2M

QoS Interfaces w/ Ingress Policy 10k

Interfaces w/ Egress Policy 10k

ACLs IPv4 ACLs on interface 10k

IPv6 ACLs on interface 10k

MPLS TE Headend LSP with FRR 3k

Midpoint LSP 50k

BFD IPv4 echo 10k

IPv6 Async 10k

Internet Peering Application

ASR9K is used extensively in Internet Peering, Inter-Connect, and RR applications because of its rich BGP features, the stability of the IOS XR software, and its high scale. We have designed and tested the following profiles:

• ASR 9001 as RR

• ASR9k as a peering and Enterprise, DC, or SP inter-connect platform


ASR 9001 RR-tested scalability numbers are summarized in Table 6-2.

In the Internet Peering and Inter-Connect profile, we used the topology described in Figure 6-1 to test Enterprise, Data Center, and SP peering and inter-connect use cases at scale. The following key features were tested in this profile:

• Inter-AS option B and C Unicast Routing

• BGP Flowspec

• NetFlow 1:10k Sampling for IPv4, IPv6 and MPLS

• VXLAN L3VPN/L2VPN Gateway handoff between Inter-AS Core

• RFC 3107 PIC, BGP PIC edge for VPNv4, 6VPE, 6PE etc.

• LFA, rLFA

• Inter-AS option C L2VPN VPWS/VPLS with BGP AD, Inter-AS MS-PW, FAT-PW

• Inter-Area/Inter-AS MPLS TE, P2MP TE

• Inter-AS Native IPv4/v6 Multicast, Rosen-mGRE-MVPNv4/v6, mLDP-MVPNv4/v6

• Native IPv4/v6, VPNv4/v6, VPWS/VPLS, Native IPv4/v6 Multicast, mGRE-MVPNv4/v6, PBB-EVPN over CsC

• Next-generation Routing LISP, LISP-MPLS Gateway

• Next-generation MVPN LSM with BGP C-mcast, Dynamic P2MP-TE MVPN, BGP SAFI 2, 129, 5

• Next-generation L2VPN PBB-EVPN

• Next-generation L2 Multicast: VPLS LSM

• TI-MoFRR, MPLS-TP, Bi-Directional TE LSPs (aka. Flex-LSP)

Table 6-2 ASR9k Route Reflector Scale Numbers

Feature Scale

eBGP sessions with 3 BGP instances 5k

eBGP routes with 3 BGP instances Total Route Scale = 14M routes

IPv4 = 6M

VPNv4 = 5M

IPv6 = 1.5M

VPNv6 = 1.5M

iBGP sessions with 2 BGP instances 5k

iBGP routes with 2 BGP instances Total Route Scale = 10M

IPv4 = 402k

VPNv4 = 7.6M

VPNv6 = 2M


Figure 6-1 ASR9k Internet Peering and Inter-Connect Profile Topology

(The figure shows a two-AS topology, AS100 and AS200, containing PE1 through PE8, ASBR1 through ASBR4, CE1 through CE3, route reflectors RR1 and RR2, a CSC-CE, and an ASR 9922, with IXIA test ports attached throughout. Annotations in the figure include IP/LDP LFA + RFC 3107 PIC, eBGP + 3107 PIC, inter-AS eBGP for VPNv4, 6PE/6VPE, VPWS/VPLS AD, MDT and MVPN BGP-AD, and MVPN BGP C-multicast with PBB-EVPN.)

The ASR9k scalability test results of the Internet Peering and Inter-Connect profile are shown in Table 6-3.

Table 6-3 ASR9k Internet Peering and Inter-Connect Profile Scale Numbers

Feature PE1 PE2 PE3 ASBR1 ASBR2 ASBR3 PE8

Global FIB v4 512k 512k 512k 512k 512k 512k 512k

Global FIB v6 18k 128k 128k 128k 128k 128k 128k

VRF (v4+v6) 4k 4k 4k 4k

VRF FIB v4 2M 2M 2M 2M

VRF FIB v6 256k 256k 256k 256k

LFIB 512k 512k 512k

L3 interfaces 8k 8k 8k 8k

ARP Adjacencies 32k 32k 32k 32k 32k 32k 32k

BGP session V4 3k 3k 3k 3k 3k 3k 3k

BGP session V6 256k 256k 256k 256k 256k 256k 256k

Labeled-BGP routes 10k 10k 10k 10k 10k 10k 10k

OSPFv2 adjacency 32k 32k 32k 32k 32k 32k 32k

OSPFv3 adjacency 32k 32k 32k 32k 32k 32k 32k

OSPFv2 routes 5k 5k 5k 5k 5k 5k 5k

OSPFv3 routes 10k 10k 10k 10k 10k 10k 10k

ISISv4 adjacency 32k 32k 32k 32k 32k 32k 32k

ISISv6 adjacency 32k 32k 32k 32k 32k 32k 32k

ISISv4 routes 5k 5k 5k 5k 5k 5k 5k

ISISv6 routes 10k 10k 10k 10k 10k 10k 10k

IGP LFA 10k 10k 10k 10k 10k 10k 10k

VRRP/HSRP 400k 400k 400k 400k 400k 400k 400k

ECMP 8k 8k 8k 8k 8k 8k 8k

MPLS label 512k 512k 512k 512k 512k 512k 512k

Intra-Area MPLS TE 1k 1k 1k 1k 1k 1k 1k

Inter-Area MPLS TE 1k 1k 1k 1k 1k 1k 1k

Intra-AS MPLS TE 1k 1k 1k 1k 1k 1k 1k

ACL 10k 10k 10k 10k 10k 10k 10k

L2 interfaces (physical) 16k 16k 16k 16k

L2 interfaces (bundle) 16k 16k 16k 16k

PW 32k 32k 32k 32k 32k 32k 32k

MS-PW 4k 4k 4k 4k 4k 4k 4k

BD/VFI 4k 4k 4k 4k 4k 4k 4k

MAC 512k 512k 512k 512k 512k 512k 512k

CFM MEP 4k 4k 4k 4k 4k 4k 4k

CFM MIP 4k 4k 4k 4k 4k 4k 4k

MPLS-TP 1k 1k 1k 1k 1k 1k 1k

Policy-map 1k 1k 1k 1k 1k 1k 1k

Class-map 1k 1k 1k 1k 1k 1k 1k

Policers 32k 32k 32k 32k 32k 32k 32k

Ingress Queue 64k 64k 64k 64k 64k 64k 64k

Egress Queue 64k 64k 64k 64k 64k 64k 64k

100G Edge and Core-Facing Ports

The ASR9k is positioned as the 100G routing platform in the Enterprise, SP, Data Center, and Public Sector segments, as the de facto platform for UNI or edge services and NNI or core-facing connectivity.

Table 6-4 describes 100G density and performance testing results based on UNI and NNI testing configurations of the ASR9k.

Table 6-4 Summary of 100G Support for UNI and NNI on ASR9K

Parameter Typhoon

No. of 100G ports per slot 2X100G line rate

SW support XR 4.2.1

No. of 100G ports per slice 1x100G

Bi-directional bandwidth 200 Gbps (100 Gbps per NPU)

Bi-directional PPS 90Mpps/direction

UNI or Edge-facing service termination on 100G Yes

NNI or Core-facing for 100G transport Yes

nV cluster Yes

nV satellite Yes

MACSEC Suite B+ No

MACSEC over Cloud No

100G Pro-active Protection Yes

CPAK Optics No

L2FIB MAC address 2M

L3FIB IPv4/IPv6 address 4M/2M

Bridge domain 64k

We have validated the 100G line card throughput and latency of ASR9k Typhoon line cards in the following two roles and summarized the performance in Table 6-5.

• UNI or edge-facing L2/L3/Multicast VPN services with features

• NNI or core-facing transport with features

The 100G deployment profiles we covered included MPLS, IPv4, and IPv6 in these applications: Internet peering, DCI PE, SP edge PE, Metro Ethernet PE and P, WAN core PE and P router, and general-purpose core P router.

Table 6-5 Typhoon 100G Forwarding Chain Performance

SW Ver  Feature  UNI/Edge or NNI/Core Facing Role  Sub-Feature  Linecard  Linerate Packet Size (bytes)  Min Latency (us)

5.1.0 MPLS NNI/Core mpls_swap A9K-2x100GE-SE 130 15

5.1.0 MPLS NNI/Core mpls_depo A9K-2x100GE-SE 176 14

5.1.0 MPLS NNI/Core mpls_impo A9K-2x100GE-SE 175 14

5.1.0 IPv4 NNI/Core IPv4 10K BGP route A9K-2x100GE-SE 136 14

5.1.0 IPv4 NNI/Core IPv4 500K BGP+uRPF A9K-2x100GE-SE 212 15

5.1.0 IPv4 NNI/Core IPv4 non recursive A9K-2x100GE-SE 114 14

5.1.0 IPv4 NNI/Core IPv4 500K BGP route A9K-2x100GE-SE 160 16

5.1.0 IPv6 NNI/Core IPv6_50K BGP route + QoS A9K-2x100GE-SE 384 18

5.1.0 IPv6 NNI/Core IPv6_nonrcur udp NH A9K-2x100GE-SE 196 14

5.1.0 IPv6 NNI/Core IPv6_50K BGP route A9K-2x100GE-SE 361 17

5.1.0 IPv6 NNI/Core IPv6_10K BGP route + QoS A9K-2x100GE-SE 359 17

5.1.0 IPv6 NNI/Core IPv6_50K BGP route + QoS A9K-2x100GE-SE 384 18

5.1.0 L3VPN NNI/Edge L3VPN_30vrf A9K-2x100GE-SE 232 15

5.1.0 IPv4 ACL UNI/Edge output_acl A9K-2x100GE-SE 140 15

5.1.0 IPv4 ACL NNI/Core input_acl A9K-2x100GE-SE 199 15

5.1.0 IPv4 ACL NNI/Core in+out_acl A9K-2x100GE-SE 333 16

5.1.0 IPv4 QoS NNI/Core in+out_policy A9K-2x100GE-SE 230 16


5.1.0 IPv4 QoS NNI/Core out shaper A9K-2x100GE-SE 168 15

5.1.0 IPv4 QoS NNI/Core inpol+outshap A9K-2x100GE-SE 218 16

5.1.0 IPv4 QoS NNI/Core IPv4 500K BGP route_inpol+outshap A9K-2x100GE-SE 264 16

5.1.0 IPv4 QoS NNI/Core input_policy A9K-2x100GE-SE 223 16

5.1.0 IPv4 QoS NNI/Core output_policy A9K-2x100GE-SE 209 15

5.1.0 L2 UNI/Edge Bridge A9K-2x100GE-SE 129 14

5.1.0 L2 UNI/Edge xconnect A9K-2x100GE-SE 113 13

5.1.0 Multicast UNI/Edge mcast_IPv4 A9K-2x100GE-SE 277 15

5.1.0 Multicast UNI/Edge mcast_IPv6 A9K-2x100GE-SE 516 14

5.1.0 BVI UNI/Edge L2 EFP BVI L3_2K BVI A9K-2x100GE-SE 592 17

5.1.0 mVPN UNI/Edge mVPN 12vrf_100mroute A9K-2x100GE-SE 507 15

5.1.0 L2VPN UNI/Edge VPLS+qos A9K-2x100GE-SE 596 17

5.1.0 L2VPN UNI/Edge VPWS 3ac+3pw A9K-2x100GE-SE 319 15

5.1.0 L2VPN UNI/Edge VPLS_9BD+9ac+27pw A9K-2x100GE-SE 374 16

5.1.0 L2VPN UNI/Edge VPWS_3ac+3pw+inpol+outshap A9K-2x100GE-SE 326 15



Appendix A

Related Documents

The Cisco Enterprise L3 Virtualization Design and Implementation Guide is part of a set of resources that comprise the Cisco EPN System documentation suite. The resources include:

• EPN 3.0 System Concept Guide: Provides general information about Cisco's EPN 3.0 System architecture, its components, service models, and the functional considerations, with specific focus on the benefits it provides to operators.

• EPN 3.0 System Brochure: At-a-glance brochure of the Cisco Evolved Programmable Network (EPN).

• EPN 3.0 MEF Services Design and Implementation Guide: Design and implementation guide with configurations for deploying the Metro Ethernet Forum service transport models and use cases supported by the Cisco EPN System concept.

• EPN 3.0 Transport Infrastructure Design and Implementation Guide: Design and implementation guide with configurations for the transport models and cross-service functional components supported by the Cisco EPN System concept.

• EPN 3.0 Mobile Transport Services Design and Implementation Guide: Design and implementation guide with configurations for deploying the mobile backhaul service transport models and use cases supported by the Cisco EPN System concept.

• EPN 3.0 Residential Services Design and Implementation Guide: Design and implementation guide with configurations for deploying the consumer service models and the unified experience use cases supported by the Cisco EPN System concept.

• EPN 3.0 Enterprise Services Design and Implementation Guide: Design and implementation guide with configurations for deploying the enterprise L3VPN service models over any access and the personalized use cases supported by the Cisco EPN System concept.

Note All of the documents listed above, with the exception of the System Concept Guide and System Brochure, are considered Cisco Confidential documents. Copies of these documents may be obtained under a current Non-Disclosure Agreement with Cisco. Please contact a Cisco Sales account team representative for more information about acquiring copies of these documents.
