Service Provider Lab Guide (Revision 4.7)

VeloCloud Networks Confidential – not to be redistributed


Table of Contents

1. Objectives
2. Lab Topology
3. Lab Setup
   3.1. Physical Lab Setup
   3.2. Accessing the Lab
      3.2.1 Accessing the Orchestrator Portal
      3.2.2 Accessing the Enterprise Portal
      3.2.3 Accessing the Host System
      3.2.4 Accessing a Container Console
      3.2.5 Accessing the Routing Daemon
4. Explore and Verify
   4.1. Verify Edges Are Active
   4.2. Verify Gateway Assignment
   4.3. Confirm That On-Premise Gateways Are Active
   4.4. Confirm Enrollment in a Gateway Pool
   4.5. Confirm Enterprise Configuration
   4.6. Confirm Gateway Assignment per Edge
   4.7. Explore the Assigned Profile
   4.8. Confirm Overlay Flow Control Routes
   4.9. Confirm BGP Session Establishment
   4.10. Inspect the POP PE Routers
   4.11. Inspect the POP Partner Gateway
5. Deconstruct and Rebuild
   5.1. Deactivate the Gateway
   5.2. Delete Gateway from Orchestrator
   5.3. Provision the Gateway
   5.4. Configure BGP in the Customer Context
   5.5. Assign to Edges
   5.6. Verify End-to-End Traffic Paths
   5.7. Fail the Primary Partner Gateway
   5.8. Gateway Selection Variations
   5.9. Gateway Monitoring
6. BGP Path Influencing
   6.1. Influencing Outbound VCG Selection with AS-Path Prepend
   6.2. Influencing Outbound VCG Selection with Local Preference
   6.3. Influencing Inbound Gateway Selection via Communities


7. Orchestrator Post-Installation Steps
   7.1. Upload Edge Software Image
   7.2. Create an Operator Profile
   7.3. System Properties
      7.3.1 Twilio
      7.3.2 MaxMind
   7.4. Add Gateway Pools and Gateways
   7.5. Provision Operator Users
   7.6. Orchestrator Monitoring
   7.7. Orchestrator Disaster Recovery
8. Freeform Exercise


1. Objectives

The lab you are about to start provides insight into essential gateway installation and operation. Once familiar with these topics, you will configure a partner handoff, facilitating an interconnection between SD-WAN services and an MPLS or IPVPN service. Focus is placed on BGP operation and on how learned routes can be influenced to accommodate customized routing scenarios.

2. Lab Topology

The network topology used throughout this lab exercise consists of three Points of Presence (San Francisco – SFO, New York – NYC, and Atlanta – ATL). The SFO and NYC POPs are each equipped with a VeloCloud Cloud Gateway (vcg1) that solely provides access to SaaS applications, as well as a VeloCloud Partner Gateway (vcg2) that additionally provides access to the Service Provider core network and attached resources. To facilitate the interconnection, the Partner Gateways are each connected to a PE (MPLS Provider Edge) router. The PE routers, in turn, interconnect over a private backbone using iBGP. The ATL POP is connected to both the SFO and NYC PE routers but also has a service node directly attached, emulating a hosted value-add service such as a VoIP service.

Partner Gateways differ from Cloud Gateways in their ability to interconnect to private customer VRF resources. A Cloud Gateway can only connect to these resources using standards-based IPsec tunnels, while the Partner Gateway, in addition to IPsec, can also provide this connectivity through a dedicated interface into the PE router hosting the customer VRF.

The common thread through the exercises is that we want to establish connectivity from a client attached to the LAN side of an Edge towards the emulated value-add service node. This node can also be thought of as a legacy customer site CE router, directly and solely attached via an MPLS WAN link.
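Once the lab is running, this end goal can be checked at any time with a quick reachability test from a client container. A minimal sketch, run as root on the host system (the container names and the lab-srv-sp address 192.168.40.1 are introduced in section 3.2.4):

root@pod:~# lxc-attach -n lab-client1 -- ping -c 3 192.168.40.1

Successful replies confirm that the full path (edge, Partner Gateway, PE routers, lab-atl-pe) is operational.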

Besides the POPs, a redundant pair of Orchestrators as well as emulated SaaS services are available. Given that two SaaS services are emulated, policies can be set to demonstrate different network handling behavior.

In addition to the POPs, two branch sites are available, each equipped with a virtual edge and a directly attached client. The edges have two internet WAN links, simulating diversified access. Clients receive a DHCP address from the VeloCloud edge and are able to access a variety of resources:

• Internet-based destinations outside of the lab environment. Depending on business policy, these may first go through an assigned Gateway.
• The VeloCloud Orchestrator, providing control plane services to both edges and gateways.
• Simulated SaaS services, subject to different business policy treatment.
• Remote branch clients, through Edge-to-Edge Cloud VPN (using the gateway).
• Simulated resources downstream of the Partner Gateways. Availability of this resource is advertised through BGP to the MPLS core and to the VeloCloud gateways as well as the edges.

The topology is represented visually as follows:


3. Lab Setup

3.1. Physical Lab Setup

All lab components and networking segments are implemented using LXC containers, providing a lightweight working environment where system changes can easily be enacted without the overhead typically associated with full-sized virtual machines. Direct access to the host system that manages these containers is provided, and you will be able to access the console of each of the containerized functions.

The lab topology has been implemented on a single host system providing sufficient compute and memory resources to host all functions needed in this exercise. The containers are pre-configured, and a working topology with two edges and active BGP PE routers and gateways is immediately available at the start of the class.

3.2. Accessing the Lab

Students will be assigned a pod number at the start of the class, in the form sp-322-<N>.lab.velocloud.org. All resources needed for this exercise can be accessed through variations of this URL.

Accessing the lab requires an HTML5-compatible browser. It is strongly advised to use the Google Chrome browser to complete the exercises and for an optimal experience. No other tools beyond a browser are needed.

The lab is CLI-centric, but access to the Orchestrator Operator portal is also commonly used.

3.2.1 Accessing the Orchestrator Portal

An Orchestrator is installed in a container and its portal is exposed directly via the container host system. An operator account has been provisioned that allows Superuser Operator access to the Orchestrator:


From a browser, navigate to http://sp-322-<N>.lab.velocloud.org/operator and log in with '[email protected]', using 'Welcome2Velocloud' as the password.

When accessing the Orchestrator Operator Portal, the browser may prompt with a security warning. It is safe to ignore this warning and proceed to the Orchestrator portal. The warning appears because no SSL certificate is in place; one is not needed to complete any of the exercises.

Ensure that you see the login for 'Welcome to VeloCloud Operations Console' and not 'Welcome To VeloCloud Network Orchestrator'. If the latter is seen on the login screen, '/operator' was omitted from the URL and you are accessing the Enterprise portal, which is targeted at providing control to end users.

After logging in, the following screen will appear, providing an overview of all customers present on the

Orchestrator:


Only one customer is configured on the Orchestrator; it will be used during all the exercises. Access to the enterprise account is available through Manage Customers | ACME Corporation.

3.2.2 Accessing the Enterprise Portal [Please skip this section for this lab]

Beyond the Operator-level portal, a lower-level Enterprise portal is also available but will not be used during the training session. The Enterprise portal provides enterprise administrator access for a single tenant provisioned on the Orchestrator.

For reference, this portal is available at http://sp-322-<N>.lab.velocloud.org

To access the ACME account, log in with '[email protected]' and use 'Welcome2Velocloud' as the password.

Ensure that the portal shows the 'Welcome To VeloCloud Network Orchestrator' message. If not, you may still be connected to the Operator portal and need to log out from it first before connecting to the Enterprise portal.


After logging in, you will be directed immediately to the Enterprise portal associated with the account used:

3.2.3 Accessing the Host System

The container (LXC) host system is available through an in-browser web console at

https://sp-322-<N>.lab.velocloud.org:4200

Note that it is important to connect to this URL using HTTPS. Connecting with unprotected HTTP will not provide access to the web console.

The login for the instance is 'training' and the password is 'Welcome2Velocloud'.


Going forward in this document, this system will be referred to as the 'host system'; all commands outlined in the training are executed on it. However, commands do not run directly on the host system itself but in a console of a container that is accessed via the host system.

It is advisable to keep multiple browser tabs open so that multiple sessions can be used simultaneously. This avoids repeatedly attaching to and detaching from containers.

Note that with some browser and keyboard combinations, the '-' sign does not work in this console. If this happens, use the '-' key on the numeric row at the top of the keyboard rather than the keypad on the right. If this still fails, an on-screen keyboard can be enabled by right-clicking in the web console and selecting 'Onscreen Keyboard'. A keyboard icon will appear in the top-right corner that, when clicked, presents the on-screen keyboard.

3.2.4 Accessing a Container Console

To access a container console to control a function in the topology, you must first be logged on to the host

system web console.


From the host system, you can list the currently active containers using the 'lxc-ls -f' command. Note that this command must be executed as root. If for any reason the host system account in use is not root, use 'sudo -i' to become root.

root@pod:~# lxc-ls -f

NAME STATE IPV4 IPV6 AUTOSTART

-------------------------------------------------------------------------------------------

lab-atl-pe RUNNING 192.168.31.1, 192.168.32.1, 192.168.40.254 - NO

lab-client1 RUNNING 10.0.0.201 - NO

lab-client2 RUNNING 10.128.0.217 - NO

lab-core RUNNING 104.8.8.1, 184.1.1.1, 192.168.1.1, 208.6.1.1, … - NO

lab-edge1 RUNNING 10.0.0.1, 10.0.0.2, 169.254.129.2, 208.6.1.31, … - NO

lab-edge2 RUNNING 10.128.0.1, 10.128.0.2, 104.8.8.31, 169.254.129.3, … - NO

lab-nyc-pe RUNNING 192.168.22.1, 192.168.32.254, 192.168.33.12 - NO

lab-nyc-vcg1 RUNNING 169.254.129.1, 24.12.0.10 - NO

lab-nyc-vcg2 RUNNING 169.254.129.1, 192.168.22.254, 24.12.0.20 - NO

lab-saas1 RUNNING 24.17.0.21 - NO

lab-saas2 RUNNING 24.17.0.22 - NO

lab-sfo-pe RUNNING 192.168.21.1, 192.168.31.254, 192.168.33.11 - NO

lab-sfo-vcg1 RUNNING 169.254.129.1, 24.11.0.10 - NO

lab-sfo-vcg2 RUNNING 169.254.129.1, 192.168.21.254, 24.11.0.20 - NO

lab-srv-sp RUNNING 192.168.40.1 - NO

lab-vco1 RUNNING 24.17.0.11 - NO

lab-vco2 RUNNING 24.17.0.12 - NO

You can also list only the running containers:

root@pod:~# lxc-ls --running
lab-atl-pe lab-client2 lab-edge1 lab-nyc-pe lab-nyc-vcg2 lab-saas2 lab-sfo-vcg1 lab-srv-sp lab-vco2
lab-client1 lab-core lab-edge2 lab-nyc-vcg1 lab-saas1 lab-sfo-pe lab-sfo-vcg2 lab-vco1

Note that in the container environment, the hostname of each component in the topology is prepended with 'lab-'. The full container name as listed is needed to access a console using the 'lxc-attach' command:

root@pod:~# lxc-attach -n lab-sfo-vcg1
root@lab-sfo-vcg1:~# /opt/vc/sbin/gwd -v

VCG Info

========

Version: 3.2.2

Build rev: R322-20190308-GA-VCG

Build Date: 2019-03-08_23-54-46

The example above logs on to an interactive console of lab-sfo-vcg1 and lists the version of the VeloCloud Gateway daemon. Typing 'exit' disconnects the console from the container and returns to the host system.

Since console access is provided to the containers, no username or password is needed to access a container function. Accessing the console directly provides root access to the container.

As a shortcut, commands like the examples above can also be executed as a single command from the host system. After execution, you return to the host system prompt:

root@pod:~# lxc-attach -n lab-sfo-vcg1 -- /opt/vc/sbin/gwd -v

VCG Info

========

Version: 3.2.2

Build rev: R322-20190308-GA-VCG

Build Date: 2019-03-08_23-54-46

root@pod:~#


3.2.5 Accessing the Routing Daemon

Both the gateways and every router in the lab environment (lab-<pop>-vcg<n>, lab-<pop>-pe, lab-srv-sp) have Quagga installed. This is an open-source routing daemon that provides Cisco-style CLI access to control and monitor the routing protocols. During the lab, only BGP will be utilized.

A modified version of Quagga is also used in the VeloCloud gateways to provide BGP routing functionality. Throughout the lab, we will use Quagga as a lightweight router.

'vtysh' is the command that provides access to the Quagga CLI; it can be executed from the container console:

root@pod:~# lxc-attach -n lab-atl-pe

root@lab-atl-pe:~# vtysh

Hello, this is Quagga (version 0.99.24.1).

Copyright 1996-2005 Kunihiro Ishiguro, et al.

lab-atl-pe# show running-config

Building configuration...

Current configuration:

!

interface eth0

ipv6 nd suppress-ra

no link-detect

!

interface eth1

ipv6 nd suppress-ra

no link-detect

!

interface eth2

ipv6 nd suppress-ra

no link-detect

!

router bgp 150

bgp router-id 192.168.40.254

bgp log-neighbor-changes

neighbor 192.168.31.254 remote-as 150

neighbor 192.168.31.254 next-hop-self

neighbor 192.168.32.254 remote-as 150

neighbor 192.168.32.254 next-hop-self

neighbor 192.168.40.1 remote-as 250

!

ip route 192.168.21.254/32 192.168.31.254

ip route 192.168.22.254/32 192.168.32.254

!

end

lab-atl-pe# exit

root@lab-atl-pe:~# exit

root@pod:~#

A shortcut is available to execute commands inside the Quagga daemon directly from the container console, without entering the daemon's interactive CLI:

root@pod:~# lxc-attach -n lab-atl-pe

root@lab-atl-pe:~# vtysh -c "show ip bgp summary"

BGP router identifier 192.168.40.254, local AS number 150

RIB entries 11, using 1232 bytes of memory

Peers 3, using 13 KiB of memory

Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd


192.168.31.254 4 150 1291 1291 0 0 0 21:26:30 5

192.168.32.254 4 150 1291 1291 0 0 0 21:26:25 5

192.168.40.1 4 250 1290 1293 0 0 0 21:26:27 1

root@lab-atl-pe:~# exit

root@pod:~#

The two shortcuts can furthermore be combined to ease navigation and command execution between the different containers:

root@pod:~# lxc-attach -n lab-atl-pe -- vtysh -c 'show ip bgp summary'

BGP router identifier 192.168.40.254, local AS number 150

RIB entries 11, using 1232 bytes of memory

Peers 3, using 13 KiB of memory

Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd

192.168.31.254 4 150 48 47 0 0 0 00:42:01 5

192.168.32.254 4 150 48 47 0 0 0 00:42:04 5

192.168.40.1 4 250 46 52 0 0 0 00:42:01 1
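When later exercises ask you to compare state across several routers, the combined shortcut lends itself to simple shell iteration. A minimal sketch, using the container names from the earlier lxc-ls listing:

root@pod:~# for c in lab-sfo-pe lab-nyc-pe lab-atl-pe; do
>   echo "=== $c ==="
>   lxc-attach -n "$c" -- vtysh -c 'show ip bgp summary'
> done

This prints the BGP summary of each POP PE router in a single pass, making it easy to spot a session that has gone down.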

4. Explore and Verify

What You Will Learn:

▪ How the lab topology operates and how all of the components interact with each other

▪ How the gateway modes differ (Cloud Gateway vs. Partner Gateway)

▪ How eBGP is established between the Partner Gateway (VCG) and the attached PE router

▪ How iBGP is established between the PE routers

▪ How edges are assigned to Partner Gateways

▪ How traffic flows through all participating elements

4.1. Verify edges are active

Log into the Orchestrator as an operator (http://sp-322-<N>.lab.velocloud.org/operator), go into the ACME

Corporation account and select Monitor | Edges from the navigation pane on the left side of the screen.

Both edges are active, each with two internet links attached. The links attached to the edges are unrestricted and measure the interface speed against the gateway. No network impairments will be added during the course of this lab exercise. Explore the attached links on the edges.

4.2. Verify gateway assignment

We will now look at how the two already activated edges in the topology are assigned to partner gateways and


gradually look at all the attached components to get a clear understanding of what needs to be configured and how the components interact.

Click on the 'View' button to expose the gateways associated with the edge. Note that this function can only be exercised when logged in through the Operator portal. As an enterprise user, this insight is not available: the concept of a gateway is abstracted from enterprise users and is only relevant to operators of the service.

The edge has both a primary and a secondary (Cloud) Gateway to access SaaS services. This traffic will be NAT'd out of the public interface of the gateway, and these gateways provide protection for business-critical SaaS services, as determined by policy.

Alternatively, you can assign a primary and a secondary On-Premise Gateway to provide reliable access either to applications hosted by the owner of the On-Premise gateways or towards a private core network (e.g. MPLS) to which other enterprise legacy sites using CE routers may be attached.

The gateway and on-premise gateway allocations are logical functions and can be overloaded on the same

physical (or virtual) gateway instance.

Let's step backwards through how these assignments were achieved.

4.3. Confirm that On Premise gateways are active

In SuperUser Operator | Gateways, confirm that the gateways are operational:


From this dialog we see four active gateways; two of them are configured as Partner (On-Premise) Gateways, and the remaining two are configured as Cloud Gateways. In the next exercises, we'll focus our attention on the Partner Gateways. The differences between them are as follows:

• Cloud Gateways operate with a single interface and ingest overlay traffic, i.e. business-critical traffic from edges. Traffic is decapsulated from the overlay and NAT is performed before sending the traffic to the internet over the same interface.

• Partner Gateways (also called On-Premise Gateways) operate with dual interfaces. A VCMP (VeloCloud Multi-Path protocol) interface faces the edges, and a WAN interface faces a PE router in the service provider network. With this installation, traffic can still be NAT'd to the internet, as is the case with a NAT Gateway. In addition, BGP peering with a PE router can provide access to hosted services attached to the core, or to legacy enterprise sites not currently using a VeloCloud Edge.

Edges are manually associated with each of the gateways and their respective functions to facilitate predictable VCMP termination. Click on the 'View' text for Edges (SaaS traffic) and Partner Gateway:

The serial numbers in the above dialog show up as MAC addresses, which is a result of the use of virtual edges. Physical devices will list the actual serial number as printed on the back of the device as well as on the shipping packaging. The logical ID listed represents the current function assigned to the hardware or virtual edge. In the event of an RMA, the branch site would retain its logical ID and assign it to the serial number of the replacement hardware (or software) edge. All configuration associated with the RMA'd edge will be reloaded into the newly deployed device.

4.4. Confirm enrollment in a gateway pool

Now confirm that the gateways are added to the gateway pool that is assigned to an enterprise account. This allows edges provisioned in the account to utilize the gateways in the pool.

The NAT gateways are associated with the Default pool, and the Partner Gateways are associated with the Partner Gateway Pool.

Go to SuperUser Operator | Gateway Pool and select the Partner Gateway Pool:


The gateway pool is configured to contain only Partner (On-Premise) Gateways, but it can also be set up to allow Cloud Gateways, so that a hybrid set of functions can be served by distinct sets of gateways.

After closing the dialog, click on 'lab-sfo-vcg2' to explore in more detail how the participating gateway is set up:


It will show that a default route is set up, allowing any traffic directed to the gateway to be NAT'd back out towards the internet, provided that no more specific BGP routes are advertised into the gateway that would direct flows towards the attached PE router.

No ICMP probe or responder is enabled in this exercise, as that is only relevant in cases where the PE router cannot exchange state over BGP and must rely on static routing. The ICMP responder can track state to the next hop and facilitate a failover in case of failure.

Note that you can select which roles a partner gateway can serve to the branches.

• The Partner Gateway function is enabled to allow this gateway to use a partner handoff towards the

PE router. The associated BGP session will be configured on a per-customer basis.

• The Secure VPN Gateway role is enabled to allow non-VeloCloud sites to connect to the gateway with

the use of a standards-based IPsec tunnel.

Control and data plane functions cannot be disabled, as these are essential to edge operation when edges use the gateway to protect business-critical traffic flows.

4.5. Confirm Enterprise configuration

The BGP configuration of the Partner Gateway is done on a per-tenant basis and as such can be found in the enterprise configuration section. The gateway does not participate in MP-BGP but rather enables multi-instance BGP, which is effectively a separate BGP process per participating tenant. The BGP configuration on the Partner Gateway is driven by the Orchestrator, and distribution of routes towards edges is done by the VeloCloud management protocol through the Orchestrator.

After selecting the ACME Corporation enterprise again, go to Configuration | Customer and verify that BGP is enabled as an enterprise-wide feature. This dialog is only available to operators on the Orchestrator. In the same dialog, an enterprise-wide security policy can also be configured; it sets the cipher strength and protocols used when edges securely exchange traffic with the gateway or with other edges.

Further down the page, a gateway pool containing the partner gateways is assigned and thereby associated with the enterprise account, so edges can make use of the listed gateways. Note that the gateway pool must contain active gateways before it can be assigned to an enterprise.

If partner gateway functionality is required for the enterprise, Partner HandOff must be enabled, after which an additional dialog appears allowing BGP configuration of the handoff.


Each of the partner gateways can be configured with the relevant enterprise BGP settings to allow a peering session between the Partner Gateway and an attached PE router. Optionally, all gateways can be configured in an identical fashion, so that each gateway has the same peering session towards its attached PE router.

In this example, we will individually configure each of the gateways in the pool with customer-specific BGP settings.

Explore more in detail how BGP is set up by clicking on the Edit button for lab-sfo-vcg2:


Here you can see that the Partner Gateway has been set up with IP 192.168.21.254/24 as the router IP. This IP address resides inside the customer context, or VRF. The customer context uses private ASN 65111 for the purpose of peering with the PE router.

The attached PE router (lab-sfo-pe) is in ASN 150 and has BGP active on IP address 192.168.21.1. Inbound and outbound BGP filters will be addressed later in the exercise.

Note that the 'Secure BGP Routes' checkbox is checked, which enables encryption for all traffic from the Edge towards the Partner Gateway when the destination IP matches a route that was advertised via BGP to the Partner Gateway. If the Partner Gateway is fronting publicly available resources, this can be disabled. A similar checkbox is also present in the Static Routes section, where this behavior is set on an individual prefix basis.

An important selection is the Tag Type, which is the encapsulation in which the VCG hands off customer tenant traffic to the PE router. This can be:

• None, i.e. untagged. This is only useful in the event of a single-tenant handoff or a handoff towards a shared-services VRF, as is the case in the lab topology.
• 802.1q, i.e. a traditional single VLAN tag.
• 802.1ad / QinQ, i.e. double VLAN tagging.

Traditionally, traffic will be segregated, and the use of 802.1q or 802.1ad is strongly recommended. In this particular setup we have selected 'none', which means that all customer traffic merges on the PE router and there is effectively no customer segregation. Multiple customers can be configured on the partner handoff but must each operate on unique address space.

If explicit tagging is selected, the configuration changes are limited to the adjacent PE router. The VCG will automatically tag the traffic according to the configured settings, without any modification (at the OS level) of the VM that the device resides on.
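For illustration, the matching change on a Linux-based PE such as the Quagga containers in this lab would be an 802.1q subinterface created with iproute2. This is a hedged sketch only: VLAN ID 100 and the 192.168.121.1/24 address are hypothetical values, not part of the lab configuration:

root@lab-sfo-pe:~# ip link add link eth0 name eth0.100 type vlan id 100   # hypothetical per-customer VLAN
root@lab-sfo-pe:~# ip addr add 192.168.121.1/24 dev eth0.100              # hypothetical handoff subnet
root@lab-sfo-pe:~# ip link set eth0.100 up

Quagga would then pick the new interface up from the kernel, and the per-customer BGP session would be configured against the subinterface address.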

Optionally inspect the configuration on lab-nyc-vcg2.

4.6. Confirm gateway assignment per edge

Confirm that the edges are assigned gateways out of the configured pool of the enterprise. This assignment

can be found on the Configure | Edges | lab-edge1 | Device tab. Find the Partner Gateway Assignment

section.

When working with larger deployments, it is advised to employ manual load balancing, where the primary and secondary gateways are reversed for blocks of edges. This policy can also be enforced easily through API calls and ensures a static assignment that allows for easier capacity planning.

The default behavior is that the Gateway 1 selection is used as the primary partner gateway for the edge. Gateway 2 is used only in the event of a failure of the primary gateway, but it remains active: if traffic for this edge flows through Gateway 2, it will still be delivered to the branch.

4.7. Explore the assigned profile

Explore the profile that is currently assigned to the edge by going to Configure | Edges:

Click on the ‘Enterprise Profile’ hyperlink to access the assigned profile. The profile is assigned at the time an

edge is provisioned by either an enterprise admin or an operator.

Once in the profile configuration, select the ‘Device’ tab and confirm that Cloud VPN is enabled.

This allows branch-to-branch communication through the Partner Gateway while still retaining the ability to steer traffic towards the partner handoff to protect traffic to the DC resource (e.g. lab-srv-sp).


Dynamic Branch-to-Branch communication is disabled to force the use of the VeloCloud Gateway for all inter-branch traffic. If it were enabled, edges would form a tunnel directly, without sending traffic through the gateways.

Select the 'Business Policy' tab next. Here we find that two additional rules have been added (starting at line 3) to the default set of rules that is pre-configured in each Enterprise account at the time of creation. A quick traffic test matching these rules is sketched after the list.

• Traffic destined for the IP address of SaaS1 (24.17.0.21) is classified as Business Critical, marked as High Priority, and uses DMPO protections through the VeloCloud Gateway to ensure optimal delivery of the application.
• Traffic destined for the IP address of SaaS2 (24.17.0.22) is classified as Non-Business Critical, marked as Low Priority, and instructed to be sent directly out over an attached internet link to its final destination.
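To generate traffic matching each rule, a simple reachability test from a client container suffices. A minimal sketch, assuming the lab is in its default state (the SaaS addresses are taken from the rules above):

root@pod:~# lxc-attach -n lab-client1 -- ping -c 2 24.17.0.21
root@pod:~# lxc-attach -n lab-client1 -- ping -c 2 24.17.0.22

The first flow should traverse the assigned gateway under DMPO protection, while the second should egress directly over an attached internet link; both can be cross-checked in the Orchestrator monitoring views.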

4.8. Confirm Overlay Flow Control Routes

BGP sessions between the partner gateways and their respective PE routers, as well as to downstream resources, are already established. We can confirm what the Orchestrator observes by navigating to Configure | Overlay Flow Control.

Overlay Flow Control (OFC) is an enterprise-wide routing table that outlines which prefixes are available and through which egress node (gateway, edge, hub) they can be reached. It abstracts all the routing complexity of the underlying transport: you can consider the SD-WAN solution one large router in which the branches are ports, and OFC is the routing table of this analogous router.

In this dialog, you can optionally influence how the Orchestrator advertises learned and attached routes to external components.

The OFC table lists the subnets of both edges as well as a redundant set of routes for the subnet of lab-srv-sp (192.168.40.0/24). Click on the 'adjacencies' and 'metrics' hyperlinks and more details become available, identifying the neighbor and the attributes seen on the specific learned route.


In this example, we see the lab-srv-sp subnet (192.168.40.0/24) being made available through eBGP via both lab-sfo-vcg2 and lab-nyc-vcg2. The AS-path length as well as the Local Preference of the route is listed, showing that no specific preference is present at this point to favor one partner gateway over the other to reach the prefix.

4.9. Confirm BGP session establishment

Through the Orchestrator portal, we can easily confirm whether the BGP sessions are established by going to Monitoring | Network Services. Find the BGP Neighbor State section:

Further event detail can be obtained by clicking on the ‘view’ hyperlink:

Let's now turn our attention to the consoles of the participating components and confirm how peering is configured and what routes are exchanged. As a reminder, first go to Configure | Customer and edit the lab-nyc-vcg2 Partner HandOff configuration.

Connect to the web console at https://sp-322-<N>.lab.velocloud.org:4200 and access the Quagga CLI of the lab-srv-sp device, which is set up to emulate a CE router and peers over eBGP with lab-atl-pe.

root@pod:~# lxc-attach -n lab-srv-sp

root@lab-srv-sp:~# vtysh

Hello, this is Quagga (version 0.99.22.4).

Copyright 1996-2005 Kunihiro Ishiguro, et al.

lab-srv-sp# sh run

Building configuration...

!

interface eth0

ipv6 nd suppress-ra

!

interface lo

!


router bgp 250

bgp router-id 192.168.40.1

network 192.168.40.0/24

neighbor 192.168.40.254 remote-as 150

neighbor 192.168.40.254 next-hop-self

!

end

Note that the interfaces in this listing are not configured. The Quagga (BGP routing) daemon picks these up from the kernel, as demonstrated with the following command that shows the connected subnets:

lab-srv-sp# sh ip bgp scan

BGP scan is running

BGP scan interval is 60

Current BGP nexthop cache:

BGP connected route:

192.168.40.0/24

Optionally, the interfaces at the OS level can be inspected. You will need to exit the vtysh prompt first:

root@lab-srv-sp:~# ifconfig -a

eth0 Link encap:Ethernet HWaddr 00:ba:be:27:3a:57

inet addr:192.168.40.1 Bcast:192.168.40.255 Mask:255.255.255.0

inet6 addr: fe80::2ba:beff:fe27:3a57/64 Scope:Link

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

RX packets:3809 errors:0 dropped:0 overruns:0 frame:0

TX packets:3686 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:1000

RX bytes:233044 (233.0 KB) TX bytes:224340 (224.3 KB)

lo Link encap:Local Loopback

inet addr:127.0.0.1 Mask:255.0.0.0

inet6 addr: ::1/128 Scope:Host

UP LOOPBACK RUNNING MTU:65536 Metric:1

RX packets:8 errors:0 dropped:0 overruns:0 frame:0

TX packets:8 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:0

RX bytes:560 (560.0 B) TX bytes:560 (560.0 B)

Let's also inspect the routing table that is installed at the OS level:

root@lab-srv-sp:~# ip route

default via 192.168.40.254 dev eth0 onlink

10.0.0.0/24 via 192.168.40.254 dev eth0 proto zebra

10.0.0.2 via 192.168.40.254 dev eth0 proto zebra

10.128.0.0/24 via 192.168.40.254 dev eth0 proto zebra

10.128.0.2 via 192.168.40.254 dev eth0 proto zebra

192.168.21.0/24 via 192.168.40.254 dev eth0 proto zebra

192.168.22.0/24 via 192.168.40.254 dev eth0 proto zebra

192.168.40.0/24 dev eth0 proto kernel scope link src 192.168.40.1

Any routes marked with 'proto zebra' originate from the Quagga routing daemon and have been injected into the OS routing table. We can now see that the router has learned multiple subnets:

• The 10.0.0.0/24 and 10.128.0.0/24 networks, which are advertised through the Partner Gateway and provide reachability back to the branches.
• The upstream segments 192.168.21.0/24 and 192.168.22.0/24, which interconnect the PE routers and the Partner Gateways.

From the previously listed configuration we can see that the attached network 192.168.40.0/24 is advertised to adjacent routers.
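To check which of these routes the kernel actually selects for a specific destination, iproute2's 'route get' can be used. A small sketch; 10.0.0.201 is lab-client1's address from the container listing in section 3.2.4:

root@lab-srv-sp:~# ip route get 10.0.0.201

The output names the chosen next hop (192.168.40.254) and the egress interface (eth0) for that destination.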

The routing table can also be queried from within the Quagga CLI (vtysh), which provides additional detail as to which protocol contributed each route to the RIB.


lab-srv-sp# show ip route

Codes: K - kernel route, C - connected, S - static, R - RIP,

O - OSPF, I - IS-IS, B - BGP, A - Babel,

> - selected route, * - FIB route

K>* 0.0.0.0/0 via 192.168.40.254, eth0

B>* 10.0.0.0/24 [20/0] via 192.168.40.254, eth0, 20:38:06

B>* 10.0.0.2/32 [20/0] via 192.168.40.254, eth0, 20:38:06

B>* 10.128.0.0/24 [20/0] via 192.168.40.254, eth0, 20:38:06

B>* 10.128.0.2/32 [20/0] via 192.168.40.254, eth0, 20:38:06

C>* 127.0.0.0/8 is directly connected, lo

B>* 192.168.21.0/24 [20/0] via 192.168.40.254, eth0, 20:38:06

B>* 192.168.22.0/24 [20/0] via 192.168.40.254, eth0, 20:37:36

C>* 192.168.40.0/24 is directly connected, eth0

Confirm that the router is actively peering over BGP with the Atlanta PE router (lab-atl-pe):

lab-srv-sp# show ip bgp summary

BGP router identifier 192.168.40.1, local AS number 250

RIB entries 11, using 1232 bytes of memory

Peers 1, using 4560 bytes of memory

Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd

192.168.40.254 4 150 1248 1244 0 0 0 20:40:02 6

Next, let's inspect the routes that we have learned from the attached PE router:

lab-srv-sp# sh ip bgp

BGP table version is 0, local router ID is 192.168.40.1

Status codes: s suppressed, d damped, h history, * valid, > best, = multipath,

i internal, r RIB-failure, S Stale, R Removed

Origin codes: i - IGP, e - EGP, ? - incomplete

Network Next Hop Metric LocPrf Weight Path

*> 10.0.0.0/24 192.168.40.254 0 150 65111 ?

*> 10.0.0.2/32 192.168.40.254 0 150 65111 ?

*> 10.128.0.0/24 192.168.40.254 0 150 65111 ?

*> 10.128.0.2/32 192.168.40.254 0 150 65111 ?

*> 192.168.21.0 192.168.40.254 0 150 65111 i

*> 192.168.22.0 192.168.40.254 0 150 65222 i

*> 192.168.40.0 0.0.0.0 0 32768 i

All of the routes marked with '>', indicating a 'best' route, are propagated to the OS kernel RIB (routing table) and become effective for forwarding decisions.

One interesting observation in this table is that the prefixes for both edges originate from ASN 65111, which means that both edges will use the SFO POP to receive traffic from the lab-srv-sp router. This is a direct result of the manual assignment of the Partner Gateways to the edges, as inspected earlier in the exercise. Both edges were set with lab-sfo-vcg2 as their primary partner gateway, and this is reflected here in the routing table.

In the lab topology, the edges are reachable via different ASNs, one per Partner Gateway. They could instead be placed in the same ASN, in which case the PE routers would need to engage allowas-in or as-override functions to ensure routes can be imported without BGP rejecting prefixes that contain the Partner Gateway ASN in the AS path of the received prefix. An illustrative sketch of the allowas-in syntax follows.
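For illustration only (this is not part of the lab's working configuration), the Quagga syntax for accepting routes whose AS path contains the receiving router's own ASN is shown below, applied hypothetically to lab-srv-sp. The alternative, as-override, rewrites the customer ASN on the advertising PE but is not available in all Quagga builds:

lab-srv-sp(config)# router bgp 250
lab-srv-sp(config-router)# neighbor 192.168.40.254 allowas-in 1
lab-srv-sp(config-router)# ^Z
lab-srv-sp# write memory

The '1' permits at most one occurrence of the local ASN (here 250) in received AS paths before the route is rejected as a loop.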

For an even more detailed view of which routes are advertised and received, we can issue the following commands:

lab-srv-sp# show ip bgp neighbors 192.168.40.254 advertised-routes

BGP table version is 0, local router ID is 192.168.40.1

Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,


r RIB-failure, S Stale, R Removed

Origin codes: i - IGP, e - EGP, ? - incomplete

Network Next Hop Metric LocPrf Weight Path

*> 192.168.40.0 192.168.40.1 0 32768 i

lab-srv-sp# sh ip bgp neighbors 192.168.40.254 received-routes

BGP table version is 0, local router ID is 192.168.40.1

Status codes: s suppressed, d damped, h history, * valid, > best, = multipath,

i internal, r RIB-failure, S Stale, R Removed

Origin codes: i - IGP, e - EGP, ? - incomplete

Network Next Hop Metric LocPrf Weight Path

*> 10.0.0.0/24 192.168.40.254 0 150 65111 ?

*> 10.0.0.2/32 192.168.40.254 0 150 65111 ?

*> 10.128.0.0/24 192.168.40.254 0 150 65111 ?

*> 10.128.0.2/32 192.168.40.254 0 150 65111 ?

*> 192.168.21.0 192.168.40.254 0 150 65111 i

*> 192.168.22.0 192.168.40.254 0 150 65222 i

When running the above command to show the received routes, you may instead get the following error:

lab-srv-sp# sh ip bgp neighbors 192.168.40.254 received-routes

% Inbound soft reconfiguration not enabled

To show received routes, inbound soft reconfiguration needs to be enabled for the neighbor. This can be done by adding the following configuration line on the lab-srv-sp router:

lab-srv-sp(config)# router bgp 250

lab-srv-sp(config-router)# neighbor 192.168.40.254 soft-reconfiguration inbound

lab-srv-sp(config-router)# ^Z

lab-srv-sp# write memory
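With inbound soft reconfiguration enabled, re-running the query should return the received routes shown earlier:

lab-srv-sp# sh ip bgp neighbors 192.168.40.254 received-routes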

Next, let's work our way upstream and move our focus to the lab-atl-pe router, confirming that it has established its peering sessions:

root@pod:~# lxc-attach -n lab-atl-pe -- vtysh -c "show ip bgp summary"

BGP router identifier 192.168.40.254, local AS number 150

RIB entries 11, using 1232 bytes of memory

Peers 3, using 13 KiB of memory

Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd

192.168.31.254 4 150 1441 1438 0 0 0 23:53:25 5

192.168.32.254 4 150 1441 1438 0 0 0 23:53:20 5

192.168.40.1 4 250 1438 1446 0 0 0 23:53:22 1

Three peering sessions are active: with lab-srv-sp as well as with the two upstream PE routers in each of the POPs. Also look at the configuration of the router to see how the peering is set up, and map the configuration onto the network topology.

root@pod:~# lxc-attach -n lab-atl-pe -- vtysh -c "show running"

Building configuration...

!

router bgp 150

bgp router-id 192.168.40.254

bgp log-neighbor-changes

neighbor 192.168.31.254 remote-as 150

neighbor 192.168.31.254 next-hop-self

neighbor 192.168.32.254 remote-as 150

neighbor 192.168.32.254 next-hop-self

neighbor 192.168.40.1 remote-as 250

!

ip route 192.168.21.254/32 192.168.31.254

ip route 192.168.22.254/32 192.168.32.254

!


end

Next, let's look at the BGP table as well as the routing table (RIB):

root@pod:~# lxc-attach -n lab-atl-pe -- vtysh -c "show ip bgp"

BGP table version is 0, local router ID is 192.168.40.254

Status codes: s suppressed, d damped, h history, * valid, > best, = multipath,

i internal, r RIB-failure, S Stale, R Removed

Origin codes: i - IGP, e - EGP, ? - incomplete

Network Next Hop Metric LocPrf Weight Path

*>i10.0.0.0/24 192.168.31.254 1 100 0 65111 ?

* i 192.168.32.254 2 100 0 65222 ?

*>i10.0.0.2/32 192.168.31.254 1 100 0 65111 ?

* i 192.168.32.254 2 100 0 65222 ?

*>i10.128.0.0/24 192.168.31.254 1 100 0 65111 ?

* i 192.168.32.254 2 100 0 65222 ?

*>i10.128.0.2/32 192.168.31.254 1 100 0 65111 ?

* i 192.168.32.254 2 100 0 65222 ?

*>i192.168.21.0 192.168.31.254 0 100 0 65111 i

*>i192.168.22.0 192.168.32.254 0 100 0 65222 i

*> 192.168.40.0 192.168.40.1 0 0 250 i

root@pod:~# lxc-attach -n lab-atl-pe -- vtysh -c "show ip route"

Codes: K - kernel route, C - connected, S - static, R - RIP,

O - OSPF, I - IS-IS, B - BGP, P - PIM, A - Babel,

> - selected route, * - FIB route

B>* 10.0.0.0/24 [200/1] via 192.168.31.254, eth0, 02:20:42

B>* 10.0.0.2/32 [200/1] via 192.168.31.254, eth0, 02:20:42

B>* 10.128.0.0/24 [200/1] via 192.168.31.254, eth0, 02:20:42

B>* 10.128.0.2/32 [200/1] via 192.168.31.254, eth0, 02:20:42

C>* 127.0.0.0/8 is directly connected, lo

B>* 192.168.21.0/24 [200/0] via 192.168.31.254, eth0, 23:54:16

S>* 192.168.21.254/32 [1/0] via 192.168.31.254, eth0

B>* 192.168.22.0/24 [200/0] via 192.168.32.254, eth1, 23:54:16

S>* 192.168.22.254/32 [1/0] via 192.168.32.254, eth1

C>* 192.168.31.0/24 is directly connected, eth0

C>* 192.168.32.0/24 is directly connected, eth1

B 192.168.40.0/24 [20/0] via 192.168.40.1 inactive, 23:55:54

C>* 192.168.40.0/24 is directly connected, eth2

All edge routes are currently using the SFO POP to egress traffic. Confirm the next hops in the routing tables to get a better sense of where traffic will be sent; a path trace as sketched below can help confirm this.
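Assuming traceroute is installed in the containers (not verified in this guide), the forwarding path can also be confirmed hop by hop from the emulated CE; 10.0.0.201 is lab-client1's address:

root@pod:~# lxc-attach -n lab-srv-sp -- traceroute -n 10.0.0.201

The first hops should be 192.168.40.254 (lab-atl-pe) and 192.168.31.254 (lab-sfo-pe), confirming the SFO egress path; whether hops beyond the Partner Gateway answer depends on the overlay.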

4.10. Inspect the POP PE routers

Let's continue upstream and find where the 10.0.0.0/24 and 10.128.0.0/24 networks are advertised from, as these belong to the branches. We will also pay close attention to how 192.168.40.0/24, belonging to the simulated CE (lab-srv-sp), propagates through the routers.

First, let us inspect the state of the SFO POP PE router:

root@pod:~# lxc-attach -n lab-sfo-pe -- vtysh -c "show ip bgp summary"

BGP router identifier 192.168.33.11, local AS number 150

RIB entries 11, using 1232 bytes of memory

Peers 3, using 13 KiB of memory

Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd

192.168.21.254 4 65111 1443 1442 0 0 0 23:56:19 5

192.168.31.1 4 150 1440 1446 0 0 0 23:57:58 1

192.168.33.12 4 150 1446 1448 0 0 0 23:59:19 5

root@pod:~# lxc-attach -n lab-sfo-pe -- vtysh -c "show ip bgp"


BGP table version is 0, local router ID is 192.168.33.11

Status codes: s suppressed, d damped, h history, * valid, > best, = multipath,

i internal, r RIB-failure, S Stale, R Removed

Origin codes: i - IGP, e - EGP, ? - incomplete

Network Next Hop Metric LocPrf Weight Path

* i10.0.0.0/24 192.168.33.12 2 100 0 65222 ?

*> 192.168.21.254 1 0 65111 ?

* i10.0.0.2/32 192.168.33.12 2 100 0 65222 ?

*> 192.168.21.254 1 0 65111 ?

* i10.128.0.0/24 192.168.33.12 2 100 0 65222 ?

*> 192.168.21.254 1 0 65111 ?

* i10.128.0.2/32 192.168.33.12 2 100 0 65222 ?

*> 192.168.21.254 1 0 65111 ?

*> 192.168.21.0 192.168.21.254 0 0 65111 i

*>i192.168.22.0 192.168.33.12 0 100 0 65222 i

*>i192.168.40.0 192.168.31.1 0 100 0 250 i

The 10.x.0.0/24 networks are still shown as advertised from the upstream Partner Gateway, originating from ASN 65111. Note that there are also secondary routes for each prefix, pointing to lab-nyc-pe over the directly connected link; these routes originate from ASN 65222, i.e. through the lab-nyc-vcg2 Partner Gateway.

The 192.168.40.0/24 prefix points downstream to lab-atl-pe as expected and is correctly shown as originated from ASN 250. Both routes to the edges are ingested via eBGP.

Contrast this with the NYC POP PE router:

root@pod:~# lxc-attach -n lab-nyc-pe -- vtysh -c "show ip bgp summary"

BGP router identifier 192.168.33.12, local AS number 150

RIB entries 11, using 1232 bytes of memory

Peers 3, using 13 KiB of memory

Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd

192.168.22.254 4 65222 1448 1447 0 0 0 1d00h01m 5

192.168.32.1 4 150 1445 1451 0 0 0 1d00h02m 1

192.168.33.11 4 150 1452 1453 0 0 0 1d00h04m 5

root@pod:~# lxc-attach -n lab-nyc-pe -- vtysh -c "show ip bgp"

BGP table version is 0, local router ID is 192.168.33.12

Status codes: s suppressed, d damped, h history, * valid, > best, = multipath,

i internal, r RIB-failure, S Stale, R Removed

Origin codes: i - IGP, e - EGP, ? - incomplete

Network Next Hop Metric LocPrf Weight Path

* i10.0.0.0/24 192.168.33.11 1 100 0 65111 ?

*> 192.168.22.254 2 0 65222 ?

* i10.0.0.2/32 192.168.33.11 1 100 0 65111 ?

*> 192.168.22.254 2 0 65222 ?

* i10.128.0.0/24 192.168.33.11 1 100 0 65111 ?

*> 192.168.22.254 2 0 65222 ?

* i10.128.0.2/32 192.168.33.11 1 100 0 65111 ?

*> 192.168.22.254 2 0 65222 ?

*>i192.168.21.0 192.168.33.11 0 100 0 65111 i

*> 192.168.22.0 192.168.22.254 0 0 65222 i

*>i192.168.40.0 192.168.32.1 0 100 0 250 i

In this router, we also see the redundant set of routes. Remember from the edge assignment configuration that lab-sfo-vcg2 was set as the primary gateway, lab-nyc-vcg2 was configured as the secondary node, and the lab-atl-pe router was using the SFO POP to egress traffic.

In the event that return traffic to the edge LAN subnets does end up on the lab-nyc-pe router, routing will take the shortest path back to the edge and send this traffic northbound to lab-nyc-vcg2 instead of backhauling


it through the preferred lab-sfo-vcg2, which would be a longer path. Note that the BGP route selected via lab-nyc-vcg2 carries a higher metric than the one via the lab-sfo-vcg2 Partner Gateway.

Finally, look at the routing table; it excludes all routes not marked as best and provides a more concise view of the routing decisions:

root@pod:~# lxc-attach -n lab-nyc-pe -- vtysh -c 'show ip route'

Codes: K - kernel route, C - connected, S - static, R - RIP,

O - OSPF, I - IS-IS, B - BGP, P - PIM, A - Babel,

> - selected route, * - FIB route

B>* 10.0.0.0/24 [20/2] via 192.168.22.254, eth0, 11:55:18

B>* 10.0.0.2/32 [20/2] via 192.168.22.254, eth0, 11:55:18

B>* 10.128.0.0/24 [20/2] via 192.168.22.254, eth0, 11:55:18

B>* 10.128.0.2/32 [20/2] via 192.168.22.254, eth0, 11:55:18

C>* 127.0.0.0/8 is directly connected, lo

B>* 192.168.21.0/24 [200/0] via 192.168.33.11, eth2, 11:55:43

B 192.168.22.0/24 [20/0] via 192.168.22.254 inactive, 11:55:18

C>* 192.168.22.0/24 is directly connected, eth0

C>* 192.168.32.0/24 is directly connected, eth1

C>* 192.168.33.0/24 is directly connected, eth2

B>* 192.168.40.0/24 [200/0] via 192.168.32.1, eth1, 12:24:09

4.11. Inspect the POP Partner Gateway

While the Partner Gateway is fully configured and driven by the VeloCloud Orchestrator, it may be useful to review the configuration it has received from the Orchestrator. Note that changes should never be made manually to the Partner Gateway routing configuration, as this may disrupt normal operation.

root@pod:~# lxc-attach -n lab-sfo-vcg2 -- vtysh -c "show running"

Building configuration...

Current configuration:

!

router bgp 65111 view 26684182-c5a2-46e8-81d5-77fbfb57303b:0

bgp router-id 192.168.21.254

network 192.168.21.0/24

redistribute static

neighbor 192.168.21.1 remote-as 150

neighbor 192.168.21.1 soft-reconfiguration inbound

!

end

We see that a BGP process is automatically configured and constrained to a view, which is equivalent to a

Virtual Routing and Forwarding context (VRF). Peering with the downstream PE routers is automatically

configured, and static and attached networks are advertised (or redistributed).

There is also a mapping available that shows the view or VRF ID associated with each customer:

root@pod:~# lxc-attach -n lab-sfo-vcg2 -- /opt/vc/bin/debug.py --vrf

{

"vrf_dump": [

{

"c_tag": 0,

"enteprise_id": "26684182-c5a2-46e8-81d5-77fbfb57303b",

"enterprise_name": "ACME Corporation",

"lan_vlan_transport_mode": "NONE",

"s_tag": 0,

"vlan_vrf_mode": "NONE",

"vrf_ip": "192.168.21.254/24"

}

]

}


No routes appear to be present. Routes are delivered by the Orchestrator to the gateway over a management protocol, and the gateway process in turn injects them into the routing daemon.

root@pod:~# lxc-attach -n lab-sfo-vcg2 -- vtysh -c "show ip bgp"

BGP table version is 0, local router ID is 192.168.21.254

Status codes: s suppressed, d damped, h history, * valid, > best, = multipath,

i internal, r RIB-failure, S Stale, R Removed

Origin codes: i - IGP, e - EGP, ? - incomplete

Network Next Hop Metric LocPrf Weight Path

*> 10.0.0.0/24 0.0.0.0 1 32768 ?

*> 10.0.0.2/32 0.0.0.0 1 32768 ?

*> 10.128.0.0/24 0.0.0.0 1 32768 ?

*> 10.128.0.2/32 0.0.0.0 1 32768 ?

*> 192.168.21.0 0.0.0.0 0 32768 i

*> 192.168.22.0 192.168.21.1 0 150 65222 i

*> 192.168.40.0 192.168.21.1 0 150 250 i

While this command is performed in the global VRF context, when multiple customers are active on a partner gateway it is more useful to inspect only the routes in a specific customer context. This can be done by specifying the view ID, as found either in the configuration or through the 'debug.py --vrf' command:

lab-sfo-vcg2# show ip bgp view 26684182-c5a2-46e8-81d5-77fbfb57303b

BGP table version is 0, local router ID is 192.168.21.254

Status codes: s suppressed, d damped, h history, * valid, > best, = multipath,

i internal, r RIB-failure, S Stale, R Removed

Origin codes: i - IGP, e - EGP, ? - incomplete

Network Next Hop Metric LocPrf Weight Path

*> 10.0.0.0/24 0.0.0.0 1 32768 ?

*> 10.0.0.2/32 0.0.0.0 1 32768 ?

*> 10.128.0.0/24 0.0.0.0 1 32768 ?

*> 10.128.0.2/32 0.0.0.0 1 32768 ?

*> 192.168.21.0 0.0.0.0 0 32768 i

*> 192.168.22.0 192.168.21.1 0 150 65222 i

*> 192.168.40.0 192.168.21.1 0 150 250 i

Total number of prefixes 7

The fact that both edge subnets are present in the routing table implies that each of the edges has formed overlay tunnels to this Partner Gateway.

To contrast this with the NYC gateway:

root@pod:~# lxc-attach -n lab-nyc-vcg2 -- vtysh -c "show ip bgp"

BGP table version is 0, local router ID is 192.168.22.254

Status codes: s suppressed, d damped, h history, * valid, > best, = multipath,

i internal, r RIB-failure, S Stale, R Removed

Origin codes: i - IGP, e - EGP, ? - incomplete

Network Next Hop Metric LocPrf Weight Path

*> 10.0.0.0/24 0.0.0.0 2 32768 ?

*> 10.0.0.2/32 0.0.0.0 2 32768 ?

*> 10.128.0.0/24 0.0.0.0 2 32768 ?

*> 10.128.0.2/32 0.0.0.0 2 32768 ?

*> 192.168.21.0 192.168.22.1 0 150 65111 i

*> 192.168.22.0 0.0.0.0 0 32768 i

*> 192.168.40.0 192.168.22.1 0 150 250 i


It can be observed that the same routes in lab-nyc-vcg2 have a higher metric in the BGP tables, which makes them less preferred. This is a direct result of the gateway being configured as a secondary gateway during the assignment reviewed earlier in the exercise. These metrics are injected by the Orchestrator into the gateway process and, in turn, into the quagga routing daemon.
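To make this preference concrete, here is a toy comparison in plain Python (illustrative only, not VeloCloud tooling): among otherwise comparable BGP routes, the route with the lower metric wins, so the secondary gateway's higher-metric advertisement is less preferred.

# Toy best-path selection on metric alone (all other BGP attributes equal).
candidates = [("via lab-sfo-vcg2", 1), ("via lab-nyc-vcg2", 2)]  # (path, metric)
best = min(candidates, key=lambda route: route[1])
print(best[0])  # -> via lab-sfo-vcg2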

We can also see the subnet advertised by lab-srv-sp, from where we started the exercise; it now shows an AS path of '150 250' as it travels through the various autonomous systems.

Switching back to lab-sfo-vcg2, let's also confirm that the BGP session is indeed established:

root@pod:~# lxc-attach -n lab-sfo-vcg2 -- vtysh -c "show ip bgp summary"

BGP view name 26684182-c5a2-46e8-81d5-77fbfb57303b

BGP router identifier 192.168.21.254, local AS number 65111

RIB entries 11, using 1232 bytes of memory

Peers 1, using 4568 bytes of memory

Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd

192.168.21.1 4 150 1462 1466 0 0 0 1d00h17m 2

From within the gateway process, we can look at where the routes originate:

root@pod:~# lxc-attach -n lab-sfo-vcg2 -- /opt/vc/bin/debug.py --routes

Address Netmask Type Destination Reachable Metric Preference C-Tag S-Tag Handoff Mode Age

10.128.0.2 255.255.255.255 edge2edge 2f403151-0b69-48fd-ac71-fdc727992849 True 0 0 1 0 N/A N/A 9911

10.0.0.2 255.255.255.255 edge2edge 44f8936d-2180-49e5-bb45-cfb1862fb08e True 0 0 1 0 N/A N/A 9927

10.128.0.0 255.255.255.0 edge2edge 2f403151-0b69-48fd-ac71-fdc727992849 True 0 0 1 0 N/A N/A 9911

10.0.0.0 255.255.255.0 edge2edge 44f8936d-2180-49e5-bb45-cfb1862fb08e True 0 0 1 0 N/A N/A 9927

192.168.22.0 255.255.255.0 cloud 0.0.0.0 True 0 512 0 0 VLAN NONE 87475

192.168.40.0 255.255.255.0 cloud 0.0.0.0 True 0 512 0 0 VLAN NONE 87505

0.0.0.0 0.0.0.0 cloud 0.0.0.0 True 0 0 0 0 NAT N/A 87514

P - PG, B - BGP, D - DCE, L - LAN SR, C - Connected, O - External, W - WAN SR, S - SecureEligible, R - Remote,

s - self, H - HA, m - Management, n - nonVelocloud, v - ViaVeloCloud

Let's take a closer look at three prefixes of interest that are present on lab-sfo-vcg2:

Address Netmask Type Destination Metric Handoff

10.0.0.0 255.255.255.0 edge2edge 44f8936d-2180-49e5-bb45-cfb1862fb08e 0 N/A

192.168.40.0 255.255.255.0 cloud 0.0.0.0 0 VLAN

0.0.0.0 0.0.0.0 cloud 0.0.0.0 0 NAT

The first route belongs to the LAN subnet of an edge and points to the logical ID of that device. We can list the logical IDs of all edges with the following command:

root@pod:~# lxc-attach -n lab-sfo-vcg2 -- /opt/vc/bin/debug.py --list_edges 3

Name Enterprise Logical ID VC Private IP

lab-edge1 ACME Corporation 44f8936d-2180-49e5-bb45-cfb1862fb08e 169.254.129.2

lab-edge2 ACME Corporation 2f403151-0b69-48fd-ac71-fdc727992849 169.254.129.3

Once all attached edges are listed with their logical IDs, we can confirm that this subnet belongs to lab-edge1. The next hop is a logical ID rather than a traditional IP address, since reachability of the prefix takes place over the VeloCloud overlay tunnel, which abstracts the underlying transport links.

The second route belongs to lab-srv-sp and points to 0.0.0.0 (local); the routing decision is therefore delegated to the quagga daemon. It is marked as a 'cloud' route and as such will be sent to an attached Partner Gateway, where the prefix is available through a VLAN handoff.

The third route (a default route) shares the same fate and will be sent to the Partner Gateway, but note that it is configured to use a NAT handoff and is not sent deeper into the SP network. It is, however, important to point out that this route will be advertised to the connected edges. The edges make independent decisions, based on the configured Business Policy, on whether to use this route.

As a last step, let's also confirm that the routes are present in the edge itself:

root@pod:~# lxc-attach -n lab-edge1 -- /opt/vc/bin/debug.py --routes

Address Netmask Type Gateway Next Hop ID Dst LogicalId Reachable Metric

Preference Flags Vlan Intf Sub IntfId MTU

208.6.1.31 255.255.255.255 any any N/A N/A True 0

0 Ss 0 GE4 N/A N/A

169.254.129.2 255.255.255.255 any any N/A N/A True 0

0 sm 0 any N/A N/A


169.254.129.1 255.255.255.255 cloud any 14000b18-0000-0000-0000-000000000000 14000b18-0000-0000-0000-000000000000 True 0

0 m 0 any N/A N/A

24.5.1.31 255.255.255.255 any any N/A N/A True 0

0 Ss 0 GE3 N/A N/A

10.128.0.2 255.255.255.255 edge2edge any 14000b18-0000-0000-0000-000000000000 2f403151-0b69-48fd-ac71-fdc727992849 True 0

0 SRm 1 any N/A 1500

10.128.0.2 255.255.255.255 edge2edge any 14000c18-0000-0000-0000-000000000000 2f403151-0b69-48fd-ac71-fdc727992849 True 0

0 SRm 1 any N/A 1500

10.0.0.2 255.255.255.255 any any N/A N/A True 0

0 CSm 1 br-network1 N/A N/A

208.6.1.0 255.255.255.0 any any N/A N/A True 0

0 CS 0 GE4 N/A N/A

192.168.40.0 255.255.255.0 cloud any 14000b18-0000-0000-0000-000000000000 14000b18-0000-0000-0000-000000000000 True 0

320 PSBR 0 any N/A N/A

192.168.40.0 255.255.255.0 cloud any 14000c18-0000-0000-0000-000000000000 14000c18-0000-0000-0000-000000000000 True 0

128 PSBR 0 any N/A N/A

192.168.22.0 255.255.255.0 cloud any 14000b18-0000-0000-0000-000000000000 14000b18-0000-0000-0000-000000000000 True 0

128 PSBR 0 any N/A N/A

192.168.21.0 255.255.255.0 cloud any 14000c18-0000-0000-0000-000000000000 14000c18-0000-0000-0000-000000000000 True 0

128 PSBR 0 any N/A N/A

24.5.1.0 255.255.255.0 any any N/A N/A True 0

0 CS 0 GE3 N/A N/A

10.128.0.0 255.255.255.0 edge2edge any 14000b18-0000-0000-0000-000000000000 2f403151-0b69-48fd-ac71-fdc727992849 True 0

0 SR 1 any N/A 1500

10.128.0.0 255.255.255.0 edge2edge any 14000c18-0000-0000-0000-000000000000 2f403151-0b69-48fd-ac71-fdc727992849 True 0

0 SR 1 any N/A 1500

10.0.0.0 255.255.255.0 any any N/A N/A True 0

0 CS 1 br-network1 N/A N/A

0.0.0.0 0.0.0.0 cloud any 14000b18-0000-0000-0000-000000000000 14000b18-0000-0000-0000-000000000000 True 0

0 PR 0 any N/A N/A

0.0.0.0 0.0.0.0 cloud any 14000c18-0000-0000-0000-000000000000 14000c18-0000-0000-0000-000000000000 True 0

0 PR 0 any N/A N/A

0.0.0.0 0.0.0.0 cloud 24.11.0.20 14000b18-0000-0000-0000-000000000000 14000b18-0000-0000-0000-000000000000 True 255

0 v 0 any N/A N/A

0.0.0.0 0.0.0.0 cloud 24.5.1.1 N/A N/A True 5

0 S 0 GE3 N/A N/A

0.0.0.0 0.0.0.0 cloud 208.6.1.1 N/A N/A True 6

0 S 0 GE4 N/A N/A

P - PG, B - BGP, D - DCE, L - LAN SR, C - Connected, O - External, W - WAN SR, S - SecureEligible, R - Remote, s - self, H - HA, m - Management, n - nonVelocloud,

v - ViaVeloCloud

We'll highlight the same routes as we just did in the partner gateway:

Address Netmask Type Gateway Next Hop ID Intf

10.0.0.0 255.255.255.0 any any N/A br-network1

192.168.40.0 255.255.255.0 cloud any 14000b18-0000-0000-0000-000000000000 any

192.168.40.0 255.255.255.0 cloud any 14000c18-0000-0000-0000-000000000000 any

The next hop ID is the reverse hex coded IP address of the receiving gateway:

• 0x14000b18 == 24.11.0.20 (lab-sfo-vcg2)

• 0x14000c18 == 24.12.0.20 (lab-nyc-vcg2)

This indicates the subnet is available through both gateways, with the primary, more preferred route pointing to lab-sfo-vcg2 (since it is listed first).
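The conversion is easy to verify with a few lines of Python (a standalone sketch; the helper name is ours, not part of the lab tooling):

# Decode the first 32-bit word of a next-hop logical ID into the gateway IP.
# Per the examples above, the bytes are the IPv4 address in reverse order.
import ipaddress

def nexthop_id_to_ip(nexthop_id):
    first_word = nexthop_id.split("-")[0]          # e.g. '14000b18'
    raw = bytes.fromhex(first_word)                # b'\x14\x00\x0b\x18'
    return str(ipaddress.IPv4Address(raw[::-1]))   # reverse the byte order

print(nexthop_id_to_ip("14000b18-0000-0000-0000-000000000000"))  # 24.11.0.20
print(nexthop_id_to_ip("14000c18-0000-0000-0000-000000000000"))  # 24.12.0.20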

We can also see routes for the remote branch network (lab-edge2):

Address Type Next Hop ID Dst LogicalId

10.128.0.0/24 edge2edge 14000b18-0000-0000-0000-000000000000 2f403151-0b69-48fd-ac71-fdc727992849

10.128.0.0/24 edge2edge 14000c18-0000-0000-0000-000000000000 2f403151-0b69-48fd-ac71-fdc727992849

Verify that the above routes correspond to the appropriate partner gateways as per the network diagram.

Here we see that there are two paths to reach the remote branch, over both Partner Gateways (Next Hop ID), with a preference for the SFO POP; the destination Logical ID represents the lab-edge2 device.
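As a quick cross-check, the destination logical IDs in the route table can be resolved against the edge list from 'debug.py --list_edges'. A minimal sketch, using the values copied from the outputs above:

# Map destination logical IDs (from --routes) to edge names (from --list_edges).
edges = {
    "44f8936d-2180-49e5-bb45-cfb1862fb08e": "lab-edge1",
    "2f403151-0b69-48fd-ac71-fdc727992849": "lab-edge2",
}
routes = [
    ("10.0.0.0/24",   "44f8936d-2180-49e5-bb45-cfb1862fb08e"),
    ("10.128.0.0/24", "2f403151-0b69-48fd-ac71-fdc727992849"),
]
for prefix, dst_logical_id in routes:
    print(prefix, "->", edges.get(dst_logical_id, "unknown edge"))
# 10.0.0.0/24 -> lab-edge1
# 10.128.0.0/24 -> lab-edge2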

As a final step in our inspection rounds, let’s examine how the partner gateway is installed.

First, see what networking interfaces are present in the container and map them to the network topology:

root@lab-nyc-vcg2:~# ifconfig

eth0 Link encap:Ethernet HWaddr 00:ba:be:6e:81:04

inet addr:24.12.0.20 Bcast:24.12.0.255 Mask:255.255.255.0

inet6 addr: fe80::2ba:beff:fe6e:8104/64 Scope:Link

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

RX packets:1246841 errors:0 dropped:0 overruns:0 frame:0

TX packets:1211985 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:4096

RX bytes:148169975 (148.1 MB) TX bytes:146295859 (146.2 MB)


eth1 Link encap:Ethernet HWaddr 00:ba:be:15:66:1f

inet addr:192.168.22.254 Bcast:192.168.22.255 Mask:255.255.255.0

inet6 addr: fe80::2ba:beff:fe15:661f/64 Scope:Link

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

RX packets:19708 errors:0 dropped:0 overruns:0 frame:0

TX packets:20935 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:4096

RX bytes:899168 (899.1 KB) TX bytes:980231 (980.2 KB)

gwd1 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00

inet addr:169.254.129.1 P-t-P:169.254.129.1 Mask:255.255.255.255

UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1

RX packets:0 errors:0 dropped:0 overruns:0 frame:0

TX packets:0 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:4096

RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

The following interfaces are listed:

• eth0: “vcmp” interface facing the edges & internet and responsible for receiving overlay traffic from the

attached edges

• eth1: “wan” interface facing the SP core, typically using VLAN handoffs to segregate the customers

towards the PE router.

• gwd1: a logical interface created by the gateway daemon for the purpose of encapsulating flows in a

VCMP header (VeloCloud MultiPath protocol)

Confirm that all gateway-related processes are running:

root@lab-nyc-vcg2:~# ps -ef | grep vc

root 813 1 0 Jun03 ? 00:00:00 /usr/bin/python /opt/vc/bin/vc_procmon restart

root 822 813 0 Jun03 ? 00:01:19 /usr/bin/python /opt/vc/sbin/mgd

root 824 813 0 Jun03 ? 00:00:00 /opt/vc/sbin/vcsyscmd

root 834 813 2 Jun03 ? 00:43:48 /opt/vc/sbin/gwd -F/etc/config/gatewayd

root 1575 1 0 Jun03 ? 00:00:01 /usr/lib/quagga/watchquagga -dz -T 5 -R

/opt/vc/bin/gwd_watchquagga_script.sh bgpd

root 2163 813 0 Jun03 ? 00:00:02 /opt/vc/sbin/natd -F/etc/config/natd

root 6897 6879 0 09:21 ? 00:00:00 grep --color=auto vc

• ‘gatewayd’ is the main process responsible for processing and forwarding traffic (VCMP & IPsec)

• 'natd' is a state machine that keeps track of any NAT entries created to provide access to cloud applications. The daemon itself, however, does not perform the NAT operation; this is done by gatewayd.

• ‘mgd’ is the management daemon that communicates with the Orchestrator, and through which the

VCO injects state into the gateways. The gateways themselves have no persistent state.

• 'vc_procmon' is a daemon monitoring process that can be used to start and stop the service in an orderly fashion. It is also responsible for restarting any of the above daemons in the event they fail. This makes restarts very fast, with sub-second impact. Since NAT state is preserved throughout reloads, existing sessions will stall for that duration but never terminate.

If a gateway needs to be restarted (e.g. due to changes made to any of the configuration files), this can be done with the following command:

/usr/bin/python /opt/vc/bin/vc_procmon restart

Next, inspect the following three configuration files.

Interface roles should be set manually in the gatewayd config file when installing a partner gateway:

root@lab-nyc-vcg2:~# more /etc/config/gatewayd

{

"global" : {

"wan": ["eth1"], <<< SP CORE FACING INTERFACE

"vcmp.port": "2426", <<< UDP port in use for VCMP


"vcmp.interfaces": ["eth0"], <<< PUBLIC / EDGE FACING INTERFACE

"ip_mode_enable": 1,

"use_nclock_scheduler": 5,

"link_sched_type": 5,

"handoff_qlimit" : 1,

"netschenqdeq_ratio" : 50,

"lnkschenqdeq_ratio" : 50,

"path_reseq_timeout_us" : 2000,

"debug_level" : "MSG",

"drop_duplicate_td_version": 0,

"ike_debug_level" : 4,

<snip>

}

The participating interfaces should also be set in the gatewayd-tunnel configuration file. Dedicated management interfaces may be present in the system; if not included in this file, they will be left unaffected.

root@lab-nyc-vcg2:~# cat /etc/config/gatewayd-tunnel

# [.section tun_setup.sh] - used by tun_setup.sh script

wan="eth0 eth1" <<< space-delimited list of all participating interfaces

wan_phys_if="" <<< physical interfaces of the underlying transport, if it is a bond/bridge intf.

ipif=gwd1

#ipif_ip="192.1.1.1"

#ipif_nm="255.255.255.0"

ip_icmp=1

ip_mark="0x2"

ip_table_id=200

ipif_mtu=1500

wan_mtu=1500

ip_tcp_ports="443"

ip_udp_ports="53"

nat_remote_port=32000

Also ensure that all default blocked subnets have been removed. This provision is in place after a default installation of a Cloud Gateway to prevent traffic destined for private address blocks from leaking to the internet: such traffic is blocked from going through the NAT process and dropped. This is a security restriction that does not apply to Partner Gateways, although it can still be used if deemed necessary.

This file should contain an empty JSON blob:

root@lxc-host:~# lxc-attach -n lab-nyc-vcg2 -- cat /opt/vc/etc/vc_blocked_subnets.json

[

]

By default, 10.0.0.0/8 and 172.16.0.0/16 are dropped. Traffic that matches these subnets but is destined to sites connected to the CloudVPN will still be forwarded.
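The matching logic is plain prefix membership; a minimal sketch of the check (assuming the two default prefixes named above, and ignoring the CloudVPN exception):

# Test whether a destination would be caught by the default blocked subnets.
import ipaddress

blocked = [ipaddress.ip_network("10.0.0.0/8"),
           ipaddress.ip_network("172.16.0.0/16")]

def is_blocked(destination):
    addr = ipaddress.ip_address(destination)
    return any(addr in net for net in blocked)

print(is_blocked("10.128.0.217"))   # True  - dropped at the NAT step
print(is_blocked("104.25.183.34"))  # False - NATed out to the internet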

If any changes are made to any of the mentioned files, a vc_procmon restart is required to make them effective.

Also scan the installed packages on the partner gateway:

root@lab-nyc-vcg2:~# dpkg --list | grep gatewayd

ii gatewayd 2.4.1-154-R241-20170528-QA amd64 VeloCloud Gateway Daemon

root@lab-nyc-vcg2:~# dpkg --list | grep quagga

ii quagga 0.99.24.1-3 amd64 BGP/OSPF/RIP routing daemon

• gatewayd is the package providing VeloCloud gateway functions (both NAT mode as well as partner

gateway mode)

• quagga is the routing daemon, for which a modified version will be made available by VeloCloud. A

standard repository version of the daemon will not interoperate with gatewayd.


The Cloud Gateways have only a single interface, which both serves VCMP-protected traffic from the edges and sends decapsulated traffic from the edges back out to the Internet towards its final destination.

5. Deconstruct and rebuild

In this section, we will deconstruct lab-nyc-vcg2 out of the NYC POP and rebuild it, after which a traffic failover will be simulated.

5.1. De-activate the gateway

A hidden registration file called .gateway.info is present in /opt/vc. If this file is deleted and the gateway is restarted, the registration with the Orchestrator will lapse: the edges will no longer connect to the gateway, and it will no longer ingest and process traffic.

root@lab-nyc-vcg2:/opt/vc# /opt/vc/bin/is_activated.py

True

root@lab-nyc-vcg2:~# cd /opt/vc

root@lab-nyc-vcg2:/opt/vc# ll

total 28

drwxrwxr-x 6 root root 4096 Sep 17 07:00 ./

drwxr-xr-x 3 root root 4096 Aug 29 18:26 ../

drwxrwxr-x 2 root root 4096 Sep 13 08:57 bin/

drwxrwxr-x 2 root root 4096 Sep 17 06:59 etc/

-rw-rw-rw- 1 root root 873 Sep 17 07:00 .gateway.info

drwxrwxr-x 3 root root 4096 Sep 13 08:57 lib/

drwxrwxr-x 2 root root 4096 Sep 13 08:57 sbin/

root@lab-nyc-vcg2:/opt/vc# rm -rf .gateway.info

root@lab-nyc-vcg2:/opt/vc# /usr/bin/python /opt/vc/bin/vc_procmon restart

root@lab-nyc-vcg2:/opt/vc# /opt/vc/bin/is_activated.py

False

root@lab-nyc-vcg2:/opt/vc# ps -ef | grep vc

root 29578 1 0 07:15 ? 00:00:00 /usr/bin/python /opt/vc/bin/vc_procmon restart

root 29586 29578 1 07:15 ? 00:00:00 /usr/bin/python /opt/vc/sbin/mgd

root 29593 29578 2 07:15 ? 00:00:00 /opt/vc/sbin/natd -F/etc/config/natd

root 29605 29578 6 07:15 ? 00:00:00 /opt/vc/sbin/gwd -F/etc/config/gatewayd

As can be seen from the process listing, the gateway is still running on the instance but is no longer associated with the Orchestrator, and as such no longer serves traffic from edges.

This change will now be reflected in the Orchestrator when navigating to Superuser Operator | Gateways:

Note that it takes up to 2 minutes before the gateway state is accurately reflected in the Orchestrator. By default, edges and gateways check in with the Orchestrator every 30 seconds to provide status updates and liveness checks, and every 5 minutes to upload statistics. These intervals are configurable if there is a requirement to speed up the exchanges; however, it is highly recommended to keep the defaults.


5.2. Delete gateway from Orchestrator

Now that the Partner Gateway has been deactivated and uninstalled, we can proceed to remove it from the Orchestrator as well.

The gateway cannot be deleted directly from the Orchestrator, as it first needs to be detached from all resources. At this time, the Orchestrator has merely lost contact with the partner gateway, which could still be due to a recoverable network failure. Therefore, all edges remain attached to this failed gateway.

Go to ACME Corporation | Configure | Edge and for both edges (lab-edge1 and lab-edge2) go into the Edge

Overview tab and set Gateway 2 in the Partner Gateway Assignment to None:

After this step has completed, we can confirm that the edges are no longer attached to this gateway through Monitor | Edges | lab-edge1 by selecting the 'view' text:

We no longer see the edge attached to the lab-nyc-vcg2 gateway and it is now exclusively attached to a single

gateway in the SFO POP.

Next go to Superuser operator | Gateway Pools and remove the gateway from the Partner gateways pool.

Click on the ‘Manage’ button to modify the pool participation. Do not forget to click on ‘Save Changes’ to make

the modifications effective in the Orchestrator.

If this does not work, the gateway is likely still attached to one of the edges, so go back and verify this is not

the case.

Navigate to the Gateway page in the Orchestrator, select the gateway and delete it.


The final state will look as follows:

At this time, all processes will be active again and the gateway will be in an un-activated state:

root@lab-nyc-vcg2:~# ps -ef | grep vc

root 30246 1 0 07:43 ? 00:00:00 /usr/bin/python /opt/vc/bin/vc_procmon restart

root 30256 30246 0 07:43 ? 00:00:00 /usr/bin/python /opt/vc/sbin/mgd

root 30266 30246 0 07:43 ? 00:00:00 /opt/vc/sbin/natd -F/etc/config/natd

root 30278 30246 3 07:43 ? 00:00:04 /opt/vc/sbin/gwd -F/etc/config/gatewayd

root 30592 28692 0 07:45 ? 00:00:00 grep --color=auto vc

root@lab-nyc-vcg2:~# ps -ef | grep quagga

root 30582 1 0 07:43 ? 00:00:00 /usr/lib/quagga/bgpd --daemon -A 127.0.0.1

root 30589 1 0 07:43 ? 00:00:00 /usr/lib/quagga/watchquagga -dz -T 5 -R

/usr/lib/quagga/bgpd --daemon -A 127.0.0.1 bgpd

root 30594 28692 0 07:45 ? 00:00:00 grep --color=auto quagga

root@lab-nyc-vcg2:~# /opt/vc/bin/is_activated.py

False

Before we can activate the gateway, we need to provision it in the Orchestrator. Before doing that, inspect the three configuration files we looked at earlier to prepare the installation for use as a partner gateway.

5.3. Provision the gateway

At this point, we need to generate an activation key to allow the gateway to enroll with the Orchestrator. This is very similar to how edges are provisioned in the Orchestrator.

In the Orchestrator Operator portal, go to Gateways and add a new gateway:


Provide the following information in the pop-up dialog:

We will place the gateway in service after activation and immediately place it in the Partner Gateway Pool that is already associated with the ACME account.

In production platforms, it is advisable to first enroll a gateway as Out-of-service and add it to a staging pool

before attaching it to a production gateway pool with active customers and edges.

At this time, an activation key is generated at the top of the new screen that can be used to onboard the

Gateway onto the SD-WAN solution:


A few additional changes need to be made to make the gateway a partner gateway:

• Correct the geolocation to New York. By default, the Gateway will be geolocated, but it is not

uncommon to see discrepancies, especially for datacenter hosted blocks.

• Under Gateway Roles, check the Partner Gateway box so that the instance can hand off traffic through

the already configured ‘wan’ interface (facing the SP core).

After selecting the partner gateway role, a series of static routes will appear. Here we only want to supply a default route and set the handoff to NAT, so that traffic destined for the internet will be sent back to the public interface and NAT'd to the gateway's public IP. Static routes will not be used for this exercise; BGP will be re-established to dynamically inject the routes into the partner gateway. Any business-critical traffic that is sent to the gateway but is not reachable through advertised BGP routes will be sent out of the public interface.

All other VLAN based routes can be removed by clicking the minus button next to them.

The dialog should now look as follows:


Note that the default route does not have the 'Encrypt' checkbox enabled, which means that SaaS traffic flowing through this gateway will not be encrypted, as it is typically already encrypted at the application layer.

Last, take note of the activation key and issue the following command in lab-nyc-vcg2 to activate the gateway:

root@lab-nyc-vcg2:/# /opt/vc/bin/activate.py -i -s 24.17.0.11 2HSN-Y9AD-4FNC-XXXX

Activation successful

root@lab-nyc-vcg2:/# /opt/vc/bin/is_activated.py

True

The IP address 24.17.0.11 used in the command is the IP address of the Orchestrator. In production installations, it is recommended to use FQDNs for this field and to abstract the service endpoint from the physical machine with a DNS CNAME entry.

The gateway is now activated, and we can look at the generated activation file:

root@lab-nyc-vcg2:/# cat /opt/vc/.gateway.info | python -m json.tool

{

"activated": true,

"configuration": {

"managementPlane": {

"data": {

"heartBeatSeconds": 30,

"managementPlaneProxy": {

"drHeartbeatSecs": 60,

"primary": "24.17.0.11",

"secondary": "24.17.0.12"

},

"statsUploadSeconds": 300,

"timeSliceSeconds": 300

},

"module": "managementPlane",

"schemaVersion": "2.0.0",

"version": "1496430586364"

}

},

"gatewayInfo": {

"activationState": "ACTIVATED",


"buildNumber": "",

"deviceId": "00:ba:be:6e:81:04",

"lastContact": "2017-06-05T05:53:59.000Z",

"logicalId": "gatewaye0b03460-b0f6-4694-9036-4149b8f7856a",

"name": "lab-nyc-vcg2",

<snip>

The activation file records the Orchestrator against which the gateway was activated. The activation is now reflected in the Orchestrator on the Gateways page:

Note that there is no Partner Gateway 'view' text available for lab-nyc-vcg2, as no edges have been assigned yet.
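Since .gateway.info is plain JSON, the interesting fields can also be pulled out programmatically. A minimal sketch, using only field names visible in the listing above:

# Print the Orchestrator address and check-in intervals from the activation file.
import json

with open("/opt/vc/.gateway.info") as f:
    info = json.load(f)

data = info["configuration"]["managementPlane"]["data"]
print("activated:       ", info["activated"])
print("orchestrator:    ", data["managementPlaneProxy"]["primary"])
print("heartbeat (s):   ", data["heartBeatSeconds"])
print("stats upload (s):", data["statsUploadSeconds"])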

5.4. Configure BGP in the customer context

Now that the partner gateway is made available, we need to set up the BGP configuration inside the customer

tenancy. This is done via Manage Customer | ACME Corporation | Configure | Customer. You will find it in the

following state:

lab-nyc-vcg2 is available to be used as a partner gateway but is not yet configured. The existing lab-sfo-vcg2 was previously configured and is still operational; no changes are needed on this gateway.


Select the lab-nyc-vcg2 in the ‘Per Gateway’ Hand Off Configuration and select ‘Click here to configure’.

Complete the dialog as follows:

• Deselect ‘Use for private tunnels’. This is needed in a pure MPLS scenario where the provided local IP

address is used for VCMP communication to the gateway rather than the interface IP address on the

VCMP interface.

• Select ‘Enable BGP’

• Set the Local IP address (IP that the gateway will use to initiate BGP from) to 192.168.22.254/24

• Set the customer ASN to 65222 (typically a private ASN)

• Set the neighbor IP to 192.168.22.1 and ASN to 150

• Leave 'Secure BGP routes' checked; this indicates that traffic for destinations learned through this BGP session will be sent encrypted to the gateway.

• Click the ‘Update’ button and select ‘Save Changes’ in the parent page.


Now we can verify on lab-nyc-vcg2 that the BGP session is installed and established:

root@pod:~# lxc-attach -n lab-nyc-vcg2 -- vtysh -c 'show run'

Building configuration...

!

bgp multiple-instance

!

router bgp 65222 view 926c2a10-d000-4645-8fd2-15fa1c8df1a0

bgp router-id 127.0.0.1

network 192.168.22.0/24

redistribute static

neighbor 192.168.22.1 remote-as 150

neighbor 192.168.22.1 soft-reconfiguration inbound

!

end

root@pod:~# lxc-attach -n lab-nyc-vcg2 -- vtysh -c 'show ip bgp summary'

BGP view name 26684182-c5a2-46e8-81d5-77fbfb57303b

BGP router identifier 192.168.22.254, local AS number 65222

RIB entries 11, using 1232 bytes of memory

Peers 1, using 4568 bytes of memory

Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd

192.168.22.1 4 150 57 56 0 0 0 00:51:25 6
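Note how the session state can be read from the summary: the State/PfxRcd column shows a prefix count once the session is established, and a state name (Idle, Active, ...) otherwise. A small sketch of that check, fed with the neighbor line above:

# Established BGP sessions report a numeric prefix count in the last column.
line = "192.168.22.1 4 150 57 56 0 0 0 00:51:25 6"
state = line.split()[-1]
if state.isdigit():
    print("established,", state, "prefixes received")
else:
    print("not established, state:", state)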

All routes should be learned again through BGP:

root@pod:~# lxc-attach -n lab-nyc-vcg2 -- vtysh -c 'show ip bgp'

BGP table version is 0, local router ID is 192.168.22.254

Status codes: s suppressed, d damped, h history, * valid, > best, = multipath,

i internal, r RIB-failure, S Stale, R Removed

Origin codes: i - IGP, e - EGP, ? - incomplete

Network Next Hop Metric LocPrf Weight Path

*> 10.0.0.0/24 192.168.22.1 0 150 65111 ?

*> 10.0.0.2/32 192.168.22.1 0 150 65111 ?

*> 10.128.0.0/24 192.168.22.1 0 150 65111 ?


*> 10.128.0.2/32 192.168.22.1 0 150 65111 ?

*> 192.168.21.0 192.168.22.1 0 150 65111 i

*> 192.168.22.0 0.0.0.0 0 32768 i

*> 192.168.40.0 192.168.22.1 0 150 250 i

Although the edges have not been associated with the gateway, the LAN subnets for each edge do appear in the routing table. The routes do, however, use the attached PE router as next hop and are sourced from ASN 65111, which reflects the SFO POP Partner Gateway.

Confirm on the lab-nyc-pe router that the routes are indeed directing traffic destined for the edge LAN subnets over the direct interconnect link between the SFO and NYC POPs, sending the traffic through the lab-sfo-vcg2 Partner Gateway.

5.5. Assign to edges

Re-attach the edges to lab-nyc-vcg2 and set it up as a secondary gateway. Navigate to Manage Customers |

ACME Corporation | Configure | Edges and for both edges add the gateway in the Partner Gateway

Assignment section:

Select 'Save Changes' to commit the selection. If we navigate to the gateway page, we can review the assigned edges:

Click on the ‘view’ hyperlink in the Edges column to see which edges are now using the Partner Gateway:

Once more, inspect the BGP tables on lab-nyc-vcg2; we should now see the advertised routes from the attached edges:

root@pod:~# lxc-attach -n lab-nyc-vcg2 -- vtysh -c 'show ip bgp'

BGP table version is 0, local router ID is 192.168.22.254

Status codes: s suppressed, d damped, h history, * valid, > best, = multipath,

i internal, r RIB-failure, S Stale, R Removed

Origin codes: i - IGP, e - EGP, ? - incomplete

Network Next Hop Metric LocPrf Weight Path

*> 10.0.0.0/24 0.0.0.0 2 32768 ?

*> 10.0.0.2/32 0.0.0.0 2 32768 ?

*> 10.128.0.0/24 0.0.0.0 2 32768 ?

*> 10.128.0.2/32 0.0.0.0 2 32768 ?

*> 192.168.21.0 192.168.22.1 0 150 65111 i

*> 192.168.22.0 0.0.0.0 0 32768 i

*> 192.168.40.0 192.168.22.1 0 150 250 i

Routes to the LAN subnets of the edges are now sourced locally (0.0.0.0) and traffic will utilize the overlay tunnels to reach the edge device. The routes to the lab-srv-sp subnet remain unchanged.


5.6. Verify end2end traffic paths

The gateway is now installed and active again. For the sake of completeness, verify connectivity to a few common destinations via lab-client1 in branch1.

General internet access: Given that ICMP is by default protected via VCMP, this traffic will pass through the partner gateway.

root@pod:~# lxc-attach -n lab-client1 -- ping -c5 velocloud.com

PING velocloud.com (104.25.183.34) 56(84) bytes of data.

64 bytes from 104.25.183.34: icmp_seq=1 ttl=48 time=1.68 ms

64 bytes from 104.25.183.34: icmp_seq=2 ttl=48 time=1.46 ms

64 bytes from 104.25.183.34: icmp_seq=3 ttl=48 time=1.47 ms

Note that the first time public internet destinations are pinged from the pods, ARP caches will be empty and you may experience loss of the first 1-2 packets.

lab-vco1: Orchestrator access will be protected through VCMP, routed through the partner handoff, and use the configured default route to NAT back out to the final destination:

root@pod:~# lxc-attach -n lab-client1 -- ping -c3 24.17.0.11

PING 24.17.0.11 (24.17.0.11) 56(84) bytes of data.

64 bytes from 24.17.0.11: icmp_seq=1 ttl=61 time=10.9 ms

64 bytes from 24.17.0.11: icmp_seq=2 ttl=61 time=11.1 ms

64 bytes from 24.17.0.11: icmp_seq=3 ttl=61 time=10.9 ms

A traceroute confirms that the traffic is going through the Partner Gateway, although the gateway will not respond to ICMP messages:

root@pod:~# lxc-attach -n lab-client1 -- traceroute 24.17.0.11

traceroute to 24.17.0.11 (24.17.0.11), 30 hops max, 60 byte packets

1 10.0.0.1 0.085 ms 0.048 ms 0.047 ms

2 * * *

3 24.11.0.254 11.692 ms 13.884 ms 13.903 ms

4 24.17.0.11 13.982 ms 14.178 ms 14.245 ms

lab-saas1: This simulated SaaS service was set as high priority and will use the multi-path service through the gateway to protect the application. Confirm this with ping & traceroute to IP address 24.17.0.21. It should yield the same results as the Orchestrator-destined traffic flow.

lab-saas2: This simulated SaaS service was set via business policy as low priority and to use the direct path. No gateway will be used in this case, which can be confirmed through a traceroute:

root@pod:~# lxc-attach -n lab-client1 -- ping -c3 24.17.0.22

root@lab-client1:~# ping 24.17.0.22

PING 24.17.0.22 (24.17.0.22) 56(84) bytes of data.

64 bytes from 24.17.0.22: icmp_seq=1 ttl=62 time=11.0 ms

64 bytes from 24.17.0.22: icmp_seq=2 ttl=62 time=10.7 ms

64 bytes from 24.17.0.22: icmp_seq=3 ttl=62 time=10.1 ms

root@lab-client1:~# traceroute -n 24.17.0.22

traceroute to 24.17.0.22 (24.17.0.22), 30 hops max, 60 byte packets

1 10.0.0.1 0.085 ms 0.047 ms 0.060 ms <<< edge

2 * * * <<< core router

3 24.17.0.22 11.605 ms 11.587 ms 11.627 ms <<< saas2

lab-srv-sp: Verify the path to the simulated legacy site or shared hosted service. This subnet is advertised via BGP into the gateways and will be served through the partner gateway to the edges:

root@pod:~# lxc-attach -n lab-client1 -- ping -c3 192.168.40.1

PING 192.168.40.1 (192.168.40.1) 56(84) bytes of data.

64 bytes from 192.168.40.1: icmp_seq=1 ttl=60 time=0.888 ms

64 bytes from 192.168.40.1: icmp_seq=2 ttl=60 time=0.697 ms

64 bytes from 192.168.40.1: icmp_seq=3 ttl=60 time=0.731 ms


A traceroute shows the path through the various elements; trace this on the topology diagram:

root@pod:~# lxc-attach -n lab-client1 -- traceroute -n 192.168.40.1

traceroute to 192.168.40.1 (192.168.40.1), 30 hops max, 60 byte packets

1 10.0.0.1 0.107 ms 0.066 ms 0.036 ms

2 * * *

3 192.168.21.1 10.933 ms 11.330 ms 11.470 ms

4 192.168.31.1 11.695 ms 11.679 ms 12.899 ms

5 192.168.40.1 13.048 ms 13.521 ms 13.609 ms

lab-client2: edge2edge Cloud VPN traffic, protected and facilitated by the partner gateway:

root@pod:~# lxc-attach -n lab-client1 -- ping -c3 10.128.0.217

PING 10.128.0.217 (10.128.0.217) 56(84) bytes of data.

64 bytes from 10.128.0.217: icmp_seq=1 ttl=61 time=30.2 ms

64 bytes from 10.128.0.217: icmp_seq=2 ttl=61 time=29.7 ms

64 bytes from 10.128.0.217: icmp_seq=3 ttl=61 time=29.7 ms

Optionally, you can run tcpdump on components where traffic is expected in order to validate the path taken. However, this will also be done in a later part of the exercise (BGP influencing).

We can, however, easily identify active flows on the Partner as well as Cloud Gateways, as these are tracked in real time. Keep a separate console window open with a ping to lab-client2 running to see the flow establish:

root@pod:~# lxc-attach -n lab-sfo-vcg2

root@lab-sfo-vcg2:~# /opt/vc/bin/debug.py --flow_dump 10.128.0.217

VCE LFID RFID FDSN MAX_RECV_FDSN FDSN_READ

LAST_LATE_FDSN SRC IP DEST IP SRC PORT DEST PORT PROTO PRIORITY ROUTE-POL

LINK-POL TRAFFIC-TYPE FLAGS IDLE TIME MS

44f8936d-2180-49e5-bb45-cfb1862fb08e 5059 0 41 40 40

0 10.0.0.201 10.128.0.217 1268 0 1 normal gateway loadbalance

transactional 0x200000002 422

As expected, no flow could be found on lab-nyc-vcg2, since the SFO Partner Gateway was selected as the primary gateway:

root@pod:~# lxc-attach -n lab-nyc-vcg2 -- /opt/vc/bin/debug.py --flow_dump 10.128.0.217

VCE LFID RFID FDSN MAX_RECV_FDSN FDSN_READ LAST_LATE_FDSN SRC IP DEST IP SRC PORT

DEST PORT PROTO PRIORITY ROUTE-POL LINK-POL TRAFFIC-TYPE FLAGS IDLE TIME MS

5.7. Fail the primary partner gateway

In this section, we will trigger a failure on the primary partner VCG (lab-sfo-vcg2) of the edge and observe

traffic failing over to the secondary gateway.

Before failing lab-sfo-vcg2, start a continuous ping on lab-client1 towards the service (192.168.40.1, lab-srv-sp) available through the partner gateway:

root@pod:~# lxc-attach -n lab-client1 -- ping 192.168.40.1

PING 192.168.40.1 (192.168.40.1) 56(84) bytes of data.

64 bytes from 192.168.40.1: icmp_seq=1 ttl=60 time=1.48 ms

64 bytes from 192.168.40.1: icmp_seq=2 ttl=60 time=1.03 ms

On the lab-atl-pe router, we can confirm that reachability towards the lab-client1 subnet is directed to lab-sfo-vcg2:

root@pod:~# lxc-attach -n lab-atl-pe -- vtysh -c 'show ip bgp'

BGP table version is 0, local router ID is 192.168.40.254

Status codes: s suppressed, d damped, h history, * valid, > best, = multipath,

i internal, r RIB-failure, S Stale, R Removed

Origin codes: i - IGP, e - EGP, ? - incomplete

Network Next Hop Metric LocPrf Weight Path

* i10.0.0.0/24 192.168.32.254 2 100 0 65222 ?


*>i 192.168.31.254 1 100 0 65111 ?

* i10.0.0.2/32 192.168.32.254 2 100 0 65222 ?

*>i 192.168.31.254 1 100 0 65111 ?

* i10.128.0.0/24 192.168.32.254 2 100 0 65222 ?

*>i 192.168.31.254 1 100 0 65111 ?

* i10.128.0.2/32 192.168.32.254 2 100 0 65222 ?

*>i 192.168.31.254 1 100 0 65111 ?

*>i192.168.21.0 192.168.31.254 0 100 0 65111 i

*>i192.168.22.0 192.168.32.254 0 100 0 65222 i

*> 192.168.40.0 192.168.40.1 0 0 250 i

We can verify traffic is handed off through the partner handoff at lab-sfo-vcg2:

root@lab-sfo-vcg2:~# tcpdump -i eth1 icmp

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode

listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes

08:27:16.101771 IP 10.0.0.190 > 192.168.40.1: ICMP echo request, id 959, seq 28, length 64

08:27:16.101838 IP 192.168.40.1 > 10.0.0.190: ICMP echo reply, id 959, seq 28, length 64

08:27:17.102792 IP 10.0.0.190 > 192.168.40.1: ICMP echo request, id 959, seq 29, length 64

08:27:17.102880 IP 192.168.40.1 > 10.0.0.190: ICMP echo reply, id 959, seq 29, length 64

Note that if you don't see any activity for a few seconds in the tcpdump output, this can be a buffering side effect of the container environment we're using (tcpdump's -l option line-buffers the output). Press Ctrl-C to stop the tcpdump and show the buffered packets.

While the traffic is running, stop the VCG service on lab-sfo-vcg2 to simulate a failure:

root@pod:~# lxc-attach -n lab-sfo-vcg2 -- /usr/bin/python /opt/vc/bin/vc_procmon stop

Keep observing the ping traffic. A traffic forwarding gap of up to 60 seconds may be observed before BGP makes routing adjustments. This window can be modified on the PE by adjusting the BGP timers.

64 bytes from 192.168.40.1: icmp_seq=139 ttl=60 time=1.04 ms

64 bytes from 192.168.40.1: icmp_seq=140 ttl=60 time=0.973 ms

64 bytes from 192.168.40.1: icmp_seq=141 ttl=60 time=1.01 ms

64 bytes from 192.168.40.1: icmp_seq=187 ttl=60 time=1.07 ms

64 bytes from 192.168.40.1: icmp_seq=188 ttl=60 time=0.977 ms

64 bytes from 192.168.40.1: icmp_seq=189 ttl=60 time=0.987 ms
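The length of the outage can be estimated from the missing sequence numbers: ping sends one packet per second by default, so the jump from icmp_seq 141 to 187 corresponds to roughly 45 seconds without forwarding.

# Back-of-the-envelope outage estimate from the gap in icmp_seq values.
last_seq_before, first_seq_after = 141, 187
ping_interval_s = 1.0  # default ping interval
print((first_seq_after - last_seq_before - 1) * ping_interval_s)  # 45.0 seconds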

lab-nyc-vcg2 is now forwarding the traffic to the end destination, because it was selected as the secondary partner gateway:

root@lab-nyc-vcg2:/# tcpdump -i eth1 icmp

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode

listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes

08:34:31.375454 IP 10.0.0.190 > 192.168.40.1: ICMP echo request, id 962, seq 33, length 64

08:34:31.375520 IP 192.168.40.1 > 10.0.0.190: ICMP echo reply, id 962, seq 33, length 64

08:34:32.374597 IP 10.0.0.190 > 192.168.40.1: ICMP echo request, id 962, seq 34, length 64

08:34:32.374666 IP 192.168.40.1 > 10.0.0.190: ICMP echo reply, id 962, seq 34, length 64

Confirm this by listing the active flows on both of the POP gateways.

The BGP routing table in lab-atl-pe confirms the change:

lab-atl-pe# sh ip bgp

BGP table version is 0, local router ID is 192.168.40.254

Status codes: s suppressed, d damped, h history, * valid, > best, = multipath,

i internal, r RIB-failure, S Stale, R Removed

Origin codes: i - IGP, e - EGP, ? - incomplete

Network Next Hop Metric LocPrf Weight Path

*>i10.0.0.0/24 192.168.32.254 2 100 0 65222 ?

*>i10.0.0.2/32 192.168.32.254 2 100 0 65222 ?

*>i10.128.0.0/24 192.168.32.254 2 100 0 65222 ?

*>i10.128.0.2/32 192.168.32.254 2 100 0 65222 ?

*>i192.168.22.0 192.168.32.254 0 100 0 65222 i


*> 192.168.40.0 192.168.40.1 0 0 250 i

The branch network is now marked as reachable via 192.168.32.254, which points to lab-nyc-vcg2.

At this point, we will re-enable the service on lab-sfo-vcg2 to let it resume its role as primary gateway:

root@pod:~# lxc-attach -n lab-sfo-vcg2 -- /usr/bin/python /opt/vc/bin/vc_procmon restart

All daemons have restarted, and traffic is directed back to lab-sfo-vcg2:

64 bytes from 192.168.40.1: icmp_seq=418 ttl=60 time=0.849 ms

64 bytes from 192.168.40.1: icmp_seq=419 ttl=60 time=0.978 ms

64 bytes from 192.168.40.1: icmp_seq=442 ttl=60 time=0.958 ms

64 bytes from 192.168.40.1: icmp_seq=443 ttl=60 time=0.965 ms

root@lab-sfo-vcg2:~# tcpdump -i eth1 icmp

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode

listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes

08:45:46.232216 IP 10.0.0.190 > 192.168.40.1: ICMP echo request, id 965, seq 32, length 64

08:45:46.232309 IP 192.168.40.1 > 10.0.0.190: ICMP echo reply, id 965, seq 32, length 64

08:45:47.233161 IP 10.0.0.190 > 192.168.40.1: ICMP echo request, id 965, seq 33, length 64

Confirm that the BGP session between lab-sfo-pe and lab-sfo-vcg2 has indeed re-established:

lab-sfo-pe# show ip bgp summary

BGP router identifier 192.168.33.11, local AS number 150

RIB entries 11, using 1232 bytes of memory

Peers 3, using 13 KiB of memory

Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd

192.168.21.254 4 65111 535 538 0 0 0 00:02:13 5

192.168.31.1 4 150 838 853 0 0 0 08:49:23 1

192.168.33.12 4 150 842 854 0 0 0 08:49:20 1

lab-sfo-pe# show ip bgp

BGP table version is 0, local router ID is 192.168.33.11

Status codes: s suppressed, d damped, h history, * valid, > best, = multipath,

i internal, r RIB-failure, S Stale, R Removed

Origin codes: i - IGP, e - EGP, ? - incomplete

Network Next Hop Metric LocPrf Weight Path

*> 10.0.0.0/24 192.168.21.254 1 0 65111 ?

* i 192.168.33.12 2 100 0 65222 ?

*> 10.0.0.2/32 192.168.21.254 1 0 65111 ?

* i 192.168.33.12 2 100 0 65222 ?

*> 10.128.0.0/24 192.168.21.254 1 0 65111 ?

* i 192.168.33.12 2 100 0 65222 ?

*> 10.128.0.2/32 192.168.21.254 1 0 65111 ?

* i 192.168.33.12 2 100 0 65222 ?

*> 192.168.21.0 192.168.21.254 0 0 65111 i

*>i192.168.22.0 192.168.33.12 0 100 0 65222 i

*>i192.168.40.0 192.168.31.1 0 100 0 250 i

5.8. Gateway Selection Variations

Up to this point in the lab exercise, the primary and secondary Partner Gateways for each of the edges have been identical, which makes traffic exchange between the edges trivial. As long as the edges share a common gateway (primary or secondary), they will use this common gateway to establish the VPN path between the sites.
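The selection behavior described here can be illustrated with a small sketch (our simplification, not VeloCloud's actual algorithm): two edges can exchange VPN traffic via a single gateway only if their assigned gateway lists intersect.

# Pick a common gateway for edge-to-edge VPN, preferring the primary (listed first).
edge1_gateways = ["lab-sfo-vcg2", "lab-nyc-vcg2"]  # primary first
edge2_gateways = ["lab-sfo-vcg2", "lab-nyc-vcg2"]

common = [gw for gw in edge1_gateways if gw in edge2_gateways]
print(common[0] if common else "no common gateway - transit the SP core")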

Once a rollout starts scaling out, it will not always be the case that edges share a common gateway. If that is


the case, the desire is to leverage the ISP core network to exchange traffic between the sites. The Partner

Gateway will still provide Dynamic Multipath Optimization to protect the traffic but will send it over the ISP core

to the Partner Gateway that connects to the destination site.

In this section, we will force such a scenario by attaching lab-edge1 exclusively to lab-sfo-vcg2 and lab-edge2

to lab-nyc-vcg2. No secondary gateway will be provisioned so that the edges do not have a common gateway.

First, let's change the Gateway assignment for lab-edge1 in Manage Customers | ACME Corporation | Configuration | lab-edge1:

Do the same for lab-edge2, but associate it with lab-nyc-vcg2, as follows:

Don’t forget to click ‘Save Changes’ after modifying the Gateway assignment.

To verify that the new settings took effect, go to the Gateway page; it will now show that only one edge is attached to each of the Partner Gateways:

Click on the 'View' hyperlink to verify the assignments. We can also complete this step on the gateways directly:

root@pod:~# lxc-attach -n lab-sfo-vcg2 -- /opt/vc/bin/debug.py --list_edge 3

Name Enterprise Logical ID VC Private IP

lab-edge1 ACME Corporation 44f8936d-2180-49e5-bb45-cfb1862fb08e 169.254.129.2

root@pod:~# lxc-attach -n lab-nyc-vcg2 -- /opt/vc/bin/debug.py --list_edge 3

Name Enterprise Logical ID VC Private IP

lab-edge2 ACME Corporation 2f403151-0b69-48fd-ac71-fdc727992849 169.254.129.3

Connect back to lab-client1 and sanity-check the paths to lab-client2 and lab-srv-sp:

root@lab-client1:~# ping 10.128.0.217

PING 10.128.0.217 (10.128.0.217) 56(84) bytes of data.

64 bytes from 10.128.0.217: icmp_seq=1 ttl=58 time=30.0 ms

64 bytes from 10.128.0.217: icmp_seq=2 ttl=58 time=30.0 ms

root@lab-client1:~# ping 192.168.40.1

PING 192.168.40.1 (192.168.40.1) 56(84) bytes of data.

64 bytes from 192.168.40.1: icmp_seq=1 ttl=60 time=11.0 ms

64 bytes from 192.168.40.1: icmp_seq=2 ttl=60 time=11.0 ms

All paths are still operational, but issue a traceroute to lab-client2 to see the changed behavior:

root@lab-client1:~# traceroute -n 10.128.0.217

traceroute to 10.128.0.217 (10.128.0.217), 30 hops max, 60 byte packets


1 10.0.0.1 0.089 ms 0.038 ms 0.025 ms

2 * * *

3 192.168.21.1 11.490 ms 11.848 ms 12.062 ms

4 192.168.33.12 12.226 ms 12.424 ms 13.096 ms

5 * * *

6 * * *

7 10.128.0.217 33.535 ms 29.651 ms 29.614 ms

While several nodes in the path do not respond to ICMP messages, we can clearly see that the traffic is ingested by lab-sfo-vcg2 and handed off to lab-sfo-pe (hop 3), which in turn delivers it to the NYC POP (hop 4). This mechanism is commonly used to facilitate inter-region connectivity where an international backbone is established but the last mile is delivered through SD-WAN. Edges in this scenario will not have a common gateway that they can rely on to exchange VPN traffic.

Let's also take a closer look at the routing tables of the SFO POP elements:

lab-sfo-vcg2# show ip bgp

BGP table version is 0, local router ID is 192.168.21.254

Status codes: s suppressed, d damped, h history, * valid, > best, = multipath,

i internal, r RIB-failure, S Stale, R Removed

Origin codes: i - IGP, e - EGP, ? - incomplete

Network Next Hop Metric LocPrf Weight Path

*> 10.0.0.0/24 0.0.0.0 1 32768 ?

*> 10.0.0.2/32 0.0.0.0 1 32768 ?

*> 10.128.0.0/24 192.168.21.1 0 150 65222 ?

*> 10.128.0.2/32 192.168.21.1 0 150 65222 ?

*> 192.168.21.0 0.0.0.0 0 32768 i

*> 192.168.22.0 192.168.21.1 0 150 65222 i

*> 192.168.40.0 192.168.21.1 0 150 250 i

lab-sfo-pe# show ip bgp

BGP table version is 0, local router ID is 192.168.33.11

Status codes: s suppressed, d damped, h history, * valid, > best, = multipath,

i internal, r RIB-failure, S Stale, R Removed

Origin codes: i - IGP, e - EGP, ? - incomplete

Network Next Hop Metric LocPrf Weight Path

*> 10.0.0.0/24 192.168.21.254 1 0 65111 ?

*> 10.0.0.2/32 192.168.21.254 1 0 65111 ?

*>i10.128.0.0/24 192.168.33.12 1 100 0 65222 ?

*>i10.128.0.2/32 192.168.33.12 1 100 0 65222 ?

*> 192.168.21.0 192.168.21.254 0 0 65111 i

*>i192.168.22.0 192.168.33.12 0 100 0 65222 i

*>i192.168.40.0 192.168.31.1 0 100 0 250 i

To continue the exercise, revert the gateway assignments for both edges back to their original setting and set

lab-sfo-vcg2 as the primary gateway and lab-nyc-vcg2 as the secondary gateway:

It is important for the remainder of the exercise that this action is completed on both edges.

Optionally, repeat the verification steps from the beginning of this section for the other edge to confirm that both edges are attached to each of the partner gateways.

5.9. Gateway Monitoring

Let’s now spend some time taking a closer look at what needs to be monitored to keep gateways within operational parameters. First, let’s start with important log file locations.

The gateway daemon writes all logs to /var/log. The following files are present and should be inspected:

• /var/log/activation.log : contains all events surrounding the activation of the gateway against an

orchestrator

• /var/log/gwd.log : contains activity of the data path towards the edges including IPsec tunnel

establishment, VMCP path measurement and establishment

• /var/log/mgd.log : contains activity of the management daemon such as policy updates and heartbeats

to the orchestrator

• /var/log/natd.log : contains NAT entry transactions

• /var/log/vc_procmon.log : contains activity around process restart of any of the service daemons as

well as memory usage over time

• /var/log/quagga/bgp.log : contains all debugging information for the quagga routing daemon

• /var/log/syslog : contains basic routing daemon events and system events
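These files can also be followed live while reproducing an issue, for example:

root@lab-nyc-vcg2:~# tail -f /var/log/gwd.log /var/log/mgd.log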

We can, for example, review the activation that was completed in a previous exercise: root@lab-nyc-vcg2:/var/log# cat activation.log

2016-08-30T22:43:17.345 INFO [config (3429:MainThread:7fc9078e5740)] Updating configurations for

managementPlane

2016-08-30T22:43:17.345 INFO [activate (3429:MainThread:7fc9078e5740)] Activating Gateway, args =

Namespace(activation_key='GEQY-3ZLF-N74M-MV4D', asJson=False, force=False, ignorecerterror=True,

model=None, outfile=None, progress=True, resume_download=False, server='24.17.0.11')

2016-08-30T22:43:17.345 INFO [mgd (3429:MainThread:7fc9078e5740)] VCO primary = 192.168.17.10,

secondary = None

2016-08-30T22:43:17.521 ERROR [mgd (3429:MainThread:7fc9078e5740)] VCO Failed to send an imageUpdate

policy, assuming 'No Update' policy

2016-08-30T22:43:17.521 INFO [mgd (3429:MainThread:7fc9078e5740)] VCO primary = 24.17.0.11,

secondary = None

2016-08-30T22:43:17.521 DEBUG [heartbeat (3429:MainThread:7fc9078e5740)] Testing Heartbeat

2016-08-30T22:43:17.531 DEBUG [heartbeat (3429:MainThread:7fc9078e5740)] Heartbeat OK

2016-08-30T22:43:17.531 INFO [mgd (3429:MainThread:7fc9078e5740)] Activation successful

2016-08-30T22:43:17.545 INFO [mgd (3429:MainThread:7fc9078e5740)] Pushing 1 activation progress

event(s)

2016-08-30T22:43:17.545 INFO [mgd (3429:MainThread:7fc9078e5740)] Saving activation configuration

2016-08-30T22:43:17.548 DEBUG [mgd (3429:MainThread:7fc9078e5740)] Activation progress:

result={'activation_key': 'GEQY-3ZLF-N74M-MV4D', 'activated': True, 'vco': '24.17.0.11',

'ignorecerterror': True, 'progress': 100, 'message': 'Activation successful'}

2016-09-17T08:05:52.239 INFO [config (31009:MainThread:7ff40706a740)] Updating configurations for

managementPlane

2016-09-17T08:05:52.240 INFO [activate (31009:MainThread:7ff40706a740)] Activating Gateway, args =

Namespace(activation_key='2HSN-Y9AD-4FNC-DNZY', asJson=False, force=False, ignorecerterror=True,

model=None, outfile=None, progress=True, resume_download=False, server='192.168.17.10')

2016-09-17T08:05:52.240 INFO [mgd (31009:MainThread:7ff40706a740)] VCO primary = 24.17.0.11,

secondary = None

2016-09-17T08:05:52.398 ERROR [mgd (31009:MainThread:7ff40706a740)] VCO Failed to send an

imageUpdate policy, assuming 'No Update' policy

2016-09-17T08:05:52.399 INFO [mgd (31009:MainThread:7ff40706a740)] VCO primary = 24.17.0.11,

secondary = None

2016-09-17T08:05:52.399 DEBUG [heartbeat (31009:MainThread:7ff40706a740)] Testing Heartbeat

2016-09-17T08:05:52.410 DEBUG [heartbeat (31009:MainThread:7ff40706a740)] Heartbeat OK

2016-09-17T08:05:52.410 INFO [mgd (31009:MainThread:7ff40706a740)] Activation successful

2016-09-17T08:05:52.431 INFO [mgd (31009:MainThread:7ff40706a740)] Pushing 1 activation progress

event(s)

2016-09-17T08:05:52.432 INFO [mgd (31009:MainThread:7ff40706a740)] Saving activation configuration

2016-09-17T08:05:52.435 DEBUG [mgd (31009:MainThread:7ff40706a740)] Activation progress:

result={'activation_key': '2HSN-Y9AD-4FNC-DNZY', 'activated': True, 'vco': '24.17.0.11',

'ignorecerterror': True, 'progress': 100, 'message': 'Activation successful'}


Since the gateways do not contain any persistent state, they can be considered disposable entities that can easily be replaced. VeloCloud does, however, recommend monitoring several system-level as well as service-level metrics. The following table lists the key metrics to keep an eye on, the Linux command used to obtain each value, the alerting threshold, and the suggested remediation.

Metrics/parameters, their equivalent shell command, alert threshold and remediation:

• CPU load
  Command: sudo uptime | awk -F',' '{print $5}'
  Threshold: > 65.0
  Remediation: add more VCGs to the pool or add CPUs to the VCG

• CPU usage
  Command: sudo mpstat -P ALL
  Threshold: 100% for 3s
  Remediation: sudo service vc_process_monitor restart

• Free memory
  Command: sudo free -m | grep Mem | awk '{print $4}'
  Threshold: < 2000
  Remediation: sudo service vc_process_monitor restart

• Used memory
  Command: sudo free -m | grep Mem | awk '{print $3}'
  Threshold: > 3500
  Remediation: sudo service vc_process_monitor restart

• Disk usage
  Command: sudo df -kh --total | grep total | awk '{print $4}'
  Threshold: < 8G
  Remediation: remove excess log files

• Network interface errors
  Command: cat /proc/net/dev | grep eth0 | awk '{print $4 + $5 + $12 + $13}'
  Threshold: > 0
  Remediation: sudo dmesg | grep eth

• VCG gwd service
  Command: ps -eaf | grep gwd | wc -l
  Threshold: = 0
  Remediation: sudo service vc_process_monitor restart

• VCG mgd service
  Command: ps -eaf | grep mgd | wc -l
  Threshold: = 0
  Remediation: sudo service vc_process_monitor restart

• VCG procmon service
  Command: ps -eaf | grep procmon | wc -l
  Threshold: = 0
  Remediation: sudo service vc_process_monitor restart

• VCG natd service
  Command: ps -eaf | grep natd | wc -l
  Threshold: = 0
  Remediation: sudo service vc_process_monitor restart

• Free NAT table count
  Command: sudo /opt/vc/bin/getcntr -c natd.nat_shmem_free_entries -d vcgwnat.com
  Threshold: < 50000
  Remediation: rm /dev/shm/natd.shmem; rm /dev/shm/natd.pinfo; sudo service vc_process_monitor restart

• Free PAT table count
  Command: sudo /opt/vc/bin/getcntr -c natd.pinfo_shmem_free_entries -d vcgwnat.com
  Threshold: < 50000
  Remediation: rm /dev/shm/natd.shmem; rm /dev/shm/natd.pinfo; sudo service vc_process_monitor restart

• Gateway activation state
  Command: sudo /opt/vc/bin/is_activated.py
  Threshold: = False
  Remediation: (re)activate the VCG using /opt/vc/bin/activate.py

• Activated VCO name
  Command: sudo cat /opt/vc/.gateway.info | jq .configuration.managementPlane.data.managementPlaneProxy.primary
  Threshold: = null
  Remediation: (re)activate the VCG using /opt/vc/bin/activate.py

• VCG version (gwd)
  Command: sudo /opt/vc/sbin/gwd -v
  Threshold: N/A
  Remediation: sudo service vc_process_monitor restart

• VCG version (mgd)
  Command: sudo /opt/vc/sbin/mgd -v
  Threshold: N/A
  Remediation: sudo service vc_process_monitor restart

• VCG ICMP responder
  Command: sudo /opt/vc/bin/debug.py --icmp_monitor
  Threshold: DOWN
  Remediation: verify the “wan” interface is connected to the PE

• NTP offset
  Command: sudo ntpq -p
  Threshold: > 15
  Remediation: sudo service ntp stop; manually sync with ntpdate; sudo service ntp start

• NTP time zone
  Command: sudo cat /etc/timezone
  Threshold: != Etc/UTC
  Remediation: echo "Etc/UTC" | sudo tee /etc/timezone; sudo dpkg-reconfigure --frontend noninteractive tzdata

Execute a few sample commands on lab-nyc-vcg2: root@lab-nyc-vcg2:~# /opt/vc/bin/getcntr -c natd.pinfo_shmem_free_entries -d vcgwnat.com

256000


root@lab-nyc-vcg2:~# ps -eaf | grep mgd | wc -l

1

The getcntr command above reports the number of free PAT table entries remaining on the gateway, which indirectly indicates how many translations are in use. Note that when counting processes with ps piped into grep, the grep process can match itself; a bracketed pattern such as grep "[m]gd" avoids the self-match.

Each command in the list produces a numerical or string value that can be ingested into a network management system (NMS). Through the NMS, you can then set alarms on the thresholds listed in the table. Having access to the raw metrics provides maximum flexibility for integrating with a wide variety of monitoring systems.
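As an illustration of how these raw metrics could be consumed, several of the checks above can be combined into a small polling script. The sketch below is an assumption on our part (the script layout, the chosen subset of checks, and the echo-based alerting are illustrative, and error handling is omitted); it uses only commands and thresholds taken from the table:

#!/bin/bash
# Minimal VCG health check sketch built from the table above.

# Free memory in MB (alert threshold: < 2000)
free_mem=$(free -m | grep Mem | awk '{print $4}')
if [ "$free_mem" -lt 2000 ]; then
    echo "ALERT: free memory ${free_mem}MB is below 2000MB"
fi

# gwd process present (alert threshold: = 0); the bracketed pattern
# prevents grep from counting itself as a match.
gwd_count=$(ps -eaf | grep "[g]wd" | wc -l)
if [ "$gwd_count" -eq 0 ]; then
    echo "ALERT: gwd not running; try: sudo service vc_process_monitor restart"
fi

# Free NAT table entries (alert threshold: < 50000)
nat_free=$(/opt/vc/bin/getcntr -c natd.nat_shmem_free_entries -d vcgwnat.com)
if [ "$nat_free" -lt 50000 ]; then
    echo "ALERT: only ${nat_free} free NAT table entries remain"
fi

# Activation state (alert condition: = False)
if /opt/vc/bin/is_activated.py | grep -q False; then
    echo "ALERT: gateway is not activated against an Orchestrator"
fi

Run periodically (for example, from cron), such a script prints nothing while the gateway is healthy, which makes it easy to feed into an NMS or alerting pipeline.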

The Orchestrator also captures essential information about the gateway status and health. This can be found

in the Gateway page in the Operator portal:

All events that occurred on the gateways are available at the ‘operator events’ section in the navigation pane:

Use the filtering functions to highlight logs of the ‘lab-nyc-vcg2’ gateway only.

Additional detail about each event can be obtained by clicking on the event type for a particular line item.

6. BGP Path Influencing

In this section, we will work through an exercise of signaling preference to a particular partner gateway or PE

router by leveraging BGP functions like AS-PATH prepend, Local preference and communities.

These are standard techniques that can be used by operators of Partner Gateways to influence the default


path selection of the BGP algorithm. Both inbound and outbound preferences can be signaled. Keep in mind

that since we’re using multi-instance BGP, this can be done on a per-customer basis.

6.1. Influencing outbound VCG selection with AS-PATH prepend

In this exercise, we will leverage the AS-PATH prepend functionality of BGP to let traffic from clients to lab-srv-sp gravitate to the secondary POP through lab-nyc-vcg2. We will do this by making routing via the primary POP gateway (SFO) less attractive by adding additional ASNs to the AS-PATH. BGP best path selection avoids routes with longer AS-PATHs.

This influencing is done from the attached PE router (lab-sfo-pe), where we set the AS-PATH prepend on the lab-srv-sp route (192.168.40.0/24). This prepended (i.e., longer and thus less preferred) path is then communicated to the partner gateway as well.

Before we start, inspect the current state of the learned route to lab-srv-sp on both of the partner gateways: root@pod:~# lxc-attach -n lab-sfo-vcg2 -- vtysh -c "show ip bgp"

BGP table version is 0, local router ID is 127.0.0.1

Status codes: s suppressed, d damped, h history, * valid, > best, = multipath,

i internal, r RIB-failure, S Stale, R Removed

Origin codes: i - IGP, e - EGP, ? - incomplete

Network Next Hop Metric LocPrf Weight Path

*> 10.0.0.0/24 0.0.0.0 2 32768 ?

*> 10.0.0.2/32 0.0.0.0 2 32768 ?

*> 10.128.0.0/24 0.0.0.0 2 32768 ?

*> 10.128.0.2/32 0.0.0.0 2 32768 ?

*> 192.168.21.0 0.0.0.0 0 32768 i

*> 192.168.22.0 192.168.21.1 0 150 65222 i

*> 192.168.40.0 192.168.21.1 0 150 250 i

root@pod:~# lxc-attach -n lab-nyc-vcg2 -- vtysh -c "show ip bgp"

BGP table version is 0, local router ID is 127.0.0.1

Status codes: s suppressed, d damped, h history, * valid, > best, = multipath,

i internal, r RIB-failure, S Stale, R Removed

Origin codes: i - IGP, e - EGP, ? - incomplete

Network Next Hop Metric LocPrf Weight Path

*> 10.0.0.0/24 0.0.0.0 3 32768 ?

*> 10.0.0.2/32 0.0.0.0 3 32768 ?

*> 10.128.0.0/24 0.0.0.0 3 32768 ?

*> 10.128.0.2/32 0.0.0.0 3 32768 ?

*> 192.168.22.0 0.0.0.0 0 32768 i

*> 192.168.40.0 192.168.22.1 0 150 250 i

Connect to the lab-sfo-pe quagga routing daemon: root@pod:~# lxc-attach -n lab-sfo-pe

root@lab-sfo-pe:~# vtysh

Hello, this is Quagga (version 0.99.22.4).

Copyright 1996-2005 Kunihiro Ishiguro, et al.

lab-sfo-pe#

Add the following route-map to prepend the AS-PATH of the learned downstream route to lab-srv-sp: lab-sfo-pe# conf t

lab-sfo-pe (config)# ip prefix-list SVC seq 5 permit 192.168.40.0/24

lab-sfo-pe (config)# route-map OUTBOUND permit 10

lab-sfo-pe (config-route-map)# match ip address prefix-list SVC

lab-sfo-pe (config-route-map)# set as-path prepend 150 150

lab-sfo-pe (config-route-map)# router bgp 150

lab-sfo-pe (config-router)# neighbor 192.168.21.254 route-map OUTBOUND out


lab-sfo-pe (config-router)# ^Z

lab-sfo-pe # wr

The configuration should look as follows now: lab-sfo-pe# sh run

Building configuration...

!

router bgp 150

bgp router-id 192.168.33.11

redistribute static

neighbor 192.168.21.254 remote-as 65555

neighbor 192.168.21.254 next-hop-self

neighbor 192.168.21.254 route-map OUTBOUND out

neighbor 192.168.31.1 remote-as 150

neighbor 192.168.31.1 next-hop-self

neighbor 192.168.33.12 remote-as 150

neighbor 192.168.33.12 next-hop-self

!

ip prefix-list SVC seq 5 permit 192.168.40.0/24

!

route-map OUTBOUND permit 10

match ip address prefix-list SVC

set as-path prepend 150 150

!

end

We will need to soft reset the BGP session to make this change effective: lab-sfo-pe# clear ip bgp * soft
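Optionally, inspect what the PE now advertises toward the gateway with a standard Quagga show command (not part of the original exercise steps); the 192.168.40.0 entry should carry the prepended AS-PATH:

lab-sfo-pe# show ip bgp neighbors 192.168.21.254 advertised-routes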

When we inspect the BGP routing table on both gateways again, we can now see that the additional ASNs

have been added to the path for the lab-srv-sp destination subnet: root@pod:~# lxc-attach -n lab-sfo-vcg2 -- vtysh -c "show ip bgp"

BGP table version is 0, local router ID is 127.0.0.1

Status codes: s suppressed, d damped, h history, * valid, > best, = multipath,

i internal, r RIB-failure, S Stale, R Removed

Origin codes: i - IGP, e - EGP, ? - incomplete

Network Next Hop Metric LocPrf Weight Path

*> 10.0.0.0/24 0.0.0.0 1 32768 ?

*> 10.0.0.2/32 0.0.0.0 1 32768 ?

*> 10.128.0.0/24 0.0.0.0 1 32768 ?

*> 10.128.0.2/32 0.0.0.0 1 32768 ?

*> 192.168.21.0 0.0.0.0 0 32768 i

*> 192.168.40.0 192.168.21.1 0 150 150 150 250 i

root@pod:~# lxc-attach -n lab-nyc-vcg2 -- vtysh -c "show ip bgp"

BGP table version is 0, local router ID is 127.0.0.1

Status codes: s suppressed, d damped, h history, * valid, > best, = multipath,

i internal, r RIB-failure, S Stale, R Removed

Origin codes: i - IGP, e - EGP, ? - incomplete

Network Next Hop Metric LocPrf Weight Path

*> 10.0.0.0/24 0.0.0.0 2 32768 ?

*> 10.0.0.2/32 0.0.0.0 2 32768 ?

*> 10.128.0.0/24 0.0.0.0 2 32768 ?

*> 10.128.0.2/32 0.0.0.0 2 32768 ?

*> 192.168.21.0 192.168.22.1 0 150 65111 i

*> 192.168.22.0 0.0.0.0 0 32768 i

*> 192.168.40.0 192.168.22.1 0 150 250 i

This will cause the edges to use the more preferred path (and gateway, in this case) for sending traffic to lab-srv-sp over lab-nyc-vcg2.


This changed behavior can also be verified in the edge routing tables: root@pod:~# lxc-attach -n lab-edge1 -- /opt/vc/bin/debug.py --routes 192.168.40.0

192.168.40.0 255.255.255.0 cloud any 14000c18-0000-0000-0000-000000000000

14000c18-0000-0000-0000-000000000000 True 0 416 PSBR 0 any

N/A N/A

192.168.40.0 255.255.255.0 cloud any 14000b18-0000-0000-0000-000000000000

14000b18-0000-0000-0000-000000000000 True 0 464 PSBR 0 any

N/A N/A

The first listed route is preferred and points to the inverted hex-coded address of lab-nyc-vcg2 (24.12.0.20).
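To sanity check that decoding yourself, convert the leading four hex bytes of the logical ID (14 00 0c 18) to decimal in reverse order; for example, from any bash shell:

root@pod:~# printf '%d.%d.%d.%d\n' 0x18 0x0c 0x00 0x14
24.12.0.20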

This can be confirmed through a tcpdump on lab-nyc-vcg2: root@lab-nyc-vcg2:~# tcpdump -i eth1 icmp

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode

listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes

09:22:00.789704 IP 10.0.0.190 > 192.168.40.1: ICMP echo request, id 972, seq 638, length 64

09:22:01.790934 IP 10.0.0.190 > 192.168.40.1: ICMP echo request, id 972, seq 639, length 64

09:22:02.791837 IP 10.0.0.190 > 192.168.40.1: ICMP echo request, id 972, seq 640, length 64

09:22:03.793001 IP 10.0.0.190 > 192.168.40.1: ICMP echo request, id 972, seq 641, length 64

However, we see the traffic in only one direction. Given that this is outbound influencing, the inbound path did not change and remains as it was before we added ASNs to the prefix.

In general, asymmetric routing is not preferred in service provider deployments but may be necessary to

distribute network load or to work around network problems or areas under maintenance.

This can be found by inspecting the BGP tables on lab-atl-pe: root@pod:~# lxc-attach -n lab-atl-pe -- vtysh -c 'sh ip bgp'

BGP table version is 0, local router ID is 192.168.40.254

Status codes: s suppressed, d damped, h history, * valid, > best, = multipath,

i internal, r RIB-failure, S Stale, R Removed

Origin codes: i - IGP, e - EGP, ? - incomplete

Network Next Hop Metric LocPrf Weight Path

* i10.0.0.0/24 192.168.32.254 2 100 0 65222 ?

*>i 192.168.31.254 1 100 0 65111 ?

* i10.0.0.2/32 192.168.32.254 2 100 0 65222 ?

*>i 192.168.31.254 1 100 0 65111 ?

*>i10.128.0.0/24 192.168.31.254 1 100 0 65111 ?

* i 192.168.32.254 2 100 0 65222 ?

*>i10.128.0.2/32 192.168.31.254 1 100 0 65111 ?

* i 192.168.32.254 2 100 0 65222 ?

*>i192.168.21.0 192.168.31.254 0 100 0 65111 i

*>i192.168.22.0 192.168.32.254 0 100 0 65222 i

*> 192.168.40.0 192.168.40.1 0 0 250 i

The return path to the client is still attached to lab-sfo-vcg2. The return traffic should be found there: root@lab-sfo-vcg2:~# tcpdump -i eth1 icmp

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode

listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes

09:27:06.970724 IP 192.168.40.1 > 10.0.0.190: ICMP echo reply, id 972, seq 944, length 64

09:27:07.969866 IP 192.168.40.1 > 10.0.0.190: ICMP echo reply, id 972, seq 945, length 64

09:27:08.970790 IP 192.168.40.1 > 10.0.0.190: ICMP echo reply, id 972, seq 946, length 64

As an additional verification to see the asymmetric traffic, you can do the tcpdump on lab-atl-pe router: root@lab-atl-pe:~# tcpdump -c3 -i eth1 icmp

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode

listening on eth1, link-type EN10MB (Ethernet), capture size 262144 bytes

22:23:44.857147 IP 10.0.0.201 > 192.168.40.1: ICMP echo request, id 1488, seq 10429, length 64

22:23:45.858402 IP 10.0.0.201 > 192.168.40.1: ICMP echo request, id 1488, seq 10430, length 64

22:23:46.859528 IP 10.0.0.201 > 192.168.40.1: ICMP echo request, id 1488, seq 10431, length 64


root@lab-atl-pe:~# tcpdump -c3 -i eth0 icmp

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode

listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes

22:25:46.989269 IP 192.168.40.1 > 10.0.0.201: ICMP echo reply, id 1488, seq 10551, length 64

22:25:47.991256 IP 192.168.40.1 > 10.0.0.201: ICMP echo reply, id 1488, seq 10552, length 64

22:25:48.992128 IP 192.168.40.1 > 10.0.0.201: ICMP echo reply, id 1488, seq 10553, length 64

6.2. Influencing outbound VCG selection with Local Preference

In this exercise, we’ll override the AS-PATH selection using local preference and make lab-sfo-vcg2 the

primary gateway for client traffic destined to the service core.

This is influenced from the Orchestrator directly and no CLI interaction is required to complete this step. Go to

the ACME Corporation account and click through to Configure | Customer:

Select lab-sfo-vcg2 from the drop-down menu in the gateway pool section and click Edit. Add an inbound BGP filter that matches the service core prefix (192.168.40.0/24) and sets its local preference to 200. Higher local preferences indicate a preferred path.

Also, set an explicit lower local preference (50) on the same route on lab-nyc-vcg2.


After making the changes, click ‘Save Changes’ to make them effective. This will attach a Local Preference to

all routes received by the partner gateway.

Next, confirm that these changes are indeed applied to both of the partner gateways: root@pod:~# lxc-attach -n lab-sfo-vcg2 -- vtysh -c "show ip bgp"

BGP table version is 0, local router ID is 127.0.0.1

Status codes: s suppressed, d damped, h history, * valid, > best, = multipath,

i internal, r RIB-failure, S Stale, R Removed

Origin codes: i - IGP, e - EGP, ? - incomplete

Network Next Hop Metric LocPrf Weight Path

*> 10.0.0.0/24 0.0.0.0 2 32768 ?

*> 10.0.0.2/32 0.0.0.0 2 32768 ?

*> 10.128.0.0/24 0.0.0.0 2 32768 ?

*> 10.128.0.2/32 0.0.0.0 2 32768 ?

*> 192.168.21.0 0.0.0.0 0 32768 i

*> 192.168.40.0 192.168.21.1 200 0 150 150 150 250 i

root@pod:~# lxc-attach -n lab-nyc-vcg2 -- vtysh -c "show ip bgp"

BGP table version is 0, local router ID is 127.0.0.1

Status codes: s suppressed, d damped, h history, * valid, > best, = multipath,

i internal, r RIB-failure, S Stale, R Removed

Origin codes: i - IGP, e - EGP, ? - incomplete

Network Next Hop Metric LocPrf Weight Path

*> 10.0.0.0/24 0.0.0.0 3 32768 ?

*> 10.0.0.2/32 0.0.0.0 3 32768 ?

*> 10.128.0.0/24 0.0.0.0 3 32768 ?

*> 10.128.0.2/32 0.0.0.0 3 32768 ?

*> 192.168.22.0 0.0.0.0 0 32768 i

*> 192.168.40.0 192.168.22.1 50 0 150 250 i

This indicates that lab-sfo-vcg2 now has the preferred route to lab-srv-sp, which is propagated to all the edges.


The traffic on the partner handoff is now symmetrical again since the original AS-PATH selection has been

overridden by the Local Preference: root@lab-sfo-vcg2:~# tcpdump -i eth1 icmp

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode

listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes

09:45:34.238787 IP 10.0.0.190 > 192.168.40.1: ICMP echo request, id 972, seq 2051, length 64

09:45:34.238864 IP 192.168.40.1 > 10.0.0.190: ICMP echo reply, id 972, seq 2051, length 64

09:45:35.239806 IP 10.0.0.190 > 192.168.40.1: ICMP echo request, id 972, seq 2052, length 64

09:45:35.239886 IP 192.168.40.1 > 10.0.0.190: ICMP echo reply, id 972, seq 2052, length 64

09:45:36.240529 IP 10.0.0.190 > 192.168.40.1: ICMP echo request, id 972, seq 2053, length 64

09:45:36.240607 IP 192.168.40.1 > 10.0.0.190: ICMP echo reply, id 972, seq 2053, length 64

We can see that lab-sfo-vcg2 is now selected to route the outbound traffic through to its final destination at lab-srv-sp.

This can also be confirmed via the Orchestrator in the Overlay Flow Control page. Go to Acme | Configuration |

Overlay Flow Control. Note that both the AS Path length and the Local Preference have been updated.

Optionally, you can also confirm this behavior by looking at the route tables in the edge.
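The same command used at the start of this section applies; the preferred (first listed) route should now point to the logical ID of lab-sfo-vcg2:

root@pod:~# lxc-attach -n lab-edge1 -- /opt/vc/bin/debug.py --routes 192.168.40.0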

6.3. Influencing inbound Gateway selection via communities

As a final step in the BGP influencing exercises, we also want to highlight that an inbound preference can be

advertised to the PE routers via community strings, which can be acted on by the PE router. This influence is

again done through the Orchestrator. Go back to the BGP configuration of the gateways inside the ACME

Corporation customer context.

Add an outbound BGP filter on lab-sfo-vcg2 that tags the advertised routes with a community string (100:111, as seen in the verification output below).


Note that it is important here to uncheck the ‘Exact match’ checkbox to allow any prefix to be matched. The

prefixes that are intended to be tagged with the communities are the subnets of the branches such as

10.0.0.0/24 and 10.128.0.0/24, and any local subnets that the gateways include in their advertisements. All of

these routes will now be tagged with a community value so that the origin of the prefix can be communicated to

downstream routers. These routers can then act on the community value.

Make a similar addition to lab-nyc-vcg2, but use a different community string (100:222).


After making the changes, click ‘Save changes’ to make them active via the partner gateway.

As a last step, confirm that the community strings are attached to the advertised routes and are preserved

when ingested by the attached PE router.

On the POP PE routers:

root@pod:~# lxc-attach -n lab-sfo-pe -- vtysh -c "show ip bgp 10.0.0.0/24"

BGP routing table entry for 10.0.0.0/24

Paths: (1 available, best #1, table Default-IP-Routing-Table)

Advertised to non peer-group peers:

192.168.31.1 192.168.33.12

65111

192.168.21.254 from 192.168.21.254 (127.0.0.1)

Origin incomplete, metric 2, localpref 100, valid, external, best

Community: 100:111

Last update: Thu Sep 15 09:53:40 2016

root@pod:~# lxc-attach -n lab-nyc-pe -- vtysh -c "show ip bgp 10.0.0.0/24"

BGP routing table entry for 10.0.0.0/24

Paths: (2 available, best #1, table Default-IP-Routing-Table)

Advertised to non peer-group peers:

192.168.22.254

65111

192.168.33.11 (metric 1) from 192.168.33.11 (192.168.33.11)

Origin incomplete, metric 2, localpref 100, valid, internal, best

Community: 100:111

Last update: Thu Sep 15 09:53:16 2016


Inspect the BGP route on lab-atl-pe:

lab-atl-pe# show ip bgp 10.0.0.0/24

BGP routing table entry for 10.0.0.0/24

Paths: (2 available, best #2, table Default-IP-Routing-Table)

Advertised to non peer-group peers:

192.168.40.1

65222

192.168.32.254 from 192.168.32.254 (192.168.33.12)

Origin incomplete, metric 2, localpref 100, valid, internal

Community: 100:222

Last update: Mon Jun 5 22:47:44 2017

65111

192.168.31.254 from 192.168.31.254 (192.168.33.11)

Origin incomplete, metric 1, localpref 100, valid, internal, best

Community: 100:111

Last update: Mon Jun 5 22:47:18 2017

The output shows that the route was received from both POPs, each carrying its community tag. The originating ASN reflects the source of the received route. The route via lab-sfo-vcg2 shows as the best path and will be propagated to the routing table. These tags carry through to the downstream routers.

Other routes not advertised by edges through the partner gateways remain untagged:

root@pod:~# lxc-attach -n lab-atl-pe -- vtysh -c "show ip bgp 192.168.40.0/24"

BGP routing table entry for 192.168.40.0/24

Paths: (1 available, best #1, table Default-IP-Routing-Table)

Advertised to non peer-group peers:

192.168.31.254 192.168.32.254

250

192.168.40.1 from 192.168.40.1 (192.168.40.1)

Origin IGP, metric 0, localpref 100, valid, external, best

Last update: Wed Sep 14 19:02:49 2016

The PE router could now use the community string as part of a route-map that can influence the northbound

return path to the edge subnets.

ip community-list 1 permit 100:111

ip community-list 2 permit 100:222

!

route-map SFO permit 10

match community 1

set local-preference 150

!

route-map NYC permit 20

match community 2

set local-preference 125

The route-maps listed above would have a downstream router increase the local preference of the learned edge LAN subnets so that traffic gravitates to the SFO POP, as sketched below.
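To act on these tags, the route-map clauses must be applied inbound on a BGP session. A minimal sketch for lab-atl-pe, which receives the branch routes from both POP PE routers (the combined FROM-POPS route-map and its final catch-all permit clause are our additions, needed so that routes matching neither community are not dropped by the implicit deny):

router bgp 150
 neighbor 192.168.31.254 route-map FROM-POPS in
 neighbor 192.168.32.254 route-map FROM-POPS in
!
ip community-list 1 permit 100:111
ip community-list 2 permit 100:222
!
route-map FROM-POPS permit 10
 match community 1
 set local-preference 150
!
route-map FROM-POPS permit 20
 match community 2
 set local-preference 125
!
route-map FROM-POPS permit 30
!

As in the earlier exercise, a clear ip bgp * soft would be needed for the new policy to take effect.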

To summarize, from a gateway perspective towards lab-srv-sp:

• Outbound routing can be influenced by adding ASNs to the service core prefix. This is done on the PE,

is picked up by the partner gateway and communicated back to the edges.


• Outbound routing can also be influenced by setting a local preference on the Orchestrator directly,

which is installed on the partner gateways and communicated to the edges.

• Inbound routing can be influenced by allowing PE routers to act on advertised community strings that

can be attached to all or select branch subnets that the partner gateway advertises to the PE router.

7. Orchestrator Post-Installation steps

The lab will not focus on the actual installation of the Orchestrator. The Orchestrator is typically delivered as a virtual appliance in OVA (for VMware) or QCOW2 (for OpenStack / KVM) format and supports cloud-init for initial configuration. Installation of the virtual machine does not differ from any other virtual machine installation. However, once the appliance is operational, the following post-install steps must be completed:

7.1. Upload edge software image

Upload an edge software image so that edges which are using older software versions can be upgraded to a

more recent revision of the code. This is done via SuperUser Operator | Software Images. Once an image is

provided, its manifest and integrity will be checked before it is added to the repository.

Image updates are installed on the edge as a new partition that becomes active at the next edge reload. The updates are delivered in ZIP file format, with an individual file per Edge family (Edge500, Edge1000, etc.).

7.2. Create an Operator Profile

Operator profiles are associated with an enterprise when it is created and provide essential system properties.

The two most impactful are:

• Orchestrator address: This needs to be set to the IP or FQDN address on which the Orchestrator is

reachable. This will be auto-populated based on what is defined in the system property

(network.public.address). System properties are covered in section 7.3. This address will be included

in policies provided to the edges in order to establish MGD connections (Management Daemon). It is

advisable to decouple the FQDN used in this profile from the physical machine name of the

Orchestrator to allow migration and DR flexibility down the line.

• Software Version: This controls what version of the software will be delivered to edges when they connect to the Orchestrator. Changing this will almost immediately trigger edges to download and upgrade their system software.


The Operator profile assigned to customers can be changed in ACME Corporation | Configure | Customer:

7.3. System Properties

System properties control global behavior of the Orchestrator and can configure system-wide services or

functionalities. Modifying system properties can be done through Superuser Operator | System Properties:

The network.public.address property MUST be set after installation to the IP address or FQDN on which the

Orchestrator is publicly reachable for edges. This is typically the first modification that needs to be done in

order to make the Orchestrator operational.


It is recommended to define the following system properties and configure the corresponding external services

used by the Orchestrator:

7.3.1 Twilio

Twilio is used for SMS based alerting to enterprise customers to notify them of Edge or link outage events. An

account needs to be created and funded at http://www.twilio.com. VeloCloud recommends setting up the

service to auto-renew and auto-charge in order to avoid service interruptions that could impact Orchestrator

operations.

If no SMS alerting is needed, then this can be omitted, but keep in mind that SMS services are needed to enable Multi-Factor Authentication (MFA) on the Orchestrator. The relevant properties are:

• service.twilio.enable: allows the service to be disabled in the event no internet access is available to the VeloCloud Orchestrator

• service.twilio.accountSid: specifies the Twilio account SID

• service.twilio.authToken: specifies the authentication token provided by Twilio

• service.twilio.phoneNumber: the sending phone number, in (nnn)nnn-nnnn format

7.3.2 MaxMind

MaxMind is used to geolocate the IP addresses of last mile links and data center tunnel connections. Location

settings can always be manually set in the event the MaxMind results are inaccurate.

An account needs to be created and funded at http://www.maxmind.com. The Orchestrator uses the

ISP/City/Org Service and VeloCloud recommends setting up the service to auto-renew and auto-charge in

order to avoid service interruptions that could impact Orchestrator operations.

The relevant properties are:

• service.maxmind.enable: allows the service to be disabled in the event no internet access is available to the VeloCloud Orchestrator

• service.maxmind.userid: holds the user ID supplied by MaxMind during account creation

• service.maxmind.license: holds the license key supplied by MaxMind

7.4. Add gateway pools and gateways


Without at least one active, in-service gateway in a pool, you will not be able to provision any customers. As

part of the customer provisioning, you will need to specify which Gateway Pool should be used. This cannot be

empty as edges wouldn’t have an endpoint to connect to.

7.5. Provision Operator Users

The default installed Super User account is ‘[email protected]’, with the password set to ‘vcadm!n’. It is highly recommended to add additional operator accounts with appropriate role assignments and to disable or delete the default accounts.

Additional operators can be added via SuperUser Operator | Operator Users:

Create an additional Operator and explore the different roles that are available. Once the user is configured,

see if you can log on with the username. If a lower privilege role has been selected, observe the more

restrictive capabilities in the portal.

Passwords for both Operator users and enterprise accounts can be reset, in which case an email with a reset link is sent to the address on record in the Orchestrator, inviting the user to modify their password.

7.6. Orchestrator Monitoring

The Orchestrator is self-contained and has backend processes that, on a scheduled basis, inspect the operation of the gateways and roll up statistics into lower-resolution sets that allow for more compact storage of these statistics.

VeloCloud advises monitoring the following metrics on the Orchestrator:

Metrics/parameters, their equivalent shell command, alert threshold and remediation:

• CPU load
  Command: uptime
  Threshold: > 70%
  Remediation: examine the load per service (ps -p <pid> -o %cpu,%mem,cmd and the top command); sudo service <service-name> restart

• CPU core usage
  Command: mpstat -P ALL
  Threshold: > 70%
  Remediation: examine the usage per service (ps -p <pid> -o %cpu,%mem,cmd and the top command); sudo service <service-name> restart

• Memory usage
  Command: free -m
  Threshold: > 80%
  Remediation: examine each service (top command, sorted by memory); service <service-name> restart

• Disk usage
  Command: df -kh
  Threshold: > 70%
  Remediation: check the mysql database_dir location; increase the volume based on need; remove excessive log files under /var/log

• Internet availability
  Command: ping -c 5 8.8.8.8
  Threshold: > 0% loss
  Remediation: check for peering issues and internet connectivity with the ISP; traceroute to other known locations; check the host’s interface stats (ifconfig -a <interface name>)

• HTTP services
  Command: netstat -aln | awk '$6 == "LISTEN" && $4 ~ ":80$"' and netstat -aln | awk '$6 == "LISTEN" && $4 ~ ":443$"'
  Threshold: N/A
  Remediation: check the nginx service status and logs (less /var/log/access.log and error.log); sudo service nginx restart

• Upload service
  Command: sudo ps aux | grep "/usr/share/node/upload/upload.js" or sudo ps -elf | grep upload | wc -l
  Threshold: N/A
  Remediation: check the upload service status and logs (less /var/log/upload/exceptions.log); sudo service upload restart

• Portal service
  Command: sudo ps aux | grep "/usr/share/node/portal/portal.js" or sudo ps -elf | grep portal | wc -l
  Threshold: N/A
  Remediation: check the portal service status and logs (less /var/log/portal/exceptions.log); sudo service portal restart

• NTP time zone
  Command: sudo cat /etc/timezone
  Threshold: != Etc/UTC
  Remediation: echo "Etc/UTC" | sudo tee /etc/timezone; sudo dpkg-reconfigure --frontend noninteractive tzdata

• NTP offset
  Command: sudo ntpq -p
  Threshold: > 15 msec
  Remediation: sudo service ntp stop; manually sync with ntpdate; sudo service ntp start

Execute a few of the above-mentioned commands on lab-vco1.
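For example, confirming that the HTTPS front end is listening (the output shown is representative):

root@lab-vco1:~# netstat -aln | awk '$6 == "LISTEN" && $4 ~ ":443$"'
tcp        0      0 0.0.0.0:443             0.0.0.0:*               LISTEN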

Also inspect the location and content of the log files:

• /var/log/syslog : contains system-level events

• /var/log/portal/velocloud.log : contains all events pertaining to the front-end user and operator portal

• /var/log/upload/velocloud.log : contains all events pertaining to edge management connections and statistics uploads from the edges

• /var/log/backend/velocloud.log : contains all events pertaining to mysql statistics rollups

7.7. Orchestrator Disaster Recovery

The VeloCloud Orchestrator has a built-in disaster recovery feature that protects against failure of the primary

Orchestrator. The feature allows automatic synchronization of configuration and statistics to a standby system.

It is important to understand that this is a Disaster Recovery function and not a High Availability feature that

aims to reduce downtime. The DR function can be used since the Edges connect to an independent control

plane and dataplane and can operate for extended periods of time without a control plane being present. If this

happens, the edges simply continue to operate with their last known configuration but no dataplane services


will be impacted.

The edges know the independent IP addresses or FQDNs of both Orchestrators and will simultaneously heartbeat to both systems. The Edge knows which of the two systems is active and will accept policy updates from that system, as well as send it statistics and status updates. The active Orchestrator

will synchronize with the standby via a backchannel.

For this exercise, lab-vco1 will be the active/primary Orchestrator and we’ll designate lab-vco2 as the standby/secondary Orchestrator. lab-vco2 is a newly installed system without any gateway or customer configuration. The objective is to enable it as the standby unit and then fail over to it.

To enable this functionality, a standby system needs to be designated. Connect to lab-vco2 by opening a

browser and entering https://sp-322-<N>.lab.velocloud.org/operator in the address bar.

Use the same credentials as the lab-vco1 Orchestrator ([email protected] | Welcome2Velocloud)

Once logged on, verify that you are on the correct system:

• Check the network.public.address system property, which should contain 24.17.0.12

• Confirm that no customers, edges, or gateways are present on this system

Note that this may automatically log you out from the lab-vco1 (primary) Orchestrator, which is an artifact of the

lab setup.

Click on ‘Replication’ in the left navigation pane and select ‘Standby’ to prepare lab-vco2 for this role. Click the

‘Enable for Standby’ button to confirm the operation. You may need to refresh the browser page to bring up the

Standby Candidate confirmation page:

Copy the Orchestrator UUID that is presented after the instance has been converted to a standby

Orchestrator. You will need to provide this in the active Orchestrator to complete the DR workflow.

From a standby perspective this is all that needs to be done. Our attention will go back to lab-vco1, which will

be the active system. Open a browser window and log back into https://sp-training-pod<N>.velocloud.org/operator.

Navigate to the Replication page and select ‘Active’ as the Replication role:

At this time, the primary Orchestrator will connect to the designated standby and start syncing its configuration

and statistics:

The synchronization process may take a few minutes in the case of the lab, but may take longer for an established system with active edges and historical statistical data. After it has completed, the edges are informed of the standby system through updated policy information, allowing them to connect to it. After a minute, you can observe that the edges have checked in to both the active and the standby system. You may need to refresh the replication page to update the status.


If you log back onto the lab-vco2 / standby Orchestrator, the only information it will show is the following:

No other actions can be performed on this instance besides promoting the system to an active Orchestrator or demoting it from the DR pair and returning it to a standalone system configuration.

This workflow will automatically inform all the connected edges that a standby has been made available and

communicate the IP address of the standby system. This can be confirmed directly on the edges: root@pod:~# lxc-attach -n lab-edge1

BusyBox v1.23.2 (2017-04-22 05:17:09 UTC) built-in shell (ash)

~ # /opt/vc/bin/getpolicy.py managementPlane

{

"schemaVersion": "2.0.0",

"version": "1496658018547",

"data": {

"heartBeatSeconds": 30,

"managementPlaneProxy": {

"drHeartbeatSecs": 60,

"primary": "24.17.0.11",

"secondary": "24.17.0.12"

},

"timeSliceSeconds": 300,

"statsUploadSeconds": 300


},

"module": "managementPlane"

}

The command above displays a subsection of the activation file (.edge.info) that outlines the connected Orchestrators; it has been updated now that the standby system has been enabled.

You can also verify the change in behavior from the log activity: ~ # cat /var/log/edged.log | grep mgmt_plane

2017-06-06T07:59:54.301 MSG [MGD] parse_mgmt_plane_policy:740 Received Primary VCO IP Address:

24.17.0.11 -- 24.17.0.11

2017-06-06T07:59:54.301 MSG [MGD] parse_mgmt_plane_policy:751 Did not receive Secondary VCO IP

Address.

2017-06-06T20:02:03.185 MSG [MGD] parse_mgmt_plane_policy:740 Received Primary VCO IP Address:

24.17.0.11 -- 24.17.0.11

2017-06-06T20:02:03.185 MSG [MGD] parse_mgmt_plane_policy:749 Received Secondary VCO IP Address:

24.17.0.12 -- 24.17.0.12

Let’s now simulate a failure of the primary system by shutting down the interface of the lab-vco1 system, effectively disconnecting it from the attached edges. Ensure that you are attached to lab-vco1 before executing the ifdown command! root@pod:~# lxc-attach -n lab-vco1

root@lab-vco1:~# ifdown eth0

root@lab-vco1:~# ip link

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default

link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

81: eth0: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast state DOWN mode DEFAULT group default

link/ether 00:ba:be:7b:11:ee brd ff:ff:ff:ff:ff:ff

Confirm that the Orchestrator can no longer be accessed through the browser and log back into lab-vco2.

After a minute, lab-vco2 should detect that it is no longer receiving sync data from the primary system. At this time, unlock the ‘Promote Standby’ button and promote the standby system to an active instance of the Orchestrator to recover from the failure.

After the promotion has been completed, the browser page may need to be refreshed to see the standard

dashboards that we expect from an active Orchestrator.

Simultaneously, track the edged.log file on lab-edge1; after a minute, you should start seeing a new policy update informing the edge that lab-vco2 is now the sole primary Orchestrator it connects to: root@pod:~# lxc-attach -n lab-edge1

~ # tail -f /var/log/edged.log

<snip>

2017-06-06T20:50:39.777 ERR [BIZ] update_mgd_route_policy:2476 Changing MGD route policy to


GATEWAY

2017-06-06T20:51:01.194 ERR [MGD] prenotify_config:1808 prenotify_config: module =

managementPlane

2017-06-06T20:51:01.221 MSG [MGD] update_config_module:881 Received managementPlane policy update

to version 1496782100143

2017-06-06T20:51:01.221 MSG [MGD] parse_mgmt_plane_policy:740 Received Primary VCO IP Address:

24.17.0.12 -- 24.17.0.12

2017-06-06T20:51:01.221 MSG [MGD] parse_mgmt_plane_policy:751 Did not receive Secondary VCO IP

Address.

Shortly after this message is seen, the edge will connect back to the Orchestrator and the Edge status will be

displayed as online. Go to Manage Customers | ACME Corporation | Monitor Edges to confirm on lab-vco2:

Also verify that the gateways have all reconnected to lab-vco2 via Superuser Operator | Gateways:

If we go back to the replication tab on the left-hand navigation pane, we can see that the system is no longer

configured in a DR state and that lab-vco2 has now taken on a standalone role. To bring DR back online, you

would either need to repair the lab-vco1 system or bring a new system online and add it as a standby node in

the DR configuration. Given that all information is replicated in real time, there is no need to force a failback to the original active system.

8. Freeform Exercise

Should you still have time left in the lab session, consider completing the following objectives without the

previously provided detailed step-by-step guidance. Use the earlier instructions to work towards the individual

goals, and keep in mind that lab-vco2 is now the primary Orchestrator on which everything will need to be configured.

OBJECTIVE 1: De-activate both edges from the ACME Corporation account.

Use Test & Troubleshoot | Remote Actions.

OBJECTIVE 2: Create a new customer account and associate it with the Cloud Gateway pool.

Log onto the Enterprise portal.

OBJECTIVE 3: Create a new profile, enable Cloud VPN and set a firewall policy to block access to lab-saas2.

OBJECTIVE 4: Provision and activate the edges into the new account as an Enterprise Administrator.

Hint: Use a Virtual Edge type when provisioning the Edges.

Hint: Use /opt/vc/bin/activate.py in the edge CLIs to activate them.
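Based on the argument names visible in the activation log reviewed earlier (server, activation_key, ignorecerterror), the invocation would look roughly as follows; treat the exact flags as an assumption that may differ per release:

~ # /opt/vc/bin/activate.py -i -s 24.17.0.12 <ACTIVATION-KEY>

Here -s points at the now-primary lab-vco2 Orchestrator (24.17.0.12) and -i ignores certificate errors, which is acceptable in the lab.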


OBJECTIVE 5: Verify connectivity between the two sites and verify the automatic Cloud Gateway assignments.

OBJECTIVE 6: Change the physical location of an edge towards its most distant gateway and observe the impact on Cloud Gateway allocation.

OBJECTIVE 7: Keep a ping going from one client to another client and fail the primary cloud gateway. Monitor

the impact on the data flow and the state of the impacted Cloud Gateway.

OBJECTIVE 8: Convert (or re-install) an existing Partner Gateway to a Cloud Gateway and add it into the

Cloud Gateway Pool. Assess the impact on the Gateway assignment and confirm which Gateways the edges

connect to.

This concludes the lab exercises. VeloCloud thanks you for your time and interest!
