
© 2018 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information. Page 1 of 62

White Paper

Configuring Cisco Nexus 9000 Series Switches for VMware NSX OVSDB Integration

First published: January 28, 2018


Contents

Executive summary

Use cases

How Cisco Nexus 9000 Series OVSDB integration with VMware NSX works

OVSDB control plane
    OVSDB protocol
    OVSDB communication between VMware NSX controllers and Cisco 9000 Series Switches

Packet flow for VMware NSX–connected virtual machines and physical workloads
    BUM traffic replication using RSNs
    RSN availability: Bidirectional Forwarding Detection over VXLAN
    HW-VTEP redundancy: vPC and anycast loopback
    Routing for VNIs that have an HW-VTEP binding

Configuring the Cisco Nexus 9000 Series for HW-VTEP OVSDB integration with VMware NSX
    Configuration checklist

        Supported Cisco Nexus 9300 platform switches
        Verifying that your Cisco Nexus 9300 platform switch is running the correct Cisco NX-OS Software release
        Installing the TP Services Package license for NXDB
        Verifying that the correct version of the plug-in and JRE are installed
        Carving the TCAM to enable BFD over VXLAN on first generation Cisco Nexus 9300 platform switches
        Verifying the vPC configuration
        Configuring the anycast loopback address for vPC pairs
        Configuring a username and password for use by the NXDB plug-in if desired
        Verifying that the VMware NSX version and configuration are supported
        Verifying the required network reachability
        Reserving VLANs and other switch resources

Configuring OVSDB integration with VMware NSX
    Configuring the required features on the switch
    Configuring VXLAN on the switch
    Configuring BFD over VXLAN
    Assigning VLANs and interfaces to the controllers
    Configuring the guest shell for the OVSDB plug-in
    Installing the OVSDB plug-in
    Configuring the OVSDB plug-in for a standalone switch
    Configuring the OVSDB plug-in for a pair of vPC switches
    Enabling the OVSDB plug-in

Registering the HW-VTEP with VMware NSX
    Binding a logical switch in VMware NSX to a physical switch, physical port, and VLAN

Verification and troubleshooting

Limitations of VMware NSX OVSDB integration with HW-VTEPs

Configuring the Cisco Nexus 9000 Series Switch as the default gateway for the VNI and VLAN
    Configuring a redundant default gateway on two vPC switches acting as HW-VTEPs using HSRP

Conclusion

Appendix A: Upgrading the Cisco NX-OS image and the OVSDB plug-in for vPC
    Check prerequisites
    Upgrade the OVSDB plug-in on the vPC secondary switch
    Upgrade the OVSDB plug-in on the vPC primary switch
    Upgrade the Cisco NX-OS image
    What to do next


Appendix B: Best-practice configurations for vPCs


THE SPECIFICATIONS AND INFORMATION REGARDING THE PRODUCTS IN THIS MANUAL ARE SUBJECT TO CHANGE WITHOUT NOTICE. ALL STATEMENTS, INFORMATION, AND RECOMMENDATIONS IN THIS MANUAL ARE BELIEVED TO BE ACCURATE BUT ARE PRESENTED WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED. USERS MUST TAKE FULL RESPONSIBILITY FOR THEIR APPLICATION OF ANY PRODUCTS.

THE SOFTWARE LICENSE AND LIMITED WARRANTY FOR THE ACCOMPANYING PRODUCT ARE SET FORTH IN THE INFORMATION PACKET THAT SHIPPED WITH THE PRODUCT AND ARE INCORPORATED HEREIN BY THIS REFERENCE. IF YOU ARE UNABLE TO LOCATE THE SOFTWARE LICENSE OR LIMITED WARRANTY, CONTACT YOUR CISCO REPRESENTATIVE FOR A COPY.

The Cisco implementation of TCP header compression is an adaptation of a program developed by the University of California, Berkeley (UCB) as part of UCB's public domain version of the UNIX operating system. All rights reserved. Copyright © 1981, Regents of the University of California.

NOTWITHSTANDING ANY OTHER WARRANTY HEREIN, ALL DOCUMENT FILES AND SOFTWARE OF THESE SUPPLIERS ARE PROVIDED "AS IS" WITH ALL FAULTS. CISCO AND THE ABOVE-NAMED SUPPLIERS DISCLAIM ALL WARRANTIES, EXPRESSED OR IMPLIED, INCLUDING, WITHOUT LIMITATION, THOSE OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE.

IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THIS MANUAL, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

Any Internet Protocol (IP) addresses and phone numbers used in this document are not intended to be actual addresses and phone numbers. Any examples, command display output, network topology diagrams, and other figures included in the document are shown for illustrative purposes only. Any use of actual IP addresses or phone numbers in illustrative content is unintentional and coincidental.

This product includes cryptographic software written by Eric Young ([email protected]).

This product includes software developed by the OpenSSL Project for use in the OpenSSL Toolkit. (http://www.openssl.org/)

This product includes software written by Tim Hudson ([email protected]).

Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of Cisco trademarks, go to this URL: https://www.cisco.com/go/trademarks. Third-party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (1110R)


Executive summary

This document is designed for network and virtualization architects interested in deploying Open vSwitch Database (OVSDB) for hardware Virtual Extensible LAN (VXLAN) tunnel endpoint (HW-VTEP) integration between Cisco Nexus® 9000 Series Switches and VMware NSX.

The Cisco Nexus 9000 Series delivers proven high performance and density, low latency, and exceptional power efficiency in a broad range of compact form factors. The Cisco Nexus 9000 Series offers an open-object API-programmable model for provisioning Layer 2 and 3 features. It provides extensibility through a route processor module application package, Linux containers, and Broadcom and Linux shell access. It uses the Cisco® NX-OS Software API (NX-API) for easy-to-use programmatic access.

VMware NSX is a network virtualization platform software suite. It embeds networking and security functions that typically are handled in hardware directly into the hypervisor. With NSX, users can reproduce data center networking environments in software. NSX provides a set of logical networking elements and services, including logical switching, routing, firewalling, and load balancing, for VMware vSphere virtualized environments. NSX virtual networks can be programmatically provisioned and can run independently of the underlying hardware to support virtual machines running on a VMware hypervisor.

In some NSX deployments, NSX logical switches must be extended into the physical environment. The goal of this integration is to connect virtualized workloads using NSX as their networking layer with physical workloads that are not virtualized and that need to be on the same network subnet. This integration requires the bridging of VXLAN network identifiers (VNIs) with virtual LANs (VLANs). The OVSDB integration discussed in this document allows the Cisco Nexus 9000 Series Switch to be an HW-VTEP, which performs the translation from VXLAN encapsulation to VLAN encapsulation, and back, in hardware to connect workloads in the virtual and physical environments.

The OVSDB integration allows the NSX controllers to configure VXLAN on the Cisco Nexus 9000 Series Switch. The NSX controller maps the VXLAN segments to the VLAN segments on specific ports of the Cisco Nexus 9000 Series Switch and uses the OVSDB management protocol to push this configuration to the switch.


Use cases

From a logical networking perspective, the OVSDB integration allows the virtual workloads running within the NSX network and connected to VNIs to be on the same subnet and broadcast domain as physical workloads connected to VLANs through the Cisco Nexus 9000 Series Switches. Figure 1 shows a logical view of the connectivity achieved by the integration.

Figure 1. Logical view of VNI-to-VLAN connectivity

How Cisco Nexus 9000 Series OVSDB integration with VMware NSX works

The best way to understand the integration of OVSDB with Cisco Nexus 9000 Series Switches and NSX is to look at the connections one step at a time. Figure 2 shows the physical locations of the workloads.

Figure 2. Physical connectivity


Figure 2 shows three virtual workloads: VM1, VM2, and VM3. Their IP addresses are 192.168.1.10 (VM1), 192.168.1.11 (VM2), and 192.168.1.12 (VM3). The three virtual machines are all connected to VNI 5000. They are on the same subnet as the physical workload at the bottom of the figure using IP address 192.168.1.100. The physical workload is connected to VLAN 500.

VM1 and VM2 are both on a VMware ESXi host (H1) running NSX. H1 has a VMkernel interface used by NSX as its VTEP with the IP address 10.1.1.10. VM3 is located on H3 with a VTEP address of 10.1.1.12.

The physical workload is connected to a Cisco Nexus 9000 Series Switch (N9K) with a loopback address of 10.99.99.10. When VM1 sends data to the physical workload, a VXLAN tunnel is formed between H1 and N9K, as indicated by the purple line in Figure 3.

Figure 3. VXLAN tunnel between H1 and N9K

The tunnel shown in Figure 3 carries the traffic between VM1 and the physical workload. The packet from VM1 is encapsulated in a VXLAN header with a source IP address of 10.1.1.10 (VTEP for H1) and a destination IP address of 10.99.99.10 (loopback IP address for N9K) and using VNI 5000. It is up to the data center network to deliver the packet. So long as there is Layer 3 IP reachability between the H1 VTEP and the N9K loopback IP address, VM1 and the physical workload can communicate with each other as if they are on the same VLAN (or VXLAN).
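The encapsulation described above can be sketched in a few lines. The following Python fragment (an illustrative sketch, not Cisco or VMware code; the function names are mine) packs and unpacks the 8-byte VXLAN header defined in RFC 7348 that carries the 24-bit VNI:

```python
import struct

# VXLAN (RFC 7348): the inner Ethernet frame is wrapped in
# outer IP + UDP (destination port 4789) + an 8-byte VXLAN header.
VXLAN_UDP_PORT = 4789

def vxlan_header(vni):
    """Build the 8-byte VXLAN header carrying a 24-bit VNI."""
    flags = 0x08 << 24                 # I flag set: the VNI field is valid
    return struct.pack("!II", flags, vni << 8)  # VNI occupies bits 8-31 of word 2

def parse_vni(header):
    """Recover the VNI from a received VXLAN header."""
    _, word2 = struct.unpack("!II", header)
    return word2 >> 8

# The frame VM1 sends to the physical workload would travel as:
#   outer IP 10.1.1.10 -> 10.99.99.10 | UDP dport 4789 | vxlan_header(5000) | inner frame
```

The 24-bit VNI field is why VXLAN supports roughly 16 million segments, compared with 4096 VLANs.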


Figure 4 shows a simplified diagram of a packet.

Figure 4. Simplified diagram of an IP packet encapsulated by a VXLAN header

The HW-VTEP, in this case N9K, will take the VXLAN-encapsulated packet received from H1, remove the outer headers (IP, UDP, and VXLAN headers), insert a VLAN 500 header, and deliver the packet on its physical port. It will also take all VLAN 500 encapsulated packets received from the physical workload destined for VM1, remove the VLAN header, and insert the appropriate VXLAN headers (IP, UDP, and VXLAN 5000 headers), with an IP destination of H1 and a source IP address consisting of its loopback address. This entire process is performed in hardware on the Cisco Nexus 9000 Series Switch.

OVSDB control plane

This section describes the OVSDB control plane.

OVSDB protocol

Open vSwitch Database, or OVSDB, is a management protocol, detailed in RFC 7047, designed to be used in Software-Defined Networking (SDN) environments. OVSDB allows the NSX controllers and the Cisco Nexus 9000 Series Switches to communicate with one another. The NSX controllers use the OVSDB protocol to communicate with HW-VTEPs. OVSDB-based communication uses the hardware_vtep schema, which defines the data format for the communication between an external controller and a switch.

NX-OS implements the OVSDB protocol by means of an intermediate agent in the form of a plug-in, maintained by Cisco. This plug-in accepts the OVSDB messages sent from the NSX controllers and makes the appropriate JavaScript Object Notation Remote Procedure Call (JSON-RPC) NX-API calls on the Cisco Nexus 9000 Series Switch. The plug-in runs in the Cisco Nexus 9000 Series Switch’s guest shell container to provide security and isolation between different control planes that may be running on the switch.

The OVSDB plug-in has OVSDB as its northbound interface (facing toward the NSX controllers) and the JSON-RPC NX-API as its southbound interface (facing toward the switch).
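To make the northbound interface concrete, the sketch below frames two OVSDB JSON-RPC requests as defined in RFC 7047. This is an illustrative model, not the plug-in's actual code; the helper function is mine, while "hardware_vtep" and "Ucast_Macs_Remote" come from the published hardware VTEP schema:

```python
import itertools
import json

# OVSDB (RFC 7047) runs JSON-RPC 1.0 over a TCP (or SSL) stream.
_request_ids = itertools.count(1)

def ovsdb_request(method, params):
    """Frame a single OVSDB JSON-RPC request as wire bytes."""
    message = {"method": method, "params": params, "id": next(_request_ids)}
    return json.dumps(message).encode()

# A controller discovering which databases the endpoint serves:
list_dbs = ovsdb_request("list_dbs", [])

# A controller subscribing to changes in the Ucast_Macs_Remote table of the
# hardware_vtep schema (the table remote-MAC entries are written into):
monitor = ovsdb_request("monitor", [
    "hardware_vtep",
    None,  # monitor identifier chosen by the caller
    {"Ucast_Macs_Remote": {"select": {"initial": True,
                                      "insert": True,
                                      "delete": True,
                                      "modify": True}}},
])
```

A monitor subscription like this is how a controller keeps MAC reachability state synchronized without polling.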

OVSDB communication between VMware NSX controllers and Cisco 9000 Series Switches

In this integration, the NSX controllers are responsible for handling the interaction with the hardware gateways (the Cisco Nexus 9000 Series Switches). For this purpose, a connection is established between the NSX controllers and a dedicated piece of software called the hardware switch controller (HSC). For the Cisco implementation, the HSC is the plug-in running in the Cisco Nexus 9000 Series Switch’s guest shell.

Figure 5 shows the communication path from the NSX controllers to the Cisco Nexus 9000 Series Switch as described in the preceding sections.


Figure 5. Communication path from the VMware NSX controllers to the Cisco Nexus 9000 Series Switch

After the connectivity between the NSX controllers and the Cisco Nexus 9000 Series Switches is configured and established, the NSX controllers will push the administrator-configured association between a logical switch (VNI) and the physical switch, port, and VLAN to the Cisco Nexus 9000 Series hardware gateway through the HSC. The NSX controller will also advertise a list of replication service nodes (RSNs), which the Cisco Nexus 9000 Series Switches will use to forward broadcast, unknown unicast, and multicast (BUM) traffic. The RSN function is discussed in the Packet flow for VMware NSX–connected virtual machines and physical workloads section of this document.

A typical NSX deployment generally has three NSX controllers, providing redundancy and scale-out capabilities. The HSC will open a connection to all three controllers, as shown in Figure 6.


Figure 6. Connection between the HSC running in the Cisco Nexus 9000 Series Switch and the VMware NSX controllers

The NSX controllers will advertise the list of hypervisor VTEP IP addresses relevant to the logical switches configured on the hardware gateway to the HSC. The NSX controllers also will advertise the association between the MAC addresses of the virtual machines in the virtual network and the VTEP through which they can be reached.

Now return to the example introduced in Figure 3 and depicted in Figure 7.

Figure 7. VXLAN tunnel between H1 and N9K


After the OVSDB relationship is established, the NSX controllers will advertise to the HW-VTEP that the MAC addresses for VM1 and VM2 (MAC A and MAC B) are reachable at IP address 10.1.1.10 (the VTEP IP address of H1), and that the MAC address for VM3 (MAC C) is reachable at 10.1.1.12. The Cisco Nexus 9000 Series Switch’s HSC will advertise to the NSX controllers that the MAC address for the physical workload (MAC D) is reachable at 10.99.99.10 (the loopback address of N9K).

The NSX controllers in turn let H1 and H3, the hosts that have workloads on the VNI that is mapped to VLAN 500, know that MAC D is reachable through N9K as required. Note that the NSX controllers use OVSDB only toward the HW-VTEP; the hypervisor VTEPs are programmed using VMware’s proprietary protocols.

The Cisco Nexus 9000 Series Switch running as an HW-VTEP includes entries in the MAC address table for MAC A and MAC B pointed at 10.1.1.10 and MAC C pointed at 10.1.1.12. Listing 1 shows an example. The top MAC address belongs to VM1, and it is reachable at H1. The second MAC address is the physical workload connected to interface Eth1/31 on VLAN 500.

Listing 1 Example of a MAC address table on a Cisco Nexus 9000 Series Switch

N9K# sh mac address-table vlan 500
Legend:
        * - primary entry, G - Gateway MAC, (R) - Routed MAC, O - Overlay MAC
        age - seconds since last seen, + - primary entry using vPC Peer-Link,
        (T) - True, (F) - False, C - ControlPlane MAC
   VLAN     MAC Address      Type      age     Secure NTFY  Ports
---------+-----------------+--------+---------+------+----+------------------
C  500     0050.569b.7e06   dynamic   0        F      F     nve1(10.10.1.10)
*  500     547f.ee84.327c   dynamic   0        F      F     Eth1/31
N9K#

The same information can be seen in the NSX controller. The physical workload is reachable through the N9K loopback address (10.99.99.10), as shown in Listing 2.

Listing 2 Example of the MAC address table on the NSX controller

nsx-controller # show control-cluster logical-switches mac-table 5000
VNI      MAC                  VTEP-IP        Connection-ID
5000     00:50:56:9b:7e:06    10.10.2.10     3
5000     54:7f:ee:84:32:7c    10.99.99.10    2
nsx-controller #

OVSDB is used by the NSX controllers to learn MAC address reachability for physical workloads connected to the Cisco Nexus 9000 Series Switches configured as HW-VTEPs, and to push relevant MAC address reachability information to the HW-VTEPs.
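The two-way exchange above amounts to each side contributing to a shared MAC-to-VTEP map. The following is a hypothetical in-memory model of that map, using the addresses from the running example; it is not the switch's actual data structure:

```python
# Remote MACs pushed by the NSX controllers to the HW-VTEP
# (virtual machines reachable behind hypervisor VTEPs):
remote_macs = {
    "00:50:56:9b:7e:06": "10.1.1.10",   # VM1, behind H1's VTEP
}

# Local MACs the HSC advertises to the NSX controllers
# (physical workloads reachable behind the switch's VTEP):
local_macs = {
    "54:7f:ee:84:32:7c": "10.99.99.10",  # physical workload behind N9K
}

def next_hop_vtep(mac):
    """Resolve the VTEP IP a frame for this MAC should be tunneled to,
    or None if the MAC is unknown (BUM handling applies instead)."""
    return remote_macs.get(mac) or local_macs.get(mac)
```

An unknown destination MAC falls through to None, which is exactly the case handed off to the BUM replication path described in the next section.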

Note: Other regular interface configurations required for the daily operation of the switch, such as storm control, Quality of Service (QoS), Access Control Lists (ACLs), and redundancy, will not be configured or monitored by NSX. This type of everyday configuration and monitoring remains the responsibility of the network administrator.


Packet flow for VMware NSX–connected virtual machines and physical workloads

This section discusses the flow of packets for virtual machines and physical workloads connected to NSX.

BUM traffic replication using RSNs

The NSX for vSphere implementation does not use the HW-VTEP capabilities to perform replication for BUM traffic. BUM traffic replication is performed in software on the RSNs.

Therefore, as discussed earlier in this document, when you configure OVSDB integration between NSX and Cisco Nexus 9000 Series Switches, you must select NSX-enabled ESXi hosts as the RSNs that the hardware gateway can use to forward BUM traffic. To better understand how BUM traffic is handled in this integration, look at Figure 8.

Figure 8. Logical view of a broadcast domain in OVSDB integration

In the figure, the physical workload (192.168.1.100) is sending traffic to VM1. To begin communicating, the physical workload will send an Address Resolution Protocol (ARP) request for IP 192.168.1.10 (VM1’s IP address). The ARP request is sent as a broadcast message and is used to map IP addresses to MAC addresses.

In this example, the virtual network consists of three hypervisors. Two of those hypervisors are devices to which the traffic must be flooded, because they have active virtual machines on the VNI 5000 logical switch. In the current model, the hardware gateway will not use hardware replication for the frames it needs to flood to each of those hypervisors. Instead, the NSX controller has provided a list of RSNs that the hardware gateway will use to perform the replication in software.

The RSNs are statically defined by the NSX administrator. For each frame that needs to be flooded, the hardware gateway picks one RSN per VNI to act as a replication server for the virtual world. The RSN then takes care of replicating the BUM traffic in software and forwards it to the appropriate hypervisors that need to receive it.

Figure 9 illustrates the process.


Figure 9. Sending BUM traffic to the RSN

The example in Figure 9 shows that the HW-VTEP (N9K) will take the ARP broadcast packet and forward one copy of it to the chosen RSN for VNI 5000, which in this case is H3. The host should already know the MAC address of the physical workload (MAC D) and its location, which in this case is N9K (10.99.99.10), from the OVSDB learning described earlier.

The second part of the RSN’s job—replication—is shown in Figure 10.


Figure 10. RSN replication

Figure 10 shows that H3 will then take the packet and send a broadcast packet to VM3, which is the only virtual machine that is in VNI 5000 on that host. H3 will also look at its VTEP table and replicate the broadcast ARP packet for every host in the NSX domain that has virtual machines active on VNI 5000. The replication method used depends on the setup in the NSX manager for the particular logical switch (unicast or hybrid), but it occurs in software. In this example, the only other host with virtual machines active in VNI 5000 is H1. When the replicated packet arrives at H1, H1 will send a broadcast packet to VM1 and VM2 because they are both in VNI 5000.

When VM1 sees that the ARP request is for itself, it will learn the ARP entry for the physical workload and return an ARP reply with its MAC address (MAC A) directly to the physical workload at MAC D. When that reply reaches the hypervisor of H1, H1 will perform a lookup for MAC D and see that it is sitting behind N9K. H1 will encapsulate the ARP reply in a VXLAN packet with its IP address as the source and the loopback of N9K as the destination, and it will forward the packet to the network.

N9K will receive the ARP reply, remove the VXLAN header, and forward the packet to the physical workload on the correct VLAN. The physical workload will learn the MAC address associated with VM1 and will now be able to send unicast packets directly to the virtual machine as described in the preceding sections.

RSN availability: Bidirectional Forwarding Detection over VXLAN

To avoid black-holing BUM traffic, the HW-VTEP needs to be able to remove a failed RSN from its list of replication servers if an RSN host fails. To enable this function, VMware requires the use of Bidirectional Forwarding Detection (BFD).


BFD is a detection protocol designed to provide fast forwarding-path-failure detection times for all media types, encapsulations, topologies, and routing protocols. In addition to fast forwarding-path-failure detection, BFD provides a consistent failure detection method for the network administrator.

You can enable or disable BFD in the NSX GUI on a global basis for all defined service nodes in the controller. In this context, it provides fast forwarding-path-failure detection times between the NSX RSNs and the HW-VTEP (the Cisco Nexus 9300 platform switch).

In this implementation, this feature is essentially BFD over VXLAN. The BFD session is hosted on the Cisco switch and on all the RSNs in the NSX controller. The BFD control packets are encapsulated and decapsulated by the Cisco switch. In virtual Port Channel (vPC) mode, the BFD session is hosted only on the vPC primary device, and the vPC secondary device can receive the encapsulated BFD control frames. In such cases, the vPC secondary device decapsulates the VXLAN packet and sends it over the vPC peer link to the vPC primary device (the BFD hosting device).

Explicit Ternary Content-Addressable Memory (TCAM) carving is needed on first-generation Cisco Nexus 9300 platform switches to enable this feature. It is implemented by enabling the redirect-tunnel region as discussed in the Carving the TCAM to enable BFD over VXLAN on first generation Cisco Nexus 9300 platform switches section of this document.

In a BFD-enabled setup, the VNIs are hashed to only those replication servers that have BFD enabled and are in a session up state. When a BFD session goes down in a replication server, the VNIs are rehashed dynamically to the available replication servers that have BFD enabled and are in the BFD session up state. This function provides dynamic rebalancing of BUM traffic flows upon fast forwarding-path-failure detection.

It is important to note that forwarding of traffic requires that the BFD session be active between the hardware VTEP and the RSNs.
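The per-VNI selection and rehashing behavior described above can be sketched as follows. This is a minimal illustration under stated assumptions: the CRC-based hash and the function name are mine, since the actual NX-OS hashing algorithm is not described in this document:

```python
import zlib

def pick_rsn(vni, rsns, bfd_up):
    """Choose one replication service node for a VNI, considering only
    RSNs whose BFD session is up. Illustrative hash only; the real
    NX-OS hash is not specified here."""
    live = sorted(ip for ip in rsns if bfd_up.get(ip))
    if not live:
        return None  # no eligible RSN: BUM traffic for this VNI is black-holed
    return live[zlib.crc32(str(vni).encode()) % len(live)]

rsns = ["10.1.1.11", "10.1.1.12", "10.1.1.13"]   # hypothetical RSN VTEP IPs
bfd_state = {ip: True for ip in rsns}

chosen = pick_rsn(5000, rsns, bfd_state)
bfd_state[chosen] = False                   # BFD detects the chosen RSN failing
rehashed = pick_rsn(5000, rsns, bfd_state)  # the VNI moves to a surviving RSN
```

Because selection is restricted to BFD-up nodes, a detected failure automatically redistributes BUM flows without operator intervention, which is the dynamic rebalancing the text describes.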

HW-VTEP redundancy: vPC and anycast loopback

The Cisco Nexus 9000 HW-VTEP integration with NSX can take advantage of Cisco vPC technology to provide high availability for connectivity of physical devices. vPCs allow links that are physically connected to two different Cisco switches to appear to a third downstream device to be coming from a single device and as part of a single port channel. The third device can be a switch, a server, or any other networking device that supports IEEE 802.3ad port channels. Using vPCs and an anycast loopback address, two Cisco Nexus 9000 Series Switches will appear as a single HW-VTEP device to the NSX manager, controllers, and hypervisors, as shown in Figure 11.


Figure 11. Using vPC and anycast loopback addresses for high availability

Routing for VNIs that have an HW-VTEP binding

When an NSX logical switch is connected to an HW-VTEP using OVSDB, it cannot be attached to a Distributed Logical Router (DLR) at the same time. This limitation exists for all NSX implementations of this feature, regardless of the hardware vendor providing the HW-VTEP function. Traditionally, this limitation meant that the default gateway for the virtual machines and bare-metal devices attached to the VNI and VLAN combination had to be an external device. This device could be an Edge Services Gateway (ESG) virtual machine attached to the VNI, or it could be a traditional router connected to the VLAN (or a physical firewall or another service device). These traditional options are shown in Figure 12.

Figure 12. Traditional methods of providing a default gateway for VNIs and VLANs


With the new cloud-scale application-specific integrated circuits (ASICs) available in newer Cisco Nexus 9000 EX platform switches, the Cisco Nexus 9000 Series Switches performing the OVSDB integration can also be the default gateways for the extension of the subnet. This capability allows savings in Capital Expenditures (CapEx), because external physical routers are no longer necessary, and in Operating Expenses (OpEx), because the switches perform routing in hardware and provide the default gateway using the well-known switch virtual interface (SVI) feature. Redundancy can be achieved by using a First-Hop Redundancy Protocol (FHRP) such as Hot Standby Router Protocol (HSRP). This new capability is shown in Figure 13.

Figure 13. New method of providing a default gateway for VNIs and VLANs

Configuring the Cisco Nexus 9000 Series for HW-VTEP OVSDB integration with VMware NSX

This section describes how to configure the Cisco Nexus 9000 Series for HW-VTEP OVSDB integration with

VMware NSX.

Configuration checklist

Review the following checklist before installing the OVSDB plug-in and configuring the system:

● The Cisco Nexus 9300 switch must be one of the models listed in Table 1.

● The Cisco Nexus 9300 switches should be running the correct version of NX-OS.

● The Cisco Nexus 9000 Series Switches should have the Cisco Nexus Database (NXDB) license (N93-

TP1K9) installed and the feature enabled.

● The Cisco Nexus 9300 switches should have the correct version of the plug-in and Java Runtime

Environment (JRE) downloaded to their bootflash memory.

● If you are using first-generation Cisco Nexus 9300 switches, they must be configured with the explicit TCAM

carving needed to enable the redirect-tunnel region. This feature is used for BFD over VXLAN.

● If high availability is needed, vPCs must be configured correctly on the Cisco Nexus 9300 switches.

● If high availability with vPC is being used, an anycast loopback address must be configured on the vPC pair.

● A separate username and password must be configured on the Cisco Nexus 9300 switches to be used by

the NXDB process to push API configuration changes, if this capability is desired.

● The correct supported version of NSX must be installed. The NSX configuration for controllers, NSX VIBs,

and VXLAN VTEPs must be completed.

● The routing for the environment must be set up. The NSX hypervisors (using the NSX VTEP interface), NSX

manager, and NSX controllers all should be able to ping the switch’s loopback address (the one that is

going to be used for HW-VTEP configuration). This address can be the anycast loopback address used if

vPC configuration is desired.

● VLANs must be reserved for use for VXLAN-to-VLAN association. Each NSX logical switch extension will

use one VLAN. Also, a VLAN will be used for BFD over VXLAN.

The following sections discuss how to configure and verify the items on this checklist.

Supported Cisco Nexus 9300 platform switches

As of the writing of this document, the only Cisco Nexus 9300 switches supported and certified to run the HW-

VTEP OVSDB integration with NSX are those listed in Table 1.

Table 1. Supported Cisco Nexus devices for OVSDB HW-VTEP integration with VMware NSX

Cisco Nexus 9300 platform switch Part number

Cisco Nexus 9372PX or 9372PX-E Switch N9K-C9372PX or N9K-C9372PX-E

Cisco Nexus 9372TX or 9372TX-E Switch N9K-C9372TX or N9K-C9372TX-E

Cisco Nexus 93120TX Switch N9K-C93120TX

Cisco Nexus 9332PQ Switch N9K-C9332PQ

Cisco Nexus 9396PX Switch N9K-C9396PX

Cisco Nexus 9396TX Switch N9K-C9396TX

Cisco Nexus 93128TX Switch N9K-C93128TX

Cisco Nexus 93180YC-EX N9K-C93180YC-EX

Cisco Nexus 93108TC-EX N9K-C93108TC-EX

Cisco Nexus 93180LC-EX Switch N9K-C93180LC-EX

To verify the type of hardware you have, you can log in to the Cisco Nexus 9300 switch and type the command

shown in Listing 3.

Listing 3 Sample display for a show version | inc chassis command

N9K# show version | inc chassis

cisco Nexus9000 C9332PQ chassis

N9K#

Verifying that your Cisco Nexus 9300 platform switch is running the correct Cisco NX-OS Software release

As of the writing of this document, the NX-OS release recommended for running the HW-VTEP OVSDB integration

with NSX is the one shown in Table 2.

Table 2. Recommended Cisco NX-OS Software release for OVSDB HW-VTEP integration with VMware NSX

Recommended Cisco NX-OS release Release 7.0(3)I6(1) or later in the same main release

The recommended version of NX-OS is indicated by the main release number. For example, if the recommended

release is Release 7.0(3)I6(1), then any later minor release in the same main release will work: for example, in this

case Release 7.0(3)I6(6) would also be recommended. To verify the release of NX-OS in your switch, enter the

command shown in Listing 4.

Listing 4 Sample display for a show version | inc System command

N9K# show version | inc System

Cisco Nexus Operating System (NX-OS) Software

System version: 7.0(3)I6(1)

N9K#

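The same-main-release rule described above can be scripted. The following Python sketch (the helper names are hypothetical; the release format and the comparison rule come from Table 2) splits a release string such as 7.0(3)I6(1) into its main-release train and minor number, then checks whether an installed release meets the recommendation:

```python
import re

def parse_release(release):
    """Split '7.0(3)I6(1)' into ('7.0(3)I6', 1): main-release train and minor number."""
    m = re.match(r"^(.*)\((\d+)\)$", release)
    if not m:
        raise ValueError("unrecognized NX-OS release: %s" % release)
    return m.group(1), int(m.group(2))

def meets_recommendation(installed, recommended):
    """True if 'installed' is in the same train as 'recommended' and at least as new.
    Releases from a different train are treated as not meeting the recommendation."""
    inst_train, inst_minor = parse_release(installed)
    rec_train, rec_minor = parse_release(recommended)
    return inst_train == rec_train and inst_minor >= rec_minor
```

For example, 7.0(3)I6(6) meets a 7.0(3)I6(1) recommendation, while 7.0(3)I4(3) does not.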

Installing the TP Services Package license for NXDB

The TP Services Package (TP_SERVICES_PKG) license must be installed on the Cisco Nexus 9300 switch. The

license part number is N93-TP1K9. Note that this is not an honor-based license. Without a license, you

cannot enable the OVSDB features on the switch. The command to verify the licensing is shown in Listing 5.

Listing 5 Sample display for a show license usage command

Feature Ins Lic Status Expiry Date

Comments

Count

---------------------------------------------------------------------------------

------------------------

FCOE_NPV_PKG No - Unused -

TP_SERVICES_PKG Yes - Unused Never -

NETWORK_SERVICES_PKG No - Unused -

LAN_ENTERPRISE_SERVICES_PKG No - Unused -

---------------------------------------------------------------------------------

-----------------------

N9K#

If the TP_SERVICES_PKG license is not listed as installed, fix the issue before proceeding. The Cisco NX-OS

Licensing Guide can be found at the following URL:

https://www.cisco.com/c/en/us/td/docs/switches/datacenter/sw/nx-os/licensing/guide/b_Cisco_NX-

OS_Licensing_Guide.html.

Verifying that the correct version of the plug-in and JRE are installed

As discussed earlier, the plug-in that runs in the switch’s guest shell provides communication between the NSX

controllers running OVSDB and the switch running NX-API. The plug-in uses a Java Runtime Environment, or JRE,

in the guest shell. The JRE is a set of software tools for the development of Java applications. It combines the Java

Virtual Machine (JVM), platform core classes, and supporting libraries. JRE is part of the Java Development Kit

(JDK), but it can be downloaded separately. Both the plug-in file and the JRE must be downloaded to the bootflash

memory of the switch. To verify that the plug-in and the JRE are in the bootflash memory, enter dir bootflash: as

shown in Listing 6 and look for the correct version of the plug-in and JRE.

Listing 6 Sample display for a dir bootflash: command

N9K# dir bootflash:

4096 Jul 05 21:21:35 2016 .rpmstore/

4462 Jul 05 21:23:43 2016 20160705_212325_poap_15804_init.log

2 Jul 05 19:48:07 2016 diag_bootup

53 Jul 05 20:40:01 2016 disk_log.txt

33705855 Jul 06 20:46:55 2016 jre-8u112-linux-x64.rpm

697200640 Sep 09 23:57:08 2016 nxos.7.0.3.I4.3.bin

698273280 Jul 05 21:27:58 2016 nxos.7.0.3.IVM3.1.16.bin

15274567 Sep 09 22:10:17 2016 ovsdb-plugin-2.1.0.rpm

4096 Jul 05 21:22:19 2016 scripts/

4096 Oct 03 20:22:00 2016 virt_strg_pool_bf_vdc_1/

4096 Oct 03 20:21:38 2016 virtual-instance/

59 Oct 03 20:21:34 2016 virtual-instance.conf

448 Oct 08 18:18:15 2016 vlan.dat

Usage for bootflash://sup-local

4272594944 bytes used

3696209920 bytes free

7968804864 bytes total

N9K#
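The check in Listing 6 can also be automated. The Python sketch below (a hypothetical helper, not part of the product) scans the dir bootflash: output for the plug-in and JRE filenames from Table 3 and reports any that are missing:

```python
def missing_files(dir_output, required=("ovsdb-plugin-2.1.0.rpm",
                                        "jre-8u112-linux-x64.rpm")):
    """Return the required bootflash files that do not appear in the listing."""
    return [name for name in required if name not in dir_output]
```

If the returned list is empty, both files are present in bootflash memory.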

As of the writing of this document, the recommended versions of the plug-in and JRE are those shown in Table 3.

Table 3. Recommended versions of the JRE and plug-in

Type Recommended version

Plug-in Version 2.1.0

JRE Version jre-8u112-linux-x64.rpm

The plug-in can be downloaded from a special file access area on Cisco.com that is available to authorized

customers. The recommended JRE can be downloaded from the Java website:

https://java.com/en/

The latest JREs are available at the following URL as of the writing of this document:

http://www.oracle.com/technetwork/java/javase/downloads/jre7-downloads-1880261.html

Carving the TCAM to enable BFD over VXLAN on first generation Cisco Nexus 9300 platform switches

NOTE: This step is NOT required for 9300 EX or newer switches.

To enable the BFD over VXLAN feature discussed earlier in this document, explicit TCAM carving is required on

the Cisco Nexus 9300 switch. Note that a reboot is mandatory after the TCAM has been recarved. The

hardware access-list tcam region value for the redirect-tunnel feature must be set to 256. To accomplish this,

you need to borrow some space from a feature that will not miss it. To display the current TCAM carving in the

Cisco Nexus 9300 switch, use the method shown in Listing 7.

Listing 7 Displaying the TCAM carving in a Cisco Nexus 9300 switch

N9K# show hardware access-list tcam region

IPV4 PACL [ifacl] size = 0

IPV6 PACL [ipv6-ifacl] size = 0

MAC PACL [mac-ifacl] size = 0

IPV4 Port QoS [qos] size = 0

---CLIP--

IPV4 RACL [racl] size = 1024

Egress IPV6 VACL [ipv6-vacl] size = 0

Egress MAC VACL [mac-vacl] size = 0

Egress IPV4 RACL [e-racl] size = 256

Egress IPV6 RACL [e-ipv6-racl] size = 0

Egress IPV4 QoS Lite [e-qos-lite] size = 0

ranger+ IPV4 QoS Lite [rp-qos-lite] size = 0

ranger+ IPV4 QoS [rp-qos] size = 256

ranger+ IPV6 QoS [rp-ipv6-qos] size = 256

ranger+ MAC QoS [rp-mac-qos] size = 256

NAT ACL[nat] size = 0

Mpls ACL size = 0

MOD RSVD size = 0

sFlow ACL [sflow] size = 0

mcast bidir ACL [mcast_bidir] size = 0

Openflow size = 0

Openflow Lite [openflow-lite] size = 0

Ingress FCoE Counters [fcoe-ingress] size = 0

Egress FCoE Counters [fcoe-egress] size = 0

Redirect-Tunnel [redirect-tunnel] size = 256

SPAN+sFlow ACL [span-sflow] size = 0

N9K#

In the example in Listing 8, the IPV4 RACL [racl] size = 1024 setting is configured to borrow some TCAM space

for the redirect-tunnel feature.

Listing 8 An example of changing the TCAM carving in a Cisco Nexus 9300 switch

N9K(config)# hardware access-list tcam region racl 1024 !<<Used as example

Warning: Please save config and reload the system for the configuration to take

effect

N9K(config)# hardware access-list tcam region e-racl 256

Warning: Please save config and reload the system for the configuration to take

effect

N9K(config)#

N9K(config)# copy running-config startup-config

[########################################] 100%

Copy complete.

N9K# reload

Be sure that any TCAM region you make smaller is one that is not being fully utilized.

Note that you must run copy running-config startup-config and reload the switch to make the new TCAM

template take effect.

If you are not sure whether the new TCAM template has taken effect, rerun the show hardware access-list tcam

region command. Look for Redirect-Tunnel [redirect-tunnel] size = 256 and verify that the smaller TCAM region

for the feature chosen to be changed is also correct.
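This verification can also be scripted. The Python sketch below (a hypothetical helper; the output format follows Listing 7) parses the show hardware access-list tcam region output into a name-to-size map so the redirect-tunnel region can be checked programmatically:

```python
import re

def tcam_region_sizes(output):
    """Map each bracketed TCAM region name to its configured size."""
    sizes = {}
    for line in output.splitlines():
        m = re.search(r"\[([\w+-]+)\]\s+size\s*=\s*(\d+)", line)
        if m:
            sizes[m.group(1)] = int(m.group(2))
    return sizes
```

After the reload, tcam_region_sizes(output).get("redirect-tunnel") should be 256.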

Verifying the vPC configuration

vPC configuration is an optional step. As discussed earlier in this document, you can use the vPC feature to

improve high availability for the HW-VTEP. Verify that the vPC feature is configured correctly. All VLANs that will

be used for the HW-VTEP feature (as discussed later in this document) must be allowed on the vPC peer link.

The Cisco NX-OS Software Release 7.0 configuration guide contains information about configuring vPCs. It is

located at the following URL:

https://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus9000/sw/7-

x/interfaces/configuration/guide/b_Cisco_Nexus_9000_Series_NX-

OS_Interfaces_Configuration_Guide_7x/b_Cisco_Nexus_9000_Series_NX-

OS_Interfaces_Configuration_Guide_7x_chapter_01000.html

The vPC role for the switches must be noted. Listing 9 shows an example.

Listing 9 Example of the output for a show vpc role command

N9K# show vpc role

vPC Role status

----------------------------------------------------

vPC role : primary

Dual Active Detection Status : 0

vPC system-mac : 00:23:04:ee:be:01

vPC system-priority : 32667

vPC local system-mac : 00:f6:63:80:65:bf

vPC local role-priority : 100

N9K#

The vPC primary switch will contain the master plug-in, and the secondary switch will contain the slave plug-in. You

must know which switch is primary so that you can configure the switches and their plug-ins in the correct order for

their roles.

Configuring the anycast loopback address for vPC pairs

If you are using high availability with vPC, you must configure an anycast loopback address on the vPC switch pair.

In this implementation, with this configuration, both switches in the vPC pair will advertise the same secondary IP

address for the loopback interface they are using for sourcing and receiving the VXLAN traffic. This configuration

will allow them to be seen as a single device from the perspective of the NSX control plane. This loopback address

must be reachable by all NSX components.

Listings 10 and 11 show examples of the loopback configuration for two switches that make up a vPC pair.

Listing 10 An example of anycast loopback configuration on the vPC primary switch

N9K-VPC-1# show running-config interface loopback 0

!Command: show running-config interface loopback0

!Time: Tue Dec 20 19:18:49 2016

version 7.0(3)I4(3)

interface loopback0

ip address 100.0.0.1/32

ip address 100.0.0.10/32 secondary

ip router ospf lab area 0.0.0.0

N9K-VPC-1#

Listing 11 An example of anycast loopback configuration on the vPC secondary switch

N9K-VPC-2# show running-config interface loopback 0

!Command: show running-config interface loopback0

!Time: Tue Dec 20 19:18:49 2016

version 7.0(3)I4(3)

interface loopback0

ip address 100.0.0.2/32

ip address 100.0.0.10/32 secondary

ip router ospf lab area 0.0.0.0

N9K-VPC-2#

Note that the primary IP addresses are different, but the secondary addresses are the same. Both of these

switches will advertise their primary addresses and anycast secondary addresses in the Open Shortest Path First

(OSPF) protocol.
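The consistency rule above — different primary addresses, identical secondary (anycast) address — can be checked with a small script. This Python sketch (hypothetical helpers; the configuration format follows Listings 10 and 11) extracts both addresses from the loopback running-config of each vPC member and compares them:

```python
import re

def loopback_addresses(config):
    """Return (primary, secondary) addresses from interface running-config text."""
    primary = secondary = None
    for line in config.splitlines():
        m = re.match(r"\s*ip address (\S+)( secondary)?\s*$", line)
        if m:
            if m.group(2):
                secondary = m.group(1)
            else:
                primary = m.group(1)
    return primary, secondary

def anycast_consistent(cfg_a, cfg_b):
    """True if the primaries differ but the anycast secondary matches."""
    pa, sa = loopback_addresses(cfg_a)
    pb, sb = loopback_addresses(cfg_b)
    return pa != pb and sa == sb and sa is not None
```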

If a separate Virtual Routing and Forwarding (VRF) instance will be used for this connectivity, you also need to

configure the loopback and associated routing accordingly.

Configuring a username and password for use by the NXDB plug-in if desired

Network administrator users can assign roles that limit access to NXDB operations on the switch.

NX-OS supports two NXDB roles for users who are configured for remote use through TACACS+:

● nxdb-admin: Allowed to run get and set JSON-RPC NX-API calls from the external controller

● nxdb-operator: Allowed to run only get JSON-RPC NX-API calls from the external controller

When NXDB is enabled, the nxdb-admin role is automatically assigned to the permanent user (admin). Network

administrator users can assign the nxdb-admin or nxdb-operator role to other users as necessary.

Note that Representational State Transfer (REST) requests using credentials received from TACACS+ will work as

expected.

If you want, you can configure a separate local username and password on the Cisco Nexus 9300 switches to be

used by the NXDB plug-in. Use the following command:

username user-id [password [0 | 5] password] role {nxdb-admin | nxdb-operator}

This command configures a user account with the specified NXDB role. The user-id argument is a case-sensitive,

alphanumeric character string with a maximum length of 28 characters. Valid characters are uppercase letters A

through Z, lowercase letters a through z, numbers 0 through 9, hyphen (-), period (.), underscore (_), plus sign (+),

and equal sign (=). The at symbol (@) is supported in remote usernames but not in local usernames.

The default password is undefined. The 0 option indicates that the password is clear text, and the 5 option

indicates that the password is encrypted. The default is 0 (clear text).

Listing 12 shows an example of how to configure and verify a user account to be used by the NXDB process.

Listing 12 Defining a user to be used by the NXDB process

N9K# conf t

Enter configuration commands, one per line. End with CNTL/Z.

N9K(config)# username nxdb-user password 0 NX-DBpassword! role nxdb-admin

N9K(config)#

N9K(config)# copy running-config startup-config

[########################################] 100%

Copy complete.

N9K(config)# exit

N9K# show user-account

user:admin

this user account has no expiry date

roles:network-admin

user:nxdb-user

this user account has no expiry date

roles:nxdb-admin

N9K#

Note that this step is optional. The administrator user can also be used for the NXDB configuration.
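An account such as the one in Listing 12 would be supplied as HTTP basic-authentication credentials on JSON-RPC NX-API calls. As a sketch only (the example credentials are taken from Listing 12; the request shape follows the NX-API JSON-RPC convention, and the helper itself is hypothetical), the following Python builds the headers and body for a "cli" call:

```python
import json
import base64

def nxapi_request(command, username="nxdb-user", password="NX-DBpassword!"):
    """Return (headers, body) for a JSON-RPC 'cli' call over NX-API."""
    token = base64.b64encode(("%s:%s" % (username, password)).encode()).decode()
    headers = {
        "Content-Type": "application/json-rpc",
        "Authorization": "Basic " + token,
    }
    body = json.dumps([{
        "jsonrpc": "2.0",
        "method": "cli",
        "params": {"cmd": command, "version": 1},
        "id": 1,
    }])
    return headers, body
```

The request would then be POSTed to the switch's NX-API endpoint over the VRF configured for NX-API.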

Verifying that the VMware NSX version and configuration are supported

Verify that the correct version of VMware NSX is installed. As of the writing of this document, the version of code

certified by VMware and Cisco to run this integration is the one listed in Table 4.

Table 4. Supported VMware NSX release

Supported VMware NSX release Release 6.3.3 and later in the same main release

The NSX configuration should follow the VMware configuration guidelines. Check the correct guidelines at the

VMware website.

Verifying the required network reachability

You need to set up and verify the routing for the environment before performing the rest of the configuration. The

NSX hypervisors, NSX manager, and NSX controller should be reachable from the switch’s loopback address (the

one that is going to be used for HW-VTEP configuration). Note that this address can be the anycast loopback

address if you configure vPC.

To verify reachability, you can ping the IP addresses of the NSX manager, the NSX controllers, and the ESXi hosts’ VTEP VMkernel interfaces while sourcing the pings from the loopback address that will be used for the VXLAN configuration. If you are using a separate VRF instance for this connectivity, the loopback and associated routing must be configured accordingly, and the ping should use that VRF instance. Listing 13 shows an example.

Listing 13 Example of using ping to verify connectivity

N9K# ping 10.10.2.10 source-interface loop0

PING 10.10.2.10 (10.10.2.10): 56 data bytes

64 bytes from 10.10.2.10: icmp_seq=0 ttl=62 time=0.728 ms

64 bytes from 10.10.2.10: icmp_seq=1 ttl=62 time=0.479 ms

64 bytes from 10.10.2.10: icmp_seq=2 ttl=62 time=0.422 ms

64 bytes from 10.10.2.10: icmp_seq=3 ttl=62 time=0.428 ms

64 bytes from 10.10.2.10: icmp_seq=4 ttl=62 time=0.418 ms

--- 10.10.2.10 ping statistics ---

5 packets transmitted, 5 packets received, 0.00% packet loss

round-trip min/avg/max = 0.418/0.495/0.728 ms

N9K#

Ideally, if possible, the ping should be performed from the NSX components themselves to the loopback addresses

of the Cisco Nexus 9300 switches.
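To run this check against several NSX components at once, the ping commands can be generated programmatically. The Python sketch below (a hypothetical helper; the base command syntax matches Listing 13, and the VRF keyword placement is an assumption to be verified against your NX-OS release) builds one command per target:

```python
def ping_commands(targets, source_interface="loopback0", vrf=None):
    """Build NX-OS ping commands sourced from the HW-VTEP loopback."""
    cmds = []
    for target in targets:
        cmd = "ping %s source-interface %s" % (target, source_interface)
        if vrf:
            cmd += " vrf %s" % vrf
        cmds.append(cmd)
    return cmds
```

The target list would include the NSX manager, each NSX controller, and each hypervisor VTEP address.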

Reserving VLANs and other switch resources

As discussed earlier, VLANs must be reserved for VXLAN-to-VLAN associations. Each NSX logical switch

extension uses one VLAN on an HW-VTEP (or vPC pair). If 20 VXLAN-backed port groups are to be extended

using the OVSDB HW-VTEP integration between NSX and the Cisco Nexus 9300 switch, then 20 VLANs must be

reserved on that switch (or vPC pair of switches) for that association.

A single VLAN per switch (or vPC pair of switches) must also be reserved for BFD over VXLAN.

Note: In a vPC implementation, all the reserved VLANs (including the one reserved for BFD over VXLAN) must be

allowed on the vPC peer link.
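The arithmetic above — one VLAN per extended logical switch plus one for BFD over VXLAN — can be captured in a short sketch (hypothetical helpers, shown only to make the sizing rule explicit):

```python
def vlans_required(logical_switch_extensions):
    """VLANs needed on a switch or vPC pair: one per extension plus the BFD control VLAN."""
    return logical_switch_extensions + 1

def range_sufficient(vlan_range, logical_switch_extensions):
    """vlan_range is an inclusive (first, last) tuple of reserved VLAN IDs."""
    first, last = vlan_range
    return (last - first + 1) >= vlans_required(logical_switch_extensions)
```

For the 20-extension example above, 21 VLANs must be reserved, so a range such as 600-620 suffices while 600-609 does not.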

Note the following guidelines and limitations for VLANs assigned to the external controller:

● The assigned VLANs must not already exist on the system.

● The assigned VLANs can be configured only as dedicated resources. Therefore, only the external controller

can push down VLAN-related configurations for them.

● The VLANs are either completely owned by the external controller or completely owned by the switch. If the

VLAN is owned by the external controller, the switch cannot configure the port membership for that VLAN. If

the VLAN is owned by the switch, any configuration that the controller sends down will be blocked.

Note the following guidelines and limitations for interfaces assigned to the external controller:

● The Ethernet and port-channel interfaces that are exposed to the external controller must be valid

interfaces.

● vPCs are supported.

● vPC domains should be configured with the delay peer-link timer (using the delay peer-link seconds

command). The recommended value is 600 seconds, but it should be adjusted based on the scale of the deployment.

● The Ethernet and port-channel interfaces can be configured only as shared resources. Therefore, the

configuration for these resources can be performed from both the switch Command-Line Interface (CLI) and

the external controller.

● Trunk-mode interfaces are the only interfaces that can be shared. Access ports are not supported.

● If an interface is already assigned, it cannot be changed to an access-mode interface.

● You cannot configure the native VLAN (the untagged VLAN for trunk-mode interfaces) on the assigned

interface. Therefore, an assigned interface can only have VLAN 1 as its default VLAN.

Configuring OVSDB integration with VMware NSX

After the tasks in the checklists discussed in the previous sections are complete, you are ready to configure the

Cisco Nexus 9000 Series Switch HW-VTEP integration with VMware NSX using OVSDB. This configuration will

make the Cisco Nexus switches work as HW-VTEPs with NSX.

Configuring the required features on the switch

You must configure the following features for the integration to work:

Note: The commands shown here are the same for both standalone and vPC configurations. They must match

exactly for a vPC pair.

● feature nv overlay: Enables the VXLAN feature

● feature vn-segment-vlan-based: Configures the global mode for all VXLAN bridge domains

● feature bfd: Enables the BFD feature

● feature nxapi: Enables the plug-in to use NX-API to communicate with the switch

● feature nxdb: Enables NXDB on the switch, which allows the switch to be configured through JSON-RPC

NX-API calls

In addition to the preceding features, other features, such as vPC, Link Aggregation Control Protocol (LACP),

and routing protocols, may be required, but those should have been configured when you completed the checklist

steps.

The NX-API feature requires you to specify a VRF instance and ports. Choose the management option if the

connection to the controller is through the management VRF instance. The default option specifies the default

VRF instance. The commands for specifying the VRF instance and ports to be used for the NXAPI feature are

listed here:

● nxapi use-vrf {default | management}

● nxapi http port 80

● nxapi https port 443

Listing 14 shows an example of how to configure the features and the NX-API to use VRF management. Any VRF

instance can be specified for this procedure and will be used by the plug-in.

Listing 14 Configuring the features for OVSDB integration

N9k# conf t

Enter configuration commands, one per line. End with CNTL/Z.

N9k(config)# feature nv overlay

N9k(config)# feature vn-segment-vlan-based

N9k(config)# feature bfd

Please disable the ICMP / ICMPv6 redirects on all IPv4 and IPv6 interfaces

running BFD sessions using the command below

'no ip redirects '

'no ipv6 redirects '

N9k(config)# feature nxapi

N9k(config)# feature nxdb

N9k(config)# nxapi http port 80

N9k(config)# nxapi https port 443

N9k(config)# nxapi use-vrf management

Warning: Management ACLs configured will not be effective for HTTP services.

Please use iptables to restrict access.

N9k(config)#

Configuring VXLAN on the switch

Note: The commands discussed here are the same for both standalone and vPC configurations. They must match

exactly for a vPC pair.

For the switch to accept Layer 2 VXLAN configurations from an external controller, you must configure some

VXLAN settings. Here, to do this, configure a Network Virtualization Edge (NVE) interface using the following

command:

switch(config)#interface nve 1

This command creates a VXLAN overlay interface that terminates VXLAN tunnels. The NVE interface serves as a

single logical interface for the VXLAN network ports.

After the interface is created, you must enable it by using the following command:

switch(config-if-nve)# no shut

Note that the switch supports only one NVE interface.

The NVE interface then must be tied to the loopback interface that will be its source. Use the following command:

switch(config-if-nve)# source-interface loopback 0 (or the correct loopback

interface)

This /32 IP address must be known to the transit devices in the transport network and the remote VTEPs. This

requirement is met by advertising the address through a dynamic routing protocol in the transport network.

The following command automatically remaps traffic to a different replication service node when a service node is

added or goes down:

switch(config-if-nve)# auto-remap-replication-servers

You then specify that the external controller will distribute the host reachability information (such as the MAC

addresses and IP addresses of the host) in the network. Use the following command:

switch(config-if-nve)# host-reachability protocol controller 1

The next step is to enable the switch to receive configurations from the controller. Use the following command:

switch(config-if-nve)# config-source controller

Next, add a hold-down timer for the interface to keep it down until the switch is up and the routing protocols have

had a chance to converge:

switch(config-if-nve)# source-interface hold-down-time 30

A sample configuration for the NVE interface is shown in Listing 15.

Listing 15 Configuring the NVE interface

N9K# conf t

Enter configuration commands, one per line. End with CNTL/Z.

N9K (config)# interface nve1

N9K (config-if-nve)# no shutdown

N9K (config-if-nve)# source-interface loopback0

N9K (config-if-nve)# auto-remap-replication-servers

N9K (config-if-nve)# host-reachability protocol controller 1

N9K (config-if-nve)# config-source controller

N9K (config-if-nve)# source-interface hold-down-time 30

N9K(config-if-nve)#end

N9K#
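Because the NVE configuration must match exactly on both members of a vPC pair, it is a good candidate for templating. The Python sketch below (a hypothetical generator; the command names are taken from Listing 15, and the parameter values are examples) emits the same block for any number of switches:

```python
def nve_interface_config(source_interface="loopback0", hold_down=30):
    """Render the NVE interface commands from Listing 15 as a list of lines."""
    return [
        "interface nve1",
        " no shutdown",
        " source-interface %s" % source_interface,
        " auto-remap-replication-servers",
        " host-reachability protocol controller 1",
        " config-source controller",
        " source-interface hold-down-time %d" % hold_down,
    ]
```

The rendered lines could then be pushed to each vPC member through whatever configuration channel is in use.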

Configuring BFD over VXLAN

Note: The commands discussed here are the same for both standalone and vPC configurations. They must match

exactly for a vPC pair.

To configure the BFD-over-VXLAN feature, you define a BFD control VLAN and map it to control VNI 0. The BFD

control frames are encapsulated with VNI 0. The following commands are used:

● switch(config)# vlan 3000 !<<Or any VLAN that was reserved for BFD over VXLAN

● switch(config-vlan)# vn-segment 0

Next, you must define a BFD control SVI with IP forwarding. This command is required to forward the BFD packet

to the supervisor after it is received. The commands are as follows:

● switch(config)# interface Vlan3000 !<<Or any VLAN that is reserved for BFD over VXLAN

● switch(config-if)# no shutdown

● switch(config-if)# ip forward

A sample configuration for BFD over VXLAN is shown in Listing 16. The actual BFD configuration will be pushed

down by the NSX controller after the integration is configured.

Listing 16 Configuring the VLAN for BFD over VXLAN

N9K# conf t

Enter configuration commands, one per line. End with CNTL/Z.

N9K(config)# vlan 3000 !<<Use the correct VLAN

N9K(config-vlan)# vn-segment 0

N9K(config-vlan)# exit

Warning: Enable double-wide arp-ether tcam carving if igmp snooping is enabled.

Ignore if tcam carving is already configured.

N9K(config)# interface vlan 3000 !<<Use the correct VLAN

N9K(config-if)# no shut

N9K(config-if)# ip forward

N9K(config-if)# end

N9K#

Assigning VLANs and interfaces to the controllers

Note: The commands discussed here are the same for both standalone and vPC configurations. They must match

exactly for the vPC pair for VLANs and any vPCs that are shared. They do not have to match if orphan ports are

being shared on vPC member switches.

For the switch to accept configurations from an external controller, you must identify the VLANs and interfaces

whose configuration can be performed from the external controller. You will assign the reserved VLANs and

interfaces to be configured by the NSX controller using OVSDB.

First you must identify the type of controller that will be connecting to the Cisco Nexus 9300 switch by entering the

following command:

switch(config)# controller type l2-vxlan identifier 1

switch(config-ctrlr-type)#

After the controller type is configured and created, you can start assigning interfaces to it. These are the interfaces

that will be controlled and configured by the external controller.

If this is a standalone implementation, you assign the trunked interfaces that connect to the VXLAN-backed

port groups.

If this is a vPC implementation and vPC will be assigned, then the vPC peer link must be assigned to the controller

first, and then the actual trunked vPCs should be assigned.

The following example shows how to assign physical (non-vPC) trunked interfaces to the controller:

switch(config-ctrlr-type)# assign interface ethernet 1/4-7, ethernet 1/17 shared

Note that interface ranges can be used, and that the shared keyword at the end of the line is required.

You can also use several lines to assign interfaces:

switch(config-ctrlr-type)# assign interface ethernet 1/4 shared

switch(config-ctrlr-type)# assign interface ethernet 1/5 shared

switch(config-ctrlr-type)# assign interface ethernet 1/6 shared

switch(config-ctrlr-type)# assign interface ethernet 1/7 shared

switch(config-ctrlr-type)# assign interface ethernet 1/17 shared

You can also use the no version of the command to remove an interface assignment. Here is an example:

switch(config-ctrlr-type)# no assign interface Ethernet 1/4-5

If you are assigning vPCs, you must assign the vPC peer link first. The vPC configuration must match for both vPC

member switches. As mentioned earlier, orphan ports can be assigned as well. Here is an example:

switch(config-ctrlr-type)# assign interface port-channel 10 shared !<<vPC peer

link

switch(config-ctrlr-type)# assign interface port-channel 100 shared

switch(config-ctrlr-type)# assign interface port-channel 105-107 shared

switch(config-ctrlr-type)# assign interface ethernet 1/20 shared !<<Orphan port

After the interfaces have been assigned to the controllers, the next step is to assign the dedicated VLANs that have

been reserved for this configuration. For a vPC implementation, these VLANs must match in both vPC member

switches. The command to assign VLANs to the controller is as follows:

switch(config-ctrlr-type)# assign vlan 500-510, 1000 dedicated

Note that the dedicated keyword at the end of the line is required.

The no version of this command can be used to remove VLANs from the controllers:

switch(config-ctrlr-type)# no assign vlan 510

Note that after you run the no assign command and then reassign VLANs in the controller context, you must restart the OVSDB plug-in with the command guestshell run sudo ovsdb-plugin service restart.
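For example, the complete sequence to replace one dedicated VLAN on a running configuration, using only commands covered in this section (the VLAN numbers here are illustrative), might look like this:

switch(config)# controller type l2-vxlan identifier 1
switch(config-ctrlr-type)# no assign vlan 510
switch(config-ctrlr-type)# assign vlan 511 dedicated
switch(config-ctrlr-type)# end
switch# guestshell run sudo ovsdb-plugin service restart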

The processes for installing and working with the plug-in are discussed in the next section of this document.

If you want, you can add a description to the controller by using the following command:

switch(config-ctrlr-type)# controller description <text>

A sample controller configuration for a vPC member switch is shown in Listing 17.

Listing 17 Sample controller configuration for a vPC member switch

© 2018 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information. Page 31 of 62

N9K-vPC-1#

N9K-vPC-1# conf t

Enter configuration commands, one per line. End with CNTL/Z.

N9K-vPC-1(config)# controller type l2-vxlan identifier 1

N9K-vPC-1(config-ctrlr-type)# controller description Controller-for-OVSDB

N9K-vPC-1(config-ctrlr-type)# assign interface port-channel10, port-channel100

shared

N9K-vPC-1(config-ctrlr-type)# assign interface Ethernet1/10-11 shared

N9K-vPC-1(config-ctrlr-type)# assign vlan 600-609 dedicated

N9K-vPC-1(config-ctrlr-type)# end

N9K-vPC-1#
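By contrast, a standalone (non-vPC) controller configuration assembled from the same commands would simply omit the vPC peer link. A sketch, reusing the example interface and VLAN values shown earlier:

switch(config)# controller type l2-vxlan identifier 1
switch(config-ctrlr-type)# controller description Controller-for-OVSDB
switch(config-ctrlr-type)# assign interface ethernet 1/4-7, ethernet 1/17 shared
switch(config-ctrlr-type)# assign vlan 500-510, 1000 dedicated
switch(config-ctrlr-type)# end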

Configuring the guest shell for the OVSDB plug-in

Note: The commands discussed here are the same for both standalone and vPC configurations. They must match

exactly for a vPC pair.

In addition to the NX-OS CLI and Bash access in the underlying Linux environment, the Cisco Nexus 9000 Series

Switches support access to a decoupled run space within a Linux Container (LXC) called the guest shell.

The OVSDB plug-in runs inside the guest shell of the Cisco Nexus 9000 Series Switch. More information about the

Cisco Nexus 9000 Guest Shell can be found at the following URL:

https://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus9000/sw/7-x/programmability/guide/b_Cisco_Nexus_9000_Series_NX-OS_Programmability_Guide_7x/Guest_Shell.html

To run the plug-in, you must first resize the guest shell to the correct values. Use the following commands:

● switch# guestshell destroy

● switch# guestshell resize cpu 18

● switch# guestshell resize mem 2500

● switch# guestshell resize rootfs 1200

● switch# guestshell enable

Warning: The guestshell destroy command removes all existing configurations in the guest shell.

After you enter the guestshell enable command, it may take a few minutes for the new guest shell to come up. An

example of this configuration is shown in Listing 18.

Listing 18 Configuring the guest shell

N9K# guestshell destroy

You are about to destroy the guest shell and all of its contents. Be sure to

save your work. Are you sure you want to continue? (y/n) [n] y

N9K# guestshell resize cpu 18

Note: Guest shell is currently installing, uninstalling or upgrading; please

retry request !<<Please wait

N9K# guestshell resize cpu 18

Note: System CPU share will be resized on Guest shell enable

N9K# guestshell resize mem 2500

Note: System memory will be resized on Guest shell enable

N9K# guestshell resize rootfs 1200

Note: Root filesystem will be resized on Guest shell enable

N9K# guestshell enable

N9K#

N9K# show guestshell

Virtual service guestshell+ detail

State : Installing !<<Still not fully enabled

Package information

Name : guestshell.ova

Path : /isanboot/bin/guestshell.ova

Resource reservation

Disk : 0 MB

Memory : 0 MB

CPU : 0% system CPU

N9K# show guestshell

Virtual service guestshell+ detail

State : Activated !<<Guest shell is ready

Package information

Name : guestshell.ova

Path : /isanboot/bin/guestshell.ova

Application

Name : GuestShell

Installed version : 2.2(0.0)

Description : Cisco Systems Guest Shell

Resource reservation

Disk : 1200 MB

Memory : 2500 MB

CPU : 18% system CPU

N9K#

Installing the OVSDB plug-in

Note: The commands discussed here are the same for both standalone and vPC configurations. They must match

exactly for a vPC pair.

After the guest shell is active, it is time to install the plug-in and JRE that were copied to the bootflash memory

earlier. To do that, you must first access the guest shell prompt by using the following command:

switch# run guestshell

[guestshell@guestshell ~]$

From the guest shell, install the correct JRE file that was copied to the bootflash memory earlier. Use the following command:

[guestshell@guestshell ~]$ sudo rpm -i /bootflash/jre-8u112-linux-x64.rpm !<<Use correct JRE

After the JRE is installed, it is time to install the correct OVSDB plug-in that has been copied to the bootflash

memory. Use the following command:

[guestshell@guestshell ~]$ sudo rpm -i /bootflash/ovsdb-plugin-2.1.0.rpm !<<Note plug-in

A sample configuration is shown in Listing 19.

Listing 19 Installing the JRE and plug-in in the guest shell

N9K# run guestshell

[guestshell@guestshell ~]$ sudo rpm -i /bootflash/jre-8u112-linux-x64.rpm

Unpacking JAR files...

rt.jar...

jsse.jar...

charsets.jar...

localedata.jar...

jfxrt.jar...

[guestshell@guestshell ~]$ sudo rpm -i /bootflash/ovsdb-plugin-2.1.0.rpm

+ APPDIR=/usr/local/ovsdb

+ LOGDIR=/usr/local/ovsdb/log

+ START_SCRIPT_NAME=ovsdb.service

+ TIMER_SCRIPT_NAME=ovsdb.timer

+ START_SCRIPT_SRC=/usr/local/ovsdb/systemd/ovsdb.service

+ START_SCRIPT_DST=/etc/systemd/system/ovsdb.service

+ TIMER_SCRIPT_SRC=/usr/local/ovsdb/systemd/ovsdb.timer

+ TIMER_SCRIPT_DST=/etc/systemd/system/ovsdb.timer

+ SERVICE_NAME=ovsdb.timer

+ /bin/id -u guestshell

1000

+ USER=guestshell

+ mkdir -p /usr/local/ovsdb /usr/local/ovsdb/log

+ sed -i 's~^LOGGING_DIR .*~LOGGING_DIR =

abspath('\''/usr/local/ovsdb/log/'\'')~'

/usr/local/ovsdb/lib/python/ovsdb_plugin/common.py

+ sed -i 's/Defaults\s\+requiretty/Defaults \!requiretty/' /etc/sudoers

+ cp /usr/local/ovsdb/systemd/ovsdb.service /etc/systemd/system/ovsdb.service

+ cp /usr/local/ovsdb/systemd/ovsdb.timer /etc/systemd/system/ovsdb.timer

+ sed -i s/guestshell/guestshell/g /etc/systemd/system/ovsdb.service

+ chmod 664 /etc/systemd/system/ovsdb.service

+ chmod 664 /etc/systemd/system/ovsdb.timer

+ chown -R guestshell:guestshell /usr/local/ovsdb /usr/local/ovsdb/log

+ chmod -R 777 /usr/local/ovsdb

+ chmod -R 777 /usr/local/ovsdb/bin/data-recv /usr/local/ovsdb/bin/data-send

/usr/local/ovsdb/bin/nc /usr/local/ovsdb/bin/ovsdb-plugin

+ chmod -R 777 /usr/local/ovsdb/log

+ chmod -R u+rwX,g-rwx,o-rwx /usr/local/ovsdb/config/ssl

+ test -h /usr/bin/ovsdb-plugin

+ ln -s /usr/local/ovsdb/bin/ovsdb-plugin /usr/bin/ovsdb-plugin

+ systemctl enable ovsdb.timer

ln -s '/etc/systemd/system/ovsdb.timer' '/etc/systemd/system/multi-

user.target.wants/ovsdb.timer'

[guestshell@guestshell ~]$ exit

logout

N9K#

Configuring the OVSDB plug-in for a standalone switch

Note: The following steps are for configuring the OVSDB plug-in for a standalone (non-vPC) switch. The steps for

configuring the OVSDB plug-in for a vPC pair of switches are in the next section.

After the plug-in is installed in the guest shell, you need to configure it. Make sure that the NSX controller IP

address is reachable before you configure the OVSDB plug-in. Then configure the plug-in using the following

command:

switch# guestshell run sudo ovsdb-plugin config set --run-in-switch --vrf vrf-name --log-level level --log-type type controller-ip-address switch-mgmt-ip-address switch-username switch-password switch-name --switch-description switch-description

You must complete all the placeholder fields with the information for your environment.

Take a closer look at this long command:

● guestshell run ovsdb-plugin: This entry tells the switch that you are sending the command to the plug-in

running in the guest shell.

● config set: This entry tells the plug-in that you are setting its configuration.

● --run-in-switch: This entry configures the plug-in to run in the switch.

● --vrf vrf-name: This entry is used only when --run-in-switch is set. It configures the plug-in to use the

given VRF name when communicating with the controller. The default VRF name is management.

● --log-level level: This entry sets the logging level. It defaults to info. The recommended setting is debug.

● --log-type type: This entry specifies the type of logging to use. When set to file, the path is always

PLUGIN_ROOT/log/ovsdb-plugin.log. The default type is file, and that is the recommended setting. You can

also set this command to UDP and define a server.

● controller-ip-address: This entry specifies the IP address of one of the NSX controllers (not the NSX manager). The controller address is in IP:PORT format. The port defaults to 6632 if none is specified. The port used by NSX is 6640, so it must be specified explicitly: for example, 10.10.10.1:6640. The order for this specification is important. The controller IP address must be entered before the switch IP address.

● switch-mgmt-ip-address: This entry specifies the switch’s address in IP:PORT format. The port defaults to 443 if it is not included. You should use the loopback address here.

● switch-username switch-password: This entry specifies the username and password that will be used by

the NXDB process to push NX-API changes to the switch.

● switch-name: This entry specifies a switch name to be used. This attribute is mandatory. This attribute

must come after the username and password.

● --switch-description switch-description: This entry specifies a description for this switch. This attribute is

optional. You can use any description, including the switch name.

Listing 20 shows an example of the entire command.

Listing 20 Configuring the plug-in

N9K# guestshell run sudo ovsdb-plugin config set --run-in-switch --vrf default --

log-level debug --log-type file 172.26.36.102:6640 100.0.0.3 nxdb-user NX-

DBpassword! N9K --switch-description N9K

Configuration saved

N9K# guestshell run sudo ovsdb-plugin config show

Controllers : #1 addr: 172.26.36.102:6640

VPC : No

In switch : Yes

VRF : default

Switches : #1 addr: 100.0.0.3:443

type: STANDALONE

user: nxdb-user

name: N9K

description: N9K

Log type : file

Log level : debug

Log server : -

TTY log path : -

Max JSON peers: 6

Min heap size : 2048 MB

Max heap size : 2048 MB

Schema : 1.3.0

N9K#

As shown in the example, you can use the guestshell run sudo ovsdb-plugin config show command to check

the OVSDB plug-in’s saved configuration.

After the OVSDB plug-in is configured, the next step is to create a certificate that will be used for secure

communication between the OVSDB plug-in and the NSX controllers. The certificate is created by entering the

following command:

switch# guestshell run sudo ovsdb-plugin cert bootstrap

To display the certificate, use the command guestshell run sudo ovsdb-plugin cert show.

An example of this command is shown in Listing 21.

Listing 21 Creating a certificate for the plug-in

N9K# guestshell run sudo ovsdb-plugin cert bootstrap

OVSDB plugin keypair saved to '/usr/local/ovsdb/config/ssl/ovsdb_plugin.p12'

Controller CA cert saved to '/usr/local/ovsdb/config/ssl/controller_ca_cert1.der'

Combined controller CA cert saved to

'/usr/local/ovsdb/config/ssl/controller_ca_cert.der'

Switch #1 CA cert saved to '/usr/local/ovsdb/config/ssl/switch_ca_cert_1.der'

N9K# guestshell run sudo ovsdb-plugin cert show

-----BEGIN CERTIFICATE-----

MIICkjCCAfugAwIBAgIJAM31b/Oey9eKMA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV

BAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMRMwEQYDVQQHDApQbGVhc2FudG9u

MRIwEAYDVQQKDAlBbmRyb21lZGExFTATBgNVBAMMDG92c2RiX3BsdWdpbjAeFw0x

NjEyMjYyMDU1MjdaFw0xOTEyMjYyMDU1MjdaMGIxCzAJBgNVBAYTAlVTMRMwEQYD

VQQIDApDYWxpZm9ybmlhMRMwEQYDVQQHDApQbGVhc2FudG9uMRIwEAYDVQQKDAlB

bmRyb21lZGExFTATBgNVBAMMDG92c2RiX3BsdWdpbjCBnzANBgkqhkiG9w0BAQEF

AAOBjQAwgYkCgYEArMr611TJIxrIwxwoG+XpoeuhW9r1TNmsog1UNIDQkPXybvLa

8GTBgSqqI7kl4K326fxstPqnJAJtA4R4xCi4OpQDoUmDp680KejjW63/EEvYokx0

XuaY48z+VO0L4zBAIOpI4o0mbGNv5aIaq/qeobbZvo3KSRKkEcGL+IeDEgUCAwEA

AaNQME4wHQYDVR0OBBYEFBqi2xO54kvGVATk3IjvVnoNNAU7MB8GA1UdIwQYMBaA

FBqi2xO54kvGVATk3IjvVnoNNAU7MAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQEL

BQADgYEAGkzO/7ubt8LmgKbOUu3nOK5c6fdm+YaktyZ+KemnOPw043FHu/RQ/PF3

aIPhh4pAJXOuln1w4K9hDKgKwXvGxrQdvyXSotrdsrHHVvfxrhP+jSGwgi2x3AZ1

+hotol8wNoc6ZytC6dFtWeL0Jeze+53nB5/zMvgmZ77FI4IewO8=

-----END CERTIFICATE-----

N9K#

Configuring the OVSDB plug-in for a pair of vPC switches

You configure the OVSDB plug-in for a pair of vPC switches in much the same way as for a standalone switch. After the plug-in is installed in the guest shells of both switches, it is time to configure it.

Make sure that the NSX controller IP address is reachable from both switches before you configure the OVSDB

plug-in. You also need to know which switch has the vPC primary role. The plug-in will be in master mode on the

vPC primary switch and in slave mode on the vPC secondary switch.

Now you are ready to configure the plug-in. Enter the following command on the vPC primary switch:

switch# guestshell run sudo ovsdb-plugin config set --run-in-switch --vrf vrf-name --log-level level --log-type type controller-ip-address vPC-Primary-switch-mgmt-ip-address,vPC-Secondary-switch-mgmt-ip-address vPC-Primary-switch-username,vPC-Secondary-switch-username vPC-Primary-switch-password,vPC-Secondary-switch-password switch-name --switch-description switch-description

Enter the following command on the vPC secondary switch:

switch# guestshell run sudo ovsdb-plugin config set --run-in-switch --vrf vrf-name --log-level level --log-type type controller-ip-address vPC-Secondary-switch-mgmt-ip-address,vPC-Primary-switch-mgmt-ip-address vPC-Secondary-switch-username,vPC-Primary-switch-username vPC-Secondary-switch-password,vPC-Primary-switch-password switch-name --switch-description switch-description

These commands show that, unlike with the standalone configuration, for the vPC configuration you enter two

switch management IP addresses, two switch usernames, and two switch passwords. You must enter these in a

specific order. On the primary vPC switch, you enter the primary management IP address, username, and

password first. On the secondary vPC switch, you enter the secondary management IP address, username, and

password first.

Take a closer look at this long command:

● guestshell run ovsdb-plugin: This entry tells the switch that you are sending the command to the plug-in

running in the guest shell.

● config set: This entry tells the plug-in that you are setting its configuration.

● --run-in-switch: This entry configures the plug-in to run in the switch.

● --vrf vrf-name: This entry is used only when --run-in-switch is set. It configures the plug-in to use the

given VRF name when communicating with the controller. The default VRF name is management.

● --log-level level: This entry sets the logging level. It defaults to info. The recommended setting is debug.

● --log-type type: This entry specifies the type of logging to use. When the type is set to file, the path is

always PLUGIN_ROOT/log/ovsdb-plugin.log. The default setting is file, and that is the recommended

setting. You can also set the type to UDP and define a server.

● controller-ip-address: This entry is the IP address of one of the NSX controllers (not the NSX manager). The controller address is in IP:PORT format. The port defaults to 6632 if none is specified. The port used by NSX is 6640, so it must be specified explicitly: for example, 10.10.10.1:6640. The order for this specification is important. The controller IP address must be entered before the switch IP addresses.

● vPC-Primary-switch-mgmt-ip-address,vPC-Secondary-switch-mgmt-ip-address: These entries are the

two switch addresses in IP:PORT format. The port defaults to 443 if these entries are not included. The real

loopback addresses (not the anycast loopback address) must be used here. For the vPC primary switch,

the order of IP addresses is vPC-primary-loopback, vPC-secondary-loopback. For the vPC secondary

switch, the order of the IP addresses is reversed. For example, you would use 100.0.0.1,100.0.0.2.

● vPC-Primary-switch-username,vPC-Secondary-switch-username vPC-Primary-switch-password,vPC-Secondary-switch-password: These entries specify the usernames and passwords that will be used by the NXDB process to push NX-API changes to the switches. Usually these are the same on both switches and are simply repeated: for example, nxdb-user,nxdb-user NXDBpassword!,NXDBpassword!

● switch-name: This entry specifies the switch name for use by the plug-in, and this attribute is mandatory. It

must come after the username and password. For a vPC setup, the switch name must be identical on both

the vPC primary and secondary switches.

● --switch-description switch-description: This entry specifies a description for this switch, and it is an

optional attribute. You can use any description, including the switch name.

Here is an example of the entire command on the vPC primary switch:

guestshell run sudo ovsdb-plugin config set --run-in-switch --vrf default --log-level debug --log-type file 172.26.36.102:6640 100.0.0.1,100.0.0.2 nxdb-user,nxdb-user NXDBpassword!,NXDBpassword! N9K-VPC-1 --switch-description N9K-VPC1-N9K-VPC2

Here is an example of the entire command on the vPC secondary switch:

guestshell run sudo ovsdb-plugin config set --run-in-switch --vrf default --log-level debug --log-type file 172.26.36.102:6640 100.0.0.2,100.0.0.1 nxdb-user,nxdb-user NXDBpassword!,NXDBpassword! N9K-VPC-2 --switch-description N9K-VPC1-N9K-VPC2

Note: The anycast loopback IP address is not used for this part of the configuration.

Listings 22 and 23 show the full configurations.

Listing 22 Configuring the plug-in on the vPC primary switch

N9K-VPC-1# sh vpc role

vPC Role status

----------------------------------------------------

vPC role : primary

Dual Active Detection Status : 0

vPC system-mac : 00:23:04:ee:be:01

vPC system-priority : 32667

vPC local system-mac : 00:f6:63:80:65:bf

vPC local role-priority : 100

N9K-VPC-1#

N9K-VPC-1# guestshell run sudo ovsdb-plugin config set --run-in-switch --vrf

default --log-level debug --log-type file 172.26.36.102:6640 100.0.0.1,100.0.0.2

nxdb-user,nxdb-user NXDBpassword!,NXDBpassword! N9K-VPC-1 --switch-description

N9K-VPC1-N9K-VPC2

Configuration saved

N9K-VPC-1#

N9K-VPC-1# guestshell run sudo ovsdb-plugin config show

Controllers : #1 addr: 172.26.36.102:6640

VPC : Yes

In switch : Yes

VRF : default

Switches : #1 addr: 100.0.0.1:443

type: LOCAL

user: nxdb-user

name: N9K-VPC-1

description: N9K-VPC1-N9K-VPC2

#2 addr: 100.0.0.2:443

type: REMOTE

user: nxdb-user

name: N9K-VPC-1

description: N9K-VPC1-N9K-VPC2

Log type : file

Log level : debug

Log server : -

TTY log path : -

Max JSON peers: 6

Min heap size : 2048 MB

Max heap size : 2048 MB

Schema : 1.3.0

N9K-VPC-1#

Listing 23 Configuring the plug-in on the vPC secondary switch

N9K-VPC-2# show vpc role

vPC Role status

----------------------------------------------------

vPC role : secondary

Dual Active Detection Status : 0

vPC system-mac : 00:23:04:ee:be:01

vPC system-priority : 32667

vPC local system-mac : 00:62:ec:b3:96:93

vPC local role-priority : 32667

N9K-VPC-2#

N9K-VPC-2# guestshell run sudo ovsdb-plugin config set --run-in-switch --vrf

default --log-level debug --log-type file 172.26.36.102:6640 100.0.0.2,100.0.0.1

nxdb-user,nxdb-user NXDBpassword!,NXDBpassword! N9K-VPC-2 --switch-description

N9K-VPC1-N9K-VPC2

Configuration saved

N9K-VPC-2#

N9K-VPC-2# guestshell run sudo ovsdb-plugin config show

Controllers : #1 addr: 172.26.36.102:6640

VPC : Yes

In switch : Yes

VRF : default

Switches : #1 addr: 100.0.0.2:443

type: LOCAL

user: nxdb-user

name: N9K-VPC-2

description: N9K-VPC1-N9K-VPC2

#2 addr: 100.0.0.1:443

type: REMOTE

user: nxdb-user

name: N9K-VPC-2

description: N9K-VPC1-N9K-VPC2

Log type : file

Log level : debug

Log server : -

TTY log path : -

Max JSON peers: 6

Min heap size : 2048 MB

Max heap size : 2048 MB

Schema : 1.3.0

N9K-VPC-2#

After the OVSDB plug-ins are configured, the next step is to create a certificate that will be used for secure

communication between the OVSDB plug-ins and the NSX controllers. Because the two vPC switches will appear

as a single control plane to the NSX controllers (running in a master-slave relationship locally), you must

synchronize the certificate between the two devices. Then, if a failover event occurs, the secondary vPC switch’s

plug-in can take over seamlessly.

You create the certificate by entering the following command on the vPC secondary switch:

N9K-VPC-2#guestshell run sudo ovsdb-plugin cert bootstrap --receive

This command will create a temporary certificate that will be displayed. Copy this certificate.

On the vPC primary switch, enter this command:

N9K-VPC-1#guestshell run sudo ovsdb-plugin cert bootstrap --send IP-of-vPC-Secondary --vrf vrf-name

Enter the IP-of-vPC-Secondary address as well as the correct VRF instance to reach it.

When prompted by the vPC primary switch, paste the temporary certificate that was provided earlier (after you

entered guestshell run sudo ovsdb-plugin cert bootstrap --receive) on the vPC secondary switch.

After you complete the preceding steps, you can display the certificates by entering the guestshell run sudo

ovsdb-plugin cert show command. The certificates in both vPC switches should match.

Sample configuration steps are shown in Listings 24, 25, and 26.

Listing 24 Creating a certificate for the plug-in on the vPC secondary switch

N9K-VPC-2# guestshell run sudo ovsdb-plugin cert bootstrap --receive

OVSDB plugin keypair saved to '/usr/local/ovsdb/config/ssl/ovsdb_plugin.p12'

Waiting to receive keypair on port 6640 via SSL/TLS...

Provide sender with my CA cert:

-----BEGIN CERTIFICATE----- !<<This is the temporary cert you should copy

MIICkjCCAfugAwIBAgIJAOV/d9zN9tvLMA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV

BAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMRMwEQYDVQQHDApQbGVhc2FudG9u

…CLIP…

mRZr8JYjmHmL1vGfeJ34BjTeCawzkdKSqXptB6DVdsSAEN9LKSl0MZYzxEOm+4xB

nT7wL1LZUMolmftzfZqeexZBmcTIu/HSNlgQTV5NS775SBYiSDA=

-----END CERTIFICATE-----

Listing 25 Creating a certificate for the plug-in on the vPC primary switch

N9K-VPC-1# guestshell run sudo ovsdb-plugin cert bootstrap --send 100.0.0.2 --vrf

default

OVSDB plugin keypair saved to '/usr/local/ovsdb/config/ssl/ovsdb_plugin.p12'

Please paste the receiver's CA cert and press ENTER:

-----BEGIN CERTIFICATE----- !<<This is where the temporary cert should be pasted

MIICkjCCAfugAwIBAgIJAOV/d9zN9tvLMA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV

BAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMRMwEQYDVQQHDApQbGVhc2FudG9u

MRIwEAYDVQQKDAlBbmRyb21lZGExFTATBgNVBAMMDG92c2RiX3BsdWdpbjAeFw0x

NjEyMjYyMjI3NTZaFw0xOTEyMjYyMjI3NTZaMGIxCzAJBgNVBAYTAlVTMRMwEQYD

VQQIDApDYWxpZm9ybmlhMRMwEQYDVQQHDApQbGVhc2FudG9uMRIwEAYDVQQKDAlB

bmRyb21lZGExFTATBgNVBAMMDG92c2RiX3BsdWdpbjCBnzANBgkqhkiG9w0BAQEF

AAOBjQAwgYkCgYEAw5f/VtkvtFySQrjBLAii/aZhoVFmoVBknRMGo6BZlw4fJK//

NcgWKXSaxn6FlrgqjLmWIOQtgjDfKEfvU2CwVE+3Z0FLj8TGFGJjIYYToEwGcyWl

ZbnpfQy1Jy+CnnNEcfp5tFkcxIVhf/66gtjispBVlAbu/F+HyeNMQB3JK+UCAwEA

AaNQME4wHQYDVR0OBBYEFJl67js6GlgV/6p8I1nov88K/YL5MB8GA1UdIwQYMBaA

FJl67js6GlgV/6p8I1nov88K/YL5MAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQEL

BQADgYEAC+qLxAt3BUB5J516aPkU4OWZE4Zan1fpmyWuanca+Tkb+zKSh7bQwGbr

mRZr8JYjmHmL1vGfeJ34BjTeCawzkdKSqXptB6DVdsSAEN9LKSl0MZYzxEOm+4xB

nT7wL1LZUMolmftzfZqeexZBmcTIu/HSNlgQTV5NS775SBYiSDA=

-----END CERTIFICATE-----

Connecting to 100.0.0.2:6640...

Keypair sent to ssl:100.0.0.2

Controller CA cert saved to '/usr/local/ovsdb/config/ssl/controller_ca_cert1.der'

Combined controller CA cert saved to

'/usr/local/ovsdb/config/ssl/controller_ca_cert.der'

Switch #1 CA cert saved to '/usr/local/ovsdb/config/ssl/switch_ca_cert_1.der'

Switch #2 CA cert saved to '/usr/local/ovsdb/config/ssl/switch_ca_cert_2.der'

N9K-VPC-1#

Listing 26 Receiving a certificate for the plug-in on the vPC secondary switch

Keypair received from ssl:100.0.0.1:26021

OVSDB plugin keypair saved to '/usr/local/ovsdb/config/ssl/ovsdb_plugin.p12'

Controller CA cert saved to '/usr/local/ovsdb/config/ssl/controller_ca_cert1.der'

Combined controller CA cert saved to

'/usr/local/ovsdb/config/ssl/controller_ca_cert.der'

Switch #1 CA cert saved to '/usr/local/ovsdb/config/ssl/switch_ca_cert_1.der'

Switch #2 CA cert saved to '/usr/local/ovsdb/config/ssl/switch_ca_cert_2.der'

N9K-VPC-2#

After the two switches have their certificates synchronized, display them to verify that they are identical. The

commands to display the certificates are shown in Listings 27 and 28.

Listing 27 Verifying the certificate on the vPC primary switch

N9K-VPC-1# guestshell run sudo ovsdb-plugin cert show

-----BEGIN CERTIFICATE-----

MIICkjCCAfugAwIBAgIJAOV/d9zN9tvLMA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV

BAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMRMwEQYDVQQHDApQbGVhc2FudG9u

MRIwEAYDVQQKDAlBbmRyb21lZGExFTATBgNVBAMMDG92c2RiX3BsdWdpbjAeFw0x

NjEyMjYyMjI3NTZaFw0xOTEyMjYyMjI3NTZaMGIxCzAJBgNVBAYTAlVTMRMwEQYD

VQQIDApDYWxpZm9ybmlhMRMwEQYDVQQHDApQbGVhc2FudG9uMRIwEAYDVQQKDAlB

bmRyb21lZGExFTATBgNVBAMMDG92c2RiX3BsdWdpbjCBnzANBgkqhkiG9w0BAQEF

AAOBjQAwgYkCgYEAw5f/VtkvtFySQrjBLAii/aZhoVFmoVBknRMGo6BZlw4fJK//

NcgWKXSaxn6FlrgqjLmWIOQtgjDfKEfvU2CwVE+3Z0FLj8TGFGJjIYYToEwGcyWl

ZbnpfQy1Jy+CnnNEcfp5tFkcxIVhf/66gtjispBVlAbu/F+HyeNMQB3JK+UCAwEA

AaNQME4wHQYDVR0OBBYEFJl67js6GlgV/6p8I1nov88K/YL5MB8GA1UdIwQYMBaA

FJl67js6GlgV/6p8I1nov88K/YL5MAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQEL

BQADgYEAC+qLxAt3BUB5J516aPkU4OWZE4Zan1fpmyWuanca+Tkb+zKSh7bQwGbr

mRZr8JYjmHmL1vGfeJ34BjTeCawzkdKSqXptB6DVdsSAEN9LKSl0MZYzxEOm+4xB

nT7wL1LZUMolmftzfZqeexZBmcTIu/HSNlgQTV5NS775SBYiSDA=

-----END CERTIFICATE-----

N9K-VPC-1#

Listing 28 Verifying the certificate on the vPC secondary switch

N9K-VPC-2# guestshell run sudo ovsdb-plugin cert show

-----BEGIN CERTIFICATE-----

MIICkjCCAfugAwIBAgIJAOV/d9zN9tvLMA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV

BAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMRMwEQYDVQQHDApQbGVhc2FudG9u

MRIwEAYDVQQKDAlBbmRyb21lZGExFTATBgNVBAMMDG92c2RiX3BsdWdpbjAeFw0x

NjEyMjYyMjI3NTZaFw0xOTEyMjYyMjI3NTZaMGIxCzAJBgNVBAYTAlVTMRMwEQYD

VQQIDApDYWxpZm9ybmlhMRMwEQYDVQQHDApQbGVhc2FudG9uMRIwEAYDVQQKDAlB

bmRyb21lZGExFTATBgNVBAMMDG92c2RiX3BsdWdpbjCBnzANBgkqhkiG9w0BAQEF

AAOBjQAwgYkCgYEAw5f/VtkvtFySQrjBLAii/aZhoVFmoVBknRMGo6BZlw4fJK//

NcgWKXSaxn6FlrgqjLmWIOQtgjDfKEfvU2CwVE+3Z0FLj8TGFGJjIYYToEwGcyWl

ZbnpfQy1Jy+CnnNEcfp5tFkcxIVhf/66gtjispBVlAbu/F+HyeNMQB3JK+UCAwEA

AaNQME4wHQYDVR0OBBYEFJl67js6GlgV/6p8I1nov88K/YL5MB8GA1UdIwQYMBaA

FJl67js6GlgV/6p8I1nov88K/YL5MAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQEL

BQADgYEAC+qLxAt3BUB5J516aPkU4OWZE4Zan1fpmyWuanca+Tkb+zKSh7bQwGbr

mRZr8JYjmHmL1vGfeJ34BjTeCawzkdKSqXptB6DVdsSAEN9LKSl0MZYzxEOm+4xB

nT7wL1LZUMolmftzfZqeexZBmcTIu/HSNlgQTV5NS775SBYiSDA=

-----END CERTIFICATE-----

N9K-VPC-2#
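Rather than comparing the long Base64 blocks by eye, you can compare a hash of the output on each switch. This assumes a hashing utility such as md5sum is available inside the guest shell, which may vary with the guest-shell version:

[guestshell@guestshell ~]$ sudo ovsdb-plugin cert show | md5sum

Run the same command in the guest shell of both vPC switches; the resulting hashes should be identical.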

Enabling the OVSDB plug-in

Note: The commands discussed here are the same for both standalone and vPC configurations. The plug-in must

be enabled first on the primary vPC switch.

After all the previous configuration steps are complete, it is time to enable the OVSDB plug-in. This step will

configure the plug-in to wait for a connection and configurations from the NSX controllers.

Use this command to enable the OVSDB plug-in:

switch#guestshell run sudo ovsdb-plugin service start

Use this command to verify the plug-in status:

switch#guestshell run sudo ovsdb-plugin service status

Remember that in a vPC switch pair, the plug-in on the vPC primary switch should be enabled first, followed by the plug-in on the secondary switch.

Listings 29 through 33 show sample configuration and status commands.

Listing 29 shows the configuration for a standalone switch.

Listing 29 Enabling the plug-in on a standalone switch

N9K# guestshell run sudo ovsdb-plugin service start

Starting OVSDB plugin...

nohup: appending output to 'nohup.out'

OVSDB plugin started

N9K# guestshell run sudo ovsdb-plugin service status

Status: Running

Connections

Switches:

#1 addr : 172.26.36.76

type : Local

vpc : Disabled

state : Up

Controllers:

#1 addr : 172.26.36.102

state : Starting !<<This is in starting mode since we have not

configured NSX yet

N9K#

Listings 30 through 33 show the configurations for a vPC pair of switches.

Listing 30 shows the configuration on the vPC primary switch.

Listing 30 Enabling the plug-in on the vPC primary switch

N9K-VPC-1# guestshell run sudo ovsdb-plugin service start

Starting OVSDB plugin...

nohup: appending output to 'nohup.out'

OVSDB plugin started

N9K-VPC-1#

Listing 31 shows the configuration on the vPC secondary switch.

Listing 31 Enabling the plug-in on the vPC secondary switch

N9K-VPC-2# guestshell run sudo ovsdb-plugin service start

Starting OVSDB plugin...

nohup: appending output to 'nohup.out'

OVSDB plugin started

N9K-VPC-2#

Listing 32 shows verification on the vPC primary switch.

Listing 32 Verifying the plug-in on the vPC primary switch

N9K-VPC-1# guestshell run sudo ovsdb-plugin service status

Status: Running

Connections

Switches:

#1 addr : 100.0.0.1

type : Local

vpc : Enabled/Master

state : Up

#2 addr : 100.0.0.2

type : Remote

vpc : Enabled/Slave

state : Up

Controllers:

#1 addr : 172.26.36.102

state : Starting !<<This is in starting mode since we have not

configured NSX yet

N9K-VPC-1#

Listing 33 shows verification on the vPC secondary switch.

Listing 33 Verifying the plug-in on the vPC secondary switch

N9K-VPC-2# guestshell run sudo ovsdb-plugin service status

Status: Running

Connections

Switches:

#1 addr : 100.0.0.2

type : Local

vpc : Enabled/Slave

state : Up

#2 addr : 100.0.0.1

type : Remote

vpc : Enabled/Master

state : Up

Controllers:

#1 addr : 172.26.36.102

state : Down !<<This is in down mode since the other switch is

master for now

N9K-VPC-2#

The vPC secondary switch plug-in is in backup, or slave, mode. Therefore, it will show its state as down. The state

will change if the primary plug-in or switch goes down and failover occurs.

Registering the HW-VTEP with VMware NSX

After the OVSDB plug-in is running in the Cisco Nexus 9000 Series Switch, it is time to register the HW-VTEP with

the NSX controllers. The configuration on the NSX manager will be the same for both standalone and vPC setups,

because from the perspective of NSX, the vPC pair appears as a single control plane.

First, a hardware gateway must be registered to NSX. The steps detailed in this section need to be performed only

once for this purpose. The user must configure the HSC (in the case of the Cisco Nexus 9000 Series integration,

this is the plug-in) of their hardware gateway with the NSX controller IP address. Note that there are typically three

redundant NSX controllers. You need to specify only one of them, and the others will be automatically discovered.

This process was completed when the plug-in was configured in the earlier sections.

After the plug-in is configured to point to an NSX controller, you need to collect the certificate that will be used by

the OVSDB client on the NSX controller to connect to the server on the HSC (the plug-in). To collect the certificate,

as mentioned in the earlier sections, you enter the following command:

switch# guestshell run sudo ovsdb-plugin cert show

Note that if a vPC pair is being used, the certificates have already been synchronized, so you can enter the

command on either switch.

From here, the registration of a new hardware gateway in the NSX GUI is relatively straightforward. In vCenter,

navigate to the Networking and Security section and select the Service Definition tab. Then open the Hardware

Devices menu and click the + button, as shown in Figure 14.

For this example, the replication cluster is already configured with the replication service nodes for BUM traffic as

discussed earlier in this document. For more information about configuring the RSN, see the VMware

documentation.

Figure 14. Adding a hardware device in the VMware NSX manager GUI


After you click the + button, a new window appears, where you enter a name and certificate. The name can be

anything that makes sense to the administrator. The certificate is the one retrieved with the guestshell run sudo

ovsdb-plugin cert show command on the Cisco Nexus 9000 Series Switch. There is also an optional Description

field.

Note that BFD is enabled by default, so the Cisco Nexus 9000 Series Switch will establish BFD sessions with the

RSNs. This configuration is critical for protecting against the silent failure of an RSN, and VMware supports only

configurations that run BFD. The process for enabling BFD over VXLAN on the Cisco Nexus 9000 Series was

discussed in an earlier section of this document. As mentioned earlier, the configuration for this is the same for

both standalone and vPC switches. The example in Figure 15 uses a vPC switch.

Figure 15. Adding a certificate for a Cisco Nexus 9000 Series Switch in the VMware NSX manager GUI

After a few seconds (a refresh may be required on the browser), the new HW-VTEP should appear in the

Hardware Devices window with connectivity shown as Up (Figure 16).


Figure 16. Checking the connectivity to the Cisco Nexus 9000 Series Switch in the VMware NSX manager GUI

You can also verify the connectivity in the Cisco Nexus 9000 Series Switch by entering the following command:

switch# guestshell run sudo ovsdb-plugin service status

An example of the output for this command for a vPC primary switch is shown in Listing 34.

Listing 34 Verifying the plug-in connectivity to the NSX controllers

N9K-VPC-1# guestshell run sudo ovsdb-plugin service status

Status: Running

Connections

Switches:

#1 addr : 100.0.0.1

type : Local

vpc : Enabled/Master

state : Up

#2 addr : 100.0.0.2

type : Remote

vpc : Enabled/Slave

state : Up

Controllers:

#1 addr : 172.26.36.103


state : Up

#2 addr : 172.26.36.104

state : Up

#3 addr : 172.26.36.102

state : Up

N9K-VPC-1#

Note that there are three controllers in the Up state. The two additional controllers were pushed down by the first

controller (the one you configured). The same command can be used for a standalone switch.
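If you monitor this state programmatically (for example, from an automation script that collects the command output over SSH), a small parser is enough. The helper below is a hypothetical sketch, not part of the plug-in; it assumes the addr/state layout shown in the listing above.

```python
import re

def parse_plugin_status(output):
    """Map each controller address to its state in the output of
    `guestshell run sudo ovsdb-plugin service status`."""
    controllers = {}
    in_controllers = False
    addr = None
    for line in output.splitlines():
        line = line.strip()
        if line.startswith("Controllers:"):
            in_controllers = True  # only parse the Controllers section
            continue
        if not in_controllers:
            continue
        m = re.match(r"#\d+ addr : (\S+)", line)
        if m:
            addr = m.group(1)
            continue
        m = re.match(r"state : (\S+)", line)
        if m and addr is not None:
            controllers[addr] = m.group(1)
            addr = None
    return controllers

status = """\
Status: Running
Connections
Switches:
#1 addr : 100.0.0.1
type : Local
vpc : Enabled/Master
state : Up
Controllers:
#1 addr : 172.26.36.103
state : Up
#2 addr : 172.26.36.104
state : Up
#3 addr : 172.26.36.102
state : Up
"""
print(parse_plugin_status(status))
# → {'172.26.36.103': 'Up', '172.26.36.104': 'Up', '172.26.36.102': 'Up'}
```

A script built on this can alert whenever any controller state is not Up, catching a failed controller connection before it surfaces as a provisioning problem.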

Binding a logical switch in VMware NSX to a physical switch, physical port, and VLAN

After a hardware gateway has been added to NSX, any existing logical switch can programmatically be mapped to

any physical port or VLAN advertised by the switch. The ports advertised by the Cisco Nexus 9000 Series using

OVSDB are the ones shared in the Assigning VLANs and interfaces to the controllers section of this document.
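Under the hood, a binding of this kind is expressed in the OVSDB hardware_vtep schema: the controller mutates the vlan_bindings column of a Physical_Port row so that a VLAN maps to a Logical_Switch UUID. The sketch below builds the corresponding JSON-RPC transact message; the UUID strings are hypothetical placeholders, and the exact messages NSX emits are internal to the product.

```python
import json

def vlan_binding_transact(port_uuid, vlan, logical_switch_uuid):
    """Build an OVSDB JSON-RPC transact that maps a VLAN on a physical
    port to a logical switch (hardware_vtep schema, Physical_Port table)."""
    return {
        "method": "transact",
        "params": [
            "hardware_vtep",
            {
                "op": "mutate",
                "table": "Physical_Port",
                "where": [["_uuid", "==", ["uuid", port_uuid]]],
                "mutations": [
                    # insert a VLAN -> Logical_Switch entry into the map column
                    ["vlan_bindings", "insert",
                     ["map", [[vlan, ["uuid", logical_switch_uuid]]]]],
                ],
            },
        ],
        "id": 0,
    }

# Hypothetical UUIDs for illustration only
msg = vlan_binding_transact("port-uuid-placeholder", 600, "ls-uuid-placeholder")
print(json.dumps(msg, indent=2))
```

The same schema is what makes the integration vendor-neutral: any hardware gateway that implements hardware_vtep accepts the same kind of binding writes.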

This section illustrates the mapping of a logical switch to a particular port, using the vCenter GUI.

First, select a logical switch on the Network and Security > Logical Switches tab. Then open the Actions menu and

choose Manage Hardware Bindings.

Note that any logical switch that will be connected to a hardware VTEP cannot be connected to a DLR at

the same time. This is an NSX requirement, and it applies to every vendor’s HW-VTEP integration.

An example is shown in Figure 17.

Figure 17. Connecting a logical switch to an HW-VETP using the VMware NSX manager GUI

A new window will appear with all the hardware gateways that are configured for this set of NSX controllers.


Click the triangle icon to the left of the switch name to open the dialog box for that hardware gateway. You can then

click the + icon.

The next step is to click the Select button in the Port column, as shown in Figure 18.

Figure 18. Selecting the port for a binding using the VMware NSX manager GUI

After you click Select, all the ports that have been shared will appear. For orphan ports in a vPC pair, the serial number of the physical switch is appended to the interface name. This naming allows the plug-in to distinguish Ethernet1/10 on the primary vPC switch from Ethernet1/10 on the secondary vPC switch. vPC ports appear as single interfaces because they are shared. This naming scheme does not apply in standalone mode.

An example is shown in Figure 19.

Figure 19. Displaying the shared ports in the VMware NSX manager GUI


Select the interface on this switch that you want to map to this NSX logical switch. After you have selected an

interface (multiple interfaces can be selected, one at a time), enter the VLAN that will be mapped to this logical

switch on this interface. Again, this is one of the VLANs that was shared in the Assigning VLANs and interfaces to the

controllers section of this document. After you have entered the VLAN and clicked OK, the binding is complete

(Figure 20).

Figure 20. Completing the HW-VTEP binding using the VMware NSX manager GUI

You can verify the binding by clicking the Hardware Port Binding number that now appears on the screen, as

shown in Figure 21.

Figure 21. Verifying the HW-VTEP binding using the VMware NSX manager GUI

You can also verify that the configuration was correctly pushed to the switch by entering the following command:

switch# show run controller


This is a hidden command, so you cannot use the Tab key to autocomplete it; you must type the command in full.

An example of the output for this command is shown in Listing 35.

Listing 35 Verifying the HW-VTEP binding on the Cisco Nexus 9000 Series Switch

N9K-VPC-1# show run controller

!Command: show running-config controller

!Time: Fri Dec 30 19:31:06 2016

version 7.0(3)I4(3)

feature vn-segment-vlan-based

feature nv overlay

vlan 600-602

vlan 600

vn-segment 5001

interface port-channel10

switchport

switchport mode trunk

switchport trunk allowed vlan 600

!controller type l2-vxlan identifier 1

interface port-channel100

switchport

switchport mode trunk

switchport trunk allowed vlan 600

!controller type l2-vxlan identifier 1

interface nve1

source-interface loopback0

auto-remap-replication-servers

host-reachability protocol controller 1

source-interface hold-down-time 30

config-source controller

member vni 5001

ingress-replication protocol static

peer-ip 10.10.2.11 !<<The RSN the switch randomly picked for BUM traffic for this VNI

bfd-neighbor 10.10.2.10 10.10.2.10 0023.2000.0003 !<<The RSNs pushed down by NSX

bfd-neighbor 10.10.2.11 10.10.2.11 0023.2000.0002

bfd-neighbor 10.10.2.12 10.10.2.12 0023.2000.0001


controller type l2-vxlan identifier 1

controller description Controller-for-OVSDB

assign vlan 600-602 dedicated

assign interface port-channel10, port-channel100 shared

assign interface Ethernet1/10-11 shared

…CLIP…

N9K-VPC-1#

The following excerpts from the output show the information that was pushed down by the NSX controller through OVSDB.

vlan 600

vn-segment 5001

This information shows that you configured VLAN 600 to be connected to VNI 5001 for this switch.

!controller type l2-vxlan identifier 1

The information under the interface shows that you have shared that interface with the controller.

member vni 5001

ingress-replication protocol static

peer-ip 10.10.2.11

Under the NVE interface, you see that the NVE interface is now part of VNI 5001, and that for this VNI you are

using ingress BUM replication. The RSN that was chosen for BUM replication for this VNI is 10.10.2.11 (this is one

of the RSNs that was configured in NSX).

bfd-neighbor 10.10.2.10 10.10.2.10 0023.2000.0003

bfd-neighbor 10.10.2.11 10.10.2.11 0023.2000.0002

bfd-neighbor 10.10.2.12 10.10.2.12 0023.2000.0001

Under the NVE interface configuration, you also see three BFD neighbors that were pushed down using OVSDB by

the NSX controllers. These are the three RSNs that were configured in the NSX manager for the example

presented here (the number changes depending on the number of RSNs configured). The Cisco Nexus 9000

Series Switch will send BFD hello messages to these devices. As discussed earlier, BFD is enabled by default in

the NSX hardware VTEP configuration. This setting is critical to protect against a silent failure of an RSN, and

VMware supports only configurations that run BFD.
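When scripting against this output, the VLAN-to-VNI mappings pushed by the controller can be pulled out of show run controller with a short parser. The helper below is a hypothetical sketch that assumes the vlan / vn-segment layout shown in Listing 35.

```python
import re

def vlan_to_vni(config_text):
    """Extract VLAN -> VNI (vn-segment) mappings from `show run controller`
    output, where `vlan 600` is followed by an indented `vn-segment 5001`."""
    mapping = {}
    vlan = None
    for line in config_text.splitlines():
        m = re.match(r"\s*vlan (\d+)$", line)  # skip ranges like "vlan 600-602"
        if m:
            vlan = int(m.group(1))
            continue
        m = re.match(r"\s*vn-segment (\d+)", line)
        if m and vlan is not None:
            mapping[vlan] = int(m.group(1))
            vlan = None
    return mapping

sample = "vlan 600-602\nvlan 600\n  vn-segment 5001\n"
print(vlan_to_vni(sample))  # → {600: 5001}
```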

Verification and troubleshooting

After connectivity is configured with NSX, including the mapping of a logical switch to a physical port and VLAN,

you can use several commands on the Cisco Nexus 9000 Series Switch to verify that everything works as

expected.

Assuming that you have completed all the tasks on the verification checklist provided earlier in this document, and

that everything is working correctly, you can verify that the configuration for the controller is correct by entering the

following command:

switch# show run controller

Note that for vPC pairs, the shared vPC interfaces must match and the vPC peer link must be shared with the

controller.


To see the peer NSX hosts, enter the following command:

switch# show nve peers

Sample output for the command is shown in Listing 36.

Listing 36 Example of output for show nve peers command

N9K# show nve peers

Interface Peer-IP State LearnType Uptime Router-Mac

--------- --------------- ----- --------- --------- ----------------

nve1 10.10.2.10 Up CP 00:32:57 n/a

nve1 10.10.2.11 Up CP 00:32:57 n/a

nve1 10.10.2.12 Up CP 00:32:57 n/a

N9K#

To see the RSNs configured in NSX and their state, enter the following command:

switch# show nve replication-servers

Sample output is shown in Listing 37.

Listing 37 Example of output for show nve replication-servers command

N9K# show nve replication-servers

Interface Replication Servers State Ready

--------- ------------------- ----- -----

nve1 10.10.2.10 Up Yes

10.10.2.12 Up Yes

10.10.2.11 Up Yes

N9K#

To see the BFD state of the RSNs configured for BUM traffic replication, enter the following command:

switch# show bfd neighbors

Listing 38 shows sample output.

Listing 38 Example of output for show bfd neighbors command

N9K-VPC-1# show bfd neighbors

OurAddr NeighAddr LD/RD RH/RS Holdown(mult) State Int Vrf

100.0.0.10 10.10.2.12 1090519041/1045571973 Up 819(3) Up nve1 default

100.0.0.10 10.10.2.11 1090519042/226687077 Up 871(3) Up nve1 default

100.0.0.10 10.10.2.10 1090519043/1950270581 Up 868(3) Up nve1 default

N9K-VPC-1#

Note that for vPC switches, the OurAddr field uses the anycast loopback address, and the vPC secondary switch

shows the BFD neighbors as down. This information is correct because only the active plug-in switch (usually the

vPC primary switch) will send BFD hello messages.

Note that the VLAN tied to vn-segment 0 for BFD must be allowed on the vPC peer link.
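Because a silent RSN failure is exactly what BFD is meant to catch, this output is worth scanning automatically. The helper below is a hypothetical sketch that assumes the column layout of Listing 38 and flags any RSN whose BFD state is not Up.

```python
import re

def down_bfd_neighbors(output):
    """Return neighbor addresses whose BFD state is not Up in
    `show bfd neighbors` output (column order assumed from Listing 38)."""
    down = []
    for line in output.splitlines():
        # OurAddr NeighAddr LD/RD RH/RS Holdown(mult) State Int ...
        m = re.match(
            r"\s*\S+\s+(\S+)\s+\d+/\d+\s+\S+\s+\S+\s+(\S+)\s+nve1", line)
        if m and m.group(2) != "Up":
            down.append(m.group(1))
    return down

sample = (
    "OurAddr    NeighAddr   LD/RD                 RH/RS  Holdown(mult) State Int   Vrf\n"
    "100.0.0.10 10.10.2.12  1090519041/1045571973 Up     819(3)        Up    nve1  default\n"
    "100.0.0.10 10.10.2.11  1090519042/226687077  Up     871(3)        Down  nve1  default\n"
)
print(down_bfd_neighbors(sample))  # → ['10.10.2.11']
```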

NXDB supports two scopes:

● Base scope: You can enter configuration and show commands in this scope to configure the switch and

view the configurations. This scope is the default scope and the normal operating mode of the switch. When

you enter show commands in this scope, you will see configurations that are owned by the switch.

● External controller scope: You can enter show commands in this scope to see the configurations that have

been pushed down from the external controller. You cannot enter configuration commands in this scope.

When you enter show commands in this scope, you will see configurations that are owned by the external

controller.

You can change the scope to view configurations that have been pushed down from the external controller by

entering the following command:

switch# switch-scope controller l2-vxlan 1

switch%%ctrlr-1#

To switch back to the base scope, enter the following command:

switch%%ctrlr-1# end

When you use the external controller scope, enter the following command to see the controller configuration:

switch%%ctrlr-1# show run

An example of how to change the scope is shown in Listing 39.

Listing 39 Changing to the external controller scope

N9K# switch-scope controller l2-vxlan 1

N9K%%ctrlr-1#

N9K%%ctrlr-1#

N9K%%ctrlr-1# end

N9K#

Limitations of VMware NSX OVSDB integration with HW-VTEPs

The following list summarizes the known limitations for VMware NSX OVSDB integration with hardware VTEPs as

of the writing of this document. These limitations are from NSX and exist for all hardware VTEP vendors. They

must be taken into account when designing hardware VTEP integration with NSX.

● Currently, this integration works only for Layer 2.

● Security, ACLs, and QoS are not supported for hardware VTEPs.


● BUM traffic is replicated in software by RSNs within a vSphere and NSX cluster (this is not an OVSDB limitation, but an NSX-specific implementation).

● When this feature is enabled, a DLR cannot be used for the logical switch in NSX.

Configuring the Cisco Nexus 9000 Series Switch as the default gateway for the VNI and VLAN

As discussed earlier, the cloud-scale ASIC–based Cisco Nexus 9000 EX platform switches can be used as both

the HW-VTEP and the default gateway for a particular VNI and VLAN combination at the same time.

Table 5 lists the minimum hardware requirements, and Table 6 lists the minimum software requirements for the

gateway feature.

Table 5. Supported hardware for default gateway function

Supported Cisco Nexus 9300 platform hardware for default gateway integration Part number

Cisco Nexus 93180YC-EX Switch N9K-C93180YC-EX

Cisco Nexus 93108TC-EX Switch N9K-C93108TC-EX

Cisco Nexus 93180LC-EX Switch N9K-C93180LC-EX

Table 6. Supported software release for default gateway function

Minimum supported Cisco NX-OS Software release Release 7.0(3)I7(1) and later

Switches that meet the minimum requirements for NSX OVSDB integration but do not meet the requirements for

the gateway feature cannot be used to run both items at the same time.

Configuring a redundant default gateway on two vPC switches acting as HW-VTEPs using HSRP

You should create SVIs after the VLANs are assigned to the controller. You do not need to map the logical switch

and VNI to the VLAN in the NSX manager before the SVI for the VLAN is created.

To assign the VLAN to the controller, see the example in Listing 40.

Listing 40 Assigning VLANs to the controller

N9K# config t

N9K(config)# controller type l2-vxlan identifier 1

N9K(config)# assign vlan 600-602 dedicated !<<VLANs assigned to the controller

N9K(config)# end

N9K#

When configuring a default gateway on a pair of switches running vPC for redundant HW-VTEP connectivity, you

can take advantage of HSRP. HSRP is a first-hop redundancy protocol (FHRP) that allows transparent failover of the first-hop IP router.

For more information about HSRP and its settings and how to configure them, see the following link:

https://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus9000/sw/6-x/unicast/configuration/guide/l3_cli_nxos/l3_hsrp.html

For this example, a basic HSRP configuration in the two vPC switches is used to provide default gateway

redundancy.


To enable the HSRP feature, use the following command:

switch# feature hsrp

When you configure HSRP on a network segment, you provide a virtual MAC address and a virtual IP address for

the HSRP group. The virtual MAC address will be advertised to the NSX controllers using OVSDB. You configure

the same virtual address on each HSRP-enabled interface in the group. On each interface, you also configure a

unique IP address that acts as the real address.

To do this, you first create the SVI and give it a unique IP address for each switch. Then you configure a matching

unique HSRP group number and virtual IP address (IPv4 or IPv6 or both) under the SVI in both switches. Listings

41 and 42 show sample configurations for IPv4.

To create an SVI, first enable the feature by using the following command:

switch# feature interface-vlan

Note: A unique HSRP group number is strongly recommended for each SVI because the group number is used to derive the MAC address for the virtual IP address. Using a unique group number for each HSRP group creates a unique MAC address for each virtual IP address. (Group numbers 0 to 4095 are supported with HSRP version 2.)
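The derivation is mechanical: with HSRP version 2 and IPv4, the virtual MAC address comes from the well-known range 0000.0C9F.F000 through 0000.0C9F.FFFF, with the low 12 bits carrying the group number. A minimal sketch:

```python
def hsrp_v2_vmac(group):
    """Derive the HSRPv2 IPv4 virtual MAC (0000.0c9f.fxxx) from the
    group number; the group occupies the low 12 bits of the address."""
    if not 0 <= group <= 4095:
        raise ValueError("HSRPv2 group must be 0-4095")
    mac = 0x00000C9FF000 + group
    h = f"{mac:012x}"
    return ".".join(h[i:i + 4] for i in (0, 4, 8))

print(hsrp_v2_vmac(1))     # → 0000.0c9f.f001
print(hsrp_v2_vmac(4095))  # → 0000.0c9f.ffff
```

This is why two SVIs that reuse the same group number would end up advertising the same virtual MAC to the NSX controllers.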

Listing 41 Configuring HSRP in vPC switch 1

N9K-VPC-1# conf t

N9K-VPC-1 (config) # feature hsrp !<<Enable the HSRP feature

N9K-VPC-1 (config) # feature interface-vlan

N9K-VPC-1 (config) # interface vlan 600 !<< Create the SVI for VLAN 600

N9K-VPC-1 (config-if) # no shut

N9K-VPC-1 (config-if) # ip address 10.10.10.2/24 !<<Assign a unique IP address to the SVI

N9K-VPC-1 (config-if) # hsrp version 2 !<<Set the HSRP version

N9K-VPC-1 (config-if) # hsrp 1 !<<Enter a unique HSRP group configuration

N9K-VPC-1 (config-if-hsrp) # ip 10.10.10.1 !<<Enter the virtual IP to be used as the subnet’s gateway

N9K-VPC-1 (config-if-hsrp) # end

N9K-VPC-1#

Listing 42 Configuring HSRP in vPC switch 2

N9K-VPC-2# conf t

N9K-VPC-2 (config) # feature hsrp

N9K-VPC-2 (config) # feature interface-vlan

N9K-VPC-2 (config) # interface vlan 600

N9K-VPC-2 (config-if) # no shut

N9K-VPC-2 (config-if) # ip address 10.10.10.3/24 !<<Assign a unique IP address to the SVI

N9K-VPC-2 (config-if) # hsrp version 2

N9K-VPC-2 (config-if) # hsrp 1

N9K-VPC-2 (config-if-hsrp) # ip 10.10.10.1

N9K-VPC-2 (config-if-hsrp) # end

N9K-VPC-2#


In the preceding examples, the IP address 10.10.10.1 is used as the default gateway for VLAN 600 and whichever

logical switch and VNI is mapped to it in the NSX manager. As mentioned earlier, both IPv4 and IPv6 addressing

are supported for this feature.

A separate SVI and HSRP configuration is created for each VNI and VLAN pair for which the switches will be the

default gateway. If you want to use the vPC pair of switches as the default gateway for a logical switch and VNI,

but they do not have a physical workload attached to the vPC to be mapped to that VNI, you can create a spare

vPC and VLAN and assign them to the controller. The vPC must be set up as a trunk on both switches using the

switchport mode trunk command, but no physical interfaces need to be assigned to it. This vPC and VLAN can

then be connected to the logical switch through the hardware binding configuration in the NSX manager. This

association allows the vPC pair of switches to be a redundant external default gateway for the VNI. You can

connect the same spare vPC to several VNIs as long as you use a different VLAN for each VNI.
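The constraint in the last sentence (a distinct VLAN for each VNI on the same spare vPC) is easy to sanity-check before creating the hardware bindings in the NSX manager. The helper below is a hypothetical sketch that models each binding as a (port, VLAN, VNI) tuple.

```python
def validate_bindings(bindings):
    """Check that each (port, VLAN) pair binds at most one VNI: the same
    spare vPC may serve several VNIs only with a different VLAN for each."""
    seen = {}
    for port, vlan, vni in bindings:
        key = (port, vlan)
        if key in seen and seen[key] != vni:
            raise ValueError(
                f"{port} VLAN {vlan} already bound to VNI {seen[key]}")
        seen[key] = vni

# Same spare vPC, distinct VLANs per VNI: valid
validate_bindings([("po10", 600, 5001), ("po10", 601, 5002)])
```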

Conclusion

Cisco Nexus 9000 Series Switches are an excellent platform to use as HW-VTEPs with VMware NSX.

In some NSX deployments, NSX logical switches must be extended into the physical environment. The OVSDB

protocol is used to accomplish this extension. The Cisco Nexus 9000 Series Switches are certified to work in

concert with NSX in both vPC and non-vPC modes to allow this integration to occur.

OVSDB integration between NSX and the Cisco Nexus 9000 Series Switches is supported by both VMware Global

Support Services (GSS) and Cisco Technical Assistance Center (TAC).

Appendix A: Upgrading the Cisco NX-OS image and the OVSDB plug-in for vPC

This appendix provides an example of the recommended upgrade sequence for a vPC setup. In this example, the

switches in the vPC are S1 (primary) and S2 (secondary). The OVSDB plug-in on the S1 switch is A1, and the

OVSDB plug-in on the S2 switch is A2, and the controller cluster is C.

This example uses the following initial versions and new versions of the components that will be upgraded.

● Initial versions

◦ The controller is running Release 6.2.0.

◦ The OVSDB plug-in is Release 1.1.0.

◦ The switches are running Cisco NX-OS Release 7.0(3)I4(0).

● New versions

◦ The controller is running Release 6.3.0.

◦ The OVSDB plug-in is Release 2.2.0.0.

◦ The switches are running Cisco NX-OS Release 7.0(3)I7(0).

The tasks for upgrading the components must be completed in this order:

1. Check prerequisites.

2. Upgrade the OVSDB plug-in on the vPC secondary switch.

3. Upgrade the OVSDB plug-in on the vPC primary switch.

4. Upgrade the vPC secondary switch.

5. Upgrade the vPC primary switch.


6. Configure NXDB for Cisco Nexus 9000 Series Switches and communicate with NSX controllers (for Layer 2

VXLANs).

Check prerequisites

Start by checking that you have completed the prerequisites.

1. On switch S1, use the guestshell run sudo ovsdb-plugin service status --more command to verify that

OVSDB plug-in A1 is connected to controller C and switches S1 and S2.

2. On switch S2, use the guestshell run sudo ovsdb-plugin service status --more command to verify that

OVSDB plug-in A2 is connected to switches S1 and S2 and that the connection to C is down.

3. Make sure that the new OVSDB plug-in RPM and JRE8 RPM images are copied to the /bootflash directory on both

vPC switches.

4. Make sure that the new switch image and the new electronically programmable logic device (EPLD) image are

also copied to both vPC switches.

Upgrade the OVSDB plug-in on the vPC secondary switch

Now upgrade the plug-in on the vPC secondary switch.

1. Access the guest shell prompt.

run guestshell

2. Stop the OVSDB plug-in.

sudo ovsdb-plugin service stop

3. Display the currently installed OVSDB plug-in.

sudo rpm -qa | grep ovsdb

4. Remove the OVSDB plug-in.

sudo rpm -e ovsdb-plugin-filename

5. Display the currently installed JRE.

sudo rpm -qa | grep jre

6. Remove JRE7.

sudo rpm -e jre-1.7.0_80-fcs.x86_64

7. Install the new JRE8.

sudo rpm -i /bootflash/jre-8u112-linux-x64.rpm

8. Install OVSDB plug-in version 2.2.0.0.

sudo rpm -i /bootflash/ovsdb-plugin-2.2.0.10.rpm

9. Make sure the OVSDB plug-in version is the new version.

sudo ovsdb-plugin service version

10. Ping the controller IP addresses and switch IP addresses to make sure the connectivity is good.

11. Reconfigure the OVSDB plug-in using the appropriate config set command.

sudo /usr/bin/ovsdb-plugin config set --run-in-switch --vrf management --log-level debug --log-type file 172.31.148.197:6640 172.31.145.141,172.31.145.144 admin,admin insieme,insieme ovsdb-plugin --switch-description ovsdb-plugin

Note: If no changes are made to the controller cluster, the previous certificates should still be valid and a certificate reset is not required. If the controller has new credentials or is a new installation, perform the certificate bootstrap and certificate reset steps.

12. Start the OVSDB plug-in.

sudo ovsdb-plugin service start

13. Verify that the OVSDB plug-in connects to switches S1 and S2.

sudo ovsdb-plugin service status --more

After the plug-in is running, upgrade the plug-in on the vPC primary switch. When you stop the plug-in on the primary switch, the secondary (backup) plug-in will take over.

Upgrade the OVSDB plug-in on the vPC primary switch

The steps for upgrading the vPC primary switch are the same as those for upgrading the secondary switch.

1. Access the guest shell prompt.

run guestshell

2. Stop the OVSDB plug-in.

sudo ovsdb-plugin service stop

3. Display the currently installed OVSDB plug-in.

sudo rpm -qa | grep ovsdb

4. Remove the OVSDB plug-in.

sudo rpm -e ovsdb-plugin-filename

5. Display the currently installed JRE.

sudo rpm -qa | grep jre

6. Remove JRE7.

sudo rpm -e jre-1.7.0_80-fcs.x86_64

7. Install the new JRE8.

sudo rpm -i /bootflash/jre-8u112-linux-x64.rpm

8. Install OVSDB plug-in version 2.2.0.0.

sudo rpm -i /bootflash/ovsdb-plugin-2.2.0.10.rpm

9. Make sure the OVSDB plug-in version is the new version.

sudo ovsdb-plugin service version

10. Ping the controller IP addresses and switch IP addresses to make sure the connectivity is good.

11. Reconfigure the OVSDB plug-in using the appropriate config set command.

12. Start the OVSDB plug-in.


sudo ovsdb-plugin service start

13. Verify that the OVSDB plug-in connects to switches S1 and S2.

sudo ovsdb-plugin service status --more

Now you are ready to perform the NX-OS upgrade.

Upgrade the Cisco NX-OS image

To upgrade the NX-OS image, use the following command:

install all nxos bootflash:new-switch-image.bin

Upgrade the configured vPC primary switch first.

This step results in a vPC failover. Switch S2 becomes the new primary switch, and after the reload, switch S1

becomes the new secondary switch.

After the upgrade, make sure that the switch goes through a reboot cycle and loads the new image and that all

controller configurations are downloaded to the switch.

show run controller

Access the guest shell prompt on the S1 switch.

run guestshell

Verify that the OVSDB plug-in is running after the upgrade.

sudo ovsdb-plugin service status --more

After the plug-in is running, upgrade the configured vPC secondary switch.

What to do next

If necessary, perform any controller-side upgrades. For information, see the documentation from VMware.

Appendix B: Best-practice configurations for vPCs

Listing 43 shows a sample configuration for vPCs used for OVSDB integration.

Listing 43 vPC configuration best practices

#On primary:

vpc domain 1

peer-switch

role priority 100

peer-keepalive destination 172.26.36.75 source 172.26.36.74 vrf default

delay peer-link 60

peer-gateway

ipv6 nd synchronize

ip arp synchronize

interface port-channel100


switchport mode trunk

spanning-tree port type network

vpc peer-link

#On secondary:

vpc domain 1

peer-switch

peer-keepalive destination 172.26.36.74 source 172.26.36.75 vrf default

delay peer-link 60

peer-gateway

ipv6 nd synchronize

ip arp synchronize

interface port-channel100

switchport mode trunk

spanning-tree port type network

vpc peer-link

Printed in USA C11-740091-01 06/18