Neutron Networking / OpenStack Kilo

Lukáš Korous


Outline

0. OpenStack review, training environment setup

a. User & Administrator dashboard use

b. Training environment setup & walkthrough

1. Related networking technologies

a. Linux bridges and namespaces (kernel settings – sysctl)

b. Network segmentation, encapsulation

i. VLAN

ii. GRE

iii. VXLAN

c. SDN Overview

2. Open vSwitch

a. Description, comparison with Linux bridge

b. Configuration

3. Neutron concept and relationship to other OpenStack components

a. Overview

b. Relationship to other components & projects

4. OpenStack commands for Neutron checking

5. Neutron configuration files

a. Controller

i. neutron.conf

ii. nova.conf

iii. plugins/*/*.ini

iv. dnsmasq-neutron.conf

b. Nodes

i. l3_agent.ini

ii. dhcp_agent.ini

iii. metadata_agent.ini

iv. plugins/*/*.ini

6. OpenStack network architecture

a. Public network

b. External (flat) network

c. Management network (SSH network)

d. Tunnel (internal) network

7. Network traffic schemes

a. VM <-> VM

b. VM -> Outside world

c. Outside world -> VM (Floating IP)

d. Outside world -> VM (directly on Public network)

8. Network traffic inspection

a. Host level, guest level


OpenStack review, training environment setup

OpenStack user dashboard

Testing dashboard address

http://test.ultimum-cloud.com/

Login

Please contact Ultimum Technologies at [email protected] for access to the testing environment.

Keypairs for SSH

In the left menu, select "Access & Security"; after it opens in the right panel, select the "Keypairs" tab.

There are two ways to add an SSH key: either create a new one or import an existing one.

Create a key for SSH

In the top buttons row (see screenshot) select "Create Keypair". In the dialog, insert a name, e.g. "myFirstKeypair".


Confirm by selecting "Create Keypair". The keypair is created and appears in the list.

Import a key for SSH

In the top buttons row (see screenshot) select "Import Keypair".


In the dialog, fill in a name, e.g. "myImportedKeypair", and copy your public key to the "Public Key" field. If you do not know where to look for a public key, it is usually in your home folder (typically "/home/'user'" on Linux, "C:\Users\'user'" on Windows), in the subdirectory ".ssh" (caution, it may be hidden), in the file id_rsa.pub (or id_dsa.pub).

The whole key, i.e. the entire content of the file id_rsa.pub, should then be copied into this field.

Confirm by selecting "Import Keypair". The keypair is created and appears in the list.

Security Groups

In the left menu, select "Access & Security", which opens in the right panel. Select the "Security Groups" tab.

In the default settings, there is one security group created, with the name "default":


If you want to check the default security group rules, click on it and the detail will open:


For the purpose of this tutorial, though, we will create a new security group that will later allow us to connect to our instances using SSH. On the list of security groups, press the button "Create Security Group" and create a group for SSH connections by filling in the information in the creation dialog:

The security group appears in the list:


Adjustment of the security group rules

For adjustments of the SSH security group rules, select "Edit Rules":


Then select "Add Rule" and add the pre-defined SSH rule (or you may fill in the custom rule dialog and add it manually). Note that the default CIDR is 0.0.0.0/0, which stands for any IP address.

The rule is created:


Set up networking

Now we have to set up networking in order to plug the instance we are about to create into a proper network.

Navigate to "Network Topology" in the left menu. The basic setup looks like the following figure.


Creating a private network for your servers

Select "Create Network" and network creation dialog appears.


Creating a subnet

Now we have to create a subnet in the network being created. Navigate to the second tab of the dialog - Subnet.

A subnet represents a logical set of connected devices, addressed in a common address space.

After you confirm, the network is created and it appears in the topology diagram.



Creating a router

Now we have to create a router in order to be able to connect the new network to the outside world.

Select "Create Router" and name your new router. The router is created and we can see it in the router list:


The router also appears in the Network Topology schema:


If you navigate to the router's detail, it opens and looks similar to the following figure.

Now we need to create an interface: select "Add Interface" and add an interface to the private network.

When you confirm, the interface is created and we can see it on the router's detail:


Adding a gateway to a router

Now we have to connect to the outside world using a gateway. For this, open the router list from the left menu ("Routers") and select "Set Gateway" next to the appropriate router. The following dialog appears:


Select the public network there and confirm.

Now the network topology reflects that the router is connected to the outside world:

First run of a cloud server

Let us now go through running a cloud server from a collection of public images (grouped by operating system) and connecting to it remotely. In the left menu, choose "Images & Snapshots". In the right panel, a list of all available public images appears.

List of available ready-to-use images

Select "Images & Snapshots" in the left menu. In the right panel, the list of publicly available images will

appear.


Instantiation of a public server image

Next to the selected image (for the purpose of this manual we will select "Ubuntu 13.04 cloudimg x86_64"), select "Launch".

A dialog window appears, where on the "Details" tab we fill in the instance name, e.g. "myFirstInstance", and select the type of virtual hardware (flavor).


Before we confirm by pressing "Launch", we need to check the security settings on the "Access & Security" tab:


Here we should add the security group for SSH connections that we created earlier, as well as the keypair we created or imported earlier.

We also have to address the networking. Switch to the "Networking" tab.

Here you have to assign the private network to the instance being launched.

Now we confirm by pressing "Launch". We are automatically redirected to the instance overview (available from the left menu by selecting "Instances"), where we can see the newly created instance.


We can also see the newly launched instance in the "Network Topology" schema:


Public IP address assignment

For a successful connection to the newly created instance, it is important to add a public IP address. On the instance overview, we select "More" next to the appropriate instance and, in the context menu, select "Associate Public IP":

A dialog appears:


We need to select the "+" button next to the IP addresses drop-down list to allocate a new IP address. The allocation dialog appears.


Select the public pool and confirm. The address is allocated and we can now select it in the previous assignment dialog.

After confirmation, we can select "Access & Security" in the left menu and then "Public IPs" in the right window to see the list of allocated or assigned IP addresses:


SSH connection to your newly created instance

Now we are able to connect to this new instance.

Username: debian

Password: debian / SSH key

Here is what a terminal looks like after a successful login.


Training environment setup & walkthrough

Networks (details of OpenStack networks will follow in a later section)

Public (not part of the training environment)
  o physical network in the datacenter

Openstack-external-flat
  o external (public) network from the testing environment's viewpoint

Openstack-ssh
  o network dedicated to SSH; might be merged with the management network, except for deployment

Openstack-management
  o management network; access to controllers and nodes

Openstack-tunnel
  o tunneled communication between nodes (VXLAN tunneling is used in the training environment)

Machines

Name         Public IP address   Purpose                  OS
Controller1  XXX.YYY.ZZZ.239     Controller               debian
Compute1     XXX.YYY.ZZZ.236     Compute & Network Node   debian
Compute2     XXX.YYY.ZZZ.237     Compute & Network Node   debian
Client1      XXX.YYY.ZZZ.238     Client machine           debian

Services

List all installed OpenStack services on the controller

debian@controller1:~ $ source admin-openrc.sh

debian@controller1:~ $ openstack service list

+----------------------------------+----------+----------------+

| ID | Name | Type |

+----------------------------------+----------+----------------+

| 01dd76e265c34c98b30a80a95f4f744a | heat | orchestration |

| 0e313bbb8cc44d66b4691a65152450d3 | neutron | network |

| 41ed6e5e1ad84b5783ed52840b11bf2b | keystone | identity |

| a8761537893e490799af9cccd114df27 | cinderv2 | volumev2 |

| aa5dc322091549028b7c1b755a061dd1 | glance | image |

| c31f2edc4f14411d9efc7f3b6f9db5a6 | nova | compute |

| d5c2a54ee66648849607a8feea028662 | heat-cfn | cloudformation |

| f4cb7d150eb04920a9cfab8f24e29062 | cinder | volume |

+----------------------------------+----------+----------------+

debian@controller1:~ $


View all services in the dashboard

Admin -> System -> System Information

-> Services

-> Compute Services

-> Network Agents
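The same agent overview is also available from the CLI; a minimal sketch, assuming the admin credentials are sourced from admin-openrc.sh as above:

debian@controller1:~ $ neutron agent-list   # lists L3, DHCP, metadata and switching agents with their alive state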


Related networking technologies

Linux bridges

A piece of software used to unite two or more network segments. A bridge behaves like a virtual network switch, working transparently (the other machines do not need to know or care about its existence). Any real devices (e.g. eth0) and virtual devices (e.g. tap0) can be connected to it.

Layer 2 device.

Part of the Linux kernel since 2.2 (stable as of 2.6).

$ brctl addbr mybridge          # create the bridge
$ brctl addif mybridge eth0     # attach a physical interface
$ brctl addif mybridge eth1     # attach another interface
$ ifconfig mybridge up          # bring the bridge up
$ brctl show                    # list bridges and their ports
$ brctl showmacs mybridge       # show the learned MAC table
$ brctl delbr mybridge          # delete the bridge (bring it down first)

Network namespaces

Isolated containers which can hold a network configuration and are not seen from outside of the namespace.

Used to encapsulate specific network functionality or provide a network service in isolation, as well as to simply help organize a complicated network setup.

Used for many purposes and heavily used in OpenStack networking.

$ ip netns add ns1    # create namespace ns1
$ ip netns add ns2    # create namespace ns2
$ ip netns list       # list existing namespaces

BUT: Cannot contain physical interfaces. Only virtual Ethernet (veth) interfaces.

Virtual Ethernet (veth) interfaces

Always come in pairs.

Connected like a tube: whatever comes in one veth interface will come out the other peer veth interface.

Used to connect a network namespace to the outside world via the "default" or "global" namespace where physical interfaces exist.

A simple connection is achieved by the following:


# create the veth pair
$ ip link add tap1 type veth peer name tap2
$ ip link list
# move the interfaces to the namespaces
$ ip link set tap1 netns ns1
$ ip link set tap2 netns ns2
$ ip link list # no veth here
$ ip netns exec ns1 ip link list
$ ip netns exec ns2 ip link list
# bring up the links (host addresses in the same /24 so the namespaces can reach each other)
$ ip netns exec ns1 ifconfig tap1 10.0.0.1/24 up
$ ip netns exec ns2 ifconfig tap2 10.0.0.2/24 up

Connecting namespaces by a Linux bridge

# deleting one end of a veth pair removes its peer as well
$ ip netns exec ns1 ip link delete tap1

$ brctl addbr mybridge

$ ip link set dev mybridge up

#### PORT 1

# create a port pair

$ ip link add tap1 type veth peer name br-tap1

# attach one side to linuxbridge

$ brctl addif mybridge br-tap1

# attach the other side to namespace

$ ip link set tap1 netns ns1

# set the ports to up

$ ip netns exec ns1 ifconfig tap1 10.0.0.1/24 up

$ ip link set dev br-tap1 up

(Figure: A veth pair connecting namespaces)

#### PORT 2


# create a port pair

$ ip link add tap2 type veth peer name br-tap2

# attach one side to linuxbridge

$ brctl addif mybridge br-tap2

# attach the other side to namespace

$ ip link set tap2 netns ns2

# set the ports to up

$ ip netns exec ns2 ifconfig tap2 10.0.0.2/24 up

$ ip link set dev br-tap2 up

Linux bridge between two tap devices
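A quick connectivity check across the bridge (assuming the addresses assigned above):

$ ip netns exec ns1 ping -c 3 10.0.0.2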

Relevant kernel settings

/etc/sysctl.conf

$ vim /etc/sysctl.conf

net.ipv4.ip_forward=1                   # enable IP forwarding (routing)
net.ipv4.conf.all.rp_filter=0           # disable reverse-path filtering
net.ipv4.conf.default.rp_filter=0       # (also for newly created interfaces)
net.bridge.bridge-nf-call-arptables=1   # pass bridged ARP traffic through arptables
net.bridge.bridge-nf-call-iptables=1    # pass bridged IPv4 traffic through iptables
net.bridge.bridge-nf-call-ip6tables=1   # pass bridged IPv6 traffic through ip6tables


$ sysctl -p # apply changes without a reboot


Network segmentation, encapsulation

VLAN tagging

Layer 2

One VLAN (one VLAN ID) = one broadcast domain

No impact on MTU (the MTU does not include the Ethernet header size)

Ethernet packet with VLAN Tag [length in bytes]:

Destination MAC address | Source MAC address | Type (VLAN: 0x8100) | VLAN Tag | Payload
           6            |         6          |          2          |    4     |  1500

VLAN Tag [length in bits]:

Priority | CFI | ID | Ethernet Type/Length
    3    |  1  | 12 |          16

Example packet

Frame 53 (70 bytes on wire, 70 bytes captured)

Ethernet II, Src: 00:40:05:40:ef:24, Dst: 00:60:08:9f:b1:f3

802.1q Virtual LAN

000. .... .... .... = Priority: 0

...0 .... .... .... = CFI: 0

.... 0000 0010 0000 = ID: 32

Type: IP (0x0800)

Internet Protocol, Src Addr: 131.151.32.129 (131.151.32.129), Dst Addr: 131.151.32.21 (131.151.32.21)

Transmission Control Protocol, Src Port: 1173 (1173), Dst Port: 6000 (6000), Seq: 0, Ack: 128, Len: 0

Disadvantages

Only 12 bits for the VLAN ID => 4094 available IDs

STP imposes the existence of a single active path => resiliency issues
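For illustration, a VLAN subinterface with tag 100 can be created on Linux like this (a minimal sketch; the interface name eth0 and VLAN ID 100 are assumed values):

$ ip link add link eth0 name eth0.100 type vlan id 100   # traffic leaving eth0.100 is tagged with VLAN ID 100
$ ip link set dev eth0.100 up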

GRE tunneling

Layer 3

Overlay network / Tunneling – IP packet is wrapped in another IP packet


MTU considerations

GRE overhead is 24 bytes (20-byte outer IP header + 4-byte GRE header)

Tunnel MTU is therefore 1476 bytes by default; larger packets with the DF bit set are dropped

GRE clears the DF bit, unless 'tunnel path-mtu-discovery' is set

Example packet

Frame 1: 138 bytes on wire (1104 bits), 138 bytes captured (1104 bits)
Ethernet II, Src: c2:00:57:75:00:00 (c2:00:57:75:00:00), Dst: c2:01:57:75:00:00 (c2:01:57:75:00:00)
Internet Protocol Version 4, Src: 10.0.0.1 (10.0.0.1), Dst: 10.0.0.2 (10.0.0.2)
Generic Routing Encapsulation (IP)
    Flags and Version: 0x0000
    Protocol Type: IP (0x0800)
Internet Protocol Version 4, Src: 1.1.1.1 (1.1.1.1), Dst: 2.2.2.2 (2.2.2.2)
Transmission Control Protocol, Src Port: 1173 (1173), Dst Port: 6000 (6000), Seq: 0, Ack: 128, Len: 0

Ethernet packet tunneled through GRE [length in bytes]:

Transport Ethernet header | Transport IP header [Protocol = GRE(47)] | GRE header | Original IP header | Payload
            18            |                    20                    |     4      |         20         | 1500 (1476)

GRE schema
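As an illustration, a GRE tunnel between two hosts can be set up on Linux like this (a minimal sketch; all addresses are assumed examples):

# on host A (10.0.0.1); host B (10.0.0.2) mirrors this with the addresses swapped
$ ip tunnel add gre1 mode gre local 10.0.0.1 remote 10.0.0.2 ttl 255
$ ip link set gre1 up
$ ip addr add 192.168.200.1/24 dev gre1   # overlay address; host B would use 192.168.200.2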


VXLAN Tagging

Layer 4, default port: 4789

Overlay network / Tunneling – Ethernet packet is wrapped in a UDP packet

VTEP - VXLAN Tunnel End Point

VTEP is located within the hypervisor

VTEP is assigned an IP address and acts as an IP host to the IP network

Recommended to increase the underlay MTU to 1550 bytes to accommodate the overhead:

VXLAN schema


- 1500 byte payload, which includes the original IP header(s)
- 14 byte inner Ethernet header
- 8 byte VXLAN header
- 8 byte UDP header
- 20 byte IP header

Ethernet packet tunneled through VXLAN [length in bytes]:

Transport Ethernet header | Transport IP header | Transport UDP header | VXLAN header | Original Ethernet header | Payload
            18            |         20          |          8           |      8       |            18            |  1500

Example packet

Frame 1: 148 bytes on wire (1184 bits), 148 bytes captured (1184 bits)
Ethernet II, Src: c2:00:57:75:00:00 (c2:00:57:75:00:00), Dst: c2:01:57:75:00:00 (c2:01:57:75:00:00)
Internet Protocol Version 4, Src: 10.0.0.1 (10.0.0.1), Dst: 10.0.0.2 (10.0.0.2)
    Protocol: UDP (17)
User Datagram Protocol, Src Port: 46219 (46219), Dst Port: 4789 (4789)
Virtual eXtensible Local Area Network
    ...
    .... 1... = VXLAN Network ID (VNI): Present
    ...
    VXLAN Network Identifier (VNI): 1234
Ethernet II, Src: a6:00:57:75:00:00 (a6:00:57:75:00:00), Dst: b3:01:57:75:00:00 (b3:01:57:75:00:00)
Internet Protocol Version 4, Src: 1.1.1.1 (1.1.1.1), Dst: 2.2.2.2 (2.2.2.2)
Transmission Control Protocol, Src Port: 1173 (1173), Dst Port: 6000 (6000), Seq: 0, Ack: 128, Len: 0
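For illustration, a VXLAN interface with the VNI from the capture above can be created on Linux (a minimal sketch; eth0 and the multicast group are assumed values):

$ ip link add vxlan0 type vxlan id 1234 dev eth0 dstport 4789 group 239.1.1.1   # VNI 1234, standard UDP port
$ ip link set vxlan0 up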


SDN Overview

SDN – source: opennetworking.org

Control Plane

Planning, setup, internal traffic, configuration, management, routers ‘talking’.

Routers exchange topology information and construct routing tables based on a routing protocol, for example Routing Information Protocol (RIP) or Open Shortest Path First (OSPF).

Control plane packets are processed by the router to update the routing table information.

Analogy with telephony – network signaling.

Data Plane

Also known as Forwarding Plane.

Analogy with telephony – network media.

Handles ‘user’ or ‘data’ traffic according to the setup from the Control Plane.


SDN with OpenStack

Source: docs.openstack.org:

If an SDN implementation requires layer-2 access because it directly manipulates switches, we do not recommend running an overlay network or a layer-3 agent. If the controller resides within an OpenStack installation, it may be necessary to build an ML2 plug-in and schedule the controller instances to connect to tenant VLANs that then talk directly to the switch hardware. Alternatively, depending on the external device support, use a tunnel that terminates at the switch hardware itself.

OpenStack hosted SDN controller

OpenStack participating in an SDN controller network


Open vSwitch (OVS)

Description, setup

Not a full-stack SDN / SDDC platform: focused on providing L2/L3 virtual networking.

The main advantage of using OVS over a Linux bridge as an L2 switching backend for Neutron is performance.

Replacing Linux bridge

In the section about Linux bridges, we used a Linux bridge to connect the virtual Ethernet interfaces tap1 and tap2 (these will serve as vNICs for OpenStack VMs).

Commands

Here is how we can achieve the same with OVS:

# clean the namespaces if they exist

root@server:~ $ ip netns delete ns1

root@server:~ $ ip netns delete ns2

# add the namespaces

root@server:~ $ ip netns add ns1

root@server:~ $ ip netns add ns2

# create the switch

root@server:~ $ ovs-vsctl add-br myovs

#### PORT 1

# create a port pair

root@server:~ $ ip link add tap1 type veth peer name ovs-tap1

# attach one side to ovs

root@server:~ $ ovs-vsctl add-port myovs ovs-tap1

# attach the other side to namespace

root@server:~ $ ip link set tap1 netns ns1

# set the ports to up

root@server:~ $ ip netns exec ns1 ip link set dev tap1 up

root@server:~ $ ip link set dev ovs-tap1 up

#### PORT 2

# create a port pair

root@server:~ $ ip link add tap2 type veth peer name ovs-tap2

# attach one side to ovs

root@server:~ $ ovs-vsctl add-port myovs ovs-tap2

# attach the other side to namespace

root@server:~ $ ip link set tap2 netns ns2

# set the ports to up


root@server:~ $ ip netns exec ns2 ip link set dev tap2 up

root@server:~ $ ip link set dev ovs-tap2 up

Schema

This is what we will end up with:

Problem (OpenStack-related) – we cannot (so far) impose firewall (security group) rules on OVS.

Currently the implementation employs a standard Linux bridge "in the middle" that does no bridging, merely filtering using iptables.

Replacing Linux bridge with Open vSwitch for namespaces connection


(New info (20th October 2015): "Changed in neutron: importance: Undecided → Wishlist")

Open vSwitch ports

Veth pairs can be replaced by Open vSwitch internal ports for better performance:

# add the namespaces

ip netns add ns1


ip netns add ns2

# create the switch

ovs-vsctl add-br ovstest

# PORT 1

# create an internal ovs port

ovs-vsctl add-port ovstest tap1 -- set Interface tap1 type=internal

# attach it to namespace

ip link set tap1 netns ns1

# set the ports to up

ip netns exec ns1 ip link set dev tap1 up

# PORT 2

# create an internal ovs port

ovs-vsctl add-port ovstest tap2 -- set Interface tap2 type=internal

# attach it to namespace

ip link set tap2 netns ns2

# set the ports to up

ip netns exec ns2 ip link set dev tap2 up

Performance comparison

Switch and connection type               # of iperf threads:   1     2     4     8     16
linuxbridge with two veth pairs                               3.9   8.5   8.8   9.5   9.1
openvswitch with two veth pairs                               4.5   9.7   11    11    11
openvswitch with two internal ovs ports                       42    69    76    67    74

Performance comparison Linux Bridge vs. OVS [GBit/s] on a single machine between two namespaces – source: opencloudblog.com


Full picture

OVS Overview


Management

Sources: openvswitch.org, yet.org

Two kinds of flows

OpenFlow flows - user-space based

Datapath flows - kernel based, a kind of cached version of the OpenFlow ones

CLI parts

ovs-vsctl - high-level interface to the Open vSwitch database

ovs-ofctl - speaks to the OpenFlow module

ovs-dpctl - speaks to the kernel datapath module

ovs-vsctl

ovs-vsctl -V - print the version of Open vSwitch

ovs-vsctl show - print a brief overview of database configuration

ovs-vsctl list-br - list of configured bridges

ovs-vsctl list-ports <bridge> - list of ports on a specific bridge

ovs-vsctl list interface - list of interfaces

ovs-ofctl

ovs-ofctl dump-flows <br> - examine OpenFlow tables

ovs-ofctl show <br> - port number to port name mapping

ovs-ofctl dump-ports <br> - port statistics by port number

ovs-dpctl

low level datapath manipulation

ovs-dpctl show - basic info

ovs-dpctl dump-flows - dump datapath (kernel cached) flows
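For illustration, a simple OpenFlow rule can be added and inspected like this (a sketch; the bridge name mybridge and port number 2 are assumed values):

$ ovs-ofctl add-flow mybridge "priority=100,in_port=2,actions=drop"   # drop all traffic arriving on port 2
$ ovs-ofctl dump-flows mybridge                                       # verify that the rule was installed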

Data Plane Development Kit (DPDK) with OVS

Development state analysis

DPDK library
  o Development: OK (http://dpdk.org/dev), last commit: today
  o Documentation: OK
  o Testing: OK
  o Production readiness: Intel - OK; other - scarce use (80%)

OVS 2.4 (needed for DPDK 2.0)
  o Development: OK, last commit: today
  o Documentation: OK
  o Testing: OK
  o Production readiness: 100%

OVS compiled with DPDK
  o Development: poor (github.com/01org/dpdk-ovs), last commit: 8 months ago
  o Documentation: very poor (e.g. makefiles wrong)
  o Testing: DPDK version 1.7 - poor; DPDK version 2.0 - non-existent
  o Production readiness: 10% - 50%

OVS + DPDK in Neutron
  o Development: in development (https://github.com/openstack/networking-ovs-dpdk), last commit: a week ago
  o Documentation: non-existent
  o Testing: non-existent (?)
  o Production readiness: 0% - 20%

A mechanism is needed for passing (very) custom parameters to qemu when running instances (to use DPDK features). Such a mechanism is part of nova, although this is not its intended use and such use may not be stable.


OpenContrail and a comparison of Neutron+OVS vs. OpenContrail+vRouter

Source: opencontrail.org

Configuration node

Management layer

North-bound APIs:
  o REST
  o GUI
  o Integration
  o Transformation

Horizontally scalable

Failover-safe (including active-active HA setup)

Control node

Control layer

South-bound APIs:
  o vRouters
  o physical routers

Horizontally scalable

Failover-safe (including active-active HA setup)

Analytics node

North-bound analytics APIs

Triggers, monitoring

Horizontally scalable

Failover-safe (including active-active HA setup)

vRouter on an OpenStack network/compute node

vRouter agent

The vRouter agent is a user space process running inside Linux. It acts as the local, light-weight control plane and is responsible for the following functions:


Exchanging control state such as routes with the Control nodes using XMPP.

Receiving low-level configuration state such as routing instances and forwarding policy from the Control nodes using XMPP.

Reporting analytics state such as logs, statistics, and events to the analytics nodes.

Installing forwarding state into the forwarding plane.

Discovering the existence and attributes of VMs in cooperation with the Nova agent.

Applying forwarding policy for the first packet of each new flow and installing a flow entry in the flow table of the forwarding plane.

Proxying DHCP, ARP, DNS, and MDNS. Additional proxies may be added in the future.

Each vRouter agent is connected to at least two control nodes for redundancy in an active-active redundancy model.

vRouter forwarding plane

The vRouter forwarding plane runs as a kernel loadable module in Linux and is responsible for the following functions:

Encapsulating packets sent to the overlay network and decapsulating packets received from the overlay network.

Assigning packets to a routing instance:
  o Packets received from the overlay network are assigned to a routing instance based on the MPLS label or Virtual Network Identifier (VNI).
  o Virtual interfaces to local virtual machines are bound to routing instances.

Doing a lookup of the destination address of the packet in the Forwarding Information Base (FIB) and forwarding the packet to the correct destination. The routes may be layer-3 IP prefixes or layer-2 MAC addresses.

Optionally, applying forwarding policy using a flow table:
  o Match packets against the flow table and apply the flow actions.
  o Optionally, punt the packets for which no flow rule is found (i.e. the first packet of every flow) to the vRouter agent, which then installs a rule in the flow table.

Punting certain packets such as DHCP, ARP, MDNS to the vRouter agent for proxying.

Comparison of Neutron+OVS vs. OpenContrail+vRouter

Production readiness

OVS

Initial commit: before 8.7.2009

https://bugs.launchpad.net/neutron/+bugs?field.tag=ovs

vRouter

Initial commit: 24.8.2013

https://bugs.launchpad.net/juniperopenstack/+bugs?field.tag=vrouter

Management + functional level (SDN)

Neutron + OVS
  o OpenFlow
  o API - native
      Sec. groups, FWaaS, LBaaS, VPNaaS
      Will support QoS in Liberty
  o GUI - e.g. Floodlight
  o DPDK
  o Neutron initial commit: 11.5.2011

OpenContrail + vRouter
  o Complete platform, GUI & API included


Dynamic routing

Service chaining - Initial commit: 24.8.2013

Performance comparison - vRouter vs. Linux Bridge

Source: opencontrail.org

Description

On this setup, the unidirectional throughput measured with vRouter using MPLS over GRE as the encapsulation is 9.18 Gbps. The CPU consumption is 128% (1.28 CPU cores) on the sender and 166% on the receiver. The CPU consumption includes the processing to push packets to and from the guest and does not include the CPU consumed by the guest itself. With bidirectional traffic (one TCP stream in each direction), the aggregate throughput is 13.1 Gbps and the CPU consumption is 188% on both ends.

Results

The table below compares the throughput and CPU consumption of vRouter with the Linux bridge numbers for a unidirectional TCP streaming test.

               Throughput   Sender CPU   Receiver CPU
Linux bridge    9.41 Gbps      85%          125%
vRouter         9.18 Gbps     128%          166%

Table 1: TCP unidirectional streaming test

The table below compares the numbers for a bidirectional TCP streaming test. The throughput below is the aggregate of the measured throughput at each end. The CPU consumption is the same on both servers as the traffic is bidirectional.

               Throughput   CPU consumption
Linux bridge    13.9 Gbps       128%
vRouter         13.1 Gbps       188%

Table 2: TCP bidirectional streaming test

In order to measure the latency of communication, a TCP request-response test was run between the 2 servers. The table below compares the number of request-response transactions seen with Linux bridge and vRouter.

               Request-response transactions
Linux bridge              11050
vRouter                   10800

Table 3: TCP request-response test

Data center networks often enable jumbo frames for better performance. The following table compares the performance of vRouter with Linux bridge with a jumbo MTU on the 10G interface. The guest application was modified to use sendfile() instead of send() in order to avoid a copy from user space to kernel. Otherwise, the single-threaded guest application couldn't achieve a bidirectional throughput higher than 14 Gbps.

               Throughput   CPU consumption
Linux bridge     18 Gbps        125%
vRouter         17.4 Gbps       120%

Table 4: TCP bidirectional streaming test (jumbo MTU)

SR-IOV (Single-root I/O Virtualization)

Alternative to virtual switches (OVS, Linux bridge).

Has to be supported by the physical NICs: https://access.redhat.com/articles/1390483

Protocol specification in 2007.

In OpenStack since 2014 (Juno).

Virtual Switch without (left) and with (right) SR-IOV enabled NIC – source: Glenn K. Lockwood, edited

Comparison to virtual switches:

+ performance (though a large gap remains compared to native, non-virtualized networking)


Comparison of virtualized w/o SR-IOV, w/ SR-IOV and native 10G switches using Amazon EC2 – source: Glenn K. Lockwood

- features (iptables, migration, HA)
- limited number of Virtual Functions per NIC


Neutron concept and relationship to other OpenStack components

Basic OpenStack services interaction scheme

(source: openstack.org)


Objective - To provide Network-as-a-Service (NaaS)

Sources: openstack.org, Hewlett-Packard Development Company, L.P.

To allow tenants to control their own private networks
  o Ability to create "multi-tier" networks
  o Control IP addressing (IP address overlapping)

Neutron API for operating logical networks
  o Separate logical operations and backend provisioning
  o Backend technologies are provisioned/configured by Neutron plugins/drivers

Neutron provides a common abstract network API independent of specific technologies/vendors.

Abstractions - what abstractions do we need to provide?
  o L2 (switches)
  o L3 (routers)
  o L4-7 (firewalls, load balancers)

Basic network model

Tenant can create multiple (L2) networks
  o A subnet defines L3 subnet information (CIDR, gateway, etc.)

Multiple subnets can be associated with a network
  o E.g., an IPv4 subnet + an IPv6 subnet, or multiple IPv4 address pools

Support for IP address overlapping among tenants

A network has multiple ports (similar to physical L2 switch ports)
  o Virtual NICs of VMs or router interfaces are associated with Neutron ports.

Basic Neutron network model
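A minimal CLI sketch of this model (the names and the CIDR are assumed examples):

$ neutron net-create private-net                                  # L2 network
$ neutron subnet-create private-net 10.0.1.0/24 --name private-subnet   # L3 information
$ neutron port-create private-net                                 # port a vNIC can attach to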


Components

API server - controller

Handles incoming API requests (including dashboard).

Handles DB connections & requests from agents.

Manages the agents.

OpenStack Networking plug-in and agents - nodes

Plugs and unplugs ports, creates networks or subnets, and provides IP addressing. The chosen plug-in and agents differ depending on the vendor and technologies used in the particular cloud. It is important to mention that only one plug-in can be used at a time.

Messaging queue – AMQP server & agents

Accepts and routes RPC requests between agents to complete API operations. The message queue is used in the ML2 plug-in for RPC between the neutron server and the neutron agents that run on each hypervisor, i.e. in the ML2 mechanism drivers for Open vSwitch and Linux bridge.


Neutron networks

Tenant networks

Users create tenant networks for connectivity within projects. By default, they are fully isolated and are not shared with other projects. OpenStack Networking supports the following types of network isolation and overlay technologies.

Flat

(useless for multi-tenant environments)

All instances reside on the same network, which can also be shared with the hosts. No VLAN tagging or other network segregation takes place.

VLAN

Described in the section "Network segmentation, encapsulation". Networking allows users to create multiple provider or tenant networks using VLAN IDs (802.1Q tagged) that correspond to VLANs present in the physical network. This allows instances to communicate with each other across the environment. They can also communicate with dedicated servers, firewalls, load balancers, and other networking infrastructure on the same layer 2 VLAN (including hosts).

GRE and VXLAN

Described in the section "Network segmentation, encapsulation". A Networking router is required to allow traffic to flow outside of the GRE or VXLAN tenant network. A router is also required to connect directly-connected tenant networks with external networks, including the Internet. The router provides the ability to connect to instances directly from an external network using floating IP addresses.

Provider networks

Networks that map to the provider's existing (physical) networks. They can be created only by the administrator. They can be:

External

Shared

Subnets

A block of IP addresses and associated configuration state. This is also known as the native IPAM (IP Address Management) provided by the networking service for both tenant and provider networks. Subnets are used to allocate IP addresses when new ports are created on a network.

Ports

A port is a connection point for attaching a single device, such as the NIC of a virtual server, to a virtual network. It also describes the associated network configuration, such as the MAC and IP addresses to be used on that port.


Routers

This is a logical component that forwards data packets between networks. It also provides L3 and NAT forwarding (SNAT for connections from VMs to the outside world, DNAT for floating IP address connections to VMs) to provide external network access for VMs on tenant networks. Required by certain plug-ins only.
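A minimal CLI sketch (the router, subnet and network names are assumed examples matching the earlier ones):

$ neutron router-create myrouter
$ neutron router-interface-add myrouter private-subnet   # internal interface on the tenant subnet
$ neutron router-gateway-set myrouter ext-net            # gateway to the external network (enables SNAT)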

Security groups

A security group acts as a virtual firewall for your compute instances to control inbound and outbound traffic. Security groups act at the port level, not the subnet level. Therefore, each port (and therefore each VM) in a subnet could be assigned to a different set of security groups.

Security groups and security group rules give administrators and tenants the ability to specify the type of traffic and direction (ingress/egress) that is allowed to pass through a port. A security group is a container for security group rules. When a port is created, it is associated with a security group.
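A minimal CLI sketch (the group name and the HTTP rule are assumed examples):

$ neutron security-group-create web
$ neutron security-group-rule-create --direction ingress --protocol tcp \
  --port-range-min 80 --port-range-max 80 --remote-ip-prefix 0.0.0.0/0 web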

Extensions

Applications can programmatically list available extensions by performing a GET on the /extensions URI.
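For example (a sketch; the endpoint address and the token variable are assumptions):

$ curl -s -H "X-Auth-Token: $TOKEN" http://controller1:9696/v2.0/extensions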


Relationship to other components & projects

Projects – Nova

Nova metadata is accessed from Neutron and vice versa.

Nova passes the bootfile DHCP parameter to Neutron for PXE boot.

Nova asks Neutron for default networks when launching an instance.

# launching an instance

$ nova boot [--flavor <flavor>] [--image <image>]

[--image-with <key=value>]

[--boot-volume <volume_id>]

[--snapshot <snapshot_id>] [--min-count <number>]

[--max-count <number>] [--meta <key=value>]

[--file <dst-path=src-path>]

[--key-name <key-name>]

[--user-data <user-data>]

[--availability-zone <availability-zone>]

[--security-groups <security-groups>]

[--block-device-mapping <dev-name=mapping>]

[--block-device key1=value1[,key2=value2...]]

[--swap <swap_size>]

[--ephemeral size=<size>[,format=<format>]]

[--hint <key=value>]

[--nic <net-id=net-uuid,v4-fixed-ip=ip-addr,v6-fixed-ip=ip-addr,port-id=port-uuid>]

[--config-drive <value>] [--poll]

[--admin-pass <value>]

<name>

# DHCP

root@compute1: $ ps -A | grep dnsmasq # just to get the process id
980 ? 00:00:00 dnsmasq
root@compute1: $ cat /proc/980/cmdline # 980 = the dnsmasq process id from above

dnsmasq --no-hosts --no-resolv --strict-order --bind-interfaces
  --interface=tap6cb4855a-ce --except-interface=lo
  --pid-file=/var/lib/neutron/dhcp/e58741d0-60c7-457e-8780-366ecd3200aa/pid
  --dhcp-hostsfile=/var/lib/neutron/dhcp/e58741d0-60c7-457e-8780-366ecd3200aa/host
  --addn-hosts=/var/lib/neutron/dhcp/e58741d0-60c7-457e-8780-366ecd3200aa/addn_hosts
  --dhcp-optsfile=/var/lib/neutron/dhcp/e58741d0-60c7-457e-8780-366ecd3200aa/opts
  --dhcp-leasefile=/var/lib/neutron/dhcp/e58741d0-60c7-457e-8780-366ecd3200aa/leases
  --dhcp-range=set:tag0,192.168.150.0,static,86400s --dhcp-lease-max=256
  --conf-file=/etc/neutron/dnsmasq-neutron.conf --server=8.8.8.8 --server=8.8.4.4
  --domain=openstacklocal

root@compute1: $ cat /var/lib/neutron/dhcp/e58741d0-60c7-457e-8780-366ecd3200aa/host
fa:16:3e:03:4e:65,host-192-168-150-233.openstacklocal,192.168.150.233
fa:16:3e:9f:ee:3d,host-192-168-150-239.openstacklocal,192.168.150.239
fa:16:3e:01:53:94,host-192-168-150-238.openstacklocal,192.168.150.238
fa:16:3e:b1:d1:e4,host-192-168-150-235.openstacklocal,192.168.150.235

DHCP traffic follows the same principles discussed in section Network traffic schemes / VM <-> VM.

Projects – Keystone

Standard authentication mechanisms when using Neutron API.

Components – AMQP (RabbitMQ)

Internal communication among Neutron services performed via RPC.

tcpdump -i eth1 "port 5672" # management network

Components – Database

Neutron has its dedicated database (on the controller / dedicated db server).

Neutron architecture


OpenStack commands for Neutron checking

Sources: openstack.org, http://docs.openstack.org/cli-reference/content/neutronclient_commands.html

Common parameters

*-list

[-c, -F] specify columns to be listed
[-f] format output (html, json, ...)
[-D] details
[--sort-key] sort according to a key
[--sort-dir] sort direction

*-create

[-c] specify columns to be listed
[-f] format output (html, json, ...)
[--tenant-id]

*-show

name or ID

*-update

name or ID

*-delete

name or ID

Basic commands

openstack network

(NOT external network)

list
create
  o name
  o [--shared]
  o [--provider:network_type]
  o [--provider:physical_network]
  o [--provider:segmentation_id]
  o [--vlan-transparent]
  o [--qos-policy]
delete
update
show

An external network has to be created using the 'old way':


$ neutron net-create ext-net --router:external True \

--provider:physical_network external --provider:network_type flat

Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 893aebb9-1c1e-48be-8908-6b947f3237b3 |
| name                      | ext-net                              |
| provider:network_type     | flat                                 |
| provider:physical_network | external                             |
| provider:segmentation_id  |                                      |
| router:external           | True                                 |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | 54cd044c64d5408b83f843d63624e0d8     |
+---------------------------+--------------------------------------+

$ neutron net-external-list
+--------------------------------------+--------+-------------------------------------------------------+
| id                                   | name   | subnets                                               |
+--------------------------------------+--------+-------------------------------------------------------+
| e58741d0-60c7-457e-8780-366ecd3200aa | public | 31a30f35-54c9-4767-83bc-8df76b909cdc 192.168.150.0/24 |
+--------------------------------------+--------+-------------------------------------------------------+

subnet

(NOT external network)

list
create
  o network
  o cidr
  o [--allocation-pool]
  o [--host-route]
  o [--gateway / --no-gateway]
  o [--dns-nameserver]
  o [--enable-dhcp / --disable-dhcp]
  o [--subnetpool]
  o [--prefixlen]
delete
update
  o [--allocation-pool]
  o [--host-route]
  o [--gateway / --no-gateway]
  o [--dns-nameserver]
  o [--enable-dhcp / --disable-dhcp]
show

external subnet:

$ neutron subnet-create ext-net --name ext-subnet \

--allocation-pool start=FLOATING_IP_START,end=FLOATING_IP_END \

--disable-dhcp --gateway EXTERNAL_NETWORK_GATEWAY

EXTERNAL_NETWORK_CIDR
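For instance, with hypothetical values filled in for the placeholders (203.0.113.0/24 is a documentation range, not taken from the training environment):

$ neutron subnet-create ext-net --name ext-subnet \
  --allocation-pool start=203.0.113.101,end=203.0.113.200 \
  --disable-dhcp --gateway 203.0.113.1 203.0.113.0/24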

Notes from docs.openstack.org:

Replace FLOATING_IP_START and FLOATING_IP_END with the first and last IP addresses of the range that you want to allocate for floating IP addresses. Replace EXTERNAL_NETWORK_CIDR with the subnet associated with the physical network. Replace EXTERNAL_NETWORK_GATEWAY with the gateway associated with the physical network, typically the ".1" IP address.

OK

You should disable DHCP on this subnet because instances do not connect directly to the external network and floating IP addresses require manual assignment.

This is for consideration.

floatingip

list
create
  o public network (from which the floating IP is allocated)
  o [--port-id]
  o [--fixed-ip-address]
  o [--floating-ip-address]
delete
show
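A minimal sketch (the IDs are placeholders):

$ neutron floatingip-create public
$ neutron floatingip-associate FLOATINGIP_ID PORT_ID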

port

list
create
  o network
  o [--fixed-ip]
  o [--device-id]
  o [--device-owner]
  o [--mac-address]
  o [--security-group / --no-security-groups]
  o [--qos-policy]
delete (!!! does not delete the VM's interface, or its IP address)
update
  o [--fixed-ip]
  o [--device-id]
  o [--device-owner]
  o [--security-group / --no-security-groups]
  o [--qos-policy]
show

$ nova interface-attach --port-id=<name or ID of port> <name or ID of instance>

router

list
create
  o name
  o [--ha]
  o [--distributed]
  o [--admin-state-down]
delete
update
  o [--name]
  o [--distributed]
  o [--admin-state-down]
show

security-group

list
create
  o name
  o [--description]
delete
update
  o [--name]
  o [--description]
show

security-group-rule

list
create
  o security_group
  o [--direction]
  o [--ethertype]
  o [--protocol]
  o [--port-range-min / --port-range-max]
  o [--remote-ip-prefix]
delete
show

QoS

Blueprints

https://blueprints.launchpad.net/neutron?searchtext=qos


Kilo

no support

Liberty

first official support (=> (prediction) not practically usable)

neutron qos-policy

A policy that can later be assigned to a VM or a router.

list
create
  o name
  o [--shared]
delete
update
  o [--name]
  o [--shared]
show

neutron qos-available-rule-types

list available rule types (in Liberty – 7.0.0 – only bandwidth-limit rules are available)

neutron qos-bandwidth-limit-rule

create
  o [--max-kbps]
  o [--max-burst-kbps]
  o qos_policy
show
update
  o [--max-kbps]
  o [--max-burst-kbps]
  o qos_policy
delete
list
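A minimal sketch of how these fit together in Liberty (the policy name and limits are assumed examples):

$ neutron qos-policy-create bw-limiter
$ neutron qos-bandwidth-limit-rule-create --max-kbps 3000 --max-burst-kbps 300 bw-limiter
$ neutron port-update PORT_ID --qos-policy bw-limiter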

neutron queue

QoS queues

create
  o [--min]
  o [--max]
  o [--qos-marking]
  o [--default]
  o [--dscp]
  o name
show
delete
list


Neutron configuration files

Controller

Log file

/var/log/neutron-server.log

Main configuration file

/etc/neutron/neutron.conf

[DEFAULT]

# Print more verbose output (set logging level to INFO instead of the
# default WARNING level).
# Handles logging to /var/log/neutron-server.log on the controller
verbose = True

# ===Start Global Config Option for Distributed L3 Router=======
# Setting the "router_distributed" flag to "True" will default to the
# creation of distributed tenant routers. The admin can override this
# flag by specifying the type of the router on the create request
# (admin-only attribute). Default value is "False" to support legacy
# mode (centralized) routers.
#
# Legacy mode (no DVR)
router_distributed = False
#
# ==End Global Config Option for Distributed L3 Router==========

# Print debugging output (set logging level to DEBUG instead of the
# default WARNING level).
# Handles logging to /var/log/neutron-server.log on the controller
# debug = False

# Where to store Neutron state files.

# This directory must be writable by the user executing the agent.

# Temporary files

# state_path = /var/lib/neutron

# log_format = %(asctime)s %(levelname)8s [%(name)s] %(message)s

# log_date_format = %Y-%m-%d %H:%M:%S

# Start - logging to syslog, stderr

# use_syslog -> syslog

# ... (logging to syslog, stderr)

# publish_errors = False

# End - logging to syslog, stderr


# API endpoint specification

# Address to bind the API server to

bind_host = controller1

# Port to bind the API server to

bind_port = 9696

# Path to the API extensions

# ...

# api_extensions_path =

# (StrOpt) Neutron core plugin entrypoint to be loaded from the
# neutron.core_plugins namespace. See setup.cfg for the entrypoint names
# of the plugins included in the neutron source distribution. For
# compatibility with previous versions, the class name of a plugin can
# be specified instead of its entrypoint name.
#
# Example: core_plugin = ml2
# https://github.com/openstack/neutron/blob/master/setup.cfg#L103
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin

# (ListOpt) List of service plugin entrypoints to be loaded from the
# neutron.service_plugins namespace. See setup.cfg for the entrypoint
# names of the plugins included in the neutron source distribution. For
# compatibility with previous versions, the class name of a plugin can
# be specified instead of its entrypoint name.
#
# Example: service_plugins = router,firewall,lbaas,vpnaas,metering
# https://github.com/openstack/neutron/blob/master/setup.cfg#L108
service_plugins = router,lbaas,vpnaas

# api-paste.ini – API & authentication settings

# Paste configuration file

# api_paste_config = api-paste.ini

# default = hostname
# (StrOpt) Hostname to be used by the neutron server, agents and
# services running on this machine. All the agents and services running
# on this machine must use the same host value.
# The default value is the hostname of the machine.
#
# host =


# Neutron authentication

# The strategy to be used for auth.

# Supported values are 'keystone'(default), 'noauth'.

auth_strategy = keystone

# Random MAC generation (defaults apply)
# Base MAC address. The first 3 octets will remain unchanged. If the
# 4th octet is not 00, it will also be used. The others will be
# randomly generated.
# 3 octets
# base_mac = fa:16:3e:00:00:00
# 4 octets
# base_mac = fa:16:3e:4f:00:00

# DVR Base MAC address. The first 3 octets will remain unchanged. If the
# 4th octet is not 00, it will also be used. The others will be randomly
# generated. The 'dvr_base_mac' *must* be different from 'base_mac' to
# avoid mixing them up with MACs allocated for tenant ports.
# A 4-octet example would be dvr_base_mac = fa:16:3f:4f:00:00
# The default is 3 octets
# dvr_base_mac = fa:16:3f:00:00:00

# Maximum amount of retries to generate a unique MAC address

# mac_generation_retries = 16

# DNSMASQ settings

# DHCP Lease duration (in seconds). Use -1 to

# tell dnsmasq to use infinite lease times.

# dhcp_lease_duration = 86400

# Allow sending resource operation notification to DHCP agent

# dhcp_agent_notification = True

# Enable or disable bulk create/update/delete operations

# allow_bulk = True

# Enable or disable pagination

# allow_pagination = False

# Enable or disable sorting

# allow_sorting = False

# Enable or disable overlapping IPs for subnets
# Attention: the following parameter MUST be set to False if Neutron is
# being used in conjunction with nova security groups
allow_overlapping_ips = True

# Ensure that the configured gateway is on the subnet. For IPv6, validate
# only if the gateway is not a link-local address. Deprecated, to be
# removed during the K release, at which point the check will be
# mandatory.
# force_gateway_on_subnet = True

# Default maximum number of items returned in a single response,
# value == infinite and value < 0 means no max limit, and value must
# be greater than 0. If the number of items requested is greater than
# pagination_max_limit, the server will just return pagination_max_limit
# items.
# pagination_max_limit = -1

# Maximum number of DNS nameservers per subnet

# max_dns_nameservers = 5

# Maximum number of host routes per subnet

# max_subnet_host_routes = 20

# Maximum number of fixed ips per port

# max_fixed_ips_per_port = 5

# Maximum number of routes per router

# max_routes = 30

# Default Subnet Pool to be used for IPv4 subnet-allocation.

# Specifies by UUID the pool to be used in case of subnet-create

# being called without a subnet-pool ID. The default of None means

# that no pool will be used unless passed explicitly to subnet create.

# If no pool is used, then a CIDR must be passed to create a subnet

# and that subnet will not be allocated from any pool; it will be

# considered part of the tenant's private address space.

# default_ipv4_subnet_pool =

# Default Subnet Pool to be used for IPv6 subnet-allocation.

# Specifies by UUID the pool to be used in case of subnet-create

# being called without a subnet-pool ID.

# default_ipv6_subnet_pool =

# New in Kilo, MTU for instances done in a different config file.

# ==== items for MTU selection and advertisement =============

# Advertise MTU. If True, effort is made to advertise MTU

# settings to VMs via network methods (ie. DHCP and RA MTU options)

# when the network's preferred MTU is known.

# advertise_mtu = False

# ==== end of items for MTU selection and advertisement =========

# =========== items for agent management extension =============

# Seconds to regard the agent as down; should be at least twice

# report_interval, to be sure the agent is down for good

# agent_down_time = 75

# =========== end of items for agent management extension =====

# =========== items for agent scheduler extension =============

# Driver to use for scheduling network to DHCP agent

# https://github.com/openstack/neutron/blob/master/neutron/scheduler/dhcp_agent_scheduler.py
# network_scheduler_driver =
#     neutron.scheduler.dhcp_agent_scheduler.ChanceScheduler

# Driver to use for scheduling a router to a default L3 agent
# https://github.com/openstack/neutron/blob/master/neutron/scheduler/l3_agent_scheduler.py
# router_scheduler_driver =
#     neutron.scheduler.l3_agent_scheduler.ChanceScheduler

# Driver to use for scheduling a loadbalancer pool to an lbaas agent
# loadbalancer_pool_scheduler_driver =
#     neutron.services.loadbalancer.agent_scheduler.ChanceScheduler

# (StrOpt) Representing the resource type whose load is being

# reported by the agent.

# This can be 'networks','subnets' or 'ports'. When specified

# (Default is networks), the server will extract particular load

# sent as part of its agent configuration object from the agent

# report state, which is the number of resources being consumed, at

# every report_interval.

# dhcp_load_type can be used in combination with

# network_scheduler_driver =

# neutron.scheduler.dhcp_agent_scheduler.WeightScheduler

# When the network_scheduler_driver is WeightScheduler,

# dhcp_load_type can be configured to represent the choice for the

# resource being balanced.

# Example: dhcp_load_type = networks

# Values:

# networks - number of networks hosted on the agent

# subnets - number of subnets associated with the networks

# hosted on the agent

# ports - number of ports associated with the networks hosted

# on the agent

# dhcp_load_type = networks

# Allow auto scheduling networks to DHCP agent. It will schedule

# non-hosted networks to first DHCP agent which sends

# get_active_networks message to neutron server

# network_auto_schedule = True

# Allow auto scheduling routers to L3 agent. It will schedule non-

# hosted routers to first L3 agent which sends sync_routers message

# to neutron server

# router_auto_schedule = True

# Allow automatic rescheduling of routers from dead L3 agents with

# admin_state_up set to True to alive agents.

# allow_automatic_l3agent_failover = False

# Allow automatic removal of networks from dead DHCP agents with

# admin_state_up set to True.


# Networks could then be rescheduled if network_auto_schedule is

# True

# allow_automatic_dhcp_failover = True

# Number of DHCP agents scheduled to host a network. This enables

# redundant DHCP agents for configured networks.

# dhcp_agents_per_network = 1

# Enable services on agents with admin_state_up False.

# If this option is False, when admin_state_up of an agent is

# turned to False, services on it will be disabled. If this option

# is True, services on agents with admin_state_up False keep

# available and manual scheduling to such agents is available.

# Agents with admin_state_up False are not selected for automatic

# scheduling regardless of this option.

# enable_services_on_agents_with_admin_state_down = False

# =========== end of items for agent scheduler extension =====

# =========== items for l3 extension ==============

# Enable high availability for virtual routers.

# l3_ha = False

#

# Maximum number of l3 agents which a HA router will be scheduled
# on. If it is set to 0 the router will be scheduled on every agent.
# max_l3_agents_per_router = 3

#

# Minimum number of l3 agents which a HA router will be scheduled

# on. The default value is 2.

# min_l3_agents_per_router = 2

#

# CIDR of the administrative network if HA mode is enabled

# l3_ha_net_cidr = 169.254.192.0/18

# =========== end of items for l3 extension =======

# =========== items for metadata proxy configuration ==============

# User (uid or name) running metadata proxy after its

# initialization (if empty: agent effective user)

# metadata_proxy_user =

# Group (gid or name) running metadata proxy after its

# initialization (if empty: agent effective group)

# metadata_proxy_group =

# Enable/Disable log watch by metadata proxy, it should be disabled

# when metadata_proxy_user/group is not allowed to read/write its

# log file and

# 'copytruncate' logrotate option must be used if logrotate is

# enabled on metadata proxy log files. Option default value is

# deduced from metadata_proxy_user: watch log is enabled if

# metadata_proxy_user is agent effective user id/name.

# metadata_proxy_watch_log =


# Location of Metadata Proxy UNIX domain socket

# metadata_proxy_socket = $state_path/metadata_proxy

# ===== end of items for metadata proxy configuration ============

# ========== items for VLAN trunking networks ==========

# Setting this flag to True will allow plugins that support it to

# create VLAN transparent networks. This flag has no effect for

# plugins that do not support VLAN transparent networks.

# vlan_transparent = False

# ========== end of items for VLAN trunking networks ==========

# == WSGI (Web Server Gateway Interface) API server parameters ==

# Number of separate worker processes to spawn. The default, 0,

# runs the worker thread in the current process. Greater than 0

# launches that number of child processes as workers. The parent

# process manages them.

# api_workers = 0

# Number of separate RPC worker processes to spawn. The default,

# 0, runs the worker thread in the current process. Greater than 0

# launches that number of child processes as RPC workers. The

# parent process manages them. This feature is experimental until

# issues are addressed and testing has been enabled for various

# plugins for compatibility.

# rpc_workers = 0

# Timeout for client connections socket operations. If an

# incoming connection is idle for this number of seconds it

# will be closed. A value of '0' means wait forever. (integer

# value)

# client_socket_timeout = 900

# wsgi keepalive option. Determines if connections are allowed to
# be held open by clients after a request is fulfilled. A value of
# False will ensure that the socket connection will be explicitly
# closed once a response has been sent to the client.
# wsgi_keep_alive = True

# Sets the value of TCP_KEEPIDLE in seconds to use for each server

# socket when starting API server. Not supported on OS X.

# tcp_keepidle = 600

# Number of seconds to keep retrying to listen

# retry_until_window = 30

# Number of backlog requests to configure the socket with.

# backlog = 4096

# Max header line to accommodate large tokens

# max_header_line = 16384

# Enable SSL on the API server

# use_ssl = False


# Certificate file to use when starting API server securely

# ssl_cert_file = /path/to/certfile

# Private key file to use when starting API server securely

# ssl_key_file = /path/to/keyfile

# CA certificate file to use when starting API server securely to
# verify connecting clients. This is an optional parameter only
# required if API clients need to authenticate to the API server
# using SSL certificates signed by a trusted CA
# ssl_ca_file = /path/to/cafile

# ===== end of WSGI parameters related to the API server =========

# ======== neutron nova interactions ==========

# Send notification to nova when port status is active.

notify_nova_on_port_status_changes = True

# Send notifications to nova when port data (fixed_ips/floatingips)

# change so nova can update its cache.

notify_nova_on_port_data_changes = True

# URL for connection to nova (Only supports one nova region
# currently).
nova_url = http://controller1:8774/v2

# Name of nova region to use. Useful if keystone manages more than
# one region
# nova_region_name =

# Username for connection to nova in admin context

# nova_admin_username =

# The uuid of the admin nova tenant

# nova_admin_tenant_id =

# The name of the admin nova tenant. If the uuid of the admin nova

# tenant is set, this is optional. Useful for cases where the uuid

# of the admin nova tenant is not available when configuration is

# being done.

# nova_admin_tenant_name =

# Password for connection to nova in admin context.

# nova_admin_password =

# Authorization URL for connection to nova in admin context.

# nova_admin_auth_url =

# CA file for novaclient to verify server certificates

# nova_ca_certificates_file =

# Boolean to control ignoring SSL errors on the nova url


# nova_api_insecure = False

# Number of seconds between sending events to nova if there are
# any events to send
# send_events_interval = 2

# ======== end of neutron nova interactions ==========

# ================= AMQP ======================

# Use durable queues in amqp. (boolean value)

# Deprecated group/name - [DEFAULT]/rabbit_durable_queues

# amqp_durable_queues=false

# Auto-delete queues in amqp. (boolean value)

# amqp_auto_delete=false

# Size of RPC connection pool. (integer value)

# rpc_conn_pool_size=30

# The messaging driver to use, defaults to rabbit. Other

# drivers include qpid and zmq. (string value)

rpc_backend = rabbit

# The default exchange under which topics are scoped. May be

# overridden by an exchange name specified in the

# transport_url option. (string value)

# control_exchange=openstack

[matchmaker_redis]

#

# Options defined in oslo.messaging

#

# Host to locate redis. (string value)

# host=127.0.0.1

# Use this port to connect to redis host. (integer value)

# port=6379

# Password for Redis server (optional). (string value)

# password=

[matchmaker_ring]

#

# Options defined in oslo.messaging

#

# Matchmaker ring file (JSON). (string value)

# Deprecated group/name - [DEFAULT]/matchmaker_ringfile

# ringfile=/etc/oslo/matchmaker_ring.json


# ================= end of AMQP ======================

# ================= quotas ======================

# Default driver to use for quota checks

# quota_driver = neutron.db.quota_db.DbQuotaDriver

# Resource name(s) that are supported in quota features

# quota_items = network,subnet,port

# Default number of resource allowed per tenant. A negative value
# means unlimited.
# default_quota = -1

# Number of networks allowed per tenant. A negative value means
# unlimited.
# quota_network = 10

# Number of subnets allowed per tenant. A negative value means
# unlimited.
# quota_subnet = 10

# Number of ports allowed per tenant. A negative value means
# unlimited.
# quota_port = 50

# Number of security groups allowed per tenant. A negative value
# means unlimited.
# quota_security_group = 10

# Number of security group rules allowed per tenant. A negative
# value means unlimited.
# quota_security_group_rule = 100

# Number of vips allowed per tenant. A negative value means
# unlimited.
# quota_vip = 10

# Number of pools allowed per tenant. A negative value means
# unlimited.
# quota_pool = 10

# Number of pool members allowed per tenant. A negative value means
# unlimited. The default is unlimited because a member is not a
# real resource consumer on OpenStack. However, on the back-end, a
# member is a resource consumer and that is the reason why a quota
# is possible.
# quota_member = -1


# Number of health monitors allowed per tenant. A negative value
# means unlimited. The default is unlimited because a health
# monitor is not a real resource consumer on OpenStack. However, on
# the back-end, a health monitor is a resource consumer and that is
# the reason why a quota is possible.
# quota_health_monitor = -1

# Number of loadbalancers allowed per tenant. A negative value
# means unlimited.
# quota_loadbalancer = 10

# Number of listeners allowed per tenant. A negative value means
# unlimited.
# quota_listener = -1

# Number of v2 health monitors allowed per tenant. A negative value
# means unlimited. These health monitors exist under the lbaas v2
# API
# quota_healthmonitor = -1

# Number of routers allowed per tenant. A negative value means
# unlimited.
# quota_router = 10

# Number of floating IPs allowed per tenant. A negative value means
# unlimited.
# quota_floatingip = 50

# Number of firewalls allowed per tenant. A negative value means
# unlimited.
# quota_firewall = 1

# Number of firewall policies allowed per tenant. A negative value
# means unlimited.
# quota_firewall_policy = 1

# Number of firewall rules allowed per tenant. A negative value
# means unlimited.
# quota_firewall_rule = 100

# ================= end of quotas ======================

# ================= agent ======================

# Use "sudo neutron-rootwrap /etc/neutron/rootwrap.conf" to use the

# real root filter facility. Change to "sudo" to skip the filtering

# and just run the command directly

root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf


# Set to true to add comments to generated iptables rules that

# describe each rule's purpose. (System must support the iptables

# comments module.)

# comment_iptables_rules = True

# Root helper daemon application to use when possible.

# root_helper_daemon =

# Use the root helper when listing the namespaces on a system. This

# may not be required depending on the security configuration. If

# the root helper is not required, set this to False for a

# performance improvement.

# use_helper_for_ns_read = True

# The interval to check external processes for failure in seconds
# (0=disabled)
# check_child_processes_interval = 60

# Action to take when an external process spawned by an agent dies

# Values:

# respawn - Respawns the external process

# exit - Exits the agent

# check_child_processes_action = respawn

# ================= end of agent ======================

# =========== items for agent management extension =============

# seconds between nodes reporting state to server; should be less

# than agent_down_time, best if it is half or less than

# agent_down_time

# report_interval = 30

# =========== end of items for agent management extension =====

# ============== keystone_authtoken ==========================

auth_uri = http://controller1:5000

auth_url = http://controller1:35357

auth_plugin = password

project_domain_id = default

user_domain_id = default

project_name = service

username = neutron

password = asd08f756ae0ftya

# ============== end of keystone_authtoken ========================

# ====================== database ==========================

# This line MUST be changed to actually run the plugin.

# Example: connection = mysql://root:pass@127.0.0.1:3306/neutron

# Replace 127.0.0.1 above with the IP address of the database used

# by the main neutron server. (Leave it as is if the database runs

# on this host.)

# NOTE: In deployment the [database] section and its connection


# attribute may be set in the corresponding core plugin '.ini'

# file. However, it is suggested to put the [database] section and

# its connection attribute in this configuration file.

connection = mysql://neutron:k9xBUqUbsjf8BY6wRmVz@localhost/neutron

# Database engine for which script will be generated when using

# offline migration

# engine =
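For reference, the schema behind the connection string above is managed with neutron-db-manage; a minimal sketch, using the plugin config path as laid out in this document:

# Apply the latest database migrations
$ neutron-db-manage --config-file /etc/neutron/neutron.conf \
    --config-file /etc/neutron/plugins/ml2_conf.ini upgrade head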

# The SQLAlchemy connection string used to connect to the slave
# database
# slave_connection =

# Database reconnection retry times - in event connectivity is lost

# set to -1 implies an infinite retry count

# max_retries = 10

# Database reconnection interval in seconds - if the initial

# connection to the database fails

# retry_interval = 10

# Minimum number of SQL connections to keep open in a pool

# min_pool_size = 1

# Maximum number of SQL connections to keep open in a pool

# max_pool_size = 10

# Timeout in seconds before idle sql connections are reaped

# idle_timeout = 3600

# If set, use this value for max_overflow with sqlalchemy

# max_overflow = 20

# Verbosity of SQL debugging information. 0=None, 100=Everything

# connection_debug = 0

# Add python stack traces to SQL as comment strings

# connection_trace = False

# If set, use this value for pool_timeout with sqlalchemy

# pool_timeout = 10

# ====================== end of database ==========================

# =========================== nova ==============================

# Name of the plugin to load

# auth_plugin =

# Config Section from which to load plugin specific options

# auth_section =

# PEM encoded Certificate Authority to use when verifying HTTPs
# connections.
# cafile =


# PEM encoded client certificate cert file

# certfile =

# Verify HTTPS connections.

# insecure = False

# PEM encoded client certificate key file

# keyfile =

# Timeout value for http requests

# timeout =

auth_url = http://controller1:35357

auth_plugin = password

project_domain_id = default

user_domain_id = default

region_name = regionOne

project_name = service

username = nova

password = asd08f756ae0ftya

# ========================= end of nova ===========================

# ====================== oslo_concurrency =========================

# Directory to use for lock files. For security, the specified

# directory should only be writable by the user running the

# processes that need locking. Defaults to environment variable

# OSLO_LOCK_PATH. If external locks are used, a lock path must be

# set.

lock_path = $state_path/lock

# Enables or disables inter-process locks.

# disable_process_locking = False

# ========================= oslo_policy ==========================

# The JSON file that defines policies.

# policy_file = policy.json

# Default rule. Enforced when a requested rule is not found.

# policy_default_rule = default

# Directories where policy configuration files are stored.

# They can be relative to any directory in the search path defined

# by the config_dir option, or absolute paths. The file defined by

# policy_file must exist for these directories to be searched.

# Missing or empty directories are ignored.

# policy_dirs = policy.d

# ===================== oslo_messaging_amqp =======================

# Address prefix used when sending to a specific server (string
# value)
# Deprecated group/name - [amqp1]/server_request_prefix


# server_request_prefix = exclusive

# Address prefix used when broadcasting to all servers (string
# value)
# Deprecated group/name - [amqp1]/broadcast_prefix
# broadcast_prefix = broadcast

# Address prefix when sending to any server in group (string value)

# Deprecated group/name - [amqp1]/group_request_prefix

# group_request_prefix = unicast

# Name for the AMQP container (string value)

# Deprecated group/name - [amqp1]/container_name

# container_name =

# Timeout for inactive connections (in seconds) (integer value)

# Deprecated group/name - [amqp1]/idle_timeout

# idle_timeout = 0

# Debug: dump AMQP frames to stdout (boolean value)

# Deprecated group/name - [amqp1]/trace

# trace = false

# CA certificate PEM file for verifying server certificate (string
# value)
# Deprecated group/name - [amqp1]/ssl_ca_file
# ssl_ca_file =

# Identifying certificate PEM file to present to clients (string
# value)
# Deprecated group/name - [amqp1]/ssl_cert_file
# ssl_cert_file =

# Private key PEM file used to sign cert_file certificate (string
# value)
# Deprecated group/name - [amqp1]/ssl_key_file
# ssl_key_file =

# Password for decrypting ssl_key_file (if encrypted) (string value)

# Deprecated group/name - [amqp1]/ssl_key_password

# ssl_key_password =

# Accept clients using either SSL or plain TCP (boolean value)

# Deprecated group/name - [amqp1]/allow_insecure_clients

# allow_insecure_clients = false

# ==================== oslo_messaging_rabbit ======================

# Use durable queues in AMQP. (boolean value)

# Deprecated group/name - [DEFAULT]/rabbit_durable_queues

# amqp_durable_queues=False

# Auto-delete queues in AMQP. (boolean value)

# Deprecated group/name - [DEFAULT]/amqp_auto_delete

# amqp_auto_delete = false


# Size of RPC connection pool. (integer value)

# Deprecated group/name - [DEFAULT]/rpc_conn_pool_size

# rpc_conn_pool_size = 30

# SSL version to use (valid only if SSL enabled). Valid values are
# TLSv1 and SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be
# available on some distributions. (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_version
# kombu_ssl_version =

# SSL key file (valid only if SSL enabled). (string value)

# Deprecated group/name - [DEFAULT]/kombu_ssl_keyfile

# kombu_ssl_keyfile =

# SSL cert file (valid only if SSL enabled). (string value)

# Deprecated group/name - [DEFAULT]/kombu_ssl_certfile

# kombu_ssl_certfile =

# SSL certification authority file (valid only if SSL enabled).
# (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_ca_certs
# kombu_ssl_ca_certs =

# How long to wait before reconnecting in response to an AMQP
# consumer cancel notification. (floating point value)
# Deprecated group/name - [DEFAULT]/kombu_reconnect_delay
# kombu_reconnect_delay = 1.0

# The RabbitMQ broker address where a single node is used. (string
# value)
# Deprecated group/name - [DEFAULT]/rabbit_host
rabbit_host = rabbitmq1

# The RabbitMQ broker port where a single node is used. (integer
# value)
# Deprecated group/name - [DEFAULT]/rabbit_port
rabbit_port = 5672

# RabbitMQ HA cluster host:port pairs. (list value)

# Deprecated group/name - [DEFAULT]/rabbit_hosts

# rabbit_hosts = $rabbit_host:$rabbit_port

# Connect over SSL for RabbitMQ. (boolean value)

# Deprecated group/name - [DEFAULT]/rabbit_use_ssl

rabbit_use_ssl = False

# The RabbitMQ userid. (string value)

# Deprecated group/name - [DEFAULT]/rabbit_userid

rabbit_userid = guest


# The RabbitMQ password. (string value)

# Deprecated group/name - [DEFAULT]/rabbit_password

rabbit_password = ousthf67thb9R8876RBASD

# The RabbitMQ login method. (string value)

# Deprecated group/name - [DEFAULT]/rabbit_login_method

# rabbit_login_method = AMQPLAIN

# The RabbitMQ virtual host. (string value)

# Deprecated group/name - [DEFAULT]/rabbit_virtual_host

rabbit_virtual_host = /

# How frequently to retry connecting with RabbitMQ. (integer value)

# rabbit_retry_interval=1

# How long to backoff for between retries when connecting to
# RabbitMQ. (integer value)
# Deprecated group/name - [DEFAULT]/rabbit_retry_backoff
# rabbit_retry_backoff=2

# Maximum number of RabbitMQ connection retries. Default is 0
# (infinite retry count). (integer value)
# Deprecated group/name - [DEFAULT]/rabbit_max_retries
# rabbit_max_retries=0

# Use HA queues in RabbitMQ (x-ha-policy: all). If you change this

# option, you must wipe the RabbitMQ database. (boolean value)

# Deprecated group/name - [DEFAULT]/rabbit_ha_queues

# rabbit_ha_queues=False

# Deprecated, use rpc_backend=kombu+memory or rpc_backend=fake
# (boolean value)
# Deprecated group/name - [DEFAULT]/fake_rabbit
fake_rabbit = False

# ===================== service providers ====================
# Specify service providers (drivers) for advanced services like
# loadbalancer, VPN, Firewall. Must be in form:
# service_provider=<service_type>:<name>:<driver>[:default]
# List of allowed service types includes LOADBALANCER, FIREWALL, VPN
# Combination of <service type> and <name> must be unique;
# <driver> must also be unique
# This is a multiline option, example for default provider:
# service_provider=LOADBALANCER:name:lbaas_plugin_driver_path:default
# example of non-default provider:
# service_provider=FIREWALL:name2:firewall_driver_path
service_provider = LOADBALANCER:Haproxy:neutron_lbaas.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default


service_provider = VPN:openswan:neutron_vpnaas.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default
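With the Haproxy provider registered above, LBaaS v1 resources can then be created against a tenant subnet (a sketch; the pool name and subnet ID are placeholders):

# Create a load balancer pool backed by the Haproxy provider
$ neutron lb-pool-create --lb-method ROUND_ROBIN --name mypool \
    --protocol HTTP --subnet-id <subnet-id>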


Nova configuration file

/etc/nova/nova.conf

# Control for checking for default networks (string value)

#use_neutron_default_nets=False

# Default tenant id when creating neutron networks (string

# value)

#neutron_default_tenant_id=default

# Number of private networks allowed per project (integer

# value)

#quota_networks=3

# The full class name of the network API class to use (string

# value)

network_api_class = nova.network.neutronv2.api.API

# === Options defined in nova.network.driver ===

# Driver to use for network creation (string value)

#network_driver=nova.network.linux_net

# === Options defined in nova.network.floating_ips ===

# Default pool for floating IPs (string value)

default_floating_pool = public

# Autoassigning floating IP to VM (boolean value)

#auto_assign_floating_ip=false

# === Options defined in nova.network.neutronv2.api ===

# URL for connecting to neutron (string value)

url = http://controller1:9696

# User id for connecting to neutron in admin context.

# DEPRECATED: specify an auth_plugin and appropriate

# credentials instead. (string value)

#admin_user_id=<None>

# Username for connecting to neutron in admin context

# DEPRECATED: specify an auth_plugin and appropriate

# credentials instead. (string value)

admin_username = neutron

# Password for connecting to neutron in admin context

# DEPRECATED: specify an auth_plugin and appropriate

# credentials instead. (string value)

admin_password = asd08f756ae0ftya

# Tenant id for connecting to neutron in admin context

# DEPRECATED: specify an auth_plugin and appropriate

# credentials instead. (string value)

#admin_tenant_id=<None>


# Tenant name for connecting to neutron in admin context. This

# option will be ignored if neutron_admin_tenant_id is set.

# Note that with Keystone V3 tenant names are only unique

# within a domain. DEPRECATED: specify an auth_plugin and

# appropriate credentials instead. (string value)

admin_tenant_name = service

# Region name for connecting to neutron in admin context

# (string value)

#region_name=<None>

# Authorization URL for connecting to neutron in admin

# context. DEPRECATED: specify an auth_plugin and appropriate

# credentials instead. (string value)

admin_auth_url = http://controller1:35357/v2.0

# Authorization strategy for connecting to neutron in admin

# context. DEPRECATED: specify an auth_plugin and appropriate

# credentials instead. If an auth_plugin is specified strategy

# will be ignored. (string value)

auth_strategy = keystone

# Name of Integration Bridge used by Open vSwitch (string

# value)

#ovs_bridge=br-int

# Number of seconds before querying neutron for extensions

# (integer value)

#extension_sync_interval=600


ML2 plugin configuration file

/etc/neutron/plugins/ml2_conf.ini

# Drivers to be subsequently used in tenant_network_types

# (ListOpt) List of network type driver entrypoints to be loaded

# from the neutron.ml2.type_drivers namespace.

# Example: type_drivers = local,flat,vlan,gre,vxlan

type_drivers = flat,vxlan

# (ListOpt) Ordered list of network_types to allocate as tenant

# networks. The default value 'local' is useful for single-box

# testing but provides no connectivity between hosts.

#

# Example: tenant_network_types = vlan,gre,vxlan

tenant_network_types = vxlan

# (ListOpt) Ordered list of networking mechanism driver entrypoints

# to be loaded from the neutron.ml2.mechanism_drivers namespace.

# Example: mechanism_drivers = openvswitch,mlnx

# Example: mechanism_drivers = arista

# Example: mechanism_drivers = cisco,logger

# Example: mechanism_drivers = openvswitch,brocade

# Example: mechanism_drivers = linuxbridge,brocade

mechanism_drivers = openvswitch,l2population

# (ListOpt) Ordered list of extension driver entrypoints

# to be loaded from the neutron.ml2.extension_drivers namespace.

# extension_drivers =

# Example: extension_drivers = anewextensiondriver

# =========== items for MTU selection and advertisement =========
# The MTU can be set in several places; this is one option, the
# DHCP response (see the dnsmasq config file) is another

# (IntOpt) Path MTU. The maximum permissible size of an

# unfragmented packet travelling from and to addresses where

# encapsulated Neutron traffic is sent. Drivers calculate

# maximum viable MTU for validating tenant requests based on this

# value (typically, path_mtu - max encap header size). If <=0,

# the path MTU is indeterminate and no calculation takes place.

# path_mtu = 0

# (IntOpt) Segment MTU. The maximum permissible size of an

# unfragmented packet travelling a L2 network segment. If <=0,

# the segment MTU is indeterminate and no calculation takes place.

# segment_mtu = 0

# (ListOpt) Physical network MTUs. List of mappings of physical

# network to MTU value. The format of the mapping is

# <physnet>:<mtu val>. This mapping allows specifying a

# physical network MTU value that differs from the default

# segment_mtu value.

# physical_network_mtus =

# Example: physical_network_mtus = physnet1:1550, physnet2:1500


# ====================== flat networks ======================

# (ListOpt) List of physical_network names with which flat

# networks

# can be created. Use * to allow flat networks with arbitrary

# physical_network names.

#

# Example: flat_networks = physnet1,physnet2
# Example: flat_networks = *

flat_networks = external
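The 'external' physical network named here is what an admin references when creating the external flat network; a sketch matching this configuration:

# Create the external flat network on the 'external' physical network
$ neutron net-create public --router:external \
    --provider:network_type flat --provider:physical_network external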

# ====================== vlan networks ======================

# (ListOpt) List of <physical_network>[:<vlan_min>:<vlan_max>]
# tuples specifying physical_network names usable for VLAN provider
# and tenant networks, as well as ranges of VLAN tags on each
# physical_network available for allocation as tenant networks.

#

# network_vlan_ranges =

# Example: network_vlan_ranges = physnet1:1000:2999,physnet2

# ====================== gre networks ======================

# (ListOpt) Comma-separated list of <tun_min>:<tun_max> tuples
# enumerating ranges of GRE tunnel IDs that are available for
# tenant network allocation
# tunnel_id_ranges =

# ====================== vxlan networks ======================

# (ListOpt) Comma-separated list of <vni_min>:<vni_max> tuples
# enumerating ranges of VXLAN VNI IDs that are available for tenant
# network allocation.
#
vni_ranges = 65537:69999

# (StrOpt) Multicast group for the VXLAN interface. When configured,
# will enable sending all broadcast traffic to this multicast group.
# When left unconfigured, will disable multicast VXLAN mode.
#
# vxlan_group =
# Example: vxlan_group = 239.1.1.1
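Tenant networks normally get a VNI from vni_ranges automatically; an admin can also pin one explicitly (a sketch; the network name is illustrative and the VNI must fall inside the configured range):

# Explicitly pick a VNI from the configured 65537:69999 range (admin only)
$ neutron net-create demo-net --provider:network_type vxlan \
    --provider:segmentation_id 65537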

# ====================== security groups ======================

# Controls if neutron security group is enabled or not.

# It should be false when you use nova security group.

enable_security_group = True

# Use ipset to speed-up the iptables security groups. Enabling
# ipset support requires that ipset is installed on the L2 agent
# node.


enable_ipset = True


OVS plugin configuration file

/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini

[ovs]

# (BoolOpt) Set to True in the server and the agents to enable
# support for GRE or VXLAN networks. Requires kernel support for
# OVS patch ports and GRE or VXLAN tunneling.
enable_tunneling = True

# Do not change this parameter unless you have a good reason to.
# This is the name of the OVS integration bridge. There is one per
# hypervisor. The integration bridge acts as a virtual "patch bay".
# All VM VIFs are attached to this bridge and then "patched"
# according to their network connectivity.
#
integration_bridge = br-int

# Do not change this parameter unless you have a good reason to.

# Only used for the agent if tunnel_id_ranges is not empty for

# the server. In most cases, the default value should be fine.

#

# tunnel_bridge = br-tun

# Peer patch port in integration bridge for tunnel bridge

# int_peer_patch_port = patch-tun

# Peer patch port in tunnel bridge for integration bridge

# tun_peer_patch_port = patch-int

# Uncomment this line for the agent if tunnel_id_ranges is not

# empty for the server. Set local-ip to be the local IP address

# of this hypervisor.

#

# local_ip =

# (ListOpt) Comma-separated list of <physical_network>:<bridge>
# tuples mapping physical network names to the agent's node-specific
# OVS bridge names to be used for flat and VLAN networks. The length
# of bridge names should be no more than 11. Each bridge must exist,
# and should have a physical network interface configured as a port.
# All physical networks configured on the server should have
# mappings to appropriate bridges on each agent.
#
# Example: bridge_mappings = physnet1:br-eth1
bridge_mappings = external:br-ex
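The mapped bridge must exist on each node before the agent starts; a minimal sketch, assuming eth2 is the physical interface attached to the external network:

# Create the external bridge and plug the physical NIC into it
# (eth2 is an assumption; use the node's actual external interface)
$ ovs-vsctl add-br br-ex
$ ovs-vsctl add-port br-ex eth2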


# (BoolOpt) Use veths instead of patch ports to interconnect the
# integration bridge to physical networks. Supports kernels without
# OVS patch port support so long as it is set to True.
# use_veth_interconnection = False

# (StrOpt) Which OVSDB backend to use, defaults to 'vsctl'

# vsctl - The backend based on executing ovs-vsctl

# native - The backend based on using native OVSDB

# ovsdb_interface = vsctl

# (StrOpt) The connection string for the native OVSDB backend

# To enable ovsdb-server to listen on port 6640:

# ovs-vsctl set-manager ptcp:6640:127.0.0.1

# ovsdb_connection = tcp:127.0.0.1:6640

[agent]

# Agent's polling interval in seconds

polling_interval = 15

# Minimize polling by monitoring ovsdb for interface changes

# minimize_polling = True

# When minimize_polling = True, the number of seconds to wait before

# respawning the ovsdb monitor after losing communication with it

# ovsdb_monitor_respawn_interval = 30

# (ListOpt) The types of tenant network tunnels supported by the
# agent. Setting this will enable tunneling support in the agent.
# This can be set to either 'gre' or 'vxlan'. If this is unset, it
# will default to [] and disable tunneling support in the agent.
# You can specify as many values here as your compute hosts
# support.
#
# Example: tunnel_types = gre
# Example: tunnel_types = vxlan
# Example: tunnel_types = vxlan, gre
tunnel_types = vxlan

# (IntOpt) The port number to utilize if tunnel_types includes
# 'vxlan'. By default, this will make use of the Open vSwitch
# default value of '4789' if not specified.
#
# vxlan_udp_port =
# Example: vxlan_udp_port = 8472

# (IntOpt) This is the MTU size of veth interfaces.


# Do not change unless you have a good reason to.

# The default MTU size of veth interfaces is 1500.

# This option has no effect if use_veth_interconnection is False

# veth_mtu =

# Example: veth_mtu = 1504

# L2 population – speeds up handling of ARP requests
# (BoolOpt) Flag to enable l2-population extension. This option
# should only be used in conjunction with ml2 plugin and
# l2population mechanism driver. It'll enable plugin to populate
# remote ports macs and IPs (using fdb_add/remove RPC callbacks
# instead of tunnel_sync/update) on OVS agents in order to
# optimize tunnel management.
#
l2_population = True

# Enable local ARP responder. Requires OVS 2.1. This is only used

# by the l2 population ML2 MechanismDriver.

#

arp_responder = False

# Enable suppression of ARP responses that don't match an IP
# address that belongs to the port from which they originate.
# Note: This prevents the VMs attached to this agent from spoofing,
# it doesn't protect them from other devices which have the
# capability to spoof (e.g. bare metal or VMs attached to agents
# without this flag set to True).
# Requires a version of OVS that can match ARP headers.
#
# prevent_arp_spoofing = False

# (BoolOpt) Set or un-set the don't fragment (DF) bit on outgoing
# IP packets carrying GRE/VXLAN tunnels. The default value is True.
#
# dont_fragment = True

# (BoolOpt) Set to True on L2 agents to enable support

# for distributed virtual routing.

#

enable_distributed_routing = False

# (IntOpt) Set new timeout in seconds for new rpc calls after agent
# receives SIGTERM. If value is set to 0, rpc timeout won't be
# changed.
#
# quitting_rpc_timeout = 10

[securitygroup]

# Firewall driver for realizing neutron security group function.


# Example: firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

# Controls if neutron security group is enabled or not.
# It should be false when you use nova security group.
# enable_security_group = True

dnsmasq config file – MTU settings

/etc/neutron/dnsmasq-neutron.conf

# DHCP option 26 sets the interface MTU inside the guest; forcing
# 1380 leaves headroom for the VXLAN/GRE encapsulation overhead on
# a 1500-byte physical network
dhcp-option-force=26,1380
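The effect can be checked from inside a guest that obtained its lease via this dnsmasq instance (a sketch; the interface name depends on the image):

# Inside the VM: the DHCP-assigned MTU should show up on the interface
$ ip link show eth0 | grep mtu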


Nodes

L3 agent configuration file

/etc/neutron/l3_agent.ini

[DEFAULT]

# Print more verbose output (set logging level to INFO instead of
# default WARNING level).
verbose = True

# Show debugging output in log (sets DEBUG log level output)

# debug = False

# L3 requires that an interface driver be set. Choose the one that
# best matches your plugin.
# Example of interface_driver option for OVS based plugins (OVS,
# Ryu, NEC) that supports L3 agent
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

# Use veth for an OVS interface or not.

# Support kernels with limited namespace support

# (e.g. RHEL 6.5) so long as ovs_use_veth is set to True.

# ovs_use_veth = False

# Example of interface_driver option for LinuxBridge
# interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver

# Allow overlapping IP (Must have kernel build with

# CONFIG_NET_NS=y and iproute2 package that supports namespaces).

# This option is deprecated and will be removed in a future

# release, at which point the old behavior

# of use_namespaces = True will be enforced.

use_namespaces = True

# If use_namespaces is set as False then the agent can only
# configure one router.
# This is done by setting the specific router_id.
# router_id =

# When external_network_bridge is set, each L3 agent can be

# associated with no more than one external network. This value

# should be set to the UUID of that external network. To allow L3

# agent support multiple external networks, both the

# external_network_bridge and gateway_external_network_id must be

# left empty.

# gateway_external_network_id =

# With IPv6, the network used for the external gateway does not

# need to have an associated subnet, since the automatically


# assigned link-local address (LLA) can be used. However, an IPv6

# gateway address is needed for use as the next-hop for the

# default route. If no IPv6 gateway address is configured here,

# (and only then) the neutron router will be configured to get

# its default route from router advertisements (RAs) from the

# upstream router; in which case the upstream router must also be

# configured to send these RAs.

# The ipv6_gateway, when configured, should be the LLA of the

# interface on the upstream router. If a next-hop using a global

# unique address (GUA) is desired, it needs to be done via a

# subnet allocated to the network and not through this parameter.

# ipv6_gateway =

# Indicates that this L3 agent should also handle routers that do

# not have an external network gateway configured. This option

# should be True only for a single agent in a Neutron deployment,

# and may be False for all agents if all routers must have an
# external network gateway
handle_internal_only_routers = True

# Name of bridge used for external network traffic. This should
# be set to empty value for the linux bridge. When this parameter
# is set, each L3 agent can be associated with no more than one
# external network.
external_network_bridge = br-ex

# TCP Port used by Neutron metadata server

# metadata_port = 9697

# Send this many gratuitous ARPs for HA setup. Set it below or
# equal to 0 to disable this feature.
# send_arp_for_ha = 3

# seconds between re-sync routers' data if needed

# periodic_interval = 40

# seconds to start to sync routers' data after

# starting agent

# periodic_fuzzy_delay = 5

# enable_metadata_proxy, which is true by default, can be set to
# False if the Nova metadata server is not available
enable_metadata_proxy = True

# Iptables mangle mark used to mark metadata valid requests

# metadata_access_mark = 0x1

# Iptables mangle mark used to mark ingress from external network

# external_ingress_mark = 0x2


# router_delete_namespaces, which is false by default, can be set
# to True if namespaces can be deleted cleanly on the host running
# the L3 agent.
# Do not enable this until you understand the problem with the
# Linux iproute utility mentioned in
# https://bugs.launchpad.net/neutron/+bug/1052535 and you are sure
# that your version of iproute does not suffer from the problem.
# If True, namespaces will be deleted when a router is destroyed.
router_delete_namespaces = False

# Timeout for ovs-vsctl commands.
# If the timeout expires, ovs commands will fail with ALARMCLOCK
# error.
# ovs_vsctl_timeout = 10

# The working mode for the agent. Allowed values are:
# - legacy: this preserves the existing behavior where the L3
#   agent is deployed on a centralized networking node to provide
#   L3 services like DNAT, and SNAT. Use this mode if you do not
#   want to adopt DVR.
# - dvr: this mode enables DVR functionality, and must be used for
#   an L3 agent that runs on a compute host.
# - dvr_snat: this enables centralized SNAT support in conjunction
#   with DVR. This mode must be used for an L3 agent running on a
#   centralized node (or in single-host deployments, e.g. devstack).
agent_mode = legacy

# Location to store keepalived and all HA configurations

# ha_confs_path = $state_path/ha_confs

# VRRP authentication type AH/PASS

# ha_vrrp_auth_type = PASS

# VRRP authentication password

# ha_vrrp_auth_password =

# The advertisement interval in seconds

# ha_vrrp_advert_int = 2

allow_automatic_l3agent_failover = False


DHCP agent configuration file

/etc/neutron/dhcp_agent.ini

[DEFAULT]

# Print more verbose output (set logging level to INFO instead of
# default WARNING level).
verbose = True

# Show debugging output in log (sets DEBUG log level output)

# debug = False

# The DHCP agent will resync its state with Neutron to recover
# from any transient notification or rpc errors. The interval is
# the number of seconds between attempts.
# resync_interval = 5

# The DHCP agent requires an interface driver be set. Choose the
# one that best matches your plugin.
# Example of interface_driver option for OVS based plugins (OVS,
# Ryu, NEC, NVP, BigSwitch/Floodlight)
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

# Name of Open vSwitch bridge to use

ovs_integration_bridge = br-int

# Use veth for an OVS interface or not.

# Support kernels with limited namespace support

# (e.g. RHEL 6.5) so long as ovs_use_veth is set to True.

# ovs_use_veth = False

# Example of interface_driver option for LinuxBridge
# interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver

# The agent can use other DHCP drivers. Dnsmasq is the simplest
# and requires no additional setup of the DHCP server.
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq

# Allow overlapping IP (Must have kernel build with CONFIG_NET_NS=y
# and iproute2 package that supports namespaces). This option is
# deprecated and will be removed in a future release, at which
# point the old behavior of use_namespaces = True will be
# enforced.
use_namespaces = True

# The DHCP server can assist with providing metadata support on

# isolated networks. Setting this value to True will cause the

# DHCP server to append specific host routes to the DHCP request.


# The metadata service will only be activated when the subnet

# does not contain any router port. The guest instance must be

# configured to request host routes via DHCP (Option 121).

enable_isolated_metadata = True

# Allows for serving metadata requests coming from a dedicated

# metadata access network whose cidr is 169.254.169.254/16 (or

# larger prefix), and is connected to a Neutron router from which

# the VMs send metadata request. In this case DHCP Option 121

# will not be injected in VMs, as they will be able to reach

# 169.254.169.254 through a router.

# This option requires enable_isolated_metadata = True

enable_metadata_network = False

# Number of threads to use during sync process. Should not exceed

# connection pool size configured on server.

# num_sync_threads = 4

# Location to store DHCP server config files

# dhcp_confs = $state_path/dhcp

# Domain to use for building the hostnames

# dhcp_domain = openstacklocal

# Override the default dnsmasq settings with this file

dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf

# Comma-separated list of DNS servers which will be used by

# dnsmasq as forwarders.

dnsmasq_dns_servers = 8.8.8.8,8.8.4.4

# Limit number of leases to prevent a denial-of-service.

# dnsmasq_lease_max = 16777216

# Location to DHCP lease relay UNIX domain socket

# dhcp_lease_relay_socket = $state_path/dhcp/lease_relay

# Use broadcast in DHCP replies

# dhcp_broadcast_reply = False

# dhcp_delete_namespaces, which is false by default, can be set

# to True if namespaces can be deleted cleanly on the host

# running the dhcp agent.

# Do not enable this until you understand the problem with the

# Linux iproute utility mentioned in

# https://bugs.launchpad.net/neutron/+bug/1052535 and

# you are sure that your version of iproute does not suffer from

# the problem.

# If True, namespaces will be deleted when a dhcp server is
# disabled.
dhcp_delete_namespaces = False

# Timeout for ovs-vsctl commands.


# If the timeout expires, ovs commands will fail with ALARMCLOCK
# error.
# ovs_vsctl_timeout = 10


Metadata agent configuration file

/etc/neutron/metadata_agent.ini

[DEFAULT]

# Print more verbose output (set logging level to INFO instead of
# default WARNING level).
verbose = True

# Show debugging output in log (sets DEBUG log level output)

# debug = True

# The Neutron user information for accessing the Neutron API.

auth_uri = http://controller1:5000

auth_url = http://controller1:35357

auth_region = regionOne

auth_plugin = password

project_domain_id = default

user_domain_id = default

project_name = service

username = neutron

password = asd08f756ae0ftya

# Turn off verification of the certificate for ssl

# auth_insecure = False

# Certificate Authority public key (CA cert) file for ssl

# auth_ca_cert =

# Network service endpoint type to pull from the keystone catalog

# endpoint_type = adminURL

# IP address used by Nova metadata server

nova_metadata_ip = controller1

# TCP Port used by Nova metadata server

nova_metadata_port = 8775

# Which protocol to use for requests to Nova metadata server,
# http or https
nova_metadata_protocol = http

# Whether insecure SSL connection should be accepted for Nova
# metadata server requests
# nova_metadata_insecure = False

# Client certificate for nova api, needed when nova api requires
# client certificates
# nova_client_cert =

# Private key for nova client certificate

# nova_client_priv_key =


# When proxying metadata requests, Neutron signs the Instance-ID
# header with a shared secret to prevent spoofing. You may select
# any string for a secret, but it must match here and in the
# configuration used by the Nova Metadata Server. NOTE: Nova uses
# the same config key, but in [neutron] section.
metadata_proxy_shared_secret = spCRNxYrm4sLvfJb
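The matching Nova side, for reference, is a sketch of the [neutron] section in nova.conf on the controller; the secret must be identical:

[neutron]
service_metadata_proxy = True
metadata_proxy_shared_secret = spCRNxYrm4sLvfJb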

# Location of Metadata Proxy UNIX domain socket

# metadata_proxy_socket = $state_path/metadata_proxy

# Metadata Proxy UNIX domain socket mode, 3 values allowed:

# 'deduce': deduce mode from metadata_proxy_user/group values,

# 'user': set metadata proxy socket mode to 0o644, to use when

# metadata_proxy_user is agent effective user or root,

# 'group': set metadata proxy socket mode to 0o664, to use when

# metadata_proxy_group is agent effective group,

# 'all': set metadata proxy socket mode to 0o666, to use otherwise.

# metadata_proxy_socket_mode = deduce

# Number of separate worker processes for metadata server.
# Defaults to half the number of CPU cores
# metadata_workers =

# Number of backlog requests to configure the metadata server
# socket with
metadata_backlog = 4096

# URL to connect to the cache backend.
# default_ttl=0 parameter will cause cache entries to never expire.
# Otherwise default_ttl specifies time in seconds a cache entry is
# valid for.
# No cache is used in case no value is passed.
# cache_url = memory://?default_ttl=5


ML2 plugin configuration file

/etc/neutron/plugins/ml2_conf.ini

### Settings identical to the controller equivalent, with the
### following additions:
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

[ovs]

local_ip = 10.0.1.11

enable_tunneling = True

[agent]

tunnel_types = vxlan

OVS plugin configuration file

/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini

### Identical to the controller equivalent


OpenStack network architecture (sources: openstack.org)

‘Global’ network architecture

OpenStack ‘global’ network architecture – source: openstack.org

API network – not connected to Nodes (not interesting for our topic)

Management network – not part of the OpenStack itself, not available to VMs (not interesting for our topic)

External / public flat network – L3 routing, connection to the internet

Guest network – using any segmentation (VLAN, GRE, VXLAN)

In the following, only Guest (VLAN or Tunnel) and External networks are described.


‘Operational’ network architecture

OpenStack network architecture – source: openstack.org

Instances = VMs

Firewall = iptables

Switch = Open vSwitch

Router SNAT / DNAT = Neutron router


Network Node

(source: openstack.org)



Compute Node

(source: openstack.org)



Network and Compute Node combined into a single machine – source: openstack.org, edited


External network

Create

Manage (update) – what an admin can do in the dashboard & in the console

Allocation – routing on the host level

Guest network

Handling described in section OpenStack commands for Neutron checking.


Network traffic schemes

VM <-> VM

Setup

Project network ‘private1’
  o Network e.g. 192.168.1.0/24
  o Gateway PRIVATE_1_GATEWAY, e.g. 192.168.1.1, with MAC address PRIVATE_1_GATEWAY_MAC

Project network ‘private2’
  o Network e.g. 192.168.2.0/24
  o Gateway PRIVATE_2_GATEWAY, e.g. 192.168.2.1, with MAC address PRIVATE_2_GATEWAY_MAC

Compute node ‘compute1’
  o Instance ‘vm1’: private address VM_1_PRIVATE_IP (private1 network), e.g. 192.168.1.3; MAC address VM_1_MAC

Compute node ‘compute2’
  o Instance ‘vm2’: private address VM_2_PRIVATE_IP (private2 network), e.g. 192.168.2.3; MAC address VM_2_MAC

Router ‘router1’
  o interface address on private1 network: ROUTER_PRIVATE_1_IP (== PRIVATE_1_GATEWAY)
  o interface address on private2 network: ROUTER_PRIVATE_2_IP (== PRIVATE_2_GATEWAY)

Network node ‘network1’


Use case

The machine vm1 sends a packet to vm2 using its private address VM_2_PRIVATE_IP.

Notes:

This setup only describes what is important for the use case. Obviously, some external connectivity to the machines has to be available; this is taken care of by both of them having another interface configured, connected to a router that is connected to the external network.

Instances might happen to be scheduled on the same compute node. If they are on the same one, no tunneling is done.

o the tunnel is not even created – it is established dynamically when first needed.

If the instances are on the same tenant network, there is no routing involved, but they are still generally on different nodes, so tunneling is involved.

Even if instances are on the same network and on the same compute node, DHCP requests for the particular network can be handled by a dhcp-agent from a different node.

In the following, a deployment with a dedicated network node is presented. A deployment with the compute node and network node bundled together does not differ from this when routing (L3 routing) is involved, because the two instances and the router can truly reside on three different nodes, and only the l3-agent service is used on the network1 node.
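To see where the scheduler actually placed the instances (and hence whether tunneling applies), the hypervisor host can be read from the instance details (a sketch; admin credentials are needed for this field):

# Check which compute node hosts each instance (admin only)
$ nova show vm1 | grep -i hypervisor_hostname
$ nova show vm2 | grep -i hypervisor_hostname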

compute1

The vm1 tap interface (1) forwards the packet to the Linux bridge qbr.


# vm1 tap interface

$ ifconfig -a | grep -A5 tap23bc0b1f-a1 # 23bc... is port id

tap23bc0b1f-a1 Link encap:Ethernet HWaddr fe:16:3e:0d:ea:b0

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

RX packets:4551 errors:0 dropped:0 overruns:0 frame:0

TX packets:29688 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:500

RX bytes:418474 (408.6 KiB) TX bytes:37616758 (35.8 MiB)

# linux bridge qbr in interface list

$ ifconfig -a | grep -A5 qbr23bc0b1f-a1 # 23bc... is port id

qbr23bc0b1f-a1 Link encap:Ethernet HWaddr 5a:00:22:a4:95:e1

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

RX packets:83 errors:0 dropped:0 overruns:0 frame:0

TX packets:0 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:0

RX bytes:3360 (3.2 KiB) TX bytes:0 (0.0 B)

# the bridge shown in brctl show output
$ brctl show | grep qbr23bc0b1f-a1
bridge name       bridge id            STP enabled   interfaces
qbr23bc0b1f-a1    8000.5a0022a495e1    no            qvb23bc0b1f-a1
                                                     tap23bc0b1f-a1

# qvb is the ‘Linux Bridge part’ of the Linux Bridge <-> OVS veth
# pair qvb <-> qvo:
$ ifconfig -a | grep -A5 qv[bo]23bc0b1f-a1

qvb23bc0b1f-a1 Link encap:Ethernet HWaddr 5a:00:22:a4:95:e1

UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1

RX packets:29740 errors:0 dropped:0 overruns:0 frame:0

TX packets:4591 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:1000

RX bytes:37621347 (35.8 MiB) TX bytes:421205 (411.3 KiB)

--

qvo23bc0b1f-a1 Link encap:Ethernet HWaddr 6e:e4:7d:0c:6b:a3

UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1

RX packets:4591 errors:0 dropped:0 overruns:0 frame:0

TX packets:29740 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:1000

RX bytes:421205 (411.3 KiB) TX bytes:37621347 (35.8 MiB)

The packet contains destination MAC address PRIVATE_1_GATEWAY_MAC because the destination resides on another network.

Security group rules (2) on the Linux bridge qbr handle state tracking for the packet.

(security group rules implementation shown in forthcoming sections).

The Linux bridge qbr forwards the packet to the Open vSwitch integration bridge br-int.

The Open vSwitch integration bridge br-int adds the internal tag for private1.

For VLAN project networks:


The Open vSwitch integration bridge br-int forwards the packet to the Open vSwitch VLAN bridge br-vlan.

The Open vSwitch VLAN bridge br-vlan replaces the internal tag with the actual VLAN tag of private1.

The Open vSwitch VLAN bridge br-vlan forwards the packet to network1 via the VLAN interface.

For VXLAN and GRE project networks:

The Open vSwitch integration bridge br-int forwards the packet to the Open vSwitch tunnel bridge br-tun.

$ ovs-vsctl show

Bridge br-tun

fail_mode: secure

Port patch-int

Interface patch-int

type: patch

options: {peer=patch-tun}

Port br-tun

Interface br-tun

type: internal

Port "vxlan-0a00010c"

Interface "vxlan-0a00010c"

type: vxlan

options: {df_default="true", in_key=flow, local_ip="10.0.1.11", out_key=flow, remote_ip="10.0.1.12"}

The Open vSwitch tunnel bridge br-tun wraps the packet in a VXLAN or GRE tunnel and adds a tag to identify private1.

The Open vSwitch tunnel bridge br-tun forwards the packet to network1 via the tunnel interface.

$ ovs-ofctl show br-tun

$ ovs-ofctl dump-flows br-tun

network1

For VLAN project networks:

The VLAN interface forwards the packet to the Open vSwitch VLAN bridge br-vlan.

The Open vSwitch VLAN bridge br-vlan forwards the packet to the Open vSwitch integration bridge br-int.

The Open vSwitch integration bridge br-int replaces the actual VLAN tag of private1 with the internal tag.

For VXLAN and GRE project networks:

The tunnel interface forwards the packet to the Open vSwitch tunnel bridge br-tun.

The Open vSwitch tunnel bridge br-tun unwraps the packet and adds the internal tag for private1.

The Open vSwitch tunnel bridge br-tun forwards the packet to the Open vSwitch integration bridge br-int.


# The same rules, only for different OVS port

$ ovs-ofctl show br-tun

$ ovs-ofctl dump-flows br-tun

The Open vSwitch integration bridge br-int forwards the packet to the qr interface (3) in the router namespace qrouter on private1.

$ ovs-vsctl show

...

Bridge br-int

...

# We can identify the correct router by its interfaces

# which are shown in neutron (API / dashboard)

Port "qr-08c54a3d-d7"

tag: 12

Interface "qr-08c54a3d-d7"

type: internal

Port "qr-9e3fc2ff-c7"

tag: 11

Interface "qr-9e3fc2ff-c7"

type: internal

Port "qr-e5c327d2-0d"

tag: 11

Interface "qr-e5c327d2-0d"

type: internal

$ ip netns list

qrouter-2b881a48-6a45-4395-92a8-b7060aea2f67

qrouter-7c5068d8-7d23-46f3-89d6-64c3345609c2

# The ids here are ids of the router in neutron

# The namespace contains both interfaces to both virtual networks.

$ ip netns exec qrouter-2b881a48-6a45-4395-92a8-b7060aea2f67 ifconfig -a

# This is the one on private2

qr-08c54a3d-d7 Link encap:Ethernet HWaddr fa:16:3e:30:52:b9

inet addr:192.168.2.1 Bcast:192.168.2.255

Mask:255.255.255.0

inet6 addr: fe80::f816:3eff:fe30:52b9/64 Scope:Link

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

RX packets:2119 errors:0 dropped:0 overruns:0 frame:0

TX packets:701 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:0

RX bytes:505820 (493.9 KiB) TX bytes:57269 (55.9 KiB)

# This is the one on private1

qr-9e3fc2ff-c7 Link encap:Ethernet HWaddr fa:16:3e:66:5f:71

inet addr:192.168.1.254 Bcast:192.168.1.255

Mask:255.255.255.0

inet6 addr: fe80::f816:3eff:fe66:5f71/64 Scope:Link

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

RX packets:503 errors:0 dropped:0 overruns:0 frame:0


TX packets:102 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:0

RX bytes:45279 (44.2 KiB) TX bytes:9168 (8.9 KiB) This qr

interface contains ROUTER_PRIVATE_1_IP.

This qr interface contains the private1 gateway IP address ROUTER_PRIVATE_1_IP.

This qr interface contains the private2 gateway IP address ROUTER_PRIVATE_2_IP.

The router namespace qrouter routes the packet to the qr interface on private2 (4).

$ ip netns exec qrouter-2b881a48-6a45-4395-92a8-b7060aea2f67 route

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.1.0     *               255.255.255.0   U     0      0        0 qr-9e3fc2ff-c7
192.168.2.0     *               255.255.255.0   U     0      0        0 qr-08c54a3d-d7

The router namespace qrouter then forwards the packet to the Open vSwitch integration bridge br-int.

The Open vSwitch integration bridge br-int adds the internal tag for private2.

For VLAN project networks:

The Open vSwitch integration bridge br-int forwards the packet to the Open vSwitch VLAN bridge br-vlan.

The Open vSwitch VLAN bridge br-vlan replaces the internal tag with the actual VLAN tag of private2.

The Open vSwitch VLAN bridge br-vlan forwards the packet to compute2 via the VLAN interface.

For VXLAN and GRE project networks:

The Open vSwitch integration bridge br-int forwards the packet to the Open vSwitch tunnel bridge br-tun.

The Open vSwitch tunnel bridge br-tun wraps the packet in a VXLAN or GRE tunnel and adds a tag to identify private2.

The Open vSwitch tunnel bridge br-tun forwards the packet to compute2 via the tunnel interface.

compute2

For VLAN project networks:

The VLAN interface forwards the packet to the Open vSwitch VLAN bridge br-vlan.

The Open vSwitch VLAN bridge br-vlan forwards the packet to the Open vSwitch integration bridge br-int.

The Open vSwitch integration bridge br-int replaces the actual VLAN tag of private2 with the internal tag.


For VXLAN and GRE project networks:

The tunnel interface forwards the packet to the Open vSwitch tunnel bridge br-tun.

The Open vSwitch tunnel bridge br-tun unwraps the packet and adds the internal tag for private2.

The Open vSwitch tunnel bridge br-tun forwards the packet to the Open vSwitch integration bridge br-int.

The Open vSwitch integration bridge br-int forwards the packet to the Linux bridge qbr.

Security group rules (5) on the Linux bridge qbr handle firewalling and state tracking for the packet.

The Linux bridge qbr forwards the packet to the tap interface (6) on vm2.
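All per-instance devices (qbr, qvb, qvo, tap) and security-group chains embed the first characters of the Neutron port UUID, which makes them easy to track down. A sketch, with the IDs purely illustrative:

$ neutron port-list | grep 192.168.2.4   # find the port UUID of vm2
$ brctl show | grep qbr23bc0b1f          # the matching Linux bridge
$ iptables -S | grep 23bc0b1f            # the matching security group chains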


VM <-> VM – source: openstack.org, edited


VM -> Outside world (North->South)

Setup

External network ‘public’
  o Network e.g. 203.0.113.0/24
  o IP address allocation pool PUBLIC_ALLOCATION_POOL, e.g. from 203.0.113.101 to 203.0.113.200

Project network ‘private1’
  o Network e.g. 192.168.1.0/24
  o Gateway PRIVATE_GATEWAY, e.g. 192.168.1.1 with MAC address PRIVATE_GATEWAY_MAC

Compute node ‘compute1’
  o Instance ‘vm1’
     private address: VM_PRIVATE_IP (private1 network)
     MAC address: VM_MAC

Router ‘router1’
  o interface address on public network: ROUTER_PUBLIC_IP (from PUBLIC_ALLOCATION_POOL), e.g. 203.0.113.101
  o interface address on private1 network: ROUTER_PRIVATE_IP (== PRIVATE_GATEWAY)

Network node ‘network1’
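The same setup can be cross-checked from the Neutron API; a quick sketch (output omitted):

$ neutron net-list
$ neutron subnet-list
$ neutron router-list
$ neutron router-port-list router1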

Use case

The machine vm1 sends a packet to a host on the public network.

Notes:

The instance and the router can reside on the same machine (when combined compute/network nodes are used); in that case, no tunneling is involved.

compute1

The vm1 tap interface (1) forwards the packet to the Linux bridge qbr. The packet contains destination MAC address PRIVATE_GATEWAY_MAC because the destination resides on another network.

Security group rules (2) on the Linux bridge qbr handle state tracking for the packet.

$ iptables -L -v

# Egress rules

Chain neutron-openvswi-o23bc0b1f-a (2 references)
 pkts bytes target                        prot opt in  out  source    destination
    2   656 RETURN                        udp  --  any any  anywhere  anywhere   udp spt:bootpc dpt:bootps /* Allow DHCP client traffic. */
 3040  231K neutron-openvswi-s23bc0b1f-a  all  --  any any  anywhere  anywhere
    0     0 DROP                          udp  --  any any  anywhere  anywhere   udp spt:bootps dpt:bootpc /* Prevent DHCP spoofing by VM. */
 2746  208K RETURN                        all  --  any any  anywhere  anywhere   state RELATED,ESTABLISHED /* Direct packets associated with a known session to the RETURN chain. */
  294 22536 RETURN                        all  --  any any  anywhere  anywhere
    0     0 DROP                          all  --  any any  anywhere  anywhere   state INVALID /* Drop packets that appear related to an existing connection (e.g. TCP ACK/FIN) but do not have an entry in conntrack. */
    0     0 neutron-openvswi-sg-fallback  all  --  any any  anywhere  anywhere   /* Send unmatched traffic to the fallback chain. */

The Linux bridge qbr forwards the packet to the Open vSwitch integration bridge br-int.

The Open vSwitch integration bridge br-int adds the internal tag for private1.

For VLAN project networks:

The Open vSwitch integration bridge br-int forwards the packet to the Open vSwitch VLAN bridge br-vlan.

The Open vSwitch VLAN bridge br-vlan replaces the internal tag with the actual VLAN tag of private1.

The Open vSwitch VLAN bridge br-vlan forwards the packet to network1 via the VLAN interface.

For VXLAN and GRE project networks:

The Open vSwitch integration bridge br-int forwards the packet to the Open vSwitch tunnel bridge br-tun.

The Open vSwitch tunnel bridge br-tun wraps the packet in a VXLAN or GRE tunnel and adds a tag to identify private1.

The Open vSwitch tunnel bridge br-tun forwards the packet to network1 via the tunnel interface.

network1

For VLAN project networks:

The VLAN interface forwards the packet to the Open vSwitch VLAN bridge br-vlan.

The Open vSwitch VLAN bridge br-vlan forwards the packet to the Open vSwitch integration bridge br-int.

The Open vSwitch integration bridge br-int replaces the actual VLAN tag of private1 with the internal tag.

For VXLAN and GRE project networks:

The tunnel interface forwards the packet to the Open vSwitch tunnel bridge br-tun.

The Open vSwitch tunnel bridge br-tun unwraps the packet and adds the internal tag for private1.

The Open vSwitch tunnel bridge br-tun forwards the packet to the Open vSwitch integration bridge br-int.


The Open vSwitch integration bridge br-int forwards the packet to the qr interface (3) in the router namespace qrouter. The qr interface contains ROUTER_PRIVATE_IP.

$ ip netns exec qrouter-7c5068d8-7d23-46f3-89d6-64c3345609c2 ifconfig qr-e5c327d2-0d

qr-e5c327d2-0d Link encap:Ethernet HWaddr fa:16:3e:3f:18:b4
    inet addr:192.168.1.1 Bcast:192.168.1.255 Mask:255.255.255.0
    inet6 addr: fe80::f816:3eff:fe3f:18b4/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    RX packets:2775 errors:0 dropped:0 overruns:0 frame:0
    TX packets:27743 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0
    RX bytes:244400 (238.6 KiB) TX bytes:37434692 (35.7 MiB)

The iptables service (4) performs SNAT on the packet, rewriting the source address to that of the qg interface (5).

$ ip netns exec qrouter-7c5068d8-7d23-46f3-89d6-64c3345609c2 iptables -t nat -L

Chain neutron-l3-agent-float-snat (1 references)
target                       prot opt source                           destination
SNAT                         all  --  host-192-168-1-3.openstacklocal  anywhere     to:192.168.150.243

Chain neutron-l3-agent-snat (1 references)
target                       prot opt source    destination
neutron-l3-agent-float-snat  all  --  anywhere  anywhere
SNAT                         all  --  anywhere  anywhere     to:203.0.113.101
SNAT                         all  --  anywhere  anywhere     mark match ! 0x2 ctstate DNAT to:203.0.113.101

Chain neutron-postrouting-bottom (1 references)
target                       prot opt source    destination
neutron-l3-agent-snat        all  --  anywhere  anywhere     /* Perform source NAT on outgoing traffic. */
neutron-vpnaas-a-snat        all  --  anywhere  anywhere     /* Perform source NAT on outgoing traffic. */

Chain neutron-vpnaas-a-OUTPUT (1 references)
target                       prot opt source    destination

Chain neutron-vpnaas-a-POSTROUTING (1 references)
target                       prot opt source    destination
ACCEPT                       all  --  anywhere  anywhere     ! ctstate DNAT

Chain neutron-vpnaas-a-PREROUTING (1 references)
target                       prot opt source    destination
REDIRECT                     tcp  --  anywhere  169.254.169.254   tcp dpt:http redir ports 9697

Chain neutron-vpnaas-a-float-snat (1 references)
target                       prot opt source    destination

Chain neutron-vpnaas-a-snat (1 references)
target                       prot opt source    destination
neutron-vpnaas-a-float-snat  all  --  anywhere  anywhere
SNAT                         all  --  anywhere  anywhere     to:203.0.113.101
SNAT                         all  --  anywhere  anywhere     mark match ! 0x2 ctstate DNAT to:203.0.113.101
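The REDIRECT rule in neutron-vpnaas-a-PREROUTING above is the metadata hook: instance requests to 169.254.169.254:80 are redirected to the metadata proxy listening on port 9697 inside the namespace. A sketch to confirm the listener (PID illustrative):

$ ip netns exec qrouter-7c5068d8-7d23-46f3-89d6-64c3345609c2 netstat -lnpt | grep 9697
tcp   0   0 0.0.0.0:9697   0.0.0.0:*   LISTEN   1377/python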

The qg interface contains ROUTER_PUBLIC_IP.

$ ip netns

qrouter-7c5068d8-7d23-46f3-89d6-64c3345609c2

$ ip netns exec qrouter-7c5068d8-7d23-46f3-89d6-64c3345609c2 ifconfig qg-fe71d0cc-ea

qg-fe71d0cc-ea Link encap:Ethernet HWaddr fa:16:3e:87:4c:b2
    inet addr:203.0.113.101 Bcast:203.0.113.255 Mask:255.255.255.0
    inet6 addr: fe80::f816:3eff:fe87:4cb2/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:1440 Metric:1
    RX packets:32046 errors:0 dropped:0 overruns:0 frame:0
    TX packets:2824 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0
    RX bytes:37615279 (35.8 MiB) TX bytes:244589 (238.8 KiB)

The router namespace qrouter forwards the packet to the Open vSwitch integration bridge br-int via the qg interface.

The Open vSwitch integration bridge br-int forwards the packet to the Open vSwitch external bridge br-ex.

The Open vSwitch external bridge br-ex forwards the packet to public via the external interface.

$ ovs-vsctl show
2b86e0d4-7022-464d-bfca-778e4c09ce4e
    Bridge br-ex                  # external bridge
        Port phy-br-ex            # patch to internal bridge
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port "eth2"               # external "physical" interface
            Interface "eth2"


VM -> Outside world – source: openstack.org, edited

Outside world -> VM (Floating IP)

Setup

External network ‘public’
  o Network e.g. 203.0.113.0/24
  o IP address allocation pool PUBLIC_ALLOCATION_POOL, e.g. from 203.0.113.101 to 203.0.113.200

Project network ‘private1’
  o Network e.g. 192.168.1.0/24
  o Gateway PRIVATE_GATEWAY, e.g. 192.168.1.1 with MAC address PRIVATE_GATEWAY_MAC

Compute node ‘compute1’
  o Instance ‘vm1’
     private address: VM_PRIVATE_IP (private1 network), e.g. 192.168.1.3
     MAC address: VM_MAC
     floating IP address: VM_FLOATING_IP, e.g. 203.0.113.200

Router ‘router1’
  o interface address on public network: ROUTER_PUBLIC_IP (from PUBLIC_ALLOCATION_POOL), e.g. 203.0.113.101
  o interface address on private1 network: ROUTER_PRIVATE_IP (== PRIVATE_GATEWAY)

Network node ‘network1’

Use case

An external host sends a packet to VM_FLOATING_IP.

network1

The external interface forwards the packet to the Open vSwitch external bridge br-ex.

The Open vSwitch external bridge br-ex forwards the packet to the Open vSwitch integration bridge br-int via a patch:

$ ovs-vsctl show
2b86e0d4-7022-464d-bfca-778e4c09ce4e
    Bridge br-ex                  # external bridge
        Port phy-br-ex            # patch to internal bridge
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port "eth2"               # external "physical" interface
            Interface "eth2"
    ...
    Bridge br-int                 # internal bridge
        Port br-int               # internal bridge interface
            Interface br-int
                type: internal
        Port int-br-ex            # patch to external bridge
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}

The Open vSwitch integration bridge br-int forwards the packet to the qg interface (1) in the router namespace qrouter.

The qg interface contains ROUTER_PUBLIC_IP.

$ ip netns


qrouter-7c5068d8-7d23-46f3-89d6-64c3345609c2

$ ip netns exec qrouter-7c5068d8-7d23-46f3-89d6-64c3345609c2 ifconfig qg-fe71d0cc-ea

qg-fe71d0cc-ea Link encap:Ethernet HWaddr fa:16:3e:87:4c:b2
    inet addr:203.0.113.101 Bcast:203.0.113.255 Mask:255.255.255.0
    inet6 addr: fe80::f816:3eff:fe87:4cb2/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:1440 Metric:1
    RX packets:32046 errors:0 dropped:0 overruns:0 frame:0
    TX packets:2824 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0
    RX bytes:37615279 (35.8 MiB) TX bytes:244589 (238.8 KiB)

The iptables service (2) performs DNAT on the packet, rewriting the destination address VM_FLOATING_IP to VM_PRIVATE_IP; the packet then leaves the namespace through the qr interface (3).

$ ip netns exec qrouter-7c5068d8-7d23-46f3-89d6-64c3345609c2 iptables -t nat -L

Chain neutron-l3-agent-OUTPUT (1 references)
target  prot opt source    destination
DNAT    all  --  anywhere  203.0.113.200   to:192.168.1.3

Chain neutron-l3-agent-POSTROUTING (1 references)
target  prot opt source    destination
ACCEPT  all  --  anywhere  anywhere        ! ctstate DNAT

Chain neutron-l3-agent-PREROUTING (1 references)
target  prot opt source    destination
DNAT    all  --  anywhere  203.0.113.200   to:192.168.1.3
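The DNAT pair corresponds one-to-one to a Neutron floating IP record, so the rules can be cross-checked against the API; a sketch with illustrative IDs:

$ neutron floatingip-list
| id      | fixed_ip_address | floating_ip_address | port_id |
| 2f3b... | 192.168.1.3      | 203.0.113.200       | 23bc... |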

The qr interface contains ROUTER_PRIVATE_IP.

$ ip netns exec qrouter-7c5068d8-7d23-46f3-89d6-64c3345609c2 ifconfig qr-e5c327d2-0d

qr-e5c327d2-0d Link encap:Ethernet HWaddr fa:16:3e:3f:18:b4
    inet addr:192.168.1.1 Bcast:192.168.1.255 Mask:255.255.255.0
    inet6 addr: fe80::f816:3eff:fe3f:18b4/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    RX packets:2775 errors:0 dropped:0 overruns:0 frame:0
    TX packets:27743 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0
    RX bytes:244400 (238.6 KiB) TX bytes:37434692 (35.7 MiB)

The router namespace qrouter forwards the packet to the Open vSwitch integration bridge br-int.

The Open vSwitch integration bridge br-int adds the internal tag for private1.

For VLAN project networks:


The Open vSwitch integration bridge br-int forwards the packet to the Open vSwitch VLAN bridge br-vlan.

The Open vSwitch VLAN bridge br-vlan replaces the internal tag with the actual VLAN tag of private1.

The Open vSwitch VLAN bridge br-vlan forwards the packet to compute1 via the VLAN interface.

For VXLAN and GRE project networks:

The Open vSwitch integration bridge br-int forwards the packet to the Open vSwitch tunnel bridge br-tun.

The Open vSwitch tunnel bridge br-tun wraps the packet in a VXLAN or GRE tunnel and adds a tag to identify private1.

The Open vSwitch tunnel bridge br-tun forwards the packet to compute1 via the tunnel interface.

compute1

For VLAN project networks:

The VLAN interface forwards the packet to the Open vSwitch VLAN bridge br-vlan.

The Open vSwitch VLAN bridge br-vlan forwards the packet to the Open vSwitch integration bridge br-int.

The Open vSwitch integration bridge br-int replaces the actual VLAN tag of private1 with the internal tag.

For VXLAN and GRE project networks:

The tunnel interface forwards the packet to the Open vSwitch tunnel bridge br-tun.

The Open vSwitch tunnel bridge br-tun unwraps the packet and adds the internal tag for private1.

The Open vSwitch tunnel bridge br-tun forwards the packet to the Open vSwitch integration bridge br-int.

The Open vSwitch integration bridge br-int forwards the packet to the Linux bridge qbr.

$ brctl show   # qbr bridge
bridge name       bridge id            STP enabled   interfaces
qbr23bc0b1f-a1    8000.5a0022a495e1    no            qvb23bc0b1f-a1
                                                     tap23bc0b1f-a1
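The qvb device above is one end of a veth pair; its qvo peer is plugged into br-int, which is how the qbr bridge reaches Open vSwitch. A quick check (port name illustrative):

$ ovs-vsctl list-ports br-int | grep qvo
qvo23bc0b1f-a1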

Security group rules (4) on the Linux bridge qbr handle firewalling and state tracking for the packet.

$ iptables -L -v

# Ingress rules

Chain neutron-openvswi-i23bc0b1f-a (1 references)
 pkts bytes target                        prot opt in  out  source                                 destination
27878   37M RETURN                        all  --  any any  anywhere                               anywhere   state RELATED,ESTABLISHED /* Direct packets associated with a known session to the RETURN chain. */
    2   730 RETURN                        udp  --  any any  host-192-168-1-2.openstacklocal        anywhere   udp spt:bootps dpt:bootpc
  109  5245 RETURN                        all  --  any any  anywhere                               anywhere
    0     0 RETURN                        tcp  --  any any  56.Red-88-12-34.staticIP.rima-tde.net  anywhere   tcp dpt:ssh
    0     0 DROP                          all  --  any any  anywhere                               anywhere   state INVALID /* Drop packets that appear related to an existing connection (e.g. TCP ACK/FIN) but do not have an entry in conntrack. */
    0     0 neutron-openvswi-sg-fallback  all  --  any any  anywhere                               anywhere   /* Send unmatched traffic to the fallback chain. */

The Linux bridge qbr forwards the packet to the tap interface (5) on vm1.

Outside world -> VM (Floating IP) – source: openstack.org, edited


Network traffic inspection, troubleshooting

Traffic tools

tcpdump

$ tcpdump -envi br-int

$ tcpdump -envi br-tun

$ ip netns exec qrouter-UUID tcpdump -i qr-63ea2815-b5 icmp

$ ip netns exec qrouter-UUID tcpdump -i qg-e7110dba-a9 icmp

$ tcpdump -eni eth0 host 192.168.122.163   # -i takes an interface; filter on the IP with 'host'

$ tcpdump -envi br-ex

$ tcpdump -i eth0 -n arp or icmp

$ tcpdump -i br-ex -n icmp

$ tcpdump -i eth0 -n icmp

$ tcpdump -i any -n icmp

$ tcpdump -i tape7110dba-a9 -n icmp

$ tcpdump -envi qvbb71536f2-dd -n arp or icmp

$ tcpdump -i eth0 -n not port 22

$ tcpdump -i eth0 -n not port 22 and not port amqp

$ tcpdump -i eth2 "not arp and not icmp and not port 22 and not port amqp"   # tcpdump takes the filter as a positional expression

tshark

similar to tcpdump

$ tshark -i eth2 -f "not arp and not icmp and not port 22 and not port amqp"

ovs-vsctl, ovs-dpctl, ovs-ofctl

$ ovs-vsctl show

$ ovs-dpctl show

$ ovs-dpctl dump-flows

$ ovs-ofctl dump-flows br-tun

$ ovs-ofctl dump-flows br-tun table=21

$ ovs-ofctl dump-flows br-int
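ovs-appctl can additionally show a bridge's learned-MAC table, which helps when the flows look right but frames still go to the wrong port (the entry below is illustrative):

$ ovs-appctl fdb/show br-int
 port  VLAN  MAC                Age
    5    11  fa:16:3e:30:52:b9    3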

DHCP state files

$ ls /var/lib/neutron/dhcp
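Each subdirectory is named after a network UUID; the host file inside is dnsmasq's MAC-to-IP mapping for that network. An illustrative peek (the entry is made up):

$ cat /var/lib/neutron/dhcp/e58741d0-60c7-457e-8780-366ecd3200aa/host
fa:16:3e:30:52:b9,host-192-168-150-3.openstacklocal,192.168.150.3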


Static tools

ip

$ ip addr

$ ip route

$ ip -d link
$ ip netns
$ ip netns exec qrouter-UUID ip a
$ ip netns exec qrouter-UUID ip link
$ ip netns exec qrouter-UUID route -n
$ ip netns exec qrouter-UUID iptables -L -t nat
$ ip netns exec qdhcp-UUID ip a
$ ip netns exec qdhcp-UUID ip link
$ ip netns exec qdhcp-UUID route -n

ifconfig

$ ifconfig -a

iptables

$ iptables -L -v
$ iptables -t nat -L
$ iptables -t nat -L | grep SNAT
$ iptables -t nat -L | grep DNAT

brctl

$ brctl show

cat /proc/<proc-id>/cmdline

e.g. for dnsmasq:

$ cat /proc/980/cmdline | tr '\0' ' '   # 980 being the dnsmasq PID; arguments in cmdline are NUL-separated

dnsmasq --no-hosts --no-resolv --strict-order --bind-interfaces
  --interface=tap6cb4855a-ce --except-interface=lo
  --pid-file=/var/lib/neutron/dhcp/e58741d0-60c7-457e-8780-366ecd3200aa/pid
  --dhcp-hostsfile=/var/lib/neutron/dhcp/e58741d0-60c7-457e-8780-366ecd3200aa/host
  --addn-hosts=/var/lib/neutron/dhcp/e58741d0-60c7-457e-8780-366ecd3200aa/addn_hosts
  --dhcp-optsfile=/var/lib/neutron/dhcp/e58741d0-60c7-457e-8780-366ecd3200aa/opts
  --dhcp-leasefile=/var/lib/neutron/dhcp/e58741d0-60c7-457e-8780-366ecd3200aa/leases
  --dhcp-range=set:tag0,192.168.150.0,static,86400s
  --dhcp-lease-max=256
  --conf-file=/etc/neutron/dnsmasq-neutron.conf
  --server=8.8.8.8 --server=8.8.4.4 --domain=openstacklocal

Logging

/var/log/neutron/*

/var/log/syslog # dnsmasq
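When the packet path itself looks fine, the agent logs are the next stop; a typical session (file names vary slightly by distribution):

$ tail -f /var/log/neutron/l3-agent.log /var/log/neutron/dhcp-agent.log
$ grep -iE 'error|trace' /var/log/neutron/*.log
$ grep dnsmasq /var/log/syslog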

Links

Troubleshooting

http://docs.openstack.org/openstack-ops/content/network_troubleshooting.html

https://www.rdoproject.org/Networking_in_too_much_detail

http://www.yet.org/2014/09/openvswitch-troubleshooting/

General

http://docs.openstack.org/admin-guide-cloud/networking.html

http://docs.openstack.org/developer/neutron/devref/layer3.html

http://docs.openstack.org/admin-guide-cloud/networking_adv-features.html#provider-networks

https://access.redhat.com/documentation/en/red-hat-enterprise-linux-openstack-platform/version-7/architecture-guide/

https://access.redhat.com/documentation/en/red-hat-enterprise-linux-openstack-platform/version-7/networking-guide/

https://www.rdoproject.org/networking/neutron-with-existing-external-network/

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/5/html/Cloud_Administrator_Guide/section_networking-scenarios.html

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Host_Configuration_and_Guest_Installation_Guide/chap-Virtualization_Host_Configuration_and_Guest_Installation_Guide-SR_IOV.html

http://docs.openstack.org/kilo/config-reference/content/neutron-conf-changes-kilo.html