OpenStack Networking Internals - First Part


DESCRIPTION

OpenStack Networking Internals - first part. Description of the virtual network infrastructure inside an OpenStack cluster. The pictures of the VNI were taken with the "Show my network state" tool: https://sites.google.com/site/showmynetworkstate/

TRANSCRIPT

Giuliano Santandrea – CIRI ICT University of Bologna

● OpenStack description
● OpenStack components and allocation
● Neutron abstractions
● The virtual network infrastructure

vif: virtual network interface
VR: virtual router
VNI: virtual network infrastructure
OVS: Open vSwitch virtual bridge
LB: Linux bridge

● OpenStack is cloud platform management software.
● A cloud platform is a cluster of machines that hosts some servers (instances): the servers are offered to the users as a "service". The user is able to create a "virtual infrastructure" composed of servers and network appliances (firewalls, routers, ...).

● The servers can be implemented as:
– VMs (KVM, VMware, ...)
– light containers (LXC, Docker, ...)
– bare metal (PXE boot, ...)

OpenStack (OS) is composed of the following components:

● Web dashboard (Horizon)

● Compute (Nova): manages the instance lifecycle

● Keystone: credentials, service catalog of all the OS services (list of REST service endpoints)

● Glance: image management. An image is a blob file containing a file system with a "pre-cooked" VM; it can be used by hypervisors to boot a new instance.

● Networking (Neutron): network management

● Block storage (Cinder): persistent storage (volumes)

● Other services:
– Object storage (Swift): distributed storage for unstructured data

[Diagram: the cluster nodes (Controller, Network node, CPU nodes 1-3) connected to the Management net, the Data net (flat), and the External net; the External net reaches the Internet.]

● Data net
● Mgmt net
● External/API net

These networks are implemented as physically separated networks.

Cesena cluster: I configured a switch with "port-based, access mode" VLANs.


Management net: allows the admin to access the cluster nodes and is used for inter-service communication. EVERY NODE IS ON THIS NET.


Data net: used for inter-VM communication. Depending on the chosen network virtualization mechanism, the packets will be VLAN-tagged packets or encapsulated packets (VXLAN, GRE).


External net: allows the VMs to access the Internet and the user to access the VMs.


On the controller node:
● Keystone
● Nova
– API: REST endpoint, receives user requests
– Scheduler: chooses a compute node
● Glance
– API
– Registry
● Neutron
– Server: REST API endpoint
– plugin: implements the VNI
● Cinder
– API, …
● message queue: middleware for inter-service communication


On the network node, Neutron runs:
• plugin: implements the VNI
• L3: creation of virtual routers
• dhcp
• metadata


In each compute node:
● Neutron
– plugin: implements the VNI
● Nova
– compute: manages the hypervisor

● Users send REST API calls to the service endpoints, using:
– the web dashboard
– CLI clients
● OS components communicate with each other using:
– message passing (an AMQP server resides in the controller node)
– REST API calls
● Some components (neutron-server, keystone, etc.) access a DB directly to save/modify their state

Sometimes the external network is not directly connected to the Internet; instead, a datacenter gateway provides the access to the Internet.

[Diagram: the private cloud's external net connects through the datacenter gateway to the public net and the Internet, from which the REST API calls arrive.]

[Diagram: a user on the Internet sends REST API calls (REST protocol) toward nova-api on the controller; the datacenter gateway port-forwards the traffic from the public net onto the external net.]

[Diagram: the user's HTTP request from the Internet reaches Horizon (an Apache2 web server) on the controller, again via port forwarding on the datacenter gateway.]

[Diagram: the user opens a VM console from the Internet using the VNC protocol; the gateway port-forwards the traffic to the xvncproxy service on the controller, which contacts the hypervisor on the compute node.]

Port forwarding on port 6080!

The hypervisor pipes the VM video output through the network

Port forwarding is implemented with an iptables DNAT rule (in this example, incoming TCP traffic on eth1 port 80 is redirected to 10.250.0.1:80):

sudo iptables -t nat -I PREROUTING -i eth1 -p tcp --dport 80 -j DNAT --to-destination 10.250.0.1:80

[Diagram: nova-compute on a CPU node talks to nova-api and the message-queue server on the controller over the management net.]

[Diagram: nova-compute on a CPU node makes a REST API call over the management net to the Glance API REST endpoint on the controller.]

[Diagram: two VMs hosted on different CPU nodes communicate with each other over the data net.]

[Diagram: the VMs on the CPU nodes reach the external net (and the Internet) through a virtual router hosted on the network node.]

[Diagram: a user on the Internet reaches a VM through the virtual router on the network node, which performs NAT/port forwarding.]

[Diagram: the virtual router on the network node performs NAT/port forwarding (NAT/floating IP) between the external net and the tenant VM.]

The VM has:
• a fixed private IP on a private tenant net
• an optional floating IP on the external network

The VR does:
• the NAT for the private IPs
• DNAT of the floating IPs

[Diagram: volume attachment. The hypervisor on the CPU node reaches the storage node over a network storage protocol (NFS, iSCSI, ...); the VM sees the disk (block device) as a local device.]

Neutron defines these network abstractions:
◦ Network – an isolated L2 network segment
◦ Subnet – an IP address block on a certain network
◦ Router – a gateway between subnets
◦ Fixed IP – an IP on a tenant network
◦ Floating IP – a mapping between an IP of the external networks and a private fixed IP
◦ Port – an attachment point to a network

Users only see these abstractions!!!

Neutron implements these abstractions inside the VNI in the cluster nodes (i.e. at the host level) so that the VMs (guest level) can see the virtual networks.

Tenant network: a network created by a cloud user. The VM takes a fixed IP from this net (not modifiable after the VM creation).
◦ Implementation detail: the VM receives the IP from a DHCP server configured to always give the same fixed IP to that VM!

Provider network: a network external to the cluster; it allows outside connectivity, passing through the network node. A VM can allocate a floating IP to gain external visibility (OpenStack maps each floating IP to the related fixed IP). Floating IPs can be deallocated.

[Diagram: the virtual network managed by Neutron reaches the provider network through a physical interface of the network node.]

They are "leaky abstractions"! For example:
◦ Net creation is limited by the effective VLAN ID availability on the physical network!
◦ During network creation you can specify low-level implementation details (such as the VLAN ID) or let OpenStack decide them for you.

Neutron components:
◦ server (REST API endpoint): receives API requests, saves all the network info in a database, instructs the agents
◦ plugin agent: implements the VNI inside the cluster node, using the specified technology (GRE tunnel, VLAN, VXLAN, ...)
◦ dhcp: implements the DHCP servers
◦ L3: implements the virtual routers
◦ Metadata: the VMs contact the metadata service at creation time

● Create an L2 network
– neutron net-create net1

● Associate an L3 subnet to the network
– neutron subnet-create net1 10.0.0.0/24 --name subnet1

● Boot a VM on that subnet
– nova boot --image img --flavor flv --nic net-id=uuid vm_name
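The router and floating IP abstractions can be exercised the same way. A minimal sketch with the neutron CLI of that era, assuming an external network named ext-net already exists (r1, FLOATINGIP_ID and VM_PORT_ID are placeholders):

● Create a virtual router, attach it to the subnet and set its gateway on the external network
– neutron router-create r1
– neutron router-interface-add r1 subnet1
– neutron router-gateway-set r1 ext-net

● Allocate a floating IP on the external network and associate it with the VM's port
– neutron floatingip-create ext-net
– neutron floatingip-associate FLOATINGIP_ID VM_PORT_ID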

A user that wants to create a VM:

1. sends a REST API call (via CLI client or web dashboard) to the Keystone REST endpoint (request + auth) and to the Nova endpoint

2. "nova-scheduler", internally, chooses the most suitable compute (CPU) node that will host the VM

3. on that CPU node, the "nova-compute" component does the following things:

1. prepares the hypervisor

2. asks Glance for the VM image

3. asks the Neutron components to allocate the VNI

4. asks Cinder to allocate the persistent block storage (volumes) for the VM

source: http://goo.gl/n3Bb5s

Network namespaces are a technology that allows separating/isolating multiple network domains inside a single host by replicating the network software stack.

A process executed in a namespace sees only specific:
◦ physical/virtual network interfaces
◦ routing/ARP tables
◦ firewall/NAT rules

You can:
◦ create a netns
◦ create a process/virtual network component inside that netns

[Diagram: on top of the hardware, the Linux kernel hosts a global namespace (created at boot time; processes usually reside here) plus network namespaces 1..N.]

You can:
• create a netns
• create a vif inside the netns
• start a Linux process
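A minimal sketch of these steps with the standard iproute2 tools (the namespace name ns1, the interface names and the address are made up for illustration):

– sudo ip netns add ns1                                # create a netns
– sudo ip link add veth0 type veth peer name veth1     # create a veth pair
– sudo ip link set veth1 netns ns1                     # move one end (a vif) inside the netns
– sudo ip netns exec ns1 ip addr add 10.0.0.1/24 dev veth1
– sudo ip netns exec ns1 ip link set veth1 up
– sudo ip netns exec ns1 bash                          # start a Linux process inside the netns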

[Diagram: same picture as above, but network namespace 1 now contains its own VIF and processes.]

Namespaces guarantee L3 isolation, so the interfaces can have overlapping IP addresses!
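A quick way to see this (a sketch; the names ns_a/ns_b and the address are made up):

– sudo ip netns add ns_a
– sudo ip netns add ns_b
– sudo ip netns exec ns_a ip addr add 10.0.0.1/24 dev lo   # same address...
– sudo ip netns exec ns_b ip addr add 10.0.0.1/24 dev lo   # ...in both namespaces, without conflict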


The virtual bridges can be connected to (physical or virtual) interfaces that reside in different namespaces: the virtual bridges act as bridges between the namespaces.

[Diagram: inside the physical host, a virtual bridge connects VIFs residing in network namespaces 1 and 2 and the PIF, so packets can flow between the namespaces and toward another physical host; without such a bridge the namespaces are completely isolated.]

In each node there is:
◦ an integration bridge
◦ a bridge for each physical network, connected to:
– the integration bridge
– the physical network interface (PIF)

[Diagram: the virtual bridges on each node. Every node has br-int and br-data; on the compute node a Linux bridge connects the VM to br-int; the network node additionally has br-ex toward the external net.]

br-data: connected to the data net


br-int: intermediate (integration) bridge; acts as the hub of a star network


br-ex: connected to the external network (present only in the network node!)

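These bridges can be inspected (and, if needed, created by hand) with the standard Open vSwitch CLI; a minimal sketch, assuming the PIF of the data net is called eth2:

– sudo ovs-vsctl list-br                 # e.g. br-int, br-data (plus br-ex on the network node)
– sudo ovs-vsctl show                    # bridges, their ports and interconnections
– sudo ovs-vsctl add-br br-data          # create a bridge
– sudo ovs-vsctl add-port br-data eth2   # attach the data-net PIF to br-data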

Advantages: namespaces allow managing multiple L3 functions at the host level in the same node.

This is a key enabler for implementing VNIs with advanced functionality: cloud users can create overlapping virtual L3 networks!
◦ Two tenants can create isolated L3 networks with the same IP addresses

The namespaces are used only in the network node (because L3 functionalities at the host level are present only inside the network node)

A new network namespace is created by OpenStack when you create…
◦ … a new virtual L2 network
◦ … a new virtual router

In the compute node:
◦ nova-compute makes a REST API call to neutron-server asking for a port allocation (and a fixed IP for the VM)
◦ neutron-agent configures the virtual bridges (br-data, br-int) via the OpenFlow protocol
◦ nova-compute boots the VM
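The effect of this configuration can be checked at the host level with the standard OVS tools (a sketch; br-int is the integration bridge described earlier):

– sudo ovs-ofctl dump-flows br-int    # OpenFlow rules installed by the agent
– sudo ovs-vsctl list-ports br-int    # ports attached to the integration bridge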

In the network node:
◦ neutron-dhcp
– creates a netns ("qdhcp-…")
– creates a vif inside that netns
– spawns a dnsmasq process (DHCP server) using that vif
◦ neutron-l3
– creates a netns for the virtual router ("qrouter-…")
– creates a vif inside that netns
– configures the routing tables inside that netns
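These namespaces can be inspected on the network node (a sketch; the uuid suffixes are placeholders):

– sudo ip netns list                                            # e.g. qdhcp-<net-uuid>, qrouter-<router-uuid>
– sudo ip netns exec qrouter-<router-uuid> ip addr              # the router's vifs
– sudo ip netns exec qrouter-<router-uuid> ip route             # its routing table
– sudo ip netns exec qrouter-<router-uuid> iptables -t nat -L   # NAT/floating-IP rules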

In the configuration files Neutron associates each "physical network" with a virtual bridge. For example:
◦ physnet1: br-data
◦ extphysnet: br-ex

The admin, during the creation of a provider network, must specify the associated physical network.
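A sketch of how this mapping typically looks in the OVS agent configuration file (file location and section name are assumptions; the values mirror the example above):

[ovs]
bridge_mappings = physnet1:br-data,extphysnet:br-ex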

Use case: a user
◦ creates a private user network (10.0.0.0/24)
◦ boots a VM on this network

source ~/devstack/openrc demo demo pass && nova boot --key-name hc01 --image cirros-0.3.1-x86_64-disk --flavor 2 --nic net-id=61821a27-69b8-43c2-afa8-633304d8be50,v4-fixed-ip=10.0.0.66 myserver

The admin can even specify the VLAN ID used on the physical data network (even outside the VLAN ID pool specified in the configuration file).

The cloud user is not allowed to do this (OpenStack autonomously picks a VLAN from the VLAN pool available for the data network).

source ~/devstack/openrc admin admin pass && neutron net-create net2 --provider:network_type vlan --provider:physical_network physnet1 --provider:segmentation_id 1000

+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 61821a27-69b8-43c2-afa8-633304d8be50 |
| name                      | private                              |
| provider:network_type     | vlan                                 |
| provider:physical_network | physnet1                             |
| provider:segmentation_id  | 1000                                 |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | bbc97757-f297-4c7b-b032-e70768fe8485 |
| tenant_id                 | a370af83e43a432abb3adfbf976d1cf8     |
+---------------------------+--------------------------------------+

NB: this VLAN ID is the one used on the physical network!

Labels from the VNI pictures:
◦ PIF on the data network
◦ Veth pair: a pair of vifs that act as a pipe (everything entering from one exits from the other)
◦ VM eth0
◦ LB management interface
◦ Tap interface (host-level view of the VM interface)
◦ Veth pair connecting br-int to the Linux bridge
◦ Veth pair interfaces connecting the bridges
◦ Specific routing tables
◦ DHCP server
◦ Network namespaces
◦ No traffic here
◦ User network (internal): the VMs will receive a fixed IP on this net
◦ Provider network (external): NB it's the physical external network of the cluster
