
fuel, Release Latest

Open Platform for NFV

Feb 01, 2020

CONTENTS

1 OPNFV Fuel Release Notes
  1.1 Abstract
  1.2 License
  1.3 Important Notes
  1.4 Summary
  1.5 Release Data
  1.6 Known Limitations, Issues and Workarounds
  1.7 Test Results
  1.8 References

2 OPNFV Fuel Installation Instruction
  2.1 Abstract
  2.2 Introduction
  2.3 Preparations
  2.4 Hardware Requirements
  2.5 Top of the Rack (TOR) Configuration Requirements
  2.6 OPNFV Software Prerequisites
  2.7 OPNFV Software Configuration (XDF)
  2.8 OPNFV Software Installation and Deployment
  2.9 Release Notes
  2.10 References

3 OPNFV Fuel User Guide
  3.1 Abstract
  3.2 Network Overview
  3.3 Accessing the Salt Master Node (cfg01)
  3.4 Accessing the MaaS Node (mas01)
  3.5 Accessing Cluster Nodes
  3.6 Debugging MaaS Commissioning/Deployment Issues
  3.7 Recovering Failed Deployments
  3.8 Exploring the Cloud with Salt
  3.9 Accessing Openstack
  3.10 Guest Operating System Support
  3.11 OpenStack Storage
  3.12 OpenStack Endpoints
  3.13 Reclass Model Viewer Tutorial
  3.14 References

4 OPNFV Fuel Scenarios
  4.1 os-nosdn-nofeature-noha overview and description
  4.2 os-nosdn-nofeature-ha overview and description
  4.3 os-odl-nofeature-noha overview and description
  4.4 os-odl-nofeature-ha overview and description


CHAPTER ONE

OPNFV FUEL RELEASE NOTES

1.1 Abstract

This document provides the release notes for the Iruya release with the Fuel deployment toolchain.

Starting with the Gambia release, both x86_64 and aarch64 architectures are supported at the same time by the Fuel codebase.

1.2 License

All Fuel and “common” entities are protected by the Apache License 2.0.

1.3 Important Notes

This is the OPNFV Iruya release that implements the deploy stage of the OPNFV CI pipeline via Fuel.

Fuel is based on the MCP installation tool chain. More information available at Mirantis Cloud Platform Documentation.

The goal of the Iruya release and this Fuel-based deployment process is to establish a lab ready platform accelerating further development of the OPNFV infrastructure.

Carefully follow the installation instructions.

1.4 Summary

The Iruya release with the Fuel deployment toolchain will establish an OPNFV target system on a Pharos compliant lab infrastructure. The current definition of an OPNFV target system is OpenStack Queens combined with an SDN controller, such as OpenDaylight. The system is deployed with OpenStack High Availability (HA) for most OpenStack services.

Fuel also supports non-HA deployments, which deploy a single controller, one gateway node and a number of compute nodes.

Fuel supports x86_64, aarch64 or mixed architecture clusters.

Furthermore, Fuel is capable of deploying scenarios in a baremetal, virtual or hybrid fashion. virtual deployments use multiple VMs on the Jump Host and internal networking to simulate the baremetal deployment.


For Iruya, the typical use of Fuel as an OpenStack installer is supplemented with OPNFV unique components such as:

• OpenDaylight

As well as OPNFV-unique configurations of the Hardware and Software stack.

This Iruya artifact provides Fuel as the deployment stage tool in the OPNFV CI pipeline including:

• Automated (Jenkins, RTD) documentation build & publish (multiple documents);

• Automated (Jenkins) build & publish of Salt Master Docker image;

• Automated (Jenkins) deployment of Iruya running on baremetal or a nested hypervisor environment (KVM);

• Automated (Jenkins) validation of the Iruya deployment

1.5 Release Data

Project                    fuel
Repo/tag                   opnfv-9.0.0
Release designation        Iruya 9.0
Release date               January 31, 2020
Purpose of the delivery    OPNFV Iruya 9.0 release

1.5.1 Version Change

Module Version Changes

This is the first tracked version of the Iruya release with the Fuel deployment toolchain. It is based on the following upstream versions:

• MCP (Q1`19 GA release)

• OpenStack (Stein release)

• OpenDaylight (Neon release)

• Ubuntu (18.04 release)

Document Changes

This is the Iruya 9.0 release. It comes with the following documentation:

• OPNFV Fuel Installation Instruction

• Release notes (This document)

• OPNFV Fuel Userguide

1.5.2 Reason for Version

Feature Additions

Due to a reduced schedule, this is a maintenance release.


Bug Corrections

N/A

Software Deliverables

• fuel git repository with multiarch (x86_64, aarch64 or mixed) installer script files

Documentation Deliverables

• OPNFV Fuel Installation Instruction

• Release notes (This document)

• OPNFV Fuel Userguide

1.5.3 Scenario Matrix

Scenario                     baremetal           virtual    hybrid
os-nosdn-nofeature-noha                          x86_64
os-nosdn-nofeature-ha        x86_64, aarch64
os-odl-nofeature-noha                            x86_64
os-odl-nofeature-ha          x86_64,

1.6 Known Limitations, Issues and Workarounds

1.6.1 System Limitations

• Max number of blades: 1 Jumpserver, 3 Controllers, 20 Compute blades

• Min number of blades: 1 Jumpserver

• Storage: Cinder is the only supported storage configuration

• Max number of networks: 65k

1.6.2 Known Issues

None

1.6.3 Workarounds

None

1.7 Test Results

The Iruya 9.0 release with the Fuel deployment tool has undergone QA test runs, see separate test results.


1.8 References

For more information on the OPNFV Iruya 9.0 release, please see:

1. OPNFV Home Page

2. OPNFV Documentation

3. OPNFV Software Downloads

4. OPNFV Iruya Wiki Page

5. OpenStack Queens Release Artifacts

6. OpenStack Documentation

7. OpenDaylight Artifacts

8. Mirantis Cloud Platform Documentation


CHAPTER TWO

OPNFV FUEL INSTALLATION INSTRUCTION

2.1 Abstract

This document describes how to install the Iruya release of OPNFV when using Fuel as a deployment tool, covering its usage, limitations, dependencies and required system resources.

This is a unified documentation for both x86_64 and aarch64 architectures. All information is common for both architectures except when explicitly stated.

2.2 Introduction

This document provides guidelines on how to install and configure the Iruya release of OPNFV when using Fuel as a deployment tool, including required software and hardware configurations.

Although the available installation options provide a high degree of freedom in how the system is set up, including architecture, services and features, etc., said permutations may not provide an OPNFV compliant reference architecture. This document provides a step-by-step guide that results in an OPNFV Iruya compliant deployment.

The audience of this document is assumed to have good knowledge of networking and Unix/Linux administration.

Before starting the installation of the Iruya release of OPNFV, using Fuel as a deployment tool, some planning must be done.

2.3 Preparations

Prior to installation, a number of deployment specific parameters must be collected, those are:

1. Provider sub-net and gateway information

2. Provider VLAN information

3. Provider DNS addresses

4. Provider NTP addresses

5. How many nodes and what roles you want to deploy (Controllers, Computes)

This information will be needed for the configuration procedures provided in this document.


2.4 Hardware Requirements

Minimum hardware requirements depend on the deployment type.

Warning: If baremetal nodes are present in the cluster, the architecture of the nodes running the control plane (kvm01, kvm02, kvm03 for HA scenarios, respectively ctl01, gtw01, odl01 for noHA scenarios) and the jumpserver architecture must be the same (either x86_64 or aarch64).

Tip: The compute nodes may have different architectures, but extra configuration might be required for scheduling VMs on the appropriate host. This use-case is not tested in OPNFV CI, so it is considered experimental.

2.4.1 Hardware Requirements for virtual Deploys

The following minimum hardware requirements must be met for the virtual installation of Iruya using Fuel:

HW Aspect       Requirement

1 Jumpserver    A physical node (also called Foundation Node) that will host a Salt Master container and each of the VM nodes in the virtual deploy
CPU             Minimum 1 socket with Virtualization support
RAM             Minimum 32GB/server (Depending on VNF work load)
Disk            Minimum 100GB (SSD or 15krpm SCSI highly recommended)

2.4.2 Hardware Requirements for baremetal Deploys

The following minimum hardware requirements must be met for the baremetal installation of Iruya using Fuel:


HW Aspect       Requirement

1 Jumpserver    A physical node (also called Foundation Node) that hosts the Salt Master and MaaS containers
# of nodes      Minimum 5
                • 3 KVM servers which will run all the controller services
                • 2 Compute nodes
                Warning: kvm01, kvm02, kvm03 nodes and the jumpserver must have the same architecture (either x86_64 or aarch64).
                Note: aarch64 nodes should run an UEFI compatible firmware with PXE support (e.g. EDK2).
CPU             Minimum 1 socket with Virtualization support
RAM             Minimum 16GB/server (Depending on VNF work load)
Disk            Minimum 256GB 10kRPM spinning disks
Networks        Minimum 4
                • 3 VLANs (public, mgmt, private) - can be a mix of tagged/native
                • 1 Un-Tagged VLAN for PXE Boot - PXE/admin Network
                Note: These can be allocated to a single NIC or spread out over multiple NICs.
                Warning: No external DHCP server should be present in the PXE/admin network segment, as it would interfere with MaaS DHCP during baremetal node commissioning/deploying.
Power mgmt      All targets need to have power management tools that allow rebooting the hardware (e.g. IPMI).


2.4.3 Hardware Requirements for hybrid (baremetal + virtual) Deploys

The following minimum hardware requirements must be met for the hybrid installation of Iruya using Fuel:

HW Aspect       Requirement

1 Jumpserver    A physical node (also called Foundation Node) that hosts the Salt Master and MaaS containers, and each of the virtual nodes defined in PDF
# of nodes      Note: Depends on PDF configuration.
                If the control plane is virtualized, minimum baremetal requirements are:
                • 2 Compute nodes
                If the computes are virtualized, minimum baremetal requirements are:
                • 3 KVM servers which will run all the controller services
                Warning: kvm01, kvm02, kvm03 nodes and the jumpserver must have the same architecture (either x86_64 or aarch64).
                Note: aarch64 nodes should run an UEFI compatible firmware with PXE support (e.g. EDK2).
CPU             Minimum 1 socket with Virtualization support
RAM             Minimum 16GB/server (Depending on VNF work load)
Disk            Minimum 256GB 10kRPM spinning disks
Networks        Same as for baremetal deployments
Power mgmt      Same as for baremetal deployments


2.4.4 Help with Hardware Requirements

Calculate hardware requirements:

When choosing the hardware on which you will deploy your OpenStack environment, you should think about:

• CPU – Consider the number of virtual machines that you plan to deploy in your cloud environment and the CPUs per virtual machine.

• Memory – Depends on the amount of RAM assigned per virtual machine and the controller node.

• Storage – Depends on the local drive space per virtual machine, remote volumes that can be attached to a virtual machine, and object storage.

• Networking – Depends on the Choose Network Topology, the network bandwidth per virtual machine, and network storage.
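As a rough, purely illustrative sizing exercise (the numbers are assumptions, not OPNFV requirements): 20 guest VMs with 2 vCPUs and 4GB of RAM each translate to about 40 vCPUs and 80GB of guest RAM across the compute nodes; spread over 2 compute nodes with a conservative 2:1 vCPU oversubscription ratio, that is roughly 10 physical cores and a little over 40GB of RAM per compute node, before accounting for host OS and OpenStack agent overhead.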

2.5 Top of the Rack (TOR) Configuration Requirements

The switching infrastructure provides connectivity for the OPNFV infrastructure operations, tenant networks (East/West) and provider connectivity (North/South); it also provides needed connectivity for the Storage Area Network (SAN).

To avoid traffic congestion, it is strongly suggested that three physically separated networks are used, that is: one physical network for administration and control, one physical network for tenant private and public networks, and one physical network for SAN.

The switching connectivity can (but does not need to) be fully redundant, in such case it comprises a redundant 10GE switch pair for each of the three physically separated networks.

Warning: The physical TOR switches are not automatically configured from the OPNFV Fuel reference platform. All the networks involved in the OPNFV infrastructure as well as the provider networks and the private tenant VLANs need to be manually configured.

Manual configuration of the Iruya hardware platform should be carried out according to the OPNFV Pharos Specification.

2.6 OPNFV Software Prerequisites

Note: All prerequisites described in this chapter apply to the jumpserver node.

2.6.1 OS Distribution Support

The Jumpserver node should be pre-provisioned with an operating system, according to the OPNFV Pharos specification.

OPNFV Fuel has been validated by CI using the following distributions installed on the Jumpserver:

• CentOS 7 (recommended by Pharos specification);

• Ubuntu Xenial 16.04;


aarch64 notes

For an aarch64 Jumpserver, the libvirt minimum required version is 3.x, 3.5 or newer highly recommended.

Tip: CentOS 7 (aarch64) distro provided packages are already new enough.

Warning: Ubuntu 16.04 (arm64) distro packages are too old and 3rd party repositories should be used.

For convenience, Armband provides a DEB repository holding all the required packages.

To add and enable the Armband repository on an Ubuntu 16.04 system, create a new sources list file /etc/apt/sources.list.d/armband.list with the following contents:

jenkins@jumpserver:~$ cat /etc/apt/sources.list.d/armband.list
deb http://linux.enea.com/mcp-repos/rocky/xenial rocky-armband main

jenkins@jumpserver:~$ sudo apt-key adv --keyserver keys.gnupg.net \
                                       --recv 798AB1D1

jenkins@jumpserver:~$ sudo apt-get update

2.6.2 OS Distribution Packages

By default, the deploy.sh script will automatically install the required distribution package dependencies on the Jumpserver, so the end user does not have to manually install them before starting the deployment.

This includes Python, QEMU, libvirt etc.

See also:

To disable automatic package installation (and/or upgrade) during deployment, check out the -P deploy argument.

Warning: The install script expects libvirt to be already running on the Jumpserver.

In case libvirt packages are missing, the script will install them; but depending on the OS distribution, the user might have to start the libvirt daemon service manually, then run the deploy script again.

Therefore, it is recommended to install libvirt explicitly on the Jumpserver before the deployment.

While not mandatory, upgrading the kernel on the Jumpserver is also highly recommended.

jenkins@jumpserver:~$ sudo apt-get install \
                      linux-image-generic-hwe-16.04-edge libvirt-bin

jenkins@jumpserver:~$ sudo reboot

2.6.3 User Requirements

The user running the deploy script on the Jumpserver should belong to sudo and libvirt groups, and have passwordless sudo access.


Note: Throughout this documentation, we will use the jenkins username for this role.

The following example adds the groups to the user jenkins:

jenkins@jumpserver:~$ sudo usermod -aG sudo jenkins
jenkins@jumpserver:~$ sudo usermod -aG libvirt jenkins
jenkins@jumpserver:~$ sudo reboot
jenkins@jumpserver:~$ groups
jenkins sudo libvirt

jenkins@jumpserver:~$ sudo visudo
...
%jenkins ALL=(ALL) NOPASSWD:ALL

2.6.4 Local Artifact Storage

The folder containing the temporary deploy artifacts (/home/jenkins/tmpdir in the examples below) needs to have mask 777 in order for libvirt to be able to use them.

jenkins@jumpserver:~$ mkdir -p -m 777 /home/jenkins/tmpdir

2.6.5 Network Configuration

Relevant Linux bridges should also be pre-configured for certain networks, depending on the type of the deployment.

Network       Linux Bridge   Linux Bridge necessity based on deploy type
                             virtual     baremetal                                                  hybrid
PXE/admin     admin_br       absent      present                                                    present
management    mgmt_br        optional    optional, recommended, required for functest, yardstick   optional, recommended, required for functest, yardstick
internal      int_br         optional    optional                                                   present
public        public_br      optional    optional, recommended, useful for debugging                optional, recommended, useful for debugging

Tip: IP addresses should be assigned to the created bridge interfaces (not to one of its ports).

Warning: PXE/admin bridge (admin_br) must have an IP address.
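For illustration only, a minimal sketch of pre-creating the PXE/admin and management bridges on an Ubuntu jumpserver using bridge-utils and iproute2 (the physical interface names and the address are examples matching the defaults used later in this guide, not values mandated by OPNFV Fuel; adapt them to the local PDF/IDF):

jenkins@jumpserver:~$ sudo apt-get install -y bridge-utils
# enslave the NICs wired to the PXE/admin and mgmt network segments (example names)
jenkins@jumpserver:~$ sudo brctl addbr admin_br && sudo brctl addif admin_br eth1
jenkins@jumpserver:~$ sudo brctl addbr mgmt_br  && sudo brctl addif mgmt_br eth2
# assign the IP address to the bridge itself, not to one of its ports
jenkins@jumpserver:~$ sudo ip addr add 192.168.11.1/24 dev admin_br
jenkins@jumpserver:~$ sudo ip link set admin_br up
jenkins@jumpserver:~$ sudo ip link set mgmt_br up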

2.6.6 Changes deploy.sh Will Perform to Jumpserver OS

Warning: The install script will alter Jumpserver sysconf and disable net.bridge.bridge-nf-call.


Warning: On Jumpservers running Ubuntu with AppArmor enabled, when deploying on baremetal nodes (i.e. when MaaS is used), the install script will disable certain conflicting AppArmor profiles that interfere with MaaS services inside the container, e.g. ntpd, named, dhcpd, tcpdump.

Warning: The install script will automatically install and/or upgrade the required distribution package dependencies on the Jumpserver, unless explicitly asked not to (via the -P deploy arg).

2.7 OPNFV Software Configuration (XDF)

New in version 5.0.0.

Changed in version 7.0.0.

Unlike the old approach based on OpenStack Fuel, OPNFV Fuel no longer has a graphical user interface for configuring the environment, but instead switched to OPNFV specific descriptor files that we will call generically XDF:

• PDF (POD Descriptor File) provides an abstraction of the target POD with all its hardware characteristics and required parameters;

• IDF (Installer Descriptor File) extends the PDF with POD related parameters required by the OPNFV Fuel installer;

• SDF (Scenario Descriptor File, not yet adopted) will later replace embedded scenario definitions, describing the roles and layout of the cluster environment for a given reference architecture;

Tip: For virtual deployments, if the public network will be accessed from outside the jumpserver node, a custom PDF/IDF pair is required for customizing idf.net_config.public and idf.fuel.jumphost.bridges.public.

Note: For OPNFV CI PODs, as well as simple (no public bridge) virtual deployments, PDF/IDF files are already available in the pharos git repo. They can be used as a reference for user-supplied inputs or to kick off a deployment right away.

LAB/POD            PDF/IDF availability based on deploy type
                   virtual                                                 baremetal                                               hybrid
OPNFV CI POD       available in pharos git repo (e.g. ericsson-virtual1)   available in pharos git repo (e.g. lf-pod2, arm-pod5)   N/A, as currently there are 0 hybrid PODs in OPNFV CI
local or new POD   user-supplied                                           user-supplied                                           user-supplied

Tip: Both PDF and IDF structure are modelled as yaml schemas in the pharos git repo, also included as a git submodule in OPNFV Fuel.

See also:

• mcp/scripts/pharos/config/pdf/pod1.schema.yaml

• mcp/scripts/pharos/config/pdf/idf-pod1.schema.yaml


Schema files are also used during the initial deployment phase to validate the user-supplied input PDF/IDF files.

2.7.1 PDF

The Pod Descriptor File is a hardware description of the POD infrastructure. The information is modeled under a yaml structure.

The hardware description covers the jumphost node and a set of nodes for the cluster target boards. For each node the following characteristics are defined:

• Node parameters including CPU features and total memory;

• A list of available disks;

• Remote management parameters;

• Network interfaces list including name, MAC address, link speed, advanced features;

See also:

A reference file with the expected yaml structure is available at:

• mcp/scripts/pharos/config/pdf/pod1.yaml

For more information on PDF, see the OPNFV PDF Wiki Page.

Warning: The fixed IPs defined in PDF are ignored by the OPNFV Fuel installer script and it will instead assign addresses based on the network ranges defined in IDF.

For more details on the way IP addresses are assigned, see OPNFV Fuel User Guide.

2.7.2 PDF/IDF Role (hostname) Mapping

Upcoming SDF support will introduce a series of possible node roles. Until that happens, the role mapping logic is hardcoded, based on node index in PDF/IDF (which should also be in sync, i.e. the parameters of the n-th cluster node defined in PDF should be the n-th node in IDF structures too).

Node index       HA scenario            noHA scenario
1st              kvm01                  ctl01
2nd              kvm02                  gtw01
3rd              kvm03                  odl01/unused
4th, 5th, ...    cmp001, cmp002, ...    cmp001, cmp002, ...

Tip: To switch node role(s), simply reorder the node definitions in PDF/IDF (make sure to keep them in sync).

2.7.3 IDF

The Installer Descriptor File extends the PDF with POD related parameters required by the installer. This information may differ per installer type and it is not considered part of the POD infrastructure.


idf.* Overview

The IDF file must be named after the PDF it attaches to, with the prefix idf-.
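For example, assuming a hypothetical lab called mylab, a PDF named pod1.yaml would be accompanied by an IDF named idf-pod1.yaml in the same labs/<lab_name> subdirectory:

labs/mylab/pod1.yaml       # PDF - hardware description of the POD
labs/mylab/idf-pod1.yaml   # IDF - installer-specific parameters attached to pod1.yaml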

See also:

A reference file with the expected yaml structure is available at:

• mcp/scripts/pharos/config/pdf/idf-pod1.yaml

The file follows a yaml structure and at least two sections (idf.net_config and idf.fuel) are expected.

The idf.fuel section defines several sub-sections required by the OPNFV Fuel installer:

• jumphost: List of bridge names for each network on the Jumpserver;

• network: List of device name and bus address info of all the target nodes. The order must be aligned with the order defined in the PDF file. The OPNFV Fuel installer relies on the IDF model to setup all node NICs by defining the expected device name and bus address;

• maas: Defines the target nodes commission timeout and deploy timeout;

• reclass: Defines compute parameter tuning, including huge pages, CPU pinning and other DPDK settings;

---
idf:
  version: 0.1                          # fixed, the only supported version (mandatory)
  net_config:                           # POD network configuration overview (mandatory)
    oob: ...                            # mandatory
    admin: ...                          # mandatory
    mgmt: ...                           # mandatory
    storage: ...                        # mandatory
    private: ...                        # mandatory
    public: ...                         # mandatory
  fuel:                                 # OPNFV Fuel specific section (mandatory)
    jumphost:                           # OPNFV Fuel jumpserver bridge configuration (mandatory)
      bridges:                          # Bridge name mapping (mandatory)
        admin: 'admin_br'               # <PXE/admin bridge name> or ~
        mgmt: 'mgmt_br'                 # <mgmt bridge name> or ~
        private: ~                      # <private bridge name> or ~
        public: 'public_br'             # <public bridge name> or ~
      trunks: ...                       # Trunked networks (optional)
    maas:                               # MaaS timeouts (optional)
      timeout_comissioning: 10          # commissioning timeout in minutes
      timeout_deploying: 15             # deploy timeout in minutes
    network:                            # Cluster nodes network (mandatory)
      interface_mtu: 1500               # Cluster-level MTU (optional)
      ntp_strata_host1: 1.pool.ntp.org  # NTP1 (optional)
      ntp_strata_host2: 0.pool.ntp.org  # NTP2 (optional)
      node: ...                         # List of per-node cfg (mandatory)
    reclass:                            # Additional params (mandatory)
      node: ...                         # List of per-node cfg (mandatory)

idf.net_config

idf.net_config was introduced as a mechanism to map all the usual cluster networks (internal and provider networks, e.g. mgmt) to their VLAN tags, CIDR and a physical interface index (used to match networks to interface names, like eth0, on the cluster nodes).


Warning: The mapping between one network segment (e.g. mgmt) and its CIDR/VLAN is not configurable on a per-node basis, but instead applies to all the nodes in the cluster.

For each network, the following parameters are currently supported:

idf.net_config.* key    Details

interface               The index of the interface to use for this net. For each cluster node (if network is present), OPNFV Fuel will determine the underlying physical interface by picking the element at index interface from the list of network interface names defined in idf.fuel.network.node.*.interfaces. Required for each network.
                        Note: The interface index should be the same on all cluster nodes. This can be achieved by ordering them accordingly in PDF/IDF.
vlan                    VLAN tag (integer) or the string native. Required for each network.
ip-range                When specified, all cluster IPs dynamically allocated by OPNFV Fuel for that network will be assigned inside this range. Required for oob, optional for others.
                        Note: For now, only range start address is used.
network                 Network segment address. Required for each network, except oob.
mask                    Network segment mask. Required for each network, except oob.
gateway                 Gateway IP address. Required for public, N/A for others.
dns                     List of DNS IP addresses. Required for public, N/A for others.

Sample public network configuration block:

idf:
  net_config:
    public:
      interface: 1
      vlan: native
      network: 10.0.16.0
      ip-range: 10.0.16.100-10.0.16.253
      mask: 24
      gateway: 10.0.16.254
      dns:
        - 8.8.8.8
        - 8.8.4.4

hybrid POD notes

Interface indexes must be the same for all nodes, which is problematic when mixing virtual nodes (where all interfaces were untagged so far) with baremetal nodes (where interfaces usually carry tagged VLANs).

Tip: To achieve this, a special jumpserver network layout is used: mgmt, storage, private, public are trunked together in a single trunk bridge:

• without decapsulating them (if they are also tagged on baremetal); a trunk.<vlan_tag> interface should be created on the jumpserver for each tagged VLAN so the kernel won’t drop the packets;

• by decapsulating them first (if they are also untagged on baremetal nodes);

The trunk bridge is then used for all bridges OPNFV Fuel is aware of in idf.fuel.jumphost.bridges, e.g. for a trunk where only mgmt network is not decapsulated:

idf:
  fuel:
    jumphost:
      bridges:
        admin: 'admin_br'
        mgmt: 'trunk'
        private: 'trunk'
        public: 'trunk'
      trunks:
        # mgmt network is not decapsulated for jumpserver infra nodes,
        # to align with the VLAN configuration of baremetal nodes.
        mgmt: True

Warning: The Linux kernel limits the name of network interfaces to 16 characters. Extra care is required when choosing bridge names, so appending the VLAN tag won’t lead to an interface name length exceeding that limit.

idf.fuel.network

idf.fuel.network allows mapping the cluster networks (e.g. mgmt) to their physical interface name (e.g. eth0) and bus address on the cluster nodes.

idf.fuel.network.node should be a list with the same number (and order) of elements as the cluster nodes defined in PDF, e.g. the second cluster node in PDF will use the interface name and bus address defined in the second list element.

Below is a sample configuration block for a single node with two interfaces:

idf:
  fuel:
    network:
      node:
        # Ordered-list, index should be in sync with node index in PDF
        - interfaces:
            # Ordered-list, index should be in sync with interface index
            # in PDF
            - 'ens3'
            - 'ens4'
          busaddr:
            # Bus-info reported by `ethtool -i ethX`
            - '0000:00:03.0'
            - '0000:00:04.0'


idf.fuel.reclass

idf.fuel.reclass provides a way of overriding default values in the reclass cluster model.

This currently covers strictly compute parameter tuning, including huge pages, CPU pinning and other DPDK settings.

idf.fuel.reclass.node should be a list with the same number (and order) of elements as the cluster nodes defined in PDF, e.g. the second cluster node in PDF will use the parameters defined in the second list element.

The following parameters are currently supported:

idf.fuel.reclass.node.* key     Details

nova_cpu_pinning                List of CPU cores nova will be pinned to.
                                Note: Currently disabled.
compute_hugepages_size          Size of each persistent huge pages. Usual values are 2M and 1G.
compute_hugepages_count         Total number of persistent huge pages.
compute_hugepages_mount         Mount point to use for huge pages.
compute_kernel_isolcpu          List of certain CPU cores that are isolated from Linux scheduler.
compute_dpdk_driver             Kernel module to provide userspace I/O support.
compute_ovs_pmd_cpu_mask        Hexadecimal mask of CPUs to run DPDK Poll-mode drivers.
compute_ovs_dpdk_socket_mem     Set of amount huge pages in MB to be used by OVS-DPDK daemon taken for each NUMA node. Set size is equal to NUMA nodes count, elements are divided by comma.
compute_ovs_dpdk_lcore_mask     Hexadecimal mask of DPDK lcore parameter used to run DPDK processes.
compute_ovs_memory_channels     Number of memory channels to be used.
dpdk0_driver                    NIC driver to use for physical network interface.
dpdk0_n_rxq                     Number of RX queues.

Sample compute_params configuration block (for a single node):

idf:
  fuel:
    reclass:
      node:
        - compute_params:
            common: &compute_params_common
              compute_hugepages_size: 2M
              compute_hugepages_count: 2048
              compute_hugepages_mount: /mnt/hugepages_2M
            dpdk:
              <<: *compute_params_common
              compute_dpdk_driver: uio
              compute_ovs_pmd_cpu_mask: "0x6"
              compute_ovs_dpdk_socket_mem: "1024"
              compute_ovs_dpdk_lcore_mask: "0x8"
              compute_ovs_memory_channels: "2"
              dpdk0_driver: igb_uio
              dpdk0_n_rxq: 2


2.7.4 SDF

Scenario Descriptor Files are not yet implemented in the OPNFV Fuel Iruya release.

Instead, embedded OPNFV Fuel scenarios files are locally available in mcp/config/scenario.

2.8 OPNFV Software Installation and Deployment

This section describes the process of installing all the components needed to deploy the full OPNFV reference platform stack across a server cluster.

2.8.1 Deployment Types

Warning: OPNFV releases previous to Iruya used to rely on the virtual keyword being part of the POD name (e.g. ericsson-virtual2) to configure the deployment type as virtual. Otherwise baremetal was implied.

Gambia and newer releases are more flexible towards supporting a mix of baremetal and virtual nodes, so the type of deployment is now automatically determined based on the cluster nodes types in PDF:

PDF has nodes of type           Deployment type
baremetal      virtual
yes            no               baremetal
yes            yes              hybrid
no             yes              virtual

Based on that, the deployment script will later enable/disable certain extra nodes (e.g. mas01) and/or STATE files (e.g. maas).

2.8.2 HA vs noHA

High availability of OpenStack services is determined based on scenario name, e.g. os-nosdn-nofeature-noha vs os-nosdn-nofeature-ha.

Tip: HA scenarios imply a virtualized control plane (VCP) for the OpenStack services running on the 3 kvm nodes.

See also:

An experimental feature argument (-N) is supported by the deploy script for disabling VCP, although it might not be supported by all scenarios and is not being continuously validated by OPNFV CI/CD.

Warning: virtual HA deployments are not officially supported, due to poor performance and various limitations of nested virtualization on both x86_64 and aarch64 architectures.

Tip: virtual HA deployments without VCP are supported, but highly experimental.


Feature                            HA scenario              noHA scenario
VCP (Virtualized Control Plane)    yes, disabled with -N    no
OpenStack APIs SSL                 yes                      no
Storage                            GlusterFS                NFS

2.8.3 Steps to Start the Automatic Deploy

These steps are common for virtual, baremetal or hybrid deploys, x86_64, aarch64 or mixed (x86_64 and aarch64):

• Clone the OPNFV Fuel code from gerrit

• Checkout the Iruya release tag

• Start the deploy script

Note: The deployment uses the OPNFV Pharos project as input (PDF and IDF files) for hardware and network configuration of all current OPNFV PODs.

When deploying a new POD, one may pass the -b flag to the deploy script to override the path for the lab configuration directory structure containing the PDF and IDF (<URI to configuration repo ...> is the absolute path to a local or remote directory structure, populated similar to the pharos git repo, i.e. PDF/IDF reside in a subdirectory called labs/<lab_name>).

jenkins@jumpserver:~$ git clone https://git.opnfv.org/fuel
jenkins@jumpserver:~$ cd fuel
jenkins@jumpserver:~/fuel$ git checkout opnfv-9.0.0
jenkins@jumpserver:~/fuel$ ci/deploy.sh -l <lab_name> \
                                        -p <pod_name> \
                                        -b <URI to configuration repo containing the PDF/IDF files> \
                                        -s <scenario> \
                                        -D \
                                        -S <Storage directory for deploy artifacts> |& tee deploy.log

Tip: Besides the basic options, there are other recommended deploy arguments:

• use -D option to enable the debug info

• use -S option to point to a tmp dir where the disk images are saved. The deploy artifacts will be re-used on subsequent (re)deployments.

• use |& tee to save the deploy log to a file

2.8.4 Typical Cluster Examples

Common cluster layouts usually fall into one of the cases described below, categorized by deployment type (baremetal, virtual or hybrid) and high availability (HA or noHA).

A simplified overview of the steps deploy.sh will automatically perform is:

• create a Salt Master Docker container on the jumpserver, which will drive the rest of the installation;


• baremetal or hybrid only: create a MaaS container node, which will be leveraged using Salt to handle OS provisioning on the baremetal nodes;

• leverage Salt to install & configure OpenStack;

Note: A Docker network mcpcontrol is always created for initial connection of the infrastructure containers (cfg01, mas01) on Jumphost.

Warning: A single cluster deployment per jumpserver node is currently supported, regardless of its type (virtual, baremetal or hybrid).

Once the deployment is complete, the following should be accessible:

Resource                              HA scenario                     noHA scenario
Horizon (Openstack Dashboard)         https://<prx public VIP>        http://<ctl VIP>:8078
SaltStack Deployment Documentation    http://<prx public VIP>:8090    N/A

See also:

For more details on locating and importing the generated SSL certificate, see OPNFV Fuel User Guide.

virtual noHA POD

In the following figure there are two generic examples of virtual deploys, each on a separate Jumphost node, both behind the same TOR switch:

• Jumphost 1 has only virsh bridges (created by the deploy script);

• Jumphost 2 has a mix of Linux (manually created) and libvirt managed bridges (created by the deploy script);

Tip: If external access to the public network is not required, there is little to no motivation to create a custom PDF/IDF set for a virtual deployment.

Instead, the existing virtual PODs definitions in pharos git repo can be used as-is:

• ericsson-virtual1 for x86_64;

• arm-virtual2 for aarch64;

# example deploy cmd for an x86_64 virtual cluster
jenkins@jumpserver:~/fuel$ ci/deploy.sh -l ericsson \
                                        -p virtual1 \
                                        -s os-nosdn-nofeature-noha \
                                        -D \
                                        -S /home/jenkins/tmpdir |& tee deploy.log


Fig. 1: OPNFV Fuel Virtual noHA POD Network Layout Examples

cfg01              Salt Master Docker container
ctl01              Controller VM
gtw01              Gateway VM with neutron services (DHCP agent, L3 agent, metadata agent etc)
odl01              VM on which ODL runs (for scenarios deployed with ODL)
cmp001, cmp002     Compute VMs


baremetal noHA POD

Warning: These scenarios are not tested in OPNFV CI, so they are considered experimental.

baremetal HA POD

# x86_64 baremetal deploy on pod2 from Linux Foundation lab (lf-pod2)
jenkins@jumpserver:~/fuel$ ci/deploy.sh -l lf \
                                        -p pod2 \
                                        -s os-nosdn-nofeature-ha \
                                        -D \
                                        -S /home/jenkins/tmpdir |& tee deploy.log

# aarch64 baremetal deploy on pod5 from Enea ARM lab (arm-pod5)
jenkins@jumpserver:~/fuel$ ci/deploy.sh -l arm \
                                        -p pod5 \
                                        -s os-nosdn-nofeature-ha \
                                        -D \
                                        -S /home/jenkins/tmpdir |& tee deploy.log

hybrid noHA POD

2.8.5 Automatic Deploy Breakdown

When an automatic deploy is started, the following operations are performed sequentially by the deploy script:


Fig. 2: OPNFV Fuel Baremetal noHA POD Network Layout Example

cfg01              Salt Master Docker container
mas01              MaaS Node Docker container
ctl01              Baremetal controller node
gtw01              Baremetal Gateway with neutron services (dhcp agent, L3 agent, metadata, etc)
odl01              Baremetal node on which ODL runs (for scenarios deployed with ODL, otherwise unused)
cmp001, cmp002     Baremetal Computes
Tenant VM          VM running in the cloud


Fig. 3: OPNFV Fuel Baremetal HA POD Network Layout Example

cfg01                    Salt Master Docker container
mas01                    MaaS Node Docker container
kvm01, kvm02, kvm03      Baremetals which hold the VMs with controller functions
prx01, prx02             Proxy VMs for Nginx
msg01, msg02, msg03      RabbitMQ Service VMs
dbs01, dbs02, dbs03      MySQL service VMs
mdb01, mdb02, mdb03      Telemetry VMs
odl01                    VM on which OpenDaylight runs (for scenarios deployed with ODL)
cmp001, cmp002           Baremetal Computes
Tenant VM                VM running in the cloud


Fig. 4: OPNFV Fuel Hybrid noHA POD Network Layout Examples

cfg01              Salt Master Docker container
mas01              MaaS Node Docker container
ctl01              Controller VM
gtw01              Gateway VM with neutron services (DHCP agent, L3 agent, metadata agent etc)
odl01              VM on which ODL runs (for scenarios deployed with ODL)
cmp001, cmp002     Baremetal Computes


Deploy stage                         Details

Argument Parsing
    environment variables and command line arguments passed to deploy.sh are interpreted

Distribution Package Installation
    Install and/or configure mandatory requirements on the jumpserver node:
    • Docker (from upstream and not distribution repos, as the version included in Ubuntu Xenial is outdated);
    • docker-compose (from upstream, as the version included in both CentOS 7 and Ubuntu Xenial 16.04 has dependency issues on most systems);
    • virt-inst (from upstream, as the version included in Ubuntu Xenial 16.04 is outdated and lacks certain required features);
    • other miscellaneous requirements, depending on jumpserver distribution OS;
    See also:
    • mcp/scripts/requirements_deb.yaml (Ubuntu)
    • mcp/scripts/requirements_rpm.yaml (CentOS)
    Warning: Minimum required Docker version is 17.x.
    Warning: Minimum required virt-inst version is 1.4.

Patch Apply
    For each git submodule in the OPNFV Fuel repository, if a subdirectory with the same name exists under mcp/patches, all patches in that subdirectory are applied using git-am to the respective git submodule. This allows OPNFV Fuel to alter upstream repositories contents before consuming them, including:
    • Docker container build process customization;
    • salt-formulas customization;
    • reclass.system customization;
    See also:
    • mcp/patches/README.rst

SSH RSA Keypair Generation
    If not already present, an RSA keypair is generated on the jumpserver node at:
    • /var/lib/opnfv/mcp.rsa{,.pub}
    The public key will be added to the authorized_keys list for the ubuntu user, so the private key can be used for key-based logins on:
    • cfg01, mas01 infrastructure nodes;
    • all cluster nodes (baremetal and/or virtual), including VCP VMs;

j2 Expansion
    Based on XDF (PDF, IDF, SDF) and additional deployment configuration determined during the argument parsing stage described above, all jinja2 templates are expanded, including:
    • various classes in reclass.cluster;
    • docker-compose yaml for Salt Master bring-up;
    • libvirt network definitions (xml);

Jumpserver Requirements Check
    Basic validation that common jumpserver requirements are satisfied, e.g. PXE/admin is a Linux bridge if baremetal nodes are defined in the PDF.

Infrastructure Setup
    Note: All steps apply to and only to the jumpserver.
    • prepare virtual machines;
    • (re)create libvirt managed networks;
    • apply sysctl configuration;
    • apply udev configuration;
    • create & start virtual machines prepared earlier;
    • create & start Salt Master (cfg01) Docker container;

STATE Files
    Based on deployment type, scenario and other parameters, a STATE file list is constructed, then executed sequentially.
    Tip: The table below lists all current STATE files and their intended action.
    See also: For more information on how the list of STATE files is constructed, see OPNFV Fuel User Guide.

Log Collection
    Contents of /var/log are recursively gathered from all the nodes, then archived together for later inspection.


STATE Files Overview

STATE file              Targets involved and main intended action

virtual_init            cfg01: reclass node generation
                        jumpserver VMs (if present): basic OS config
maas                    mas01: OS, MaaS configuration, baremetal node commissioning and deploy
                        Note: Skipped if no baremetal nodes are defined in PDF (virtual deploy).
baremetal_init          kvm, cmp: OS install, config
dpdk                    cmp: configure OVS-DPDK
networks                ctl: create OpenStack networks
neutron_gateway         gtw01: configure Neutron gateway
opendaylight            odl01: install & configure ODL
openstack_noha          cluster nodes: install OpenStack without HA
openstack_ha            cluster nodes: install OpenStack with HA
virtual_control_plane   kvm: create VCP VMs
                        VCP VMs: basic OS config
                        Note: Skipped if -N deploy argument is used.
tacker                  ctl: install & configure Tacker

2.9 Release Notes

Please refer to the OPNFV Fuel Release Notes article.

2.10 References

For more information on the OPNFV Iruya 9.0 release, please see:

1. OPNFV Home Page

2. OPNFV Documentation

3. OPNFV Software Downloads

4. OPNFV Iruya Wiki Page

5. OpenStack Rocky Release Artifacts

6. OpenStack Documentation

7. OpenDaylight Artifacts

8. Mirantis Cloud Platform Documentation

9. Saltstack Documentation

10. Saltstack Formulas

11. Reclass


CHAPTER THREE

OPNFV FUEL USER GUIDE

3.1 Abstract

This document contains details about using OPNFV Fuel Iruya release after it was deployed. For details on how to deploy OpenStack, check the installation instructions in the References section.

This is a unified documentation for both x86_64 and aarch64 architectures. All information is common for both architectures except when explicitly stated.

3.2 Network Overview

Fuel uses several networks to deploy and administer the cloud:

Network name    Description

PXE/admin       Used for booting the nodes via PXE and/or Salt control network
mcpcontrol      Docker network used to provision the infrastructure hosts (Salt & MaaS)
management      Used for internal communication between OpenStack components
internal        Used for VM data communication within the cloud deployment
public          Used to provide Virtual IPs for public endpoints that are used to connect to OpenStack services APIs. Used by Virtual machines to access the Internet

These networks - except mcpcontrol - can be Linux bridges configured before the deploy on the Jumpserver. If they don’t exist at deploy time, they will be created by the scripts as libvirt managed networks (except mcpcontrol, which will be handled by Docker using the bridge driver).
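To see how a particular deployment ended up wiring these networks, the standard tools can be combined on the jumpserver (a quick sketch; the exact output depends on the POD and on which bridges were pre-created):

jenkins@jumpserver:~$ brctl show                 # pre-created Linux bridges (e.g. admin_br, mgmt_br)
jenkins@jumpserver:~$ sudo virsh net-list --all  # libvirt managed networks created by the deploy script
jenkins@jumpserver:~$ docker network ls          # Docker networks, including mcpcontrol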

3.2.1 Network mcpcontrol

mcpcontrol is a virtual network, managed by Docker. Its only purpose is to provide a simple method of assigning an arbitrary INSTALLER_IP to the Salt master node (cfg01), to maintain backwards compatibility with old OPNFV Fuel behavior. Normally, end-users only need to change the INSTALLER_IP if the default CIDR (10.20.0.0/24) overlaps with existing lab networks.

mcpcontrol uses the Docker bridge driver, so the Salt master (cfg01) and the MaaS containers (mas01, when present) get assigned predefined IPs (.2, .3, while the jumpserver gets .1).


Host         Offset in IP range    Default address
jumpserver   1st                   10.20.0.1
cfg01        2nd                   10.20.0.2
mas01        3rd                   10.20.0.3

This network is limited to the jumpserver host and does not require any manual setup.

3.2.2 Network PXE/admin

Tip: PXE/admin does not usually use an IP range offset in IDF.

Note: During MaaS commissioning phase, IP addresses are handed out by MaaS’s DHCP.

Warning: Default addresses in below table correspond to a PXE/admin CIDR of 192.168.11.0/24 (the usual value used in OPNFV labs).

This is defined in IDF and can easily be changed to something else.

Host                                    Offset in IP range    Default address
jumpserver                              1st                   192.168.11.1 (manual assignment)
cfg01                                   2nd                   192.168.11.2
mas01                                   3rd                   192.168.11.3
prx01, prx02                            4th, 5th              192.168.11.4, 192.168.11.5
gtw01, gtw02, gtw03                     ...                   ...
kvm01, kvm02, kvm03
dbs01, dbs02, dbs03
msg01, msg02, msg03
mdb01, mdb02, mdb03
ctl01, ctl02, ctl03
odl01, odl02, odl03
mon01, mon02, mon03, log01, log02,
log03, mtr01, mtr02, mtr03
cmp001, cmp002, ...

3.2.3 Network management

Tip: management often has an IP range offset defined in IDF.


Warning: Default addresses in below table correspond to a management IP range of 172.16.10.10-172.16.10.254 (one of the commonly used values in OPNFV labs). This is defined in IDF and can easily be changed to something else. Since the jumpserver address is manually assigned, this is usually not subject to the IP range restriction in IDF.

Host                                    Offset in IP range    Default address
jumpserver                              N/A                   172.16.10.1 (manual assignment)
cfg01                                   1st                   172.16.10.11
mas01                                   2nd                   172.16.10.12
prx, prx01, prx02                       3rd, 4th, 5th         172.16.10.13, 172.16.10.14, 172.16.10.15
gtw01, gtw02, gtw03                     ...                   ...
kvm, kvm01, kvm02, kvm03
dbs, dbs01, dbs02, dbs03
msg, msg01, msg02, msg03
mdb, mdb01, mdb02, mdb03
ctl, ctl01, ctl02, ctl03
odl, odl01, odl02, odl03
mon, mon01, mon02, mon03, log, log01,
log02, log03, mtr, mtr01, mtr02, mtr03
cmp001, cmp002, ...

3.2.4 Network internal

Tip: internal does not usually use an IP range offset in IDF.

Warning: Default addresses in below table correspond to an internal CIDR of 10.1.0.0/24 (the usual value used in OPNFV labs). This is defined in IDF and can easily be changed to something else.

Host                   Offset in IP range    Default address
jumpserver             N/A                   10.1.0.1 (manual assignment, optional)
gtw01, gtw02, gtw03    1st, 2nd, 3rd         10.1.0.2, 10.1.0.3, 10.1.0.4
cmp001, cmp002, ...    4th, 5th, ...         10.1.0.5, 10.1.0.6, ...


3.2.5 Network public

Tip: public often has an IP range offset defined in IDF.

Warning: Default addresses in below table correspond to a public IP range of 172.30.10.100-172.30.10.254 (one of the used values in OPNFV labs). This is defined in IDF and can easily be changed to something else. Since the jumpserver address is manually assigned, this is usually not subject to the IP range restriction in IDF.

Host                   Offset in IP range    Default address
jumpserver             N/A                   172.30.10.72 (manual assignment, optional)
prx, prx01, prx02      1st, 2nd, 3rd         172.30.10.101, 172.30.10.102, 172.30.10.103
gtw01, gtw02, gtw03    4th, 5th, 6th         172.30.10.104, 172.30.10.105, 172.30.10.106
ctl01, ctl02, ctl03    ...                   ...
odl,
cmp001, cmp002, ...

3.3 Accessing the Salt Master Node (cfg01)

The Salt Master node (cfg01) runs a sshd server listening on 0.0.0.0:22.

To login as ubuntu user, use the RSA private key /var/lib/opnfv/mcp.rsa:

jenkins@jumpserver:~$ ssh -o StrictHostKeyChecking=no \
                          -i /var/lib/opnfv/mcp.rsa \
                          -l ubuntu 10.20.0.2
ubuntu@cfg01:~$

Note: User ubuntu has sudo rights.

Tip: The Salt master IP (10.20.0.2) is not hard set, it is configurable via INSTALLER_IP during deployment.
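For example, to move the Salt Master (and the mcpcontrol segment derived from it) to a different address, the variable can be exported before running the deploy script; this assumes deploy.sh picks INSTALLER_IP up from the environment as described above, and 10.30.0.2 is only an example value:

jenkins@jumpserver:~/fuel$ export INSTALLER_IP=10.30.0.2
jenkins@jumpserver:~/fuel$ ci/deploy.sh -l <lab_name> -p <pod_name> -s <scenario> [...]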

Tip: Starting with the Gambia release, cfg01 is containerized, so this also works (from jumpserver only):

jenkins@jumpserver:~$ docker exec -it fuel bash
root@cfg01:~$

3.4 Accessing the MaaS Node (mas01)

Starting with the Hunter release, the MaaS node (mas01) is containerized and no longer runs an sshd server. To access it (from jumpserver only):


jenkins@jumpserver:~$ docker exec -it maas bash
root@mas01:~$

3.5 Accessing Cluster Nodes

Logging in to cluster nodes is possible from the Jumpserver, Salt Master etc.

jenkins@jumpserver:~$ ssh -i /var/lib/opnfv/mcp.rsa ubuntu@<cluster node IP>

Tip: /etc/hosts on cfg01 has all the cluster hostnames, which can be used instead of IP addresses.

/root/.ssh/config on cfg01 configures the default user and key: ubuntu, respectively /root/fuel/mcp/scripts/mcp.rsa.

root@cfg01:~$ ssh ctl01

3.6 Debugging MaaS Commissioning/Deployment Issues

One of the most common issues when setting up a new POD is MaaS failing to commission/deploy the nodes, usually timing out after a couple of retries.

Such failures might indicate misconfiguration in PDF/IDF, TOR switch configuration or even faulty hardware.

Here are a couple of pointers for isolating the problem.

3.6.1 Accessing the MaaS Dashboard

MaaS web-based dashboard is available at http://<jumpserver IP address>:5240/MAAS.

The administrator credentials are opnfv/opnfv_secret.
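Besides the dashboard, the MaaS CLI inside the mas01 container can be used for quick scripted checks of node status. A rough sketch (the profile name opnfv is arbitrary, and the way the API key is obtained may differ between MaaS versions - it can always be copied from the user preferences page of the dashboard):

root@mas01:~# maas login opnfv http://localhost:5240/MAAS/api/2.0/ <API key>
root@mas01:~# maas opnfv machines read | grep -E '"hostname"|"status_name"'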

3.6.2 Ensure Commission/Deploy Timeouts Are Not Too Small

Some hardware takes longer to boot or to run the initial scripts during commissioning/deployment phases. If that’s the case, MaaS will time out waiting for the process to finish. MaaS logs will reflect that, and the issue is usually easy to observe on the nodes’ serial console - if the node seems to PXE-boot the OS live image, starts executing cloud-init/curtin hooks without spilling critical errors, then it is powered down/shut off, most likely the timeout was hit.

To access the serial console of a node, see your board manufacturer’s documentation. Some hardware no longer has a physical serial connector these days, usually being replaced by a vendor-specific software-based interface.

If the board supports SOL (Serial Over LAN) over IPMI lanplus protocol, a simpler solution to hook to the serial console is to use ipmitool.

Tip: Early boot stage output might not be shown over SOL, but only over the video console provided by the (vendor-specific) interface.


jenkins@jumpserver:~$ ipmitool -H <host BMC IP> -U <user> -P <pass> \
                               -I lanplus sol activate

To bypass this, simply set a larger timeout in the IDF.
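Concretely, the commissioning/deploy timeouts are the optional idf.fuel.maas keys shown in the IDF overview earlier in this documentation; for instance (the values are arbitrary examples):

idf:
  fuel:
    maas:
      timeout_comissioning: 30   # commissioning timeout in minutes
      timeout_deploying: 45      # deploy timeout in minutes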

3.6.3 Check Jumpserver Network Configuration

jenkins@jumpserver:~$ brctl show
jenkins@jumpserver:~$ ifconfig -a

Configuration item                        Expected behavior
IP addresses assigned to bridge ports     IP addresses should be assigned to the bridge, and not to individual bridge ports

3.6.4 Check Network Connectivity Between Nodes on the Jumpserver

cfg01 is a Docker container running on the jumpserver, connected to Docker networks (created by docker-compose automatically on container up), which in turn are connected using veth pairs to their libvirt managed counterparts (or manually created bridges).

For example, the mgmt network(s) should look like below for a virtual deployment.

jenkins@jumpserver:~$ brctl show mgmt
bridge name     bridge id           STP enabled     interfaces
mgmt            8000.525400064f77   yes             mgmt-nic
                                                    veth_mcp2
                                                    vnet8

jenkins@jumpserver:~$ docker network ls
NETWORK ID          NAME                  DRIVER              SCOPE
81a0fdb3bd78        docker-compose_mgmt   macvlan             local
[...]

jenkins@jumpserver:~$ docker network inspect docker-compose_mgmt
[
    {
        "Name": "docker-compose_mgmt",
        [...]
        "Options": {
            "parent": "veth_mcp3"
        },
    }
]

Before investigating the rest of the cluster networking configuration, the first thing to check is that cfg01 has network connectivity to other jumpserver hosted nodes, e.g. mas01 and to the jumpserver itself (provided that the jumpserver has an IP address in that particular network segment).

jenkins@jumpserver:~$ docker exec -it fuel bash
root@cfg01:~# ifconfig -a | grep inet
    inet addr:10.20.0.2    Bcast:0.0.0.0  Mask:255.255.255.0
    inet addr:172.16.10.2  Bcast:0.0.0.0  Mask:255.255.255.0
    inet addr:192.168.11.2 Bcast:0.0.0.0  Mask:255.255.255.0


For each network of interest (mgmt, PXE/admin), check that cfg01 can ping the jumpserver IP in that network segment.

Note: mcpcontrol is set up at container bringup, so it should always be available, while the other networks are configured by Salt as part of the virtual_init STATE file.

root@cfg01:~# ping -c1 10.20.0.1  # mcpcontrol jumpserver IP
root@cfg01:~# ping -c1 10.20.0.3  # mcpcontrol mas01 IP

Tip: mcpcontrol CIDR is configurable via INSTALLER_IP env var during deployment. However, IP offsets inside that segment are hard set to .1 for the jumpserver, .2 for cfg01, respectively to .3 for mas01 node.

root@cfg01:~# salt 'mas*' pillar.item --out yaml \
              _param:infra_maas_node01_deploy_address \
              _param:infra_maas_node01_address
mas01.mcp-ovs-noha.local:
  _param:infra_maas_node01_address: 172.16.10.12
  _param:infra_maas_node01_deploy_address: 192.168.11.3

root@cfg01:~# ping -c1 192.168.11.1   # PXE/admin jumpserver IP
root@cfg01:~# ping -c1 192.168.11.3   # PXE/admin mas01 IP
root@cfg01:~# ping -c1 172.16.10.1    # mgmt jumpserver IP
root@cfg01:~# ping -c1 172.16.10.12   # mgmt mas01 IP

Tip: Jumpserver IP addresses for PXE/admin, mgmt and public bridges are user-chosen and manually set, so above snippets should be adjusted accordingly if the user chose a different IP, other than .1 in each CIDR.

Alternatively, a quick nmap scan would work just as well.

root@cfg01:~# apt update && apt install -y nmap
root@cfg01:~# nmap -sn 10.20.0.0/24     # expected: cfg01, mas01, jumpserver
root@cfg01:~# nmap -sn 192.168.11.0/24  # expected: cfg01, mas01, jumpserver
root@cfg01:~# nmap -sn 172.16.10.0/24   # expected: cfg01, mas01, jumpserver

3.6.5 Check DHCP Reaches Cluster Nodes

One common symptom observed during failed commissioning is that DHCP does not work as expected between cluster nodes (baremetal nodes in the cluster; or virtual machines on the jumpserver in case of hybrid deployments) and the MaaS node.

To confirm or rule out this possibility, monitor the serial console output of one (or more) cluster nodes during MaaS commissioning. If the node is properly configured to attempt PXE boot, yet it times out waiting for an IP address from mas01 DHCP, it’s worth checking that DHCP packets reach the jumpserver, respectively the mas01 container.

jenkins@jumpserver:~$ sudo apt update && sudo apt install -y dhcpdump
jenkins@jumpserver:~$ sudo dhcpdump -i admin_br

Tip: If DHCP requests are present, but no replies are sent, iptables might be interfering on the jumpserver.
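
One way to confirm this is to inspect the forwarding rules and packet counters for the PXE/admin bridge; the snippet below is a minimal sketch (chain names and rules depend on the local firewall setup, and the ACCEPT rule is only a temporary, illustrative workaround):

jenkins@jumpserver:~$ sudo iptables -L FORWARD -n -v | grep admin_br
jenkins@jumpserver:~$ # if DHCP (UDP ports 67/68) is being dropped, temporarily allow it on the bridge
jenkins@jumpserver:~$ sudo iptables -I FORWARD -i admin_br -p udp --dport 67:68 -j ACCEPT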


3.6.6 Check MaaS Logs

If networking looks fine, yet nodes still fail to commission and/or deploy, the MaaS logs might offer more details about the failure:

• /var/log/maas/maas.log

• /var/log/maas/rackd.log

• /var/log/maas/regiond.log

Tip: If the problem is with the cluster node and not on the MaaS server, the node's kernel logs usually contain useful information. These are saved via rsyslog on the mas01 node in /var/log/maas/rsyslog.
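
For convenience, these logs can also be inspected remotely from cfg01 via Salt (a sketch; the exact layout under the rsyslog directory depends on the MaaS version and node names):

root@cfg01:~# salt 'mas*' cmd.run 'tail -n 50 /var/log/maas/rackd.log'
root@cfg01:~# salt 'mas*' cmd.run 'ls -R /var/log/maas/rsyslog'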

3.7 Recovering Failed Deployments

The first deploy attempt might fail due to various reasons. If the problem is not systemic (i.e. fixing it will not introduce incompatible configuration changes, like setting a different INSTALLER_IP), the environment is safe to be reused and the deployment process can pick up from where it left off.

Leveraging these mechanisms requires a minimum understanding of how the deploy process works, at least for manual STATE runs.

3.7.1 Automatic (re)deploy

OPNFV Fuel's deploy.sh script offers a dedicated argument for this, -f, which will skip executing the first N STATE files, where N is the number of -f occurrences in the argument list.

Tip: The list of STATE files to be executed for a specific environment depends on the OPNFV scenario chosen, the deployment type (virtual, baremetal or hybrid) and the presence/absence of a VCP (virtualized control plane).

e.g.: Let's consider a baremetal environment, with VCP and a simple scenario os-nosdn-nofeature-ha, where deploy.sh failed executing the openstack_ha STATE file.

The simplest redeploy approach (which usually works for any combination of deployment type/VCP/scenario) is to issue the same deploy command as the original attempt used, adding a single -f:

jenkins@jumpserver:~/fuel$ ci/deploy.sh -l <lab_name> -p <pod_name> \
                           -s <scenario> [...] \
                           -f  # skips running the virtual_init STATE file

All STATE files are re-entrant, so the above is equivalent (but a little slower) to skipping all STATE files before the openstack_ha one, like:

jenkins@jumpserver:~/fuel$ ci/deploy.sh -l <lab_name> -p <pod_name> \
                           -s <scenario> [...] \
                           -ffff  # skips virtual_init, maas, baremetal_init, virtual_control_plane

Tip: For fine-tuning the infrastructure setup steps executed during deployment, see also the -e and -P deploy arguments.


Note: On rare occasions, the cluster cannot be idempotently redeployed (e.g. a broken MySQL/Galera cluster), in which case some cleanup is due before (re)running the STATE files. See the -E deploy argument, which erases the VCP VMs when used once (-E), or additionally forces MaaS node deletion and redeployment of all baremetal nodes when used twice (-EE).
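
For example, to erase the VCP VMs and then resume the deployment, re-issue the original deploy command with a single -E (a sketch following the same pattern as the redeploy commands above):

jenkins@jumpserver:~/fuel$ ci/deploy.sh -l <lab_name> -p <pod_name> \
                           -s <scenario> [...] \
                           -E  # erase previously created VCP VMs before redeploying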

3.7.2 Manual STATE Run

Instead of leveraging the full deploy.sh, one could execute the STATE files one by one (or partially) from cfg01.

However, this requires a better understanding of how the list of STATE files to be executed is constructed for a specific scenario, depending on the deployment type and on whether the cluster has baremetal nodes, implemented in:

• mcp/config/scenario/defaults.yaml.j2

• mcp/config/scenario/<scenario-name>.yaml

e.g.: For the example presented above (baremetal with VCP, os-nosdn-nofeature-ha), the list of STATE files would be:

• virtual_init

• maas

• baremetal_init

• virtual_control_plane

• openstack_ha

• networks

To execute one (or more) of the remaining STATE files after a failure:

jenkins@jumpserver:~$ docker exec -it fuel bash
root@cfg01:~$ cd ~/fuel/mcp/config/states
root@cfg01:~/fuel/mcp/config/states$ ./openstack_ha
root@cfg01:~/fuel/mcp/config/states$ CI_DEBUG=true ./networks

For even finer granularity, one can also run the commands in a STATE file one by one manually, e.g. if the execution failed applying the rabbitmq sls:

root@cfg01:~$ salt -I 'rabbitmq:server' state.sls rabbitmq
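
To see exactly which salt calls a given STATE file performs before running them manually, the file itself can simply be inspected (a trivial example; the contents differ between releases and scenarios):

root@cfg01:~$ cat ~/fuel/mcp/config/states/openstack_ha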

3.8 Exploring the Cloud with Salt

To gather information about the cloud, Salt commands can be used. Salt is built around a master-minion model, where the salt-master pushes configuration to the minions and triggers actions on them.

For example, tell Salt to execute a ping to 8.8.8.8 on all the nodes:

root@cfg01:~$ salt "*" network.ping 8.8.8.8
                   ^^^ target
                       ^^^^^^^^^^^^ function to execute
                                    ^^^^^^^ argument passed to the function


Tip: Complex target filters can be used, such as compound queries or node roles.

For more information about Salt see the References section.

Some examples are listed below. Note that these commands are issued from the Salt master as the root user.

3.8.1 View the IPs of All the Components

root@cfg01:~$ salt "*" network.ip_addrs
cfg01.mcp-odl-ha.local:
    - 10.20.0.2
    - 172.16.10.100
mas01.mcp-odl-ha.local:
    - 10.20.0.3
    - 172.16.10.3
    - 192.168.11.3
.........................

3.8.2 View the Interfaces of All the Components and Put the Output in a yaml File

root@cfg01:~$ salt "*" network.interfaces --out yaml --output-file interfaces.yaml
root@cfg01:~# cat interfaces.yaml
cfg01.mcp-odl-ha.local:
  enp1s0:
    hwaddr: 52:54:00:72:77:12
    inet:
    - address: 10.20.0.2
      broadcast: 10.20.0.255
      label: enp1s0
      netmask: 255.255.255.0
    inet6:
    - address: fe80::5054:ff:fe72:7712
      prefixlen: '64'
      scope: link
    up: true
.........................

3.8.3 View Installed Packages on MaaS Node

root@cfg01:~# salt "mas*" pkg.list_pkgs
mas01.mcp-odl-ha.local:
    ----------
    accountsservice:
        0.6.40-2ubuntu11.3
    acl:
        2.2.52-3
    acpid:
        1:2.0.26-1ubuntu2
    adduser:
        3.113+nmu3ubuntu4
    anerd:
        1
.........................

3.8.4 Execute Any Linux Command on All Nodes (e.g. ls /var/log)

root@cfg01:~# salt "*" cmd.run 'ls /var/log'
cfg01.mcp-odl-ha.local:
    alternatives.log
    apt
    auth.log
    boot.log
    btmp
    cloud-init-output.log
    cloud-init.log
.........................

3.8.5 Execute Any Linux Command on Nodes Using Compound Queries Filter

root@cfg01:~# salt -C '* and cfg01*' cmd.run 'ls /var/log'
cfg01.mcp-odl-ha.local:
    alternatives.log
    apt
    auth.log
    boot.log
    btmp
    cloud-init-output.log
    cloud-init.log
.........................

3.8.6 Execute Any Linux Command on Nodes Using Role Filter

root@cfg01:~# salt -I 'nova:compute' cmd.run 'ls /var/log'
cmp001.mcp-odl-ha.local:
    alternatives.log
    apache2
    apt
    auth.log
    btmp
    ceilometer
    cinder
    cloud-init-output.log
    cloud-init.log
.........................

3.9 Accessing Openstack

Once the deployment is complete, the OpenStack CLI is accessible from the controller VMs (ctl01 ... ctl03).

The OpenStack credentials are available at /root/keystonercv3.


root@ctl01:~# source keystonercv3
root@ctl01:~# openstack image list
+--------------------------------------+-------------+--------+
| ID                                   | Name        | Status |
+--------------------------------------+-------------+--------+
| 152930bf-5fd5-49c2-b3a1-cae14973f35f | CirrosImage | active |
| 7b99a779-78e4-45f3-9905-64ae453e3dcb | Ubuntu16.04 | active |
+--------------------------------------+-------------+--------+

The OpenStack Dashboard, Horizon, is available at http://<proxy public VIP>. The administrator credentials are admin/opnfv_secret.

A full list of IPs/services is available at <proxy public VIP>:8090 for baremetal deploys.

3.10 Guest Operating System Support

There are a number of possibilities regarding the guest operating systems which can be spawned on the nodes. The current system spawns virtual machines for the VCP VMs on the KVM nodes and for the VMs requested by users on the OpenStack compute nodes. Currently the system supports the following UEFI images for the guests:


OS name            x86_64 status   aarch64 status
Ubuntu 17.10       untested        Full support
Ubuntu 16.04       Full support    Full support
Ubuntu 14.04       untested        Full support
Fedora atomic 27   untested        Full support
Fedora cloud 27    untested        Full support
Debian             untested        Full support
Centos 7           untested        Not supported
Cirros 0.3.5       Full support    Full support
Cirros 0.4.0       Full support    Full support

The above table covers only UEFI images and implies OVMF/AAVMF firmware on the host. An x86_64 deployment also supports non-UEFI images, however that choice is up to the underlying hardware and the administrator to make.

The images for the above operating systems can be found on their respective websites.
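
As an illustration, a Cirros image could be downloaded and registered in Glance as follows (a hedged sketch; the download URL and image name are examples only and not part of the OPNFV Fuel tooling):

root@ctl01:~# wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
root@ctl01:~# openstack image create --disk-format qcow2 --container-format bare \
              --file cirros-0.4.0-x86_64-disk.img cirros-0.4.0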

3.11 OpenStack Storage

OpenStack Cinder is the project behind block storage in OpenStack and OPNFV Fuel supports LVM out of the box.

By default x86_64 supports 2 additional block storage devices, while aarch64 supports only one.

More devices can be supported if the OS image created has additional properties allowing block storage devices to be spawned as SCSI drives. To do this, add the properties below to the image:

root@ctl01:~$ openstack image set --property hw_disk_bus='scsi' \
                                  --property hw_scsi_model='virtio-scsi' \
                                  <image>

The choice regarding which bus to use for the storage drives is an important one. virtio-blk is the default choice for OPNFV Fuel, which attaches the drives in /dev/vdX. However, since we want to be able to attach a larger number of volumes to the virtual machines, we recommend the switch to SCSI drives, which are attached in /dev/sdX instead.

virtio-scsi is a little worse in terms of performance, but the ability to add a larger number of drives, combined with added features like ZFS, Ceph et al., leads us to suggest the use of virtio-scsi in OPNFV Fuel for both architectures.
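
For example, with an image carrying the properties set above, a newly attached volume shows up inside the guest as /dev/sdX (a sketch; <instance> is a placeholder for an existing server and test-volume is an arbitrary name):

root@ctl01:~# openstack volume create --size 10 test-volume
root@ctl01:~# openstack server add volume <instance> test-volume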

More details regarding the differences and performance of virtio-blk vs virtio-scsi are beyond the scope of this manual but can be easily found in other sources online, like VirtIO SCSI or VirtIO performance.

Additional configuration for configuring images in OpenStack can be found in the OpenStack Glance documentation.


3.12 OpenStack Endpoints

For each OpenStack service three endpoints are created: admin, internal and public.

ubuntu@ctl01:~$ openstack endpoint list --service keystone
+----------------------------------+-----------+--------------+--------------+---------+-----------+------------------------------+
| ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                          |
+----------------------------------+-----------+--------------+--------------+---------+-----------+------------------------------+
| 008fec57922b4e9e8bf02c770039ae77 | RegionOne | keystone     | identity     | True    | internal  | http://172.16.10.26:5000/v3  |
| 1a1f3c3340484bda9ef7e193f50599e6 | RegionOne | keystone     | identity     | True    | admin     | http://172.16.10.26:35357/v3 |
| b0a47d42d0b6491b995d7e6230395de8 | RegionOne | keystone     | identity     | True    | public    | https://10.0.15.2:5000/v3    |
+----------------------------------+-----------+--------------+--------------+---------+-----------+------------------------------+

MCP sets up all OpenStack services to talk to each other over unencrypted connections on the internal management network. All admin/internal endpoints use plain http, while the public endpoints are https connections terminated via nginx at the VCP proxy VMs.

To access the public endpoints an SSL certificate has to be provided. For convenience, the installation script will copy the required certificate to the cfg01 node at /etc/ssl/certs/os_cacert.

Copy the certificate from the cfg01 node to the client that will access the https endpoints and place it under /etc/ssl/certs/. The SSL connection will then be established automatically.

jenkins@jumpserver:~$ ssh -o StrictHostKeyChecking=no -i /var/lib/opnfv/mcp.rsa -l ubuntu 10.20.0.2 \
                      "cat /etc/ssl/certs/os_cacert" | sudo tee /etc/ssl/certs/os_cacert
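
With the certificate in place, OpenStack clients on that host can validate the public endpoints, e.g. by pointing OS_CACERT at it (a hedged example; the client also needs the OpenStack credentials, e.g. a copy of keystonercv3, sourced beforehand):

jenkins@jumpserver:~$ export OS_CACERT=/etc/ssl/certs/os_cacert
jenkins@jumpserver:~$ # source the OpenStack credentials first, then query a public endpoint
jenkins@jumpserver:~$ openstack endpoint list --service keystone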

3.13 Reclass Model Viewer Tutorial

In order to get a better understanding of the reclass model Fuel uses, the reclass-doc tool can be used to visualise it.

To avoid installing packages on the jumpserver or another host, the cfg01 Docker container can be used. Since the fuel git repository located on the jumpserver is already mounted inside the cfg01 container, the results can be visualized using a web browser on the jumpserver at the end of the procedure.

jenkins@jumpserver:~$ docker exec -it fuel bash
root@cfg01:~$ apt-get update
root@cfg01:~$ apt-get install -y npm nodejs
root@cfg01:~$ npm install -g reclass-doc
root@cfg01:~$ ln -s /usr/bin/nodejs /usr/bin/node
root@cfg01:~$ reclass-doc --output ~/fuel/mcp/reclass/modeler \
                          ~/fuel/mcp/reclass

The generated documentation should be available on the jumpserver inside the fuel git repo, under the mcp/reclass/modeler/index.html subpath.
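
If no graphical browser is available directly on the jumpserver, the generated pages can also be served over HTTP and opened from another machine (a sketch, assuming Python 3.7+ for the --directory option):

jenkins@jumpserver:~/fuel$ python3 -m http.server 8000 --directory mcp/reclass/modeler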


3.14 References

1. OPNFV Fuel Installation Instruction

2. Saltstack Documentation

3. Saltstack Formulas

4. VirtIO performance

5. VirtIO SCSI


CHAPTER 4: OPNFV FUEL SCENARIOS

4.1 os-nosdn-nofeature-noha overview and description

This document provides scenario-level details for the Iruya 9.0 deployment with no SDN controller and no extra features enabled.

4.1.1 Introduction

This scenario is used primarily to validate and deploy a Stein OpenStack deployment without any NFV features or SDN controller enabled.

4.1.2 Scenario components and composition

This scenario is composed of common OpenStack services enabled by default, including Nova, Neutron, Glance, Cinder, Keystone, Horizon.

4.1.3 Scenario usage overview

Simply deploy this scenario by setting os-nosdn-nofeature-noha as the scenario deploy parameter.
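
For example (a sketch following the deploy syntax used earlier in this guide; the lab and pod names are placeholders):

jenkins@jumpserver:~/fuel$ ci/deploy.sh -l <lab_name> -p <pod_name> \
                           -s os-nosdn-nofeature-noha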

4.1.4 Limitations, Issues and Workarounds

Tested on virtual deploy only.

4.1.5 References

For more information on the OPNFV Iruya release, please visit https://www.opnfv.org/software

4.2 os-nosdn-nofeature-ha overview and description

This document provides scenario-level details for the Iruya 9.0 deployment with no SDN controller and no extra features enabled.


4.2.1 Introduction

This scenario is used primarily to validate and deploy a Stein OpenStack deployment without any NFV features or SDN controller enabled.

4.2.2 Scenario components and composition

This scenario is composed of common OpenStack services enabled by default, including Nova, Neutron, Glance, Cinder, Keystone, Horizon.

All services are in HA, meaning that there are multiple cloned instances of each service, and they are balanced by HAProxy using a Virtual IP Address per service.

4.2.3 Scenario usage overview

Simply deploy this scenario by setting os-nosdn-nofeature-ha as the scenario deploy parameter.

4.2.4 Limitations, Issues and Workarounds

None

4.2.5 References

For more information on the OPNFV Iruya release, please visit https://www.opnfv.org/software

4.3 os-odl-nofeature-noha overview and description

This document provides scenario-level details for the Iruya 9.0 deployment with the OpenDaylight controller.

4.3.1 Introduction

This scenario is used primarily to validate and deploy a Stein OpenStack deployment with the OpenDaylight Neon controller enabled.

4.3.2 Scenario components and composition

This scenario is composed of common OpenStack services enabled by default, including Nova, Neutron, Glance, Cinder, Keystone, Horizon. It also installs OpenDaylight as an SDN controller on a dedicated node.

4.3.3 Scenario usage overview

Simply deploy this scenario by setting os-odl-nofeature-noha as the scenario deploy parameter.

4.3.4 Limitations, Issues and Workarounds

Tested on virtual deploy only.


4.3.5 References

For more information on the OPNFV Iruya release, please visit https://www.opnfv.org/software

4.4 os-odl-nofeature-ha overview and description

This document provides scenario-level details for the Iruya 9.0 deployment with the OpenDaylight controller.

4.4.1 Introduction

This scenario is used primarily to validate and deploy a Stein OpenStack deployment with the OpenDaylight Neon controller enabled.

4.4.2 Scenario components and composition

This scenario is composed of common OpenStack services enabled by default, including Nova, Neutron, Glance, Cinder, Keystone, Horizon. All services are in HA, meaning that there are multiple cloned instances of each service, and they are balanced by HAProxy using a Virtual IP Address per service. OpenDaylight is installed as an SDN controller on one of the controller nodes.

4.4.3 Scenario usage overview

Simply deploy this scenario by setting os-odl-nofeature-ha as the scenario deploy parameter.

4.4.4 Limitations, Issues and Workarounds

None

4.4.5 References

For more information on the OPNFV Iruya release, please visit https://www.opnfv.org/software
