Deploying OpenStack Using Docker in Production


ERIC: Introductions

This talk is about deploying *OpenStack* with Docker, not deploying Docker containers *with* OpenStack.

Overview
- The Pain of Operating OpenStack
- Possible Solutions
- Why Docker Works
- Why Docker Doesn't Work
- Docker @ TWC
- Lessons Learned


Docker & OpenStack @ TWC
- Docker in production in July 2015
- First service was Designate
- Added Heat, Nova and Keystone
- Nova using Ceph and SolidFire backends
- Neutron in progress
- Glance and Cinder later this year
- Using Docker 1.10 and Docker Registry V2

Just a bit of background. We first started using Docker in production in July of last year. The first service we deployed with Docker was Designate, followed by Heat, Nova, then Keystone. With Nova we did a two-stage deploy: control node services first, then compute a while later. With Nova we're running Ceph and SolidFire as storage backends, and it *is* possible to get nova-compute and iSCSI working inside a Docker container. We'll be moving Neutron into Docker next, then coming back to Glance and Cinder. The primary short-term driver for Neutron is the OVS agent restart fixes; agent restarts currently cause small outages. Those changes have largely been merged in the last couple of months, but we try to run stable release branches in prod, and we're seeing the changes land in stable branches now. We're using Docker 1.10 with Docker Registry V2.

How Did We End Up Here?
- Started with packages for deployments
- Don't like big-bang upgrades
- Want to be able to carry local patches
- Want to run mixed versions of services
- Smaller upgrades, more often

So how did we end up deploying OpenStack services with Docker? We've traditionally used packages for deployments, but over time we realized packages weren't meeting our requirements very well. Packages tend to lead to a big-bang type of upgrade: we run multiple services on the same set of control servers, and when doing upgrades our API outages were longer and riskier than we wanted them to be. We want to be able to carry local patches and cherry-pick fixes from master branches. Many times we run into a bug, find it on Launchpad, and see that a fix is committed on master but not backported, or that a fix is backported but the package isn't ready yet. We can do some of those backports ourselves. We also don't want to have to run the same version of OpenStack for all services. For example, we're much more aggressive about upgrading services like Horizon and Heat than Nova and Neutron, and we want to upgrade services independently of each other. We also want to follow stable updates more aggressively than distros do: only a few stable releases are cut over the six-month lifetime of an OpenStack release, and distros usually lag behind those by weeks, if not longer. We want to do smaller upgrades, more often, one or two services at a time. So you may be thinking: why can't you do this with packages, virtualenvs, etc.? We looked at and tried some different options.

Why Not Packages?
- Built packages for Keystone
- Worked for local patches
- Worked for updating stable branches
- Doesn't work for mixed releases
- Limited by distro Python packaging
- Packaging workflow is a pain
- Packages slow down your workflow
- Package may not exist yet

We tried packages for Keystone. We took the packages from Canonical, replaced the source in them, and left them mostly the same otherwise (a rough sketch of that workflow follows below). This worked reasonably well for carrying patches and for stable updates, but it didn't work well for mixed OpenStack releases: with normal distro packaging, you can't have two versions of the same Python library installed at the same time, and there are significant conflicts in library requirements across OpenStack releases. Because of this we were still dependent on Canonical for packaging the Python libraries the services depended on. The packaging workflow on Debian/Ubuntu isn't rocket science, but it clearly hasn't changed much in the last ten years. I hate it. There are times when we want the latest and greatest of some Python library, which may not even have a package built for it yet, and if you use pip install to put Python libraries in system space, there is no telling what you might end up with, especially when installing from git URLs.
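For anyone unfamiliar with the repack workflow described above, here is a rough, hypothetical sketch; the package name and local version suffix are illustrative, and this is not TWC's exact tooling:

    # Fetch the distro source package, then swap in our own source tree
    # (e.g. a stable branch with local patches), keeping debian/ packaging
    apt-get source keystone
    cd keystone-*/
    # ... replace the upstream source here, leaving debian/ intact ...

    # Record a local version bump, then build an unsigned .deb
    dch --local +twc1 "Rebuild with local patches"
    dpkg-buildpackage -us -uc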

Why Not Python Virtual Envs?
- Deployed Designate with virtual envs
- Mirrored Python packages internally
- Built virtual envs on servers
- Was slow to deploy
- Still have to install/manage non-Python deps

Another option you sometimes hear people using is Python virtual environments. We use a virtual environment for Horizon, which probably has the most dependencies. We originally deployed Designate using Python virtual environments, because there were no packages available. We mirrored the Python packages internally, built them into wheels, and created the virtualenvs on the servers at deploy time (see the sketch below). This met most of our requirements, but it was slow, we had issues with Python modules that required external commands, shared libraries, etc., and there was still an issue with shared dependencies, such as an oslo library that reads from a shared location on the filesystem like /etc/nova/foo.
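A minimal sketch of that wheel-based deploy flow, assuming an internal package mirror; the pypi.internal URL and the paths are hypothetical:

    # Build wheels once, against the internal mirror
    pip wheel --index-url https://pypi.internal/simple \
        --wheel-dir /tmp/wheels -r requirements.txt

    # On each server at deploy time: create the virtualenv and install
    # from the pre-built wheels only, with no index access needed
    virtualenv /opt/openstack/designate
    /opt/openstack/designate/bin/pip install \
        --no-index --find-links /tmp/wheels -r requirements.txt

Even with everything pre-built as wheels, this still leaves the non-Python dependencies (shared libraries, external commands) to be managed on the host, which is the gap noted above.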

Why Docker?

Everyone Else Is Doing It?

Everyone else is doing it? I'm only kind of kidding here. Yes, you may have weird problems with Docker in some cases, but nearly every problem we've had, other people have had also. It's getting better: it's being actively developed and it's maturing at an impressive pace. Packaging tools aren't improving, and there aren't lots of mature toolchains for deploying Python-based virtual environments across dev, staging, and prod. Don't discount the value of following the crowd in this case. Besides, you're running OpenStack already, right? You're used to deploying software to production that has what we might call a quirky personality?

Why Docker?
- Reproducible builds
- Easy to distribute artifacts
- Contains all dependencies
- Easy to install multiple versions of an image

But aside from that, why Docker? Being able to reproduce builds and deployments is really important for us. When we do a build, we're able to encapsulate everything that is needed to run that service, and when we do a deploy, we're only dependent on our internal Docker registry. It's easy to automate building and distributing Docker images. And when you build your images, it solves the problem of managing shared libraries and other dependencies: it's all inside the image. It's also easy to install multiple versions of a Docker image on a given server. When we've done upgrades in the past, the majority of the upgrade time was package download, install, and configuration. With Docker we can pre-stage the new image, so an upgrade just ends up being running database migrations, making any needed config changes, and starting the service with the new image (roughly sketched below).
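A rough sketch of what that pre-staged upgrade can look like, using Keystone as an example; the registry host, image tag, and container name are hypothetical, while keystone-manage db_sync is Keystone's standard migration command:

    # Ahead of the maintenance window: pre-stage the new image
    docker pull registry.internal:5000/keystone:stable-2

    # During the window: run DB migrations with the new code,
    # then swap the running container
    docker run --rm -v /etc/keystone:/etc/keystone \
        registry.internal:5000/keystone:stable-2 keystone-manage db_sync
    docker stop keystone && docker rm keystone
    docker run -d --name keystone -v /etc/keystone:/etc/keystone \
        registry.internal:5000/keystone:stable-2

The download cost is paid before the window, so the API outage is limited to the migration and the restart itself.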

Why Not Docker?
- Restarting Docker restarts containers
- Intermittent bugginess
- Complex services are hard to fit into Docker
- Requires new tooling for build/deployment/etc.

So why wouldn't you want to use Docker for deploying OpenStack? Restarting the Docker daemon restarts all containers (fixed in some future version), which can be a major issue for things like the Neutron OVS agent. Docker does have bugs: we've seen intermittent issues with the aufs backend, and we've also seen intermittent issues on new installs with the Docker bridge not being configured correctly. However, we've been able to work around these relatively minor issues. Some services like Keystone or Heat are pretty easy to get into a container, but more complex services like Neutron require a lot of specific configuration in order to talk to OVS and create network namespaces, and Nova requires special configuration for talking to storage and libvirt (a hedged example follows below). Also, unless you're already deploying services with Docker, you're going to need new tooling for building images, installing them, and making sure they run; that's yet another thing to manage and version. For example, the existing Puppet modules for OpenStack don't have any direct Docker support. That's something we're maintaining ourselves, but we'll talk about that more in a bit. Let's talk a little about how we deploy OpenStack using Docker at Time Warner Cable.
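To make the "complex services" point concrete, here is a hedged sketch of the kind of flags an OVS agent container tends to need so it can reach the host network stack, the OVS sockets, and the host's network namespaces; the image name is hypothetical and the exact mounts vary by deployment:

    # Hypothetical invocation: --net=host and --privileged give the agent
    # host-level access; the volume mounts expose the service config, the
    # OVS unix sockets, and the host network namespaces
    docker run -d --name neutron-ovs-agent \
        --net=host --privileged \
        -v /etc/neutron:/etc/neutron \
        -v /run/openvswitch:/run/openvswitch \
        -v /run/netns:/run/netns:shared \
        registry.internal:5000/neutron:stable \
        neutron-openvswitch-agent --config-file /etc/neutron/neutron.conf

Compare that to a Keystone container, which typically needs little more than its config directory mounted in.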

Docker @ TWC: Images
- Building base images using debootstrap
- Build openstack-dev image based on that
  - Contains all common deps
- Image per OpenStack service
- Per-service base requirements.txt and a frozen one
  - Frozen requirements.txt is used for image builds
  - Uses upper-constraints.txt [1] for frozen requirements

[1] https://github.com/openstack/requirements/blob/master/upper-constraints.txt

CLAYTON: So we've covered some background and the reasons why and why not to use Docker. Now let's talk about how we're deploying services with Docker, starting with how we build our Docker images.

We build our base images from an internal Ubuntu mirror using debootstrap. On top of that we build an image we call openstack-dev. This is a relatively fat image that all OpenStack services are built on top of; it includes all the shared libraries and command-line tools needed by any service. From there we build per-service images (a nova image, a keystone image, etc.). One key thing here is that we want to be very explicit about which versions of dependencies we build an image with, so that we get reproducible results. To achieve that, we have two requirements.txt files per service: one is very high level, and the other contains all dependencies pinned to specific versions. For example, the high-level requirements.txt for nova pulls in nova itself, the MySQL driver, the memcache client, and some internal plugins we've developed. From that high-level requirements file, we have a tool that builds a Python virtual environment locally. We build that virtualenv using the upper-constraints.txt file from the upstream infra project, which ensures we're using tested and supported versions of the libraries going into it (sketched below). From that virtual environment …
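A minimal sketch of the two build steps described above, assuming an internal Ubuntu mirror and a checkout of the upstream upper-constraints.txt; the mirror URL, suite, and file names are illustrative:

    # Base image: bootstrap a minimal Ubuntu tree from the internal
    # mirror and import it as a Docker image
    debootstrap trusty /tmp/rootfs http://mirror.internal/ubuntu
    tar -C /tmp/rootfs -c . | docker import - local/ubuntu-base:trusty

    # Freeze step: resolve the high-level requirements.txt against
    # upstream's tested constraints, then pin what actually installed
    virtualenv /tmp/freeze-env
    /tmp/freeze-env/bin/pip install \
        -c upper-constraints.txt -r requirements.txt
    /tmp/freeze-env/bin/pip freeze > requirements-frozen.txt

The frozen file, not the high-level one, is what feeds the image build, which is what makes the builds reproducible.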