Containerised Testing at Demonware : PyCon Ireland 2016


PyCon Ireland 2016

Containerised Testing at Demonware

“Quality is free, but

only to those willing

to pay heavily for

it.”

T. DeMarco and T. Lister

Who are Demonware?

Demonware provide online services and infrastructure for some of the world’s most popular video game franchises.

Bio

Thomas is a Build Engineer at Demonware and works closely with the development teams to optimise their CI/CD pipelines. His background is in QA, and previous roles include Configuration Management and Test Tools automation.

James is a Django developer on a Demonware internal product. Before Demonware he worked at a handful of startups, largely in web development.

Agenda

● Continuous Delivery
● Pets vs Cattle
● 4 Stages of Evolution
● CaaS
● TaaS
● The future

Continuous Delivery

Define → Develop → Build → Integrate → Test → Release → Deploy

Continuous Integration covers the stages up to Test; Continuous Delivery extends the pipeline through Release and Deploy.

Why Continuous Delivery ?

● Deliver new services, features and updates rapidly

● Reduce manual intervention

● Shorter feedback loop

● Reduce cost of deployments

● Reduce risk

Quality is no longer important ...

It's CRITICAL

Continuous Delivery without Quality

Best case scenario vs worst case scenario

Time to deliver

● The Continuous Delivery pipeline is only as fast as the slowest stage

● The Build and Deploy stages took minutes

● The Test stage took 6+ hours for a full set of test suites to complete

● The Test stage was a blockage in our CD pipeline

Development teams began optimising their tests.

• Split tests across containers
• Measure execution time to manage slices of tests
• Quality metrics like static analysis and coverage reports
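The "measure execution time to manage slices of tests" idea can be sketched as a greedy bin-packing of tests into containers. This is an illustrative sketch, not Demonware's actual tooling; the function name and timings below are invented for the example.

```python
# Hypothetical sketch: split tests into N time-balanced slices using a
# greedy heuristic driven by previously measured execution times.
from heapq import heappush, heappop

def split_tests(durations, num_slices):
    """Assign each test to the slice with the smallest total runtime so far.

    durations: dict mapping test name -> measured execution time (seconds)
    num_slices: number of containers to split across
    Returns a list of lists of test names, one per slice.
    """
    # Min-heap of (total_time, slice_index); place the longest tests first.
    heap = [(0.0, i) for i in range(num_slices)]
    slices = [[] for _ in range(num_slices)]
    for name, secs in sorted(durations.items(), key=lambda kv: -kv[1]):
        total, idx = heappop(heap)
        slices[idx].append(name)
        heappush(heap, (total + secs, idx))
    return slices

if __name__ == "__main__":
    timings = {"test_login": 120, "test_search": 90, "test_api": 60,
               "test_ui": 30, "test_db": 30}
    for i, chunk in enumerate(split_tests(timings, 2)):
        print(i, chunk)
```

Any balancing heuristic works here; the point is that slices are sized by measured runtime rather than test count, so no single container becomes the slow stage.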

The Build Engineering team began optimising the test environments.

• Test infrastructure as code
• Immutable and platform agnostic test environments
• Eliminate "Dependency Hell"

Optimising the Test stage

Pets :

● Test environments are manually configured. Setup and tear down happens with each test run.
● Each test environment is a snowflake.
● Everyone has root access.
● Environments are never updated for fear of breaking tests.

Cattle :

● Tests are run in fresh containers; new test run, new container.
● Container images are built from code.
● Developers control the test environment dependencies.
● Test environments are easily reproducible anywhere.

Moving from Pets towards Cattle

Baremetal (2012) → Vagrant VMs (2013) → Docker Containers (2014 - Present)

Each environment follows the same lifecycle: ./setup, ./start tests, ./teardown.

4 Stages of Evolution

1 : Fat containers

2 : Containers wrapped in Bash

3 : Containers defined in yaml

4 : Containers as a service

Fat containers

What is a “Fat” container and how was it created?

In this context it is a container with multiple services installed. We tarred up a CentOS Vagrant VM and imported it into a Docker image.

Pros :

1. It helped get the ball rolling
2. Everything needed to run tests was included
3. Image caching reduced the time required to build new images

Cons :

1. Large base container image, approximately 3 GB
2. The container ran multiple supporting services such as MySQL, RabbitMQ, Apache
3. It took almost as long as a VM for all services to become available

Containers wrapped in Bash

As we began splitting services out into their own containers we needed a way to coordinate them. In particular we needed to link containers and wait for services to become available. We used a high-level Bash wrapper to start, link, and wait for containers to become ready.

Pros :

1. The script was initially quite simple
2. It filled a gap in tooling around multi-container deployments (June 2014)

Cons :

1. The script was duplicated across multiple projects and became unwieldy
2. A lot of logic was required to check service health/status
3. Fragile
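The readiness-checking logic that made the wrapper fragile is essentially a poll-until-ready loop. A minimal Python sketch of that idea (not the original Bash code; the function name is illustrative):

```python
# Hypothetical sketch of "wait for a service to become available": poll a
# TCP port until it accepts connections or a deadline passes. The Bash
# wrapper had to reimplement a variant of this per service.
import socket
import time

def wait_for_port(host, port, timeout=30.0, interval=0.5):
    """Return True once host:port accepts a TCP connection, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            # Service not up yet (refused / unreachable); back off and retry.
            time.sleep(interval)
    return False
```

A TCP check is the crudest form of this; real health checks (is MySQL accepting queries, is RabbitMQ past its boot sequence) need per-service logic, which is exactly what made the wrapper grow unwieldy.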

Containers defined in yaml

We began using docker-compose in August 2015. Compose allowed developers to define complex container orchestration in YAML.

Example :

testsuite:
  build: unittest
  command: python testrunner.py
  ports:
    - "80:80"
  volumes:
    - unittests:/unittests
  links:
    - percona
percona:
  image: percona

Containers defined in yaml

Pros :

1. Compose is easy to use
2. Portable
3. Repeatable
4. Container configuration defined in code
5. Container orchestration requirements defined in code

Cons :

1. Runs on a single host. Not fully cluster aware. This will change very soon.

Test Containers

Limitation of first 3 stages

● Single host, fixed resources

● Fixed number of Test Containers per host

● Fixed number of Tests per Container

● Local debugging of failures is difficult

● Re-running failed/flaky tests is expensive

Containers as a service

We are currently in this stage. With the release of Docker 1.12 we started to look at how and where tests are being run.

2 key features in 1.12 :

● Swarm Mode
○ Natively manage a cluster of Docker Engines, called a swarm. Use the Docker CLI to create a swarm, deploy application services to a swarm, and manage swarm behaviour.

● Services
○ Docker services enable the user to start a replicated, distributed, load-balanced service on a swarm of Engines.

Containers as a service

Docker Swarm mode and Services enable us to provide a much more flexible yet robust and scalable test environment.

Tests as a service

Like any Swarm service, the number of containers is scalable based on the resources available in the Test Cluster.

We need a dynamic way of splitting tests at execution time to maximise the usefulness of this.

Populate a Redis service with test case names.

Each container grabs a chunk of tests from the Redis service. Tests are run and results stored in a shared mount point across all nodes.
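The distribution scheme above can be sketched in Python. For a self-contained example, an in-memory deque stands in for the Redis list; a real worker would LPOP test names from the Redis service and execute each test, writing results to the shared mount. The function names are illustrative.

```python
# Minimal sketch of the test-distribution scheme: a shared queue is seeded
# with test case names and each worker container pops a chunk at a time
# until the queue is drained. A deque stands in for the Redis list here.
from collections import deque

def grab_chunk(queue, size):
    """Pop up to `size` test names from the shared queue."""
    chunk = []
    while queue and len(chunk) < size:
        chunk.append(queue.popleft())
    return chunk

def run_worker(queue, chunk_size=3):
    """Drain the queue chunk by chunk; return the tests this worker ran."""
    ran = []
    while True:
        chunk = grab_chunk(queue, chunk_size)
        if not chunk:
            return ran
        ran.extend(chunk)  # a real worker would execute each test here
```

Because every worker pulls from the same queue, fast workers naturally take more chunks, which is what makes the slicing dynamic at execution time rather than fixed per container.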

What does the test cluster look like?

● 3 Manager nodes and 5 Worker nodes
● Shared volume mounted using GlusterFS, used for results, logs etc.
● Docker Registry (global service)
● Redis (global service)
● Test services per team, e.g. Team A unit tests, Team B unit tests

Tests as a service

How do we create a test service?

● docker service create --name unittests --replicas 1 <image_name>

How do we scale up the test tasks as resources become available ?

● docker service scale unittests=50

How do we scale down when tests are finished ?

● docker service scale unittests=0
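Choosing the number to pass to `docker service scale` can be as simple as capping the desired replica count by the CPUs free across the swarm. This helper is an illustrative assumption, not the talk's actual scheduling logic; its name and figures are invented.

```python
# Hypothetical helper for picking a replica count before running
# `docker service scale unittests=<n>`: cap the desired number of test
# containers by how many fit in the cluster's free CPUs.
def replicas_for(free_cpus_per_node, cpus_per_container, desired):
    """Return min(desired, number of containers the free CPUs can host)."""
    capacity = sum(free // cpus_per_container for free in free_cpus_per_node)
    return min(desired, capacity)
```

For example, three nodes with 8, 8 and 4 free CPUs can host ten 2-CPU test containers, so a request for 50 replicas would be scaled to 10.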

What does this look like ?

1. Check cluster resources
2. Create overlay network
3. Create shared work directory
4. Create Redis global service
5. Start Redis population service
6. Create test service
7. Execute tests
8. Stop Redis population service
9. Cleanup services

The code

Create overlay network :

docker network create -d overlay --subnet 10.0.9.0/24 network_unittests_129

Create global Redis service :

docker service create --name 129_redis --network network_unittests_129 \
  --mount type=bind,src=/home/test_cluster/unittest/129,dst=/data \
  -p 6379:6379 redis

Populate Redis service :

docker service create --name 129_redis_populate --network network_unittests_129 \
  --mount type=bind,src=/home/test_cluster/unittest/129,dst=/tmp --replicas=1 \
  127.0.0.1:5000/populate_redis:latest -c 'cat /tmp/testlist | redis-cli -h 129_redis -p 6379'

Start Test Service :

docker service create --name 129_tests --env REDIS_SERVICE=129_redis \
  --network network_unittests_129 \
  --mount type=bind,src=/home/test_cluster/results/129,dst=/results \
  --replicas=12 127.0.0.1:5000/unittest:129

What other benefits does Docker Swarm provide ?

● Easy to set up
○ docker swarm init
○ docker swarm join --token <swarm_token> <ip of manager>
● Secure by default. Uses TLS
● Service discovery
● Load balancing
● Scaling
● Desired state reconciliation
● Multi-host networking
● Rolling updates

Test Containers

What Docker Swarm solves

● Single host, fixed resources : SOLVED
● Fixed number of Test Containers per host : SOLVED
● Fixed number of Tests per Container : SOLVED
● Local debugging of failures is difficult : SOLVED
● Re-running failed/flaky tests is expensive : SOLVED

Summary

● We increase our resource use, but not by 'throwing resources at it'; instead we optimise for parallel execution

● With a relatively small time investment we were able to innovate

● The tools used to create and manage the test cluster are open source

■ Docker : https://www.docker.com/

■ Ansible : https://www.ansible.com/

■ GlusterFS : https://www.gluster.org/

■ Swarm Visualizer : https://github.com/ManoMarks/docker-swarm-visualizer

● Our containerised tests can scale seamlessly across 3 different platforms

We are hiring:

Full details at demonware.net

Or email : [email protected]

Internships available.

Drop by our stand for more details.

Contact details :

[email protected] / @tomwillfixit

[email protected] / @PROGRAM_IX

Follow us @demonware for engineering related content.

Upcoming Event

Meetup : meetup.com/Docker-Dublin/