Deploying and Scaling a Rails Application with Docker and Friends
TRANSCRIPT
Docker
Resource/Network Isolation
Lightweight
Provides Consistency
Simplifies Distribution
Run Software Anywhere
A Dockerfile for Rails
FROM invisiblelines/ruby:2.2.0
MAINTAINER Kieran Johnson "[email protected]"
ENV PORT 5000
RUN curl -sL https://deb.nodesource.com/setup | sudo bash - && \
apt-get -qqy install nodejs; \
apt-get clean -y; \
apt-get autoremove -y
RUN apt-get -qq update;\
apt-get -qqy install libpq-dev; \
apt-get clean -y; \
apt-get autoremove -y
Setup the Containers
Run our app and link it to the database container
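The `--link db:db` flag below assumes a database container named `db` is already running. A hedged sketch of starting one (the image tag and credentials are assumptions, not from the original deck):

```shell
# Start a Postgres container for the app to link against.
# Image tag and credentials are illustrative assumptions.
docker run -d \
  --name db \
  -e POSTGRES_USER=postgres \
  -e POSTGRES_PASSWORD=postgres \
  postgres:9.4.0
```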
$ docker run -d \
--name myapp \
--link db:db \
-e DB_ENV_POSTGRES_USER=postgres \
-e DB_ENV_POSTGRES_PASSWORD=postgres \
myapp
Setup the Containers
Add Nginx
$ docker run -d \
-p 80:80 \
--name nginx \
--link myapp:myapp \
-v /Users/kieran/Desktop/docker-fig-rails-example/nginx.conf:/etc/nginx/conf.d/default.conf \
nginx
Issues
Links don't (yet) work across nodes
Scaling services requires manually starting containers
Configuration files need updating manually
Docker Compose
Define and run multi-container applications that can be run with a single command
Simple YAML file
Supports most Docker options
Can scale containers
Docker Compose
Sample docker-compose.yml
postgres:
  image: postgres:9.4.0
  environment:
    POSTGRES_USER: 'docker'
    POSTGRES_PASSWORD: 'mysupersecretpassword'
app:
  build: .
  ports:
    - "5000:5000"
  links:
    - postgres
  environment:
    RACK_ENV: production
    DATABASE_URL: postgres://docker:mysupersecretpassword@postgres/docker
Docker Compose
Start Application Stack
$ docker-compose up -d
Scale a Service
$ docker-compose scale app=6
Docker Swarm
Schedule containers on multiple Docker engines from a single point
Add filters/constraints to define which containers run on which nodes
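Constraints in classic Swarm are passed as environment variables on `docker run`. A hedged sketch (the node name and image are assumptions):

```shell
# Ask the Swarm scheduler to place this container only on the
# node named swarm-1. Node name and image are illustrative.
docker run -d \
  -e constraint:node==swarm-1 \
  myapp
```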
Docker Swarm
Each node runs a Swarm agent pointing to a discovery service
$ docker run -d \
swarm join --addr={{node_ip}}:2375 \
consul://consul.service.consul:8500/swarm
Start a single Swarm manager with the same discovery service
$ docker run -d \
-p {{swarm_port}}:2375 \
swarm manage consul://consul.service.consul:8500/swarm
Docker Swarm
Interactions with Swarm are done through the Swarm manager
$ docker -H tcp://{{swarm_ip}}:{{swarm_port}} info
...
Consul - Service Discovery
Register a service with Consul
$ curl -X PUT \
-d "{\"ID\": \"app001\", \"Name\": \"app\", \"Tags\": [], \"Port\": 5000}" \
0.0.0.0:8500/v1/agent/service/register
List of services can now be queried via API
$ curl 0.0.0.0:8500/v1/catalog/services
{"app": [], "consul": []}
Consul - Service Discovery
A single service definition via API
$ curl 0.0.0.0:8500/v1/catalog/service/app
[
{
"ServicePort": 49153,
"ServiceAddress": "",
"ServiceTags": null,
"ServiceName": "app",
"ServiceID": "swarm-0:dockeransible_app_1:5000",
"Address": "10.129.127.240",
"Node": "swarm-0"
}
]
Consul - DNS
Services can now also be queried by DNS*
$ dig A app.service.consul
; <<>> DiG 9.8.3-P1 <<>> A app.service.consul
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 55035
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;app.service.consul. IN A
;; ANSWER SECTION:
app.service.consul. 300 IN A 192.168.0.12
*Consul DNS runs on a non-standard port, so install DNSmasq to query it by default.
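One common way to wire this up is to have DNSmasq forward `.consul` lookups to the local Consul agent's DNS port. A hedged sketch (the config path and Consul's default DNS port 8600 are the usual defaults, assumed here):

```shell
# Forward *.consul queries to the local Consul agent's DNS interface,
# then reload DNSmasq. Path and port are assumed defaults.
echo 'server=/consul/127.0.0.1#8600' | sudo tee /etc/dnsmasq.d/10-consul
sudo service dnsmasq restart
```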
Consul - Health Checks
Health checks monitor services
Remove a service from DNS queries if it fails a health check
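A check can be attached at registration time. A hedged sketch using the same agent API as above (the endpoint path `/health`, interval, and port are assumptions):

```shell
# Register the app with an HTTP health check; Consul polls the URL
# on the given interval and drops the service from DNS on failure.
# The /health endpoint, port and interval are illustrative.
curl -X PUT 0.0.0.0:8500/v1/agent/service/register \
  -d '{
    "ID": "app001",
    "Name": "app",
    "Port": 5000,
    "Check": {
      "HTTP": "http://localhost:5000/health",
      "Interval": "10s"
    }
  }'
```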
Consul - Key/Value Store
Store application configuration data in the cluster
Store a value
$ curl -X PUT -d "bar" 0.0.0.0:8500/v1/kv/foo
Retrieve a value
$ curl 0.0.0.0:8500/v1/kv/foo?raw
Consul - Watches
Watch a view of data and run a handler when the data changes
i.e. when services change, or when the value of a key is changed
Consul - Watches
Using a watch we can perform a deployment across the cluster
$ consul watch -type key -key app/sha /usr/local/bin/deploy.sh
On the Swarm manager node add a watch for the key
When this changes run a script
In the script, loop over the app containers, stopping them one by one and starting the new container
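A minimal sketch of what such a deploy script might look like. Everything here is an assumption for illustration: the container naming scheme, the image name `myapp`, and the `app/sha` key layout.

```shell
#!/bin/sh
# Hypothetical deploy.sh: rolling restart of app containers when
# the app/sha key changes. Names and key layout are assumptions.
set -e

# Read the newly deployed SHA from Consul's KV store
SHA=$(curl -s 0.0.0.0:8500/v1/kv/app/sha?raw)

# Replace each app container in turn so some always stay up
for name in $(docker ps --filter "name=app" --format '{{.Names}}'); do
  docker stop "$name"
  docker rm "$name"
  docker run -d --name "$name" "myapp:$SHA"
done
```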
Configuring Services
Manually updating service definitions in Consul isn't always ideal
Thankfully it can be automated
Registrator
Runs as a container
Works with multiple service registry backends including Consul
Provides sane defaults for registered services
Customisable using environment variables when starting containers
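A hedged example of starting Registrator against Consul (the Consul address is an assumption; the mounted Docker socket is how Registrator sees containers start and stop):

```shell
# Run Registrator; it watches the Docker event stream via the
# mounted socket and registers containers with Consul.
# The Consul address is illustrative.
docker run -d \
  --name registrator \
  -v /var/run/docker.sock:/tmp/docker.sock \
  gliderlabs/registrator \
  consul://consul.service.consul:8500
```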
Dynamic Configuration
With services now running anywhere in the cluster, we need to update some configuration files as the application scales,
e.g. the nginx upstream block
Consul Template
Update templates with values from Consul when any variable in the template changes
Optionally run a command when the template has been generated
$ consul-template -consul 0.0.0.0:8500 \
-template "/tmp/template.ctmpl:/var/www/nginx.conf:docker exec nginx /usr/sbin/nginx -s reload"
Consul Template
Templates use Go templates
{{range service "webapp@datacenter"}}
server {{.Address}}:{{.Port}}{{end}}
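In context, the snippet above might sit inside an nginx upstream block. A hedged sketch of a full template (the service name `app` and the proxy settings are assumptions):

```
# Hypothetical nginx.conf.ctmpl: render every registered "app"
# instance into an upstream block. Names are illustrative.
upstream app {
  {{range service "app"}}server {{.Address}}:{{.Port}};
  {{end}}
}

server {
  listen 80;
  location / {
    proxy_pass http://app;
  }
}
```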
All Together Now
Run Consul, Swarm and Registrator on each node in the cluster
Swarm manager runs on one node
Docker Compose runs the predefined containers through the Swarm manager, which handles scheduling
Registrator updates Consul with services as they become available
Containers query Consul for other services
Consul Template responds to changes in Consuland updates configuration files
Demo Notes
Infrastructure provisioned using Terraform (also from HashiCorp)
Each node is bootstrapped using Ansible to set up Consul/Swarm, DNSmasq and add watches/templates