API Microservices with Node.js and Docker


1

Microservices with Node & Docker

Tony Pujals

CTO & Co-Founder, Atomiq

Monolithic Architecture

2

[Diagram: a monolithic application composed of Presentation, Business, and Data Access layers backed by a database (DB), serving requests from a browser]

Benefits of monolithic architecture

● Relatively straightforward

● Easier to develop, test, and debug code bundled together in a single executable process

● Fairly easy to reason about data flows and application behavior

● Deployment model is easy

● Application is deployed as a unit

● Scaling model is simple

● Scale up by installing on a more powerful server

● Scale out by placing behind a load balancer to distribute requests

6

Deployment process historically significant

● Creating and maintaining server environments has historically been laborious, time-consuming, and expensive

● Consequently, this imposed a practical limit on staging environments
● A typical staged pipeline would include some fixed number of servers

● (local) - developer's workstation
● dev (or sandbox) - first stage for developers to merge and test
● int - integration stage for developers to test with external databases and other services
● qa (or test) - for functional and other types of testing
● uat - customer user acceptance testing and demo
● preprod (or staging) - exact replica of the production environment for final verification tests
● production - live site

7

Things improved over time

with the introduction of

● virtual machine technology
● Vagrant
● Puppet & Chef

Nevertheless, despite the labor, time, and infrastructure savings from virtualization, provisioning environments remained burdensome.

8

How is the deployment process significant?

If the process to provision environments and move an application through a pipeline is laborious and time-consuming, it makes sense to coordinate larger releases that are worth the effort.

9

Ramifications

● Inhibits continuous delivery
● Continuous delivery attempts to reduce the cost and risk associated with delivering incremental changes, in large part by automating as much of the deployment pipeline as possible

● Nevertheless, as a practical matter for large complex applications there is too much effort, cost, and risk involved in deploying an entire application just to update a single feature

● Sub-optimal scalability
● Practical limit on scaling up
● Scaling out is a relatively expensive and inefficient way to scale

● Not all components are under the same load, but can't scale out at the individual component level because the unit of scaling is the entire application

10

Ramifications (cont'd)

● Adverse impact on development
● Larger codebases are more difficult to maintain and extend, and as cognitive overhead increases:
● individual team members become less effective, and contributions take greater effort
● quality is adversely affected

● Requires greater coordination among teams
● everything needs to be in phase for integration

● Entire team forced to march at cadence dictated by application release cycle

● Leads to more epic releases, with all the inherent effort and risks that implies

11

Rise of Microservices

12

Driving the microservice trend

● Proliferation of different types of connected devices, leading to an emphasis on APIs, not applications

● Technology that makes distributed architecture easier to adopt as an alternative to monolithic architecture

13

14

APIs Drive Modern Apps

Consequences of emphasis on APIs

● It becomes desirable for APIs to independently:
● evolve
● deploy
● scale

● Potential for reducing codebase complexity
● through separation of concerns at a physical level
● codebase partitions at functional boundaries instead of layered boundaries

● Potential for reducing development friction
● Developers are liberated from the constraint of delivering and integrating functionality as part of a larger complex bundle
● API teams can move at their own cadence and deploy more frequently

15

Enter Docker

● Container technology based on a legacy that goes back to:
● chroot
● FreeBSD jails
● Solaris zones
● cgroups
● Linux containers (LXC)

● Provides the ability to run processes in isolated operating environments
● A Docker host provides the ability to run processes in isolation from each other
● grants controlled access to system resources and dedicated network configuration
● Unlike VMs, containers use a shared operating system kernel (don't need a guest OS)
● By not virtualizing hardware, containers are far more efficient in terms of system resources
● Containers launch essentially as quickly as a process can be started

Docker provides lightweight, isolated micro operating environments with native process performance characteristics that make microservice architecture practical.

16

17

18

19

Enter Node

● Docker provides an efficient operating environment for isolated processes, but doesn't have anything to do with how the process is developed

● Introduced in 2009, Node.js leverages Google's high performance V8 engine for running JavaScript

● JavaScript was a natural fit for a platform-wide asynchronous callback programming model that exploited event-driven, non-blocking I/O

Node provides a platform for building lightweight, fast, and highly scalable network services ideal for serving modern web APIs.

20

21

Node network server


Constructing a high performance, callback-based HTTP server is as simple as the following script:

var http = require('http');

http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'application/json'});
  res.end('{ "message": "Hello World" }');
}).listen(3000);

console.log('Server listening at http://127.0.0.1:3000/');

22

Launching the server


$ node server.js

Server listening at http://127.0.0.1:3000/

23
https://modulecounts.com/

24
https://npmjs.com/

Node's advantages for microservices

● Lightweight HTTP server processes

Node runtime is based on Google's well-regarded open source, high performance V8 engine:

● Compiles JavaScript to native machine code
● Machine code undergoes dynamic optimization during runtime

● V8 is highly tuned for fast startup time, small initial memory footprint, and strong peak performance

● Highly scalable
● Node platform designed from the outset for end-to-end asynchronous I/O for high scalability

● No extraordinary operating requirements to support high scalability, so cost and complexity are not special concerns for deployment

● Lightweight for developers
● Minimal ceremony involved in creating, publishing, and consuming packages
● Encourages a high degree of modularization with lightweight, tightly-focused packages (see the sketch below)
● Easy to scaffold network services
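To illustrate that last point, here is a minimal sketch (hypothetical file names, not from the original deck) of a tightly-focused module and the one-line require needed to consume it:

// greeting.js - a tiny, tightly-focused package: one exported function
module.exports = function greet(name) {
  return { message: 'hello ' + name };
};

// consumer.js - consuming the module is a single require away
var greet = require('./greeting');
console.log(greet('world').message); // prints: hello world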

25

Recommendation for teams

● Don't complicate things
● The Java and .NET platforms evolved in part to address the burden of developing, configuring, and deploying a complex codebase, its bundle of related artifacts, and requisite services
● The cornerstone of these platforms is type-safe, object-oriented programming languages, with a heavy emphasis on class hierarchies, domain models, and design patterns like inversion of control through dependency injection, to help developers manage the challenges of large codebases
● Node microservices should be small, and easy to reason about, test, and debug

● Polyglot is OK
● JavaScript isn't the best language for everything
● Use workers for compute-intensive work or for work best implemented with another language more suited to particular types of computing problems
● Use Node as the common REST API layer -- don't be polyglot for REST API code
● This is the glue layer that receives requests, validates API contracts, and packages results in HTTP responses with appropriate status codes
● Use pipes, sockets, and message or job queues for sending work to workers (see the sketch after this list)
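As a rough sketch of that worker hand-off (hypothetical file names, not from the original deck), Node's built-in child_process module can fork a worker and exchange messages with it over the built-in IPC channel:

// api.js - the Node REST layer forks a worker and hands it CPU-bound jobs
var fork = require('child_process').fork;
var worker = fork('./worker.js'); // hypothetical worker script

worker.on('message', function (result) {
  console.log('result from worker:', result);
});

worker.send({ job: 'fibonacci', n: 40 }); // keep the API event loop free

// worker.js - does the compute-intensive work in a separate process
process.on('message', function (msg) {
  function fib(n) { return n < 2 ? n : fib(n - 1) + fib(n - 2); }
  process.send({ job: msg.job, value: fib(msg.n) });
});

The same pattern extends to workers written in other languages by swapping the IPC channel for a socket or a message/job queue.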

26

Takeaway

The convergence of Node and Docker container technology is very well suited for implementing microservice architecture

● Docker makes running server processes in isolated compute environments (containers) cheap and easy. Containers are extremely efficient in terms of system resources and provide excellent performance characteristics, including fast starts.

● Node provides a high performance platform that supports high scalability with lightweight server processes. Its simple package management system makes creating, publishing, and consuming packages easy, facilitating and streamlining the process of building and deploying lightweight services.

● Organizations can achieve higher productivity and quality overall because developers focus their energy on building smaller, narrowly-focused services partitioned along functional boundaries. There is less friction and cognitive overhead with this approach, and services can evolve, be deployed, and scale independently of others.

27

Working with Docker Machine

28

29

List machines


docker-machine ls

$ docker-machine ls

NAME ACTIVE DRIVER STATE URL SWARM

default * virtualbox Saved

$

30

Create a new machine (Docker host)


docker-machine create --driver driver-name machine-name
docker-machine create -d driver-name machine-name

$ docker-machine create --driver virtualbox machine1

Creating VirtualBox VM...

Creating SSH key...

Starting VirtualBox VM...

Starting VM...

To see how to connect Docker to this machine, run: docker-machine env machine1

$

31

List machines again


You can see that the new Docker machine is running

$ docker-machine ls

NAME ACTIVE DRIVER STATE URL SWARM

default * virtualbox Saved

machine1 virtualbox Running tcp://192.168.99.100:2376

$

32

Tell Docker client to use the new machine

eval "$(docker-machine env machine-name)"

$ docker-machine env machine1

export DOCKER_TLS_VERIFY="1"

export DOCKER_HOST="tcp://192.168.99.100:2376"

export DOCKER_CERT_PATH="/Users/tony/.docker/machine/machines/machine1"

export DOCKER_MACHINE_NAME="machine1"

# Run this command to configure your shell:

# eval "$(docker-machine env machine1)"

$ eval "$(docker-machine env machine1)"

$

docker-machine env displays the environment settings you should use to configure your shell; eval evaluates those settings in the current shell.

35

Stop and start a machine


docker-machine stop|start machine-name

$ docker-machine stop machine1

$ docker-machine start machine1

Started machines may have new IP addresses. You may need to re-run the `docker-machine env` command.

$

36

ssh into the machine


docker-machine ssh machine-name

$ docker-machine ssh machine1

[Boot2Docker ASCII art banner]

Boot2Docker version 1.8.2, build master : aba6192 - Thu Sep 10 20:58:17 UTC 2015

Docker version 1.8.2, build 0a8c2e3

docker@machine1:~$

Create a machine using other drivers

https://docs.docker.com/machine/drivers/

● Amazon Web Services (amazonec2)
● DigitalOcean (digitalocean)
● Exoscale (exoscale)
● Google Compute Engine (google)
● Generic (generic) - for an existing host with ssh
● Microsoft Azure (azure)
● Microsoft Hyper-V (hyper-v)
● OpenStack (openstack)
● Rackspace (rackspace)
● IBM Softlayer (softlayer)
● Oracle VirtualBox (virtualbox)
● VMWare vCloud Air (vmwarevcloudair)
● VMWare Fusion (vmwarefusion)
● VMWare vSphere (vmwarevsphere)

37

docker-machine create -d driver-name machine-name

DigitalOcean Example

38

1. Create a DigitalOcean personal access token: https://cloud.digitalocean.com/settings/applications

$ export DIGITALOCEAN_ACCESS_TOKEN='...'

2. Create a machine

$ docker-machine create --driver digitalocean demo

Creating SSH key...

Creating Digital Ocean droplet...

To see how to connect Docker to this machine, run: docker-machine env demo

3. Set the docker client shell environment

$ eval "$(docker-machine env demo)"

4. List the machine

$ docker-machine ls

NAME ACTIVE DRIVER STATE URL SWARM

demo digitalocean Running tcp://107.170.201.137:2376

Working with Docker

43

44

Launch a container to run a command

docker run --rm image [cmd]

$ docker run --rm alpine echo hello

hello

docker run creates a container from the image (alpine) and runs the given command (echo hello) inside it; --rm automatically cleans up (removes the container's file system) when the container exits.

48

Launch a container to run an interactive command

docker run --rm -i -t image [cmd]
docker run --rm -it image [cmd]

$ docker run --rm -it alpine sh
/ # ls -l
total 48
drwxr-xr-x 2 root root 4096 Jun 12 19:19 bin
drwxr-xr-x 5 root root 380 Oct 13 02:18 dev
drwxr-xr-x 15 root root 4096 Oct 13 02:18 etc
drwxr-xr-x 2 root root 4096 Jun 12 19:19 home
drwxr-xr-x 6 root root 4096 Jun 12 19:19 lib
lrwxrwxrwx 1 root root 12 Jun 12 19:19 linuxrc -> /bin/busybox
drwxr-xr-x 5 root root 4096 Jun 12 19:19 media
drwxr-xr-x 2 root root 4096 Jun 12 19:19 mnt
dr-xr-xr-x 150 root root 0 Oct 13 02:18 proc
drwx------ 2 root root 4096 Oct 13 02:18 root
drwxr-xr-x 2 root root 4096 Jun 12 19:19 run
drwxr-xr-x 2 root root 4096 Jun 12 19:19 sbin
dr-xr-xr-x 13 root root 0 Oct 13 02:18 sys
drwxrwxrwt 2 root root 4096 Jun 12 19:19 tmp
drwxr-xr-x 7 root root 4096 Jun 12 19:19 usr
drwxr-xr-x 9 root root 4096 Jun 12 19:19 var

sh is an interactive command. By default, the console is attached to all 3 standard streams of the process.

-i (--interactive) keeps STDIN open

-t allocates a pseudo-TTY (expected by most command line processes) so you can pass signals, like Ctrl-C (SIGINT)

The combination is needed for interactive processes, like a shell

Working with Docker and Node

50

51

Pull the latest node image


docker pull node

$ docker pull node

Using default tag: latest

latest: Pulling from library/node

843e2bded498: Pull complete

8c00acfb0175: Pull complete

8b49fe88b40b: Pull complete

20b348f4d568: Pull complete

16b189cc8ce6: Pull complete

116f2940b0c5: Pull complete

1c4c600b16f4: Pull complete

971759ab10fc: Pull complete

bdf99c85d0f4: Pull complete

a3157e9edc18: Pull complete

library/node:latest: The image you are pulling has been verified. Important: image verification is a tech preview feature and should not be relied on to provide security.

Digest: sha256:559f91e2f6823953800360976e42fb99316044e2f9242b4f322b85a4c23f4c4f

Status: Downloaded newer image for node:latest

52

Run a container and display Node/npm version


docker run --rm node node -v; npm -v

$ docker run --rm node node -v; npm -v

v4.1.2

2.14.4

53

Run a container to evaluate a Node statement


docker run --rm node node --eval "..."

$ docker run --rm node node -e "console.log('hello')"

hello

54

Run the Node REPL in a container


docker run -it --rm node node

$ docker run --rm -it node node

> console.log('hello')

hello

undefined

>

Running a simple Node app

55

56

Create a package.json file


demo-app/package.json

{
  "name": "simple-docker-node-demo",
  "version": "1.0.0",
  "scripts": {
    "start": "node app.js"
  }
}

57

Add the express package


npm install --save express

$ npm install --save express

58

Create app.js


demo-app/app.js

const app = require('express')();
const port = process.env.PORT || 3000;

app.use('/', function(req, res) {
  res.json({ message: 'hello world' });
});

app.listen(port);
console.log('listening on port ' + port);

59

Add a Dockerfile


demo-app/Dockerfile

FROM node:onbuild

EXPOSE 3000

60

Add a .dockerignore


demo-app/.dockerignore

Dockerfile

.dockerignore

node_modules/

61

Build the Docker image for the app

docker build -t image-tag .

Docker images get assigned image IDs automatically, but you should also provide a tag in the form user/repo:tag if you plan on publishing the image. The path (or URL to a Git repo) at the end of the command defines the Docker build context: all files in it are sent to the Docker daemon and are available to Dockerfile commands while building the image.

$ docker build -t subfuzion/demo-app:v1 .

Sending build context to Docker daemon 5.12 kB

Step 0 : FROM node:onbuild

# Executing 3 build triggers

Trigger 0, COPY package.json /usr/src/app/

Step 0 : COPY package.json /usr/src/app/

---> Using cache

Trigger 1, RUN npm install

Step 0 : RUN npm install

---> Using cache

Trigger 2, COPY . /usr/src/app

Step 0 : COPY . /usr/src/app

---> ab7beb9c0287

Removing intermediate container 676c92cf1528

The trigger output above corresponds to execution of the 1st statement in the Dockerfile: FROM node:onbuild

64

Node base image

https://github.com/nodejs/docker-node

4.2/onbuild/Dockerfile

FROM node:4.0.0

RUN mkdir -p /usr/src/app

WORKDIR /usr/src/app

ONBUILD COPY package.json /usr/src/app/

ONBUILD RUN npm install

ONBUILD COPY . /usr/src/app

CMD [ "npm", "start" ]

RUN and WORKDIR create a directory in the image and make it the working directory for subsequent commands.

The ONBUILD instructions are triggered when this image is used as the base for another image (a child image). As separate steps (layers), they copy package.json, run npm install, and finally copy all the files (recursively) from the build context.

CMD is the command to execute when a container is started (it can be overridden).

67

Build the Docker image for the app


docker build -t image-tag .

$ docker build -t subfuzion/demo-app:v1 .

Sending build context to Docker daemon 5.12 kB

Step 0 : FROM node:onbuild

# Executing 3 build triggers

Trigger 0, COPY package.json /usr/src/app/

Step 0 : COPY package.json /usr/src/app/

---> Using cache

Trigger 1, RUN npm install

Step 0 : RUN npm install

---> Using cache

Trigger 2, COPY . /usr/src/app

Step 0 : COPY . /usr/src/app

---> ab7beb9c0287

Removing intermediate container 676c92cf1528

Step 1 : EXPOSE 3000

---> Running in f16d963adcb4

---> d785b0f27ffa

Removing intermediate container f16d963adcb4

Successfully built d785b0f27ffa

Execution of the 2nd statement in the Dockerfile: EXPOSE 3000

The app will be listening on port 3000 in the container, but it can't be accessed from outside the container unless the port is exposed and then published (mapped with -p when the container is run).

68

You can also add tags after building an image


docker tag image-id-or-tag tag

$ docker tag d785b0f27ffa subfuzion/demo-app:latest

or

$ docker tag subfuzion/demo-app:v1 subfuzion/demo-app:latest

69

List the image


docker images

$ docker images

REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE

subfuzion/demo-app latest d785b0f27ffa 2 minutes ago 644.2 MB

subfuzion/demo-app v1 d785b0f27ffa 2 minutes ago 644.2 MB

70

Run the Node app in a container

docker run --rm -t -p host-port:container-port image

$ docker run --rm -t -p 3000:3000 subfuzion/demo-app:v1

npm info it worked if it ends with ok

npm info using npm@2.14.2

npm info using node@v4.0.0

npm info prestart simple-docker-node-demo@1.0.0

npm info start simple-docker-node-demo@1.0.0

> simple-docker-node-demo@1.0.0 start /usr/src/app

> node app.js

listening on port 3000

This creates a container from the image and runs its default command (npm start); -p maps a port on the Docker machine to the container's exposed port.

73

Run the Node app in a container in detached mode

docker run -d -t -p host-port:container-port image

$ docker run -d -t -p 8000:3000 --name demo subfuzion/demo-app:v1

be76984370dd8e3aa4066af955eb54ab4116495007b7cd45743700804392555a

$ docker logs demo

npm info it worked if it ends with ok

npm info using npm@2.14.2

npm info using node@v4.0.0

npm info prestart simple-docker-node-demo@1.0.0

npm info start simple-docker-node-demo@1.0.0

> simple-docker-node-demo@1.0.0 start /usr/src/app

> node app.js

listening on port 3000

It's a good idea to name your containers, especially detached ones.

75

$ docker ps

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

be76984370dd subfuzion/demo-app:v1 "npm start" 2 minutes ago Up 2 minutes 0.0.0.0:8000->3000/tcp demo

$ docker inspect demo

. . .

$ docker stop demo

demo

$ docker rm demo

76

Accessing the running container


Since we mapped a Docker machine port to the container port, the IP address to use is the IP address of the machine

$ docker-machine ip machine1

192.168.99.100

$ curl http://192.168.99.100:8000

{"message":"hello world"}

# or

$ curl http://$(docker-machine ip machine1):8000

{"message":"hello world"}

Link to Mongo container

77

78

Create a Docker data volume container


$ docker create --name mongo-db -v /dbdata mongo /bin/true

b582fc52d80e62588816683479a99ce5c11d756372e008221e594af9dafd32a3

79

Start Mongo container & attach data volume container


docker run -d --name mongo --volumes-from mongo-db mongo

80ecaa6958a6904ee26259b36ae75ed0e2cd6105e1f4f074243ddf868c491f54

# connect from a mongo client container

$ docker run -it --link mongo:mongo --rm mongo sh \

-c 'exec mongo "$MONGO_PORT_27017_TCP_ADDR:$MONGO_PORT_27017_TCP_PORT/test"'

# backup database

$ docker run --rm --link mongo:mongo -v /root:/backup mongo bash \

-c 'mongodump --out /backup --host $MONGO_PORT_27017_TCP_ADDR'

$ docker-machine scp -r dev:~/test .

80

Link Node app container to Mongo container


$ docker run -p 49100:3000 --link mongo:mongo demo-app

81

Docker provisions Node container environment


MONGO_ENV_MONGO_MAJOR=3.0

MONGO_PORT=tcp://172.17.0.89:27017

MONGO_ENV_MONGO_VERSION=3.0.3

MONGO_PORT_27017_TCP=tcp://172.17.0.89:27017

MONGO_PORT_27017_TCP_PROTO=tcp

MONGO_PORT_27017_TCP_ADDR=172.17.0.89

MONGO_NAME=/xapp/mongo

MONGO_PORT_27017_TCP_PORT=27017

82

Accessing Mongo connection in your app


var util = require('util');
var mongoUrl = util.format('mongodb://mongo:%s/test', process.env.MONGO_PORT_27017_TCP_PORT);
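For instance, here is a minimal sketch (not from the original deck) of opening that connection, assuming the app depends on the mongodb driver package (2.x API) and was started with --link mongo:mongo so the hostname mongo resolves to the linked container:

var util = require('util');
var MongoClient = require('mongodb').MongoClient;

// Build the connection URL from the environment Docker injects via --link
var mongoUrl = util.format('mongodb://mongo:%s/test', process.env.MONGO_PORT_27017_TCP_PORT);

MongoClient.connect(mongoUrl, function (err, db) {
  if (err) throw err;
  // insert a document into a hypothetical 'messages' collection
  db.collection('messages').insertOne({ message: 'hello world' }, function (err) {
    if (err) throw err;
    console.log('saved a document via', mongoUrl);
    db.close();
  });
});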

For more information

Docker Machine reference

https://docs.docker.com/machine/

Docker Machine basics

http://blog.codefresh.io/docker-machine-basics/

Machine presentation (Nathan LeClaire)

https://www.youtube.com/watch?v=xwj44dAvdYo&feature=youtu.be

Docker Machine Drivers

https://docs.docker.com/machine/drivers/

Linking Containers

https://docs.docker.com/userguide/dockerlinks/

Best practices for writing Dockerfiles

https://docs.docker.com/articles/dockerfile_best-practices/

Node, Docker, and Microservices

http://blog.codefresh.io/node-docker-and-microservices/

83

@subfuzion
https://twitter.com/subfuzion

https://www.linkedin.com/in/tonypujals/
