
Are containers the future?

A white paper by Orange Business Services (OCB Innovation) and Orange Silicon Valley


Scope

This paper presents container technology with a particular focus on Docker®: the company and its technology. It compares containers with the VM approach, examines Docker®'s role in DevOps and the Platform as a Service model, and reviews its partnerships with other IT players. It also touches upon the emergence of microservices architecture along with challenges to enterprise adoption.


Executive summary

Over the years, information technology has evolved from expensive mainframes to client-servers and now to cloud computing. A process once handled by a single, large mainframe was then replaced with hundreds of servers, and with the birth of the Internet hundreds became thousands. As the server count increased, power and cooling needs skyrocketed as well. By laying the foundation for a new computing architecture over the last decade, server virtualization has helped server consolidation. Yet with so many advancements happening only at the infrastructure level, application portability has remained the same over the years, leading to a number of conflicts between development and operations teams.

Virtual servers created by virtualizing the operating system are commonly known as containers and have been around since the early 2000s. DotCloud® built its company on Linux container technology and open sourced its software before being renamed Docker® in 2013, renewing life for containers. Container technology enables any application and its dependencies to be packaged up as a lightweight, portable, self-sufficient container that can be deployed anywhere.

Docker®'s rapid growth within such a short amount of time emphasizes how badly enterprises have been looking for an application portability solution that spans different clouds and different environments within the datacenter during the application lifecycle. Container technology has been instrumental in the renewed interest in the Platform as a Service (PaaS) model for enterprises, primarily because it acts as a technology enabler. Containers are developer friendly while ensuring an easy separation of duties and responsibilities for developer and operations teams, thereby fully supporting the DevOps concept that is a top to-do list item for many enterprises.

According to Tracxn®, looking at the last two years alone one can see the rapid emergence of the Docker® ecosystem, as approximately $200–250 million of funding has been injected into different, related companies. The only competitor to Docker® is CoreOS®, owner of the Rocket® container technology. Security and a lack of tools are the two biggest challenges for enterprise adoption of container technology. It is not a question of whether enterprises will adopt containers, but rather when they will adopt the technology. Virtual machines have been the de facto standard for all enterprises over the last decade, and while containers are going to challenge that in the long run, in the short run containers will co-exist with virtual machines.


Birth of containers

Over the last fifteen years many organizations have undergone the journey to cloud, first by virtualizing dedicated physical machines for applications, and then by subsequently transitioning to an automated and self-service cloud. On this journey the primary mover to cloud computing has always been the virtual machine. Server virtualization is the masking of server resources, including the number and identity of individual physical servers, processors, and operating systems, from server users. The server administrator uses a software application to divide one physical server into multiple isolated virtual environments.

There are three ways of creating virtual servers, and each differs in how it allocates resources to the servers:

Hardware virtualization creates virtual servers that emulate the physical hardware.

Paravirtualization presents a software interface that functions like the underlying hardware to virtual servers.

Operating system virtualization creates virtual servers that share an operating system.

Within the last decade the runaway success of hardware virtualization has laid the foundation for a new computing architecture. Paravirtualization was never a success, and while OS virtualization was temporarily popular, it ultimately lost the race to hardware virtualization. Virtual servers created using OS virtualization are commonly known as containers. Although only recently popular, containers are not new: trending container technologies include chroot®, OpenVZ®, Parallels®, FreeBSD Jail®, Linux Containers (LXC®), and libcontainer®, several of which were established in the early 2000s. DotCloud® based its company on Linux container technology and open sourced its software before renaming the company to Docker® in 2013, which renewed life for containers.

figure 1 – types of virtualization


Role of containers

The best way to comprehend the role of application containers is through the analogy of shipping containers and how they revolutionized the transportation of goods. Prior to World War II, there were several non-standard methods of containerization across the globe, ranging from wooden crates to railway boxcars. During the war, it was the sheer volume of cargo needed to support the war effort that led to a standardization of containers. In 1956 the idea of steel box containers became a reality, allowing stacking in a manner that optimized efficient loading and unloading of cargo. Over the next few years, ships and ports changed dramatically to accommodate the shift to containerized transport. Today, over 90% of bulk cargo travels in standard shipping containers. The use of containers has shaped not only the landscape of the world's ports but also truck chassis, pallets and forklift equipment, and even cranes have evolved to accommodate containers.

figure 2 - transportation of goods, pre-1960 vs today (source: Docker.io®)

In the IT world, we have different types of applications and services that are deployed on various hardware environments. The challenge is that every time an application is written or re-written, it needs to be deployed or re-deployed to the different environments, each requiring a unique set of tools. Containers enable any application and its dependencies to be packaged up into a lightweight, portable, self-sufficient container that is easily deployable to any location. Docker® acts as a shipping container for code, and some of the key features of shipping an application in comparison to shipping a container are listed below.

Content agnostic

A shipping container holds any type of cargo; an application container holds any type of payload.

Hardware agnostic

A standard shape allows shipping containers to be moved to and from a train, truck, ship, crane or warehouse; application containers run on VMs, bare metal, OpenStack clusters or public cloud instances.

Separation of duties

The shipper worries about what goes inside and the carrier worries about the external factors; the developer worries about the internal and operations worries about the external.

figure 3 - transportation of application (source: Docker.io®)

Docker®

Company history

In early 2008, DotCloud® was founded around a Platform as a Service product by Solomon Hykes and Sebastien Pahl, graduates of Epitech® in France. After a few years, due to lack of market adoption, DotCloud® open sourced its software in 2013, and within several months the company was renamed Docker®, shifting its focus entirely to Linux container technologies.

Funding

Series D – $95 million in March 2015, led by Insight Venture Partners®
Series C – $40 million in September 2014, led by Sequoia®
Series B – $15 million in January 2014, led by Greylock Partners®

Numbers

No. of funded startups in the Docker® ecosystem: 40
No. of GitHub projects with Docker® in the name: 33,000
Dockerized applications: 100,000
No. of organizations on Docker Hub: 10,000
Docker® containers downloaded: 320 million, by 4 million developers
No. of employees: 120, expected to grow to 200 by the end of the year
Companies acquired: Kitematic®, Koality®, SocketPlane®



Key terminology and concepts

LAYER

In traditional Linux, the kernel mounts the root file system as read-only and later switches it to read-write mode. When Docker® mounts the root file system, it also starts as read-only, but instead of later turning it into a read-write file system, it adds a read-write file system on top of the read-only file system. In turn, there can be multiple read-only file systems stacked on top of each other. Each of these file systems is called a layer.

IMAGE

In Docker®, a read-only layer is called an image. Each image depends on another image that forms the layer beneath it; the lower image is referred to as the parent image. An image without a parent image is called a base image.

figure 4 – Docker® terminology (source: Docker.com®)
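To make layers and images concrete, here is a minimal sketch on a Linux host with Docker® installed; the debian image is just an illustrative choice of base image.

# Pull a base image: a stack of read-only layers whose bottom layer has no parent.
docker pull debian

# List the layers that make up the image; each row corresponds to one read-only layer.
docker history debian

# Running a container adds a thin read-write layer on top; changes made inside
# the container never modify the read-only layers beneath it.
docker run -it debian /bin/bash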

The rebirth of containers has had a telling impact on key concepts like DevOps and PaaS. It is also aiding the rise of a new microservices architecture.

DevOps

DevOps consists of blending tasks performed by a company's application development and systems operations teams. The goal of DevOps is to ensure that both the dev and ops teams participate from the point of idea inception all the way to the stage of successful service implementation or product roll-out. Docker® helps the developer and operations teams work together as a unit while still allowing them to separate necessary duties and responsibilities. Some of the key DevOps-related terms are:

AGILE DEVELOPMENT

A software development method in which requirements and solutions evolve through collaboration between self-organizing, cross-functional teams. It encourages a rapid and flexible response to change.

CONTINUOUS INTEGRATION

In software engineering, this is the practice of merging all developer working copies into a shared mainline several times a day.



CONTINUOUS DELIVERY

This is a software engineering approach in which teams keep producing valuable software in short cycles and ensure that the software can be reliably released at any time.

CONTINUOUS DEPLOYMENT

The released software is automatically deployed into production.

figure 5 – DevOps lifecycle

Cloud computing service models

Docker® has had a huge impact on two of the three classic service models, Infrastructure as a Service and Platform as a Service, but not on Software as a Service. Docker® has also played an instrumental role in the addition of a new service model: Container as a Service.

INFRASTRUCTURE AS A SERVICE (IAAS)

The cloud provider delivers cloud computing infrastructure such as servers, storage, network, virtual machines, and operating systems as a service.

PLATFORM AS A SERVICE (PAAS)

The cloud provider delivers the hardware and software tools needed for application development to its users as a service. PaaS is targeted at developers to allow them to perform rapid software development.

SOFTWARE AS A SERVICE (SAAS)

Software is delivered as a service over the internet, with the entire stack managed by the cloud provider.

figure 6 – IaaS vs PaaS vs SaaS

CONTAINER AS A SERVICE (CAAS)

Container as a Service can be seen as the layer between IaaS and PaaS: IaaS provides hardware virtualization and a template for the operating system, but the customer is responsible for managing the operating system, while PaaS provides a language-specific application runtime and focuses on what is running inside the process. CaaS acts as the glue by providing the generic framework necessary to run any process on any infrastructure.



Microservices architecture

In traditional monolithic applications, the major challenge was the deployment of an application and any subsequent updates, because the entire application needed to be updated in synchronization with all of its components. Microservices creates a distributed model for applications in which every component can be changed independently without affecting any other component. With Docker® it is easier to manage individual components separately by running them in different containers rather than putting all of an application's components in a single container. Docker® advocates the use of microservices architecture when designing container-based applications.

figure 7 - monolithic vs microservices
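As a minimal illustration of this independence, assume a two-component application; the image names mycompany/web and mycompany/api and their tags are hypothetical, and networking between the containers is omitted for brevity.

# Run each component in its own container.
docker run -d --name web mycompany/web:1.2
docker run -d --name api mycompany/api:3.4

# To update only the API component, replace just that container;
# the web container keeps running untouched.
docker stop api && docker rm api
docker run -d --name api mycompany/api:3.5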

Docker® architecture

Docker® is an open platform for distributing and packaging applications and has two major components:

Docker Engine – a lightweight application runtime and packaging tool

Docker Hub – a hosted registry for storing and managing Docker® images

figure 8 – Docker® components (source: Docker.com®)

Docker® utilizes a client-server architecture in which the client interacts with the daemon. The client and daemon may both reside on the same system, or the client can connect to a remote daemon via RESTful APIs or a socket connection. The daemon is responsible for building, running and distributing containers created from images. Images are stored in Docker Hub, a registry that can be set to public or private, where images can be both uploaded and downloaded easily. Docker® is also designed as an open source platform to build, ship, and run distributed applications.

figure 9 - client-server architecture
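A brief sketch of the client-daemon split; the address and port below are illustrative, and a remote daemon must be explicitly configured to listen on TCP (ideally secured with TLS).

# Talk to the local daemon (by default over a Unix socket).
docker ps

# Point the same client at a remote daemon over TCP instead.
docker -H tcp://203.0.113.10:2375 ps

# Equivalently, select the daemon via an environment variable.
export DOCKER_HOST=tcp://203.0.113.10:2375
docker ps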

BUILD-SHIP-RUN

Build – The Docker® daemon uses a Dockerfile, a text file that allows the creation of Docker® images by capturing the image creation process in the form of a script. Currently, there are about a dozen different commands that a Dockerfile can contain, which are then used to build an image. The process to build a Docker® image includes the following steps:

Select the OS – specify the operating system or base image to be used

Construct layers – run different commands as needed, creating a layer for each command

Set the environment – specify the arguments and different ports for application configuration

Build image – save the Dockerfile and build an image

Ship – Once the image is built, it can be shared locally or globally using a private or public repository or registry.

Run – Use the image to run containers anywhere, whether on physical, virtual or cloud servers (as long as the underlying Linux kernel is the same). A minimal end-to-end sketch follows.
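The sketch below walks through the same build-ship-run cycle on a shell; the repository name myorg/myapp is hypothetical and the Dockerfile is deliberately tiny.

# build: capture the image creation process in a Dockerfile.
cat > Dockerfile <<'EOF'
# select the OS: start from a base image
FROM debian
# construct layers: each instruction adds a layer
RUN apt-get update && apt-get install -y python3
# set the environment: configuration values and the application port
ENV APP_ENV=production
EXPOSE 8000
CMD ["python3", "-m", "http.server", "8000"]
EOF
docker build -t myorg/myapp:1.0 .

# ship: push the image to a registry.
docker push myorg/myapp:1.0

# run: start a container from the image on any Docker® host.
docker run -d -p 8000:8000 myorg/myapp:1.0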


Use cases

Docker® advocates eight different use cases, of which three key examples are explained below.

figure 10 - use cases

DEVELOPER PRODUCTIVITY

Developers are a very expensive resource; therefore they should neither wait for infrastructure resources nor work on issues pertaining to the environment. Here, containerization has some obvious technical advantages, including better resource utilization and control of guest operating systems. One of the more nuanced benefits, however, has to do with abstracting the operating system and its resource dependencies from the application and its dependencies. The latter benefit helps enable an idealized workflow called immutable infrastructure, where the state of the application configuration and dependencies is preserved from development through testing and on into production. Containers have a finite life cycle that is optimized for developer productivity: changes are not made to a production environment but rather to the container itself, making it possible for the application to work everywhere.

figure 11 – dev to prod using Docker® (source: Thomason.io, author: James)
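A small sketch of the immutable infrastructure idea, reusing the hypothetical myorg/myapp image from the earlier build-ship-run example.

# The image built once in development is the same artifact run in QA and in
# production; only runtime configuration differs between environments.
docker run -d -e APP_ENV=qa myorg/myapp:1.0
docker run -d -e APP_ENV=production myorg/myapp:1.0

# A fix is never patched into a running container; a new image version is
# built and rolled out instead.
docker build -t myorg/myapp:1.1 .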


RAPID DEPLOYMENT

Docker® images are built on union file systems, with changes added as layers. The steps to build an image are recorded in the Dockerfile, which is why deploying containers from images takes mere seconds. Since the same image is used across different environments such as development, QA, staging and production, deployment is much faster. Also, for continuous deployment only the changed layers are pulled into an environment rather than the whole image. For large-scale deployments, many orchestration tools are evolving to handle the interdependencies between containers and how individual containers can be updated without disrupting the operation of other containers.

figure 12 – release process using Docker®
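A short sketch of layer reuse during deployment, again using the hypothetical myorg/myapp image.

# First deployment: all layers of the image are downloaded.
docker pull myorg/myapp:1.0

# Deploying a new release transfers only the layers that changed; layers
# shared with 1.0 are reused from the local cache.
docker pull myorg/myapp:1.1

# Starting a container from a locally cached image takes seconds.
docker run -d myorg/myapp:1.1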

SERVER CONSOLIDATION

Virtual machines (VMs) ushered in the age of consolidation, where servers were better able to utilize computing power compared to running applications directly on a physical server. Unlike VMs, containers do not need to run a full version of an operating system, opening the door to a new level of consolidation. The consolidation ratio with containers is better by at least tenfold compared to the consolidation achieved using virtual machines.

figure 13 – virtual machines to Docker®


Vendor ecosystem

The rapid emergence of the Docker® ecosystem is obvious when looking at the last two years alone, with approximately $200 to $250 million of funding injected into related companies.

figure 14 – container market map by investment (source: Tracxn.com®)

Some of the funded companies are listed below.

ClusterHQ®, the company behind the open source Flocker® data volume manager and container manager, offers a tool for database portability and has received $12 million in funding. Flocker® enables the stateful live migration of an application, which is not feasible using Docker® alone.

CoreOS® was a partner of Docker® until it came up with its own container, Rocket®, at the end of 2014. Kubernetes, Google®'s open source program, allows companies to manage a cluster of containers as though they were a single system. CoreOS® also announced Tectonic, a commercial distribution of CoreOS® and its containers along with Kubernetes. CoreOS® has received $20 million in funding, with $12 million coming from Google® Ventures and a combined $8 million from various other ventures.

Shippable® helps companies ship software faster by giving them a virtual build, test, and deployment environment in the cloud. It provides a continuous integration platform built natively on Docker®. The company recently received $8 million in funding in addition to a previously received $2 million.


Zett® has developed the Weave project, which addresses Docker®'s networking issue by providing Docker® containers with a single virtual network even when they run on different hosts. Existing internal systems can also be exposed to application containers irrespective of their location. Zett® has received $5 million in funding from Accel® Partners.

Mesosphere®, whose founders come from Twitter® and Airbnb®, has built a data center OS (DCOS), an operating system that spans all servers in a physical or cloud-based data center and runs on top of any Linux distribution. It received $36 million in funding led by Khosla® Ventures, showing that even an established company riding the Docker® wave can become appealing to investors.

figure 15 - partner map

VMware® is partnering with Docker® to ensure Docker® containers work from vSphere to vCloud Air using VMware® products, and to ensure interoperability of Docker® images with vCenter, vCloud Automation Center and vCloud Air. VMware® has developed a mutually beneficial partnership with Google® that allows support of Kubernetes from inside VMware® products. Google® plans on replicating the pod-based networking model of Open vSwitch to enable multi-cloud integration. VMware®, Pivotal® and Docker® will all work together on enhancing the libcontainer project with capabilities from Warden.

Red Hat® is working on Project Atomic, re-architecting its own operating system for the container world. Red Hat® announced a partnership with Docker® and intends to use Docker® containers to replace its own, working together to ensure native support for Docker®. Red Hat® also partnered with Google® to integrate Kubernetes into OpenShift. Red Hat® GearD converts application sources to Docker® images, while the OpenShift broker acts as the glue between OpenShift and GearD.

Microsoft® re-architected its operating system to come up with a Nano Server for the container world. Microsoft® partnered with Docker® to ensure that Docker® containers can work in Azure®-hosted Linux machines, and is working with Docker® to bring containers to Windows® as well.

IBM® partnered with Docker® to announce an integrated solution called Docker® Hub Enterprise (DHE). IBM® is working with Docker® to ensure containers are supported in IBM®'s cloud, and sees an opportunity to offer hybrid cloud to its enterprise customers using containers.

Challenges to enterprise adoption

Server virtualization laid the groundwork for the adoption of a new computing architecture that allowed the creation of multiple virtual machines and helped consolidate infrastructure resources (servers, storage and network) into a shared pool. Containers, by abstracting applications from VMs to run on a common OS layer, have recently gathered steam due to their ability to consolidate resources even further, thereby increasing both CAPEX and OPEX savings.

Even though containers can reduce cost, enterprise adoption is still in the early stages. According to a survey by StackEngine®, security and a lack of operational tools are the major challenges for enterprise adoption.

Security

Security is one of the top priorities for any enterprise when selecting newer technologies, especially when the technology is only two years old. There are several areas of concern when planning to deploy containers, for instance change control, asset tracking and management, patch management and configuration management. All of these concerns tie back into security and how people are trained to handle the different processes. Docker® containers are built upon Linux containers (LXC), which were not designed to provide security, so security-related issues must be addressed outside of Docker®. As a result, the ecosystem is evolving to provide the required governance and to ensure that containers work in a real multi-tenant environment.
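As one illustrative, and by no means sufficient, hardening sketch, Docker®'s own run-time flags can narrow what a container is allowed to do; the image name is again the hypothetical myorg/myapp.

# Drop all Linux capabilities the process does not need, make the container's
# root filesystem read-only, and cap its memory so a misbehaving container
# cannot exhaust the host.
docker run -d --cap-drop=ALL --read-only --memory=256m myorg/myapp:1.0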

Operational Tools

In the shipping analogy mentioned before, it took several years before the entire cargo industry had transitioned to standard shipping containers for the transportation of goods, because changes were needed to ships, ports, cranes, trucks and forklifts to accommodate the new standard container. Extending the analogy to application containers, it is likewise going to take some time for the ecosystem to evolve to the point where enterprises feel comfortable using containers. Docker® containers are good at packaging an application so that it can run anywhere, but Docker® itself is not meant for managing the lifecycle of containers at scale. Container lifecycle management, capacity management, performance management, cluster management, configuration management and governance are some of the tool categories that are evolving with the help of the vendor ecosystem. Service discovery is another field gaining attention with the emergence of Docker®.


Conclusion

With containers changing the way applications are transported, there will be less worrying about underlying infrastructure dependencies, revolutionizing an application delivery process that had remained stagnant for the last few decades. Yet even though containers are solving the application delivery problem, they do not offer the same level of security as virtual machines. As a result, virtual machines complement containers by providing them with the isolation they require. Over the next few years virtual machines and containers will co-exist before containers can be used in a standalone fashion by enterprises. While Docker® is leading the container race, CoreOS® is the only main competitor with its container technology, Rocket®.


Want to learn more?

We can organize targeted briefings with our subject matter experts at Orange Silicon Valley.

Contact your Orange Business Services account manager.

Copyright © Orange Business Services 2015. All rights reserved. The information contained within this document is the property of the Orange Group and its affiliates and subsidiary companies trading as Orange Business Services. Orange, the Orange logo, Orange Business Services and product and service names are trademarks of Orange Brand Services Limited. All other trademarks are the property of their respective owners. This publication provides outline information only. Product information, including specifications, is subject to change without prior notice.