Working with Kubernetes: Are You Ready for Takeoff?

Phani Prasad Thimmapuram | 10/28/2019



Kubernetes is an open-source container-orchestration system for automating application deployment, scaling, and management. It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation. It aims to provide a "platform for automating deployment, scaling, and operations of application containers across clusters of hosts."

Google Kubernetes Engine (GKE) is a managed, production-ready environment for deploying containerized applications.

Container-based microservices architectures have profoundly changed the way development and operations teams test and deploy modern software. Containers help companies modernize by making it easier to scale and deploy applications, but containers have also introduced new challenges and more complexity by creating an entirely new infrastructure ecosystem.

Large and small software companies alike are now deploying thousands of container instances daily, and that’s a complexity of scale they have to manage. So how do they do it?

Originally developed by Google, Kubernetes is an open-source container orchestration platform designed to automate the deployment, scaling, and management of containerized applications. In fact, Kubernetes has established itself as the de facto standard for container orchestration and is the flagship project of the Cloud Native Computing Foundation (CNCF), backed by key players like Google, AWS, Microsoft, IBM, Intel, Cisco, and Red Hat.

Kubernetes makes it easy to deploy and operate applications in a microservice architecture. It does so by creating an abstraction layer on top of a group of hosts, so that development teams can deploy their applications and let Kubernetes manage:

• Controlling resource consumption by application or team

• Evenly spreading application load across a host infrastructure

• Automatically load balancing requests across the different instances of an application

• Monitoring resource consumption and resource limits to automatically stop applications from consuming too many resources, then restarting them

• Moving an application instance from one host to another if there is a shortage of resources in a host, or if the host dies

• Automatically leveraging additional resources made available when a new host is added to the cluster

• Easily performing canary deployments and rollbacks

Why Kubernetes?

As more and more organizations move to microservice and cloud native architectures that make use of containers, they’re looking for strong, proven platforms. Practitioners are moving to Kubernetes for four main reasons:

1. Kubernetes helps you move faster. Indeed, Kubernetes allows you to deliver a self-service Platform-as-a-Service (PaaS) that creates a hardware layer abstraction for development teams. Your development teams can quickly and efficiently request the resources they need. If they need more resources to handle additional load, they can get those just as quickly, since resources all come from an infrastructure shared across all your teams.

No more filling out forms to request new machines to run your application. Just provision and go, and take advantage of the tooling developed around Kubernetes for automating packaging, deployment, and testing, such as Helm (more below).

2. Kubernetes is cost efficient. Kubernetes and containers allow for much better resource utilization than hypervisors and VMs do; because containers are so lightweight, they require less CPU and memory to run.

3. Kubernetes is cloud agnostic. Kubernetes runs on Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP), and you can also run it on-premise. You can move workloads without having to redesign your applications or completely rethink your infrastructure, which lets you standardize on a platform and avoid vendor lock-in.

In fact, companies like Kublr, Cloud Foundry, and Rancher provide tooling to help you deploy and manage your Kubernetes cluster on-premise or on whatever cloud provider you want.

4. Cloud providers will manage Kubernetes for you. As noted earlier, Kubernetes is currently the clear standard for container orchestration tools. It should come as no surprise, then, that major cloud providers offer plenty of Kubernetes-as-a-Service offerings. Amazon EKS, Google Kubernetes Engine, Azure Kubernetes Service (AKS), Red Hat OpenShift, and IBM Cloud Kubernetes Service all provide full Kubernetes platform management, so you can focus on what matters most to you: shipping applications that delight your users.

Kubernetes Architecture:

The central component of Kubernetes is the cluster. A cluster is made up of many virtual or physical machines that each serve a specialized function, either as a master or as a node. Each node hosts groups of one or more containers (which contain your applications), and the master communicates with nodes about when to create or destroy containers. At the same time, it tells nodes how to re-route traffic based on new container alignments.

The following diagram depicts a general outline of a Kubernetes cluster:

The Kubernetes Master

The Kubernetes master is the access point (or the control plane) from which administrators and other users interact with the cluster to manage the scheduling and deployment of containers. A cluster will always have at least one master, but may have more depending on the cluster’s replication pattern.

The master stores the state and configuration data for the entire cluster in etcd, a persistent and distributed key-value data store. Each node has access to etcd, and through it nodes learn how to maintain the configurations of the containers they're running. You can run etcd on the Kubernetes master or in standalone configurations.

Masters communicate with the rest of the cluster through the kube-apiserver, the main access point to the control plane. For example, the kube-apiserver makes sure that configurations in etcd match with configurations of containers deployed in the cluster.


The kube-controller-manager handles the control loops that manage the state of the cluster via the Kubernetes API server. The controllers for deployments, replicas, and nodes live in this service. For example, the node controller is responsible for registering a node and monitoring its health throughout its lifecycle.

Node workloads in the cluster are tracked and managed by the kube-scheduler. This service keeps track of the capacity and resources of nodes and assigns work to nodes based on their availability.

The cloud-controller-manager is a service running in Kubernetes that helps keep it “cloud-agnostic.” The cloud-controller-manager serves as an abstraction layer between the APIs and tools of a cloud provider (for example, storage volumes or load balancers) and their representational counterparts in Kubernetes.

Nodes

All nodes in a Kubernetes cluster must be configured with a container runtime, which is typically Docker. The container runtime starts and manages the containers as they’re deployed to nodes in the cluster by Kubernetes. Your applications (web servers, databases, API servers, etc.) run inside the containers.

Each Kubernetes node runs an agent process called a kubelet that is responsible for managing the state of the node: starting, stopping, and maintaining application containers based on instructions from the control plane. The kubelet collects performance and health information from the node and the pods and containers it runs, and shares that information with the control plane to help it make scheduling decisions.

The kube-proxy is a network proxy that runs on nodes in the cluster. It also works as a load balancer for services running on a node.

The basic scheduling unit is a pod, which consists of one or more containers guaranteed to be co-located on the host machine and can share resources. Each pod is assigned a unique IP address within the cluster, allowing the application to use ports without conflict.

You describe the desired state of the containers in a pod through a YAML or JSON object called a Pod Spec. These objects are passed to the kubelet through the API server.
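As a sketch, a minimal Pod spec looks like the following; the pod name, label, and image are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod          # illustrative name
  labels:
    app: hello
spec:
  containers:
  - name: web
    image: nginx:1.17      # example image
    ports:
    - containerPort: 80    # port the container listens on
```

Applying this file (for example with `kubectl apply -f pod.yaml`) hands the object to the API server, which schedules it onto a node and instructs that node's kubelet to start the container.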

A pod can define one or more volumes, such as a local disk or network disk, and expose them to the containers in the pod, which allows different containers to share storage space. For example, volumes can be used when one container downloads content and another container uploads that content somewhere else.
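That download/upload pattern can be sketched with an `emptyDir` volume mounted into two containers; all names and images here are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-pod      # illustrative name
spec:
  volumes:
  - name: content              # scratch volume that lives as long as the pod
    emptyDir: {}
  containers:
  - name: downloader           # writes files into the shared volume
    image: busybox:1.31        # example image
    command: ["sh", "-c", "while true; do date > /data/latest; sleep 60; done"]
    volumeMounts:
    - name: content
      mountPath: /data
  - name: uploader             # reads the same files from the shared volume
    image: busybox:1.31
    command: ["sh", "-c", "while true; do cat /data/latest 2>/dev/null; sleep 60; done"]
    volumeMounts:
    - name: content
      mountPath: /data
```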

Since containers inside pods are often ephemeral, Kubernetes offers a type of load balancer, called a service, to simplify sending requests to a group of pods. A service targets a logical set of pods selected based on labels (explained below). By default, services can be accessed only from within the cluster, but you can enable public access to them as well if you want them to receive requests from outside the cluster.
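A minimal service sketch, selecting pods by label; the name and label are illustrative, and changing `type` to `LoadBalancer` or `NodePort` is what opens the service to traffic from outside the cluster:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-service      # illustrative name
spec:
  selector:
    app: hello             # targets pods carrying this label
  ports:
  - port: 80               # port the service exposes inside the cluster
    targetPort: 80         # port the selected containers listen on
  type: ClusterIP          # default: reachable only from within the cluster
```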

Deployments and replicas

A deployment is a YAML object that defines the pods and the number of container instances, called replicas, for each pod. You define the number of replicas you want to have running in the cluster via a ReplicaSet, which is part of the deployment object. So, for example, if a node running a pod dies, the replica set will ensure that another pod is scheduled on another available node.
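A minimal deployment illustrating the replica count; names and the image are illustrative. If a node running one of the three pods dies, the ReplicaSet created by this object schedules a replacement on another available node:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment   # illustrative name
spec:
  replicas: 3              # desired number of pod instances
  selector:
    matchLabels:
      app: hello
  template:                # pod template stamped out by the ReplicaSet
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: web
        image: nginx:1.17  # example image
```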

A DaemonSet deploys and runs a specific daemon (in a pod) on nodes you specify. They’re most often used to provide services or maintenance to pods. A daemon set, for example, is how New Relic Infrastructure gets the Infrastructure agent deployed across all nodes in a cluster.
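A DaemonSet sketch that runs one agent pod on every node; the agent image is a placeholder, not the actual New Relic manifest:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent               # illustrative name
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      containers:
      - name: agent
        image: example/agent:1.0 # placeholder image
```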

Namespaces

Namespaces allow you to create virtual clusters on top of a physical cluster. Namespaces are intended for use in environments with many users spread across multiple teams or projects. They assign resource quotas and logically isolate cluster resources.
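For example, a team namespace paired with a ResourceQuota capping what that team can consume; the names and figures are illustrative:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a             # illustrative team namespace
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    pods: "20"             # at most 20 pods in this namespace
    requests.cpu: "4"      # total CPU requested across all pods
    requests.memory: 8Gi   # total memory requested across all pods
```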

Labels

Labels are key/value pairs that you can assign to pods and other objects in Kubernetes. Labels allow Kubernetes operators to organize and select subset of objects. For example, when monitoring Kubernetes objects, labels let you quickly drill down to the information you’re most interested in.
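Labels are free-form; a common (illustrative) convention is to tag objects with an application name, environment, and tier:

```yaml
metadata:
  labels:
    app: hello             # which application this object belongs to
    environment: production
    tier: frontend
```

A selector such as `kubectl get pods -l app=hello,environment=production` then narrows a listing to exactly those objects.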

Stateful sets and persistent storage volumes

StatefulSets give you the ability to assign unique IDs to pods in case you need to move pods to other nodes, maintain networking between pods, or persist data between them. Similarly, persistent storage volumes provide storage resources for a cluster to which pods can request access as they’re deployed.
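A PersistentVolumeClaim sketch that a pod (or a StatefulSet's volume claim template) can mount; the storage class name is an assumption and varies by cluster:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim           # illustrative name
spec:
  accessModes:
  - ReadWriteOnce            # mountable read-write by a single node
  resources:
    requests:
      storage: 1Gi
  storageClassName: standard # assumption: depends on the cluster's provisioner
```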

Challenges to Kubernetes adoption

Kubernetes clearly has come a long way in the first five years of life. That kind of rapid growth, though, also involves occasional growing pains. Here are a few challenges with Kubernetes adoption:

1. Forward-thinking dev and IT teams don't always align with business priorities. When budgets are allocated only to maintain the status quo, it can be hard for teams to get funding to experiment with Kubernetes adoption initiatives, as such experiments often absorb a significant amount of time and team resources. Additionally, enterprise IT teams are often averse to risk and slow to change.

2. Teams are still acquiring the skills required to leverage Kubernetes. Only a few years ago, developers and IT operations folks had to readjust their practices to adopt containers; now they must adopt container orchestration as well. Enterprises hoping to adopt Kubernetes need to hire professionals who can code and who also know how to manage operations and understand application architecture, storage, and data workflows.

3. Kubernetes can be difficult to manage. In fact, you can read any number of Kubernetes horror stories—everything from DNS outages to “a cascading failure of distributed systems”— in the Kubernetes Failure Stories GitHub repo.

Kubernetes is a portable, extensible, open-source platform for managing containerized applications and services that facilitates both declarative configuration and automation.

Kubernetes provides a platform to configure, automate, and manage:

• Intelligent and balanced scheduling of containers

• Creation, deletion, and movement of containers

• Easy scaling of containers

• Monitoring and self-healing abilities

What is container orchestration? Containers support VM-like separation of concerns but with far less overhead and far greater flexibility. As a result, containers have reshaped the way people think about developing, deploying, and maintaining software. In a containerized architecture, the different services that constitute an application are packaged into separate containers and deployed across a cluster of physical or virtual machines. But this gives rise to the need for container orchestration—a tool that automates the deployment, management, scaling, networking, and availability of container-based applications.

Kubernetes vs. Docker and Kubernetes vs. Docker Swarm Kubernetes doesn’t replace Docker, but augments it. However, Kubernetes does replace some of the higher-level technologies that have emerged around Docker.

One such technology is Docker Swarm, an orchestrator bundled with Docker. It’s still possible to use Docker Swarm instead of Kubernetes, but Docker Inc. has chosen to make Kubernetes part of the Docker Community and Docker Enterprise editions going forward.


That is not to say Kubernetes is a drop-in replacement for Docker Swarm. Kubernetes is significantly more complex than Swarm and requires more work to deploy. But again, the work is intended to provide a big payoff in the long run: a more manageable, resilient application infrastructure. For development work and smaller container clusters, Docker Swarm presents a simpler choice.

Kubernetes vs. Mesos Another project you might have heard about as a competitor to Kubernetes is Mesos. Mesos is an Apache project that originally emerged from developers at Twitter; it was actually seen as an answer to the Google Borg project.

Mesos does in fact offer container orchestration services, but its ambitions go far beyond that: it aims to be a sort of cloud operating system that can coordinate both containerized and non-containerized components. To that end, a lot of different platforms can run within Mesos—including Kubernetes itself.

Kubernetes applications can run in hybrid and multi-cloud environments

One of the long-standing dreams of cloud computing is to be able to run any app in any cloud, or in any mix of clouds public or private. This isn’t just to avoid vendor lock-in, but also to take advantage of features specific to individual clouds.

Kubernetes provides a set of primitives, collectively known as federation, for keeping multiple clusters in sync with one another across multiple regions and clouds. For instance, a given app deployment can be kept consistent between multiple clusters, and different clusters can share service discovery so that a back-end resource can be accessed from any cluster. Federation can also be used to create highly available or fault-tolerant Kubernetes deployments, whether or not you’re spanning multiple cloud environments.

Federation is still relatively new to Kubernetes. Not all API resources are supported across federated instances yet, and upgrades don’t yet have automatic testing infrastructure. But these shortcomings are slated to be addressed in future versions of Kubernetes.

Where to get Kubernetes Kubernetes is available in so many forms, from open-source bits to commercially backed distributions to public cloud services, that the best way to figure out where to get it is by use case.

• If you want to do it all yourself: The source code, and pre-built binaries for most common platforms, can be downloaded from the GitHub repository for Kubernetes.

• If you’re using Docker Community or Docker Enterprise: Docker’s most recent editions come with Kubernetes as a pack-in. This is ostensibly the easiest way for container mavens to get a leg up with Kubernetes, since it comes by way of a product you’re almost certainly already familiar with.

• If you’re deploying on-prem or in a private cloud: Chances are good that any infrastructure you choose for your private cloud has Kubernetes built-in. Standard-issue, certified, supported Kubernetes distributions are available from dozens of vendors including Canonical, IBM, Mesosphere, Mirantis, Oracle, Pivotal, Red Hat, Suse, VMware, and many more.

• If you’re deploying in a public cloud: The three major public cloud vendors all offer Kubernetes as a service. Google Cloud Platform offers Google Kubernetes Engine. Microsoft Azure offers the Azure Kubernetes Service. And Amazon has added Kubernetes to its existing Elastic Container Service. Managed Kubernetes services are also available from IBM, Nutanix, Oracle, Pivotal, Platform9, Rancher Labs, Red Hat, VMware, and many other vendors.

Case Studies – Kubernetes implementation

Box’s Kubernetes Journey:

A few years ago at Box, it was taking up to six months to build a new microservice. Fast forward to today: it takes only a couple of days.

How did they manage to speed up? Two key factors made it possible:


1. Kubernetes technology

2. DevOps practices

Founded in 2005, Box was a monolithic PHP application that had grown over time to millions of lines of code. The monolithic nature of their application led to very tightly coupled designs, and this tight coupling was getting in their way: it kept them from innovating as quickly as they wanted to. Bugs in one part of the application would require them to roll back the entire application.

With so many engineers working on the same codebase of millions of lines, bugs were not uncommon. It was increasingly hard to ship features or even bug fixes on time. So they looked for a solution and decided to go with the microservices approach. But then they started to face another set of problems...

A 250-Year-Old Bank's Cloud-Native Kubernetes Journey:

For them, it all started with using containers, and they began to face problems in the initial stages. Being a bank (financial sector), they faced more challenges than usual with compliance and governance, and the priority was security. With so many tools on the cloud-native landscape, it was confusing to choose which tool for what, and they didn't want each developer team selecting different tools, drifting catastrophically apart from the others, and running into licensing issues.


So their need was to come up with clear guidelines for developers on which cloud features they could consume easily, before moving to a cloud-native approach, to create a uniform way of working. They also planned for a dedicated team, one that had previously worked on tools and processes, to share knowledge and best practices. They called this team 'Stratus.' The mission of Stratus is to enable development teams to quickly deliver secure, high-quality software by providing them with easy-to-use platforms, security, portability across clouds at the enterprise level, and reusable software components.

The keynote talk video is here: http://bit.ly/2FfUGUE

How was 'Pokemon Go' able to scale so efficiently?


500+ million downloads and 20+ million daily active users. That's HUGE. Pokemon Go's engineers never thought their user base would grow exponentially, surpassing all expectations in such a short time, and the servers couldn't handle that much traffic.

The Challenge:

Horizontal scaling was one side of it, but Pokemon Go also faced a severe challenge with vertical scaling because of the real-time activity of millions of users worldwide. Niantic was not prepared for this.

The Solution: The magic of containers.

The application logic for the game ran on Google Kubernetes Engine (GKE), powered by the open-source Kubernetes project. Niantic chose GKE for its ability to orchestrate its container cluster at planetary scale, freeing the team to focus on deploying live changes for players. In this way, Niantic used Google Cloud to turn Pokémon GO into a service for millions of players that continuously adapts and improves. This gave them more time to concentrate on building the game's application logic and new features rather than worrying about scaling. "Going viral" is not always easy to predict, but you can always have Kubernetes in your tech stack.


Italy's biggest traditional bank is embracing Kubernetes.

A conventional bank running its real business on such a young technology? Are you kidding me? Nope, I am not kidding. Italy's banking group Intesa Sanpaolo has made this transition. This is a bank that still runs its ATM network on 30-year-old mainframe technology, so its embracing the hottest trend in tech is nearly unbelievable.

Even though ING, the banking and financial corporation, changed the way banks were seen by adopting Kubernetes and DevOps practices very early in the game, there was still a stigma around adopting Kubernetes in highly regulated and controlled environments like healthcare and banking. The bank's engineering team launched an initiative in 2018 to throw away the old way of thinking, and started embracing technologies like microservices and container architecture and migrating from monolithic to multi-tier applications. It was transforming itself into a software company. Unbelievable. Today the bank runs more than 3,000 applications. Of those, more than 120 are now running in production on the new microservices architecture, including two of the 10 most business-critical applications for the bank.

Read the full case here: https://lnkd.in/e_c5fbg

Kubernetes success story at Pinterest

With over 250 million monthly active users and over 10 billion recommendations served every single day, Pinterest's scale is huge. (The numbers might have changed by now.)


As they knew these numbers were going to grow day by day, they began to feel the pain of scalability and performance issues. Their initial strategy was to move their workload from EC2 instances to Docker containers, so they first moved their services to Docker to free up engineering time spent on Puppet and to get an immutable infrastructure. The next step was to move to Kubernetes. :) Now they can take ideas from ideation to production in a matter of minutes, whereas earlier it used to take hours or even days. They have cut a great deal of overhead cost by utilizing Kubernetes and have removed a lot of manual work, without making engineers worry about the underlying infrastructure.

Read their impressive story - https://lnkd.in/eTxwFXX

Airbnb's Kubernetes story

Airbnb's transition from a monolithic to a microservices architecture is amazing. They needed to scale continuous delivery horizontally, with the goal of making continuous delivery available to the company's 1,000 or so engineers so they could add new services. Airbnb adopted Kubernetes to support over 1,000 engineers concurrently configuring and deploying over 250 critical services (at a frequency of about 500 deploys per day on average).


I want you to see this excellent presentation from Melanie Cebula, infrastructure engineer at Airbnb: https://lnkd.in/eZWpme3 Also, take a look at these Kubernetes best practices straight from the horse's mouth: https://lnkd.in/eZtxx-Z

The New York Times Kubernetes story

Today the majority of their customer-facing applications are running on Kubernetes. What an amazing story. :) The biggest impact has been on deployment speed and productivity: legacy deployments that took up to 45 minutes are now pushed in just a few.

It has also given developers more freedom and fewer bottlenecks. The New York Times has gone from a ticket-based system for requesting resources and weekly deploy schedules to letting developers push updates independently.

Check out the evolution and the fascinating story of The New York Times tech stack: http://bit.ly/nyttechstack


Kubernetes at Reddit

For many years, the Reddit infrastructure team followed traditional ways of provisioning and configuring. That only lasted until they saw the huge drawbacks of, and failures caused by, doing things the old way. They moved to Kubernetes. See this amazing video in which their infrastructure release engineering manager describes the Kubernetes story at Reddit: https://lnkd.in/eSH2H8K

Tinder’s Kubernetes story:

Due to high traffic volume, Tinder's engineering team faced challenges of scale and stability. What did they do? Kubernetes. Yes, the answer is Kubernetes.

Tinder's engineering team solved interesting challenges to migrate 200 services and run a Kubernetes cluster at scale, totaling 1,000 nodes, 15,000 pods, and 48,000 running containers. Was that easy? No way. But they had to do it for smooth business operations going forward. One of their engineering leaders said, "As we onboarded more and more services to Kubernetes, we found ourselves running a DNS service that was answering 250,000 requests per second." Thanks to that fantastic culture, Tinder's entire engineering organization now has knowledge and experience in containerizing and deploying applications on Kubernetes.

Read this fascinating case study: http://bit.ly/KubernetesatTinder

Try This Simple 5-Step Kubernetes CI/CD Process


Step 1. Develop your microservice. This can be a .war or .jar file.

Step 2. Create a Docker framework using Tomcat and Java 8 on Ubuntu as the base image.

Step 3. Create the Docker image for the microservice by adding the .war/.jar file to the Docker framework.

Step 4. Create a Helm chart for the microservice.

Step 5. Deploy the microservice to the Kubernetes cluster using the Helm chart.
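Steps 4 and 5 can be sketched with a minimal chart skeleton; the chart name, registry path, and values are all illustrative, and real charts template many more fields:

```yaml
# Chart.yaml -- chart metadata for the microservice (Helm 2-era format)
apiVersion: v1
name: my-microservice      # illustrative chart name
version: 0.1.0

# values.yaml -- defaults substituted into the chart's templates
image:
  repository: myregistry/my-microservice   # placeholder registry path
  tag: "1.0.0"
replicaCount: 2
service:
  port: 8080
```

With those files in place, running `helm install` against the chart directory renders the templates and applies the resulting manifests to the cluster.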

Try: http://bit.ly/CICDPROCESS


Kubernetes production checklist.


Here are some more tips for taking your containers all the way to production.