LISA2017 Kubernetes: Hit the Ground Running


October 29–November 3, 2017 | San Francisco, CA | www.usenix.org/lisa17 #lisa17

Kubernetes: Hit the Ground Running

Chris "mac" McEniry

Administrivia

2

Goal• Be familiar with the basics of how to use Kubernetes

• Not in scope

• Advanced Usages of Kubernetes

• Administering Kubernetes

3

Structure• Lecture followed by Watch/Follow

• I'll show it, and you can follow along

• Ask questions as we go along

• We'll take a poll with each section - if 50% are good with continuing, we will continue

• Given time and size, unable to give individual attention

• But happy to follow up afterwards

4

Biases• Focused on Linux based containers

• Expect some (though not much) familiarity with Docker

• Exercises written on MacOS with /bin/sh

• But should work with minor tweaks

5

Prerequisite Tools• VirtualBox

• docker

• https://store.docker.com/editions/community/docker-ce-desktop-mac

• https://store.docker.com/editions/community/docker-ce-desktop-windows

6

Basic Kubernetes Tools• kubectl

• curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.7.5/bin/darwin/amd64/kubectl

• curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.7.5/bin/windows/amd64/kubectl.exe

• minikube

• https://storage.googleapis.com/minikube/releases/v0.21.0/minikube-darwin-amd64

• https://storage.googleapis.com/minikube/releases/v0.21.0/minikube-windows-amd64.exe

7

Exercises

8

https://github.com/cmceniry/lisa2017-kubernetes

Lecture

9

Containers

10

Container Info Dump• Lightweight isolation

• More like process than like VM but properties of both

• Same Kernel

• Separate File System (with some mappings)

• Average lifetime of minutes-days

• VMs: Average lifetime of hours-weeks

• Machines: Average lifetime of weeks-years

11

Container Info Dump• Container : Process with a specific namespace/capability/cgroups configuration

• Namespaces: Create process visibility separation (e.g. what `ps -ef` shows and what `ls /` looks like differ inside the container)

• Capabilities: Allow/Disallow privileged functions (e.g. raw sockets for packet captures)

• CGroups: Allocate resources for processes (e.g. max cpu, memory, I/O)

• Container Image

• Packaging which becomes the basis for the root file system of a container

• Different ways to maintain it (tarball, layers, etc) depending on runtime which did packaging

12
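A hedged illustration of these isolation pieces using Docker directly (a sketch; assumes a local Docker install, and the busybox image and limit values are just examples, not part of the tutorial):

$ docker run --rm -it --memory=128m --cpus=0.5 busybox /bin/sh
/ # ps -ef            # PID namespace: only this container's processes are visible
PID   USER     TIME   COMMAND
    1 root      0:00  /bin/sh
    6 root      0:00  ps -ef
/ # ls /              # mount namespace: the image's root filesystem, not the host's
bin   dev   etc   home  proc  root  sys   tmp   usr   var

The --memory and --cpus flags are the cgroup resource limits; the same kernel is shared underneath.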


Container Info Dump• Multiple Container Runtimes

• Docker : Most common

• rkt (Rocket)

• CRI-O

• Focus on image management and running individual containers

• Get Image/Start/Stop

13

Packaging -> 12 Factor Model• Containers can be as simple as a packaging format

• Or can follow https://12factor.net/

• Many usages follow the 12 Factor Model, but some support other apps

14

Kubernetes Basics

15

Kubernetes• Container Orchestrator

• Focus on running groups of containers working together

16

Interface• API Driven (provided by Kubernetes' apiserver)

• Ubiquitous client - command line `kubectl`

• Multiple resource types: pod, replicaset, deployment, job, volume, etc

• Defined by specification

• Put/Get specifications to/from API server

17

Desired State• Focus on the *what* (specification passed to the API server) not the *how*

• "I want 5 small compute units doing my web app front end, and 2 large compute units doing my web app database"

• Orchestrator takes definition of "what" and figures out "how" to reconcile it

18
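As a sketch of what such a desired-state specification looks like (hedged: the file name, labels, and replica count below are illustrative, not from the tutorial):

$ cat web-frontend.yaml        # hypothetical example
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web-frontend
spec:
  replicas: 5                  # the "what": keep 5 copies running
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: intro:0.0.1
        ports:
        - containerPort: 8080
$ kubectl apply -f web-frontend.yaml   # hand the "what" to the API server; the controllers work out the "how"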

Controllers• Automation Providers

• Examine the current state of the cluster (via the API server), compare it with the desired state, and take the necessary steps to reconcile the two

• All kinds of controllers:

• Scheduler, Manager

• Cloud Provider(s)

• Ingress

19

Kubernetes Control Plane Components

20

Processes• State Storage (etcd) - 1 cluster : Live DB of cluster

• API Server - enough to cover load : How everything interacts

• Scheduler - 3 for redundancy, 1 active : Decides what should be running where

• Controller Manager - 3 for redundancy, 1 active : Spawns internal controllers which handle all heavy lifting functions

• kubelet - 1 per workload node : Runs wherever a workload runs.

• Kube Proxy - 1 per workload node : Handles traffic forwarding into/out of cluster and to known endpoints

• Container Networking - TBD : Handles routing of workloads to each other

21

Master vs Minion• Master : Core of Control Plane runs here

• Minion : Where the actual workloads run

• kubelet and kube-proxy run on both (depending on control plane installation)

• Lots of variations

• Can have internal/external etcd

• Can run control plane in the cluster even

22

Add ons• Pluggable cluster features

• network overlay : Provides host<->host connectivity which makes the cluster network look connected

• kube-dns : Provides a DNS domain which provides naming for cluster resources


23

Kubernetes Resources

24

Namespaces• Administrative unit

• Hold (most of) the other resources: Pods, Services, CMs, Secrets, etc.

• Apply

25
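A few hedged examples of working with namespaces (the namespace name below is illustrative):

$ kubectl get namespaces
$ kubectl create namespace team-a        # hypothetical namespace
$ kubectl get pods -n team-a             # scope a query to that namespace
$ kubectl get pods --all-namespaces      # or look across all of them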

Pod• Basic unit of compute

• The "What" of the workload

• Meant for portions of a workload that are tightly coupled

• 1 or more containers scheduled together

• Typically expected to run indefinitely

• Containers sharing a network namespace

• I.e. "host" from a network perspective

26
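A minimal sketch of a two-container Pod sharing one network namespace (hedged: the pod name and images are illustrative); the second container can reach the first over localhost because they are one "host" from a network perspective:

$ cat web-with-sidecar.yaml    # hypothetical example
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  containers:
  - name: web
    image: nginx:1.13
    ports:
    - containerPort: 80
  - name: probe
    image: busybox
    # same network namespace, so localhost:80 is the nginx container
    command: ["/bin/sh", "-c", "while true; do wget -q -O- http://localhost:80/ >/dev/null; sleep 5; done"]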

Configuration Resources• Under the 12 Factor model, containers end up being immutable, so you need to be able to get

configuration parameters in.

• ConfigMap : key/value collections which can be made available to Containers in Pods

• Secrets : key/value collections which can be made available to Containers in Pods but should be handled carefully

27

Pod Collections• ReplicaSet : Multiple identically configured pods (differ by IP)

• Deployment : Mutable collection of Pods performing a workload

• Job : Pods meant to run once

• CronJob : Pods meant to run on a regular but not constant basis

• StatefulSet : Pods meant to have some consistency over their lifetimes (name, Pod IP, etc)

• DaemonSet : Pods meant to run on all nodes (or a subset of nodes selected by label)

28

Two Most Common

Service• Abstraction of how to get to a workload

• The "Way" to the workload

• Uses labels to decide membership (which Pods are "behind" the Service)

• In base Kubernetes, this mapping is an L4 load balancer

• But add-ons provide this via DNS and other methods

29

(Slight aside) Labels• Not a resource, but is metadata on resources

• Every resource can have labels added to it

• Labels: key/value pairs that tag resource

• Used to select subsets of those resources

• "Gimme all Pods with the label `app=web`

• Services say "connect this frontend with all pods with the label `app=web`"

• Metadata also includes name, annotations (structured data), timestamps, status, etc

30
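A couple of hedged label/selector examples (the pod name and label values are illustrative):

$ kubectl label pod intro-1197849725-75kk9 tier=frontend   # add a label to an existing pod
$ kubectl get pods -l app=web                              # select only pods carrying app=web
$ kubectl get pods -l 'app in (web,db)' --show-labels      # set-based selection, printing the labels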

Other Resources• Authorization: Role, RoleBinding, ClusterRole, ClusterRoleBinding

• StorageClass, PersistentVolume, PersistentVolumeClaim

• Networking: Ingress, NetworkPolicy

• Cluster Management: Node, Cluster (Federation), ComponentStatus (pseudo)

31

Networking

32

Unified Network Space• All Pods are reachable from all other Pods from an IP perspective

• Policy may not allow, but common, non-conflicting, and known IP space

• No need to port map ports for Pod<->Pod inside of the cluster

• May need to map Outside->Pod

• Need to map from Container->Pod (container runtime behavior)

• Typically

• Implemented as an overlay but can be done with direct routing

• Must NAT to go outside the cluster (IPv6 is coming but not fully supported yet)

33

IPs• Cluster IP Space : Used to provide Unified Network Space. Pool of Pod IPs

• Service IP Space : Used to provide area to map services into

• Node IPs : The IPs assigned to master/minions

• Typically used when referring to getting into/out of the cluster

34

IPTables• Any mapping in the cluster is done via IPTables

• Handles Service IP to backend Pod IPs

• Handles inside->outside IP NAT

• Handles outside->inside IP NAT

• Controlled by kube-proxy (not really a proxy anymore)

35
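One hedged way to peek at these rules on minikube (a sketch; the KUBE-* chain names are kube-proxy internals and may differ between versions):

$ minikube ssh
$ sudo iptables -t nat -L KUBE-SERVICES | head
# each Service's cluster IP shows up here, jumping to a KUBE-SVC-... chain,
# which in turn load balances across the backing Pod IPs (KUBE-SEP-... chains)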

Useful Pointers

36

Common things to remember• Pods : multiple containers working closely together

• Pod IP : Ultimately how traffic gets into a workload. One CIDR over a cluster.

• IPTables : How kube-proxy maps everything into/out of the cluster (and how it maps services)

• Controllers : Entities that do a piece of automation

• Labels and Selectors : Ways to classify resources

• IPTables, IPTables, IPTables

37

Exercises

38

EX00: Starting minikube• Purpose

• Make sure everything is up and running

39

EX00: Starting minikube
$ minikube start
Starting local Kubernetes v1.7.0 cluster...
Starting VM...
Downloading Minikube ISO
 97.80 MB / 97.80 MB [=====================================] 100.00% 0s
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Starting cluster components...
Connecting to cluster...
Setting up kubeconfig...
Kubectl is now configured to use the cluster.

40

EX00: Starting minikube
$ minikube status
minikube: Running
localkube: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.100
$
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.5", GitCommit:"17d7182a7ccbb167074be7a87f0a68bd00d58d97", GitTreeState:"clean", BuildDate:"2017-08-31T09:14:02Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.0", GitCommit:"d3ada0119e776222f11ec7945e6d860061339aad", GitTreeState:"clean", BuildDate:"2017-07-26T00:12:31Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

41

Is everything running ok?

EX00: Starting minikube
$ kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
$
$ kubectl cluster-info
Kubernetes master is running at https://192.168.99.100:8443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

42

EX01: helloshell• Purpose

• Show how to run a simple command

• What's going on when you run the command

• Demonstrate pod, replicaset, deployment

43

EX01: helloshell
$ kubectl run -it --image=busybox bb1 /bin/sh
If you don't see a command prompt, try pressing enter.
/ # ps
PID   USER     TIME   COMMAND
    1 root      0:00  /bin/sh
    7 root      0:00  ps
/ #

44

Is everything running ok?

EX01: helloshell
$ kubectl get pod
NAME                   READY     STATUS    RESTARTS   AGE
bb1-1176220718-z09mj   1/1       Running   1          46s
$
$ kubectl get rs
NAME             DESIRED   CURRENT   READY     AGE
bb1-1176220718   1         1         1         19s
$
$ kubectl get deploy
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
bb1       1         1         1            1           1m

45

What's this actually running?

*-##########-XXXXX format tends to look like a deployment/replicaset

Why 1?

EX01: helloshell
$ kubectl attach bb1-1176220718-z09mj -c bb1 -i -t
If you don't see a command prompt, try pressing enter.
/ # Session ended, resume using 'kubectl attach bb1-1176220718-z09mj -c bb1 -i -t' command when the pod is running
$
$ kubectl attach bb1-1176220718-z09mj -c bb1 -i -t
If you don't see a command prompt, try pressing enter.
error: unable to upgrade connection: container bb1 not found in pod bb1-1176220718-z09mj_default
$
$ kubectl get pod bb1-1176220718-z09mj
NAME                   READY     STATUS      RESTARTS   AGE
bb1-1176220718-z09mj   0/1       Completed   2          10m

46

Let's try exiting and entering quickly

What's this error?

What does it mean when the pod is running?

0/1 means it's in the middle of (re-)starting

EX01: helloshell
$ kubectl get pod bb1-1176220718-z09mj -o yaml
apiVersion: v1
kind: Pod
...
spec:
  containers:
  ...
  restartPolicy: Always

47

Pod description says it's going to try to restart

EX01: helloshell
$ kubectl delete deploy/bb1
deployment "bb1" deleted

48

Let's clean up some

EX02: Official Introduction• Purpose

• Connect docker and kubernetes

• Build artifacts that can go into kubernetes

• Reinforce pod, replicaset, deployment

• Demonstrate services

• From: https://kubernetes.io/docs/tutorials/stateless-application/hello-minikube/

49

EX02: Official Introduction
$ eval $(minikube docker-env)
$ docker ps
CONTAINER ID   IMAGE                                      COMMAND
6d5fac7595bd   gcr.io/google_containers/pause-amd64:3.0   "/pause"
...

50

Connect to minikube's docker daemon

Now we can interact with it just as if it was a local docker daemon

EX02: Official Introduction
$ cd ex02
$ ls
Dockerfile server.js
$
$ docker build -t intro:0.0.1 .
Sending build context to Docker daemon 3.072kB
Step 1 : FROM node:6.9.2
6.9.2: Pulling from library/node
75a822cd7888: Pull complete
57de64c72267: Pull complete
...
Step 4 : CMD node server.js
 ---> Running in 22b57d427b1c
 ---> 280abd363feb
Removing intermediate container 22b57d427b1c
Successfully built 280abd363feb

51

Build a container image from the official example

EX02: Official Introduction
$ kubectl run intro --image=intro:0.0.1 --port=8080
$
$ kubectl get deployments
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
intro     1         1         1            1           7s
$
$ kubectl get pods
NAME                     READY     STATUS    RESTARTS   AGE
intro-1197849725-75kk9   1/1       Running   0          10s
$

52

Start the image

So, it's running - now what?

EX02: Official Introduction
$ kubectl expose deploy/intro --type=NodePort
service "intro" exposed
$
$ kubectl get svc
NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
intro        10.0.0.238   <nodes>       8080:30561/TCP   6s
kubernetes   10.0.0.1     <none>        443/TCP          9h
$
$ minikube service intro
Opening kubernetes service default/intro in default browser...
$
$ minikube service intro --url
http://192.168.99.100:30561

53

Make it available over the network

Tells it to map to a port on all (here, just the one) of the nodes

Automatically opens browser

Or get the URL yourself

We're going to leave this running, for the next exercise...

EX03: The Dashboard Add-on• Purpose

• Demonstrate the dashboard add-on

• Demonstrate minikube dashboard shortcuts

54

EX03: The Dashboard Add-on
$ minikube dashboard
Opening kubernetes dashboard in default browser...
$
$ minikube dashboard --url
http://192.168.99.100:30000
$

55

Start up the dashboard

Automatically opens browser

As before, can get it yourself

But before, we did `minikube service $NAME`...

EX03: The Dashboard Add-on
$ kubectl get service -n kube-system
NAME                   CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
kube-dns               10.0.0.10    <none>        53/UDP,53/TCP   9h
kubernetes-dashboard   10.0.0.114   <nodes>       80:30000/TCP    9h
$
$ minikube service -n kube-system kubernetes-dashboard
Opening kubernetes service kube-system/kubernetes-dashboard in default browser...
$ minikube service -n kube-system kubernetes-dashboard --url
http://192.168.99.100:30000

56

There is a `kubernetes-dashboard` service running; it's just in the kube-system namespace (we'll come back to that...)

`minikube dashboard` is a shortcut for `minikube service...` for the dashboard

EX03: The Dashboard Add-on

57

EX04: Add-ons• Purpose

• Explore the add-ons and add-ons manager

• Explore the kube-system namespace

58

EX04: Add-ons
$ kubectl get -n kube-system pods
NAME                          READY     STATUS    RESTARTS   AGE
kube-addon-manager-minikube   1/1       Running   0          21h
kube-dns-910330662-rnwgp      3/3       Running   0          21h
kubernetes-dashboard-tlh94    1/1       Running   0          21h
$
$ minikube addons list
- ingress: disabled
- dashboard: enabled
- heapster: disabled
- kube-dns: enabled
- registry: disabled
- registry-creds: disabled
- addon-manager: enabled
- default-storageclass: enabled

59

Already saw kube-system services - what about pods?

These are background processes which are managed by the minikube add-on manager

EX04: Add-ons
$ minikube addons enable heapster
heapster was successfully enabled
$
$ kubectl -n kube-system get pods
NAME                          READY     STATUS    RESTARTS   AGE
heapster-t00zx                1/1       Running   0          2s
influxdb-grafana-ll71w        2/2       Running   0          2s
kube-addon-manager-minikube   1/1       Running   0          22h
kube-dns-910330662-rnwgp      3/3       Running   0          22h
kubernetes-dashboard-tlh94    1/1       Running   0          22h

60

Let's turn on something else

Check on the pods again

EX04: Add-ons• addon-manager : Controller which provides these add-ons

• dashboard : Web interface for cluster information and status

• kube-dns : Provides cluster DNS mapping (we'll come back to this)

• heapster : Gathers container and node statistics

• registry : Can run a container image registry

• default-storageclass : Provides a simple host-path persistent volume

• ingress : Provides a Layer 7 load balancer as Kubernetes primitive

• registry-creds : Simplified way to provide container registry user/password for image pulls

61

EX04: Add-ons

62

EX05: Working with pods• Purpose

• Explore multiple ways of seeing pod information

• Explore the pod spec

63

EX05: Working with pods
$ kubectl get pod
NAME                     READY     STATUS    RESTARTS   AGE
intro-1197849725-g22tx   1/1       Running   0          15m
$
$ kubectl get pod -o wide
NAME                     READY     STATUS    RESTARTS   AGE       IP           NODE
intro-1197849725-g22tx   1/1       Running   0          15m       172.17.0.4   minikube
$
$ kubectl get pods -o name
pods/intro-1197849725-g22tx
$

64

Get pods - normal

Get pods - show Pod IP and Node

Get pods - show just name (good for shell loops)

EX05: Working with pods
$ kubectl describe pod/intro-1197849725-g22tx
Name:           intro-1197849725-g22tx
Namespace:      default
Node:           minikube/192.168.99.100
Start Time:     Sat, 09 Sep 2017 13:26:55 -0700
Labels:         pod-template-hash=3978227742
                run=intro
Annotations:    ...
Status:         Running
IP:             172.17.0.10
Created By:     ReplicaSet/intro-1197849725
Controlled By:  ReplicaSet/intro-1197849725
Containers:
  intro:
    Container ID:  docker://...
    Image:         intro:0.0.2
    Image ID:      docker://sha256:...
    Port:          8080/TCP
    State:         Running
...

65

This is `describe pod`. It gives you some human readable information about the pod.

EX05: Working with pods
$ kubectl get pods/intro-1197849725-g22tx -o yaml
apiVersion: v1
kind: Pod
metadata:
  name: intro-1197849725-g22tx
  namespace: default
  ...
spec:
  containers:
  - image: intro:0.0.1
  ...
status:
  hostIP: 192.168.99.100
  podIP: 172.17.0.4
  ...

66

This is what a pod spec looks like. This can be used for specific search/display or to configure the system.

EX05: Working with pods
$ kubectl get pod -o=custom-columns=NAME:.metadata.name,IP:.status.podIP
NAME                     IP
intro-1197849725-g22tx   172.17.0.4

67

Show just the name and podIP

EX05: Working with pods
$ cat redis.yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis-manual
spec:
  containers:
  - image: redis:4.0.1
    name: redis
$
$ kubectl apply -f redis.yaml
pod "redis-manual" created
$ kubectl get pods
NAME           READY     STATUS    RESTARTS   AGE
redis-manual   1/1       Running   0          2s
...

68

Write a spec file manually

Apply it to the cluster

EX05: Working with pods
$ kubectl apply -f redis.yaml
pod "redis-manual" created
$
$ kubectl delete -f redis.yaml
pod "redis-manual" deleted
$
$ kubectl apply -f redis.yaml
pod "redis-manual" created
$

69

$ kubectl get pods -w | grep redis
redis-manual   0/1       Pending             0         0s
redis-manual   0/1       Pending             0         0s
redis-manual   0/1       ContainerCreating   0         0s
redis-manual   1/1       Running             0         1s
redis-manual   1/1       Terminating         0         7s
redis-manual   0/1       Terminating         0         8s
redis-manual   0/1       Terminating         0         9s
redis-manual   0/1       Terminating         0         9s
redis-manual   0/1       Pending             0         15s
redis-manual   0/1       Pending             0         15s
redis-manual   0/1       ContainerCreating   0         15s
redis-manual   1/1       Running             0         16s

We can also -w(atch) the pod changes

EX05: Working with pods
$ kubectl get events | grep redis-manual
3m 3m 1 redis-manual Pod Normal Scheduled default-scheduler Successfully assigned redis-manual to minikube

3m 3m 1 redis-manual Pod Normal SuccessfulMountVolume kubelet, minikube MountVolume.SetUp succeeded for volume "default-token-3j744"

3m 3m 1 redis-manual Pod spec.containers{redis} Normal Pulled kubelet, minikube Container image "redis:4.0.1" already present on machine

3m 3m 1 redis-manual Pod spec.containers{redis} Normal Created kubelet, minikube Created container

3m 3m 1 redis-manual Pod spec.containers{redis} Normal Started kubelet, minikube Started container

3m 3m 1 redis-manual Pod spec.containers{redis} Normal Killing kubelet, minikube Killing container with id docker://redis:Need to kill Pod

...

70

Can see the same as -w(atch), and more, in the events

EX06: Working in a container• Purpose

• Explore starting points for debugging

• Explore how to get logs

• Explore how to get inside a container

71

EX06: Working in a container
$ kubectl logs redis-manual
...:32.344 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
...:32.344 # Redis version=4.0.1, bits=64, commit=00000000, modified=0, pid=1, just started
...:32.344 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
...:32.345 * Running mode=standalone, port=6379.
...:32.345 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
...:32.345 # Server initialized
...:32.345 * Ready to accept connections

72

Getting "logs" == stdout and stderr

Convention of 12 Factor logging approach. https://12factor.net/logs
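A few related, hedged variations of the logs command (pod and container names are illustrative):

$ kubectl logs -f redis-manual                   # stream the log output, like tail -f
$ kubectl logs --previous redis-manual           # logs from the prior instance of a restarted container
$ kubectl logs intro-1197849725-75kk9 -c intro   # pick a container when a pod has more than one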

EX06: Working in a container
$ kubectl exec -it redis-manual /usr/local/bin/redis-cli
127.0.0.1:6379> set foo 10
OK
127.0.0.1:6379> get foo
"10"
127.0.0.1:6379>
$
$ kubectl exec -it redis-manual /usr/local/bin/redis-cli
127.0.0.1:6379> get foo
"10"

73

How to work on the redis container?

It persists across invocations of the client command

EX06: Working in a container
$ kubectl delete pod/redis-manual
pod "redis-manual" deleted
$ kubectl apply -f ./redis.yaml
pod "redis-manual" created
$ kubectl exec -it redis-manual /usr/local/bin/redis-cli
127.0.0.1:6379> get foo
(nil)

74

But does not persist across invocations of the pod itself

EX07: Deployment replicas• Purpose:

• Explore deployment keeping replicas running

• Explore adding/removing replicas from a deployment

75

EX07: Deployment replicas
$ kubectl get pods
NAME                     READY     STATUS    RESTARTS   AGE
intro-1197849725-g22tx   1/1       Running   0          30m
redis-manual             1/1       Running   0          10m
$
$ kubectl get pods -o name | xargs kubectl delete
pod "intro-1197849725-g22tx" deleted
pod "redis-manual" deleted
$ kubectl get pods
NAME                     READY     STATUS        RESTARTS   AGE
intro-1197849725-g22tx   1/1       Terminating   0          30m
intro-1197849725-v78kg   1/1       Running       0          17s

76

Let's do a little cleanup

A new intro pod is already around

EX07: Deployment replicas
$ kubectl get deploy intro
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
intro     1         1         1            1           30m
$
$ kubectl get pod
NAME                     READY     STATUS        RESTARTS   AGE
intro-1197849725-v78kg   1/1       Terminating   0          2m
$

77

Deployment tries to keep CURRENT equal to DESIRED

Delete the deployment to make the pod go away

EX07: Deployment replicas
$ kubectl run intro --image=intro:0.0.1 --port=8080 --replicas=3
deployment "intro" created
$ kubectl get deploy
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
intro     3         3         3            3           13s
$ kubectl get pods
NAME                     READY     STATUS    RESTARTS   AGE
intro-1197849725-bl6qs   1/1       Running   0          17s
intro-1197849725-gdr6f   1/1       Running   0          17s
intro-1197849725-qm7zd   1/1       Running   0          17s

78

Let's start it with more instances

replicas == pod count

EX07: Deployment replicas
$ kubectl delete po/intro-1197849725-qm7zd
pod "intro-1197849725-qm7zd" deleted
$ kubectl get pods
NAME                     READY     STATUS        RESTARTS   AGE
intro-1197849725-bl6qs   1/1       Running       0          1m
intro-1197849725-gdr6f   1/1       Running       0          1m
intro-1197849725-l50k3   1/1       Running       0          3s
intro-1197849725-qm7zd   1/1       Terminating   0          1m
$ kubectl get deploy intro
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
intro     3         3         3            3           1m

79

Delete a pod again

The deployment will do what it needs to to get the count back to 3

EX07: Deployment replicas
$ kubectl scale deploy/intro --replicas=1
deployment "intro" scaled
$ kubectl get deploy/intro
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
intro     1         1         1            1           2m
$ kubectl get pods
NAME                     READY     STATUS        RESTARTS   AGE
intro-1197849725-bl6qs   1/1       Terminating   0          3m
intro-1197849725-gdr6f   1/1       Running       0          3m
intro-1197849725-l50k3   1/1       Terminating   0          1m
$

80

Scale it down to 1

EX08: Deployment updates• Purpose

• Show how to update a deployment

81

EX08: Deployment updates

82

Let's update Hello World to be a bit more specific.

EX08: Deployment updates
$ grep Hello server.js
    response.end('Hello Kubernetes Tutorial!');
$ docker build -t intro:0.0.2 .
Sending build context to Docker daemon 3.072kB
Step 1 : FROM node:6.9.2
 ---> faaadb4aaf9b
Step 2 : EXPOSE 8080
 ---> Using cache
 ---> 20e3088f6122
Step 3 : COPY server.js .
 ---> 9aa0164faa7e
Removing intermediate container abe7416c8707
Step 4 : CMD node server.js
 ---> Running in 4d672a4e6fac
 ---> 1df0203ce037
Removing intermediate container 4d672a4e6fac
Successfully built 1df0203ce037

83

First, we need a new image to update to.

EX08: Deployment updates
$ kubectl scale deploy/intro --replicas=3
deployment "intro" scaled
$
$ # kubectl get pods -w ### watch the deployment as it happens
$
$ kubectl set image deploy/intro intro=intro:0.0.2
deployment "intro" image updated
$ kubectl rollout status deploy/intro
deployment "intro" successfully rolled out
$
$ minikube service intro

84

Next, let's make sure we have some additional copies for resilience

85

EX08: Deployment updates
intro-3978227742-hllw8   0/1       Pending             0         0s
intro-1197849725-j3l1c   1/1       Terminating         0         1m
intro-3978227742-hllw8   0/1       Pending             0         0s
intro-3978227742-hllw8   0/1       ContainerCreating   0         0s
intro-3978227742-6kb68   0/1       Pending             0         0s
intro-3978227742-6kb68   0/1       Pending             0         0s
intro-3978227742-6kb68   0/1       ContainerCreating   0         0s
intro-3978227742-hllw8   1/1       Running             0         0s
intro-1197849725-lt7fs   1/1       Terminating         0         1m
intro-3978227742-5d5w9   0/1       Pending             0         0s
intro-3978227742-5d5w9   0/1       Pending             0         0s
intro-3978227742-5d5w9   0/1       ContainerCreating   0         0s
intro-3978227742-5d5w9   1/1       Running             0         1s
intro-1197849725-2nzz4   1/1       Terminating         0         2m
intro-3978227742-6kb68   1/1       Running             0         1s

86

Our deployment strategy (default rollingUpdate) will create new Pods before deleting the old ones, and it will roll over some of the pods "slowly". (In this exercise, the pods come up too quickly so not much waiting.)
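For reference, the knobs behind that behavior live in the deployment spec's strategy block; a hedged sketch (the values shown are illustrative, not necessarily this deployment's defaults):

$ kubectl get deploy/intro -o yaml
...
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # how many extra pods may be created during the roll
      maxUnavailable: 1    # how many pods may be unavailable during the roll
...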

EX09 Pod information inside• Purpose

• Show how to expose information to pod

• Explore the deployment specification

• Explore the `edit` command

87

EX09 Pod information inside

88

Since I have 3 pods, how do I know which one I'm hitting? Let's add the pod IP to our response.

EX09 Pod information inside
$ grep Hello server.js
    response.end('Hello Kubernetes Tutorial from ' + process.env.PODIP + '!\n');
$ docker build -t intro:0.0.3 .
Sending build context to Docker daemon 3.072kB
Step 1 : FROM node:6.9.2
 ---> faaadb4aaf9b
Step 2 : EXPOSE 8080
 ---> Using cache
 ---> 20e3088f6122
Step 3 : COPY server.js .
 ---> 83e6090ec153
Removing intermediate container 76bf52dc48dc
Step 4 : CMD node server.js
 ---> Running in c08880cc596d
 ---> e2c588c47a0a
Removing intermediate container c08880cc596d
Successfully built e2c588c47a0a
$
$ kubectl set image deploy/intro intro=intro:0.0.3
deployment "intro" image updated

89

Start by adding a new image (0.0.3) which pulls an environment variable called PODIP

And roll this out

EX09 Pod information inside

90

It's updated, but we haven't defined the environment variable in it yet.

EX09 Pod information inside
$ kubectl get deploy/intro -o yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: intro
  ...
spec:
  replicas: 3
  strategy:
    rollingUpdate:
    ...
  template:
    spec:
      containers:
      - image: intro:0.0.3
        name: intro
        ports:
        - containerPort: 8080
          protocol: TCP
      ...

91

Deployments have specs just like pods do

The pod spec is nested inside of the deployment spec

EX09 Pod information inside
$ kubectl edit deploy/intro

spec:
...
  template:
    spec:
      containers:
      - image: intro:0.0.3
        name: intro
        env:
        - name: PODIP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.podIP
...

92

Opens up the EDITOR

Once it's written and EDITOR is exited, it'll save and cycle the pods

EX09 Pod information inside
...
deployment "intro" edited
$ kubectl get pods
NAME                     READY     STATUS        RESTARTS   AGE
intro-1725170555-306cj   1/1       Terminating   0          28m
intro-1725170555-pw8zd   1/1       Terminating   0          28m
intro-1725170555-qbctx   1/1       Terminating   0          28m
intro-3265745252-07l0b   1/1       Running       0          3s
intro-3265745252-0qw03   1/1       Running       0          4s
intro-3265745252-sxz2s   1/1       Running       0          4s
$
$ minikube service intro
...

93

The pods cycle...

EX09 Pod information inside

94

And we are now showing the Pod IP

EX09 Pod information inside
$ kubectl get pod -o=custom-columns=NAME:.metadata.name,IP:.status.podIP
NAME                     IP
intro-3265745252-07l0b   172.17.0.11
intro-3265745252-0qw03   172.17.0.9
intro-3265745252-sxz2s   172.17.0.10

95

And confirm the Pod IPs

EX09 Pod information inside
$ curl http://192.168.99.100:30561/
Hello Kubernetes Tutorial from 172.17.0.11!
$ curl http://192.168.99.100:30561/
Hello Kubernetes Tutorial from 172.17.0.9!
$ curl http://192.168.99.100:30561/
Hello Kubernetes Tutorial from 172.17.0.10!
$ curl http://192.168.99.100:30561/
Hello Kubernetes Tutorial from 172.17.0.9!
$ curl http://192.168.99.100:30561/
Hello Kubernetes Tutorial from 172.17.0.10!
$ curl http://192.168.99.100:30561/
Hello Kubernetes Tutorial from 172.17.0.11!

96

Run it from the command line since browsers will pool the connection...

EX10 Configmaps• Purpose

• Explore configmaps

• Explore `kubectl apply`

97

EX10 Configmaps
$ grep -A 2 Hello server.js
    response.end('Hello Kubernetes Tutorial from ' + process.env.PODIP + '!\n' +
      'The configuration variable is ' + process.env.CONFIGVAR + '\n'
    );
$ docker build -t intro:0.0.4 .
Sending build context to Docker daemon 3.072kB
Step 1 : FROM node:6.9.2
 ---> faaadb4aaf9b
Step 2 : EXPOSE 8080
 ---> Using cache
 ---> 20e3088f6122
Step 3 : COPY server.js .
 ---> 5f8dad93c9b3
Removing intermediate container 29b6ad3411b5
Step 4 : CMD node server.js
 ---> Running in c6911d08376d
 ---> f72a2166a111
Removing intermediate container c6911d08376d
Successfully built f72a2166a111

98

Update our server to output something with more environment variables in it

EX10 Configmaps
$ kubectl set image deploy/intro intro=intro:0.0.4
deployment "intro" image updated
$
$ kubectl get pods
NAME                     READY     STATUS        RESTARTS   AGE
intro-3265745252-0qw03   1/1       Terminating   0          24m
intro-3265745252-7wg78   1/1       Terminating   0          29s
intro-3265745252-sxz2s   1/1       Terminating   0          24m
intro-4010436465-3sbgd   1/1       Running       0          4s
intro-4010436465-lr2qh   1/1       Running       0          2s
intro-4010436465-nchsj   1/1       Running       0          4s
$
$ curl http://192.168.99.100:30561/
Hello Kubernetes Tutorial from 172.17.0.8!
The configuration variable is undefined

99

See that it's using it, but like with the Pod IP, it's not being set yet

EX10 Configmaps
$ kubectl create configmap --from-literal=configvar=valuea intro
configmap "intro" created
$
$ kubectl edit deploy/intro
...
spec:
  template:
    spec:
      containers:
      - env:
        - name: CONFIGVAR
          valueFrom:
            configMapKeyRef:
              name: intro
              key: configvar
...
deployment "intro" edited

100

Create a configmap with one key/value in it

Map that configmap's key in as an environment variable

EX10 Configmaps
$ kubectl get pods
NAME                     READY     STATUS        RESTARTS   AGE
intro-2942694475-948fj   1/1       Running       0          6s
intro-2942694475-9tf89   1/1       Running       0          6s
intro-2942694475-vh007   1/1       Running       0          6s
intro-3265745252-07l0b   1/1       Terminating   0          3s
intro-3265745252-0qw03   1/1       Terminating   0          4s
intro-3265745252-sxz2s   1/1       Terminating   0          4s
$
$ curl http://192.168.99.100:30561/
Hello Kubernetes Tutorial from 172.17.0.10!
The configuration variable is valuea

101

Deployment changed, so pods roll

And now it's using the CONFIGVAR environment variable to output value

EX10 Configmaps
$ kubectl get cm/intro -o yaml
apiVersion: v1
data:
  configvar: valuea
kind: ConfigMap
metadata:
  creationTimestamp: 2017-09-10T05:36:09Z
  name: intro
  namespace: default
  resourceVersion: "170767"
  selfLink: /api/v1/namespaces/default/configmaps/intro
  uid: f2f0527d-95e9-11e7-b635-080027358e48
$ kubectl get cm/intro -o yaml > intro-cm.yaml

102

Another way is to use the configmap spec like pod and deployment. Can get that by looking at what's in there already.

Save that out to a file

EX10 Configmaps
$ vi intro-cm.yaml
...
apiVersion: v1
data:
  configvar: valuea
kind: ConfigMap
metadata:
  name: intro
...
$
$ kubectl delete cm/intro
configmap "intro" deleted
$
$ kubectl apply -f intro-cm.yaml
configmap "intro" created

103

Reduce it down to take out the Kubernetes server decoration

Delete the old intro configmap

`apply` tries to create/update the resource in sync with the file

In this case, it creates

EX10 Configmaps
$ vi intro-cm.yaml
...
apiVersion: v1
data:
  configvar: valueb
kind: ConfigMap
metadata:
  name: intro
...
$
$ kubectl apply -f intro-cm.yaml
configmap "intro" configured
$
$ curl http://192.168.99.100:30561/
Hello Kubernetes Tutorial from 172.17.0.11!
The configuration variable is valuea

104

Let's update the configvar

And testing... we see that it hasn't updated

EX10 Configmaps
$ kubectl get pods
NAME                     READY     STATUS    RESTARTS   AGE
intro-2942694475-t0dk8   1/1       Running   0          9m
intro-2942694475-tlpvm   1/1       Running   0          9m
intro-2942694475-vfwwf   1/1       Running   0          9m
$
$ kubectl get pods -o name | xargs kubectl delete
pod "intro-2942694475-t0dk8" deleted
pod "intro-2942694475-tlpvm" deleted
pod "intro-2942694475-vfwwf" deleted
$
$ curl http://192.168.99.100:30561/
Hello Kubernetes Tutorial from 172.17.0.10!
The configuration variable is valuec

105

A configmap change isn't readily identified as a change to the deployment, so these pods don't get restarted automatically.

Delete the pods manually

Test again, and we see that it has changed

EX10 Configmaps• Can also define config maps from files

• Include the whole file verbatim: kubectl create cm test --from-file=configs=/path/to/file

• Include the file as a list of key/value pairs: kubectl create cm test --from-env-file=/path/to/file

106
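A hedged sketch of the difference between the two forms (the file name and contents below are illustrative):

$ cat settings.conf          # hypothetical file
listen_port=9090
log_level=debug
$
$ kubectl create cm test --from-file=configs=settings.conf
# -> one key, "configs", whose value is the whole file verbatim
$ kubectl create cm test2 --from-env-file=settings.conf
# -> two keys, "listen_port" and "log_level", one per line of the file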

EX11: Secrets• Purpose

• Explore the Secrets resource

107

EX11: Secrets
$ kubectl create secret generic intro --from-literal=password=reallysecret
secret "intro" created
$
$ kubectl get secret intro -o yaml
apiVersion: v1
data:
  password: cmVhbGx5c2VjcmV0
kind: Secret
metadata:
  name: intro
  namespace: default
  ...
type: Opaque

108

Secret is very similar to the configmap, but it's meant to have some meaning behind it (and more careful handling is in progress)

Stored as base64-encoded values available from the API

EX11: Secrets
$ grep -A 2 Hello server.js
    response.end('Hello Kubernetes Tutorial from ' + process.env.PODIP + '!\n' +
      'The secret password is "' + process.env.PW + '"\n'
    );
$ docker build -t intro:0.0.5 .
Sending build context to Docker daemon 3.072kB
Step 1 : FROM node:6.9.2
 ---> faaadb4aaf9b
Step 2 : EXPOSE 8080
 ---> Using cache
 ---> 20e3088f6122
Step 3 : COPY server.js .
 ---> cb2fb7acc119
Removing intermediate container 409c93df3ec7
Step 4 : CMD node server.js
 ---> Running in 35465f243ef9
 ---> 566294badefd
Removing intermediate container 35465f243ef9
Successfully built 566294badefd

109

Can use it the same way - set up the secret as an environment variable

EX11: Secrets
$ cat intro-deploy.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
...
spec:
  ...
  template:
    ...
    spec:
      containers:
      - env:
        ...
        - name: PW
          valueFrom:
            secretKeyRef:
              name: intro
              key: password
        image: intro:0.0.5

110

Updated intro deployment specification

Make the password available to the app as part of the environment

Update to our latest build

EX11: Secrets
$ kubectl apply -f intro-deploy.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
deployment "intro" configured
$
$ kubectl get pods
NAME                     READY     STATUS        RESTARTS   AGE
intro-2942694475-fpnfh   1/1       Terminating   0          30m
intro-2942694475-gqdvf   1/1       Terminating   0          30m
intro-2942694475-rxn92   1/1       Terminating   0          30m
intro-3353884051-kdz2c   1/1       Running       0          2m
intro-3353884051-nnkln   1/1       Running       0          2m
intro-3353884051-rln6s   1/1       Running       0          2m
$
$ minikube service intro
Opening kubernetes service default/intro in default browser...

111

Update the new deploy spec with `apply`

Updated deployment causes pods to roll

And see if it worked...

EX11: Secrets

112

It worked!

EX11: Secrets• You can change access to secrets separate from access to configmaps (see RBAC)

• Exposing via the environment may leak it (env available in other ways) --- we'll look at that next

• There is work to protect the secrets more

• Not allow any node access to the secret -- only ones where the secret is scheduled

• Sealing it all the way to the process

• Can use external secret stores - Vault, CyberArk, KMS, but mileage may vary

113

EX12: Volumes• Purpose

• Explore the volumes, volumeMounts fields in the spec

• Explore secrets, configmaps as mounts

114

EX12a: Volumes
$ cat server.js
var http = require('http');
var fs = require('fs');
var password = fs.readFileSync('/data/password', 'UTF8');

var handleRequest = function(request, response) {
  console.log('Received request for URL: ' + request.url);
  response.writeHead(200);
  response.end('Hello Kubernetes Tutorial from ' + process.env.PODIP + '!\n' +
    'The secret password from env is "' + process.env.PW + '"\n' +
    'The secret password from fs is "' + password + '"\n'
  );
};
var www = http.createServer(handleRequest);
www.listen(8080);

115

Update to read a secret from a file system path `/data/password`

EX12a: Volumes
$ cat intro-deploy.yaml
kind: Deployment
...
spec:
  template:
    ...
    spec:
      volumes:
      - name: intro
        secret:
          secretName: intro
      containers:
      - volumeMounts:
        - name: intro
          readOnly: true
          mountPath: /data
        ...
        image: intro:0.0.6

116

Update the deployment spec to map the intro secret to `/data`. This puts the `password` key at `/data/password`

EX12a: Volumes
$ docker build -t intro:0.0.6 .
Sending build context to Docker daemon 5.632kB
...
$ kubectl apply -f intro-deploy.yaml
deployment "intro" configured
$ kubectl get pods
NAME                     READY     STATUS        RESTARTS   AGE
intro-2431729973-nn0t0   1/1       Running       0          4s
...
intro-538233644-8tj16    1/1       Terminating   0          4m
...
$ minikube service intro

117

Deploy it all - build image, apply deployment, check for changed pods, and open the browser

EX12a: Volumes

118

EX12b: Volumes
$ cat server.js
var http = require('http');
var fs = require('fs');
var configvar = fs.readFileSync('/cm/configvar', 'UTF8');

var handleRequest = function(request, response) {
  console.log('Received request for URL: ' + request.url);
  response.writeHead(200);
  response.end('Hello Kubernetes Tutorial from ' + process.env.PODIP + '!\n' +
    'The configvar from fs is "' + configvar + '"\n'
  );
};
var www = http.createServer(handleRequest);
www.listen(8080);

119

Same can be done for config map

EX12b: Volumes
$ cat intro-deploy.yaml
kind: Deployment
...
spec:
  ...
  template:
    ...
    spec:
      volumes:
      ...
      - name: cm
        configMap:
          name: intro
      containers:
      - volumeMounts:
        ...
        - name: cm
          readOnly: true
          mountPath: /cm

120

Define config map volume in deployment spec

EX12b: Volumes
$ docker build -t intro:0.0.7 .
Sending build context to Docker daemon 5.12kB
...
$ kubectl apply -f intro-deploy.yaml
deployment "intro" configured
$ kubectl get pods
NAME                     READY     STATUS              RESTARTS   AGE
...
intro-1784550256-3qvbn   0/1       ContainerCreating   0          3s
...
intro-2431729973-m7fxw   1/1       Terminating         0          3m
...

121

Redeploy

EX12b: Volumes
$ curl http://192.168.99.100:30561/
Hello Kubernetes Tutorial from 172.17.0.3!
The configvar from fs is "valuec"

122

And test...

EX12: Volumes• Additional Volume types, but depend on environment

• HostPath volume

• Local volume

• NFS and NAS volumes

• Ceph, Gluster, ScaleIO, etc volumes

• Cloud volumes - AWS EBS/EFS, GCP Persistent Disk, Azure Disk/File

123
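Those environment-specific volumes are usually consumed through a PersistentVolumeClaim rather than being named directly in the pod spec; a minimal hedged sketch (the claim name, storage class behavior, and size are illustrative):

$ cat redis-pvc.yaml           # hypothetical example
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-data
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
$ kubectl apply -f redis-pvc.yaml
# then reference it from a pod spec:
#   volumes:
#   - name: redis-data
#     persistentVolumeClaim:
#       claimName: redis-data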

EX13: Stateful Sets• Purpose

• Explore support for applications expecting consistent IPs

124

EX13: Stateful Sets
$ cat redis-statefulset.yaml
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: redis
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:4.0.1
        volumeMounts:
        - mountPath: /data
          name: redis-data
      volumes:
      - name: redis-data
        hostPath:
          path: /data

125

StatefulSet spec is similar to Deployment where it has a nested Pod spec inside of it

hostPath volume creates a place to preserve data (separate from the Name/IP preservation)

EX13: Stateful Sets
$ kubectl apply -f redis-statefulset.yaml
statefulset "redis" created
$ kubectl get statefulset
NAME      DESIRED   CURRENT   AGE
redis     1         1         24s
$ kubectl get pods -o wide
NAME                     READY     STATUS    RESTARTS   AGE       IP            NODE
intro-1784550256-300fz   1/1       Running   0          1m        172.17.0.10   minikube
intro-1784550256-cxq4v   1/1       Running   0          1m        172.17.0.9    minikube
intro-1784550256-lc4l8   1/1       Running   0          1m        172.17.0.11   minikube
redis-0                  1/1       Running   0          1m        172.17.0.3    minikube

126

Apply just like any of the others

Pod IP is allocated

Pod names are created from the statefulset name with an identifier after it

EX13: Stateful Sets
$ kubectl run -it --image=redis:4.0.1 shell /bin/sh
If you don't see a command prompt, try pressing enter.
#
# redis-cli -h 172.17.0.3
172.17.0.3:6379>
172.17.0.3:6379> set foo bar
OK
172.17.0.3:6379> get foo
"bar"
172.17.0.3:6379> save
OK
172.17.0.3:6379>
#
Session ended, resume using 'kubectl attach shell-2621852270-816gf -c shell -i -t' command when the pod is running

127

Let's operate inside of the pod a little bit

Connect to the server pod based on IP

Set some data and to check back later

Make sure the data is saved to disk

EX13: Stateful Sets
$ kubectl delete pod redis-0
pod "redis-0" deleted
$ kubectl get pods/redis-0 -o wide -w
NAME      READY     STATUS              RESTARTS   AGE       IP           NODE
redis-0   1/1       Terminating         0          13s       172.17.0.3   minikube
redis-0   0/1       Terminating         0          14s       <none>       minikube
redis-0   0/1       Terminating         0          23s       <none>       minikube
redis-0   0/1       Terminating         0          23s       <none>       minikube
redis-0   0/1       Pending             0          4s        <none>       <none>
redis-0   0/1       Pending             0          4s        <none>       minikube
redis-0   0/1       ContainerCreating   0          4s        <none>       minikube
redis-0   1/1       Running             0          5s        172.17.0.3   minikube

128

Delete the pod, and it's recreated automatically with the same name/IP

EX13: Stateful Sets
$ kubectl attach shell-2621852270-816gf -c shell -it
If you don't see a command prompt, try pressing enter.
# redis-cli -h 172.17.0.3
172.17.0.3:6379> get foo
"bar"
172.17.0.3:6379>
#

129

Check back in the new pod and see if the connection IP and data is preserved

EX13: Stateful Sets• Tied together with volumes and storage classes, StatefulSets can help with non-12 Factor Apps

• Downsides

• Can't pick IP ahead of time

• Affects pod scheduling (has to map to existing node)

130

EX14: Services• Purpose

• Explore the Service spec

• Explore cluster DNS

131

EX14: Services
$ kubectl apply -f redis.yaml
pod "redis" created
$
$ kubectl get pods/redis -o wide
NAME      READY     STATUS    RESTARTS   AGE       IP           NODE
redis     1/1       Running   0          21s       172.17.0.3   minikube
$ kubectl attach shell-2621852270-816gf -c shell -it
If you don't see a command prompt, try pressing enter.
# redis-cli -h 172.17.0.3
172.17.0.3:6379> GET foo
"bar"
172.17.0.3:6379>
#
Session ended, resume using 'kubectl attach shell-2621852270-816gf -c shell -i -t' command when the pod is running

132

EX14: Services
$ kubectl delete pod/redis
pod "redis" deleted
$
$ kubectl scale deploy/intro --replicas=5
deployment "intro" scaled
$
$ kubectl apply -f redis.yaml
pod "redis" created
$ kubectl get pod/redis -o wide
NAME      READY     STATUS    RESTARTS   AGE       IP            NODE
redis     1/1       Running   0          9s        172.17.0.12   minikube
$

133

New pod has new IP

Delete to free up the IP

Make something else take up the existing IP (.3)

Recreate

EX14: Services
$ kubectl attach shell-2621852270-816gf -c shell -it
If you don't see a command prompt, try pressing enter.
#
# redis-cli -h 172.17.0.3
Could not connect to Redis at 172.17.0.3:6379: Connection refused
Could not connect to Redis at 172.17.0.3:6379: Connection refused
not connected> exit
#
# redis-cli -h 172.17.0.12
172.17.0.12:6379> GET foo
"bar"
172.17.0.12:6379>
#
Session ended, resume using 'kubectl attach shell-2621852270-816gf -c shell -i -t' command when the pod is running

134

Let's try to get the data again

Using the old IP address will fail

Try the new IP

Data is still there

EX14: Services
$ kubectl get service
NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
intro        10.0.0.238   <nodes>       8080:30561/TCP   1h
kubernetes   10.0.0.1     <none>        443/TCP          1h
$ kubectl get service/intro -o yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    run: intro
  name: intro
  ...
spec:
  clusterIP: 10.0.0.238
  ports:
  - nodePort: 30561
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    run: intro
  type: NodePort
  ...

135

Look at the existing services

This came from the `expose` in EX02

EX14: Services
$ kubectl expose pod redis --port 6379
service "redis" exposed
$ kubectl get service/redis
NAME      CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
redis     10.0.0.184   <none>        6379/TCP   6s
$ kubectl get service/redis -o yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: redis
  name: redis
  ...
spec:
  clusterIP: 10.0.0.184
  ports:
  - port: 6379
    protocol: TCP
    targetPort: 6379
  selector:
    app: redis
  ...

136

This is the IP to use to connect to

EX14: Services
$ kubectl get service/redis -o yaml
apiVersion: v1
kind: Service
spec:
  selector:
    app: redis
  ...
$ kubectl get pods/redis -o yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: redis
  name: redis
  ...

137

How the service maps to the pods

EX14: Services

• Ensure that the `kube-dns` add-on is running

• This provides a mapping from $SERVICENAME to $IP

• FQDN: $SERVICE_NAME.$NAMESPACE.svc.$CLUSTER_DOMAIN

• So can use DNS instead of L4 mappings

138

$ minikube addons list | grep kube-dns
- kube-dns: enabled

EX14: Services
$ kubectl attach shell-2621852270-816gf -c shell -it
If you don't see a command prompt, try pressing enter.
#
# redis-cli -h 10.0.0.184
10.0.0.184:6379> GET foo
"bar"
10.0.0.184:6379>
#
# redis-cli -h redis
redis:6379> GET foo
"bar"
redis:6379>
#
Session ended, resume using 'kubectl attach shell-2621852270-816gf -c shell -i -t' command when the pod is running

139

Try to access it via the IP address of the service

Try to access it via the DNS name
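The fully qualified name should also resolve; a hedged example assuming the default cluster domain cluster.local and the default namespace:

# redis-cli -h redis.default.svc.cluster.local
redis.default.svc.cluster.local:6379> GET foo
"bar"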

Where to go from here?

140

Where to go from here?• Topics

• Running the Kubernetes cluster itself

• Persistent Volumes

• Ingresses

• Access Control

• Operators

• Helm

• Multicontainer Pods, Sidecars

141

October 29–November 3, 2017 | San Francisco, CA | www.usenix.org/lisa17 #lisa17

Remember to fill in your tutorial evaluations!

Thank You!

F2 - Kubernetes: Hit the Ground Running
Chris "mac" McEniry
