
Kubernetes Deployments 101

 

Why use a Kubernetes Deployment?

In another article, we discussed Kubernetes ReplicaSets. ReplicaSets, however, have one major drawback: once you select the pods that a ReplicaSet manages, you cannot change their pod template. For example, if you are using a ReplicaSet to deploy four pods running NodeJS and you want to change the NodeJS image to a newer version, you need to delete the ReplicaSet and recreate it. Restarting the pods causes downtime until the images are pulled and the pods are running again.

A Deployment resource uses a ReplicaSet to manage the pods. However, it handles updating them in a controlled way. Let’s dig deeper into Deployment Controllers and patterns.

Your First Deployment

Let’s have a quick demonstration of what Kubernetes Deployments can do. The following Deployment definition deploys four pods with Apache as their hosted application:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: apache-deployment
  labels:
    role: webserver
spec:
  replicas: 4
  selector:
    matchLabels:
      role: webserver
  template:
    metadata:
      labels:
        role: webserver
    spec:
      containers:
      - name: frontend
        image: httpd
        ports:
        - containerPort: 80

Save the above in a file. In this example, I named the file apache_deployment.yaml. Apply the definition to the cluster by running the following command:

kubectl apply -f apache_deployment.yaml --record

Notice the use of the --record flag at the end of the command. While not required, this is a good practice to follow. The --record flag saves the command that caused the change in the resource’s revision history (through the kubernetes.io/change-cause annotation). Later in the article, you’ll see the value of keeping this information. After a few seconds, you can check the status of the pods by running kubectl get pods. You should see four pods running.
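The output should look similar to the following; the pod name suffixes are generated randomly, so yours will differ:

NAME                                 READY   STATUS    RESTARTS   AGE
apache-deployment-6bdd4b58db-4jtpl   1/1     Running   0          35s
apache-deployment-6bdd4b58db-8wq9m   1/1     Running   0          35s
apache-deployment-6bdd4b58db-l5h2c   1/1     Running   0          35s
apache-deployment-6bdd4b58db-x7rdn   1/1     Running   0          35s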

The Deployment definition

Let’s have a look at the definition file that we used to bring those pods up:

  • The file starts with the apiVersion field. For Deployment objects, it is currently apps/v1.
  • Then we have the kind of resource: Deployment.
  • In the metadata, we define the name of this Deployment and its label.
  • The spec field defines how many pod replicas this Deployment should maintain. It also contains the selection criteria that the controller uses to acquire the target pods: the matchLabels field targets pods labeled role=webserver.
  • The spec field also holds the pod template that is used to create (or recreate) the pods.
  • The spec.template.metadata defines the labels that new pods will carry.
  • The spec.template.spec part contains the actual container definition (under the containers field). In our example, we define the container name, the image it runs (httpd), and the port it listens on (HTTP 80).

Performing Updates with Zero Downtime (Deployment Rolling Updates)

So far, everything our Deployment has done is no different from what a typical ReplicaSet does. The real power of a Deployment lies in its ability to update the pod templates without causing an application outage.

Let’s say that you have finished testing Apache version 2.4, and you are ready to use it in production. The current pods are running an older Apache image. The following command changes the Deployment’s pod template to use the new image:

kubectl set image deployment apache-deployment frontend=httpd:2.4

The above command changes the image of the container named frontend (the name we gave it in the pod template) to httpd tagged 2.4. An alternative way to achieve this is to edit the Deployment’s YAML directly using a command like the following:

kubectl edit deployment apache-deployment

Then, scroll down to the pod template and change the httpd image tag. Once you save your changes, the Deployment starts updating the pods one by one. You can watch the progress of this operation by issuing the following command:

kubectl rollout status deployment apache-deployment

The output shows the update progress until all the pods use the new container image.
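For a rolling update of our four pods, the output resembles the following; the exact messages depend on timing and your kubectl version:

Waiting for deployment "apache-deployment" rollout to finish: 2 out of 4 new replicas have been updated...
Waiting for deployment "apache-deployment" rollout to finish: 3 of 4 updated replicas are available...
deployment "apache-deployment" successfully rolled out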

By default, Kubernetes Deployments roll updates so that no more than 25% of the pods are unavailable at any time. Accordingly, the controller doesn’t kill old pods unless a sufficient number of new ones are up. Likewise, it doesn’t create too many new pods at once: by default, it surges at most 25% above the desired count. Through this algorithm, the application stays available throughout the update.

You can use the following command to determine the update strategy that the Deployment is using:

kubectl describe deployments | grep Strategy

The output looks as follows:

StrategyType:           RollingUpdate
RollingUpdateStrategy:  25% max unavailable, 25% max surge

We used grep here to filter the command’s output down to the lines that reveal how the Deployment updates the pods. If we remove the filter, we’ll find some valuable information about the deployment steps in the Events section:


Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  28m    deployment-controller  Scaled up replica set apache-deployment-6bdd4b58db to 4
  Normal  ScalingReplicaSet  4m38s  deployment-controller  Scaled up replica set apache-deployment-67fd555f74 to 1
  Normal  ScalingReplicaSet  4m38s  deployment-controller  Scaled down replica set apache-deployment-6bdd4b58db to 3
  Normal  ScalingReplicaSet  4m38s  deployment-controller  Scaled up replica set apache-deployment-67fd555f74 to 2
  Normal  ScalingReplicaSet  4m34s  deployment-controller  Scaled down replica set apache-deployment-6bdd4b58db to 2
  Normal  ScalingReplicaSet  4m34s  deployment-controller  Scaled up replica set apache-deployment-67fd555f74 to 3
  Normal  ScalingReplicaSet  4m33s  deployment-controller  Scaled down replica set apache-deployment-6bdd4b58db to 1
  Normal  ScalingReplicaSet  4m33s  deployment-controller  Scaled up replica set apache-deployment-67fd555f74 to 4
  Normal  ScalingReplicaSet  4m32s  deployment-controller  Scaled down replica set apache-deployment-6bdd4b58db to 0

You should find this at the end of the command output. It shows how the Deployment first created a ReplicaSet with four pods. Then, when the update started, it created a new ReplicaSet with just one pod and immediately killed one of the pods in the old ReplicaSet. It kept killing pods from the old ReplicaSet and scaling up the new one until the new ReplicaSet owned all four pods.

Let’s double check that we have two ReplicaSets created for us by running kubectl get rs. The output should be similar to the following:

NAME                           DESIRED   CURRENT   READY   AGE
apache-deployment-67fd555f74   4         4         4       19m
apache-deployment-6bdd4b58db   0         0         0       43m

The old ReplicaSet has no pods, while the new one has all four.


Kubernetes Deployments Strategies Overview

Rolling Update

If you want to use the rolling update strategy, you needn’t specify any parameters in the definition file. However, you may want to fine-tune how Kubernetes handles the transition from the old pods to the new ones. By default, Kubernetes keeps at least 75% of the pods available; that is, only 25% of the pods (one out of four, in our example) can be down during the update process. If you want to override this behavior, you can set .spec.strategy as follows:

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 50%

By setting the maxUnavailable parameter to 50%, I’m allowing Kubernetes to bring down as many as half of the running pods during the update. With four replicas, that means up to two pods can be unavailable at a time, while maxSurge: 1 allows at most five pods to exist at once. Both parameters accept either a percentage or an absolute number.

Recreate Update

The Recreate strategy brings down all the old pods immediately and then starts the new ones. This obviously causes downtime, but sometimes it is necessary. For example, suppose you discover a serious security flaw in your application and need to switch to a patched image immediately. You don’t want any of your clients to use the old, vulnerable version, as this may negatively affect your business reputation. A Recreate deployment strategy is justified here even if it brings the application down for a few moments. Perhaps you can display a friendly “under maintenance” message until the update is done.
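Switching a Deployment to this strategy requires only the strategy type and no extra parameters. A minimal sketch of the relevant part of the spec:

spec:
  strategy:
    type: Recreate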


There are other deployment strategies that engineers may need to use. At the time of this writing, the Kubernetes Deployment resource supports only the RollingUpdate and Recreate strategies. However, with the help of other Kubernetes objects like Services, you can achieve more complex scenarios, such as:

  • Blue/Green deployment: you run two different versions of your application, the latest (green) and the currently serving one (blue). Once the green deployment is ready, you reconfigure the Service to select the new pods (through its pod selector). If everything goes without issues, you can then update the blue version to the latest release and use it as a staging environment.
  • Canary release: named after the safety technique coal miners followed in the old days. They placed a cage containing canary birds at the entrance of an unexplored mine; if the birds died, that indicated the presence of toxic carbon monoxide. In software releases, a canary deployment directs a subset of your users to the new version of your application to gather feedback, while the majority of users stay on the old, stable version. If no issues are detected, more and more users get directed to the new version until it’s fully released. In Kubernetes, this can be done by creating a second Deployment with a smaller replica count (the canary instance), as sketched below. The canary Deployment can be scaled up or terminated entirely according to the test results.
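Here is a minimal sketch of such a canary Deployment. The name apache-deployment-canary, the track: canary label, and the httpd:2.4 image are illustrative assumptions; a Service selecting only role=webserver would send roughly one fifth of the traffic to the canary pod (one pod out of five total):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: apache-deployment-canary
  labels:
    role: webserver
spec:
  replicas: 1
  selector:
    matchLabels:
      role: webserver
      track: canary
  template:
    metadata:
      labels:
        role: webserver
        track: canary    # extra label so the canary Deployment manages only its own pod
    spec:
      containers:
      - name: frontend
        image: httpd:2.4   # the candidate version under test (assumed)
        ports:
        - containerPort: 80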

Updating a Deployment while another is in progress (Rollover Updates)

As soon as the Deployment controller detects a change in the pod template (i.e., an update), it creates a new ReplicaSet and starts moving pods to it until all of them are replaced. However, sometimes you may want to issue a new update while the existing one is still in progress. Let’s look at an example.

Suppose you are updating ten application pods to version 1.1 (image myapp:1.1). While the update is in progress, the QA team informs you that they’ve just finished testing version 1.2 and it’s ready for deployment. So, you decide to interrupt the running update and go ahead with the latest version (image myapp:1.2). Behind the scenes, the Deployment was using a new ReplicaSet and had already moved three pods to the myapp:1.1 image when it detected the new deployment request. It immediately creates another ReplicaSet, kills the three pods that had already moved, and starts scaling up the newest ReplicaSet with pods using myapp:1.2. In other words, it does not wait for all ten pods to finish upgrading to myapp:1.1 before migrating them to myapp:1.2. Instead, it aborts the existing operation and starts the new one right away. Such an operation is called a rollover update, and it is a powerful technique to ensure that your pods reach the desired state in the shortest time possible, with no downtime.

Undoing a deployment (aka Rolling Back)

Kubernetes Deployments allow you to roll back updates. There are many scenarios in which you’d want to undo a change. Let’s say that customers started complaining about a bug that was not detected during the QA phase and, hence, you need to get the application back to the previous version until the bug gets fixed. For example, say you decided to use Nginx instead of Apache for your web tier. You issued the following command to make that change:

kubectl set image deployment apache-deployment frontend=nginx:1.7.9 --record

In a few moments, all the pods were using Nginx as their web server. Then you realized that there are performance issues with the application, and clients are starting to complain. You need to configure the pods to use Apache again, with no downtime during the rollback.

The Kubernetes Deployment controller keeps track of every revision that has been made (up to a configurable limit, set through .spec.revisionHistoryLimit). Only changes to the pod template are recorded in this history; scaling the number of running pods up or down, for example, does not create a new revision.

Back to our example. To roll back the Deployment to a previous revision, you first need to list the recent changes. The following command outputs the deployment history:

kubectl rollout history deployment apache-deployment

You should see the following output:

REVISION  CHANGE-CAUSE
1         kubectl apply --filename=apache_deployment.yaml --record=true
2         kubectl set image deployment apache-deployment frontend=nginx:1.7.9 --record=true

The CHANGE-CAUSE column contains the command that caused the change. If you didn’t use the --record flag, this field would show <none>.
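You can also inspect a single revision in detail by passing the --revision flag. The abbreviated output below is illustrative; the exact formatting varies between kubectl versions:

kubectl rollout history deployment apache-deployment --revision=2

deployment.apps/apache-deployment with revision #2
Pod Template:
  Labels:       pod-template-hash=67fd555f74
                role=webserver
  Containers:
   frontend:
    Image:      nginx:1.7.9
    Port:       80/TCP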

To roll back the latest deployment and return to the previous state, run the following command:

kubectl rollout undo deployment apache-deployment

The Deployment controller starts a process similar to the one it used when upgrading the pods to Nginx. In a few moments, all the pods are running Apache again.

Sometimes you may want to roll back to a specific revision. Let’s say that you upgraded from the httpd 2.4 image to 2.4.39 before changing to Nginx, and now you want to revert to httpd:2.4. That’s two revisions back. You can specify the exact revision number you want the Deployment to roll back to by using the --to-revision flag. For example:

kubectl rollout undo deployment apache-deployment --to-revision=1

Scaling and Autoscaling Deployments

Since Deployments use ReplicaSets internally to manage pods, they also support scaling up and down. Let’s scale our apache-deployment to run six pods instead of four:

kubectl scale deployment apache-deployment --replicas=6

If you check the pods now using kubectl get pods, you’ll see the Deployment creating two more pods.
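Alternatively, you can scale declaratively: change the replicas field in apache_deployment.yaml and re-apply the file. This keeps the definition file as the single source of truth:

spec:
  replicas: 6   # changed from 4

kubectl apply -f apache_deployment.yaml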

You can also use Horizontal Pod Autoscaling (HPA) to automatically increase or decrease the number of pods in a Deployment based on the CPU utilization of the pods. The following command

kubectl autoscale deployment apache-deployment --min=6 --max=10 --cpu-percent=70

adds or removes pods from the Deployment according to CPU load, targeting an average CPU utilization of 70% across the pods. As the load increases, the Deployment spawns more pods, up to a maximum of ten. When the load drops, the Deployment kills extra pods, as long as no fewer than six remain. You can read more about the autoscaling algorithm in the Kubernetes HPA documentation.
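Behind the scenes, kubectl autoscale creates a HorizontalPodAutoscaler object targeting the Deployment. A minimal sketch of the equivalent manifest, using the autoscaling/v1 API:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: apache-deployment
spec:
  scaleTargetRef:          # the workload this autoscaler manages
    apiVersion: apps/v1
    kind: Deployment
    name: apache-deployment
  minReplicas: 6
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70

Note that the HPA can only compute utilization if the pods declare CPU resource requests in their container spec.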


TL;DR

  • Kubernetes Deployment is one of the most powerful controllers at your disposal. It not only maintains a specified number of pods, but also ensures that any updates you make to those pods do not cause downtime.
  • Behind the scenes, Deployments use ReplicaSets to manage the pods.
  • Kubernetes Deployments support rollover updates, in which you can interrupt an in-progress update and instruct the Deployment controller to start the new update immediately without causing an application outage.
  • Kubernetes maintains a history of recent deployments. You can use this list to roll back an update, and you can move to a specific revision by specifying its revision number.
  • You can use Deployments to scale the number of managed pods up or down. You can also configure them to respond to CPU load by creating or killing pods, subject to a maximum and minimum count.
Mohamed Ahmed

Aug 28, 2019