Many programming languages and frameworks implement the concept of lifecycle management. The term refers to how the platform can interact with a component it creates, right after the component starts or just before it stops.
This matters because sometimes we need to perform actions on the Pod as soon as it starts, such as testing connectivity to one or more of its dependencies. Similarly, the Pod may need to undergo cleanup activities before it is destroyed.
In the application process management pattern, we ensure that our containerized application is aware of its environment and correctly responds to the different signals that the platform (Kubernetes) sends to it.
Through its lifecycle, a container may get terminated, perhaps because the Pod it belongs to is being shut down, or because the container has failed its liveness probe. In all cases, Kubernetes follows a standard way of destroying a running container: sending kill signals. In the end, a container is just a process running on the machine.
First, Kubernetes sends the SIGTERM signal, the same signal that is sent by default when you issue the kill command against a process running on your Linux system.
SIGTERM allows the running process to perform any required cleanup activities before shutdown such as releasing file locks, closing database and network connections, and so on.
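For illustration, here is a minimal sketch of a shell-based container entrypoint that traps SIGTERM and performs cleanup before exiting; the my-app binary and the cleanup steps are hypothetical placeholders:

#!/bin/sh
# Hypothetical entrypoint: run cleanup when Kubernetes sends SIGTERM.
cleanup() {
  echo "SIGTERM received: closing connections and releasing locks..."
  # ...application-specific cleanup would go here...
  exit 0
}

# Register the cleanup function as the handler for SIGTERM.
trap cleanup TERM

# Run the main workload in the background and wait on it, so the shell
# (running as PID 1) remains able to receive and handle signals.
my-app &
wait $!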
Even so, sometimes the process (the container in our case) does not respond to the SIGTERM signal, whether because of a code bug that put it in an infinite loop or for some other reason.
Because of this, Kubernetes waits for a grace period of thirty seconds (configurable through the Pod’s terminationGracePeriodSeconds field) before it sends the more aggressive SIGKILL signal.
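If thirty seconds is not enough (or is too much) for your workload, you can override the grace period per Pod. A minimal sketch:

apiVersion: v1
kind: Pod
metadata:
  name: client
spec:
  # Wait up to 60 seconds after SIGTERM before sending SIGKILL,
  # instead of the default 30.
  terminationGracePeriodSeconds: 60
  containers:
  - image: nginx
    name: client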
SIGKILL is the same signal sent to a running process when you issue the popular kill -9 command and hand it the process ID. The process itself never receives the SIGKILL signal; it is handled by the underlying operating system instead.
Once the kernel receives this signal, it stops allocating resources to the process in question and terminates any CPU threads the process is still using. In other words, it pulls the plug on the process, forcing it to die.
So far, Kubernetes treats containers the same way any Linux system administrator deals with a running process: by sending signals to the process or the kernel. But because containers are parts of larger applications with complex functions and tasks, signals alone are not enough. For that reason, Kubernetes offers the postStart and preStop hooks.
You can think of a hook as a placeholder for executing code at a specific stage; you may or may not use the hook depending on your needs. The postStart hook is a placeholder for any logic that you need to execute as soon as the container starts, such as waiting for a dependency service to become available, warming up a local cache, or registering the instance with an external system.
Let’s have a quick example: the following definition file will start a Pod hosting one container. The container needs to ensure that a dependency service is available. Otherwise, the whole container should get killed:
apiVersion: v1
kind: Pod
metadata:
  name: client
spec:
  containers:
  - image: nginx
    name: client
    lifecycle:
      postStart:
        exec:
          command:
          - sh
          - -c
          - sleep 10 && exit 1
When you apply the above definition to the cluster using kubectl apply -f poststart.yml and look at the Pod’s status using kubectl get pods, you will find that the client Pod stays in the ContainerCreating status:
$ kubectl get pods
NAME     READY   STATUS              RESTARTS   AGE
client   0/1     ContainerCreating   0          6s
$ kubectl get pods
NAME     READY   STATUS              RESTARTS   AGE
client   0/1     ContainerCreating   0          22s
$ kubectl get pods
NAME     READY   STATUS              RESTARTS   AGE
client   0/1     ContainerCreating   0          43s
This is what happened in sequence:
Kubernetes pulled the nginx image
It created the container and prepared to start it
Because we have a lifecycle stanza within the definition, Kubernetes executed the postStart hook and held off marking the container as running until the hook script finished.
The postStart script slept for ten seconds and then returned a non-zero exit status.
When Kubernetes detected the non-zero exit status, it killed and restarted the container, and the whole cycle repeated indefinitely.
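If you want to confirm this from the cluster’s side, kubectl describe surfaces the failure in the Pod’s events (the kubelet records a FailedPostStartHook warning when a postStart hook fails):

$ kubectl describe pod client
# Look under "Events:" for a Warning with reason FailedPostStartHook,
# followed by the container being killed and restarted.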
We can make nginx start after ten seconds (which simulates any precheck activities) by altering the postStart script so that the definition looks as follows:
apiVersion: v1
kind: Pod
metadata:
  name: client
spec:
  containers:
  - image: nginx
    name: client
    lifecycle:
      postStart:
        exec:
          command:
          - sh
          - -c
          - sleep 10
Now, let’s apply this new configuration:
$ kubectl apply -f poststart.yml
pod/client created
$ kubectl get pods
NAME     READY   STATUS              RESTARTS   AGE
client   0/1     ContainerCreating   0          4s
$ kubectl get pods
NAME     READY   STATUS    RESTARTS   AGE
client   1/1     Running   0          22s
As you can see from the above, Kubernetes executed the postStart script and then started the container’s main ENTRYPOINT, which is the nginx daemon.
The postStart hook supports two methods for running such checks: exec, which runs a command inside the container as we did above, and httpGet, which issues an HTTP GET request against an endpoint the container exposes. For example:
apiVersion: v1
kind: Pod
metadata:
  name: client
spec:
  containers:
  - image: mynginx
    name: client
    lifecycle:
      postStart:
        httpGet:
          port: 8080
          path: /status
But can’t we use an init container instead? Good question! An init container is a Kubernetes feature that allows a container to start, perform one or more tasks, and then terminate. Init containers start and stop before the other containers do, making them the right candidates for performing pre-launch tasks.
However, while postStart hooks and init containers appear to do similar jobs, their implementations differ in important ways. An init container runs as a separate container with its own image and is guaranteed to finish before the application containers start, whereas a postStart hook runs inside the application container itself and carries no ordering guarantee relative to the container’s ENTRYPOINT. The sketch below shows the init-container approach.
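Here is a hedged sketch of the same dependency check implemented as an init container; the busybox image and the backend-service host name are illustrative assumptions, not part of the original example:

apiVersion: v1
kind: Pod
metadata:
  name: client
spec:
  initContainers:
  # Runs to completion before the nginx container is created.
  - name: wait-for-backend
    image: busybox
    command:
    - sh
    - -c
    # Block until the hypothetical dependency accepts TCP connections.
    - until nc -z backend-service 8080; do sleep 2; done
  containers:
  - image: nginx
    name: client

Because the init container is a separate container, it can use a different image with its own tooling, and its exit status gates the startup of the application containers, which a postStart hook cannot guarantee.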
Can’t we just bake this logic into the container itself? Yes, any logic implemented through the postStart script could instead be added to the ENTRYPOINT command that the container runs at startup. But this is not a good decision from a design perspective.
It tightly couples the container to its pre-launch logic in a way that requires each container to be individually modified. Using hooks allows you to change containers while keeping the same pre-launch logic in place, and to work on the pre-launch logic independently of the containers it runs against.
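For contrast, this is roughly what the coupled approach would look like, with the pre-launch wait baked into the container command. It works, but every container that needs the check has to carry it (a sketch of the anti-pattern, assuming the stock nginx image):

apiVersion: v1
kind: Pod
metadata:
  name: client
spec:
  containers:
  - image: nginx
    name: client
    # The pre-launch logic is welded to the start command, so changing
    # it means editing every container definition that uses it.
    command:
    - sh
    - -c
    - sleep 10 && nginx -g 'daemon off;'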
Earlier in this article, we learned about the different signals that Kubernetes sends to the running containers inside the Pods when it wants to bring them down.
However, although SIGTERM gives the container up to thirty seconds (by default) to shut down gracefully, this may not be sufficient for complex scenarios.
Let’s continue with our preceding example and say that the RESTful API service running alongside our nginx service needs to perform several steps before it shuts down.
The API designers were smart enough to expose an endpoint specifically for this purpose: executing the shutdown procedure. Kubernetes provides the preStop hook, which gets called right before the SIGTERM signal is sent to the container. The preStop hook also provides the same check methods as the postStart hook: httpGet and exec.
However, unlike the postStart hook, if Kubernetes detects a non-zero exit status or a non-success HTTP code, it will continue the shutdown procedure and send the SIGTERM signal.
Let’s change our example to make a GET request to the /shutdown endpoint of our hypothetical service running on port 8080:
apiVersion: v1
kind: Pod
metadata:
  name: client
spec:
  containers:
  - image: mynginx
    name: client
    lifecycle:
      preStop:
        httpGet:
          port: 8080
          path: /shutdown
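preStop also accepts the exec handler. A common pattern, sketched below under the assumption that the Pod sits behind a Service, is to sleep for a few seconds before SIGTERM arrives so that in-flight requests drain while the endpoint is being removed from load balancing:

apiVersion: v1
kind: Pod
metadata:
  name: client
spec:
  containers:
  - image: mynginx
    name: client
    lifecycle:
      preStop:
        exec:
          command:
          - sh
          - -c
          # Give the Service a few seconds to stop routing new traffic
          # here before the container receives SIGTERM.
          - sleep 5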
The main difference between a traditional and a cloud-native application is that the latter does not run on infrastructure that you own or control.
Orchestration platforms like Kubernetes were designed to ensure that you get the highest level of application performance and availability given an unpredictable infrastructure. Accordingly, cloud-native applications should be written in a way that honors the contracts and constraints imposed by Kubernetes to enjoy the features it provides.
In this article, we discussed one of the best-practice design patterns for cloud-native applications: application process management. The points that we need to drive home are the following:
Kubernetes is a continually evolving project. There may be more hooks in the future, for example to notify the container when it is about to be scaled up or down, or when it is asked to release some of its consumed resources to avoid getting killed.
As you can see, cloud-native applications allow for more automation and give the orchestrator more control to make intelligent decisions. But you need to put more thought into designing your applications to make good use of those benefits.