Since the introduction of containers, the method of building and running applications in an organization has changed immensely. The majority of companies are now leveraging the power of containerization and using container tools like Docker and Kubernetes extensively. Having some knowledge of such tools and technologies is very important.
Of all the companies using container orchestration platforms, roughly 75% use Kubernetes. If you are a Manager or Lead in such an organization and working on DevOps practices, you likely have many developers, engineers, and operations professionals on your team working on the Kubernetes cluster. It is crucial for you and your team to have a great command of Kubernetes for overall team productivity.
I have decided to launch a series of blogs on Team Productivity, with each blog in the series covering different parameters. You will also learn how Magalix can help you in achieving team productivity. In today’s blog, I will teach you how to increase team productivity by doing proper resource management of the Kubernetes cluster when used by multiple members of a team or even multiple teams.
Why is Team Productivity important?
Once you have decided that you want to run your application on Kubernetes, the next thing you need to decide is how your team will be utilizing the Kubernetes cluster. Your team can have a variety of professionals including developers, QA engineers, admins, operations teams, etc.
When you share your cluster with multiple members of the team, or even multiple teams, resource allocation becomes one of the most important factors to consider.
In the beginning, everything will work fine on the cluster, but you may get surprises later. If any member of the team uses all the resources of the cluster to run a few containers, the other members won't be able to run pods/containers on the cluster anymore, which may lead to the downtime of the application. This can have a huge impact on business productivity. It is always a good idea to set resource requests and limits in Kubernetes.
Resource Requests and Limits
There are two main resource types in a Kubernetes cluster: CPU and memory. Kubernetes uses these parameters to figure out where to run your pods, and it controls them through two mechanisms: requests and limits. A request is the amount of CPU or memory a container is guaranteed to get; once you set it, Kubernetes only schedules the pod on a node that can satisfy the request. A limit is the ceiling a container's resource usage may not exceed.
You can set requests and limits for each container in a pod. The request must never exceed the limit; otherwise, Kubernetes rejects the pod. CPU resources are typically defined in millicores (m) and memory resources in mebibytes (Mi). Below is a sample example:
```yaml
containers:
- name: prodcontainer1
  image: ubuntu
  resources:
    requests:
      memory: "128Mi"
      cpu: "600m"
    limits:
      memory: "256Mi"
      cpu: "1000m"
```
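As an aside, the combination of requests and limits also determines the pod's Quality of Service (QoS) class: requests lower than limits (as in the example above) yields the Burstable class, while setting requests equal to limits for every container yields Guaranteed, which is evicted last under node memory pressure. A minimal sketch of a Guaranteed-class container spec (the name and values are illustrative, not from the example above):

```yaml
containers:
- name: guaranteed-container
  image: ubuntu
  resources:
    # requests equal to limits for every resource => Guaranteed QoS
    requests:
      memory: "256Mi"
      cpu: "500m"
    limits:
      memory: "256Mi"
      cpu: "500m"
```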
Oftentimes when a team starts working with a Kubernetes cluster, they forget to set the resources and go ahead with the default configuration, only realizing its importance later. Any team can start running multiple containers without any constraints and take up more than its fair share of the cluster. This is why setting a resource quota on a namespace, pod, or container is extremely important.
Setting Resource Quota
You can easily prevent the issues mentioned above by setting up resource quotas. In Kubernetes, you can create a namespace and lock it down using quotas.
For example, if you have a production team namespace and a development team namespace, a common pattern is to put no quota on production and a strict quota on development. This way, the developers won't be allowed to use all the resources of the cluster, and the production team can take all the resources it needs in case of a traffic spike.
Here is an example of a ResourceQuota for the development team's namespace:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-dev-team
  namespace: development
spec:
  hard:
    requests.cpu: "700m"
    requests.memory: "1200Mi"
    limits.cpu: "1000m"
    limits.memory: "2000Mi"
```
There are four parameters which have been set in the above YAML file. Let me tell you about each of them:
- Requested CPU (requests.cpu) is the maximum combined CPU requests that all containers in the namespace can have. In this example, you can have 70 containers with 10m requests each, 7 containers with 100m requests each, or just one container with a 700m request. As long as the total requested CPU in the namespace stays at or below 700m, we're good to go.
- Requested memory (requests.memory) is the maximum combined memory requests that all containers in the namespace can have. In the above example, you can have 10 containers with 120 MiB requests each, or just one container with a 1200 MiB request, as long as the total requested memory in the namespace stays at or below 1200 mebibytes.
- Limit for CPU (limits.cpu) is the maximum combined CPU limit that all containers in the namespace can have.
- Finally, the limit for memory (limits.memory) is the maximum combined memory limit that all containers in the namespace can have.
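One practical note: once a ResourceQuota covering compute resources is active in a namespace, Kubernetes rejects at admission any pod that omits those requests or limits. A common companion is a LimitRange, which injects defaults into containers that don't specify their own. A minimal sketch, where the object name and default values are illustrative assumptions:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits-dev-team
  namespace: development
spec:
  limits:
  - type: Container
    defaultRequest:       # applied when a container sets no request
      cpu: "100m"
      memory: "128Mi"
    default:              # applied when a container sets no limit
      cpu: "200m"
      memory: "256Mi"
```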
You can set up such structures for different members of your team, or for different teams working on the same cluster. People on the development and QA teams could have fewer resources available to them than the operations team working on the production environment; the arrangement can be as flexible as you need.
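The per-team structure described above boils down to one quota per namespace. As a sketch, the namespace names and values below are illustrative assumptions; production simply gets no quota object at all:

```yaml
# Strict quota for the development team
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: development
spec:
  hard:
    requests.cpu: "700m"
    requests.memory: "1200Mi"
    limits.cpu: "1000m"
    limits.memory: "2000Mi"
---
# More generous quota for the QA team
apiVersion: v1
kind: ResourceQuota
metadata:
  name: qa-quota
  namespace: qa
spec:
  hard:
    requests.cpu: "2"
    requests.memory: "4Gi"
    limits.cpu: "4"
    limits.memory: "8Gi"
```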
How can Magalix help?
When you connect your cluster to Magalix, this is what it will look like by default. The Magalix dashboard shows how much CPU and memory is available in the cluster, as well as how much of it is being utilized; my cluster is using only 10% of its CPU and 14% of its memory. Drill down inside the cluster dropdown and go to the optimization tab. If you have not set resource requests and limits for CPU and memory, Magalix will identify this issue and show it in the summary tab, as shown below.
Click on the issue listed to get the recommendation by Magalix. It will take you to the automation tab, which lists all the namespaces on your cluster where the CPU or memory requests and limits are not set.
Here, you might have namespaces named development, QA, staging, production, etc., depending on your organization's use case. You can select the recommendations for those namespaces where you want to do resource management.
For example, here, I'm selecting the CPU and memory limits and requests for the kubernetes-dashboard namespace.
When you click on “Apply Selected” in the previous step, a pop up will appear listing the selected recommendations. Click on “Apply”.
This will take you to the final tab, which is the execution log. A Magalix agent will go ahead and start applying the recommendations on the Kubernetes cluster, and you can watch the progress in the status section.
By using Magalix you can easily do proper resource management of the cluster.
When you are working on a cluster with a team, optimally managing the cluster resources is important for better productivity.
Using the steps mentioned in this blog, you can easily set resource requests and limits for the cluster, so there won't be any resource outages caused by misuse of cluster resources. Setting limits will help you run the cluster smoothly across multiple team members or multiple teams and increase overall team productivity.