Capacity management is a complex, ever-moving target for teams on any infrastructure, whether on-prem, cloud-native, or hybrid cloud.
It can be especially difficult for teams running Kubernetes clusters. In this post, you'll learn how to get started right by starting small and focusing on step 3 from the Magalix Ultimate Guide to Capacity Management eBook: setting pods' CPU limits.
When setting CPU limits in your pods, it’s important to use the right statistics to properly budget your resources.
For example, a common budgeting method is to use the average CPU utilization, but this doesn't work: averages smooth out the peaks that actually cause throttling, so you end up underprovisioned. On the other hand, using the maximum or high percentiles without considering the metric's resolution causes a lot of resource churn.
Setting the proper CPU limits makes your system predictable and performant. Not setting limits allows CPU-hungry containers to slow down all other containers on the same worker node.
On the other hand, setting CPU limits too low can degrade the performance of your applications: a low limit exposes your apps to throttling, which can lead to significant performance issues. If you're running infrastructure for a mobile game, a slow gaming experience can be a huge burden on user acquisition and engagement. In domains like healthcare or finance, it can be a death sentence for your business, and even worse for your end users (patients and clients).
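As a concrete illustration, CPU requests and limits are set per container in the pod spec under `resources`. A minimal sketch — the pod name, image, and values below are illustrative, not recommendations:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-app            # hypothetical name
spec:
  containers:
  - name: web
    image: nginx:1.25       # illustrative image
    resources:
      requests:
        cpu: "250m"         # scheduler reserves a quarter of a core for this container
      limits:
        cpu: "500m"         # container is throttled if it tries to use more than half a core
```

The actual values should come from observed utilization at an appropriate percentile, as discussed above, not from the average.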
Lastly, to round out a quick start on capacity management in Kubernetes, your team should understand why setting a pod's memory limit is critical for the availability of your cluster and containers.
Many engineers shy away from setting pod memory limits because the kernel's OOM killer terminates a container that exceeds its memory limit, and they'd rather avoid that outcome by not setting limits at all.
However, not setting the pod's memory limit leads to a much bigger blast radius: when that container (or others) exhausts the node's available memory, multiple pods get OOMKilled and rescheduled to another node, where the same memory exhaustion can repeat with a different set of containers.
Setting low memory values also puts pods at risk of frequent crashes. Setting high memory values puts pressure on your infrastructure budget, since you pay for a lot of idle capacity. Neither is ideal.
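Memory requests and limits go in the same `resources` block of the pod spec. Again a minimal sketch with illustrative names and values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-app            # hypothetical name
spec:
  containers:
  - name: web
    image: nginx:1.25       # illustrative image
    resources:
      requests:
        memory: "256Mi"     # scheduler only places the pod on a node with this much free
      limits:
        memory: "512Mi"     # container is OOMKilled if its usage exceeds this
```

If a container exceeds its memory limit, Kubernetes restarts it with the reason `OOMKilled`, visible in `kubectl describe pod`. One useful design choice: setting requests equal to limits for every container gives the pod the Guaranteed QoS class, making it the last candidate for eviction under node memory pressure.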
Note: Get the recommended memory limit values automatically with KubeOptimizer. You can see the collected metrics and recommended values on a single screen! You can also apply the suggested value with a single click, and look like a superhero 😉
At this point, let’s stop and review pod CPU limit recommendations:
Quick tips for setting pod memory limits for your team: