
Labeling Your Nodes is a Wise Move!

Overview

In a homogeneous Kubernetes environment, where every node runs the same hardware and operating system, your workloads are scheduled wherever resources are available. Aside from your master nodes, nothing else differentiates your nodes. This might be suitable at first, but not every pod is created equal.

This is why Kubernetes and Magalix recommend labeling your nodes, preferably as you provision them, so that you are already set when the time comes.

Your workloads will get more complex over time. Scenarios you could run into include:

  • Provisioning dedicated nodes for specific jobs, such as machine learning workflows
  • Deploying services based on locality
  • Scheduling workloads that require specialized hardware
  • Placing multiple pods on the same node, or explicitly keeping them apart
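
To prepare for scenarios like these, labels can be attached to a node at any time with kubectl. Here is a minimal sketch, assuming a hypothetical node name and label values (on cloud providers, many well-known labels are populated automatically by the kubelet and cloud controller):

```bash
# Attach the well-known instance-type label plus a custom workload label
# to an existing node (node name and values are hypothetical examples).
kubectl label nodes worker-gpu-01 node.kubernetes.io/instance-type=p3.2xlarge
kubectl label nodes worker-gpu-01 workload=machine-learning

# Confirm the labels took effect.
kubectl get node worker-gpu-01 --show-labels
```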

In addition to these scenarios, some organizations run both Windows and Linux servers. It’s quite possible that your handful of Windows Server 2019 servers all have 32GB of RAM, while your Linux machines range from 4GB to 8GB of RAM. Mix in the fact that some of your containers rely on runAsUser, which Windows does not yet support, and you are looking at some complex scheduling.
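
For instance, a pod that depends on runAsUser can be kept off your Windows nodes with a nodeSelector on the well-known kubernetes.io/os label. A minimal sketch, assuming a hypothetical application image:

```yaml
# Hypothetical pod that must run as a non-root UID.
# Windows nodes don't support runAsUser, so pin it to Linux nodes
# using the well-known OS label.
apiVersion: v1
kind: Pod
metadata:
  name: linux-only-app
spec:
  nodeSelector:
    kubernetes.io/os: linux
  securityContext:
    runAsUser: 1000
  containers:
    - name: app
      image: example.com/app:1.0   # hypothetical image
```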

Provisioning and managing these nodes without any sort of grouping won’t let you support any of these scenarios. You may even end up in a situation where, even though your total resources are adequate for all of your workloads, some pods sit in a Pending state while you provision new nodes for additional capacity.

Let’s take the following example of a cluster in AWS:

[Figure: example AWS cluster scheduled without node labels]

Our cluster doesn’t use node labels. Instead, we allow Kubernetes to manage the scheduling for us. You can see that ServiceA doesn’t have enough hardware resources to get scheduled, forcing you to provision another node with at least 8GB of RAM and 2 vCPUs. This leaves an existing server sitting idle, adding cost for no reason at all.

With 3 available nodes, the problem seems reasonably manageable. A brute-force method would be to juggle your pods by killing them off until each lands on the node it’s supposed to. If you deployed in a certain order, things would probably be fine. But what happens if nodes become unavailable, or pods restart? If something causes your services to come up in a different order, you’re back at square one. Double the number of services, each with different requirements, and this kind of juggling ends in chaos, leaving you no choice but to add more nodes to work around the issue.

The Power of Labels and KubeAdvisor

These are the situations where node labels play a crucial role. They are important enough that Kubernetes advises adding well-known labels to your nodes. Based on the official Kubernetes documentation, we decided to implement our very own KubeAdvisor policies to ensure we are labeling our nodes appropriately, at all times, across all clusters. Revisiting our example, we labeled our nodes with the recommended node.kubernetes.io/instance-type label and used nodeSelector in our deployments to place pods on the right nodes.
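
As a sketch of what that looks like (the instance type, image, and resource requests below are illustrative, not taken from the example cluster), the Deployment’s pod template simply selects nodes by the well-known instance-type label:

```yaml
# ServiceA pinned to nodes of a specific instance type via nodeSelector
# (instance type, image, and resource values are illustrative).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-a
spec:
  replicas: 2
  selector:
    matchLabels:
      app: service-a
  template:
    metadata:
      labels:
        app: service-a
    spec:
      nodeSelector:
        node.kubernetes.io/instance-type: m5.xlarge
      containers:
        - name: service-a
          image: example.com/service-a:1.0
          resources:
            requests:
              cpu: "2"
              memory: 8Gi
```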

[Figure: the same cluster rescheduled using node.kubernetes.io/instance-type labels and nodeSelector]

Maintaining consistency is a key factor in governing your Kubernetes clusters. Because of this, KubeAdvisor ships with node label policies out of the box. In anticipation of growth, we want to ensure your nodes are properly labeled so when it comes time to carve out resources for your cluster, or perform any other type of node organization, we have you covered.


Conclusion

More workloads will require more resources. Whether you’re scaling your nodes vertically or horizontally, organizing your nodes soon becomes a necessity. Keeping your nodes consistent is an ongoing process and may not be top of mind when situations call for rapidly expanding your server count. Although our example leverages the node.kubernetes.io/instance-type label, our policies also cover a node’s architecture type, operating system, role, and zone. Additionally, you can extend your governance reach by creating your own set of label policies to meet your specific standards. Check out KubeAdvisor today to see our latest policies.
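
If you want to see where your nodes stand on those labels today, a quick check uses the current well-known label keys (older clusters may still carry the legacy failure-domain zone key instead):

```bash
# Print each node with its architecture, OS, instance-type, and zone labels
# rendered as extra columns.
kubectl get nodes \
  -L kubernetes.io/arch,kubernetes.io/os,node.kubernetes.io/instance-type,topology.kubernetes.io/zone
```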

Get Started With KubeAdvisor

If you haven’t stumbled upon our other articles, read up on our complementary article about how Owner labels help your teams support each other.
