Node Policies Advisor

Nodes

In Kubernetes, you can think of a node as a server with available compute resources; nodes are where pods are scheduled so your workloads can run. Depending on your environment, you might have a one-node cluster, or a cluster of 5,000 nodes! Labeling nodes is a good way of identifying them at scale and of targeting them for specific workloads.

As your node count grows, especially if you have auto-scaling turned on, making sure your nodes are properly labeled may not be top of mind. Following best practices, our Node Advisor policies are enabled by default to inform you when nodes are not labeled appropriately. Consistency across your node pool is a must, and in preparation for scale it's important to put a process in place that labels your nodes consistently. Coupled with affinity, anti-affinity, and/or nodeSelector, Kubernetes gives you the ability to customize where your workloads run.
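
As a quick illustration (the node name and values below are hypothetical, and several of these labels are normally populated by the kubelet or your cloud provider), a node carrying the labels discussed in this advisor might look like this:

    apiVersion: v1
    kind: Node
    metadata:
      name: example-node                             # hypothetical node name
      labels:
        kubernetes.io/arch: amd64                    # CPU architecture
        kubernetes.io/os: linux                      # operating system
        kubernetes.io/hostname: example-node         # node hostname
        node.kubernetes.io/instance-type: m5.large   # cloud instance type
        topology.kubernetes.io/zone: us-east-1a      # failure domain / zone
        kubernetes.io/role: node                     # role within the cluster

Labels can also be added or changed imperatively, for example with kubectl label nodes <node-name> <label>=<value>.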

Policies Included

Missing Node Label Kubernetes IO Arch

kubernetes.io/arch is a recommended label from the Kubernetes documentation. It is useful when you need to identify whether Kubernetes nodes are ARM- or x86-based. If you are running a mixture of workloads that require a certain architecture, this is the label to set. Typically, you would use this label with nodeSelector so your pods know exactly where to get scheduled.
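
For instance, a minimal sketch of a pod that should only be scheduled onto ARM64 nodes could pair this label with nodeSelector (the image name is just a placeholder):

    apiVersion: v1
    kind: Pod
    metadata:
      name: arm-workload
    spec:
      containers:
        - name: app
          image: registry.example.com/app:latest   # placeholder image
      nodeSelector:
        # Only schedule onto nodes whose CPU architecture is arm64
        kubernetes.io/arch: arm64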


This policy ensures that you have set the kubernetes.io/arch label on your nodes.

Missing Node Label Kubernetes IO Hostname

A recommended label from the Kubernetes documentation, this label identifies the hostname of your node. It is useful in situations where you need knowledge of your Kubernetes topology. For example, when applying affinity rules you can use the hostname as the topology key to guarantee that related workloads are always placed on the same node; the opposite applies for cases that require anti-affinity.
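
As a sketch of the anti-affinity case, kubernetes.io/hostname can serve as the topologyKey so that replicas carrying a (hypothetical) app=web label never share the same node:

    apiVersion: v1
    kind: Pod
    metadata:
      name: web-replica
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: web
              # At most one app=web pod per hostname, i.e. per node
              topologyKey: kubernetes.io/hostname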


This policy ensures that you have set the kubernetes.io/hostname label on your nodes. 

Missing Node Label Kubernetes IO Instance Type 

A recommended label from the Kubernetes documentation, its main purpose is to record the type of instance the node is running on. This is especially useful if you are using a cloud provider and want to carve out specific workloads for a particular type of server. In most cases, though, you would want to rely on the Kubernetes scheduler to place pods based on resource requirements rather than particular instance types; for example, requesting a GPU resource instead of targeting a G-series node.
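
If you do need to pin a workload to an instance type, a minimal sketch looks like the following (the instance type is a hypothetical cloud value; in most cases you would instead request the resource itself and let the scheduler decide):

    apiVersion: v1
    kind: Pod
    metadata:
      name: compute-heavy
    spec:
      containers:
        - name: app
          image: registry.example.com/app:latest   # placeholder image
      nodeSelector:
        # Pin this pod to a specific (hypothetical) cloud instance type
        node.kubernetes.io/instance-type: m5.large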


This policy ensures that you have set the node.kubernetes.io/instance-type label on your nodes.

Missing Node Label Kubernetes IO OS

A recommended label from the Kubernetes documentation, this label identifies the operating system of your node. It is especially useful in an environment that mixes Linux and Windows. Windows containers can only run on Windows nodes, so assigning the appropriate labels to the right nodes and using nodeSelector will ensure Windows pods are always assigned to Windows nodes. The same rule applies to Linux containers and Linux nodes.
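
A minimal sketch of a Windows pod that uses this label with nodeSelector might look like this (the container image is only an example):

    apiVersion: v1
    kind: Pod
    metadata:
      name: windows-app
    spec:
      containers:
        - name: iis
          image: mcr.microsoft.com/windows/servercore/iis   # example Windows image
      nodeSelector:
        # Ensure this Windows container only lands on Windows nodes
        kubernetes.io/os: windows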

In other cases, such as when using community-based deployments (e.g. Helm charts), it might make sense to also add taints to your nodes and matching tolerations to your workloads, so that unlabeled workloads don't mistakenly end up on the wrong type of node, as in the sketch below.
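
One way to sketch that out, assuming a hypothetical node name and taint key, is to taint the Windows nodes and give only the workloads that belong there a matching toleration:

    # Taint the node so that only pods which tolerate it can be scheduled there:
    #   kubectl taint nodes win-node-1 os=windows:NoSchedule
    #
    # Pods meant for those nodes then carry a matching toleration in their spec:
    tolerations:
      - key: "os"
        operator: "Equal"
        value: "windows"
        effect: "NoSchedule"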


This policy ensures that you have set the kubernetes.io/os label on your nodes.

Missing Node Label Kubernetes IO Role

A use case for this label is to identify which nodes in your cluster are regular worker nodes and which are master nodes. Coupled with taints and tolerations, you can leverage this label so you aren't scheduling regular workloads onto your master nodes.

This is important because scheduling regular workloads on the same node as a master without the proper safeguards can put your master nodes in jeopardy. The Kubernetes master node(s) are essential to a properly running Kubernetes cluster and should be dedicated solely to the functions of the master.
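
As a sketch of that safeguard (kubeadm-based clusters apply something similar out of the box), the master node can be tainted so that only pods which explicitly tolerate the taint are allowed there:

    # Keep regular workloads off the master node (hypothetical node name):
    #   kubectl taint nodes master-1 node-role.kubernetes.io/master=:NoSchedule
    #
    # Control-plane pods that genuinely must run there add a toleration to their spec:
    tolerations:
      - key: "node-role.kubernetes.io/master"
        operator: "Exists"
        effect: "NoSchedule"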


This policy ensures that you have set the kubernetes.io/role label on your nodes.

Missing Node Label Topology Kubernetes IO Zone

A well-known label according to the official Kubernetes documentation, this label is used to identify where the nodes in your cluster are physically located. It makes the most sense if you are using a cloud provider, but Kubernetes recommends setting it even if you aren't in the cloud. If you run your own data centers or use any kind of failure domains, such as separate racks with independent power and network gear, this label can help you avoid single points of physical failure.

An example of using this label would be to rely on the scheduler's spreading behavior (such as SelectorSpreadPriority) so pods are spread across different zones, increasing your resiliency in case of a complete zone outage.
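
Newer Kubernetes versions also expose this spreading behavior per workload via topologySpreadConstraints; a minimal sketch using this zone label (with a hypothetical app=web label) could look like this:

    apiVersion: v1
    kind: Pod
    metadata:
      name: web
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx
      topologySpreadConstraints:
        - maxSkew: 1
          # Spread app=web pods as evenly as possible across zones
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: web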


This policy ensures that you have set the topology.kubernetes.io/zone label on your nodes. 

Outdated Kernel Version Policy

Modern Linux kernel versions have proven their stability, but every so often a security vulnerability is discovered. In today's IT climate, security and privacy can no longer be afterthoughts. Depending on how you provision and manage your nodes, you may never need to log into the server, and if you do, chances are checking the kernel version won't be the reason why.

Our Outdated Kernel Version Policy checks each of your nodes to verify you are running the latest Linux kernel version. We currently support the following longterm maintenance and stable kernel series (the sketch after this list shows how to check the kernel version a node reports):

  • 4.4.x
  • 4.9.x
  • 4.14.x
  • 4.19.x
  • 5.4.x
  • 5.8.x
  • 5.9.x
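
For reference, the kernel version this policy inspects is the one each node reports in its status; a trimmed, illustrative Node object (names and values are hypothetical) looks like this:

    # kubectl describe node <node-name> also shows this under "System Info"
    apiVersion: v1
    kind: Node
    metadata:
      name: worker-1                          # hypothetical node name
    status:
      nodeInfo:
        kernelVersion: 5.4.0-42-generic       # reported by the kubelet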

Connect Magalix to your clusters to check whether their K8s objects violate or comply with this advisor's policies. Start your 30-day free trial by clicking here.