In the age of containerized apps, modern applications leverage microservices packed with configurations and other related dependencies. In a Kubernetes cluster, the default scheduler (the kube-scheduler), part of the control plane, selects an optimal node for every newly created, unscheduled pod to run on.
But each container within a pod can have different resource requirements, which means existing nodes must be filtered according to those specific requirements. The default behavior of Kubernetes works well for most use cases; for example, it balances resource utilization across nodes. However, exerting more control over where a pod is scheduled requires more advanced features, including nodeSelector, node affinity, and more.
What is nodeSelector?
nodeSelector is the simplest way to constrain which nodes a pod can run on. As part of the PodSpec, nodeSelector specifies a map of key-value pairs. The idea is that a pod is only scheduled onto a node whose labels match the labels defined in nodeSelector.
The Magalix Rego Playground testing our built-in nodeSelector Template.
This means that, for a pod to be eligible to run on a node, the node must have each of the indicated key-value pairs as labels. You first specify the key-value pairs inside the pod spec; the node can also carry additional labels beyond those listed.
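As a minimal sketch of this (the node name, `disktype` label, and pod name below are hypothetical), you would label a node and then reference that label in the pod spec:

```yaml
# First label a node, e.g.: kubectl label nodes worker-1 disktype=ssd
apiVersion: v1
kind: Pod
metadata:
  name: ssd-pod
spec:
  containers:
    - name: app
      image: nginx
  nodeSelector:
    disktype: ssd   # pod is only scheduled onto nodes carrying this label
```

If no node carries the `disktype: ssd` label, the pod stays in the Pending state until one does.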
What is Node Affinity?
Node affinity is essentially a set of rules leveraged by the scheduler to determine where pods are placed. With it, you can, for example, configure a pod to run only on nodes in a specific availability zone or on nodes with a particular CPU type.
There are two different affinity features: node affinity and inter-pod affinity/anti-affinity. Both handle more complex scenarios than nodeSelector.
The Magalix Rego Playground testing one of our built-in nodeAffinity Templates.
As such, node affinity is more or less like nodeSelector, with the following enhancements:
- It allows a rule to be designated "preferred" or "soft" instead of a hard requirement. If the scheduler can't satisfy a soft rule, the pod is still scheduled; the rule is a preference, not a guaranteed constraint.
- The affinity/anti-affinity language is more expressive, offering additional matching rules beyond the exact matches combined with a logical AND that nodeSelector supports.
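The two rule types above map onto `requiredDuringSchedulingIgnoredDuringExecution` (hard) and `preferredDuringSchedulingIgnoredDuringExecution` (soft) in the pod spec. A minimal sketch, with hypothetical zone and label values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: affinity-pod
spec:
  containers:
    - name: app
      image: nginx
  affinity:
    nodeAffinity:
      # Hard rule: the pod will not schedule unless a node matches.
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: topology.kubernetes.io/zone
                operator: In
                values: ["us-east-1a", "us-east-1b"]
      # Soft rule: prefer SSD-labeled nodes, but fall back if none exist.
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 1
          preference:
            matchExpressions:
              - key: disktype
                operator: In
                values: ["ssd"]
```

Note the richer operators (`In`, `NotIn`, `Exists`, and so on) compared to nodeSelector's exact-match-only semantics.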
Inter-pod affinity and anti-affinity, in turn, constrain scheduling based on the labels of pods already running on a node (rather than the node's own labels). They share the two enhancements listed above, but let you attract pods to, or repel them from, other pods on the same node or within the same topology domain. In other words, using label selectors that match labels on pods, you can express affinity or anti-affinity toward whole groups of nodes. The node itself has no control over its placement.
Affinity rules are widely used, essentially providing resiliency and high availability to Kubernetes applications. These rules help avert undesirable situations that could compromise the stability of the nodes running within a cluster.
For example, these rules can ensure that replicas of the same application don't run on the same node. This is typically a soft requirement: if the scheduler can't match the rule, both replicas may still be placed on the same node.
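The soft replica-spreading rule described above can be sketched with pod anti-affinity in a Deployment (the `app: web` label and names are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx
      affinity:
        podAntiAffinity:
          # Soft rule: spread replicas across nodes, but still schedule
          # both onto one node if the scheduler has no alternative.
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                topologyKey: kubernetes.io/hostname
                labelSelector:
                  matchLabels:
                    app: web
```

Switching to `requiredDuringSchedulingIgnoredDuringExecution` would make the spread a hard constraint, at the cost of replicas staying Pending when too few nodes are available.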
What is Policy-as-Code?
Policy-as-Code (PaC) describes the codification of policies within your infrastructure setup. It's a process that helps operations teams verify and enforce specific standards and rules within individual clusters or across the whole organization.
Benefits of PaC include enhanced efficiency through the automation of common tasks, reduced variation in your infrastructure, and lower maintenance costs. PaC also ensures that misconfigurations don't leak into your production environment.
The enforced policies depend on established organizational guidelines, conventions, and industry best practices. In practice, a policy might check the environment variables applied to a container when its server or service starts, to determine whether the required type of actor is present.
However, this begs the question: how do you selectively enforce Open Policy Agent (OPA) governance protocols when OPA usually takes an all-or-nothing approach?
How Do You Selectively Enforce OPA?
When using affinity with nodeSelector, you can deploy the same full-stack code to all nodes while still routing work by role. For example, some groups of nodes are labeled as batch and others as UI. The batch nodes respond only to batch requests and services, while the UI nodes respond to both services and UI requests. Once the UI nodes detect batch nodes in the cluster, they stop performing batch and service work themselves. The affinity rules let clients route requests to these role-based nodes without hard-coded mappings or named instances.
Relying on the policy, a set of keys must exist at the container level before a server or service starts. When the container starts and doesn't find the keys, it stops the code from executing further. In some cases the same is true even when the keys are found: the validation code can check the security level required to execute the container and stop the service if it isn't met.
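One common way to make OPA enforcement selective is OPA Gatekeeper, whose constraints carry a `match` section scoping enforcement to specific kinds, namespaces, or label selectors. A sketch, assuming the Gatekeeper library's `K8sRequiredLabels` ConstraintTemplate is already installed (the constraint name and `team` label are hypothetical):

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: require-team-label
spec:
  match:
    # Selective enforcement: only Pods, and only in these namespaces.
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
    namespaces: ["production"]
  parameters:
    labels: ["team"]   # reject Pods in scope that lack a "team" label
```

Resources outside the `match` scope are admitted untouched, so the policy applies only where you want it rather than cluster-wide.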
PaC is crucial to programmatically enforce security standards, implement the right playbooks and workflows, and create compliance reporting and analysis. This approach helps create developer-centric experiences with continuous deployment for cloud-native applications.
By consistently enforcing best practices and established organizational conventions, you can also automate security and compliance into your CI/CD workflows. When you enforce PaC across the organization, you essentially apply and implement governance standards with a click.
This process is also repeated throughout the SDLC and helps everyone get on the same page when it comes to complex compliance and governance protocols.
At Magalix, we're in the business of empowering enterprises to define, manage, and deploy custom governance policies by leveraging PaC. We help implement the right playbooks, workflows, and more using a robust OPA policy execution engine.
To learn more, reach out to one of our in-house Kubernetes experts or sign up for a 30-day free trial.