

Security Context Settings Help Mitigate Kubernetes Risk

Policy as Code · Shifting Security Left · Security as Code

Properly securing containers, pods and running workloads in Kubernetes is challenging. However, it's a critical part of protecting your environment as the consequences can be far-reaching.

Because containers and pods are individual units of compute, they are frequent targets in attacks on Kubernetes clusters. However, Kubernetes offers some key native capabilities to protect your pods.

Pods are the smallest deployable unit managed in Kubernetes. By enforcing security at this level, DevOps teams gain fine-grained control over individual application components.

Kubernetes workloads have several settings that directly or indirectly affect security. Implementing them correctly therefore requires significant knowledge, and a single misconfiguration can compromise the whole application. That's where the security context comes in.

What are Kubernetes Security Context Settings?

Security context settings allow you to harden every container and pod and avert potential security incidents. They complement other Kubernetes security tooling such as Open Policy Agent (OPA) Gatekeeper and Pod Security Policies. Even so, securing containers and pods isn't easy because of the knowledge gap involved.

"Security context" basically indicates specific constraints when it comes to access and permissions at an individual pod level that's configured at runtime. Such settings include a wide range of configurations like the following:

  • System-level capabilities
  • Read-only container root filesystem (or not)
  • Run privileged (or not)
  • Access control based on UID and GID

Getting these settings and configurations right is the first step to fortifying your environment. As such, we should tread carefully when implementing each setting.
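As a sketch of how the settings listed above look in practice, the container-level securityContext below sets each of them (the field names are real Kubernetes API fields; the pod name and UID/GID values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-pod              # hypothetical name
spec:
  containers:
  - name: app
    image: nginx:1.14.2
    securityContext:
      capabilities:               # system-level capabilities
        drop: ["ALL"]
        add: ["NET_BIND_SERVICE"]
      readOnlyRootFilesystem: true  # read-only container root filesystem
      privileged: false             # do not run privileged
      runAsUser: 1000               # access control based on UID
      runAsGroup: 3000              # ...and GID
```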

Pod-Level Security Context

The primary objective here is to mitigate risk by limiting a pod's potential to be compromised in an attack. In the same vein, we also try to limit the blast radius of an attack spreading beyond a set of containers.

As such, you specify the settings for each pod in the securityContext field of the pod manifest. The relevant security attributes are stored in the PodSecurityContext object in the Kubernetes API.

A pod-level security context also applies to the pod's volumes as they are mounted: whenever applicable, their group ownership is changed to match the fsGroup specified in the security context.
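For instance, in the hypothetical pod below, fsGroup causes files in the mounted volume to be group-owned by GID 2000 as the volume is mounted:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-demo       # hypothetical name
spec:
  securityContext:
    fsGroup: 2000          # volume files become group-owned by GID 2000
  containers:
  - name: app
    image: busybox:1.35
    command: ["sh", "-c", "ls -l /data && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    emptyDir: {}
```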

Container-Level Security Context

A pod-level security context applies its constraints to every container that runs within the pod. If you don't want the same settings used on all containers in a given pod, you can override them with a container-level security context.

 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

An example of an Nginx deployment manifest with no securityContext set.

To achieve this, the container manifest must include the securityContext field. Field values of container.securityContext take precedence over field values of PodSecurityContext: individual container constraints override those specified for the pod whenever there's a conflict or overlap.

 
package magalix.advisor.podSecurity.securityContext
 
violation[result] {
  not controller_spec.securityContext
  result = {
    "issue detected": true,
    "msg": "The securityContext for your spec has not been set."
  }
}
 
violation[result] {
  containers := controller_spec.containers[_]
  not containers.securityContext
  result = {
    "issue detected": true,
    "msg": "The securityContext for your containers have not been set."
  }
}
# controller_spec extracts the pod template spec for each supported kind
controller_spec = input.spec.template.spec {
  contains_kind(input.kind, {"StatefulSet", "DaemonSet", "Deployment", "Job"})
} else = input.spec {
  input.kind == "Pod"
} else = input.spec.jobTemplate.spec.template.spec {
  input.kind == "CronJob"
}
 
contains_kind(kind, kinds) {
  kinds[_] = kind
}	

A Rego policy that reports a violation when evaluated against the Nginx example above.

The exception is the settings that apply only to the pod's volumes, such as fsGroup: these exist solely at the pod level, so a container security context can never override them.
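To illustrate the precedence rule, in the hypothetical manifest below the pod-level runAsUser applies to every container except the one that overrides it:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: precedence-demo    # hypothetical name
spec:
  securityContext:
    runAsUser: 1000        # pod-level default for all containers
  containers:
  - name: uses-pod-default
    image: busybox:1.35
    command: ["sleep", "3600"]   # runs as UID 1000
  - name: overrides-default
    image: busybox:1.35
    command: ["sleep", "3600"]
    securityContext:
      runAsUser: 2000      # container-level setting takes precedence
```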

Container vs. Pod Settings

Security context settings in Kubernetes are defined in both the Container and PodSpec APIs. Below, the scope of each setting is designated by a [C] and/or [P] annotation. Whenever a setting is available and configured in both scopes, the container-level setting takes precedence.

Security context settings are extensive, and some of the most popular ones include:

1- RunAsUser/RunAsGroup [P/C]

Container images are often configured so that their processes run as a specific user and/or group; the runAsUser and runAsGroup settings override these at runtime. They are often configured in conjunction with volume mounts containing files that have matching ownership IDs.

However, such settings can be risky, as you're making runtime decisions for the container that may be incompatible with the original container image. For example, if you configure a user that doesn't exist in the original image, the workload will fail at runtime.

While it's undoubtedly a great idea not to run container processes as the root user, there's no way to guarantee it with the runAsUser or runAsGroup settings alone, since they can be removed later. To get that guarantee, also set runAsNonRoot to true.

2- RunAsNonRoot [P/C]

Although containers use cgroups and namespaces to constrain their processes, one misconfiguration in the deployment settings can enable access to resources on the host. If a process runs as root, it has the same access to those resources as the host root account.

If other containers or pods run with minimal constraints, the presence of a root UID heightens the risk of exploitation. So, unless there's a valid reason for it, never run a container as root.
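A minimal sketch of this setting (the pod name and UID are illustrative): with runAsNonRoot set to true, the kubelet refuses to start any container that would run as UID 0.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nonroot-demo       # hypothetical name
spec:
  containers:
  - name: app
    image: nginx:1.14.2
    securityContext:
      runAsNonRoot: true   # kubelet rejects the container if it would run as UID 0
      runAsUser: 101       # explicit non-root UID (must exist in the image)
```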

3- ProcMount [C]

To avert potential security issues, container runtimes mask some parts of the proc filesystem by default. But access to those parts is sometimes required, especially when nested containers form part of the in-cluster build.

In this scenario, you have two options: remove all masking from the proc filesystem, or keep the default setting with the standard container runtime behavior. Anyone changing this setting must know what they're doing and should only do it when dealing with nested containers.
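The field for this is procMount in the container-level security context; its two values are Default and Unmasked. A hedged sketch (the pod and image names are hypothetical, and non-default values may require the ProcMountType feature gate in your cluster version):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nested-build          # hypothetical name
spec:
  containers:
  - name: builder
    image: builder:latest     # hypothetical image for nested container builds
    securityContext:
      procMount: Unmasked     # removes default masking of /proc; use with care
```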

Pod Security Policy

Pod Security Policy (PSP) is a cluster-level resource used to control a pod's behavior, dictating what pods can and can't do by defining pod-level standards. PSPs are enforced by an admission controller, which verifies each pod against the policy before it is created in a namespace, making them a natural way to implement least-privilege access.

The key difference is that a PSP is a cluster-level resource enforced before pod definitions ever reach a controller, whereas the security context is defined and applied at the container or pod level at runtime.
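As a sketch, a minimal PSP (policy/v1beta1 API; the policy name is hypothetical) that forbids privileged pods and root users might look like:

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted            # hypothetical name
spec:
  privileged: false           # reject pods that request privileged mode
  runAsUser:
    rule: MustRunAsNonRoot    # reject pods that would run as root
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:                    # restrict allowed volume types
  - configMap
  - secret
  - emptyDir
```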

Shift Security Left and Enforce Policy-As-Code

DevOps teams can add a further layer of security by integrating policy-as-code. Enforcing security programmatically creates developer-centric experiences during the deployment of cloud-native applications.

By leveraging automated operators, you can also continuously monitor repositories for changes in your Kubernetes cluster or cloud infrastructure. An automatic update is triggered whenever a change is detected.

Shifting security left helps DevOps teams normalize hybrid environments. It's an approach that allows DevOps teams to achieve exceptional governance levels in all clusters from a single source of truth. It also helps maintain a robust security posture while accelerating innovation.

At Magalix, we help DevOps teams shift security left and implement robust workflows and playbooks that ensure security and compliance. 

Request A Commitment-Free Consultation
