

Shift Left to Ensure Robust Kubernetes Security

DevOps Kubernetes Shifting Security Left DevSecOps

At a glance, Kubernetes is a powerful solution that solves many problems. So, it’s no surprise that Kubernetes is dominating the container orchestration market.

But working with it isn’t always easy, and things can quickly become complicated. The same applies to Kubernetes security: Kubernetes isn’t secure by default. There are several attack pathways, but there are also concrete tactics for securing your services and infrastructure.

According to the State of Kubernetes and Container Security report, as much as 87% of organizations now manage some part of their container workloads using Kubernetes. The same study found that 94% of organizations also experienced a serious security incident in their container environment over the last year.

These include runtime security incidents (27%), significant vulnerabilities (24%), and misconfigurations (69%). If companies aren’t proactive, they risk data breaches, regulatory fines, and severe damage to brand value.

Each of these security issues corresponds to a container lifecycle phase: known vulnerabilities must be remediated during the build, misconfigurations must be managed during deployment, and developers must respond to active threats at runtime.

Kubernetes Security Incident

Kubernetes security is essential, as threat actors relentlessly search for vulnerabilities to exploit. For example, Tesla's Amazon Web Services cloud infrastructure fell victim to a far-reaching and well-hidden cryptojacking campaign. The attack came to light when researchers scanning the public internet for misconfigured and unsecured cloud servers discovered it.

Upon further investigation, they found a Kubernetes administration console that was freely accessible over the internet and engaged in cryptomining. The console wasn’t password protected, and the attackers even found login credentials for the cloud environment in one of the console’s pods, or storage containers. This allowed the threat actors to deploy scripts and establish a cryptojacking operation.

Container Security Risks and Challenges

Containers enable portability, greater speed, and access to microservices architectures. However, they can also create security blind spots that dramatically expand your attack surface.

As DevOps grew popular for its agility, security was often an afterthought. When security isn’t at the forefront of development projects, your risk exposure grows.

Poor Visibility

When you deploy more and more containers, it’s increasingly difficult to maintain visibility into your cloud-native infrastructure. As containers follow a distributed philosophy, it’s challenging (to say the least) to investigate potential vulnerabilities (like misconfigurations) in individual containers.

Misused Image Registries

When companies fail to establish container governance policies, they increase their risk exposure through misused image registries. Enterprises must formulate robust Kubernetes governance policies to manage container images and keep their name out of the headlines.

Container images must be built consistently and stored in trusted image registries. It’s critical to ensure that images are built from approved, regularly scanned base images. Developers should only launch containers in a Kubernetes environment from images on “allow” lists in these registries.

Unsecured Endpoints

Containers and pods must talk to each other: within deployments, and with internal and external endpoints. This communication is essential for them to function correctly.

When an insecure container is breached, threat actors can move laterally within the environment. How far they get depends on how broadly that container communicates with others.

This is an ongoing problem, as implementing network segmentation in an extensive container environment is complicated and often prohibitively expensive. It’s also challenging to configure such policies manually and make them work.




Shift Left to Ensure Kubernetes Security

Kubernetes defaults are rarely the most secure option. By default, Kubernetes doesn’t apply any network policy to a pod. This means that all pods in a Kubernetes environment can talk to each other, potentially enabling lateral movement during a security breach.

However, Kubernetes, by design, accelerates application deployment and simplifies management and operations. By leveraging a rich set of Kubernetes controls, you can effectively secure your clusters and applications.

Kubernetes network policies behave like a firewall that controls how pods communicate with other endpoints and each other. When a network policy governs a pod, it can only communicate with the assets defined in that network policy.
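As a minimal sketch, a NetworkPolicy that only permits traffic from frontend pods to backend pods might look like the following. The namespace, labels, and port are illustrative assumptions, not values from any real deployment:

```yaml
# Illustrative only: namespace, labels, and port are placeholder assumptions.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: backend          # pods this policy governs
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend # only frontend pods may connect
      ports:
        - protocol: TCP
          port: 8080
```

Once this policy is applied, pods labeled `app: backend` accept ingress only from `app: frontend` pods on TCP port 8080; all other ingress to those pods is denied.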

To properly secure Kubernetes environments, you have to first “shift left” or weave in security protocols right from the first phase of development. This is a departure from previous development models where you build first and then think about securing it much later.

When Kubernetes is treated as immutable infrastructure, what’s running inside a container should never be changed or patched. Instead, it should be destroyed and recreated leveraging a standard template before deploying new updates.

The same applies during an active breach or when a potential threat is detected. For example, when a compromised container starts running malicious processes like cryptojacking and cryptomining, destroy it and recreate it. However, you have to ensure that the information used to build a new container image remedies the root cause of the problem.

It's best to approach cloud-native development (and security) in four distinct phases that constitute the application lifecycle: "Develop," "Distribute," "Deploy," and "Runtime."

When you shift left, security is built into every phase of the container lifecycle:

1- Develop

Cloud-native tools help introduce security early in the application lifecycle. By engaging in security testing, you can identify and respond to compliance violations and misconfigurations early. This approach surfaces security failures in the same familiar workflows used for other pipeline issues.

2- Distribute

Software supply chain safety is vital in development models that enable faster software iteration. Cloud-native application lifecycles demand methods for verifying the workload's integrity, the process for workload creation, and the means of operation.

This phase becomes more complicated whenever you use open source software, third-party runtime images, layers, and upstream dependencies. In this scenario, artifacts or container images need continuous automated scanning and updates to mitigate risk.

3- Deploy

Integrating security across the development and distribution phases enables continuous validation of candidate workload attributes, secure workload observability, and real-time logging of available metrics.

4- Runtime

Cloud-native environments must implement policy enforcement and resource-restricting capabilities by design. For example, runtime resource constraints (like Linux kernel cgroup isolation) for workloads often restrict visibility when integrated into higher application lifecycle levels in a cloud-native environment. To overcome this challenge, you have to break down the cloud-native environment into small layers of interrelated components.
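To illustrate runtime resource constraints, a pod spec can declare CPU and memory requests and limits that Kubernetes enforces through Linux cgroups. The names, image, and values below are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: constrained-app               # placeholder name
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0  # placeholder image
      resources:
        requests:
          cpu: "250m"                 # scheduler reserves a quarter CPU
          memory: "128Mi"
        limits:
          cpu: "500m"                 # enforced as a cgroup CPU quota
          memory: "256Mi"             # container is OOM-killed above this
```

Requests guide scheduling, while limits cap what the container can consume at runtime, containing the blast radius of a misbehaving or compromised workload.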

There’s no security tool or plugin that’ll eternally secure your application deployments. Practicing standard hygiene like patching and two-factor authentication remains critical.

Kubernetes Security Best Practices

The Development Phase

Secure your container images starting from the development phase. The effort you make now will pay off later. This approach complements the “shift left” model, which demands security implementation during the early stages.

It’s crucial to build secure images and to scan those images for known vulnerabilities. In this scenario, it’s best to only use minimal base images like distroless images. It’s also a good idea to avoid adding unnecessary components like debugging tools.

Other Kubernetes security best practices to follow during the build phase:

  • Always use up-to-date base images
  • Always use an image scanner to identify known vulnerabilities within images
  • Integrate strict security protocols into your CI/CD pipeline (and automate security)
  • Label non-fixable vulnerabilities (when they aren’t critical) so they can be triaged and fixed later
  • Apply defense-in-depth (with policy checks and a remediation workflow in place)
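As one way to automate image scanning in the pipeline, here is a hedged GitHub Actions sketch using the Trivy scanner. The image name and workflow details are assumptions, and other scanners can be wired in similarly:

```yaml
# Sketch only: image name and tag scheme are placeholder assumptions.
name: image-scan
on: [push]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t myapp:${{ github.sha }} .
      - name: Scan for known vulnerabilities
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: myapp:${{ github.sha }}
          severity: CRITICAL,HIGH
          exit-code: "1"   # fail the pipeline on serious findings
```

Setting a non-zero exit code on critical findings is what makes the check a gate rather than a report: the build fails before a vulnerable image can reach a registry.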

The Distribute Phase

To enable faster software iteration, ensure software supply chain safety. This means including methods to verify the integrity of workloads and of the processes used to create and operate them.

This becomes increasingly challenging when using a lot of open-source software and third-party runtime images, layers, and upstream dependencies. Container images in the lifecycle pipeline must be scanned and updated continuously and automatically to ensure security.

Once these security checks are complete, it’s critical to cryptographically sign artifacts to enforce non-repudiation and ensure integrity.

Other Kubernetes security best practices to follow during the distribute phase:

  • Enable observability and logging
  • Pre-deployment checks to identify excessive privileges

The Deployment Phase

Before deploying your Kubernetes infrastructure, make sure it’s properly configured and secured. It’s also vital to ensure visibility into what you’re deploying and how you’re doing it. This approach helps you quickly identify and rectify potential security violations.

You have to know what’s deployed, what’s going to be deployed, how it’s deployed, what it can access, and its compliance posture. This information provides opportunities to target areas that demand remediation.

Other Kubernetes security best practices to follow during the deployment phase:

  • Assess the privileges used by containers and grant only the minimum privileges and capabilities needed to perform a function
  • Always use annotations and labels correctly
  • Assess image origins, including registries (and never use images from unknown registries)
  • Control traffic between pods and clusters using Kubernetes network policies (and prevent lateral movement)
  • Deploy pod security policies
  • Engage in image scanning during the deployment phase
  • Prevent unnecessary exposure to sensitive data by preventing overly permissive access
  • Use namespaces to isolate sensitive workloads (and contain attacks or limit the impact of human error)
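Several of the items above can be expressed directly in a pod spec. The following is a hedged sketch of a least-privilege container (name, namespace, and image are placeholders) that drops all capabilities and forbids privilege escalation:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: least-privilege-app   # placeholder name
  namespace: restricted       # placeholder; use namespaces to isolate sensitive workloads
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0   # placeholder image from a trusted registry
      securityContext:
        runAsNonRoot: true
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]       # grant back only the capabilities the workload needs
```

If a container configured this way is compromised, the attacker lands in an unprivileged, read-only environment, which sharply limits what they can do next.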

The Runtime Phase

The runtime phase brings with it a new set of security challenges. The primary objective here is to gain visibility into the running environment and effectively detect and respond to potential threats.

Being proactive about security from the beginning dramatically minimizes the likelihood of security incidents at runtime during Kubernetes deployments.

To start, monitor security-relevant container activities like the following:

  • Process activity
  • Network communications between containerized services
  • Network communications among containerized services and external clients and servers

Through observation, you’re well-placed to identify anomalies in the system. It’s much easier to detect anomalies in containers than VMs as Kubernetes and containers are declarative.

Other Kubernetes security best practices to follow during the runtime phase:

  • Compare and analyze different runtime pod activities
  • Continuously monitor network traffic to limit unnecessary or insecure communication
  • Engage in vulnerability scanning while running deployments
  • Leverage Kubernetes built-in controls whenever possible to fortify security protocols
  • Monitor running deployments for newly discovered vulnerabilities and other known vulnerabilities
  • Use contextual information in Kubernetes
  • Use “allow” lists (with the help of an experienced security vendor)
  • Whenever there’s an active breach, scale suspicious pods to zero (using Kubernetes native controls)

Beyond images and workloads, you also have to take steps to protect the entire environment like your cluster infrastructure, nodes, and container engine. This is achieved by securely configuring the Kubernetes API server, regularly updating to the latest version of Kubernetes, and securing both etcd and kubelet.
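As a hedged sketch of node-level hardening, a KubeletConfiguration fragment can disable anonymous access and delegate authorization to the API server. The exact settings you need depend on your distribution and how kubelets are bootstrapped:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false   # reject unauthenticated requests to the kubelet API
  webhook:
    enabled: true    # authenticate callers via the API server
authorization:
  mode: Webhook      # authorize kubelet requests through SubjectAccessReview
readOnlyPort: 0      # disable the unauthenticated read-only port
```

Together these settings close off the kubelet endpoints that attackers have historically used to enumerate pods and execute commands on nodes.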

With the growing adoption of Kubernetes, more work is required to ensure pod and cluster security. However, Kubernetes helps us weave security into everything and make microservices work while improving security through segmentation.

Once you tap into the innate power of Kubernetes with robust governance policies and transparent error messaging, the shift-left philosophy follows naturally.

Magalix helps security teams shift left and define, manage, and deploy governance policies with a robust OPA policy execution engine, following Kubernetes’ best practices. To learn more, schedule a commitment-free consultation.
