At a glance, Kubernetes is a powerful solution that solves many problems. So, it’s no surprise that Kubernetes is dominating the container orchestration market.
But working with it isn’t always easy as things can quickly become complicated. The same applies to Kubernetes security. Kubernetes isn’t secure by default. There are several attack pathways, but there are also concrete tactics to secure your services and infrastructure.
According to the State of Kubernetes and Container Security report, 87% of organizations now manage at least some of their container workloads with Kubernetes. The same study found that 94% of organizations experienced a serious security incident in their container environments over the past year.
These include runtime security incidents (27%), significant vulnerabilities (24%), and misconfigurations (69%). If companies aren’t proactive, they risk data breaches, regulatory fines, and severe damage to brand value.
Each of these security issues maps to a phase of the container lifecycle: known vulnerabilities should be remediated during the build, misconfigurations addressed during deployment, and active threats detected and handled at runtime.
Kubernetes security is essential because threat actors relentlessly search for vulnerabilities and exploit them. For example, Tesla's Amazon Web Services cloud infrastructure fell victim to a far-reaching and well-hidden cryptojacking campaign. The attack came to light when researchers scanning the public internet for misconfigured and unsecured cloud servers found a Kubernetes administration console that was freely accessible over the internet and engaged in cryptomining.
The console wasn't password protected, and one of its pods (storage containers) even held login credentials for the wider cloud environment. That foothold allowed the attackers to deploy scripts and establish a cryptojacking operation.
Containers enable portability, greater speed, and access to microservices architectures. However, they can also create security blind spots that expand your attack surface.
As DevOps gained popularity for its agility, security was often an afterthought. When security isn't at the forefront of development projects, your risk exposure grows exponentially.
When you deploy more and more containers, it’s increasingly difficult to maintain visibility into your cloud-native infrastructure. As containers follow a distributed philosophy, it’s challenging (to say the least) to investigate potential vulnerabilities (like misconfigurations) in individual containers.
When companies fail to establish container governance policies, they increase their risk exposure through misused image registries. Enterprises must formulate robust Kubernetes governance policies to manage container images and keep their name out of the headlines.
Container images must be built and stored consistently in trusted image registries. It's critical that they are created from approved, hardened, and regularly scanned base images. Developers should only launch containers in a Kubernetes environment using images from the "allow" lists in these registries.
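One common way to enforce such an allow list (not prescribed by this article, though it fits the OPA policy-as-code approach mentioned later) is an OPA Gatekeeper constraint. The sketch below assumes the `K8sAllowedRepos` ConstraintTemplate from the Gatekeeper policy library is already installed; the registry prefix is a hypothetical example.

```yaml
# Hedged sketch: reject pods whose images are not pulled from the approved registry.
# Assumes the K8sAllowedRepos ConstraintTemplate (Gatekeeper library) is installed.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sAllowedRepos
metadata:
  name: allowed-image-repos
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
  parameters:
    repos:
      - "registry.example.com/"   # hypothetical trusted registry prefix
```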
Containers and pods talk to each other. They communicate with each other within deployments and with internal and external endpoints. It’s the only way to get them to function correctly.
If an insecure container is breached, threat actors can move laterally within the environment. How far they get depends on how broadly that container communicates with others.
This is an ongoing problem, as implementing network segmentation across a large container environment is prohibitively complicated. It's also challenging to configure such policies manually and keep them working.
Kubernetes defaults are usually the least secure. By default, Kubernetes doesn’t apply network policies to a pod. This means that all the pods can talk to each other in a Kubernetes environment, potentially enabling lateral movement during a security breach.
However, Kubernetes, by design, accelerates application deployment and simplifies management and operations. By leveraging a rich set of Kubernetes controls, you can effectively secure your clusters and applications.
Kubernetes network policies behave like a firewall that controls how pods communicate with other endpoints and each other. When a network policy governs a pod, it can only communicate with the assets defined in that network policy.
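As a minimal sketch (namespace and labels are hypothetical), the first policy below denies all ingress to every pod in a namespace, and the second then explicitly allows traffic to pods labeled `app: api` only from pods labeled `app: frontend`:

```yaml
# Default-deny ingress for every pod in the namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: demo                # hypothetical namespace
spec:
  podSelector: {}                # empty selector = all pods in the namespace
  policyTypes:
    - Ingress
---
# Explicitly allow frontend pods to reach api pods on port 8080 only.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```

With the default-deny policy in place, any communication path that matters has to be declared explicitly, which also documents how your services are supposed to talk to each other.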
To properly secure Kubernetes environments, you have to first “shift left” or weave in security protocols right from the first phase of development. This is a departure from previous development models where you build first and then think about securing it much later.
When Kubernetes is treated as immutable infrastructure, what’s running inside a container should never be changed or patched. Instead, it should be destroyed and recreated leveraging a standard template before deploying new updates.
The same applies during an active breach or when a potential threat is detected. For example, when a compromised container starts running malicious processes like cryptojacking and cryptomining, destroy it and recreate it. However, you have to ensure that the information used to build a new container image remedies the root cause of the problem.
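A small, hedged illustration of that immutability principle: rather than patching a running container, declare its filesystem read-only and redeploy a rebuilt image whenever something needs to change (the image name and tag below are placeholders).

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: registry.example.com/web:1.4.2   # placeholder; rebuild and redeploy to change anything
      securityContext:
        readOnlyRootFilesystem: true          # nothing inside the container can be modified in place
        allowPrivilegeEscalation: false
```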
It's best to approach cloud-native development (and security) in four distinct phases that constitute the application lifecycle: "Develop," "Distribute," "Deploy," and "Runtime."
When you shift left, security is built into every phase of the container lifecycle:
Cloud-native tools help introduce security early in the application lifecycle. Security testing lets you identify and respond to compliance violations and misconfigurations early, surfacing security failures through the same familiar workflows used for other pipeline issues.
Software supply chain safety is vital in development models that enable faster software iteration. Cloud-native application lifecycles demand methods for verifying the workload's integrity, the process for workload creation, and the means of operation.
This phase becomes more complicated whenever you use open source software, third-party runtime images, layers, and upstream dependencies. In this scenario, artifacts or container images need continuous automated scanning and updates to mitigate risk.
Integrating security across the development and distribution phases enables continuous validation of candidate workload attributes, secure workload observability, and real-time logging of available metrics.
Cloud-native environments must implement policy enforcement and resource-restriction capabilities by design. For example, runtime resource constraints for workloads (such as Linux kernel cgroup isolation) often restrict visibility when they are applied only at higher levels of the application lifecycle. To overcome this challenge, break the cloud-native environment down into small layers of interrelated components.
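As one hedged example of resource restriction by design (the namespace name and numbers are illustrative), a LimitRange can force every container in a namespace to run under CPU and memory limits even when a workload does not declare its own:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: container-defaults
  namespace: demo                 # illustrative namespace
spec:
  limits:
    - type: Container
      defaultRequest:             # applied when a container declares no requests
        cpu: 100m
        memory: 128Mi
      default:                    # applied when a container declares no limits
        cpu: 500m
        memory: 256Mi
```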
No security tool or plugin will secure your application deployments forever. Standard hygiene like patching and two-factor authentication remains critical.
Secure your container images starting from the development phase. The effort you make now will pay off later. This approach complements the “shift left” model, which demands security implementation during the early stages.
It's crucial to build secure images and to scan them for known vulnerabilities. Prefer minimal base images, such as distroless images, and avoid adding unnecessary components like debugging tools.
Other Kubernetes security best practices to follow during the build phase:
Development models that enable faster software iteration also demand software supply chain safety. That means verifying the integrity of workloads and of the processes used to create and operate them.
This becomes increasingly challenging when you rely heavily on open-source software and on third-party runtime images, layers, and upstream dependencies. Container images moving through the pipeline must be scanned and updated continuously and automatically to remain secure.
Once these security checks are complete, it’s critical to cryptographically sign artifacts to enforce non-repudiation and ensure integrity.
Other Kubernetes security best practices to follow during the distribution phase:
Before deploying your Kubernetes infrastructure, make sure it's properly configured and secured. It's also vital to maintain visibility into what you're implementing and how you're doing it, so you can quickly identify and remediate potential security violations.
You have to know what’s deployed, what’s going to be deployed, how it’s deployed, what it can access, and its compliance posture. This information provides opportunities to target areas that demand remediation.
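A hedged sketch of what that looks like in practice (all names and values are illustrative): a Deployment that carries ownership labels, declares resource limits, and pins a non-root, read-only security context, so its compliance posture is visible before it ever reaches the cluster.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments-api              # illustrative workload name
  labels:
    team: payments                # ownership metadata for auditing
spec:
  replicas: 2
  selector:
    matchLabels:
      app: payments-api
  template:
    metadata:
      labels:
        app: payments-api
    spec:
      containers:
        - name: payments-api
          image: registry.example.com/payments-api:1.4.2   # illustrative trusted registry
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 256Mi
          securityContext:
            runAsNonRoot: true
            readOnlyRootFilesystem: true
```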
Other Kubernetes security best practices to follow during the deployment phase:
The runtime phase brings with it a new set of security challenges. The primary objective here is to gain visibility into the running environment and effectively detect and respond to potential threats.
Being proactive about security from the beginning dramatically minimizes the likelihood of security incidents at runtime during Kubernetes deployments.
To start, monitor security-relevant container activity: process activity inside containers, network communication between containerized services, and traffic between containers and external clients or servers.
Through this observation, you're well-placed to identify anomalies in the system. Detecting anomalies is much easier in containers than in VMs because Kubernetes and containers are declarative.
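One native way to gain some of that runtime visibility (an assumption on my part, not a recommendation from this article) is the Kubernetes audit log. The sketch below records metadata whenever someone execs into or attaches to a pod, and full request/response bodies for access to Secrets:

```yaml
# Kubernetes API server audit policy (passed via --audit-policy-file).
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  - level: Metadata               # who ran what, and when
    resources:
      - group: ""
        resources: ["pods/exec", "pods/attach"]
  - level: RequestResponse        # full detail for sensitive objects
    resources:
      - group: ""
        resources: ["secrets"]
```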
Other Kubernetes security best practices to follow during the runtime phase:
Beyond images and workloads, you also have to take steps to protect the entire environment like your cluster infrastructure, nodes, and container engine. This is achieved by securely configuring the Kubernetes API server, regularly updating to the latest version of Kubernetes, and securing both etcd and kubelet.
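For example, a hardened kubelet configuration (a hedged sketch of commonly recommended settings, not an exhaustive baseline) disables anonymous access, delegates authorization to the API server, and closes the read-only port:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false        # reject unauthenticated requests to the kubelet API
  webhook:
    enabled: true         # authenticate callers against the API server
authorization:
  mode: Webhook           # delegate authorization decisions to the API server
readOnlyPort: 0           # disable the unauthenticated read-only port
```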
With the growing adoption of Kubernetes, more work is required to ensure pod and cluster security. However, Kubernetes helps us weave security into everything and make microservices work while improving security through segmentation.
Once you tap into the power of Kubernetes with robust governance policies and transparent error messaging, shifting left becomes a natural part of how your teams work.
Magalix helps security teams shift left and define, manage, and deploy governance policies with a robust OPA policy execution engine, following Kubernetes’ best practices. To learn more, schedule a commitment-free consultation.