
Human-Generated Errors Through Bad Configuration in Kubernetes

DevOps Kubernetes Configurations Management Governance

Human error is the most frequently cited cause of data breaches and hacks, and containers and Kubernetes have so many knobs and dials that the room for misconfiguration keeps growing. Getting every setting right is challenging and tedious, both for seasoned developers and for those still learning. Not surprisingly, misconfiguration caused by human error remains the most common, and still rising, source of security incidents in Kubernetes.

Misconfiguration by humans poses one of the greatest security risks to containers and Kubernetes. In today's DevOps-driven environment, configuration management must be as automated and streamlined as possible so that it does not slow down application development and deployment. It should be comprehensive, covering containers, Kubernetes, and all their configurable components, which include:

  1. RBAC.
  2. Network policies.
  3. Secrets.
  4. Privilege levels.
  5. Resource limits/requests.
  6. Read-only root file systems.
  7. Annotations, labels.
  8. Sensitive host mount and access.
  9. Image configuration, including provenance.
We will discuss the first two of these:
  • RBAC.
  • Network policies.

Enable Role-Based Access Control (RBAC)

RBAC is an important security control in Kubernetes for protecting clusters: it gives you fine-grained control over who can and cannot access specific API resources. Even so, organizations may have configured RBAC in a way that leaves the environment unintentionally exposed.

To achieve least privilege without leaving unintentional weaknesses, make sure you haven't made any of the following five configuration mistakes.

1- Cluster Administrator Role Granted Unnecessarily

The built-in cluster-admin role grants effectively unlimited access to the cluster. During the transition from the legacy ABAC controller to RBAC, some administrators and users mistakenly replicated ABAC's permissive configuration by granting cluster-admin widely, ignoring the warnings in the relevant documentation. If users or groups are routinely granted cluster-admin, account compromises or simple mistakes can have dangerously broad effects. Service accounts rarely need this type of access either. In both cases, a more tailored Role or ClusterRole should be created and granted only to the specific users that need it, not to all users.
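
As a minimal sketch of that approach (the role, binding, and group names below are illustrative, not from the original article), a narrowly scoped ClusterRole can be bound only to the team that needs it instead of handing out cluster-admin:

# Hypothetical example: a narrowly scoped ClusterRole instead of cluster-admin.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: deployment-reader
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch"]
---
# Bind the role only to the group that actually needs it.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: deployment-reader-binding
subjects:
- kind: Group
  name: release-team          # illustrative group name
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: deployment-reader
  apiGroup: rbac.authorization.k8s.io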

2- Improper Use of Role Aggregation

In Kubernetes 1.9 and later, role aggregation can be used to simplify privilege grants by allowing new privileges to be combined into existing roles. However, if these aggregations are not carefully reviewed, they can change the intended use of a role; for instance, the built-in view role could improperly aggregate rules with verbs other than read-only ones, violating the intention that subjects granted view can never modify the cluster.
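
To make the mechanism concrete, here is a minimal sketch (the role name and resource group are hypothetical): any ClusterRole labeled rbac.authorization.k8s.io/aggregate-to-view: "true" has its rules folded into the built-in view role, so a mislabeled role containing write verbs would silently let viewers modify the cluster.

# Hypothetical example: this ClusterRole is aggregated into the built-in "view" role.
# If its rules included verbs such as create or delete, "view" would no longer be read-only.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: view-widgets           # illustrative name
  labels:
    rbac.authorization.k8s.io/aggregate-to-view: "true"
rules:
- apiGroups: ["example.com"]   # illustrative CRD group
  resources: ["widgets"]
  verbs: ["get", "list", "watch"]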

3- Duplicated Role Grant

Role definitions can overlap with each other, giving users the same access in more than one way. Administrators sometimes intend this overlap, but it makes it harder to understand and evaluate which users are granted which access. It also makes revoking access harder, because the administrator may not realize that multiple role bindings grant the same privileges.
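
As a hypothetical illustration (the user and role names are invented), the two bindings below grant the same user read access to pods twice, once through a namespaced pod-reader Role and once through the built-in view ClusterRole; revoking only one of them leaves the access in place:

# Hypothetical example: overlapping grants for the same user.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: default
subjects:
- kind: User
  name: jane                   # illustrative user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader             # assumed Role allowing get/list on pods, defined elsewhere
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: view-binding
  namespace: default
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view                   # the built-in read-only role also allows listing pods
  apiGroup: rbac.authorization.k8s.io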

4- Unused Role

Roles that are created but not granted to any subject increase the complexity of RBAC management. Similarly, roles that are granted only to subjects that no longer exist (for example, users who have left the organization) make it harder for the administrator to see the configurations that actually matter. Removing these unused or inactive roles is typically safe and focuses attention on the active ones, which also helps you manage access efficiently.
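
One way to spot candidates for cleanup, shown here as a rough sketch, is to compare the roles that exist with the role names actually referenced by bindings; the commands below only print names and delete nothing:

# List all Roles, then list the role names referenced by RoleBindings,
# and compare the two lists to find roles that no binding uses.
$ kubectl get roles --all-namespaces
$ kubectl get rolebindings --all-namespaces -o jsonpath='{range .items[*]}{.roleRef.name}{"\n"}{end}' | sort -u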

5- Grant of Missing Roles

Role bindings can reference roles that do not exist. This becomes a misconfiguration issue if the same role name is later reused for a different purpose: these dormant role bindings can suddenly and unexpectedly grant privileges to users other than the ones the new role was created for.
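
The sketch below (the names are hypothetical) shows the shape of the problem: a RoleBinding whose roleRef points at a Role that does not exist. The binding is accepted by the API server and lies dormant until someone creates a Role with that name, at which point the listed subject immediately inherits its permissions.

# Hypothetical example: the referenced Role "report-generator" does not exist yet.
# If someone later creates a Role with this name for another purpose,
# the user below immediately inherits its permissions.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: report-generator-binding
  namespace: default
subjects:
- kind: User
  name: former-contractor      # illustrative user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: report-generator       # no Role with this name exists
  apiGroup: rbac.authorization.k8s.io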

Using RBAC is both recommended and important. Different Kubernetes distributions and platforms have enabled RBAC by default at different times, and newly upgraded older clusters may still not enforce RBAC because the legacy Attribute-Based Access Control (ABAC) controller is still active. If you're using a cloud provider, this setting is typically visible in the cloud console or through the provider's command-line tool. For instance, on Google Kubernetes Engine, you can check these settings on all of your clusters using gcloud:

 
$ gcloud container clusters list --format='table[box](name, legacyAbac.enabled)'
NAME        ENABLED
with-rbac
with-abac   True
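
If you are not using GKE, a quick provider-agnostic sanity check (a minimal sketch) is to confirm that the API server serves the RBAC API group:

# If this prints rbac.authorization.k8s.io/v1, the API server has RBAC enabled.
$ kubectl api-versions | grep rbac.authorization.k8s.io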

Once RBAC is enabled, the next step is to check that you haven't made any of the configuration mistakes described above.


Network Policies

With companies large and small rapidly adopting the platform, security has emerged as an important concern partly because of the learning curve inherent in understanding any new infrastructure, and partly because of recently announced vulnerabilities in the Kubernetes environment.

Kubernetes brings another security dynamic to the table: its defaults are geared towards making it easy for users to get up and running quickly, and towards remaining backward compatible with earlier releases of Kubernetes that lacked important security features.

Administrators have the option of using network policies to configure their Kubernetes deployment environments. As an example, organizations can use a network policy to control how pods communicate with one another by only allowing communication between the pods defined in that policy. You can think of network policies in Kubernetes as the equivalent of a firewall in non-cloud-based IT infrastructure.

By default, Kubernetes does not apply a network policy to a pod, meaning every pod can interact with every other pod in a Kubernetes environment without restriction.

This default configuration makes it easier for malicious actors or users to compromise a single pod and capitalize on that access to move laterally throughout the container environment. It is a critical human misconfiguration error that should be avoided.

A best-practice guide on how to set up network policies follows below. The network policy spec is intricate and can be difficult to understand and use correctly, so the guide focuses on recommendations that significantly improve security and that users can implement without needing to know the spec in detail.

Note: We will focus only on ingress network policies because this is where the biggest security gains come from; it is recommended that you focus on them first and add egress policies later. You can read about egress policies in detail using this link.

Network Policies allow you to control network access into and out of your containerized applications. The best practice is to make sure that you have a network plugin that supports this resource. Examples of network plugins include Romana, Weave Net, Calico, Cilium, and Kube-router.

Some managed providers install a Network Policy provider for you or install a Container Network Interface (CNI) that implements network policies in your cluster. If that's in place, you can start with some basic default network policies, such as blocking traffic from other namespaces by default. You can check whether your cluster has any policies in effect with the following kubectl command:

 
$ kubectl get networkpolicies --all-namespaces
No resources found.

 

Isolate Your Pods

Each network policy has a pod selector field, which selects a group of (zero or more) pods. When a pod is selected by a network policy, the network policy is said to apply to that pod.

Each network policy also specifies a list of allowed (ingress and egress) connections. When the network policy is created, all the pods that this policy applies to will be allowed to make or accept the connections listed in it. In other words, a network policy is essentially a list of allowed connections to or from pods. A connection to or from a pod is allowed if it is permitted by at least one of the network policies that apply to that pod.

This, however, leads to an important misconception. Based on everything discussed so far, you might assume that if no network policies apply to a pod, then no connections to or from it are permitted. The opposite is the case: if no network policies apply to a pod, then all network connections to and from it are permitted, unless a connection is forbidden by a network policy that applies to the other peer in the connection. This is one of the major human misconfiguration errors to keep in mind during configuration.

This behavior relates to the notion of "isolation": pods are "isolated" if at least one network policy applies to them; if no policies apply, they are "non-isolated".

Note that network policies are not enforced on non-isolated pods. Although somewhat counter-intuitive, this behavior exists to make it easier to get a cluster up and running: a user who does not understand network policies can still run their applications without having to create one.

Therefore, it is recommended to start by applying a “default-deny-all” network policy. The effect of this is to isolate all pods, which means that only connections explicitly listed by other network policies will be allowed.

 
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress

Without this policy, it is easy to get into a scenario where you delete a network policy, hoping to forbid the connections listed in it, and then find that all connections to some pods have suddenly become permitted, including ones that were not allowed before. This happens when the deleted network policy was the only one that applied to a particular pod, so its deletion caused the pod to become "non-isolated". Make sure you avoid this in your environment as well.

Note: Since network policies are namespaced resources, you will need to create this policy for each namespace. You can do so by running kubectl -n <namespace> create -f <filename> for each namespace.
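
As a small convenience, a minimal sketch of a shell loop can apply the policy to every namespace; it assumes the policy above has been saved locally as default-deny-all.yaml:

# Apply the default-deny-all policy to every namespace in the cluster.
$ for ns in $(kubectl get namespaces -o jsonpath='{.items[*].metadata.name}'); do kubectl -n "$ns" apply -f default-deny-all.yaml; done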

Explicitly Allow Internet Access for Pods that Need it

With just the default-deny-all policy in place in every namespace, none of the pods will be able to talk to each other or receive traffic from the Internet. For most applications to work, you need to allow some pods to receive traffic from outside sources. One convenient way to permit this is to designate labels that are applied to the pods you want to allow access from the internet, and to create network policies that target those labels. For example, the following network policy allows traffic from all (including external) sources for pods carrying the networking/allow-internet-access=true label. As discussed previously, this policy has to be created for every namespace:

 
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: internet-access
spec:
  podSelector:
    matchLabels:
      networking/allow-internet-access: "true"
  policyTypes:
  - Ingress
  ingress:
  - {}

This policy provides a good starting point, with much greater security than the default when applied correctly.

Explicitly Allow Necessary Pod-to-Pod Communication

After taking the steps discussed above, it is also recommended to add network policies that allow pods to interact with each other. You have a few options for enabling pod-to-pod communication, depending on the situation.

If you do not know which pods need to interact with each other, a good starting point is to allow all pods in the same namespace to interact with each other and to require explicit policies for communication across namespaces, since cross-namespace traffic is rare and is often a mistake. You can use the following network policy to allow all pod-to-pod communication within a namespace:

 
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}

Often, communication between pods in an application follows a hub-and-spoke paradigm, with some central pods that many other pods need to talk to. In this case, create a label that designates the pods that are allowed to talk to the "hub." For example, if your hub is a database pod with an app=db label, you can allow access to the database only from pods that have a networking/allow-db-access=true label by applying the following policy:

 
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-db-access
spec:
  podSelector:
    matchLabels:
      app: "db"
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          networking/allow-db-access: "true"

You can do something similar if you have a server that initiates connections to many other pods. To explicitly mark the pods that the server is allowed to talk to, set the networking/allow-server-to-access=true label on them and apply the following network policy (assuming your server has the label app=server):

 
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-server-to-access
spec:
  podSelector:
    matchLabels:
      networking/allow-server-to-access: "true"
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: "server"

Within the same namespace, users who know exactly which pod-to-pod connections should be allowed in their application can explicitly allow each of those connections. If you want pods in deployment A to be able to interact with pods in deployment B, you can create the following policy to allow that connection, after replacing the labels with those of the specific deployments:

 
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-a-to-b
spec:
  podSelector:
    matchLabels:
      deployment-b-pod-label-1-key: deployment-b-pod-label-1-value
      deployment-b-pod-label-2-key: deployment-b-pod-label-2-value
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          deployment-a-pod-label-1-key: deployment-a-pod-label-1-value
          deployment-a-pod-label-2-key: deployment-a-pod-label-2-value

To allow connections across namespaces, you need to create a label for the source namespace (Kubernetes does not put any labels on namespaces by default) and add a namespaceSelector query next to the podSelector query. To label a namespace, simply run: kubectl label namespace <name> networking/namespace=<name>

With the namespace labeled, you can allow deployment A in namespace N1 to talk to deployment B in namespace N2 by applying the following network policy:

 
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-n1-a-to-n2-b
  namespace: N2
spec:
  podSelector:
    matchLabels:
      deployment-b-pod-label-1-key: deployment-b-pod-label-1-value
      deployment-b-pod-label-2-key: deployment-b-pod-label-2-value
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          networking/namespace: N1
      podSelector:
        matchLabels:
          deployment-a-pod-label-1-key: deployment-a-pod-label-1-value
          deployment-a-pod-label-2-key: deployment-a-pod-label-2-value

Conclusion

While the recommendations above are best practices for avoiding human misconfiguration in Kubernetes and its environment, network policies are a lot more involved. To explore them in more detail, be sure to check out the Kubernetes tutorial as well as some handy network policy recipes on the Magalix blog.
