
Kubernetes Network Policies 101

 

What is a Kubernetes Network Policy?

Many companies are moving their entire infrastructure to Kubernetes. Kubernetes aims to abstract all the components that you normally find in a modern IT data center: pods represent compute instances, network plugins provide routers and switches, volumes stand in for the SAN (Storage Area Network), and so on. But what about network security? In a data center, this is handled by one or more firewall appliances. In Kubernetes, it is handled by the NetworkPolicy resource.

For a NetworkPolicy definition to take effect in a Kubernetes cluster, the network plugin must support NetworkPolicy. Otherwise, any rules that you apply are silently ignored. Examples of network plugins that support NetworkPolicy include Calico, Cilium, Kube-router, Romana, and Weave Net.

Most network plugins work at the network layer of the OSI model (Layer 3). However, some plugins also work at other layers (for example, Layer 4 and Layer 7).

Do you need a NetworkPolicy resource defined in your cluster? The default Kubernetes policy allows pods to receive traffic from anywhere (these are referred to as non-isolated pods). So unless you are in a development environment, you’ll certainly need a NetworkPolicy in place.

Your First NetworkPolicy Definition

The NetworkPolicy resource uses labels to determine which pods it will manage. The security rules defined in the resource are applied to groups of pods. This works in the same sense as security groups that cloud providers use to enforce policies on groups of resources.

In our first example, we’ll use NetworkPolicy to target pods which have app=backend label. Those pods will be isolated in terms of ingress (incoming) and egress (outgoing) traffic once the rules are applied.
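For illustration, a pod carrying that label might be defined as follows. This is a hypothetical manifest, not from the original article; the pod name and container image are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: backend-pod        # example name chosen for illustration
  namespace: default
  labels:
    app: backend           # this label is what the NetworkPolicy matches on
spec:
  containers:
  - name: backend
    image: nginx           # placeholder image
    ports:
    - containerPort: 3000  # the port the policy below will allow ingress on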

The policy enforces that those pods can receive traffic on port 3000 only when it originates from one of the following sources:

  • IPs in the block 182.213.0.0/16
  • Pods in a namespace labeled project=mywebapp
  • Pods labeled app=frontend in the same namespace (default)

All other ingress traffic will be rejected.

Egress traffic is allowed to the IP range 30.204.218.0/24 on port 80. All other egress traffic will be rejected.

A NetworkPolicy definition file that addresses those requirements looks as follows:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 182.213.0.0/16
    - namespaceSelector:
        matchLabels:
          project: mywebapp
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 3000
  egress:
  - to:
    - ipBlock:
        cidr: 30.204.218.0/24
    ports:
    - protocol: TCP
      port: 80
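Assuming the definition above is saved to a file (the filename here is arbitrary), it can be applied and inspected with standard kubectl commands:

# Apply the policy to the cluster
kubectl apply -f backend-network-policy.yaml

# List network policies in the default namespace
kubectl get networkpolicy -n default

# Show the rules the policy enforces
kubectl describe networkpolicy backend-network-policy -n default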

Below is a visualization of that network policy.

[Figure: Kubernetes network policy example]

How can we fine-tune Network Policy using selectors?

There are endless scenarios where you want to allow or deny traffic from specific or multiple sources. The same holds for which destinations you want to allow traffic to. The Kubernetes NetworkPolicy resource provides a rich set of selectors that you can use to secure your network paths the way you want. NetworkPolicy selectors can select source connections (ingress) as well as destinations (egress). They are as follows:

  • podSelector: selects pods in the same namespace, which is defined in the metadata part of the NetworkPolicy definition. The selection is through the pod label.
  • namespaceSelector: selects a specific namespace by its label. All pods in that namespace are matched.
  • podSelector combined with namespaceSelector: when combined, you can select pods with a particular label in a namespace with a specific label. For example, let’s say we want to limit incoming traffic to our database (app=db) to only pods in a namespace labeled env=prod. Additionally, the pods must have the app=web label. The ingress part of the NetworkPolicy definition to achieve this may look as follows:
ingress:
- from:
  - namespaceSelector:
      matchLabels:
        env: prod
    podSelector:
      matchLabels:
        app: web

NOTE

You may be wondering: does Kubernetes combine its rules with an AND or an OR operator? This depends on whether the rules are in a single array item, or in multiple items. This works the same whether the definition is in YAML or JSON. In this article, we’ll be discussing YAML. So in the above snippet, we have both the namespaceSelector and the podSelector in one item (in YAML, an array item is denoted by a dash ‘-’). As a result, Kubernetes will combine both rules with an AND operator. In other words, the incoming connection must match both rules to be accepted.

On the other hand, if we write the above snippet with podSelector as a separate array item:

ingress:
- from:
  - namespaceSelector:
      matchLabels:
        env: prod
  - podSelector:
      matchLabels:
        app: web

Kubernetes will combine both rules with an OR operator. This means that any pod in a namespace labeled env=prod will be allowed, and so will any pod labeled app=web in the NetworkPolicy’s own namespace. This is a very important distinction to keep in mind when defining your NetworkPolicy rules.
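Note that namespaceSelector matches namespace labels, not namespace names. A namespace satisfying the env=prod selector above might be defined like this (the name prod is only an illustrative choice; any namespace carrying the label would match):

apiVersion: v1
kind: Namespace
metadata:
  name: prod        # example name; the selector matches the label below, not this name
  labels:
    env: prod       # the label that namespaceSelector with env=prod matches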

  • ipBlock: here you can define which IP CIDR blocks are allowed as the source or destination of the selected pods' connections. Typically, those IPs are external to the cluster, because pod IPs are short-lived and can change at any moment.
    You should be aware that, depending on the network plugin in use, the source IP address may change before the packet gets analyzed by the NetworkPolicy rules. An example scenario is when the cloud provider’s Load Balancer replaces the source IP of the packet with its own.
    ipBlock can also be used to exclude specific IPs from an allowed range. This is done using the except keyword. For example, we can allow all traffic from 182.213.0.0/16 but deny traffic from 182.213.50.43. The snippet for such a configuration may look as follows:
- ipBlock:
    cidr: 182.213.0.0/16
    except:
    - 182.213.50.43/32

As you can see, you can add multiple IP addresses or even ranges as items in the except array.

Common Kubernetes NetworkPolicy use cases

The rest of this article discusses some common use cases for using NetworkPolicy in your cluster.

Deny Ingress Traffic That Has No Rules

Effective network security starts with denying all traffic by default unless it is explicitly allowed. This is how firewalls work. By default, Kubernetes regards any pod that is not selected by a NetworkPolicy as “non-isolated”, meaning all ingress and egress traffic is allowed. So, a good foundation is to deny all traffic by default and let NetworkPolicy rules define which connections should pass. A NetworkPolicy definition for denying all ingress traffic may look like this:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  namespace: default
  name: ingress-default-deny
spec:
  podSelector: {}
  policyTypes:
  - Ingress

The effect of applying this policy is that all pods in the “default” namespace will not accept any traffic unless allowed by another rule. The namespace can be changed as desired through the metadata part of the NetworkPolicy.

Note that this does not affect egress traffic, which must be controlled through other rules, as we discuss later.

Deny Egress Traffic That Has No Rules

We’re doing the same thing here but on egress traffic. The following NetworkPolicy definition will deny all outgoing traffic unless allowed by another rule:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: egress-default-deny
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Egress

You can combine both policies in a single definition that will deny all ingress and egress traffic as follows:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  namespace: default
  name: ingress-egress-deny
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
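One caveat worth noting: a default egress deny also blocks DNS lookups, which breaks service discovery for the isolated pods. A common companion policy allows DNS traffic out again. This is a sketch, assuming cluster DNS listens on the standard port 53; the policy name is an arbitrary example:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress   # example name
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53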

Allow All Ingress Traffic Exclusively

You may want to override any other NetworkPolicy that restricts traffic to your pods, perhaps for troubleshooting a connection issue. This can be done by applying the following NetworkPolicy definition:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ingress-default-allow
  namespace: default
spec:
  podSelector: {}
  ingress:
  - {}
  policyTypes:
  - Ingress

The only difference here is that we add an ingress array containing a single empty rule, which matches all traffic.

Be aware, though, that this policy will override any other isolating policy in the same namespace.

Allow All Egress Traffic Exclusively

Like we did on the ingress part, sometimes you want to exclusively allow all egress traffic even if some other policies are denying it. The following NetworkPolicy will override all other egress rules and allow all traffic from all pods to any destination:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: egress-default-allow
  namespace: default
spec:
  podSelector: {}
  egress:
  - {}
  policyTypes:
  - Egress

In all cases, you can narrow any of the allow-all or deny-all policies to specific pods by using the podSelector as discussed earlier.

TL;DR

Kubernetes uses the NetworkPolicy resource to fulfill the firewall role that’s normally found in a traditional data center. NetworkPolicy is largely dependent on the capabilities of the network plugin in place.

A NetworkPolicy definition can work on all the pods in a namespace or you can use selectors to apply the rules to pods with a specific label.

Through ingress and egress rules, you can define the incoming or outgoing connection rules from/to:

  • Pods with a specific label (podSelector)
  • Pods belonging to a namespace with a particular label (namespaceSelector)
  • A combination of both rules to limit the selection to labeled pods in labeled namespaces.
  • IP ranges (ipBlock). Commonly, these are external IPs, as pod IPs are volatile by definition.

Pods that are selected by a NetworkPolicy are said to be “isolated”; those that are not matched are called “non-isolated”. Kubernetes allows non-isolated pods to accept all ingress and egress traffic. For that reason, it’s recommended that you apply default deny-all policies on both ingress and egress traffic, so that pods not matched by any other NetworkPolicy are locked down until a policy covers them.

Mohamed Ahmed

Jul 25, 2019