
Enforce Ingress Best Practices Using OPA


In this section of our OPA series, we define policies that ensure that no bad Ingress definitions will be deployed to our cluster. If you haven’t already done so, please go through our previous articles in this series to learn more about OPA, and how it can be integrated with Kubernetes to enforce policies. This article assumes that you have a working knowledge of Kubernetes and OPA, and that you already have admin access to a Kubernetes cluster that has OPA deployed. We also assume that an Ingress controller is installed (in the lab, we used the nginx controller).

As you probably know, a Kubernetes Ingress controller allows you to serve different URLs behind the same Load Balancer. It receives the HTTP request and routes it to the appropriate backend service depending on the path or the Host header. Obviously, this saves you the considerable cost of creating a Load Balancer for each public-facing service. However, as Uncle Ben said:

" With great power, comes great responsibility "

In the rest of this article, we examine some of the problems that can occur when working with Ingresses in the absence of an administrative policy.

Requirement: No Route Conflicts Exist Among Ingress Resources

So, you have your Ingress controller set up, and users start creating Ingress resources for different routes. As more and more routes are defined, the chance of a collision grows - for example, an Ingress with the host oscorp.com defined in one namespace, and another Ingress with the same host defined in a different namespace. OPA can help here with a policy that intercepts the request for a new Ingress and:

  1. Checks the existing hosts in all other Ingresses.
  2. Determines whether an already-defined host in an existing Ingress matches the one in the request.
  3. Approves or denies the request depending on the result.
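The steps above can be sketched in plain Python. This is an illustrative model only - the function name, argument names, and data shapes are hypothetical simplifications, not the actual Kubernetes AdmissionReview schema:

```python
# Hypothetical sketch of the check OPA will perform; data shapes are
# simplified stand-ins for the real AdmissionReview / Ingress objects.

def conflicting_hosts(request, existing_ingresses):
    """Return (namespace, name) pairs of existing Ingresses in OTHER
    namespaces that already claim a host requested by the new Ingress."""
    requested_hosts = {rule["host"] for rule in request["spec"]["rules"]}
    conflicts = []
    for ns, ingresses in existing_ingresses.items():
        if ns == request["namespace"]:
            continue  # conflicts within the same namespace are out of scope
        for name, ingress in ingresses.items():
            for rule in ingress["spec"]["rules"]:
                if rule["host"] in requested_hosts:
                    conflicts.append((ns, name))
    return conflicts

# An existing Ingress in the default namespace already claims oscorp.com
existing = {
    "default": {
        "ingress-ok": {"spec": {"rules": [{"host": "oscorp.com"}]}},
    },
}
# A new Ingress in another namespace requests the same host
new_ingress = {"namespace": "production",
               "spec": {"rules": [{"host": "oscorp.com"}]}}

print(conflicting_hosts(new_ingress, existing))  # [('default', 'ingress-ok')]
```

If the returned list is non-empty, the request is denied; otherwise it is admitted.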

The following diagram depicts this workflow:

[Diagram: the OPA admission workflow for validating new Ingress hosts]

Describing The Policy In Rego

To define a policy for OPA to enforce, we use the Rego language, which was designed specifically for this purpose. Our enforce-ingress-hostnames.rego file may look as follows:

package kubernetes.admission

import data.kubernetes.ingresses

deny[msg] {
    # We define variables that we use as keys when iterating through the ingresses dictionary
    some other_ns, other_ingress
    # We are only interested in Ingress requests
    input.request.kind.kind == "Ingress"
    # that ask for creation (other operations are ignored by this policy)
    input.request.operation == "CREATE"
    # Extract the host part of the request's JSON object
    host := input.request.object.spec.rules[_].host
    # Get all the existing ingresses. Note that we name the namespace (other_ns) and the ingress name (other_ingress)
    ingress := ingresses[other_ns][other_ingress]
    # We are not interested in Ingresses in the same namespace as the request
    other_ns != input.request.namespace
    # Do we have an existing ingress with a host matching the one in the new definition?
    ingress.spec.rules[_].host == host
    # If yes, the policy is violated. Send an informative message to the client detailing which part of the ingress violated the policy
    msg := sprintf("invalid ingress host %q (conflicts with %v/%v)", [host, other_ns, other_ingress])
}

As usual, we’ve added comments to explain what each code line does, but let’s focus on some important points:

  • You must use the kubernetes.admission package for this policy to work on Kubernetes. This is defined in line 1.
  • Line 3 imports the existing Ingress objects in the cluster. It does that by using the caching feature in the kube-mgmt container. We highly recommend that you go through this part of the documentation: https://github.com/open-policy-agent/kube-mgmt#caching
  • If you have read any of our previous articles in this series, you’ll notice that we almost always use the underscore character (_) as an iterator. Rego lets you use the underscore whenever you are not interested in the key or index of the item being visited (if you are a Go programmer, the blank identifier is a similar concept). In our case, however, we want to capture the key names so we can report the conflicting namespace and ingress (more on that later).
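A rough Python analogy for this point, with illustrative data: Rego's `_` is like iterating over values only, while named variables such as `other_ns` and `other_ingress` are like iterating with `.items()`, where the keys remain available for reporting:

```python
# Illustrative two-level mapping, mirroring data.kubernetes.ingresses
ingresses = {
    "default": {"ingress-ok": {"host": "oscorp.com"}},
    "staging": {"ingress-web": {"host": "staging.oscorp.com"}},
}

# Rego's ingresses[_][_] is like iterating values only: the keys are discarded.
hosts = [ing["host"] for ns in ingresses.values() for ing in ns.values()]

# ingresses[other_ns][other_ingress] is like .items(): the keys stay bound,
# so a violation message can name the conflicting namespace and ingress.
located = [(ns, name, ing["host"])
           for ns, ns_ingresses in ingresses.items()
           for name, ing in ns_ingresses.items()]

print(hosts)    # ['oscorp.com', 'staging.oscorp.com']
print(located)  # [('default', 'ingress-ok', 'oscorp.com'),
                #  ('staging', 'ingress-web', 'staging.oscorp.com')]
```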
  • Line 15 uses the kube-mgmt caching capability (discussed in a previous point) to extract the ingress namespace and name from the collection of the existing resources. It uses the other_ns and other_ingress to hold those values for us.
  • Line 19 is the meat and potatoes of our policy. It performs the actual evaluation of whether an existing Ingress resource in any other namespace declares the same host as the one in the definition.
  • Line 21 displays an informative message to the client when the policy is violated that contains the offending namespace and ingress name.
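The sprintf format verbs are worth noting: %q wraps the value in double quotes, while %v uses plain formatting. A quick Python equivalent of the message construction (using json.dumps to reproduce the quoting; variable values taken from the example later in this article):

```python
import json

host, other_ns, other_ingress = "oscorp.com", "default", "ingress-ok"

# Rego's %q quotes the string; json.dumps does the same for plain strings.
# %v is rendered here with ordinary %s formatting.
msg = "invalid ingress host %s (conflicts with %s/%s)" % (
    json.dumps(host), other_ns, other_ingress)

print(msg)  # invalid ingress host "oscorp.com" (conflicts with default/ingress-ok)
```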

Applying The Policy

Before uploading our Rego file to OPA, we need to instruct the kube-mgmt sidecar container to load the existing ingresses, so that they are available to our code. This can be done by modifying the command arguments of the container so that they look as follows:

- name: kube-mgmt
  image: openpolicyagent/kube-mgmt:0.8
  args:
    - "--replicate=extensions/v1beta1/ingresses"

If you’ve been following this article series from the start, you can apply the above change either by editing the OPA deployment: kubectl -n opa edit deployment opa or by making the change in the YAML file and applying it.
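Conceptually, once replication is enabled, kube-mgmt maintains a two-level mapping under data.kubernetes.ingresses, keyed first by namespace and then by resource name. The following Python sketch is purely illustrative (the helper name and abbreviated fields are ours, not kube-mgmt's API):

```python
# Illustrative model of the cache that `import data.kubernetes.ingresses`
# resolves to once kube-mgmt replication is enabled.
cache = {}

def replicate(namespace, name, obj):
    """Mimic kube-mgmt storing a resource under data.kubernetes.ingresses."""
    cache.setdefault(namespace, {})[name] = obj

replicate("default", "ingress-ok",
          {"spec": {"rules": [{"host": "oscorp.com"}]}})

# The Rego expression ingresses[other_ns][other_ingress] walks this
# two-level structure, binding other_ns and other_ingress to the keys.
print(cache["default"]["ingress-ok"]["spec"]["rules"][0]["host"])  # oscorp.com
```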

Now, let’s apply the policy by creating a ConfigMap in the OPA namespace that holds the contents of our Rego file:

kubectl -n opa create configmap enforce-ingress-hostnames --from-file=enforce-ingress-hostnames.rego

Then, ensure that OPA did not complain about any syntax errors:

kubectl -n opa get cm enforce-ingress-hostnames -o json | jq '.metadata.annotations'
  "openpolicyagent.org/policy-status": "{\"status\":\"ok\"}"


Exercising The Policy

In order to ensure that our policy is working as expected, we apply the following procedure:

  1. Create an Ingress resource in the default namespace and ensure that the creation request was admitted successfully.
  2. Create a new Ingress in another namespace. Make sure that you use the same host in the definition file.
  3. The request should be denied, and you should see an informative message.

Our Ingress definition looks as follows:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-ok
  namespace: default
spec:
  rules:
  - host: oscorp.com
    http:
      paths:
      - backend:
          serviceName: nginx
          servicePort: 80

$ kubectl apply -f ingress.yaml
ingress.extensions/ingress-ok created
$ kubectl create ns production
namespace/production created

Now, modify the ingress.yaml file so that the namespace is production instead of default and apply the definition again:

$ kubectl apply -f ingress.yaml
Error from server (invalid ingress host "oscorp.com" (conflicts with default/ingress-ok)): error when creating "ingress.yaml": admission webhook "validating-webhook.openpolicyagent.org" denied the request: invalid ingress host "oscorp.com" (conflicts with default/ingress-ok)


Key Takeaways

  • Open Policy Agent can be deployed to Kubernetes as an admission controller. Once there, it intercepts the requests arriving at the API server and validates them against the policies it already has.
  • OPA can be used not only to enforce security policies, but also to enforce many Kubernetes best practices.
  • In this article, we explored how to use OPA to avoid Ingress conflicts by first checking whether an existing Ingress uses the same host as the one to be created.
