
Kubernetes Governance with Magalix

Who Owns this Workload?

Recently, one of our Site Reliability Engineers (SREs) noticed a workload running in our cluster that he hadn’t seen before. The workload was consuming resources, and the SRE wanted to apply some updates to the cluster, but he was not sure who owned the workload or whether his updates would affect it. He reached out to the development team on Slack to see if anyone knew anything about this new workload. It was late in the day, and it took some time to get a reply. Many people did not know about the workload, who owned it, or what part of the system it belonged to. Some of us even wondered whether it could be a malicious application. Should we terminate and remove it right away? What if it was part of the new feature we had launched a few days earlier?

Eventually, we figured out that it was part of a prototype an engineer was building with someone from the business team to experiment with a new feature. But what if it had been a malicious application? Or what if it had been causing an issue in production and we needed to reach the owner of the service? Many critical hours would have passed without a resolution. In such a distributed and decentralized environment, and as your teams continue to grow, it becomes challenging to know everything and to ensure everyone is a good citizen. So what should we do?

How to Establish Better Governance and Avoid this Problem?

If we mandate that each new service, or every change made to production, be reviewed by the SRE team, we will certainly slow the development team and the pace of innovation. Manual review also does not guarantee that all violations are detected. The solution, therefore, is to establish a governance framework and automate its processes as much as possible so that it can scale.

There are three key elements that should be defined to establish good governance around the issue we faced: “targets”, “policies”, and “triggers”, as we explained in “Kubernetes Governance 101”. The target in this case is all workloads in our cluster, and the policy should apply to all workload types: ReplicaSets, StatefulSets, Jobs, DaemonSets, etc. The policy is to enforce that each workload has an “owner” label whose value is the email of the engineer responsible for it. For the trigger, we initially thought it was enough to run the check once a day to find out who was violating the policy.
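As a sketch of how these three elements fit together, the Python snippet below is purely illustrative (it is not Magalix’s implementation): the workload dictionaries mimic the metadata section of Kubernetes objects, the policy function encodes the owner-label rule, and the audit function stands in for the daily trigger.

```python
# Illustrative sketch of the target/policy/trigger model (not Magalix's
# implementation). Each workload dict mimics the metadata section of a
# Kubernetes object.

def violates_owner_policy(workload):
    """Policy: every workload must carry an 'owner' label with an email."""
    labels = workload.get("metadata", {}).get("labels", {})
    owner = labels.get("owner", "")
    return "@" not in owner  # label missing, empty, or not an email

def daily_audit(workloads):
    """Trigger: run once a day over the target (all workloads)."""
    return [w["metadata"]["name"] for w in workloads
            if violates_owner_policy(w)]

workloads = [
    {"metadata": {"name": "api", "labels": {"owner": "jane@example.com"}}},
    {"metadata": {"name": "mystery-job", "labels": {}}},
]
print(daily_audit(workloads))  # → ['mystery-job']
```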




How to Set Up a Custom Advisor in Magalix

Magalix offers KubeAdvisor, which allows us to write custom policies using Rego, a declarative policy language that is open source and part of the Open Policy Agent (OPA) project. This allows us to rely on a widely used framework and community-supported tools to implement policy as code.

This is the pathway within the Magalix Console:

1. Navigate to KubeAdvisor.


2. Create Custom Advisor.


3. Create Issue.


4. Write Rego Policy.

package magalix.advisors.owner

violations[result] {
    not input.metadata.labels.owner
    result = {
        "issue": true
    }
}

violations[result] {
    value := input.metadata.labels.owner
    not contains(value, "@")
    result = {
        "issue": true
    }
}

This policy checks that the workload’s spec (metadata.labels) contains an owner label and that its value looks like an email address (it must contain an “@”). If you are new to Rego, see the language reference.
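To make the two rules concrete, here are two hypothetical workload metadata fragments (the names and emails are invented for illustration). The first satisfies the policy:

```json
{
  "metadata": {
    "name": "billing-api",
    "labels": { "owner": "jane@example.com" }
  }
}
```

The second triggers the second rule, because the owner value is not an email address:

```json
{
  "metadata": {
    "name": "mystery-job",
    "labels": { "owner": "jane" }
  }
}
```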

5. Browse Back to the “Issues” Section.


Once the advisor and issue have been created, our system will run them within 24 hours. To see the violations, simply browse back to the “Issues” section under the relevant cluster. See more info about our console and how to navigate it to find violations and recommendations.


How to Get Notified of New Violations?

Once you have gotten everyone on your development team to update their workloads’ specs and add the required label, how do you make sure things continue smoothly and don’t regress, with the cluster ending up with new applications and services that do not follow the policy you’ve laid down? You can certainly keep going back to the Magalix Console to check for new violations. But it would be better to get notified through an application like Slack of any new violations so you can act on them swiftly.

Magalix supports integration with Zapier and other tools that support webhooks. All you need to provide is the URL, and Magalix will send you an event whenever we detect a new violation of your policies. The event we generate can then be sent to a Slack channel, used to create a JIRA task, etc.

This is the JSON message that Magalix sends to the webhooks:

{
   "cluster": "prod-cluster",
   "namespace": "kafka",
   "controller": "kafka",
   "has_violation": true,
   "created_at": "2020-07-31T00:00:00Z",
   "advisor_name": "label checker",
   "issue_name": "owner label checker",
   "url": "https://console.gcp.dev.magalix.com/#/clusters/75ab01f4-d3cf-4c16-b713-546fbaf7ada4/issues/recommendations/d1ed8834-fdf4-5f3b-aca0-dbbaf761071b?parent=issues"
}
  • cluster: the name of the cluster in Magalix.
  • namespace: the namespace the controller belongs to.
  • controller: the controller for which the recommendation was generated.
  • has_violation: a boolean indicating whether the controller violated the recommendation.
  • created_at: the time the recommendation was generated.
  • advisor_name: the name of the advisor that generated this recommendation.
  • issue_name: the name of the issue that generated this recommendation.
  • url: a link to the recommendation in the Magalix Console.
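To sketch how such an event might be consumed, the snippet below parses a violation event of the shape shown above and formats a one-line message that could be posted to a Slack channel. The sample console URL and the idea of posting via an incoming webhook are assumptions for illustration, not part of Magalix’s documented API.

```python
import json

# Illustrative webhook consumer (not Magalix's API): parse the violation
# event and format a message suitable for posting to a Slack channel via
# a (hypothetical) incoming-webhook URL.

def format_slack_message(event: dict) -> str:
    """Turn a Magalix violation event into a one-line Slack message."""
    status = "violation" if event["has_violation"] else "resolved"
    return (f"[{status}] {event['issue_name']}: "
            f"{event['cluster']}/{event['namespace']}/{event['controller']} "
            f"({event['created_at']}) {event['url']}")

# Sample payload; the URL here is a placeholder, not a real console link.
event = json.loads("""{
  "cluster": "prod-cluster",
  "namespace": "kafka",
  "controller": "kafka",
  "has_violation": true,
  "created_at": "2020-07-31T00:00:00Z",
  "advisor_name": "label checker",
  "issue_name": "owner label checker",
  "url": "https://console.example.com/issues/123"
}""")
print(format_slack_message(event))
```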

In our case, we got the URL of one of our Slack channels and then went to Magalix Console to register for violation events. This is one of our “Beta Features” that you can request access to through Magalix Console.


You can also create a Zap using our Magalix app in Zapier. The Zap will be triggered when Magalix sends a violation event. A unique URL will be generated for you to register in the Magalix Console. After that, you can build your Zap to send messages to Slack, create JIRA tickets, or integrate with any of the hundreds of applications that Zapier supports.




Can I Prevent the Violation from Going to Production?

So far we are triggering the policy and checking against data from the cluster. This means that the violations have already made their way to production.

You may be wondering: what if you want to enforce the policy during the build process in your CI/CD pipeline? Magalix supports this as well. From the Magalix Console you can get a unique URL that can be used in your CI/CD pipeline to check for violations. This is also one of our “Beta Features” that you can request access to through the Magalix Console.
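A CI/CD gate on top of such a check could look like the illustrative sketch below. The response shape ({"violations": [...]}) is an assumption, not a documented Magalix format; in a real pipeline the result would come from an HTTP call to the unique URL obtained from the console.

```python
import sys

# Illustrative CI gate (the check-result shape is an assumption, not a
# documented Magalix format): fail the pipeline when the policy check
# reports violations.

def gate(check_result: dict) -> int:
    """Return a process exit code: 0 = pass, 1 = block the build."""
    violations = check_result.get("violations", [])
    for v in violations:
        print(f"policy violation: {v}", file=sys.stderr)
    return 1 if violations else 0

# In a real pipeline this dict would come from an HTTP call (e.g. with
# urllib) to the unique URL obtained from the Magalix Console.
exit_code = gate({"violations": ["deployment/api is missing the owner label"]})
print(exit_code)  # → 1
```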

Conclusion

As we saw in this article, it is important to establish a good governance framework to ensure that the development team remains agile and innovative while maintaining a healthy infrastructure and an operational environment that is stable and easy to operate. You can extrapolate other scenarios and policies from the example we discussed; see our “Kubernetes Governance 101” article for more ideas. For example, say you want to enforce a minimum number of replicas on all ReplicaSets, or limit workloads that are exposed externally through public IPs. All of this and more is now possible with Magalix. We all want to spend most of our time in Innovation Mode, not in Firefighting Mode. Life is more exciting this way! :)
