“Small mistakes tend to lead to large ones.” Modern development practices and technologies, like microservices and Kubernetes, come with huge benefits, but they also demand greater visibility and control. With Kubernetes in particular, manually keeping multiple clusters in line with security rules, operational best practices, and organizational standards quickly becomes cumbersome.
Enforcing policies on a Kubernetes cluster gives you control over the resources deployed in it and helps you avoid the deployment mistakes that arise when everything is managed by hand. Policy-as-code is the practice of expressing policies as code so they can be versioned, reviewed, and automatically enforced to detect, prevent, counteract, and reduce known and unknown threats.
According to the “Container Security Survey” conducted by AWS in 2020, 49% of respondents indicated that they had yet to implement a “general policy management strategy” for Kubernetes.
In this article, we will walk through 6 common Kubernetes deployment mistakes, one by one, and see how to avoid them with policy-as-code.
When you push a Docker image to a registry without providing a tag, it is tagged “latest” by default, and when you pull an image without specifying a tag, the image tagged “latest” is pulled. In other words, “latest” is the default tag on both push and pull.
Now, imagine you are working on a bug fix or a new feature and you build and push the image to the registry. Meanwhile, another team deploys your application to production using your image with the “latest” tag (or no tag at all). What happens? The unfinished code or unresolved bug you have been working on gets deployed to the production environment.
Deploying an image that contains untested code or known bugs to production can break the environment, makes it harder to track which version of the image is running, and makes a clean rollback more difficult. This is one of the most common Kubernetes deployment mistakes and should be avoided at all costs.
Always make sure that the images you use for application deployments carry a tag other than “latest”. This makes it clear which application version is deployed and enables a rollback when required. Click here to learn how to enforce, with policy-as-code, that Kubernetes container images have a tag that is not “latest”.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: <pod-name>
spec:
  containers:
  - name: <container-name>
    image: <image>:<tag>   # use an explicit tag here, not :latest
    ports:
    - containerPort: <port>
```
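The check such a policy performs can be sketched in a few lines of Python. This is an illustrative stand-in, not Magalix's actual implementation; the `image_tag_violations` helper and the sample image names are hypothetical:

```python
# Sketch of a policy check: every container image must carry an explicit
# tag, and that tag must not be "latest".
def image_tag_violations(pod_spec):
    """Return one violation message per offending container.
    pod_spec mirrors a Pod's .spec field as a plain dict."""
    violations = []
    for container in pod_spec.get("containers", []):
        image = container["image"]
        # Registry hosts may contain ':' for a port, so only inspect the
        # part of the reference after the last '/'.
        last_segment = image.rsplit("/", 1)[-1]
        tag = last_segment.split(":", 1)[1] if ":" in last_segment else None
        if tag is None or tag == "latest":
            violations.append(
                f"{container['name']}: image '{image}' must use an explicit, non-latest tag"
            )
    return violations

print(image_tag_violations({"containers": [
    {"name": "app", "image": "registry.example.com/app:latest"},   # flagged
    {"name": "sidecar", "image": "registry.example.com/proxy:v1.4.2"},  # passes
]}))
```

An admission controller running a rule like this rejects the pod before it ever reaches a node, which is exactly where you want the mistake caught.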
When you run applications as containers in a Kubernetes cluster, they will consume as much memory and CPU as they want if no limits are set. It is therefore important not to let any container consume too much of a node's resources: if one pod or application starts consuming more than its share, other pods on the same node may be starved and the node itself may malfunction.
It is necessary to limit resource consumption, i.e. memory and CPU. You can do this by specifying resource requests and limits in your pod or deployment definition file. The same can be enforced with policy-as-code; visit the article here to learn more, including how to restrict resource quotas at the namespace level.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: <pod-name>
spec:
  containers:
  - name: <container-name>
    image: <image>:<tag>
    resources:            # specify resource requests and limits
      requests:
        memory: "<XX>Mi"
        cpu: "<XX>m"
      limits:
        memory: "<XX>Mi"
        cpu: "<XX>m"
```
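The corresponding policy check can be sketched as follows. Again this is a hedged illustration, not a real policy engine API; the `missing_resources` helper is hypothetical:

```python
# Sketch of a policy check: flag every container that omits a CPU or
# memory entry under resources.requests or resources.limits.
def missing_resources(pod_spec):
    """Return one message per missing requests/limits entry.
    pod_spec mirrors a Pod's .spec field as a plain dict."""
    problems = []
    for container in pod_spec.get("containers", []):
        resources = container.get("resources", {})
        for section in ("requests", "limits"):
            for key in ("cpu", "memory"):
                if key not in resources.get(section, {}):
                    problems.append(
                        f"{container['name']}: missing resources.{section}.{key}"
                    )
    return problems

# A container with limits but no requests is still flagged:
print(missing_resources({"containers": [
    {"name": "app", "resources": {"limits": {"cpu": "100m", "memory": "128Mi"}}},
]}))
```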
Application secrets are meant to stay secret. They are called secrets because the information they hold is critical and must not be publicly available. Secrets include database credentials, login credentials, OAuth tokens, and SSH keys. Such information can cause huge damage if anyone gets hold of it.
Anyone can misuse such information: secrets can leak via logs, debug output, application code visible to other developers, or even source code management tools like GitHub and GitLab if you hardcode and push them.
Make sure that information such as usernames and passwords, OAuth tokens, and SSH keys is stored in an encoded or encrypted form. You can leverage Kubernetes Secrets to store your credentials so that they do not sit in plain text in your manifests. Click here to learn more about Kubernetes Secrets. Note, however, that Kubernetes Secrets are encoded, not encrypted.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: <pod-name>
spec:
  containers:
  - name: <container-name>
    image: <image>:<tag>
    env:
    - name: <env-var-name>
      valueFrom:
        secretKeyRef:       # use credentials from a Secret
          name: <secret-name>
          key: <secret-key>
```
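A quick way to see why Secrets are encoded rather than encrypted: base64 is reversible without any key. A minimal Python sketch (the credential value is hypothetical):

```python
import base64

# Kubernetes stores Secret values base64-encoded. Base64 is an encoding,
# not encryption: anyone who can read the Secret object can decode it,
# so protect Secrets with RBAC and enable encryption at rest as well.
password = "s3cr3t-pa55"  # hypothetical credential

encoded = base64.b64encode(password.encode()).decode()  # what lands in the Secret's data field
decoded = base64.b64decode(encoded).decode()            # trivially reversed, no key needed

print(encoded)
print(decoded == password)
```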
One of the reasons we use Kubernetes is to achieve high availability for our microservices and applications. To achieve it, you must run more than one instance of each application in your Kubernetes cluster. If only one instance is running and its pod unexpectedly goes down for some reason, your application or microservice will be unavailable until the pod is recreated, and user requests will fail in the meantime.
It is always recommended to run more than one instance (pod) of each application or microservice in your Kubernetes cluster. You can achieve this by specifying the number of replicas you want for your deployment. Alternatively, you can enforce a minimum and maximum pod count using policy-as-code.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <deployment-name>
spec:
  replicas: <count>         # must be more than 1
  selector:
    matchLabels:
      <label-key>: <label-value>
  template:
    metadata:
      labels:
        <label-key>: <label-value>
    spec:
      containers:
      - name: <container-name>
        image: <image>:<tag>
        ports:
        - containerPort: <port>
```
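The replica-count rule reduces to a one-line comparison. A hedged sketch with a hypothetical helper, not a real policy engine API:

```python
# Sketch of a policy check: a Deployment must request at least `minimum`
# replicas to provide any redundancy.
def replica_violations(deployment, minimum=2):
    """deployment mirrors a Deployment manifest as a plain dict."""
    # Kubernetes defaults .spec.replicas to 1 when it is omitted.
    replicas = deployment.get("spec", {}).get("replicas", 1)
    if replicas < minimum:
        return [f"replicas is {replicas}; at least {minimum} required for availability"]
    return []

print(replica_violations({"spec": {"replicas": 1}}))  # flagged
print(replica_violations({"spec": {"replicas": 3}}))  # passes
```

Defaulting the missing field to 1 matters: a manifest that omits `replicas` entirely is just as much a single point of failure as one that sets it to 1.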
Most of the time, we do not think about the origin of a base or application image before deploying containers with it. Knowing the source of an image is very important: images from untrusted registries may compromise the cluster's security. It is therefore necessary to use the company's private registry, which guarantees that all images have been tested and are secure.
Using images only from trusted registries, or from the company's private registry, protects your clusters from being compromised by images pulled from untrusted sources.
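A registry allowlist check can be sketched in a few lines. The allowlist entries below are placeholders; substitute your organization's trusted registries:

```python
# Sketch of an image-provenance check: only images pulled from an
# allowlisted registry prefix are permitted.
TRUSTED_REGISTRIES = ("registry.example.com/", "gcr.io/example-team/")  # placeholders

def untrusted_images(pod_spec, trusted=TRUSTED_REGISTRIES):
    """Return every container image not pulled from a trusted registry."""
    return [c["image"]
            for c in pod_spec.get("containers", [])
            if not c["image"].startswith(trusted)]

print(untrusted_images({"containers": [
    {"name": "app", "image": "registry.example.com/app:v1.2.0"},  # passes
    {"name": "tool", "image": "docker.io/somebody/tool:2.1"},     # flagged
]}))
```

A prefix match is the simplest form of this rule; real policies often also require image digests or signatures on top of the registry check.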
While working across different Kubernetes environments, you sometimes make changes, knowingly or unknowingly, using the kubectl command. kubectl is a very powerful tool for managing Kubernetes objects: you can create, edit, patch, and delete them on the fly. But when you patch Kubernetes objects with kubectl to apply a hotfix and forget to revert the change, you have already introduced drift into the infrastructure.
Later, you discover that your application works in one environment but misbehaves in another, and there is no way to track what caused the inconsistency. Other team members working on the same application will be clueless about the issue and its root cause.
Updating or patching Kubernetes objects on the fly with kubectl can thus introduce inconsistencies in both the application and its environment.
kubectl is a great tool for getting started with Kubernetes deployments; however, on live systems, be it dev, QA, staging, or prod, you must never edit or patch Kubernetes objects in place. Instead, use declarative object configuration files, stored in a version control system of your choice, so that every version and change is tracked. This keeps environments consistent.
While deploying applications on Kubernetes, we often focus on the bigger picture and ignore basic configuration. Such configuration may seem basic, but ignoring it can have costly consequences. According to IDC, about 67% of security breaches are caused by misconfigurations.
With the Magalix Policy Enforcement platform, you can now close your security gaps and implement security and configuration best practices to avoid exposing critical databases, endpoints, or any other assets and resources. Our library of policies is continuously growing and securely configures Kubernetes, Prometheus, MongoDB, mongo-express, MySQL, PostgreSQL, MariaDB, RabbitMQ, InfluxDB, and many more.