Balance innovation and agility against security and compliance risks using a 3-step process across all your cloud infrastructure.
One of the most challenging aspects of implementing a secure Cloud-Native environment is keeping up with the constant rate of change. As your teams gain familiarity and momentum with the basics, keeping track of everything going in and out of your stack becomes overwhelming and unmanageable. Moreover, the Cloud-Native landscape itself is constantly evolving, making it extremely challenging to understand what to govern and how to govern it. For those who are considering the push towards Cloud-Native, or even those who are well into their journey, Cloud-Native and Kubernetes seem to have become synonymous with complexity. With so many moving pieces, custom integrations, and a plethora of 3rd-party solutions available, developing and implementing a governance strategy can seem impossible. In parallel, securing your new environment will only grow more difficult in what is an already steep learning curve.
All journeys begin with a first step, but without knowing which direction to head in, you might find yourself stuck at a crossroads. Our journey begins with identifying all components, end to end, in need of governance. If we follow common DevOps practices, end to end in this context represents the chain of events that occur between the time code is written and the time it is running in production. This includes activities such as developer-initiated unit tests, CI/CD pipelines, container image building, and deployment to your Kubernetes cluster.
The Run Time phase consists of entities that are deployed and serving live traffic. We start with the rightmost phase because this is the intersection between traditional cybersecurity and Cloud-Native governance. Your systems and security teams should be well versed in the traditional ways of securing your stack. They ensure the proper firewalls are in place, data is encrypted at rest, access is provided on a granular as-needed basis, and so on.
If we look inward, the traditional perimeter model is no longer enough to claim a secure environment. There is a need to address the components inside the cluster that are dynamic. Changes to existing workloads are deployed frequently as your development teams continue to optimize, and new workloads are spun up just as quickly, if not quicker, all while your security and systems teams are trying to understand the context of each change. In essence, they are always playing “catch-up”.
In a Cloud-Native environment, there is a way not only to keep up with the rate of change but to get in front of it. Policy-as-Code has emerged to give systems, security, and software engineers the ability to codify what is and isn’t compliant, and to enforce that governance in real time. This allows Kubernetes and other pieces of your stack to automatically reject any new component that is not compliant, while continuously scanning your environment for new compliance violations and security vulnerabilities.
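As a minimal sketch of the idea, the check below mimics what a Run Time admission control policy does: inspect an incoming workload manifest and reject it before it is ever scheduled. The function name and the specific rule (no privileged containers) are illustrative, not a particular product’s API; only the field names follow the Kubernetes Pod schema.

```python
# Illustrative Run Time policy check: reject any Pod manifest whose
# containers request privileged mode. The function name is hypothetical;
# the nested keys mirror the Kubernetes Pod spec.

def validate_pod(pod: dict) -> tuple[bool, list[str]]:
    """Return (allowed, violations) for a Pod manifest."""
    violations = []
    for container in pod.get("spec", {}).get("containers", []):
        security = container.get("securityContext", {})
        if security.get("privileged", False):
            violations.append(
                f"container '{container.get('name')}' must not run privileged"
            )
    return (len(violations) == 0, violations)

# A non-compliant Pod is rejected before it ever serves traffic.
pod = {
    "spec": {
        "containers": [
            {"name": "web", "securityContext": {"privileged": True}}
        ]
    }
}
allowed, why = validate_pod(pod)
print(allowed, why)
```

In a real cluster this decision would be made by an admission controller at deploy time, with the same policy continuously re-evaluated against workloads that are already running.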
Policy as Code
As we “shift left” and gather feedback earlier in the chain, we move into our Build Time phase. This phase is typically represented by a CI/CD solution, broken down by various pipelines, steps, and tools. Now that policy is codified, we can enforce governance in our CI/CD pipelines, similar to how it’s enforced in the Run Time phase. The purpose of implementing governance at this phase is to catch violations before they are deployed live, notifying the right person for intervention.
The way this works is by simply adding steps to an already existing CI/CD process. For example, if the current process tests the code and then builds a container image, you could add a step before building the container to check whether the base image comes from an approved registry. Subsequently, you can check components within the deployment itself, perhaps for elevated privileges or unapproved ports, to identify any potential Run Time issues before attempting to deploy.
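The base-image check described above could be sketched as a small script that a pipeline step runs against the Dockerfile, failing the build on any violation. The registry names and function names here are hypothetical examples, not a specific tool’s interface.

```python
# Hypothetical Build Time step: fail the pipeline if a Dockerfile's base
# image does not come from an approved registry. Registry names below are
# placeholders for your organization's own allow-list.

APPROVED_REGISTRIES = ("registry.example.com/", "docker.io/library/")

def base_images(dockerfile: str) -> list[str]:
    """Extract the image reference from each FROM instruction."""
    return [
        line.split()[1]
        for line in dockerfile.splitlines()
        if line.strip().upper().startswith("FROM")
    ]

def check_base_images(dockerfile: str) -> list[str]:
    """Return one violation message per unapproved base image."""
    return [
        f"base image '{image}' is not from an approved registry"
        for image in base_images(dockerfile)
        if not image.startswith(APPROVED_REGISTRIES)
    ]

dockerfile = """\
FROM docker.io/library/python:3.12-slim
COPY app /app
"""
print(check_base_images(dockerfile))  # an empty list means the step passes
```

In practice the pipeline would exit non-zero when the returned list is non-empty, blocking the image build and notifying the committer.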
Testing your policies and then promoting them in your chain
The Build Time phase is not only meant to check for compliance within your microservices. Since your policy is code, Build Time is also the phase where your teams test the policies themselves. Just as tests are run to ensure your microservices are functioning, you must also run tests to ensure policies aren’t falsely identifying violations, or missing violations due to incorrectly written policies. As with other codebases, your policies get versioned and promoted once they are working correctly.
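Testing a policy looks much like testing any other code: known-good and known-bad fixtures guard against false positives and false negatives before the policy is versioned and promoted. A minimal sketch, assuming a hypothetical policy function `is_compliant`:

```python
# Policies are code, so they get their own tests. The policy below
# (containers must set runAsNonRoot) and its name are illustrative.

def is_compliant(manifest: dict) -> bool:
    """Example policy: every container must set runAsNonRoot."""
    return all(
        c.get("securityContext", {}).get("runAsNonRoot", False)
        for c in manifest.get("spec", {}).get("containers", [])
    )

def test_compliant_manifest_passes():
    good = {"spec": {"containers": [
        {"name": "web", "securityContext": {"runAsNonRoot": True}}]}}
    assert is_compliant(good)  # guards against false positives

def test_root_container_is_caught():
    bad = {"spec": {"containers": [{"name": "web"}]}}
    assert not is_compliant(bad)  # guards against false negatives

test_compliant_manifest_passes()
test_root_container_is_caught()
```

Running such tests as their own CI pipeline gives policies the same promote-on-green lifecycle as the services they govern.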
Development Time is the phase where developers decide how to approach the problem and begin codifying the business requirements. Traditionally, the developer is responsible for clean, functional code, without many security considerations outside the scope of the task. For example, if a developer is working on a login page and its accompanying backend, they may decide that there should be password complexity rules, a way to retrieve a forgotten password, a new account sign-up page, and so on. What they probably aren’t thinking about is whether the packages they are using to build this flow are secure, or whether their container image is running as the root user.
Software engineers are not always fully aware of the potential risks and vulnerabilities they are about to ship. If systems and security engineers understand what needs to be governed and what needs to be secured, they can easily create Policies as Code on behalf of the software engineer. In this model, software engineers simply run the security tests to immediately discover if what they are shipping is compliant or not.
Securing your applications and environment at scale requires an all-hands-on-deck approach, as security has become a shared responsibility. Systems engineers are not always security engineers. Security engineers are not always software engineers. Asking your development teams to handle security has not typically been the norm. To scale with today’s demand, many organizations are investing in DevSecOps: all three responsibilities shared across the organization, and specifically across these three technical stakeholders.
In the Build Time section we discussed, in brief, the model of having your systems and security engineers write a majority of the policies, if not all, and then having your software developers run the proper tests locally for rapid feedback. For this model to work, your teams will need to extend their knowledge base using the right tools for the job. Open Policy Agent (OPA), a policy engine, has become a popular choice for those looking to secure their Cloud-Native applications. Based on the declarative language Rego, OPA can be used to create policies within Kubernetes, but it can also be applied to any JSON object, making it highly extensible. Simply put, OPA allows you to make binary decisions based on JSON payloads. OPA is open source, so many common policies are available within the community.
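The core idea of “a binary decision from a JSON payload” can be sketched without OPA itself. The Python below mirrors the shape of a set of Rego deny rules: each rule inspects the input document and may yield denial messages, and the request is allowed only when no rule fires. The rule names and example document are hypothetical; real policies would be written in Rego and evaluated by OPA.

```python
# Sketch of an OPA-style decision: JSON document in, allow/deny out.
# Each "deny" rule yields zero or more denial messages, mirroring the
# deny-rule pattern commonly used in Rego. Rules here are illustrative.

def deny_latest_tag(doc: dict):
    for c in doc.get("spec", {}).get("containers", []):
        if c.get("image", "").endswith(":latest"):
            yield f"image '{c['image']}' uses the mutable :latest tag"

def deny_missing_owner(doc: dict):
    if "owner" not in doc.get("metadata", {}).get("labels", {}):
        yield "resource must carry an 'owner' label"

RULES = [deny_latest_tag, deny_missing_owner]

def decide(doc: dict) -> tuple[bool, list[str]]:
    """Evaluate every rule; allow only when no rule yields a denial."""
    denials = [msg for rule in RULES for msg in rule(doc)]
    return (not denials, denials)

doc = {"metadata": {"labels": {"owner": "team-a"}},
       "spec": {"containers": [{"image": "registry.example.com/api:1.4.2"}]}}
print(decide(doc))
```

Because the input is just JSON, the same decision shape applies equally to Kubernetes manifests, Terraform plans, or CI configuration, which is what makes the approach so extensible.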
OPA covers many use cases, but not all. OPA doesn’t scan images for vulnerabilities, or identify new CVEs when they are discovered. In those cases, you can look further into the community for open source tools that meet the requirements of your particular use case. You may get by with out-of-the-box examples at first, but unique or advanced solutions will require a bit more thought and experience. At this point, the learning curve begins to steepen, since knowledge of new tools and languages is necessary to secure each component, end to end.
If your teams are not already comfortable practicing DevOps or DevSecOps principles for your Cloud-Native application, then looking for a commercial solution to secure your stack end to end might be a viable option.
DevOps processes accelerate your business, but that only becomes possible after many hours of trial and error. CI/CD, trusting test results, automation, and team collaboration are key contributors to a successful DevOps organization. Not coincidentally, the road to securing your Cloud-Native applications closely resembles the pathway of DevOps.
In this context, confidence is not a requirement, but a goal. Establishing confidence and trust in your process requires numerous cycles of testing, validating, and auditing. Remember, securing your Cloud-Native application is not a one-time event but a continuous machine that probes, detects, identifies, mitigates, prevents, and protects. Establishing confidence against such a multitude of requirements is why many shy away, but by mirroring the automation and mindset of DevOps, with the assistance of Policy-as-Code and open source tooling, you can achieve a continuously secure environment and application.
Securing Cloud-Native applications can be complex because of the volume of skills and knowledge required, but by adopting a “shift left” perspective you can start breaking down the areas that need to be secured. Identifying the components in the chain that need governance opens the door to a methodical approach: start by applying policy as code in your Run Time, then continue to shift left and secure your Build Time and Development Time until the end-to-end requirements are satisfied.