Welcome to Part 1 of our 3-part series on Cloud Asset Management and Protection.
Organizations love the cloud because they don’t have to worry about provisioning physical assets, and can take full advantage of its scalability, agility, and cost-effectiveness.
However, the cloud also creates challenges around asset management. Because cloud resources are easy and inexpensive to provision, organizations create and delete them frequently. This transient nature of cloud assets often results in inefficient “asset sprawl”, with many resources of different types residing in different places. In such a scenario, popular asset-tracking methods (e.g., IP address-based tracking) are ineffective. The good news is that this challenge can be mitigated by following some best practices.
This article shares best practices for managing one specific type of cloud asset: compute.
Compute assets are the first of the Big Three cloud assets and are of three types: Virtual Machines, containers, and serverless.
Compute resources are the workhorses of the cloud. To keep them running at peak condition, it’s critical to manage them well.
A Virtual Machine (VM) is a software-based replica of a physical computer. It can do almost everything a physical machine can – run operating systems and programs, perform computing functions, connect to other networks, and store data.
All VMs have an operating system, which includes a kernel and user space programs. Many also have platform or middleware software, and custom application code.
A single physical machine can host multiple VMs, with its compute resources distributed among them. This increases resource flexibility and availability and improves the overall efficiency of IT, engineering, and DevOps teams.
VMs play a critical role in many enterprise functions, including DevOps:
Vendors provide only limited support for some VM components, so organizations must manage their own VM inventory, vulnerabilities, and licenses.
Essential inventory items to track:
For efficient VM asset management, it’s essential to collect relevant information about VM instances, either by installing agents on the VMs, or by automating data collection with the cloud provider’s inventory system.
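As a concrete illustration, here is a minimal sketch of automated VM inventory collection through a cloud provider's API. It assumes an AWS account and the boto3 SDK, with a hypothetical hard-coded region; other providers expose similar inventory endpoints.

```python
# Minimal sketch: collect basic inventory data for every EC2 instance in one region.
# Assumes AWS credentials are configured and boto3 is installed.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

def list_vm_inventory():
    """Return one inventory record per VM instance in the region."""
    inventory = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                inventory.append({
                    "instance_id": instance["InstanceId"],
                    "image_id": instance["ImageId"],   # the template (AMI) it was created from
                    "state": instance["State"]["Name"],
                    "launch_time": instance["LaunchTime"].isoformat(),
                    "tags": {t["Key"]: t["Value"] for t in instance.get("Tags", [])},
                })
    return inventory

if __name__ == "__main__":
    for vm in list_vm_inventory():
        print(vm)
```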
It’s also vital to track images, i.e., templates used to create new VMs. Otherwise, they might come online with unpatched vulnerabilities and cause numerous problems. Dedicated VMs and the firmware of bare-metal systems should also be updated and secured.
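Images can be inventoried in the same way. The hedged sketch below lists the machine images an account owns, oldest first, since stale templates are the most likely to carry unpatched vulnerabilities; again it assumes AWS and boto3.

```python
# Sketch: list the machine images (AMIs) this account owns so stale templates stand out.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

def list_owned_images():
    images = ec2.describe_images(Owners=["self"])["Images"]
    # Oldest images first: these are the most likely to need patching or retirement.
    return sorted(images, key=lambda img: img["CreationDate"])

for image in list_owned_images():
    print(image["ImageId"], image["CreationDate"], image.get("Name", ""))
```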
Containers package an application’s code together with its entire runtime environment, so applications run quickly and reliably when moved between computing environments or deployment targets.
Unlike VMs, containers rarely contain an entire operating system; they are therefore more lightweight, use fewer resources, and allow for greater application modularity. They can be created or removed in near real time.
Sometimes, containers may include the operating system, perform multiple functions, and allow administrators to log in, making them similar to VMs, but on a smaller scale. It’s important to inventory such containers, and track containers, users, software, etc., either with agents or with automation. Images should also be inventoried and updated to ensure that new containers are not created from existing vulnerable images.
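One way to automate that inventory – a sketch, assuming the Docker SDK for Python and access to a local Docker daemon – is to query the daemon for each running container, the image it was created from, and the user it runs as:

```python
# Sketch: container inventory via the Docker SDK for Python (the `docker` package).
import docker

client = docker.from_env()  # connects to the local Docker daemon

def list_container_inventory():
    inventory = []
    for container in client.containers.list():      # running containers only
        inventory.append({
            "id": container.short_id,
            "name": container.name,
            "image": container.image.tags,           # image the container was created from
            "status": container.status,
            "user": container.attrs["Config"].get("User") or "root",
        })
    return inventory

for entry in list_container_inventory():
    print(entry)
```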
Native containers perform a single function and contain only the operating system components they require. They are also immutable, so their code cannot be updated in place – changes require building and deploying a new image.
Asset management for native containers is less complicated than for mini VM containers. However, configuration management and vulnerability management risks must still be managed.
Since native containers are created and removed frequently, it’s simpler to inventory only the container images and identify which existing image a new container is copied from. This also makes it easier to track image configurations and patch any new vulnerabilities.
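A minimal image-centred sketch, again assuming the Docker SDK for Python: record each local image and map every running container back to the image it was copied from.

```python
# Sketch: image-centred inventory for native containers.
import docker

client = docker.from_env()

# Inventory the images themselves (tags and creation date).
images = {
    img.id: {"tags": img.tags, "created": img.attrs["Created"]}
    for img in client.images.list()
}

# Map each running container back to its source image.
for container in client.containers.list():
    source = images.get(container.image.id, {})
    print(container.name, "->", source.get("tags"), source.get("created"))
```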
A container orchestration system bundles containers so they can perform higher-level functions. Running multiple containers in production, especially with microservices, can be a complex task for DevOps teams. Container orchestration simplifies this operational complexity through automation. It also automatically restarts or scales a container (or cluster) to boost resilience and helps secure containerized applications.
Kubernetes is a popular container orchestration system. It can work with Docker (and other) containers to provide an extensible and portable means to build containerized applications and schedule, monitor, and scale containers.
Inventory the following components to streamline Kubernetes asset management:
The Kubernetes command line (or API) is useful for tracking and inventorying pods and Docker containers.
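For example – a sketch assuming the official Kubernetes Python client and a local kubeconfig – the API can list every pod and the container images it runs:

```python
# Sketch: pod and container-image inventory through the Kubernetes API.
from kubernetes import client, config

config.load_kube_config()   # or config.load_incluster_config() when running inside a cluster
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces().items:
    images = [c.image for c in pod.spec.containers]
    print(pod.metadata.namespace, pod.metadata.name, images)
```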
Serverless computing is a way to build code and run applications without worrying about the hardware or operating system. Serverless services like AWS Lambda handle infrastructure management tasks, and dynamically allocate – and charge for – the compute and storage resources required to execute code. The DevOps team has no servers to manage or provision, so they can focus on writing and deploying code.
For serverless, organizations do not have to track images or instances, or inventory operating system or platform components. They only need to inventory the serverless deployments to control function access and to manage and mitigate code vulnerabilities.
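A hedged sketch of that deployment inventory, assuming AWS Lambda and boto3 (other providers expose equivalent listing APIs): list every function with its runtime, so outdated runtimes – a common source of code vulnerabilities – stand out.

```python
# Sketch: list every Lambda deployment in a region with its runtime and last-modified date.
import boto3

lam = boto3.client("lambda", region_name="us-east-1")  # assumed region

paginator = lam.get_paginator("list_functions")
for page in paginator.paginate():
    for fn in page["Functions"]:
        print(fn["FunctionName"], fn.get("Runtime", "container-image"), fn["LastModified"])
```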
Strong policies are essential to configure, manage and secure the complex Kubernetes environment. Robust governance policies ensure that images are only built and stored using trusted image registries, while network policies control traffic between pods and other endpoints. They may also restrict access to services within the Kubernetes cluster. Pod security policies control container privilege level. Policies can also be defined to restrict SSH access to Kubernetes nodes.
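As an illustration of one such policy – not Magalix's enforcement engine, just a sketch using the official Kubernetes Python client – a default-deny NetworkPolicy restricts all ingress traffic to pods in a namespace until explicit rules allow it:

```python
# Sketch: apply a default-deny ingress NetworkPolicy to the "default" namespace.
# Assumes a kubeconfig with rights to create NetworkPolicies.
from kubernetes import client, config

config.load_kube_config()
net = client.NetworkingV1Api()

default_deny = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="default-deny-ingress"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(),  # empty selector = all pods in the namespace
        policy_types=["Ingress"],               # no ingress rules listed, so all ingress is denied
    ),
)

net.create_namespaced_network_policy(namespace="default", body=default_deny)
```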
Magalix enables companies to programmatically enforce governance-as-code across their Kubernetes infrastructure. We adhere to Kubernetes best practices and provide a robust policy enforcement and execution engine to support the effortless deployment of governance policies.
Establish automated operators to continuously monitor repositories for changes, and achieve exceptional governance across all K8s clusters from a Single Source Of Truth – with Magalix and policy-as-code.
Protect your cloud assets by programmatically enforcing security standards. With Magalix, you can effortlessly manage and automate policies with policy-as-code. Get a 30-day free trial.