In previous articles, we deployed Kubernetes clusters to provider-managed Kubernetes services such as Amazon Elastic Kubernetes Service (EKS) and Google Kubernetes Engine (GKE). When doing so, we saw that we don’t have to manage the master node or cluster control plane; we only have to decide how to deploy the application and select the instance type. All other concerns are handled by the provider.
The master node manages the cluster and is responsible for maintaining its state, and it can be replicated for high availability. To communicate with the master node we use the Kubernetes client tool, kubectl, which lets us easily issue commands that the master node acts on. The master node relies on the following components to manage and control the state of the cluster:
- kube-apiserver: the front end of the control plane, exposing the Kubernetes API
- etcd: the key-value store that holds all cluster state
- kube-scheduler: assigns newly created pods to nodes
- kube-controller-manager: runs the controllers that drive the cluster toward its desired state
So far we’ve seen what a master node is and the purpose it serves for cluster nodes. Cloud providers, however, have their own managed Kubernetes services, such as Amazon Elastic Kubernetes Service, Google Kubernetes Engine, and Azure Kubernetes Service. These cloud providers manage the master node for you out of the box; you don’t have to provision or manage it yourself. The managed offerings vary between cloud providers, and some go so far as to offer dedicated support, pre-configured environments, and hosting.
Kubernetes was created by Google, drawing on its internal container orchestration systems: Borg and its successor, Omega. This long history is part of why GKE is often considered the most mature managed Kubernetes service. It includes health checks and automatic repair of microservices, plus logging and monitoring with Stackdriver. Additionally, it comes with four-way auto-scaling and multi-cluster support.
Some of the core features of Google Kubernetes Engine are:
Horizontal pod autoscaling based on CPU utilization or custom metrics; cluster autoscaling that works per node group; and vertical pod autoscaling that continuously analyzes the CPU and memory usage of pods and dynamically adjusts their CPU and memory requests in response. GKE automatically scales node groups and clusters across multiple node groups based on changing workload requirements.
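As a sketch of the first of these features, a minimal HorizontalPodAutoscaler manifest that scales a hypothetical `web` Deployment on CPU utilization might look like this (the Deployment name, replica bounds, and threshold are all illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                      # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above 70% average CPU
```

Applying this with kubectl tells the cluster to keep average CPU usage near 70% by adding or removing pods within the 2-10 replica range.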
The simplest way to start a Kubernetes cluster in GKE is:
gcloud container clusters create magalix
gcloud is the command-line interface for Google Cloud Platform (GCP); you can use it to create, delete, or manage most GCP cloud services. Here, we’re creating a cluster named magalix with the default number of nodes (3). With just one command, the cluster is deployed and managed by GCP. The command above creates one master node, 3 worker nodes, and a pre-configured environment for you.
Note: This will be deployed to your default project ID
Cost: GKE charges $0.10 per hour for Kubernetes cluster management, plus the cost of the underlying services according to their pricing.
Amazon Web Services has its own managed Kubernetes service called EKS. It’s another managed offering where you don’t have to create or maintain the cluster control plane. EKS runs the control plane across multiple Availability Zones (AZs) to ensure high availability, and automatically replaces unhealthy instances. It integrates with other AWS services to provide scalability and security for your application, such as Elastic Load Balancing for load distribution, IAM for authentication, and Amazon VPC for network isolation.
The simplest way to start a Kubernetes cluster in EKS is eksctl, the EKS CLI tool for creating and interacting with clusters.
1. To create a cluster using Fargate (i.e., serverless deployment):
eksctl create cluster --name magalix --region us-west-1 --fargate
This command creates the cluster magalix in region us-west-1 (Northern California) using Fargate. In this case, we don’t specify a number of nodes.
2. To create a cluster using EC2 machines (i.e., AWS virtual machines):
eksctl create cluster --name magalix --region us-west-1 --nodegroup-name standard-workers --node-type t3.medium --nodes 3 --nodes-min 1 --nodes-max 4 --managed
This command creates the cluster magalix in region us-west-1 (Northern California) on EC2 machines, with a managed node group called standard-workers, instance type t3.medium, and auto-scaling between 1 and 4 nodes.
Note: If --region is omitted, the cluster will be deployed to your default region
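The flags from the EC2 command above can also be expressed as an eksctl config file. A sketch of a roughly equivalent ClusterConfig follows (the field values mirror the command; the file name is arbitrary):

```yaml
# cluster.yaml -- pass with: eksctl create cluster -f cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: magalix
  region: us-west-1
managedNodeGroups:            # managed node groups, as with the --managed flag
  - name: standard-workers
    instanceType: t3.medium
    desiredCapacity: 3        # initial number of worker nodes
    minSize: 1                # auto-scaling lower bound
    maxSize: 4                # auto-scaling upper bound
```

Keeping the cluster definition in a file makes it easier to version-control and reproduce.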
Cost: EKS charges $0.10 per hour for Kubernetes cluster management, plus the cost of the underlying services according to their pricing.
AKS is also a managed Kubernetes service, which reduces the complexity and operational overhead of managing Kubernetes by offloading much of that responsibility to Azure. AKS handles critical tasks such as health monitoring and maintenance for you. It offers serverless Kubernetes, an integrated continuous integration and continuous delivery (CI/CD) experience, and enterprise-grade security and governance.
The simplest way to start a Kubernetes cluster in AKS is:
az group create --name mine --location eastus
This will create the resource group mine in the eastus location:
az aks create --resource-group mine --name magalix --node-count 1 --enable-addons monitoring --generate-ssh-keys
This command will create a cluster named magalix in the mine resource group, with one worker node and monitoring enabled.
Cost: AKS charges nothing for the management of the Kubernetes cluster. It only charges for its underlying services.
We’ve seen how easy it is to deploy Kubernetes clusters with a managed Kubernetes service, but everything has its pros and cons: with a self-managed Kubernetes cluster we have more control over the cluster control plane, whereas in a managed service the management-layer components are handled by the provider.
When you deploy a cluster through kubeadm, Kubespray, or even manually (the hard way), you have full access to the cluster master and all the other related management components. You’ll also have more control over the deployment and administration of your cluster. For example, you can implement multiple node groups or choose different instance types for different nodes. These options are not available with many managed Kubernetes services. You can deploy a self-managed Kubernetes cluster on Google Cloud by using Compute Engine instances as nodes, on AWS by using EC2 machines, and on Azure by deploying to Azure virtual machines.
Kubespray provides Ansible playbooks for Kubernetes deployment and configuration. It can be used on any cloud provider or on-premises, and it uses kubeadm under the hood.
Kubeadm bootstraps a best-practice Kubernetes cluster on existing infrastructure, creating the minimum viable Kubernetes cluster possible.
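As an illustration, kubeadm can take its settings from a configuration file instead of flags. A minimal sketch (the Kubernetes version and pod subnet below are illustrative values, not requirements) might look like:

```yaml
# kubeadm-config.yaml -- pass with: kubeadm init --config kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: "v1.27.0"    # illustrative Kubernetes version
networking:
  podSubnet: "10.244.0.0/16"    # CIDR handed to the pod network plugin
```

After `kubeadm init` completes, worker nodes join the cluster with the `kubeadm join` command it prints.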
Kops is a tool that can be used to create, delete, upgrade, and maintain production-grade, highly available Kubernetes clusters. Kops enables you to manage the full Kubernetes cluster lifecycle, from infrastructure provisioning to cluster deletion.
Kubetail is a small bash script used to aggregate logs from multiple pods into one stream.
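Conceptually, it does something like the following local sketch: read several log sources and interleave their lines into one stream, each line prefixed with the name of its source (the two files here are stand-ins for the log streams of two pods):

```shell
# Local sketch of log aggregation: label each line with its source.
# The two files stand in for the log streams of two pods.
printf 'starting server\nready\n' > /tmp/pod-a.log
printf 'connected to db\n'        > /tmp/pod-b.log

for f in /tmp/pod-a.log /tmp/pod-b.log; do
  name=$(basename "$f" .log)
  sed "s/^/[$name] /" "$f"    # prefix every line with [pod-name]
done
```

The real tool follows live pod logs via kubectl rather than reading static files, but the labeling idea is the same.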
Kubewatch is a Kubernetes watcher that publishes Kubernetes events to the team via a communication app such as Slack. Kubewatch runs as a pod inside the Kubernetes cluster and watches for changes that occur in the system.
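As a sketch, kubewatch is typically configured through a `.kubewatch.yaml` file; the token and channel below are placeholders, and the exact schema may vary between versions:

```yaml
# .kubewatch.yaml -- placeholder values; schema may differ by version
handler:
  slack:
    token: "xoxb-your-bot-token"    # placeholder Slack bot token
    channel: "#kubernetes-events"   # channel to post events to
resource:
  deployment: true                  # watch Deployments
  pod: true                         # watch Pods
```

With this in place, events such as pod creations and deletions are pushed to the configured Slack channel.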
Prometheus is a popular tool for monitoring the cluster. It’s simple to integrate, yet extremely powerful.
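For instance, a minimal Prometheus configuration that discovers and scrapes every node in a Kubernetes cluster could look like this (the scrape interval and job name are illustrative):

```yaml
# prometheus.yml -- minimal sketch of Kubernetes node discovery
global:
  scrape_interval: 15s             # how often to scrape targets
scrape_configs:
  - job_name: kubernetes-nodes     # illustrative job name
    kubernetes_sd_configs:
      - role: node                 # discover every node in the cluster
```

The `kubernetes_sd_configs` block lets Prometheus find targets automatically as nodes join or leave, instead of listing them by hand.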
Helm is a package manager for Kubernetes, analogous to npm or pip. Helm operates on charts, and you can share your application by packaging it as a Helm chart.
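Every chart starts with a Chart.yaml metadata file; a minimal one for a hypothetical application might be:

```yaml
# Chart.yaml -- minimal chart metadata (name and versions are illustrative)
apiVersion: v2            # Helm 3 chart API version
name: magalix-app         # hypothetical chart name
description: A sample application chart
version: 0.1.0            # version of the chart itself
appVersion: "1.0.0"       # version of the packaged application
```

Alongside this file, a chart holds templated Kubernetes manifests that Helm renders and installs as a unit.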
Istio is an open-source service mesh that makes it easier to connect, manage, and secure traffic, and it collects telemetry about microservices running in containers.
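As an example of its traffic management, here is a sketch of an Istio VirtualService that splits traffic between two subsets of a hypothetical `reviews` service (the service and subset names are illustrative, and the subsets would be defined in a companion DestinationRule):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews            # hypothetical in-mesh service
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90     # 90% of traffic to the stable version
        - destination:
            host: reviews
            subset: v2
          weight: 10     # 10% canary traffic to the new version
```

Shifting the weights gradually is a common way to run a canary rollout without touching application code.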
CoreDNS is a set of plugins written in Go that performs DNS functions. With its Kubernetes plugin, CoreDNS can replace the kube-dns service and implement the specification defined for Kubernetes DNS-based service discovery.
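CoreDNS is configured through a Corefile; the one shipped with Kubernetes looks roughly like this, with the `kubernetes` plugin handling cluster service discovery:

```
.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    forward . /etc/resolv.conf    # send everything else upstream
    cache 30
}
```

Each line names a plugin, so extending cluster DNS is a matter of adding or reordering plugins in this file.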
Kubernetes Dashboard is a web UI for the Kubernetes cluster that makes it easier to monitor and troubleshoot the cluster.
When choosing between these modalities, we have to assess the application’s requirements, cost, and flexibility. If we go with a provider-managed Kubernetes service, we’ve seen it costs roughly $73 per month (at $0.10 per hour) just for management of the cluster. By contrast, if we opt for self-management, do we have the time and skill required to manage the Kubernetes cluster? Definitely worth considering. However, when it comes to flexibility, self-managed Kubernetes gives us far more options. Ultimately, you’ll have to weigh your application requirements, available time for management tasks, and skill level.