In this article, we will talk about Google Kubernetes Engine (GKE) and deploy a sample app on a GKE cluster. Kubernetes is the preferred container orchestration tool in 2020 and is used to deploy applications packaged as containers. Most cloud providers, e.g. Amazon Web Services, Google Cloud, and Azure, have their own managed Kubernetes service offering.
What is the Difference Between Managed and Non-Managed Services?
Managed services are those that are managed by the cloud provider. In a Kubernetes cluster, you have one master node (it can be replicated across the cluster for high availability) and one or more worker nodes. When you use a private cloud, you have to configure each part of the cluster on your own, i.e., the master node, the worker nodes, and storage. With a managed service, you don't have to worry about that configuration. Later on, we will see how managed services handle all of these concerns.
Google Kubernetes Engine
Google Kubernetes Engine is a managed service for orchestrating containerized applications. It can be used to deploy simple or complex web applications, run batch tasks, or even host ML and AI infrastructure on top of Kubernetes.
Some operations you can perform on it are:
- Create Kubernetes objects, i.e., Pods, ReplicaSets, Deployments, Services, Jobs, and many others
- Update containers
- Resize a ReplicaSet
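These operations map to standard kubectl commands. As a rough sketch (the deployment name and nginx image below are placeholders, not part of this tutorial's app):

```shell
# Create a Deployment (which manages a ReplicaSet of pods)
kubectl create deployment my-app --image=nginx:1.19

# Update the container image of an existing Deployment
kubectl set image deployment/my-app nginx=nginx:1.20

# Resize the underlying ReplicaSet by scaling the Deployment
kubectl scale deployment my-app --replicas=3
```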
You can interact with GKE using the gcloud CLI locally or through Google Cloud Shell.
Behind the scenes, GKE creates Compute Engine instances that run these containers and manages them for you. Google Kubernetes Engine provides one zonal cluster per billing account for free. It charges a management fee of $0.10 per hour, and the nodes that you use in the cluster are billed according to Compute Engine pricing.
Create Your First Cluster Using GKE
First, log into your account and open the Google Cloud Platform console. On the left side, you have an option named Kubernetes Engine under the Compute category.
Click on quick start.
Name Your Cluster
This guide creates a Kubernetes cluster with the number of nodes you require for your application, and everything in the cluster is configurable. The first step is to name the cluster. You can name it anything you like.
Pick a Location
The second step is to pick a location. For the sake of simplicity you can select any, but in production you would look for the region with the lowest latency to your end-users, so the application can serve them quickly. You can deploy your cluster regionally or in a single zone; the cost adds up if you deploy regionally rather than in a single zone.
Set Release Channel
Next, we are going to set the release channel, which determines the version of the Kubernetes engine. The release channel can't be changed later, so choose wisely. There are three channels - Rapid, Regular, and Stable.
- Rapid: On the Rapid channel, you get the fastest upgrades and always have the newest version of Kubernetes Engine, but sometimes there are bugs or issues with no workaround. This channel is not recommended for production.
- Regular: This channel is for customers who want to test new releases before they qualify for production. Known issues may occur, but there will be workarounds for them.
- Stable: This channel has been tested and has passed all the checks required for production-grade applications.
You can also pin a specific static version of the Kubernetes engine instead of a release channel.
Choose The Resources
Here you can use and configure your node.
- Machine family has three types, and you can select according to your needs. Memory-optimized machines are for memory-intensive workloads, such as real-time model training. Compute-optimized machines are high-performance machines that can be used for scientific modeling. There are also general-purpose machines that perform equally well on all tasks.
- Machine type lets you select from a range of vCPU and memory sizes. While you are on free credit and trying this for learning purposes, you should choose the lowest option. It saves credit, and you can do much more with what remains.
- Enabling autoscaling means the cluster can scale nodes up and down. We are not enabling it here as our end goal is to learn, but in production you would set the size according to your users' load.
- Telemetry lets you enable logging of system statuses such as crashes, incidents, and more.
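The same choices can also be made from the command line instead of the console. A minimal sketch, assuming the gcloud CLI is installed; the cluster name, zone, machine type, and node counts are placeholders you would adjust:

```shell
# Create a zonal cluster on the Stable release channel with
# three general-purpose nodes and node autoscaling enabled
gcloud container clusters create my-first-cluster-1 \
  --zone us-central1-c \
  --release-channel stable \
  --machine-type e2-small \
  --num-nodes 3 \
  --enable-autoscaling --min-nodes 1 --max-nodes 5
```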
Review the cluster
Review the settings, make any changes you need, and click Create. Google Cloud Platform will start initializing your cluster, and within a minute or so it will be ready.
Our first cluster is now up and running with three machines with 3 CPUs and 1.8 GB of memory. The next step is to deploy the application.
Deploy Our App To GKE
Install the gcloud tool through https://cloud.google.com/sdk/install, then install the Kubernetes client:
gcloud components install kubectl
After installing the Kubernetes client, set the default zone and project ID.
gcloud config set project i-return-273913
gcloud config set compute/zone us-central1-c
Now that you have configured the default project and compute zone, we need authentication credentials for our cluster.
gcloud container clusters get-credentials my-first-cluster-1
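If the credentials were fetched successfully, kubectl now points at the new cluster. A quick way to confirm the connection (the node names in your output will differ):

```shell
# List the worker nodes of the cluster kubectl is connected to
kubectl get nodes
```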
Our cluster name is my-first-cluster-1; change it to your own cluster's name.
kubectl create deployment hello-server --image=gcr.io/google-samples/hello-app:1.0
This creates a deployment named hello-server from the sample hello-app image (written in Go) with the 1.0 version tag. You can add the --replicas=3 flag to start with three pods instead of one.
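If you skipped the flag and later want more pods, you can resize the existing deployment in place (the label selector below relies on the app=hello-server label that kubectl create deployment sets by default):

```shell
# Scale the existing deployment to three pods
kubectl scale deployment hello-server --replicas=3

# Watch the new pods come up
kubectl get pods -l app=hello-server
```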
kubectl expose deployment hello-server --type LoadBalancer --port 80 --target-port 8080
Now we expose our deployment with a LoadBalancer service on port 80, making our app accessible from the Internet.
kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
hello-server-5bfd595c65-lm5rr   1/1     Running   0          2m26s
We can now check the pods; if --replicas was not provided, there will be one pod by default. Your pod should be in the Running state, ready to serve.
kubectl get service hello-server
NAME           TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)        AGE
hello-server   LoadBalancer   10.105.2.251   220.127.116.11   80:30098/TCP   71s
You can also check the service that we have just created. hello-server is running on the external IP shown in the output.
Now you can visit the application; it is running on the external IP with the load balancer attached. Open that external IP in your browser and you will see the hello response from the server.
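You can also hit the service from the terminal; EXTERNAL_IP below is a placeholder for the address that `kubectl get service` printed for your cluster:

```shell
# Substitute your load balancer's external IP address
curl http://EXTERNAL_IP
```

The sample app should respond with a short hello message and the name of the pod that served the request.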
Kubernetes Managed Service On Other Cloud Providers
Amazon Web Services, Azure, and many more have their own managed Kubernetes services. Azure has a slight cost advantage as it charges nothing for management, while AWS and Google Cloud charge $0.10 per hour as a management fee. For the worker nodes, there is not much difference. You can use each provider's cost management tool to find out how much running an application will cost you.
Summary
- Kubernetes is a container orchestration tool.
- Kubernetes services come in two types: managed and unmanaged.
- Cloud providers offer managed Kubernetes services for an extra management fee.
- Google provides a managed Kubernetes service named Google Kubernetes Engine.
- Google's pricing is pay-as-you-go, with a $0.10 per hour management fee; worker nodes are charged according to Google Compute Engine pricing. Load balancers are billed as a separate service.
- One zonal cluster per billing account is free.
- Google Cloud Shell provides an easy way to connect to your cluster or a specific compute machine to deploy your application.