In a previous article, we saw how to create and manage Kubernetes clusters on a cloud provider such as Google Cloud. Now we'll look at how to do the same on Amazon Web Services (AWS).
In this article, we'll create a Kubernetes cluster with eksctl (the CLI tool for Amazon Elastic Kubernetes Service, which makes it easy to build clusters on EKS) and deploy a sample Guestbook application to it. Kubernetes is a container orchestration tool used to deploy applications packaged as containers. The major cloud providers (Amazon Web Services, Google Cloud, and Azure) each offer their own managed Kubernetes service.
Managed services are services operated for you by the cloud provider. A Kubernetes cluster has one master node (which can be replicated across the cluster for high availability) and one or more worker nodes. On a private cloud, you have to configure every part of the cluster yourself: master nodes, worker nodes, and storage. With a managed service, you don't have to worry about provisioning and configuration; you just pass the desired configuration as flags to a command. We'll see shortly how the managed service handles all of these tasks.
Elastic Kubernetes Service (EKS) is Amazon's managed Kubernetes offering: it lets you deploy clusters without maintaining your own Kubernetes control plane. EKS orchestrates containerized applications and can run anything from simple web applications to complex microservice-based systems, batch tasks, and even machine learning or artificial intelligence workloads.
Some of the main operations you'll be able to perform are creating clusters and node groups, scaling them up or down, and deleting them when you're done, all from the command line.
Behind the scenes, EKS creates EC2 instances that run your containers and are managed by EKS. AWS charges a management fee of $0.10 per hour per cluster, and the nodes you use in the cluster are billed at standard EC2 pricing.
There are two ways to deploy your cluster: on EC2 machines, or serverless using Fargate.
This deployment method uses Fargate instead of EC2 virtual machines. Fargate is only available in specific regions.
It eliminates the need to provision and manage servers, letting you focus on building applications. You pay only for the resources your application actually uses, and the cluster scales automatically with application demand, so no unneeded resources are left running.
This method deploys your pods on EC2 virtual machines inside a VPC (Virtual Private Cloud). Here you have to specify the type of virtual machine to use in the cluster (e.g., t2.micro or t3.large), depending on your application's resource needs, along with three node counts: the minimum, the maximum, and the desired number of nodes. This deployment option is available across all regions and gives you more control over resource allocation, since you can scale the number of nodes in or out.
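As a sketch of the control this gives you: once the cluster exists, you can move the node count anywhere within those bounds with eksctl's scale command (the cluster and node group names below are placeholders):

# Scale the node group to 4 nodes; the value must fall
# between the --nodes-min and --nodes-max set at creation.
eksctl scale nodegroup --cluster=my-cluster --name=my-nodegroup --nodes=4

Before we create anything, though, we need the CLI tools installed and configured: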
pip install awscli --upgrade --user
This command installs or upgrades the AWS Command Line Interface.
aws configure
Don't forget to configure it with your access key and secret key. This article assumes you have the AWS CLI installed and pre-configured with your security credentials.
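If you haven't configured credentials yet, aws configure walks you through four prompts. A sketch of the session, with placeholder values you'd replace with your own keys and preferred region:

aws configure
AWS Access Key ID [None]: AKIAEXAMPLEKEYID
AWS Secret Access Key [None]: wJalrEXAMPLESECRETKEY
Default region name [None]: us-east-1
Default output format [None]: json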
We'll use the Chocolatey package manager to install the CLI tools on Windows. If you don't have it yet, install it first from https://chocolatey.org/install.
chocolatey install -y eksctl aws-iam-authenticator
This installs the required packages, eksctl and aws-iam-authenticator.
eksctl version
Now you can check whether the eksctl CLI installed successfully; a successful response returns the version number of the tool.
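On the setup used in this article that would be 0.17.0 (the exact output format varies between eksctl releases, but it always includes the version number):

0.17.0

With the tools ready, we can create the cluster. First, the Fargate option: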
eksctl create cluster \
--name cluster_name \
--region us-east-1 \
--fargate
This command creates a Fargate cluster using three flags: --name sets the cluster name, --region selects the AWS region, and --fargate tells eksctl to run the workload on Fargate instead of EC2 node groups.
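To confirm the cluster came up, you can list the clusters in the region (assuming the same region as above):

eksctl get cluster --region us-east-1

If you'd rather run your pods on EC2 node groups, the create command takes a few more flags: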
eksctl create cluster \
--name cluster_name \
--region us-east-1 \
--nodegroup-name nodegroup_name \
--node-type t2.micro \
--nodes 6 \
--nodes-min 2 \
--nodes-max 8 \
--managed
This creates a managed cluster with a Linux node group. Beyond --name and --region, the flags are: --nodegroup-name, which names the node group; --node-type, which sets the EC2 instance type; --nodes, the desired number of nodes; --nodes-min and --nodes-max, the lower and upper scaling bounds; and --managed, which makes it an EKS-managed node group.
Reminder: It takes 10 to 15 minutes to spin up a cluster.
Note: Since we want to stay close to the free tier, we use small nodes with a maximum of 8, about the smallest setup that will run our sample Guestbook application. You can use bigger instances, but they will cost more. One more thing to note when deploying the cluster: if you see your pods stuck in a Pending state, you'll have to raise the node count to 10 (maybe more); our application runs fine on 6.
Output will be:
[ℹ] eksctl version 0.17.0
[ℹ] using region eu-west-3
[ℹ] setting availability zones to [eu-west-3a eu-west-3c eu-west-3b]
[ℹ] subnets for eu-west-3a - public:192.168.0.0/19 private:192.168.96.0/19
[ℹ] subnets for eu-west-3c - public:192.168.32.0/19 private:192.168.128.0/19
[ℹ] subnets for eu-west-3b - public:192.168.64.0/19 private:192.168.160.0/19
[ℹ] using Kubernetes version 1.15
[ℹ] creating EKS cluster "Guest" in "eu-west-3" region with managed nodes
[ℹ] will create 2 separate CloudFormation stacks for cluster itself and the initial managed nodegroup
[ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=eu-west-3 --cluster=Guest'
[ℹ] CloudWatch logging will not be enabled for cluster "Guest" in "eu-west-3"
[ℹ] you can enable it with 'eksctl utils update-cluster-logging --region=eu-west-3 --cluster=Guest'
[ℹ] Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "Guest" in "eu-west-3"
[ℹ] 2 sequential tasks: { create cluster control plane "Guest", create managed nodegroup "standard-workers" }
[ℹ] building cluster stack "eksctl-Guest-cluster"
[ℹ] deploying stack "eksctl-Guest-cluster"
[ℹ] building managed nodegroup stack "eksctl-Guest-nodegroup-standard-workers"
[ℹ] deploying stack "eksctl-Guest-nodegroup-standard-workers"
[✔] all EKS cluster resources for "Guest" have been created
[✔] saved kubeconfig as "C:\\Users\\ZARAK/.kube/config"
[ℹ] nodegroup "standard-workers" has 6 node(s)
[ℹ] node "ip-192-168-2-171.eu-west-3.compute.internal" is ready
[ℹ] node "ip-192-168-23-114.eu-west-3.compute.internal" is ready
[ℹ] node "ip-192-168-42-156.eu-west-3.compute.internal" is ready
[ℹ] node "ip-192-168-49-128.eu-west-3.compute.internal" is ready
[ℹ] node "ip-192-168-70-18.eu-west-3.compute.internal" is ready
[ℹ] node "ip-192-168-74-253.eu-west-3.compute.internal" is ready
[ℹ] waiting for at least 6 node(s) to become ready in "standard-workers"
[ℹ] nodegroup "standard-workers" has 6 node(s)
[ℹ] node "ip-192-168-2-171.eu-west-3.compute.internal" is ready
[ℹ] node "ip-192-168-23-114.eu-west-3.compute.internal" is ready
[ℹ] node "ip-192-168-42-156.eu-west-3.compute.internal" is ready
[ℹ] node "ip-192-168-49-128.eu-west-3.compute.internal" is ready
[ℹ] node "ip-192-168-70-18.eu-west-3.compute.internal" is ready
[ℹ] node "ip-192-168-74-253.eu-west-3.compute.internal" is ready
[ℹ] kubectl command should work with "C:\\Users\\ZARAK/.kube/config", try 'kubectl get nodes'
[✔] EKS cluster "Guest" in "eu-west-3" region is ready
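As the last log lines suggest, you can also verify the worker nodes directly. A sketch of what kubectl get nodes returns for our six-node group (names, ages, and the exact patch version will differ):

kubectl get nodes
NAME                                           STATUS   ROLES    AGE   VERSION
ip-192-168-2-171.eu-west-3.compute.internal    Ready    <none>   2m    v1.15.x
ip-192-168-23-114.eu-west-3.compute.internal   Ready    <none>   2m    v1.15.x
...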
To test your cluster, run the following command:
kubectl get svc
The output will look like:
NAME             TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
svc/kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   1m
Now we can see that the cluster is up and running.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/examples/master/guestbook-go/redis-master-controller.json
The output will be:
replicationcontroller "redis-master" created
This will create a replication controller to deploy a set of pods.
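You can check that the controller reached its desired replica count (a quick sanity check; the guestbook example's redis-master controller asks for a single replica):

kubectl get rc redis-master
NAME           DESIRED   CURRENT   READY   AGE
redis-master   1         1         1       1m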
kubectl apply -f https://raw.githubusercontent.com/kubernetes/examples/master/guestbook-go/redis-master-service.json
The output will be:
service "redis-master" created
This will create a service named redis-master.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/examples/master/guestbook-go/redis-slave-controller.json
The output will be:
replicationcontroller "redis-slave" created
This will create a replication controller to deploy a set of pods.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/examples/master/guestbook-go/redis-slave-service.json
The output will be:
service "redis-slave" created
This will create a service for the redis-slave replication controller.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/examples/master/guestbook-go/guestbook-controller.json
The output will be:
replicationcontroller "guestbook" created
This will create a replication controller to deploy a set of pods.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/examples/master/guestbook-go/guestbook-service.json
The output will be:
service "guestbook" created
Next, check on the pods. The output will look like the following:
C:\Users\ZARAK>kubectl get pod
NAME                 READY   STATUS    RESTARTS   AGE
guestbook-8nmfx      1/1     Running   0          30s
guestbook-pztk9      1/1     Running   0          30s
guestbook-wm9j5      1/1     Running   0          30s
redis-master-5hkkk   1/1     Running   0          53s
redis-slave-54lhp    1/1     Running   0          41s
redis-slave-gss5x    1/1     Running   0          41s
All pods have to be in the Running state. If any show Pending, you'll have to increase the number of nodes, because there isn't enough CPU and memory for them to be scheduled.
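If a pod does stay in Pending, kubectl can tell you why: the Events section at the bottom of the describe output typically reports something like "Insufficient cpu" or "Insufficient memory" (substitute one of your own pod names):

kubectl describe pod guestbook-8nmfx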
kubectl get services -o wide
You'll see multiple services now. Wait about 5 minutes for the load balancer to provision, and you'll see an external endpoint like this:
http://a9523dc74a5134c508c90231bd257a95-1289170190.eu-west-3.elb.amazonaws.com:3000/
After the external endpoint appears, give it another 5 to 10 minutes for DNS to propagate before opening it in a browser.
Now you've successfully deployed the sample application.
Congrats! You have successfully deployed your first cluster with the Amazon-managed Kubernetes service EKS.
Now, so that we don't incur further charges, we'll delete all the resources and the cluster.
kubectl delete rc/redis-master rc/redis-slave rc/guestbook svc/redis-master svc/redis-slave svc/guestbook
Output will be:
C:\Users\ZARAK>kubectl delete rc/redis-master rc/redis-slave rc/guestbook svc/redis-master svc/redis-slave svc/guestbook
replicationcontroller "redis-master" deleted
replicationcontroller "redis-slave" deleted
replicationcontroller "guestbook" deleted
service "redis-master" deleted
service "redis-slave" deleted
service "guestbook" deleted
This one line deletes all the replication controllers and services we created in the cluster.
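Equivalently, since every resource came from a manifest URL, you could delete each one with kubectl delete -f, for example:

kubectl delete -f https://raw.githubusercontent.com/kubernetes/examples/master/guestbook-go/guestbook-service.json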
First, make sure there’s no load balancer associated.
kubectl get svc --all-namespaces
This command shows you all the services in all of the cluster namespaces.
Output will look like:
NAMESPACE     NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE
default       kubernetes   ClusterIP   10.100.0.1    <none>        443/TCP         19m
kube-system   kube-dns     ClusterIP   10.100.0.10   <none>        53/UDP,53/TCP   18m
Look for any service with an external IP; that service has a load balancer associated with it. Since we don't have any service with an external IP left, we can skip to the next step.
If you have one (as mentioned above), go ahead and delete the service.
kubectl delete svc service_name
This will delete the service.
eksctl delete cluster --name cluster_name
This eksctl command deletes the cluster; the --name flag specifies the name of the cluster to delete.
Output will be:
[ℹ] eksctl version 0.17.0
[ℹ] using region eu-west-3
[ℹ] deleting EKS cluster "Guest"
[ℹ] either account is not authorized to use Fargate or region eu-west-3 is not supported. Ignoring error
[✔] kubeconfig has been updated
[ℹ] cleaning up LoadBalancer services
[ℹ] 2 sequential tasks: { delete nodegroup "standard-workers", delete cluster control plane "Guest" [async] }
[ℹ] will delete stack "eksctl-Guest-nodegroup-standard-workers"
[ℹ] waiting for stack "eksctl-Guest-nodegroup-standard-workers" to get deleted
[ℹ] will delete stack "eksctl-Guest-cluster"
[✔] all cluster resources were deleted