
Deploying Kubernetes Clusters With Amazon Elastic Kubernetes Service (EKS)

In a previous article, we saw how to create and manage Kubernetes clusters on a cloud provider such as Google Cloud. Now, we'll look at how to do the same on Amazon Web Services (AWS).

In this article, we'll create a Kubernetes cluster with eksctl (the Elastic Kubernetes Service CLI tool, which helps build clusters on Amazon EKS) and deploy a sample Guestbook application to it. Kubernetes is a container orchestration tool used to deploy and manage containerized applications. The major cloud providers (Amazon Web Services, Google Cloud, and Azure) each offer their own managed Kubernetes service.

Deploying Kubernetes Cluster With EKS.

Prerequisites

  • Kubernetes client: kubectl must be installed
  • An Amazon Web Services account
  • The AWS CLI installed and configured with your credentials
  • An account with the required permissions to deploy an EKS cluster
  • Note: you can't deploy an EKS cluster free of charge on the Free Tier. There is a $0.10 per hour charge for the control plane (management) and a $0.05 per hour charge for the NAT Gateway. Worker nodes are free only as long as you still have free EC2 hours left (Free Tier only).

Managed And Non-managed Kubernetes Service:

Managed services are those operated by the cloud provider. A Kubernetes cluster has one master node (which can be replicated across the cluster for high availability) and one or more worker nodes. On a private cloud, you have to configure every part of the cluster yourself: master nodes, worker nodes, and storage. With a managed service, you don't have to worry about provisioning and configuration; you just pass the configuration as flags to a command, and, as we'll see later, the service handles all of these tasks for you.

EKS - Elastic Kubernetes Service

Elastic Kubernetes Service is AWS's managed Kubernetes offering: it lets you deploy clusters without maintaining your own Kubernetes control plane. EKS orchestrates containerized applications and can run anything from simple or complex web applications to batch jobs and Machine Learning or Artificial Intelligence workloads built on a microservice architecture.

Some main operations you’ll be able to perform are:

  1. Creating Kubernetes objects, i.e., Pods, ReplicaSets, Deployments, Services, Jobs, and many more.
  2. Updating containers.
  3. Resizing ReplicaSets.
  4. Deploying applications on a Linux workload or on Fargate (serverless).

Behind the scenes, EKS creates EC2 instances that run your containers and manages them for you. It charges $0.10 per hour as a management fee, and the nodes you use in the cluster are billed according to standard EC2 pricing.
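To put that pricing in perspective, here is a rough back-of-the-envelope estimate for a small cluster. The t2.micro rate below is an assumed on-demand price; check the EC2 pricing page for your region before relying on it:

```shell
# Rough monthly cost: EKS control-plane fee plus node costs.
# The $0.0116/hr t2.micro rate is an assumption; actual EC2 pricing varies.
awk 'BEGIN {
  hours = 730           # average hours in a month
  control_plane = 0.10  # EKS management fee per hour
  node_rate = 0.0116    # assumed t2.micro on-demand rate per hour
  nodes = 6
  printf "Estimated monthly cost: $%.2f\n", hours * (control_plane + nodes * node_rate)
}'
```

Even with the smallest instances, the control-plane fee alone comes to roughly $73 per month, which is why we delete everything at the end of this tutorial.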

Fargate Deployment vs. Linux Workload

There are two ways to deploy your cluster: one is to deploy on EC2 machines, the other is to deploy in a serverless manner using Fargate.

1. Fargate Deployment

This deployment method uses Fargate instead of EC2 virtual machines. Fargate deployment is available only in specific regions.

Deploying Kubernetes Cluster With EKS.

This deployment method eliminates the need to provision and manage servers, letting you focus on building applications. You pay only for the resources your application actually uses. Fargate automatically scales the cluster according to the application's needs, so no unneeded resources keep running in the cluster.

2. Linux Workload Deployment

This method deploys your pods on EC2 virtual machines inside a VPC (Virtual Private Cloud). With this deployment method, you specify the type of virtual machine to use in the cluster (e.g., t2.micro, t3.large), depending on your application's resource needs, and you specify the node count as three values: the minimum number of nodes, the maximum number of nodes, and the desired number of nodes. This deployment is widely available across all regions, and you have more control over resource allocation because you can scale the number of nodes in or out yourself.

Step 1: CLI Configuration

Install or upgrade the AWS Command Line Interface:

pip install awscli --upgrade --user

Then configure it with your access key and secret key:

aws configure

This article assumes that you have the AWS CLI installed and pre-configured with your security credentials.
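To confirm the CLI can actually reach your account, you can query the caller identity. This is a standard AWS CLI command; it needs valid credentials, and the output (account ID, user ID, ARN) will differ per account:

```shell
# Prints the account ID, user ID, and ARN of the configured credentials.
# Fails with an error if the CLI is not configured correctly.
aws sts get-caller-identity
```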

Step 2: Installing The Elastic Kubernetes Service CLI Tool

For Windows

The Chocolatey package manager is used to install the CLI tools. Install Chocolatey first if you don't already have it.

chocolatey install -y eksctl aws-iam-authenticator

This installs the required packages. To verify that the EKS CLI installed successfully, check its version:

eksctl version

A successful response returns the version number of the CLI tool.

Step 3: Create The Cluster And Working Node Using The EKS CLI Tool

1. Using Fargate

eksctl create cluster \
--name cluster_name \
--region us-east-1 \
--fargate

This command creates a Fargate cluster with three flags:

  1. --name: defines the name of the cluster
  2. --region: the region to deploy the Kubernetes cluster in
  3. --fargate: deploys the cluster using the Fargate deployment
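The same cluster can also be described declaratively in a config file and created with eksctl create cluster -f cluster.yaml. A minimal sketch of what that file could look like (the profile name and namespace selectors here are illustrative, not taken from the example above):

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: cluster_name
  region: us-east-1
fargateProfiles:
  - name: fp-default          # illustrative profile name
    selectors:
      - namespace: default     # pods in these namespaces run on Fargate
      - namespace: kube-system
```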

2. Using Linux Workload

eksctl create cluster \
--name cluster_name \
--region us-east-1 \
--nodegroup-name nodegroup_name \
--node-type t2.micro \
--nodes 6 \
--nodes-min 2 \
--nodes-max 8 \
--managed

This creates a cluster with a managed Linux node group, using the following flags:

  1. --name: defines the name of the cluster
  2. --region: the region to deploy the cluster in
  3. --nodegroup-name: the name of the node group of Linux virtual machines
  4. --node-type: the type of virtual machine; we're using the smallest one to keep this tutorial within the free tier
  5. --nodes: the desired number of nodes running in your cluster
  6. --nodes-min: the minimum number of nodes
  7. --nodes-max: the maximum number of nodes
  8. --managed: deploys the node group as a managed node group
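As with Fargate, these flags can be captured in an eksctl config file and applied with eksctl create cluster -f cluster.yaml. A sketch mirroring the flags above:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: cluster_name
  region: us-east-1
managedNodeGroups:
  - name: nodegroup_name
    instanceType: t2.micro
    desiredCapacity: 6   # --nodes
    minSize: 2           # --nodes-min
    maxSize: 8           # --nodes-max
```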

Reminder: It takes 10 to 15 minutes to spin up a cluster.

Note: Since we want to run on the free tier, we use small nodes, capped at a maximum of 8 (the smallest setup that runs our sample Guestbook application). You can use bigger instances, but it will cost more. One more thing to note: if, after deploying the cluster, you see pods stuck in the Pending state, you'll have to raise the node count to 10 (maybe more); our application runs fine on 6.

Output will be:

[ℹ]  eksctl version 0.17.0
[ℹ]  using region eu-west-3
[ℹ]  setting availability zones to [eu-west-3a eu-west-3c eu-west-3b]
[ℹ]  subnets for eu-west-3a - public:192.168.0.0/19 private:192.168.96.0/19
[ℹ]  subnets for eu-west-3c - public:192.168.32.0/19 private:192.168.128.0/19
[ℹ]  subnets for eu-west-3b - public:192.168.64.0/19 private:192.168.160.0/19
[ℹ]  using Kubernetes version 1.15
[ℹ]  creating EKS cluster "Guest" in "eu-west-3" region with managed nodes
[ℹ]  will create 2 separate CloudFormation stacks for cluster itself and the initial managed nodegroup
[ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=eu-west-3 --cluster=Guest'
[ℹ]  CloudWatch logging will not be enabled for cluster "Guest" in "eu-west-3"
[ℹ]  you can enable it with 'eksctl utils update-cluster-logging --region=eu-west-3 --cluster=Guest'
[ℹ]  Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "Guest" in "eu-west-3"
[ℹ]  2 sequential tasks: { create cluster control plane "Guest", create managed nodegroup "standard-workers" }
[ℹ]  building cluster stack "eksctl-Guest-cluster"
[ℹ]  deploying stack "eksctl-Guest-cluster"
[ℹ]  building managed nodegroup stack "eksctl-Guest-nodegroup-standard-workers"
[ℹ]  deploying stack "eksctl-Guest-nodegroup-standard-workers"
[✔]  all EKS cluster resources for "Guest" have been created
[✔]  saved kubeconfig as "C:\\Users\\ZARAK/.kube/config"
[ℹ]  nodegroup "standard-workers" has 6 node(s)
[ℹ]  node "ip-192-168-2-171.eu-west-3.compute.internal" is ready
[ℹ]  node "ip-192-168-23-114.eu-west-3.compute.internal" is ready
[ℹ]  node "ip-192-168-42-156.eu-west-3.compute.internal" is ready
[ℹ]  node "ip-192-168-49-128.eu-west-3.compute.internal" is ready
[ℹ]  node "ip-192-168-70-18.eu-west-3.compute.internal" is ready
[ℹ]  node "ip-192-168-74-253.eu-west-3.compute.internal" is ready
[ℹ]  waiting for at least 6 node(s) to become ready in "standard-workers"
[ℹ]  nodegroup "standard-workers" has 6 node(s)
[ℹ]  node "ip-192-168-2-171.eu-west-3.compute.internal" is ready
[ℹ]  node "ip-192-168-23-114.eu-west-3.compute.internal" is ready
[ℹ]  node "ip-192-168-42-156.eu-west-3.compute.internal" is ready
[ℹ]  node "ip-192-168-49-128.eu-west-3.compute.internal" is ready
[ℹ]  node "ip-192-168-70-18.eu-west-3.compute.internal" is ready
[ℹ]  node "ip-192-168-74-253.eu-west-3.compute.internal" is ready
[ℹ]  kubectl command should work with "C:\\Users\\ZARAK/.kube/config", try 'kubectl get nodes'
[✔]  EKS cluster "Guest" in "eu-west-3" region is ready

Step 4: Test The Cluster

To test your cluster, run the following command:

kubectl get svc

The output will look like:

NAME             TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
svc/kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   1m

Now we’re able to see that we do have a cluster running.



Step 5: Deploying The Sample Guestbook Application To The Cluster

kubectl apply -f https://raw.githubusercontent.com/kubernetes/examples/master/guestbook-go/redis-master-controller.json

The output will be:

replicationcontroller "redis-master" created

This will create a replication controller to deploy a set of pods.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/examples/master/guestbook-go/redis-master-service.json

The output will be:

service "redis-master" created

This will create a service named redis-master

kubectl apply -f https://raw.githubusercontent.com/kubernetes/examples/master/guestbook-go/redis-slave-controller.json

The output will be:

replicationcontroller "redis-slave" created

This will create a replication controller to deploy a set of pods.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/examples/master/guestbook-go/redis-slave-service.json

The output will be:

service "redis-slave" created

This will create a service for the redis-slave replication controller:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/examples/master/guestbook-go/guestbook-controller.json

The output will be:

replicationcontroller "guestbook" created

This will create a replication controller to deploy a set of pods:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/examples/master/guestbook-go/guestbook-service.json

The output will be:

service "guestbook" created

Now check that the pods are running; the output will look like the following:

C:\Users\ZARAK>kubectl get pod

NAME                 READY   STATUS    RESTARTS   AGE
guestbook-8nmfx      1/1     Running   0          30s
guestbook-pztk9      1/1     Running   0          30s
guestbook-wm9j5      1/1     Running   0          30s
redis-master-5hkkk   1/1     Running   0          53s
redis-slave-54lhp    1/1     Running   0          41s
redis-slave-gss5x    1/1     Running   0          41s

All pods should be in the Running state. If any are shown as Pending, you'll have to increase the number of nodes, because there isn't enough CPU and memory to schedule them.
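If pods do stay Pending, one way to grow the cluster without recreating it is eksctl scale nodegroup. A sketch, assuming the cluster and node group names used earlier and a working kubeconfig:

```shell
# Count Pending pods; if any exist, resize the managed node group.
pending=$(kubectl get pods --field-selector=status.phase=Pending --no-headers 2>/dev/null | wc -l)
if [ "$pending" -gt 0 ]; then
  echo "$pending pod(s) pending; scaling node group to 8 nodes"
  eksctl scale nodegroup --cluster cluster_name --name nodegroup_name --nodes 8
fi
```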

kubectl get services -o wide

You'll see multiple services now. Wait about 5 minutes and the guestbook service will show an External IP, something like:
http://a9523dc74a5134c508c90231bd257a95-1289170190.eu-west-3.elb.amazonaws.com:3000/

After getting the external endpoint, wait another 5 to 10 minutes before opening it, so that DNS has time to propagate.

Now, you’ve successfully deployed the sample application:

Deploying Kubernetes Cluster With EKS.

Congrats! You have successfully deployed your first cluster with the Amazon-managed Kubernetes service EKS.

Now, to avoid incurring further charges, we'll delete all the resources and the cluster.

Step 6: Deleting All The Resources

To start deleting:

kubectl delete rc/redis-master rc/redis-slave rc/guestbook svc/redis-master svc/redis-slave svc/guestbook

Output will be:

C:\Users\ZARAK>kubectl delete rc/redis-master rc/redis-slave rc/guestbook svc/redis-master svc/redis-slave svc/guestbook

replicationcontroller "redis-master" deleted
replicationcontroller "redis-slave" deleted
replicationcontroller "guestbook" deleted
service "redis-master" deleted
service "redis-slave" deleted
service "guestbook" deleted

This single command deletes the replication controllers and services we created in the cluster.

Step 7: Deleting The Cluster

First, make sure there's no load balancer associated with the cluster.

kubectl get svc --all-namespaces

This command shows you all the services in all of the cluster namespaces.

Output will look like:

NAMESPACE     NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE
default       kubernetes   ClusterIP   10.100.0.1    <none>        443/TCP         19m
kube-system   kube-dns     ClusterIP   10.100.0.10   <none>        53/UDP,53/TCP   18m

Look for any service that has an External IP; that one has an associated load balancer. Since we don't have any service with an External IP, we can skip ahead.

If you do have one, delete the service first:

kubectl delete svc service_name

This will delete the service.

eksctl delete cluster --name cluster_name

This eksctl command deletes the cluster; the --name flag specifies the name of the cluster to delete.

Output will be:

[ℹ]  eksctl version 0.17.0
[ℹ]  using region eu-west-3
[ℹ]  deleting EKS cluster "Guest"
[ℹ]  either account is not authorized to use Fargate or region eu-west-3 is not supported. Ignoring error
[✔]  kubeconfig has been updated
[ℹ]  cleaning up LoadBalancer services
[ℹ]  2 sequential tasks: { delete nodegroup "standard-workers", delete cluster control plane "Guest" [async] }
[ℹ]  will delete stack "eksctl-Guest-nodegroup-standard-workers"
[ℹ]  waiting for stack "eksctl-Guest-nodegroup-standard-workers" to get deleted
[ℹ]  will delete stack "eksctl-Guest-cluster"
[✔]  all cluster resources were deleted

TL;DR

  • EKS is a managed Kubernetes service offered by Amazon Web Services.
  • There are two ways to deploy Kubernetes clusters, by using Fargate or Linux workload.
  • In a Fargate deployment, you don't have to provision or manage nodes; Fargate takes care of that and scales the cluster up or down according to the application's needs.
  • In a Linux workload deployment, you specify the size of the cluster and define the min/max number of nodes according to the application's requirements. You'll have to manage the node group yourself to scale in or out later.
  • Install and configure eksctl to deploy the cluster from the CLI.
  • Deploy the Guestbook application's pods and their required services to the cluster.
  • Delete the resources created during the lab (to avoid being charged).