Demystifying Docker, Kubernetes For Your Organization

Introduction

If you are in the business of technology, there’s a good chance you’ve heard of Docker and Kubernetes. If you’re currently thinking about making the switch to these technologies in your organization but aren’t sure if this is the right choice, this article is designed to walk you through some basics of Docker and Kubernetes and provide a glimpse of some of the benefits of containerizing your services and orchestrating them in Production.

For the purposes of this article, the words application, service, and microservice all mean the same thing: your custom deployable artifact.

What Is Docker?

Docker this and Docker that, but what is it really? To answer that, we’ll have to jump one level down and talk about containers. So, what are containers? A container is a uniform deployable piece of software that contains everything your application will need to run. That means all the dependencies (e.g., software packages, symlinks, ports, additional files, etc.) are all in one deliverable package, called a container image.

A great analogy for this is shipping containers. You can load a shipping container with anything you want and send it anywhere in the world, because:

  1. Every container adheres to a standard format, in this case, the dimensions of the shipping container.
  2. Anyone whose equipment handles that standard format can receive the container, regardless of what's inside.

Software containers work the same way: you can stuff in everything you need to run your application, and you can ship that container anywhere, as long as the destination has something that can run it.

So what is Docker? Docker is simply the name brand most closely associated with containers. It’s the software that allows anyone to build, ship, and run their own containers. From this point on, we’re going to use Docker and containers as synonymous terms.

Why Are Containers Important?

From an operational perspective, containers lower the complexity of running your applications in the following areas (just to highlight a few):

  1. Troubleshooting with containers helps you quickly eliminate whatever is NOT the root cause: is it the container image, or the application? Aside from edge cases, we should be able to take any container, run it anywhere, and see the same results. This is especially important if you have multiple environments (Dev, Stage, Prod, etc.) and have run into issues where your application works in one environment but not in another. It’s much easier to ensure your servers are all running the same version of Docker than to ensure all the servers in your cluster are running identically provisioned operating systems with the exact same versions of all your dependencies.
  2. CI/CD solutions become easier to manage because the only requirement you’ll have is whether your solution supports Docker or not. You no longer have to worry about complex build scripts, long markup files, or if your CI/CD solution can support the latest (or legacy) version of your chosen programming language. You also get the benefit of testing the same container throughout your release cycle. You don’t have to build a version for each environment.
  3. Multi-app-tenancy (as we like to call it), or running as many things as you can on one box. Docker allows for application isolation. For example, you can run containers with applications on Java 6, Java 7, and Java 8 all on the same machine without having to worry about dependency conflicts or custom scripts. You can also run multiple instances of the same container on the same machine simultaneously. By contrast, try running two copies of the same service directly on one machine; you quickly run into port and file-path conflicts.
  4. Less computing resource overhead. In some cases, you might need to launch a full VM for one application. You ask yourself, do I really need an OS, on top of an OS to run this application or do I just need the application? You can definitely save yourself some resources by eliminating the Guest OS.

Docker Isn’t Going To Solve All Your Problems, But It Can Help

Docker works well in environments that are embracing a shift in software engineering practices and principles. Have you ever heard of the term DevOps? It comes from the idea that software engineers (devs) and systems engineers (ops), working together, will build a better system overall. By leveraging Docker, software engineers can build their own software, and systems engineers can easily take that container and deploy it to its designated target. This is a great first step in bridging the gap between teams.

Docker can also help kickstart your microservice initiatives. A paradigm of Docker is to do one thing, and do that one thing well. If you’re interested in breaking up a monolithic system, Docker is a great way to start breaking up your applications.

On the technical side, one limitation of Docker on its own is that containers can’t communicate with each other if they live on different servers. For local development, this isn’t a problem: you can run your entire stack as containers (e.g., your application, database, cache, etc.) without having to install and manage those apps on your local machine. You simply pull down the containers you need from a public Docker repository and run them. Because each container should do one thing well, don’t try to create one container that runs your entire stack; you’ll find that very challenging. Instead, break your services up into individual containers and run them side by side. Docker’s internal networking handles communication between containers on the same host.
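As a concrete local-development sketch of several containers sharing one host's Docker network, here is a Docker Compose file. The service names and the Redis cache are illustrative assumptions, not part of this article's app; only the port and SERVER_PORT variable come from the walkthrough below.

```yaml
# docker-compose.yml sketch: both services join the same default network,
# so `app` can reach the cache at the hostname `cache`.
version: "3.8"
services:
  app:
    build: .                 # builds the Dockerfile in this directory
    ports:
      - "5000:5000"          # expose the app to your local machine
    environment:
      SERVER_PORT: "5000"
  cache:
    image: redis:6           # illustrative dependency, not used by the demo app
```

Running `docker compose up` starts both containers together, which is often easier than wiring `docker run` commands by hand.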

In production, you would need something to bridge all your containers together, regardless of which server they reside on. This is where Kubernetes comes in.

Kubernetes (Minikube)

Kubernetes is a container orchestrator. That means it’s a complete system that allows you to deploy, run, and manage your container environment. By deploying your containers to Kubernetes, you’ll get:

  1. Service discovery and load balancing: Kubernetes can auto-detect new containers and automatically join them to a load balancing pool. Because Kubernetes has its own networking layer, it can provide container-to-container communication across different machines.
  2. Storage orchestration: You can allocate persistent storage to containers.
  3. Automated rollouts and rollbacks: You declare the state of your apps in a YAML file and Kubernetes does everything it can to adhere to this desired state.
  4. Automatic bin packing: By declaring the amount of resources your container needs, Kubernetes will figure out which underlying host has those resources available, and deploy it to that node.
  5. Self-healing: Kubernetes can try to restart or replace containers that don’t comply with your defined health checks.
  6. Secret and configuration management: Kubernetes lets you maintain secrets, such as passwords, environment variables, and SSH keys, outside of the container. Think of it as a pool of key-value pairs, and you tell each container which secret to use.
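The "desired state in a YAML file" from item 3 is worth making concrete. Here is a minimal sketch of a Deployment manifest for the hello-world app this article builds below; the labels and structure are standard Kubernetes fields, but the specific names are our illustration.

```yaml
# A minimal Deployment manifest (a sketch). Kubernetes continuously
# works to keep this many replicas of the pod running.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: org/hello-world
          ports:
            - containerPort: 5000
```

If a pod crashes or a node dies, Kubernetes notices the running count no longer matches `replicas` and starts a replacement, which is the self-healing behavior from item 5.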

Now that you have an idea of what Docker and Kubernetes are, and how they can be helpful, let’s dive into a use case that will highlight the benefits mentioned above.

In our scenario, we’re going to build a simple webserver container, deploy it to minikube, and then scale up.

Prerequisites

You’ll need Docker, minikube, and kubectl installed locally. If you want to skip building your own image and head straight to deploying and scaling a public one, jump to Step 6.

Step 1: Create Your Application

Create a file called app.js

var express = require('express')
var app = express()
app.set('port', (process.env.SERVER_PORT || 5000))
app.use(express.static(__dirname + '/app'))

app.get('/', function(request, response) {
  response.send('Hello World!')
})

app.listen(app.get('port'), function() {
  console.log("Node app is running at localhost:" + app.get('port'))
})

In our sample application, we’re running an Express server that listens on the port given by the environment variable SERVER_PORT (or port 5000 if it isn’t defined) and returns a web page that says “Hello World!” It also serves static files from the /app directory and writes a line to the console so you can see that the server has started.
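The environment-variable fallback in app.set is a pattern you’ll reuse constantly when containerizing, since containers are configured through their environment. Here is a sketch of the same logic as a standalone function; resolvePort is our name for illustration and is not part of the app above.

```javascript
// Sketch of the port-selection logic from app.js: use SERVER_PORT when it
// parses to a number, otherwise fall back to the default port 5000.
function resolvePort(env) {
  const fromEnv = parseInt(env.SERVER_PORT, 10);
  return Number.isNaN(fromEnv) ? 5000 : fromEnv;
}

console.log(resolvePort({}));                      // no SERVER_PORT set
console.log(resolvePort({ SERVER_PORT: "8080" })); // overridden by environment
```

Passing the environment in as an argument (instead of reading process.env directly) also makes the behavior easy to test.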

Because this is a node app, we need package.json to install some dependencies.

In the same directory as app.js, create a file called package.json

{
    "name": "hello-world",
    "version": "0.0.1",
    "description": "An app for demos",
    "main": "app.js",
    "scripts": {
      "start": "node app.js"
    },
    "dependencies": {
      "express": "^4.17.1"
    }
}

Step 2: Create A Docker Image

In the same directory as app.js, create a file called Dockerfile

FROM node:current-slim 
WORKDIR /app        
COPY . /app
RUN npm install
CMD npm start  
EXPOSE 5000
  • Line 1: A container to start from. This is the official node container from the node.js maintainers on Docker Hub. https://hub.docker.com/_/node.
  • Line 2: This is the directory you want to be in when the container builds. It’s equivalent to the command 'cd /app' inside of the container.
  • Line 3: This tells Docker to copy all the files from your current directory into the container's /app directory. It’s equivalent to cp -rf . /app
  • Line 4: Since this is a node app, we need to run npm install to build.
  • Line 5: This is the executable command that should start your app.
  • Line 6: This is the port your container will listen on at runtime.
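One caveat with Line 3: COPY . /app copies everything in the build context, including a local node_modules directory if you’ve already run npm install on your machine. A common companion file, our suggestion rather than part of the original walkthrough, is a .dockerignore next to the Dockerfile:

```
node_modules
npm-debug.log
.git
```

This keeps the build context small and ensures dependencies are always installed fresh inside the container by Line 4.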

Step 3: Build The Image

Because we’re running minikube, we need to build against minikube’s Docker environment rather than your machine’s.

eval $(minikube docker-env)

Now when you build your container, that container will be available to minikube.

This will build your container.

docker build -t org/hello-world .
  • docker: The executable that comes when you install Docker.
  • build: The command-line parameter that tells Docker to build an image.
  • -t org/hello-world: Here, you’re naming (tagging) your image. The full format is org_name/container_name:tag; in our case, org is the org name and hello-world is the image name. You can add a tag by appending a colon and whatever you want the tag to be. When you don’t specify a tag, it defaults to latest. You can verify this by running docker images.

In our Dockerfile, we had 6 lines. In the build output, they’re represented as 6 steps.

Step 4: Run Your Container Locally

docker run -p 5000:5000 org/hello-world
  • docker run: This tells Docker to run an image.
  • -p 5000:5000: Remember how we exposed port 5000 in the Dockerfile? We need to map local_port:container_port, so in this case, we are telling Docker to map local TCP port 5000 to the container’s TCP port 5000.

Now check your browser at http://localhost:5000.

You should see “Hello World!” in your browser. If you do, you know that you now have a fully functioning app.

Check Point

You’ve just created a node.js application that has been containerized and made available to your local minikube instance. You’ve also run your container locally to ensure it’s working.

In contrast, if you wanted to run this application on a new server, you would most likely need to install Node.js, copy over your code, run npm install, and then start the app in the background. Here, you install everything once and then just ship the container to any new server that has Docker installed. For one server, leveraging Docker might not make sense, but what happens when you need to run multiple instances of your application?

Now that you have built a working Docker image, you’re ready to deploy it to your minikube instance.

Step 5: Deploy Your Container In Minikube

This command will deploy your local container to your minikube instance:

kubectl run hello-world --image=org/hello-world --port=5000 --image-pull-policy=Never
  • kubectl: This is the Kubernetes command-line tool for administering a cluster. Since minikube is a compact local version of Kubernetes, the tool also works here. This should come packaged with your minikube installation. If not, you can find it available here.
  • run: This is the subcommand telling kubectl that you’re going to run something.
  • hello-world: This is the name of your application.
  • --image=org/hello-world: This is the image you just built.
  • --port=5000: This is the service port that we determined earlier.
  • --image-pull-policy=Never: This last parameter is what ties it all together. By default, Kubernetes is going to try and pull your image from a public repository. Out of the box, that public repository will be the Docker Hub. In this case, we want to deploy our local Docker image so we need to add this flag to tell minikube to not pull this image from the outside world.
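As a rough equivalent, the kubectl run flags above map onto a declarative manifest. This is our illustration of the mapping, not a file the article asks you to create; the field names are standard Kubernetes.

```yaml
# A Pod manifest roughly equivalent to the kubectl run command above.
apiVersion: v1
kind: Pod
metadata:
  name: hello-world
spec:
  containers:
    - name: hello-world
      image: org/hello-world
      imagePullPolicy: Never   # same as --image-pull-policy=Never
      ports:
        - containerPort: 5000  # same as --port=5000
```

In production you would typically wrap a pod template like this in a Deployment (as sketched earlier in this article) rather than deploying bare pods.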

To verify, run kubectl get pods


In Kubernetes, a pod is the smallest deployable unit; it wraps one or more containers (in our case, just one), so you can use the kubectl command-line tool to check your deployment.

If you’d like to try using a public Docker image, run the following command before moving forward:

kubectl delete deployment hello-world
  • delete: The subcommand telling kubectl what action to take.
  • deployment: The type of resource you want to delete.
  • hello-world: The name of your deployment.

Step 6: Deploy A Publicly Accessible Container In Minikube


If you don’t want to bother with building your own container, you can deploy a publicly accessible one instead. We’ve published a container just like the one you created above, so all you have to do is run this command:

kubectl run hello-world --image=airwavetechio/hello-world --port=5000

As you can see, the image name is different and the --image-pull-policy=Never parameter has been omitted. Now we’re telling Kubernetes (minikube) to pull this public image from Docker Hub. The image can be found here.

This would be very similar to how you would typically deploy your containers in production. After your container is built, you would push it up to an accessible location and then have Kubernetes pull it down when it needs to deploy it.


Check Point

So far, we’ve built our own container, deployed it to our local minikube instance, and then deployed a publicly accessible container to the same minikube instance.

At a very high level, when compared to a non-container environment, our process hasn’t really changed much.

Software gets developed > software is built > software is deployed > software is run

The benefits start to kick in when you have to add another instance of your application.

Step 7: Scale-Up

In a traditional environment, if you wanted to add a second instance of your application, you would have to find a server, install all the dependencies, deploy the application, test it, and then add it to the load balancer.

With Kubernetes, scaling up is as easy as issuing one command:

kubectl scale deployment.v1.apps/hello-world --replicas=2

By now, you know what kubectl is, but let’s see what the other parameters do.

  • scale: Here, you’re informing Kubernetes that you want to scale something, either up or down.
  • deployment.v1.apps/hello-world: This is specifically what you want to scale. It’s our hello-world deployment.
  • --replicas=2: This is how many instances of your app you want in total.
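The imperative command above has a declarative equivalent: set the desired count in your Deployment’s spec and apply it. This fragment is our illustration of where that field lives, not a file from the walkthrough.

```yaml
# Fragment of a Deployment spec. Kubernetes adds or removes pods
# until the running count matches `replicas`.
spec:
  replicas: 2
```

Keeping the count in a manifest under version control makes scaling decisions reviewable and repeatable, whereas kubectl scale is handy for quick one-off changes.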

When you run kubectl get pods again, you’ll now see two instances because you’ve just scaled up! This works because the containers are exactly the same. We don’t have to worry about any additional dependencies because everything required for the application to run is self-contained.

TL;DR

Every situation and environment is different, but the following is a set of questions to ask yourself to help figure out if migrating is for you and your organization:

  • Do you have a culture that supports collaboration, specifically DevOps, or would you like to have one?
  • Do you have infrastructure and services that require lots of hands-on management?
  • Do you typically have to scale up quickly?
  • Are your CI/CD pipelines overly complex?
  • Can you afford to dedicate a healthy portion of your teams’ bandwidth to invest in a migration?
  • Do you want to break your monolithic application into smaller, easier-to-manage pieces?
  • Are you willing to dive in 100%, accepting all the benefits and drawbacks?
  • Are you okay with migrating, or potentially giving up your current toolset for new tools to support your new environment?

If you answered “yes” to all the questions, then migrating probably makes a lot of sense. If you answered “no” to at least one of the questions, now might not be the right time to migrate to a containerized environment.

Kubernetes is the culmination of years of lessons learned from running systems at scale. It solves a lot of challenges many of us have faced in production, but it also introduces new ones. Because Kubernetes is a complex ecosystem consisting of many moving parts, it could be difficult to troubleshoot without some deep, resident knowledge.

If you do decide to move forward, start small. Leverage the power of Docker on your desktop first. Get familiar with the benefits, and apply them to a grander vision.


Tony Chong

May 24, 2020