
Nodejs App From Docker To Kubernetes Cluster


In this article, we’re developing a starter NodeJS server and deploying it to a Kubernetes cluster: we begin with a very basic server, build a Docker image from it, and then deploy that image to the Kubernetes cluster.

Prerequisites

  • NodeJS must be installed locally so you can develop the server.
  • Docker must be installed and running.
  • A Kubernetes cluster must be running; if you’re working on your laptop or PC, Minikube must be started and running.
  • To check that the Docker engine is working, run docker ps (if it returns an error, Docker isn’t running).
  • Run kubectl version (if it shows both the client and the server version, you’re good to go). The snippet after this list collects these checks in one place.
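
As a quick sanity check before you start, you might run something like the following (the output will differ on your machine):

# The Docker engine should list running containers without an error
docker ps

# Both the client and the server version should be printed
kubectl version

# Only relevant if you’re on Minikube
minikube status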

Step 1: Make A Separate Directory And Initialize The Node Application

First, we’ll initialize the project with npm (Node Package Manager)

npm init
This utility will walk you through creating a package.json file.
It only covers the most common items and tries to guess sensible defaults.

See `npm help json` for definitive documentation on these fields
and exactly what they do.

Use `npm install <pkg>` afterwards to install a package and
save it as a dependency in the package.json file.

Press ^C at any time to quit.
package name: (nodongo)
version: (1.0.0)
description: Basic NodeJS with Docker and Kubernetes
entry point: (index.js)
test command:
git repository:
keywords:
author: Muhammad zarak
license: (ISC)
About to write to E:\Magalix\nodongo\package.json:

{
  "name": "nodongo",
  "version": "1.0.0",
  "description": "Basic NodeJS with docker and kubernetes",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "Muhammad zarak",
  "license": "ISC"
}


Is this OK? (yes) yes

After running npm init, npm asks for some basic configuration, i.e., the project name (ours is nodongo), the version, and the entry point, which is index.js (note: whenever the server starts, it looks for index.js to execute).

From here, you’ll have a file named package.json, which holds the relevant information about the project and its dependencies.
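
Optionally, you can also add a start script to package.json so the server can later be launched with npm start; this is just a convenience, and nothing in the rest of the article depends on it:

"scripts": {
  "start": "node index.js",
  "test": "echo \"Error: no test specified\" && exit 1"
}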

Step 2: Installing Express

Next, we’ll install Express through npm (Node Package Manager). The Express framework is used to build web applications and APIs:

npm install express --save

The above command installs the Express dependency in your project. The --save flag saves this dependency in package.json (npm 5 and later do this by default).
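
After the install, package.json should contain a dependencies section similar to the one below (the exact version will depend on when you run the install):

"dependencies": {
  "express": "^4.17.1"
}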

Step 3: Make index.js File And Write Some Code

First, create a file named index.js in the root folder. Then we can write some code to test the application on the Kubernetes cluster:

// Import Express and create the application object
const express = require("express");
const app = express();

// Start the server on port 3000
app.listen(3000, function () {
  console.log("listening on 3000");
});

// Each route simply returns a text response
app.get("/", (req, res) => {
  res.send("Users Shown");
});

app.get("/delete", (req, res) => {
  res.send("Delete User");
});

app.get("/update", (req, res) => {
  res.send("Update User");
});

app.get("/insert", (req, res) => {
  res.send("Insert User");
});

On the first line, we import the Express module with require; calling the returned express function gives us an app object that’s used to configure our application.

Then app.listen starts the server on a specific port, port 3000 in our case, and runs a callback once it’s listening. After that, we configure the /update, /delete, and /insert routes; they don’t perform actual database CRUD operations, but they give us routes to check. The res.send() function returns the response from the server.

You can now check the server by running the following command and browsing to localhost:3000/

node index.js
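
You can also hit each route from a second terminal, for example with curl; each request returns the text set by the corresponding handler above:

curl localhost:3000/
curl localhost:3000/insert
curl localhost:3000/update
curl localhost:3000/delete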


Step 4: Dockerizing The Node Server

Here comes the fun part - we have the code and the server is ready to deploy. But first, we have to build the image, and for that, we’ll write a Dockerfile.

# Base image: the official Node.js image, version 13
FROM node:13
# Working directory inside the image
WORKDIR /app
# Copy package.json and install the dependencies first
COPY package.json /app
RUN npm install
# Copy the rest of the project into /app
COPY . /app
# Start the server
CMD node index.js
# Document the port the server listens on
EXPOSE 3000

Images are built in layers, and each Dockerfile instruction constructs one of these layers for us. Here, we’ll guide you through each step:

  1. We must start with the FROM keyword, which tells Docker which image to use as the base image. Here, we’re using Node version 13.
  2. WORKDIR tells Docker the working directory of our image (in our case it is /app). Subsequent RUN and CMD commands execute in this folder.
  3. COPY copies the package.json file to /app.
  4. RUN executes a command in the working directory defined above. The npm install command installs the dependencies listed in the package.json we’ve just copied to the /app directory.
  5. Now, we copy the files in the root directory to the /app directory where we’re running all the commands. We do this so that index.js ends up in /app. Although we could copy just that one file with COPY index.js /app, we’re purposely doing it in a generic way because we want all of our project files copied from the root to the app folder (see the .dockerignore sketch after this list).
  6. CMD stands for command, and here we run node index.js, as we saw at the beginning of this article, to start the NodeJS server. We have index.js in the app directory from the last step, and we’re starting our server from that file.
  7. EXPOSE 3000 documents that a container using this image listens on port 3000.
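
Because COPY . /app copies everything in the project root, it would also copy a locally installed node_modules folder over the one created by npm install inside the image. A common way to avoid that is a .dockerignore file next to the Dockerfile; a minimal sketch could look like this:

node_modules
npm-debug.log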

Next, we’ll build our image from the Dockerfile.

docker build -t node-server .

The docker build command is used to create an image from the instructions in the Dockerfile. The -t flag tags the image with our node-server name. Note the full stop at the very end, preceded by a space: it defines the build context, meaning we’re building from the current directory and its Dockerfile.
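
Once the build finishes, you can confirm the image exists (the image ID, size, and creation time will differ on your machine):

docker images node-server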

Step 5: Create And Run The Container

Now, we’ll run the container to ensure it works as intended.

docker run -d --name nodongo -p 3000:3000 node-server

Here we run a container using our NodeJS image. The docker run command starts the container; the -d flag runs it in detached mode. --name is optional - you can give your container any name. The -p flag maps ports: the first number is the host port and the second is the container port. Finally, we specify which image to run the container from, our node-server image. You can curl 127.0.0.1:3000 or open that address in a browser to test that it’s running.
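
If nothing responds, two standard Docker commands are usually enough to see what’s going on:

# Confirm the container is up and check its port mapping
docker ps --filter name=nodongo

# The server’s console output should show "listening on 3000"
docker logs nodongo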

Step 6: Upload The Image To The Docker Hub Registry

The image registry that we’re using is Docker Hub. First, create an account, then create a repository with any name; we’ve named ours nodejs-starter. Now, let’s see the steps:

docker tag node-server zarakmughal/nodejs-starter:1.0

We’ve tagged our existing node-server image as zarakmughal/nodejs-starter:1.0 (substitute your own Docker Hub username for zarakmughal) so we can push it to Docker Hub.

docker push zarakmughal/nodejs-starter:1.0

Now, we push the image to the registry with docker push. Tagging it with the 1.0 version isn’t mandatory, but it’s highly recommended so you can roll back to a previous version instead of overwriting the latest build with every new one.
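
Note that docker push only works if your terminal session is authenticated against Docker Hub; if the push is rejected with an access error, log in first and retry:

# You’ll be prompted for your Docker Hub credentials
docker login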

Step 7: Start The Kubernetes Cluster

Whether you’re using Amazon EKS, Google Cloud GKE, or a standalone machine, just make sure your cluster is running.

We’re doing this lab on Minikube (used to run Kubernetes locally):

minikube start

This command spins up a local single-node cluster, with the one node acting as both the master (control plane) and a worker.
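
You can verify that the node is ready before moving on:

# The single Minikube node should report a Ready status
kubectl get nodes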

Step 8: Define YAML File To Create A Deployment In Kubernetes Cluster

YAML (YAML Ain’t Markup Language) is a human-readable data serialization language. It’s used in Kubernetes to create objects in a declarative way.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodejs-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nodejs
  template:
    metadata:
      labels:
        app: nodejs
    spec:
      containers:
      - name: nodongo
        image: zarakmughal/nodejs-starter:1.0
        ports:
        - containerPort: 3000

Breakdown Of Our YAML File

  1. apiVersion describes which API version you’re using to create this object; for a Deployment we use apps/v1.
  2. kind is the kind of object you’re creating. In our case, it’s a Deployment.
  3. metadata is used to organize the object.
    1. The name of our Deployment is nodejs-deployment.
  4. spec defines the specification of the object.
    1. replicas is how many pods you want this Deployment to run in the cluster. In our case, we want two pods running containers from our image.
    2. selector tells the Deployment which pods it manages; its matchLabels must match the labels given to the pod template below.
    3. template defines how to spin up a new pod and the specification of that pod.
      1. metadata of the pods created by this Deployment
        1. We have one label: the key is app and the value is nodejs; this label is applied to the freshly created pods.
      2. spec defines how the containers will be created.
        1. The containers spec:
          1. The name of the container
          2. The image used by the container
          3. The port to use: we’re using containerPort 3000

Step 9: Create Deployment In Kubernetes Cluster

Now that we’ve created the YAML file, we can go ahead and create a Deployment from it.

kubectl create -f deploy.yaml

kubectl is the Kubernetes client used to create objects. With kubectl create, you can create any object; -f indicates we’re passing a file, and deploy.yaml (the manifest above, saved under that name) is the file used to create the object. You can check the Deployment with the following command:

kubectl get deploy,po


NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nodejs-deployment   2/2     2            2           5m44s

NAME                                    READY   STATUS    RESTARTS   AGE
pod/nodejs-deployment-746bdb6c4-h4dwb   1/1     Running   0          5m44s
pod/nodejs-deployment-746bdb6c4-kqjmj   1/1     Running   0          5m44s

From the output, we can see that our Deployment and both pods are working fine.
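
If the pods don’t come up, two commands that are usually enough to diagnose the problem are kubectl rollout status and kubectl describe, for example:

# Wait for the Deployment to finish rolling out
kubectl rollout status deployment/nodejs-deployment

# Inspect the events of one of the pods (pod names will differ in your cluster)
kubectl describe pod nodejs-deployment-746bdb6c4-h4dwb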

Step 10: Expose The Deployment To The Internet

Next, we’ll go live by creating a Kubernetes Service object:

kubectl expose deployment nodejs-deployment --type="LoadBalancer"

This command creates a LoadBalancer Service that exposes the Deployment to the internet.

kubectl expose is used to expose the Deployment named nodejs-deployment as a Service of type LoadBalancer.
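
For reference, the Service that kubectl expose generates here is roughly equivalent to the following manifest (a sketch, not the exact object the command produces; the name, selector, and port come from the Deployment):

apiVersion: v1
kind: Service
metadata:
  name: nodejs-deployment
spec:
  type: LoadBalancer
  selector:
    app: nodejs
  ports:
  - port: 3000
    targetPort: 3000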

kubectl get svc

Now, you’ll see the service that has been created:

NAME                TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
kubernetes          ClusterIP      10.96.0.1       <none>          443/TCP          40h
nodejs-deployment   LoadBalancer   10.100.153.82   192.168.79.70   3000:30250/TCP   22h

Note: at this point you won’t yet have an EXTERNAL-IP on Minikube; the next step shows how to get an external IP with Minikube. Cloud platforms do provide a load balancer, so there you should get an external IP right away.

Here, we have two services. The second one is the Service we’ve just created; it has an external IP and port. Visit <External_IP>:<PORT> to access your service. You can visit the different routes, such as /insert, /update, and /delete, to see each one working.
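
Using the addresses from the example output above (yours will differ), the check might look like this:

curl http://192.168.79.70:3000/
curl http://192.168.79.70:3000/insert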

Step 11: Using MetalLB In Your Minikube Environment

You can skip this step if you’re using a cloud provider for your cluster. If you’re using Minikube, you’ll notice that you don’t get an external IP, because the LoadBalancer type has nothing to fulfil it on a plain Minikube cluster. Here’s the workaround: follow these commands to install MetalLB and you’ll start getting an external IP:

kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.9.3/manifests/namespace.yaml

kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.9.3/manifests/metallb.yaml # On the first install only

kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"

After that, run minikube ip:

minikube ip

Here, you’ll get your minikube IP - ours is 192.168.79.68. After this, we’ll create a config map for the address pool.

apiVersion: v1
kind: ConfigMap
metadata:
    namespace: metallb-system
    name: config
data:
    config: |
        address-pools:
            - name: default
              protocol: layer2
              addresses:
              - 192.168.79.61-192.168.79.71

In this configuration, MetalLB is instructed to hand out addresses from 192.168.79.61 to 192.168.79.71. Next, we create this ConfigMap (saved as configmap.yaml) in the metallb-system namespace:

kubectl create -f configmap.yaml
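
Before recreating the Service, you can confirm that MetalLB itself is running; its controller and speaker pods live in the metallb-system namespace:

kubectl get pods -n metallb-system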

Next, we have to delete the Service and create it again:

kubectl delete svc nodejs-deployment
kubectl expose deployment nodejs-deployment --type="LoadBalancer"

Now that that’s done, you’ll get an external IP.

kubectl get svc
NAME                TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)          AGE
kubernetes          ClusterIP      10.96.0.1      <none>          443/TCP          9d
nodejs-deployment   LoadBalancer   10.109.88.42   192.168.79.68   3000:30479/TCP   7d18h

Note that this workaround is only needed on Minikube; on cloud providers’ Kubernetes clusters, the LoadBalancer Service type is available out of the box.

TL;DR

  • NodeJS is a JavaScript runtime used to develop APIs and web applications
  • Docker delivers software in packages called containers; we leveraged this by building an image for our NodeJS app with Docker
  • We used Kubernetes as our container orchestration tool to deploy and run these containers in a Minikube environment
  • Then we exposed the Deployment to the internet through a Service
  • If you’re using Minikube, you can still get an external IP by running MetalLB in the cluster

 
