
Implementing A Reverse Proxy Server In Kubernetes Using The Sidecar

What Is A Sidecar?

A sidecar is a seat attached to the side of a bicycle or motorcycle so that the vehicle runs on three wheels. The sidecar is often used to carry a passenger or equipment, and it has many uses in sports as well as the military. Wherever the sidecar is used, the concept is the same: an object that is attached to another and, thus, becomes part of it. The auxiliary object’s main purpose is to aid the main one.

What Is The Kubernetes Sidecar Pattern?

From the above definition, we understand what a sidecar is and what it is used for. In Kubernetes, the sidecar pattern makes use of one of the most powerful components of Kubernetes: the Pod. Among the important features of a pod is that containers hosted in the same pod share the same network namespace and can share storage through volumes. All containers in the same pod can connect to each other through the localhost address, the same way processes on one machine communicate with each other over HTTP. Therefore, we can create a pod that hosts the main application in one container and add a sidecar container to the same pod that provides an extra layer of functionality. Like the sidecar seat, containers in the same pod are treated as one unit: they’re created, destroyed, and moved from one node to another together.

So, what are the possible uses of the sidecar pattern? There are many. For example, we can add a sidecar container that runs Nginx to our application pod. With Nginx at our disposal, we can add an HTTPS layer to an application that natively supports HTTP only. Another use case is providing a frontend reverse proxy for uWSGI applications (for example, Python Flask), which is the topic of this article.
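To make the pattern concrete, here is a minimal sketch of a pod hosting a main container plus a sidecar. The names and images here are placeholders for illustration, not part of this lab:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar   # hypothetical pod name
spec:
  containers:
  - name: app              # the main application container
    image: myorg/myapp     # placeholder image
  - name: sidecar          # auxiliary container, e.g. a proxy or log shipper
    image: nginx           # it can reach the app container via localhost
```

Both containers are scheduled, started, and stopped together, and each can reach the other over localhost.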

The Architecture

We have a very simple API built using the Python Flask microframework and the Gunicorn server. The application is hosted in a container, but we need a web server in front of it. The web server receives the HTTP request, routes it internally to the application pod, and relays the response back to the client. You might be asking what the use of a web server is here, and why not just use Flask on its own? There’s an excellent Reddit discussion that will probably address your questions. For the moment, here is a depiction of what our architecture looks like:

[Image: the architecture — an Nginx web server sidecar in front of the Flask application container in the same pod]

Now, let’s start coding our application.

The Code

As mentioned, our application is a simple Python Flask API. The application itself consists of two files: the one containing the actual code that runs when the application launches (mainapp.py), and the other containing the instructions required by Gunicorn to run our app (wsgi.py). The mainapp.py contents are:

from flask import Flask
from flask import jsonify

app = Flask(__name__)

@app.route("/")
def hello():
   response = {"message":"Hello World!"}
   return jsonify(response)

if __name__ == "__main__":
   app.run(host='0.0.0.0')

And the wsgi.py file looks like:

from mainapp import app

if __name__ == "__main__":
   app.run()

Next comes the Dockerfile:

FROM python:3
RUN pip install gunicorn flask
ADD mainapp.py wsgi.py /app/
EXPOSE 5000
WORKDIR /app
ENTRYPOINT [ "gunicorn","--bind","0.0.0.0:5000","wsgi:app" ]

You may want to test this locally on your machine first. This can be done by building the image:

docker build -t magalixcorp/flasksidecar .

And then run the container from the image:

docker run -d -p 5000:5000 magalixcorp/flasksidecar

Then, you can confirm that the API is working by issuing an HTTP request (using any client). In this lab, we’re using curl:

$ curl localhost:5000
{"message":"Hello World!"}

The Kubernetes Deployment And Service

Before applying our deployment, we need to push our Docker image so that it’s available for Kubernetes:

docker push magalixcorp/flasksidecar

Now, we can combine our Deployment and Service in one file (let’s call it deploy.yml):

apiVersion: v1
kind: Service
metadata:
  name: hello-svc
spec:
  selector:
    role: app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 5000
      nodePort: 32001
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
  labels:
    role: app
spec:
  replicas: 2
  selector:
    matchLabels:
      role: app
  template:
    metadata:
      labels:
        role: app
    spec:
      containers:
      - name: app
        image: "magalixcorp/flasksidecar"

So, now our service is listening for connections on port 32001 (on any of our nodes) and relays each connection to one of the backend pods (we have two replicas) on port 5000. The Deployment creates two pods, each hosting a container that uses the magalixcorp/flasksidecar image.

Let’s apply this configuration to our cluster:

kubectl apply -f deploy.yml

We can test our setup so far by issuing an HTTP request to the IP address of any of our cluster nodes on port 32001:

$ curl 35.223.240.101:32001
{"message":"Hello World!"}

Perfect! Now, let’s add our sidecar container.



Adding The Reverse Proxy Sidecar Container

Since everything is working fine now, you may be asking: why do we need a reverse proxy? The answer is simple: Gunicorn is not a web server; it is better thought of as an application server. A web server in front of it is needed for several reasons. For example:

  • To serve static content. In this lab, we are creating an HTML page that connects to the backend API through an AJAX call. It’s much more efficient to place static content (HTML, JS, CSS, images, PDF files, etc.) on the webserver instead of making it part of the application server’s task list.
  • It can be used as an effective caching layer to increase performance.
  • To terminate HTTPS traffic. You can install the certificate and configure SSL on the webserver, lifting that burden off the application server.
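To illustrate the last point, HTTPS termination at the web server is just a matter of configuration. The following Nginx server block is a sketch only — the domain and certificate paths are placeholders, not files that exist in this lab:

```nginx
server {
    listen 443 ssl;
    server_name example.com;                        # placeholder domain
    ssl_certificate     /etc/nginx/certs/tls.crt;   # placeholder paths
    ssl_certificate_key /etc/nginx/certs/tls.key;
    location / {
        # traffic is decrypted here and forwarded as plain HTTP,
        # so the application server never has to deal with TLS
        proxy_pass http://localhost:5000/;
    }
}
```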

So, now that we know some of the merits of using a web server, let’s add one. In this lab, we’re using Nginx, so the key file you need to create is its configuration file. We’ll call it default.conf, since it gets copied into Nginx’s /etc/nginx/conf.d/ directory later in the Dockerfile. The contents of this file should be:

server {
   listen       80;
   server_name  localhost;
   location / {
       root   /usr/share/nginx/html;
       index  index.html index.htm;
   }
   location /api/ {
       proxy_pass  http://localhost:5000/;
   }
   error_page   500 502 503 504  /50x.html;
   location = /50x.html {
       root   /usr/share/nginx/html;
   }
}

The file simply instructs Nginx to serve static content by default from /usr/share/nginx/html, and to proxy any dynamic content (any calls to /api/) to localhost:5000, where our Flask application is running. Now, let’s add some static content, starting with the index.html page:

<!DOCTYPE html>
<html>

<head>
   <meta charset="utf-8">
   <title>Hello World</title>
</head>

<body>
   <h1>Message from backend API</h1>
   <div id="backend"></div>
   <script src="http://code.jquery.com/jquery-1.9.1.min.js"></script>
   <script src="script.js"></script>
</body>

</html>

And the required JavaScript to make the AJAX call can be found in the script.js file:

$(document).ready(function () {
   $.getJSON("/api/", function (result) {
       $("#backend").text(
           result['message']
       );
   });
});

As you can see, we tried to make things as minimalistic as possible so that we concentrate on the more important stuff. The page just displays a header followed by the content dynamically retrieved from the backend API.
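It may also help to see concretely what the two location rules in our Nginx configuration do. The following is a minimal, self-contained sketch of the same routing logic using only the Python standard library — the ports and the stub backend are illustrative stand-ins, not part of the lab’s code. It serves static content at /, and strips the /api/ prefix before forwarding a request to the backend, just as proxy_pass does:

```python
# A rough sketch of what the Nginx sidecar does for us: static content
# is served at "/", and anything under "/api/" is forwarded to a backend.
import http.server
import json
import threading
import urllib.request

BACKEND_PORT = 5050  # stands in for Gunicorn/Flask on localhost:5000
PROXY_PORT = 8080    # stands in for Nginx listening on port 80

class Backend(http.server.BaseHTTPRequestHandler):
    """Stub for the Flask API: always returns the hello message."""
    def do_GET(self):
        body = json.dumps({"message": "Hello World!"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo output quiet
        pass

class Proxy(http.server.BaseHTTPRequestHandler):
    """Mimics the two `location` blocks in default.conf."""
    def do_GET(self):
        if self.path.startswith("/api/"):
            # like `proxy_pass http://localhost:5000/;` -- the /api/
            # prefix is stripped before forwarding upstream
            upstream = f"http://localhost:{BACKEND_PORT}/{self.path[len('/api/'):]}"
            with urllib.request.urlopen(upstream) as resp:
                body = resp.read()
        else:
            # like `root /usr/share/nginx/html;` -- serve static content
            body = b"<h1>Message from backend API</h1>"
        self.send_response(200)
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass

def serve(handler, port):
    server = http.server.HTTPServer(("localhost", port), handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

serve(Backend, BACKEND_PORT)
serve(Proxy, PROXY_PORT)

# Requesting /api/ through the proxy yields the backend's JSON message
with urllib.request.urlopen(f"http://localhost:{PROXY_PORT}/api/") as resp:
    print(resp.read().decode())
```

Running it prints the backend’s JSON message retrieved through the proxy — exactly the role Nginx plays inside the pod, where the “backend” is the Flask container reachable over localhost.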

Let’s dockerize the whole thing. Our Dockerfile should look as follows:

FROM nginx
ADD default.conf /etc/nginx/conf.d/
ADD index.html script.js /usr/share/nginx/html/

Nothing complicated, just the Nginx configuration and the static files. Let’s build and push this image:

docker build -t magalixcorp/nginxsidecar .
docker push magalixcorp/nginxsidecar

Finally, let’s modify our deploy.yml file to add the new container:

apiVersion: v1
kind: Service
metadata:
  name: hello-svc
spec:
  selector:
    role: app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 32001
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
  labels:
    role: app
spec:
  replicas: 2
  selector:
    matchLabels:
      role: app
  template:
    metadata:
      labels:
        role: app
    spec:
      containers:
      - name: app
        image: "magalixcorp/flasksidecar"
      - name: web
        image: "magalixcorp/nginxsidecar"

As a matter of fact, we made two changes: we instructed our Service to connect to the pod on port 80 instead of 5000, and we added a second container (web, running the magalixcorp/nginxsidecar image) to our pod. Since our pod now hosts two containers, and since containers in the same pod share the same network, connecting to the pod on port 80 is analogous to connecting to the container that is listening on port 80 (Nginx).

Now, let’s apply our modified configuration to the cluster:

kubectl apply -f deploy.yml

Lastly, we can test our work using any web browser by navigating to the IP address of any of our nodes on port 32001:

[Image: the browser showing the page with the message retrieved from the backend API]

 

TL;DR

In this article, we discussed the Kubernetes sidecar pattern. You can use this pattern whenever you want to delegate extra functionality needed by your application to a separate container rather than a separate pod. Throughout the lab, we explored one of the possible use cases for using a sidecar container, which is adding a web server in front of an application server.

Perhaps the main advantage of using the sidecar here is that you can easily replace Python Flask with any other application server (e.g., NodeJS, Go, Ruby, etc.) as long as it listens on the same endpoints and follows the same business logic. You can find the source code of this lab at https://github.com/MagalixCorp/sidecar

Mohamed Ahmed

Feb 26, 2020