The Adapter Pattern


Why do we need an adapter container?

Containerized applications communicate with each other over well-defined protocols, typically HTTP. Each application exposes a set of endpoints that accept an HTTP verb (GET, POST, etc.) to perform a specific action. It is the client's responsibility to determine how to communicate with the server application.

However, you may have a service that expects a specific response format from every application it talks to. The most common example of this kind of service is Prometheus, a well-known monitoring application that checks not only whether an application is up, but also whether it is working as expected.

Prometheus works by querying an endpoint exposed by the target application. The endpoint must return the diagnostic data in a format that Prometheus expects. One possible solution is to configure each application to output its health data in a Prometheus-friendly way. However, you may later need to switch to another monitoring tool that expects a different format, and changing the application code each time you need a new health-status format is highly inefficient. Following the Adapter pattern, we instead run a sidecar container in the same Pod as the application's container. The sole purpose of this sidecar (the adapter container) is to "translate" the output of the application's endpoint into a format that Prometheus (or whichever client tool) accepts and understands.
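To make the idea concrete, here is a minimal sketch of what an adapter does: read the application's native health output and re-emit it in the Prometheus text exposition format. The JSON payload and the `app_` metric names below are made up purely for illustration; a real adapter would consume whatever format its application produces.

```python
import json

def to_prometheus(health_json: str) -> str:
    """Convert a hypothetical JSON health payload into the Prometheus
    text exposition format (a # TYPE line, then a name/value line)."""
    stats = json.loads(health_json)
    out = []
    for name, value in stats.items():
        out.append(f"# TYPE app_{name} gauge")
        out.append(f"app_{name} {value}")
    return "\n".join(out) + "\n"

# A payload the application might serve on its own health endpoint:
payload = '{"connections_active": 1, "requests_total": 42}'
print(to_prometheus(payload))
```

Running this prints each metric as a `# TYPE` line followed by `app_<name> <value>`, which is the shape Prometheus scrapes.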



Scenario: using an adapter container with Nginx

Nginx has an endpoint that is used for querying the web server's status. In this scenario, we add an adapter container to transform this endpoint’s output to the required format for Prometheus.

First, we need to enable this endpoint on Nginx by modifying the default.conf file. The following ConfigMap contains the required default.conf file:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf
data:
  default.conf: |
    server {
        listen       80;
        server_name  localhost;
        location / {
            root   /usr/share/nginx/html;
            index  index.html index.htm;
        }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   /usr/share/nginx/html;
        }
        location /nginx_status {
            stub_status;
            allow 127.0.0.1;  # only allow requests from localhost
            deny all;         # deny all other hosts
        }
    }

This is the default default.conf file that ships with the nginx Docker image, with one addition: the /nginx_status location block at the end. It defines an endpoint, /nginx_status, that uses the stub_status module to expose nginx's diagnostic information.
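The stub_status module emits a small, fixed plain-text report (active connections, the accepts/handled/requests counters, and the reading/writing/waiting gauges). The following sketch parses that report into a dictionary, which is essentially the first half of the translation job an adapter container performs; the sample text mirrors the stub_status output shape:

```python
def parse_stub_status(text: str) -> dict:
    """Parse the plain-text output of nginx's stub_status module into a
    dict -- the kind of translation an adapter performs before
    re-emitting the data in another format."""
    lines = text.strip().splitlines()
    # First line: "Active connections: N"
    metrics = {"active": int(lines[0].split(":")[1])}
    # Third line holds three counters: accepts, handled, requests.
    accepts, handled, requests = (int(n) for n in lines[2].split())
    metrics.update(accepts=accepts, handled=handled, requests=requests)
    # Last line: "Reading: 0 Writing: 1 Waiting: 0"
    tokens = lines[3].split()
    for key, val in zip(tokens[::2], tokens[1::2]):
        metrics[key.rstrip(":").lower()] = int(val)
    return metrics

sample = (
    "Active connections: 1\n"
    "server accepts handled requests\n"
    " 3 3 3\n"
    "Reading: 0 Writing: 1 Waiting: 0\n"
)
print(parse_stub_status(sample))
# {'active': 1, 'accepts': 3, 'handled': 3, 'requests': 3,
#  'reading': 0, 'writing': 1, 'waiting': 0}
```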

Next, let’s create the Nginx Pod and the adapter container:

apiVersion: v1
kind: Pod
metadata:
  name: webserver
spec:
  volumes:
  - name: nginx-conf
    configMap:
      name: nginx-conf
      items:
      - key: default.conf
        path: default.conf
  containers:
  - name: webserver
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - mountPath: /etc/nginx/conf.d
      name: nginx-conf
      readOnly: true
  - name: adapter
    image: nginx/nginx-prometheus-exporter:0.4.2
    args: ["-nginx.scrape-uri","http://localhost/nginx_status"]
    ports:
    - containerPort: 9113

The Pod definition contains two containers: the nginx container, which acts as the application container, and the adapter container. The adapter uses the nginx/nginx-prometheus-exporter image, which transforms the metrics that Nginx exposes on /nginx_status into the Prometheus format. If you're interested in seeing the difference between the two outputs, do the following:

kubectl exec -it webserver bash
root@webserver:/# apt update && apt install curl -y
Defaulting container name to webserver.
Use 'kubectl describe pod/webserver -n default' to see all of the containers in this pod.
root@webserver:/# curl localhost/nginx_status
Active connections: 1
server accepts handled requests
 3 3 3
Reading: 0 Writing: 1 Waiting: 0
root@webserver:/# curl localhost:9113/metrics
# HELP nginx_connections_accepted Accepted client connections
# TYPE nginx_connections_accepted counter
nginx_connections_accepted 4
# HELP nginx_connections_active Active client connections
# TYPE nginx_connections_active gauge
nginx_connections_active 1
# HELP nginx_connections_handled Handled client connections
# TYPE nginx_connections_handled counter
nginx_connections_handled 4
# HELP nginx_connections_reading Connections where NGINX is reading the request header
# TYPE nginx_connections_reading gauge
nginx_connections_reading 0
# HELP nginx_connections_waiting Idle client connections
# TYPE nginx_connections_waiting gauge
nginx_connections_waiting 0
# HELP nginx_connections_writing Connections where NGINX is writing the response back to the client
# TYPE nginx_connections_writing gauge
nginx_connections_writing 1
# HELP nginx_http_requests_total Total http requests
# TYPE nginx_http_requests_total counter
nginx_http_requests_total 4
# HELP nginx_up Status of the last metric scrape
# TYPE nginx_up gauge
nginx_up 1
# HELP nginxexporter_build_info Exporter build information
# TYPE nginxexporter_build_info gauge
nginxexporter_build_info{gitCommit="f017367",version="0.4.2"} 1

So, we logged into the webserver Pod, installed curl to be able to make HTTP requests, and examined both the /nginx_status endpoint and the exporter's endpoint (on port 9113 at /metrics). Notice that in both requests we used localhost as the server address: both containers run in the same Pod and share the same network namespace, so they share the same loopback address.
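From here, a Prometheus server would scrape the adapter rather than Nginx itself. A minimal scrape configuration might look like the following; the job name and the `webserver` target host are assumptions (in a real cluster you would typically point Prometheus at a Service in front of the Pod, or use Kubernetes service discovery):

```yaml
scrape_configs:
  - job_name: nginx                    # arbitrary job label
    static_configs:
      - targets: ["webserver:9113"]    # assumes a Service named "webserver"
                                       # forwarding to the adapter's port 9113
```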


  • The adapter container is no different from a sidecar in concept: both terms refer to an additional container running in the same Pod as the application container. The essence of both patterns is changing how the application container behaves externally without modifying its logic or its code.
  • Using the Adapter Pattern, we establish a unified interface for our application container that can be used by a third-party service. In our example, we needed to expose Nginx’s metrics in a way that Prometheus understands. However, no changes should be made to the application container. The metrics transformation is done through the adapter container.
