
Kubernetes Patterns : The Reflection Pattern




What is “Reflection”?

Reflection is a concept that is available in most (if not all) programming languages. It refers to the ability of an object of some type to reveal important information about itself: for example, its name, its parent class, and any metadata it happens to contain. In the Cloud and DevOps arenas, the same concept holds. For example, if you are logged into an AWS EC2 instance, you can easily get a wealth of information about that particular instance (its reflection) by issuing a GET request to the instance metadata service from within the instance itself.
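As a rough sketch, EC2 exposes this metadata at a well-known link-local address. The snippet below only builds and prints the command so it is self-contained; on a real instance you would run the `curl` itself:

```shell
# EC2 serves instance metadata at a standard link-local address.
# This sketch only echoes the command; run the curl from within
# an actual EC2 instance to get the real values.
echo "curl -s"
```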

Why Do We Need an Object’s Reflection?

An object here is used as a generic term to refer to the unit of work. So, in a programming language, an object is an instance of a class; in your on-prem infrastructure, the object may be a physical or virtual host; in a cloud environment, it is the instance; and in Kubernetes, it's the Pod.

In this article, we are interested in Kubernetes, so Pod and object may be used interchangeably.

There are many use cases where you need the metadata of a Pod, especially if that Pod is part of a stateless application where Pods are dynamic by nature. Let’s see some possible scenarios:

  • You need the IP address of the Pod to identify whether or not it was the source of suspicious traffic that was detected on your network.
  • The application running inside the container needs to know the namespace in which the Pod is running, perhaps because it is programmed to behave differently depending on the environment where it is running, conveyed by the namespace.
  • You need to know the current resource limit (CPU and memory) imposed on the container. You can further use this data to automatically adjust the heap size of a Java application when it starts, for example.
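The third use case above can be sketched in a container entrypoint. This is a hypothetical illustration: it assumes the memory limit has been injected into an environment variable (called MY_MEMORY_LIMIT here, measured in Mi), and defaults to 512 only so the sketch also runs outside Kubernetes:

```shell
#!/bin/sh
# Hypothetical entrypoint sketch: size the JVM heap from the injected limit.
# MY_MEMORY_LIMIT is assumed to be injected (in Mi) by the Downward API;
# the default exists only so this sketch runs outside a cluster.
MY_MEMORY_LIMIT="${MY_MEMORY_LIMIT:-512}"

# Give the heap 75% of the container's memory limit, leaving headroom
# for the JVM's own non-heap overhead.
HEAP_MB=$((MY_MEMORY_LIMIT * 3 / 4))
echo "-Xmx${HEAP_MB}m"

# A real entrypoint would then run something like:
#   exec java -Xmx${HEAP_MB}m -jar app.jar
```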

Fortunately, Kubernetes has made this task relatively easy by providing the Downward API.

How Does The Downward API Work?

The Downward API injects metadata into containers through environment variables and files. These are used the same way we use ConfigMaps and Secrets to pass outside information to the application. However, the Downward API does not inject all the available metadata into the containers; instead, we select which variables we need to be available to our containers.

To get a sense of what this is all about, let's look at an example. The following definition file creates a Pod that runs a container from the bash image. We use the Downward API to inject three of the available variables: the Pod's IP address, the namespace where the Pod is running, and the current memory limit imposed on it.


And the definition file looks like this:

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - image: bash
    name: mycontainer
    command: ['bash','-c','sleep 1000000']
    resources:
      limits:
        # without an explicit limit, limits.memory resolves to the
        # node's allocatable memory rather than a container-level value
        memory: 256Mi
    env:
    - name: MY_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
    - name: MY_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: MY_MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: mycontainer
          resource: limits.memory
          divisor: 1Mi


Let's apply this definition using kubectl apply -f filename, then access the container inside the Pod to see whether our metadata is available:

$ kubectl exec -it mypod -- bash
bash-5.0# echo $MY_IP
bash-5.0# echo $MY_NAMESPACE
bash-5.0# echo $MY_MEMORY_LIMIT

So, we were able to get the Pod's IP address, the namespace in which it is running, and its memory limit by querying the respective environment variables: MY_IP, MY_NAMESPACE, and MY_MEMORY_LIMIT.

The FieldRef Parameter

In our first example, we used the fieldRef parameter, selecting the specific information to inject through its fieldPath field. For your reference, the following is a listing of the possible values that are available to you through fieldRef:

  • spec.nodeName: The name of the node where the Pod is running.
  • status.hostIP: The IP address of the node where the Pod is running.
  • metadata.name: The Pod's name (notice that this is different from the container's name; a Pod may have more than one container).
  • metadata.namespace: The namespace of the Pod.
  • status.podIP: The IP address of the Pod.
  • spec.serviceAccountName: The service account used by the Pod.
  • metadata.uid: The UID of the running Pod.
  • metadata.labels['label']: The value of the specified label on the Pod. For example, if a Pod is labeled env=prod, then metadata.labels['env'] returns 'prod'.
  • metadata.annotations['annotation']: Similar to labels, it returns the value of the specified annotation.
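As an illustrative fragment, a label value can be injected the same way as the fields used in the first example (the variable name and the env label below are made up for illustration):

```yaml
# Illustrative fragment: inject the value of the Pod's "env" label
# into an environment variable (names here are examples).
env:
- name: MY_ENV_LABEL
  valueFrom:
    fieldRef:
      fieldPath: metadata.labels['env']
```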


The ResourceFieldRef Parameter

The fieldRef parameter allows you to inject metadata about the Pod. If you need data about the resources allocated to a container, namely CPU and memory, you should use resourceFieldRef instead. The following is a list of the available options that you can use to get this data:

  • requests.cpu: The amount of CPU specified in the requests field of the container definition.
  • requests.memory: The amount of memory specified in the requests field of the container definition.
  • limits.cpu: The CPU limit of the container.
  • limits.memory: The memory limit of the container.
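The divisor field seen in the first example controls the unit in which the value is reported. As a sketch (the variable name is illustrative), a divisor of 1m exposes the CPU limit in millicores:

```yaml
# Illustrative fragment: expose the container's CPU limit in millicores.
env:
- name: MY_CPU_LIMIT_MILLICORES
  valueFrom:
    resourceFieldRef:
      containerName: mycontainer
      resource: limits.cpu
      divisor: 1m
```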


Requests and limits are imposed on a Pod's containers within its definition file. Together they control the soft and hard limits on the amount of resources a given container can consume, and they help the scheduler place Pods on appropriate nodes based on their resource requests. For more information about this topic, you can refer to our article Capacity Planning.
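For context, a minimal sketch of how requests and limits appear on a container (the values are arbitrary examples):

```yaml
# Illustrative fragment: requests are the soft (scheduling) amount,
# limits are the hard cap the container cannot exceed.
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 256Mi
```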

Getting Pod Metadata After It Has Been Modified

A Kubernetes user is allowed to change some of a Pod's metadata while the Pod is running. So, although fields like the resource requests and limits cannot be changed unless the Pod is deleted and recreated, the Pod's labels can. If users issue the kubectl edit pod pod_name command, they can make dynamic modifications to the Pod definition, as long as the field in question is mutable.

If a Pod's metadata is changed dynamically, the new values cannot be re-injected into the container through environment variables, as that would require restarting the container. However, you can still monitor and pick up those changes by using the other method of injecting data into the container: volumes.

The following definition demonstrates how you can use volumes, the same way you use them with ConfigMaps and Secrets, to give the container access to this data:


apiVersion: v1
kind: Pod
metadata:
  name: mypod
  labels:
    env: prod
spec:
  containers:
  - image: bash
    name: mycontainer
    command: ['bash','-c','sleep 1000000']
    volumeMounts:
    - name: mypod-vol
      mountPath: /mypod-metadata
  volumes:
  - name: mypod-vol
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations


Applying this definition and logging in to the Pod, we can see that we have a volume mounted for us with the relevant data:

$ kubectl exec -it mypod -- bash
bash-5.0# cat /mypod-metadata/labels && echo
env="prod"
bash-5.0# cat /mypod-metadata/annotations && echo
kubectl.kubernetes.io/last-applied-configuration="{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"env\":\"prod\"},\"name\":\"mypod\",\"namespace\":\"default\"},\"spec\":{\"containers\":[{\"command\":[\"bash\",\"-c\",\"sleep 1000000\"],\"image\":\"bash\",\"name\":\"mycontainer\",\"volumeMounts\":[{\"mountPath\":\"/mypod-metadata\",\"name\":\"mypod-vol\"}]}],\"volumes\":[{\"downwardAPI\":{\"items\":[{\"fieldRef\":{\"fieldPath\":\"metadata.labels\"},\"path\":\"labels\"},{\"fieldRef\":{\"fieldPath\":\"metadata.annotations\"},\"path\":\"annotations\"}]},\"name\":\"mypod-vol\"}]}}\n"

As you can see, the data is delivered through files stored on a volume rather than through environment variables. This allows us to retrieve the information dynamically whenever it changes, without restarting the container. However, the running application still needs to be configured to detect changes in the labels or annotations files (if it uses their values) and act accordingly when a modification occurs.
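One minimal way to do that is to periodically compare the file's content with the last value the application acted upon. The sketch below is a hypothetical illustration (the helper name is made up), exercised here against a local temporary file standing in for the mounted /mypod-metadata/labels file:

```shell
# Hypothetical sketch: detect when a Downward API file has changed.
# $1: path to the labels file, $2: the last value we acted upon.
labels_changed() {
  current=$(cat "$1" 2>/dev/null)
  [ "$current" != "$2" ]
}

# Exercise the helper against a local file standing in for the
# mounted volume; inside the Pod this would be /mypod-metadata/labels.
LABELS_FILE=$(mktemp)
printf 'env="prod"' > "$LABELS_FILE"

last=""
if labels_changed "$LABELS_FILE" "$last"; then
  echo "labels changed, reloading configuration"
  last=$(cat "$LABELS_FILE")
fi
```

A real application would run this check in a loop (or use a file-watching mechanism) and trigger its reload logic whenever the helper reports a change.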



Key Takeaways

  • On many occasions, you need the application to be aware of some of the metadata of the infrastructure on which it is running. The application may use this information to make intelligent decisions or automate manual tasks.
  • Kubernetes offers the Downward API, which allows you to inject some of this metadata into Pods and make it accessible to the containers inside them.
  • The Downward API allows you to query the API server for a number of metadata items, as well as the resource requests and limits.
  • You can inject the information that you need into the Pods through environment variables or mounted volumes.
  • The downside of using environment variables is that they cannot reflect dynamic changes to the Pod's metadata, like labels and annotations. You can use volumes as a workaround.
  • Although the Downward API is an elegant way of achieving Pod reflection, it is limited in the amount of data it can provide. Other Pod aspects may be needed that are not offered through the Downward API. The answer to this shortcoming is to let the application query the API server directly for the missing data; client libraries exist in many programming languages that allow you to query the API server through code.

*The outline of this article is inspired by the book Kubernetes Patterns by Roland Huss and Bilgin Ibryam.
