Kubernetes Services 101: The Pods’ Interfaces

Pods are the basic building blocks of any Kubernetes cluster: they host one or more containers. A Kubernetes Service acts as a layer above those pods. It is always aware of the pods it manages: their count, their internal IP addresses, the ports they expose, and so on.

But how can one pod reach another? Consider the following example: you created a web application with four frontend pods running Nginx. A frontend pod needs to make a request to one of the backend NodeJS pods. If there are two of them, which one should the request be directed to? Additionally, pods get new IP addresses every time they restart, so an abstraction layer is needed to keep communication consistent across different pod instances. Kubernetes offers the Service object as a solution to these situations. As in the illustration below, when Nginx receives an HTTP request, it does not know which of the NodeJS pods should handle it. A Service exposes an interface that routes the request coming from Nginx to one of its pods, receives the response, and directs it back to the web server.

Figure: services load-balancing requests to Kubernetes pods.

Deploying a Kubernetes Service

Like all other Kubernetes objects, a Service can be defined using a YAML or JSON file that contains the necessary definitions (they can also be created using just the command line, but this is not the recommended practice). Let’s create a NodeJS service definition. It may look like the following:

apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: nodejs
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
  • The file starts by defining the API version that will be used when contacting the Kubernetes API server.
  • Then, it defines the kind of object that it intends to manage: a Service.
  • The metadata contains the name of this service. Later on, applications will use this name to communicate with the service.
  • The spec part defines a selector. This is where we inform the service which pods will come under its control: any pod that has the label “app=nodejs” will be handled by our service.
  • The spec also defines how our service will handle the network in the ports array. Each port has a protocol (TCP in our example, but services also support UDP and other protocols), a port number that will be exposed, and a targetPort on which the service will contact the target pod(s). In our example, the service will be available on port 80, but it will reach its pods on port 3000 (the port NodeJS listens on).

Applying this service definition (and all other service definitions) can be done using kubectl as follows:

kubectl apply -f definition.yaml

Where definition.yaml is the YAML file that contains the instructions.
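For the selector above to match anything, the backend pods themselves must carry the app: nodejs label. A minimal Deployment sketch is shown below; the Deployment name, image, and replica count are illustrative assumptions, not part of the original example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodejs-backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nodejs
  template:
    metadata:
      labels:
        app: nodejs                  # must match the Service selector
    spec:
      containers:
      - name: nodejs
        image: my-nodejs-app:latest  # placeholder for your application image
        ports:
        - containerPort: 3000        # the port the Service's targetPort points to

After applying both files, you can confirm that the service picked up the pods:

kubectl get service backend
kubectl get endpoints backend    # lists the pod IPs currently behind the service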

How Can Kubernetes Services Expose More Than One Port?

Kubernetes Services allow you to define more than one port per service definition. Let’s see what a web server service definition file may look like:

apiVersion: v1
kind: Service
metadata:
  name: webserver
spec:
  selector:
    app: web
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443

Notice that if you define more than one port in a service, you must provide a name for each port so that they can be referenced unambiguously.
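As a side note, targetPort can also reference a container port by name rather than by number, which is handy when different pods expose the same logical port on different numbers. Here is a minimal sketch, assuming the pod template declares a containerPort named http-web (a hypothetical name, not used elsewhere in this article):

apiVersion: v1
kind: Service
metadata:
  name: webserver
spec:
  selector:
    app: web
  ports:
  - name: http
    port: 80
    targetPort: http-web   # resolved against the named port in each pod's spec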

Can We Use A Kubernetes Service Without Pods?

While the traditional use of a Kubernetes Service is to abstract one or more pods behind a layer, services can do more than that. Consider the following use cases where services are not backed by pods:

  • You need to access an API outside your cluster (examples: weather, stocks, currency rates).
  • You have a service in another Kubernetes cluster that you need to contact.
  • You need to shift some of your infrastructure components to Kubernetes. But, since you’re still evaluating the technology, you need it to communicate with some backend applications that are still outside the cluster.
  • You have another service in another namespace that you need to reach.

What these cases have in common is that the service will not be pointing to pods; it will be communicating with other resources inside or outside your cluster. Let’s create a service definition that will route traffic to an external IP address:

apiVersion: v1
kind: Service
metadata:
  name: external-backend
spec:
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 3000

Here, we have a service that connects to an external NodeJS backend on port 3000. But this definition has no pod selector. It doesn’t even have the external IP address of the backend! So, how will the service route traffic?

Normally, a service uses an Endpoints object behind the scenes to map to the IP addresses of the pods that match its selector.

What Is An 'Endpoints' Object In Kubernetes?

Endpoints are used to track which pods are available so that the service can direct traffic to them. Yet, here we are not using pods at all, so we’ll need to create an Endpoints object manually. Consider the following Endpoints definition file:

apiVersion: v1
kind: Endpoints
metadata:
  name: external-backend
subsets:
  - addresses:
      - ip: 159.76.214.243
    ports:
      - port: 3000

You will need to apply this Endpoints definition together with the service definition above. Note that the Endpoints object must have the same name as the service so that Kubernetes can associate the two.

Now, any traffic arriving at our service on port 3000 will be automatically routed to 159.76.214.243:3000. This may be your NodeJS backend that is running outside the cluster.

As a side note, you cannot use a loopback address (127.0.0.0/8), a link-local address (169.254.0.0/16), or a link-local multicast address (224.0.0.0/24) as the destination IP address. Additionally, the destination IP address cannot be the cluster IP of another Kubernetes Service. More on the Service cluster IP later in this article.
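To confirm the wiring, you can check that the manually created Endpoints object is attached to the service (the names below are the ones used in this example):

kubectl get endpoints external-backend
kubectl describe service external-backend   # the Endpoints field should list 159.76.214.243:3000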

But My Pods Already Have Internet Access, Why Use A Service?

Consider that you have hundreds of pods, all of which are designed to contact that NodeJS server on 159.76.214.243:3000. Now, what if the server starts using a different IP? There are many reasons why IP addresses change. You would have to manually modify all of your containers’ configuration to point to the new IP address. Using a Kubernetes Service as a proxy here allows you to make this change once, in only one place.

What If The Remote Application Uses A DNS Name?

Kubernetes Services can also connect to external servers by specifying their DNS names rather than their IP addresses. Those are referred to as ExternalName services. The following service definition will route traffic to an external API:

apiVersion: v1
kind: Service
metadata:
  name: weather
spec:
  type: ExternalName
  externalName: api.weather.com
  ports:
  - port: 80

Any web requests going to http://weather will be automatically routed to api.weather.com.

Notice that the externalName can also be the DNS name of another service in another namespace. For example, externalName: middleware.prod.svc.cluster.local

Additionally, if you’re deploying an ExternalName service in your cluster, you must use DNS as a service discovery method to be able to contact it. We’ll discuss service discovery methods in the next section.

What Is Service Discovery in Kubernetes?

Let’s revisit our web application example. You are writing the configuration files for Nginx and you need to specify an IP address or URL to which the web server shall route backend requests. For demonstration purposes, here’s a sample Nginx configuration snippet for proxying requests:

server {
  listen 80;

  server_name myapp.example.com;

  location /api {
      proxy_pass http://??/;
  }
}

The proxy_pass directive here must point to the service’s IP address or DNS name to be able to reach one of the NodeJS pods. In Kubernetes, there are two ways to discover services: (1) environment variables, or (2) DNS. Let’s talk about each of them in a bit of detail.

Service Discovery Through Environment Variables

When a pod is scheduled to a node, the kubelet provides the pod with the necessary information to access services through environment variables. If we have a service named backend that exposes port 3000 and was assigned an IP of 10.0.0.20, Kubernetes will automatically export environment variables like:

BACKEND_SERVICE_HOST=10.0.0.20
BACKEND_SERVICE_PORT=3000

This is not the most reliable way to discover services. For those environment variables to be injected into a pod, the service must be created before the pod. So, if the service is recreated for any reason after the pods are already running, the pods keep the stale values (or have none at all), and service discovery fails.

Also, not all applications support injecting environment variables into their configurations. For example, Nginx does not recognize environment variables in its configuration files out of the box. There are some workarounds for this, though.
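You can check which service variables a running pod actually received; <pod-name> below is a placeholder for any pod in your cluster:

kubectl exec <pod-name> -- env | grep BACKEND_SERVICE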

Service Discovery Through DNS

It is highly recommended to use DNS for service discovery. A DNS component like CoreDNS watches the cluster’s API server for newly created services. Once it detects a new service, it creates the necessary DNS records for it so that pods can reach the service through a name.

So, back to our example: if our service, backend, was created in the middleware namespace, it can be accessed by pods in the same namespace through http://backend (or http://backend:3000 if it is not listening on port 80). If the client pod lives in another namespace, it must use a qualified name for the service, like http://backend.middleware (the fully qualified form is backend.middleware.svc.cluster.local).

So, our Nginx configuration can have this line now to work properly:

proxy_pass http://backend.middleware/;

Kubernetes Services Connectivity Methods

If you’ve made it this far, you can contact your services by name: whether you’re using environment variables or a cluster DNS, the service name gets resolved to an IP address. But how is that IP address assigned, and how do you make the service reachable from outside the cluster when you need to? There are three service types to choose from:

ClusterIP

ClusterIP is the default service type. Kubernetes assigns an internal IP address to your service, and this IP address is reachable only from inside the cluster. You can optionally set this IP yourself in the service definition file. Think of the case where you have a DNS record that you don’t want to change and you need the name to keep resolving to the same IP address. You can do this by setting the clusterIP field of the service definition as follows:

apiVersion: v1
kind: Service
metadata:
  name: external-backend
spec:
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 3000
  clusterIP: 10.96.0.1 

However, you cannot just use any IP address. It must be within the service-cluster-ip-range, the range of IP addresses that the Kubernetes API server reserves for services. You can find this range with a simple kubectl command:

kubectl cluster-info dump | grep service-cluster-ip-range

You can also set the clusterIP to None, effectively creating a Headless Service.

What Is The Use Of A Headless Service In Kubernetes?

As mentioned, the default behavior of Kubernetes is to assign an internal IP address to the service. Through this IP address, the service proxies and load-balances requests to the pods behind it. If we explicitly set this IP address (clusterIP) to None, this is like telling Kubernetes “I don’t need load balancing or proxying, just give me a way to reach the pods behind this service directly”.

Let’s consider a common use case. If you host, for example, MongoDB on a single pod, you will need a service definition on top of it to take care of the pod being restarted and acquiring a new IP address. But you don’t need any load balancing or routing. You only need the service to pass the request through to the backend pod. Hence the name, headless: a service that does not have a cluster IP.

But what if a headless service manages more than one pod? In that case, a DNS query for the service’s name returns a list of all the pods behind it, and most clients simply use the first IP address returned. Obviously, this is not much of a load-balancing algorithm. The bottom line: use a headless service when you need a single pod.
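For reference, a headless service looks exactly like a regular one except that clusterIP is set to None. Here is a minimal sketch for the MongoDB example above; the service name and the app: mongodb label are assumptions for illustration:

apiVersion: v1
kind: Service
metadata:
  name: mongodb
spec:
  clusterIP: None          # no virtual IP; DNS resolves directly to the pod IP(s)
  selector:
    app: mongodb
  ports:
  - protocol: TCP
    port: 27017
    targetPort: 27017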

NodePort

This is one of the service types used when you want to enable external connectivity to your service. If you have four Nginx pods, a NodePort service uses the IP address of any node in the cluster, combined with a specific port, to route traffic to those pods. The diagram below demonstrates the idea:

Figure: exposing Kubernetes services through node ports.

You can use the IP address of any node; the service will receive the request and route it to one of the pods.

A service definition file for a service of type NodePort may look like this:

apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30000
      targetPort: 80
  selector:
    app: web

Manually allocating a port to the service is optional; if left undefined, Kubernetes assigns one automatically. The port must be in the 30000-32767 range (the default NodePort range). If you choose it yourself, make sure it is not already used by another service; otherwise, the API server will reject the request.

Notice that you must always anticipate a node going down and its IP address becoming unreachable. The best practice here is to place a load balancer in front of your nodes.
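To test the NodePort service above from outside the cluster, grab any node’s address and hit the node port directly. <node-ip> is a placeholder; whether the INTERNAL-IP or EXTERNAL-IP column applies depends on your environment:

kubectl get nodes -o wide          # shows each node's INTERNAL-IP / EXTERNAL-IP
curl http://<node-ip>:30000/       # 30000 is the nodePort chosen in the definition above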

LoadBalancer

This service type works when you are using a cloud provider to host your Kubernetes cluster. When you choose LoadBalancer as the service type, the cluster contacts the cloud provider and provisions a load balancer. Traffic arriving at this load balancer is forwarded to the backend pods. The specifics of this process depend on how each provider implements its load-balancing technology.

Different cloud providers handle load balancer provisioning differently. For example, some providers let you assign a static IP address to the load balancer, while others assign short-lived addresses that constantly change. Because Kubernetes is designed to be highly portable, you can add loadBalancerIP to the service definition file; if the provider supports it, the setting is honored, otherwise it is ignored. Let’s look at a sample service definition that uses LoadBalancer as its type:

apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer
  loadBalancerIP: 78.11.24.19
  selector:
    app: web
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

One of the main differences between the LoadBalancer and the NodePort service types is that with NodePort you get to choose your own load-balancing layer; you are not bound to the cloud provider’s implementation.
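After applying a LoadBalancer service, the external address usually takes a little while to be provisioned. You can watch for it with kubectl (frontend is the service name from the example above); the EXTERNAL-IP column shows <pending> until the provider assigns an address:

kubectl get service frontend --watch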

TL;DR

  • Kubernetes Services provide the interfaces through which pods communicate with each other, and they act as the main gateway for your application. Services use selectors to identify the pods they should control. They expose an IP address and a port, which is not necessarily the same port the pods are listening on. Services can expose more than one port, but the ports must then be named.
  • Services can also route traffic to other services, external IP addresses, or DNS names. They can be discovered through environment variables, which get injected into pods when they start. Alternatively, a DNS component can be deployed to the cluster to track service names and IP addresses; this is the recommended service discovery method.
  • Services expose their IP addresses in different ways: ClusterIP (the default), where the service is accessible only inside the cluster network; NodePort, where the service uses the IP address of any node in the cluster combined with a specific port; and LoadBalancer, where the cloud provider is contacted behind the scenes to provision a load balancer that routes traffic to the pods.

Read the other Magalix Kubernetes 101 Articles 

Mohamed Ahmed

Jul 4, 2019