Kubernetes Authorization

Kubernetes authorization plays an important role in securing your cluster. While authentication validates the identity of a subject to decide whether or not it should be granted access, authorization governs what follows that access. Using authorization mechanisms, you can fine-tune who has access to which resources on your Kubernetes cluster. In this article, we discuss Role-Based Access Control (RBAC) and how you can use it to secure your cluster.

How Does Kubernetes Authorize Requests?

Kubernetes uses the API server to authorize requests against a set of policies. It’s worth noting that authorization is a step that comes only after authentication succeeds. The workflow goes as follows:

  1. The user is authenticated to the API server using one of the supported authentication methods. For more information about this process, please refer to our article about Kubernetes Authentication.
  2. Assume the request aims to retrieve a list of pods in the kube-system namespace (for example, using kubectl get pods -n kube-system). After the user is authenticated in step 1, the credentials are passed to the authorization module together with the verb, the resource, and other attributes of the action the user is trying to execute.
  3. If the user (by a user we refer to a human user as well as an application) is authorized to execute the request, it is passed on to the admission controllers. Otherwise, the API server replies with a 403 Forbidden HTTP status code.
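You can observe this authorization decision yourself with kubectl auth can-i, which asks the API server whether the authorization layer would permit a given verb and resource combination. This is a sketch against a live cluster; the user name jane is hypothetical:

```shell
# Ask whether the current user may list pods in kube-system
kubectl auth can-i list pods -n kube-system

# Cluster admins can also ask on behalf of another (hypothetical) user
kubectl auth can-i list pods -n kube-system --as jane
```

The command prints "yes" or "no", mirroring the allow/403 outcome described in step 3.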

Introduction To RBAC

Role-Based Access Control (RBAC for short) has been part of Kubernetes since version 1.8. You’re strongly encouraged to use it even in non-production environments. There are some important terms that you should know when dealing with RBAC:

  • The Entity: this is a subject that needs to access a resource on the cluster. The Entity could be you, one of your colleagues, a pod, or an external program that you need to grant programmatic access to the cluster.
  • The Resource: the object that needs to be accessed. For example, a pod, a configMap, a Secret, etc.
  • The Role: since it is inefficient to grant each user a specific set of permissions, and then replicate them for every user who needs the same access level, it’s better to create a role with that set of permissions. A user can be bound to multiple roles, and multiple users can be bound to a single role. So, instead of manually removing permissions from each user account, you can simply remove the user from the role.
  • The Role Binding: this is where the actual link between the role and the users (Entities in RBAC terminology) who will belong to this role is made.

Once you create a role, you need to define which actions the Entity can execute. They can be classified into:

  • Read-only: where the entity cannot modify the resource. The verbs in this category are get, list, and watch.
  • Read-write: where the entity can modify the resource. The verbs that fall into this category are create, update, patch, delete, and deletecollection.

Notice that you should distinguish between Roles and ClusterRoles. A Role is bound to a namespace. For example, you may create a Role for accessing pods in the kube-system namespace; pods in the default namespace, or in any namespace other than kube-system, fall outside this Role’s authority. A ClusterRole, on the other hand, applies cluster-wide: all namespaces abide by the ClusterRole’s rules. Some resources, such as nodes, are not namespaced by nature, so granting access to them requires a ClusterRole.
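As a quick illustration, here is a minimal Role scoped to kube-system next to a ClusterRole for nodes. The names pod-reader and node-reader are made up for this sketch:

```yaml
# A Role only grants access within its own namespace (kube-system here)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader        # hypothetical name
  namespace: kube-system
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
# Nodes are not namespaced, so reading them requires a ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-reader       # hypothetical name
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]
```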

Kubernetes Pre-Defined Roles

The API server automatically creates a number of default ClusterRoles and ClusterRoleBindings that different components of the cluster need in order to function correctly. Those roles are prefixed with system: to indicate that they are created and owned by the infrastructure itself. For example, the system:node role is used by the kubelet. Modifying this role may result in the nodes not functioning correctly and, consequently, may bring the entire cluster to a halt. In addition to the system: prefix, the system roles and role bindings have the kubernetes.io/bootstrapping=rbac-defaults label attached to them.
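You can list these system-owned roles on your own cluster by filtering on that label:

```shell
# Show the default ClusterRoles created by the API server at bootstrap time
kubectl get clusterroles -l kubernetes.io/bootstrapping=rbac-defaults
```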

Kubernetes also creates default roles that are intended to be granted to users. Such roles can be considered as pre-made templates for common functions. They are called User-facing Roles.

User-Facing Roles

| ClusterRole | Default ClusterRoleBinding | Description |
| --- | --- | --- |
| cluster-admin | system:masters (group) | This role can be treated as the root user on Linux machines. Notice that when used with a RoleBinding, it gives the user full access to the resources in that RoleBinding’s namespace, including the namespace itself (but not others). However, when used with a ClusterRoleBinding, it gives the user full control over every resource in the entire cluster (all namespaces). |
| admin | None | Used with a RoleBinding, it gives the user full access to the resources inside that namespace, including other Roles and RoleBindings. However, it does not allow write access to resource quotas or to the namespace itself. |
| edit | None | Used with a RoleBinding, it grants the user the same access level as admin in the given namespace, except for viewing or modifying Roles and RoleBindings. |
| view | None | Used with a RoleBinding, it grants the user read-only access to most of the namespace’s resources, except Roles, RoleBindings, and Secrets. |
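For example, to grant a teammate read-only access to a single namespace, you can bind the built-in view ClusterRole with a RoleBinding. The user name bob and namespace dev below are made up for this sketch:

```shell
# Bind the built-in "view" ClusterRole inside a single namespace
kubectl create rolebinding bob-view \
  --clusterrole=view \
  --user=bob \
  --namespace=dev
```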

 

LAB 01: Create An Admin User

Requirement: a new colleague, Alice, has joined your team. After passing her probation period, you need to give her administrative access to the cluster. The authentication method that you use in your organization is X.509 certificates. The first step is to create an account for her.

Step 01: Create A User Account For Alice

1. Download and install the CFSSL tool from https://pkg.cfssl.org/.

2. Create a certificate-signing request JSON file called user.json as follows:

{
    "CN": "alice",
    "key": {
        "algo": "rsa",
        "size": 4096
    },
    "names": [{
        "O": "alice",
        "email": "alice@mycompany.com"
    }]
}

Generate the CSR (Certificate Signing Request) from the file as follows: cfssl genkey user.json | cfssljson -bare client. The output should look as follows:

$ cfssl genkey user.json | cfssljson -bare client
2019/11/09 18:14:33 [INFO] generate received request
2019/11/09 18:14:33 [INFO] received CSR
2019/11/09 18:14:33 [INFO] generating key: rsa-4096
2019/11/09 18:14:34 [INFO] encoded CSR

3. You should now have a client.csr file containing the CSR data. There’s also the key file client-key.pem, which contains the private key that was used to sign the request.

4. Convert the request to Base64 encoding using the following command: cat client.csr | base64 | tr -d '\n'. Keep a copy of the resulting text because we’ll use it in the next step.

5. Create a CertificateSigningRequest resource by creating a csr.yaml file and adding the following lines:

apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: alice
spec:
  groups:
  - mycompany
  request: LS0tLS1CRUdJTiBDRVJUSUZJQ0F...
  usages:
  - digital signature
  - key encipherment
  - client auth

Notice that the request field contains the base64-encoded CSR that we obtained in the previous step.

6. Send the request to the API server using kubectl as follows:

$ kubectl apply -f csr.yaml
certificatesigningrequest.certificates.k8s.io/alice created

7. As a cluster admin, you can approve this certificate request using the following command:

$ kubectl certificate approve alice
certificatesigningrequest.certificates.k8s.io/alice approved

8. Since the CSR is approved, we need to download the actual certificate (notice that the output is already base64-encoded; we don’t need to decode it, as we’ll use it in the same form in the kubeconfig file later):

$ kubectl get csr alice -o jsonpath='{.status.certificate}'
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1...

 

9. Now, Alice has her certificate approved. To use it, she needs a kubeconfig file that references her certificate, her private key, and the cluster CA that was used to sign the request, in addition to the API server’s address. We already have the private key and certificate; let’s get the remaining information from our existing kubeconfig file:

$ kubectl config view --flatten --minify
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0t...
    server: https://104.198.41.185
--- the rest of the output was trimmed for brevity ---

10. Given all the information that we have now, we can create a config file for Alice containing the following lines (make sure that the client-certificate-data, client-key-data, and certificate-authority-data values are base64-encoded before adding them to the config file):


apiVersion: v1
kind: Config
users:
- name: alice
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ…
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFU...
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDR...
    server: https://104.198.41.185
  name: gke
contexts:
- context:
    cluster: gke
    user: alice
  name: alice-context
current-context: alice-context

The last step is to hand the file to Alice to add under ~/.kube/config. However, we can verify that her credentials are working by passing the config file that we’ve just created to kubectl (let’s say we named it alice_config). Let’s try listing the pods:

$ kubectl get pods --kubeconfig ./alice_config
Error from server (Forbidden): pods is forbidden: User "alice" cannot list resource "pods" in API group "" in the namespace "default"

The response that we received is very important because it allows us to verify that the API server does recognize the certificate. The output states “Forbidden”, which means that the API server acknowledges the presence of a user called alice; however, she does not have permission to view the pods in the default namespace. That makes sense. Let’s give her the required permissions.


Step 02: Grant Alice Admin Permissions On The Cluster

Since we need to grant cluster-wide admin permissions to Alice, we can use the ready-made cluster-admin role. Hence, we only need a ClusterRoleBinding resource. Create a YAML file with the following lines:

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: cluster-admin-binding
subjects:
- kind: User
  name: alice
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io

Two points of interest here:

  1. The subjects field (line 5) is an array, so we can add multiple users using the same resource.
  2. In line 11, we reference the cluster-admin ClusterRole, which was already created for us (as we discussed earlier). Hence, we saved ourselves having to create a ClusterRole.
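For a quick one-off grant, the same binding can also be created imperatively; this produces an object equivalent to the YAML above:

```shell
# Create a ClusterRoleBinding that grants Alice the built-in cluster-admin role
kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole=cluster-admin \
  --user=alice
```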

Apply the above YAML file using kubectl:

$ kubectl apply -f clusterrolebinding.yml
clusterrolebinding.rbac.authorization.k8s.io/cluster-admin-binding created

Now, let’s double-check that Alice can execute commands against the cluster:

$ kubectl get pods --kubeconfig ./alice_config
NAME      READY   STATUS    RESTARTS   AGE
testpod   1/1     Running   105        29h
$ kubectl get nodes --kubeconfig ./alice_config
NAME                                          STATUS   ROLES    AGE   VERSION
gke-security-lab-default-pool-46f98c95-qsdj   Ready    <none>   46h   v1.13.11-gke.9
$ kubectl get pods -n kube-system --kubeconfig ./alice_config
NAME                                                     READY   STATUS    RESTARTS   AGE
event-exporter-v0.2.4-5f88c66fb7-6l485                   2/2     Running   0          46h
fluentd-gcp-scaler-59b7b75cd7-858kx                      1/1     Running   0          46h
fluentd-gcp-v3.2.0-5xlw5                                 2/2     Running   0          46h
heapster-5cb64d955f-mvnhb                                3/3     Running   0          46h
kube-dns-79868f54c5-kv7tk                                4/4     Running   0          46h
kube-dns-autoscaler-bb58c6784-892sv                      1/1     Running   0          46h
kube-proxy-gke-security-lab-default-pool-46f98c95-qsdj   1/1     Running   0          46h
l7-default-backend-fd59995cd-gzvnj                       1/1     Running   0          46h
metrics-server-v0.3.1-57c75779f-dfjlj                    2/2     Running   0          46h
prometheus-to-sd-k6627                                   1/1     Running   0          46h

Through the above few commands, we verified that Alice is able to view the pods in multiple namespaces and also get information about the cluster nodes.

LAB 02: The Nginx-Ingress Controller RBAC, A Real-World Example

Kubernetes uses the Ingress resource as a means of routing external traffic to one or more services inside the cluster.

However, the Ingress resource only specifies the rules that should be followed (for example, example.com/users should go to the users-svc service, example.com/auth gets routed to the auth-svc service, and so on). To actually put those rules into effect, you need a controller. Kubernetes does not offer an Ingress controller of its own (at least at the time of this writing); it leaves the choice to you from among many Ingress controller providers. In this example, we discuss the Nginx Ingress Controller. For the controller to function correctly, it needs to operate through a service account, a Role, a ClusterRole, and the necessary bindings for those roles to work. Let’s see how we can configure our cluster for this type of access and see how RBAC works.
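The routing rules just described could be expressed in an Ingress resource roughly like this. The resource name example-ingress and the service port 80 are assumptions for the sketch:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress        # hypothetical name
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /users           # example.com/users -> users-svc
        pathType: Prefix
        backend:
          service:
            name: users-svc
            port:
              number: 80       # assumed service port
      - path: /auth            # example.com/auth -> auth-svc
        pathType: Prefix
        backend:
          service:
            name: auth-svc
            port:
              number: 80       # assumed service port
```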

Note: if you actually need to deploy the Nginx Ingress controller, the RBAC steps we discuss below will probably be part of an automated deployment method, such as the Nginx controller Helm chart. We’re doing the RBAC part manually for learning purposes only.

The Service Account

This is the simplest part. You only need to create a service account that will later be used by the ingress controller. All the roles and bindings we’ll create will be tied to this service account. Create a new file called service-account.yaml and add the following lines:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx

Apply the file:

$ kubectl apply -f service-account.yaml
serviceaccount/nginx-ingress-serviceaccount created

The Role

The role looks like this:

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

The role has several permissions; let’s go through them briefly:

  • Lines 6 to 14: give the controller read access (get) to configmaps, pods, secrets, and namespaces.
  • Lines 15 to 23: allow the controller read and write access to one specific configmap, ingress-controller-leader-nginx. This resource is created as part of the controller’s deployment steps, and the write access is granted through the update verb.
  • Lines 24 to 29: enable the controller to create configmaps by granting the create verb on the configmap resource. Notice that this is a Role, not a ClusterRole, so those actions are bound to a specific namespace (ingress-nginx, as we’ll see later).
  • Lines 30 to 35: specify that the controller should also have read access to the endpoints resource.
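Once the Role and its binding (shown later in this lab) are applied, you can verify the service account’s effective permissions with kubectl auth can-i, impersonating the account. The namespace and account name match the ones used in this lab:

```shell
# Expected "yes": the Role grants "get" on configmaps in ingress-nginx
kubectl auth can-i get configmaps \
  --as=system:serviceaccount:ingress-nginx:nginx-ingress-serviceaccount \
  -n ingress-nginx

# Expected "no": neither the Role nor the ClusterRole grants "delete" on pods
kubectl auth can-i delete pods \
  --as=system:serviceaccount:ingress-nginx:nginx-ingress-serviceaccount \
  -n ingress-nginx
```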

The ClusterRole

The ClusterRole includes permissions that apply to the entire cluster. The file looks as follows:

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update

Note that, because cluster-wide permissions need to be handled carefully, the ClusterRole grants the ingress controller write access only where strictly needed: it may create and patch events (lines 39 and 40) and update the ingresses/status resource (line 56). The rest of the role grants read-only verbs to the other resources.

The RoleBinding And ClusterRoleBinding

The final step is to bind the service account that was created earlier to the Role and the ClusterRole through a RoleBinding and a ClusterRoleBinding, as follows:

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

Notice that the RoleBinding restricts the permissions to the ingress-nginx namespace (line 16).

Applying the above files to the cluster enables the Ingress controller to watch the API server for new Ingress resources, parse them for rules, and create the necessary routing to send traffic to the appropriate services based on those rules.

TL;DR

  • Kubernetes handles authorization through RBAC (Role-Based Access Control)
  • RBAC works by using Roles and ClusterRoles. Roles operate in the context of a namespace while ClusterRoles work cluster-wide.
  • To enable an entity (a human user or a program) to use a role, you must create a binding that references the role and the entities that are bound to that role. For Roles, we use the RoleBinding resource while for ClusterRoles we use ClusterRoleBinding.
  • RBAC uses verbs to define the type of access the entity can use. Some verbs allow read-only access while others entail write-access.
  • Out of the box, Kubernetes provides some predefined roles. Some of them are used by the system and shouldn’t be altered by administrators, while others are meant to serve as quick templates for applying permissions to users. For example, the cluster-admin role is used to grant cluster-wide admin access to users.
  • We had two labs in this article. In the first one, we demonstrated how we could create a user account for a new administrator using the X.509 certificate authentication strategy. Then we used the cluster-admin role to give the user administrative privileges without having to create a role and manually specifying permissions.
  • In the second lab, we discussed a real-world example of the necessary RBAC permissions required by the Nginx-Ingress controller to function correctly.
