Kubernetes authorization plays an important role in securing a cluster. While authentication validates the identity of a subject to decide whether it should be granted access at all, authorization governs what that subject can do once access is granted. Using authorization mechanisms, you can fine-tune who has access to which resources on your Kubernetes cluster. In this article, we discuss Role-Based Access Control (RBAC) and how you can use it to secure your cluster.
Kubernetes uses the API server to authorize requests against a set of policies. It’s worth noting that authorization is a step that comes only after authentication has succeeded. The workflow goes as follows:
The user is authenticated to the API server using one of the supported authentication methods. For more information about this process, please refer to our article about Kubernetes Authentication.
Assume the request aims to retrieve a list of pods in the kube-system namespace (for example, using kubectl get pods -n kube-system). After the user is authenticated in step 1, the credentials are passed to the authorization module together with the verb, the resource, and the other attributes of the request the user is trying to execute.
If the user (by which we mean a human user as well as an application) is authorized to execute the request, it is passed on to the admission controllers. Otherwise, the API server replies with a 403 Forbidden HTTP status code.
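To make those attributes concrete, here is a minimal sketch of a SubjectAccessReview object, the API that the authorization layer itself evaluates; the field values mirror the kube-system pod-listing example above, and the user name alice is just an arbitrary example:
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  user: alice
  resourceAttributes:
    verb: list
    resource: pods
    namespace: kube-system
Creating this object against the API server (for example, kubectl create -f sar.yaml -o yaml) returns a status containing allowed: true or allowed: false, which is the same yes-or-no decision described in the workflow above.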
Role-Based Access Control (RBAC for short) has been generally available in Kubernetes since version 1.8. You’re strongly encouraged to use it, even in non-production environments. There are some important terms that you should know when dealing with RBAC:
Once you create a role, you need to define which actions the Entity can execute. They can be classified into:
Note the distinction between Roles and ClusterRoles. A Role is bound to a namespace. For example, you may create a Role for accessing pods in the kube-system namespace; pods in the default namespace, or in any namespace other than kube-system, fall outside that Role’s authority. A ClusterRole, on the other hand, applies cluster-wide: all namespaces abide by its rules. Some resources, such as nodes, are not namespaced by nature, so granting access to them requires a ClusterRole.
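To make the distinction concrete, here is a minimal, hypothetical sketch (the names pod-reader and node-reader are illustrative and not used elsewhere in this article): a Role that can only read pods in the kube-system namespace, and a ClusterRole that can read nodes anywhere in the cluster:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: kube-system
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-reader
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]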
The API server automatically creates a number of default ClusterRoles and ClusterRoleBindings that different components of the cluster need in order to function correctly. These roles are prefixed with system: to indicate that they are created and owned by the infrastructure itself. For example, the system:node role is used by the kubelet. Modifying this role may result in nodes not functioning correctly and, consequently, may bring the entire cluster to a halt. In addition to the system: prefix, the system roles and role bindings have the kubernetes.io/bootstrapping=rbac-defaults label attached to them.
Kubernetes also creates default roles that are intended to be granted to users. Such roles can be considered pre-made templates for common functions; they are called user-facing roles.
ClusterRole | Default ClusterRoleBinding | Description
cluster-admin | system:masters group | Can be treated like the root user on a Linux machine. Notice that when used with a RoleBinding, it gives the user full access to every resource in that RoleBinding’s namespace, including the namespace itself (but not other namespaces). When used with a ClusterRoleBinding, it gives the user full control over every resource in the entire cluster (all namespaces).
admin | None | Used with a RoleBinding, it gives the user full access to the resources inside that namespace, including other Roles and RoleBindings. However, it does not allow write access to resource quotas or to the namespace itself.
edit | None | Used with a RoleBinding, it grants the user the same access level as admin on the given namespace, except for viewing or modifying Roles and RoleBindings.
view | None | Used with a RoleBinding, it grants the user read-only access to most resources in the namespace, except Roles, RoleBindings, and Secrets.
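As an illustration of how these user-facing roles are typically consumed, here is a minimal, hypothetical sketch of a RoleBinding (the binding name, the staging namespace, and the user bob are placeholders, not objects used elsewhere in this article) that grants the built-in view ClusterRole to a single user within a single namespace:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: view-staging
  namespace: staging
subjects:
- kind: User
  name: bob
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
Referencing a ClusterRole from a RoleBinding is a common pattern: the role is defined once for the whole cluster, while the binding scopes its permissions down to one namespace.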
Requirement: a new colleague has joined your team. After passing the probation period, she needs administrative access to the cluster. The authentication method used in your organization is X.509 client certificates. The first step is to create an account for her.
1. Download and install the CFSSL tool from https://pkg.cfssl.org/
2. Create a certificate-signing request JSON file named user.json as follows:
{
  "CN": "alice",
  "key": {
    "algo": "rsa",
    "size": 4096
  },
  "names": [{
    "O": "alice",
    "email": "alice@mycompany.com"
  }]
}
Generate the CSR (Certificate Signing Request) from the file as follows: cfssl genkey user.json | cfssljson -bare client. The output should be as follows:
$ cfssl genkey user.json | cfssljson -bare client
2019/11/09 18:14:33 [INFO] generate received request
2019/11/09 18:14:33 [INFO] received CSR
2019/11/09 18:14:33 [INFO] generating key: rsa-4096
2019/11/09 18:14:34 [INFO] encoded CSR
3. You should now have a client.csr file containing the CSR data, along with a client-key.pem file containing the private key that was used to sign the request.
4. Convert the request to Base64 encoding using the following command: cat client.csr | base64 | tr -d '\n'. Keep a copy of the resulting text, as we’ll use it in the next step.
5. Create a CertificateSigningRequest resource by creating a csr.yaml file and adding the following lines:
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: alice
spec:
  groups:
  - mycompany
  request: LS0tLS1CRUdJTiBDRVJUSUZJQ0F...
  usages:
  - digital signature
  - key encipherment
  - client auth
Notice that the request field is the base64-encoded CSR that we obtained in the previous step.
6. Send the request to the API server using kubectl as follows:
$ kubectl apply -f csr.yaml
certificatesigningrequest.certificates.k8s.io/alice created
7. As a cluster admin, you can approve this certificate request using the following command:
$ kubectl certificate approve alice
certificatesigningrequest.certificates.k8s.io/alice approved
8. Now that the CSR is approved, we need to download the actual certificate (notice that the output is already base64-encoded; we don’t need to decode it because we’ll use it in the same form in the kubeconfig file later):
kubectl get csr alice -o jsonpath='{.status.certificate}'
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1...
9. Now Alice has her certificate approved. To use it, she needs a kubeconfig file that references her certificate, her private key, the cluster CA that signed the certificate, and the API server’s address. We already have the private key and the certificate; let’s get the rest of the information from the existing kubeconfig file that we have:
$ kubectl config view --flatten --minify
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0t...
    server: https://104.198.41.185
--- the rest of the output was trimmed for brevity ---
10. Given all the information that we have now, we can create a config file for Alice containing the following lines (make sure that the client-certificate-data, client-key-data, and certificate-authority-data values are base64-encoded before adding them to the config file):
apiVersion: v1
kind: Config
users:
- name: alice
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ…
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFU...
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDR...
    server: https://104.198.41.185
  name: gke
contexts:
- context:
    cluster: gke
    user: alice
  name: alice-context
current-context: alice-context
The last step is to hand the file to Alice so that she can add it under ~/.kube/config. Before doing that, we can verify that her credentials work by passing the config file we’ve just created (let’s say we named it alice_config) to kubectl. Let’s try listing the pods:
$ kubectl get pods --kubeconfig ./alice_config
Error from server (Forbidden): pods is forbidden: User "alice" cannot list resource "pods" in API group "" in the namespace "default"
The response that we received is very important because it verifies that the API server recognizes the certificate. The output states “Forbidden”, which means the API server acknowledges the presence of a user called alice; however, she does not have permission to view the pods in the default namespace. That makes sense. Let’s give her the required permissions.
Since we need to grant cluster-wide admin permissions to Alice, we can use the ready-made cluster-admin role. Hence, we only need a ClusterRoleBinding resource. Create a YAML file with the following lines:
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: cluster-admin-binding
subjects:
- kind: User
  name: alice
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
Two points of interest here:
The subjects field (line 5) is an array, so we can add multiple users within the same resource.
In line 11, we refer to the cluster-admin ClusterRole, which already exists (as we discussed earlier). Hence, we save ourselves the trouble of creating a ClusterRole.
Apply the above YAML file using kubectl:
$ kubectl apply -f clusterrolebinding.yml
clusterrolebinding.rbac.authorization.k8s.io/cluster-admin-binding created
Now, let’s double-check that Alice can execute commands against the cluster:
$ kubectl get pods --kubeconfig ./alice_config
NAME READY STATUS RESTARTS AGE
testpod 1/1 Running 105 29h
$ kubectl get nodes --kubeconfig ./alice_config
NAME STATUS ROLES AGE VERSION
gke-security-lab-default-pool-46f98c95-qsdj Ready 46h v1.13.11-gke.9
$ kubectl get pods -n kube-system --kubeconfig ./alice_config
NAME READY STATUS RESTARTS AGE
event-exporter-v0.2.4-5f88c66fb7-6l485 2/2 Running 0 46h
fluentd-gcp-scaler-59b7b75cd7-858kx 1/1 Running 0 46h
fluentd-gcp-v3.2.0-5xlw5 2/2 Running 0 46h
heapster-5cb64d955f-mvnhb 3/3 Running 0 46h
kube-dns-79868f54c5-kv7tk 4/4 Running 0 46h
kube-dns-autoscaler-bb58c6784-892sv 1/1 Running 0 46h
kube-proxy-gke-security-lab-default-pool-46f98c95-qsdj 1/1 Running 0 46h
l7-default-backend-fd59995cd-gzvnj 1/1 Running 0 46h
metrics-server-v0.3.1-57c75779f-dfjlj 2/2 Running 0 46h
prometheus-to-sd-k6627 1/1 Running 0 46h
Through the above few commands, Alice is able to view the pods in multiple namespaces and also get information about the cluster nodes.
Kubernetes uses the Ingress resource as a means of routing external traffic to one or more services inside the cluster.
However, the Ingress resource only specifies the rules that should be followed (for example, example.com/users should go to the users-svc service, example.com/auth gets routed to the auth-svc service, and so on). To actually put those rules into effect, you need a controller. Kubernetes does not offer an Ingress controller of its own (at least at the time of this writing); it leaves the choice to you from among many Ingress controller providers. In this example, we discuss the NGINX Ingress Controller. For the controller to function correctly, it needs to operate through a service account, a Role, a ClusterRole, and the necessary bindings for those roles. Let’s see how we can configure our cluster for this type of access and how RBAC works in practice.
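To make those rules concrete, here is a minimal sketch of such an Ingress resource using the current networking.k8s.io/v1 API (the host example.com and the services users-svc and auth-svc are the illustrative names from the paragraph above, not services deployed in this article):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /users
        pathType: Prefix
        backend:
          service:
            name: users-svc
            port:
              number: 80
      - path: /auth
        pathType: Prefix
        backend:
          service:
            name: auth-svc
            port:
              number: 80
On its own, this object does nothing; an Ingress controller has to watch for it and translate the rules into actual proxy configuration, which is exactly why the controller needs the RBAC permissions we set up below.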
Note: if you actually need to deploy the NGINX Ingress Controller, the RBAC steps we discuss below will usually be applied as part of an automated deployment method, such as the NGINX Ingress Controller Helm chart. We’re doing the RBAC part manually for learning purposes only.
This is the simplest part. You only need to create a service account that the Ingress controller will use; all the roles and bindings we create will be tied to it. Since the bindings below refer to the service account in the ingress-nginx namespace, we create it there. Create a new file called service-account.yaml and add the following lines:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
Apply the file:
$ kubectl apply -f service-account.yaml
serviceaccount/nginx-ingress-serviceaccount created
The Role (again created in the ingress-nginx namespace, where the RoleBinding below expects it) looks like this:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - pods
  - secrets
  - namespaces
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - configmaps
  resourceNames:
  - "ingress-controller-leader-nginx"
  verbs:
  - get
  - update
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - endpoints
  verbs:
  - get
The Role has several permissions; let’s go through them briefly:
It can get configmaps, pods, secrets, and namespaces: the namespace-scoped objects the controller reads while building its configuration.
It can get and update one specific ConfigMap, ingress-controller-leader-nginx, which the controller uses for leader election when it runs with multiple replicas.
It can create configmaps, for example the leader-election ConfigMap if it does not exist yet.
It can get endpoints, so it knows which pods back the services it routes traffic to.
The ClusterRole includes permissions that apply to the entire cluster. The file looks as follows:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - nodes
  - pods
  - secrets
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - "extensions"
  - "networking.k8s.io"
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - "extensions"
  - "networking.k8s.io"
  resources:
  - ingresses/status
  verbs:
  - update
Note that, because cluster-wide permissions need to be handled carefully, the ClusterRole grants write access very sparingly: the create and patch verbs in lines 39 and 40 apply to events (which the controller uses to report what it is doing), and the update verb in line 56 applies to the ingresses/status sub-resource (which the controller updates with the address of the load balancer). Everything else in the ClusterRole is read-only (get, list, and watch).
The final step is to bind the service account we created earlier to the Role and the ClusterRole, using a RoleBinding and a ClusterRoleBinding as follows:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
- kind: ServiceAccount
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
- kind: ServiceAccount
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
Notice that the RoleBinding lives in the ingress-nginx namespace (line 5), so the permissions it grants are restricted to that namespace, and its subject is the service account in that same namespace (line 16).
Applying the above files to the cluster enables the Ingress controller to watch the API server for new Ingress resources, parse them for routing rules, and create the configuration necessary to route traffic to the appropriate services based on those rules.
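As a final note, none of the above takes effect until the controller’s pods actually run under this service account. Here is a rough, hypothetical sketch of what that looks like in the controller’s Deployment (the Deployment name and the image reference are placeholders; in a real installation they come from the official manifests or Helm chart):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
    spec:
      # The pod authenticates to the API server with a token for this service account,
      # so every request the controller makes is authorized against the Role and ClusterRole bound above.
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
      - name: nginx-ingress-controller
        image: nginx-ingress-controller:placeholder  # hypothetical image reference; use the one from the official install guide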