How to grant users restricted access to a Kubernetes cluster with a client certificate

Gaurav Wadghule
4 min read · Jan 14, 2021

Hello folks! Have you ever faced a situation where the developers on your team need to see the status of Pods, Services, or other resources in a development cluster, but you don't want to give them full access? Sometimes developers need to see the logs of an application even though they don't know much about Kubernetes. In that case, we can give them restricted access. So I have created this simple guide to grant restricted access to teammates.

Generate a private key and CSR (certificate signing request)

openssl genrsa -out developer.key 4096
openssl req -new -key developer.key -out developer.csr

This command will ask for your details like country, state, city, organization name, department, common name (CN), etc. You can also set these fields non-interactively, as shown below.

Note: Remember the common name (CN); it will be used as your username during authentication.
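If you prefer to skip the interactive prompts, the subject can be passed on the command line. A minimal sketch; CN=developer and O=dev-team are example values, use whichever username and group you need:

# non-interactive CSR: in Kubernetes the CN becomes the username, O becomes a group
openssl req -new -key developer.key -out developer.csr -subj "/CN=developer/O=dev-team"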

Convert CSR to base64

csr_base64=$(cat developer.csr | base64 | tr -d '\n')

Create a Certificate Signing Request in Kubernetes

cat << EOF > CertificateSigningRequest.yaml
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: developer
spec:
  groups:
  - system:authenticated
  request: ${csr_base64}
  usages:
  - client auth
EOF

Substitute csr_base64 in request

cat CertificateSigningRequest.yaml | envsubst | kubectl apply -f -
kubectl get csr

Approve Certificate Signing Request

kubectl certificate approve developer

Extract Client Certificate from approved CSR

kubectl get csr developer -o jsonpath='{.status.certificate}' | base64 --decode > developer-client-certificate.crt
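Optionally, you can inspect the issued certificate to confirm the subject (your CN) and the validity period, as a quick sanity check with openssl:

openssl x509 -in developer-client-certificate.crt -noout -subject -dates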

Now use the generated client certificate file (developer-client-certificate.crt) to create a kubeconfig file:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <CA-DATA>
    server: https://<APISERVER-HOST>:<APISERVER-PORT>
  name: <CLUSTER-NAME>
contexts:
- context:
    cluster: <CLUSTER-NAME>
    user: <USER> # e.g. kube-ops
  name: <USER>@<CLUSTER-NAME>
current-context: <USER>@<CLUSTER-NAME>
kind: Config
users:
- name: <USER> # e.g. kube-ops
  user:
    client-certificate-data: <CLIENT-CRT-DATA>
    client-key-data: <CLIENT-KEY-DATA>
  1. Update <APISERVER-HOST> and <APISERVER-PORT> with your Kubernetes API server (i.e. master) host and port.
  2. Update <CLUSTER-NAME> with your Kubernetes cluster name.
  3. Update <USER> with your username.
  4. Update <CA-DATA> with the base64-encoded Kubernetes CA certificate.
  5. Update <CLIENT-CRT-DATA> with the base64-encoded client certificate developer-client-certificate.crt.
  6. Update <CLIENT-KEY-DATA> with the base64-encoded client key developer.key.

You can generate <CA-DATA>, <CLIENT-CRT-DATA>, and <CLIENT-KEY-DATA> with the following commands:

# Generate the <CLIENT-CRT-DATA>
cat developer-client-certificate.crt | base64 | tr -d '\n'
# Generate the <CLIENT-KEY-DATA>
cat developer.key | base64 | tr -d '\n'
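The <CA-DATA> value comes from your cluster's CA certificate. The path below is an assumption (it is the default location on kubeadm-provisioned clusters); you can also copy certificate-authority-data from an existing admin kubeconfig:

# Generate the <CA-DATA> (path assumes a kubeadm cluster)
cat /etc/kubernetes/pki/ca.crt | base64 | tr -d '\n'

Alternatively, instead of editing the YAML by hand, the same kubeconfig can be assembled with kubectl config commands. This is a sketch; the output file name developer-kubeconfig, the user name developer, and the placeholder values are assumptions you should replace with your own:

kubectl config set-cluster <CLUSTER-NAME> --server=https://<APISERVER-HOST>:<APISERVER-PORT> \
  --certificate-authority=/etc/kubernetes/pki/ca.crt --embed-certs=true --kubeconfig=developer-kubeconfig
kubectl config set-credentials developer --client-certificate=developer-client-certificate.crt \
  --client-key=developer.key --embed-certs=true --kubeconfig=developer-kubeconfig
kubectl config set-context developer@<CLUSTER-NAME> --cluster=<CLUSTER-NAME> --user=developer \
  --kubeconfig=developer-kubeconfig
kubectl config use-context developer@<CLUSTER-NAME> --kubeconfig=developer-kubeconfig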

Create Role and RoleBinding

You can change the access according to your needs. Here I am assuming my teammates need to see the logs of pods in the dev namespace. See the Kubernetes documentation on Role-based access control for more details.

kubectl create role developer --verb=get --resource=pods,pods/log -n dev
kubectl create rolebinding developer --role=developer --user=gaurav -n dev
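For reference, this is roughly the declarative equivalent of the two commands above (a sketch of the Role and RoleBinding that get created):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: developer
  namespace: dev
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developer
  namespace: dev
subjects:
- kind: User
  name: gaurav
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developer
  apiGroup: rbac.authorization.k8s.io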

Now let's test the access given to the user.

Note: Here the user (gaurav) is the common name (CN) that we gave while creating the certificate signing request.

kubectl auth can-i get pods/log --as=gaurav -n dev
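Once the kubeconfig file is in the developer's hands, they can use it directly. A quick sketch, assuming the developer-kubeconfig file built earlier; <pod-name> is a placeholder for a real pod:

# allowed by the role: get a named pod and read its logs in the dev namespace
kubectl --kubeconfig=developer-kubeconfig get pod <pod-name> -n dev
kubectl --kubeconfig=developer-kubeconfig logs <pod-name> -n dev
# not allowed: anything outside the granted verbs/resources, e.g. reading secrets
kubectl --kubeconfig=developer-kubeconfig get secrets -n dev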

I have created a bash script for all the above steps here. Change the required values accordingly and execute the script; it will run all the above steps automatically. Then you just need to create a kubeconfig file from the generated client certificate for your user.

https://github.com/gauravwadghule/kubernetes-authenticatication/blob/main/create-user.sh

