Configuring RBAC, TLS Node Bootstrapping On An Existing Kubernetes (1.11) Cluster.

Below is a continuation of my previous posts (parts 1-6) on how to configure a Kubernetes 3-master-node cluster. In the post below I am going to show you:
  1. How to enable and configure RBAC on your existing Kubernetes cluster.
  2. How to automatically bootstrap a Kubernetes worker node when SSL/TLS is being used.
Please check out the full series to see how to configure a 3-node Kubernetes master cluster; the links are below. This is Part 7 – Enabling / Configuring RBAC, TLS Node bootstrapping.

Enabling RBAC in your Kubernetes cluster.

A very important aspect of every Kubernetes deployment is security. One of the security features Kubernetes has added over the years is Role-Based Access Control (RBAC); it has been around since version 1.6 (in beta) and is now very stable. Note: The RBAC configuration was left out of the Kubernetes examples in this series (parts 1-6). Below I am going to show you how to enable and configure RBAC on your existing Kubernetes cluster. Note: The configuration below assumes you are using a recent Kubernetes version.

So let's jump right in. All you need to do to enable RBAC on your Kubernetes cluster is turn it on on your kube-apiserver nodes; since almost every operation in Kubernetes passes through the API server, that is the place to enable RBAC. However, to successfully configure RBAC there are still a number of configuration changes required, all outlined below. Note: Using SSL and turning on RBAC is not the only security configuration needed to secure a Kubernetes implementation; there are a number of other configurations you should consider, for example additional authentication/authorization like LDAP, etc.

First, let's verify that our cluster supports RBAC. You do so by running the below; the output should look something like this.
kubectl api-versions|grep authorization
authorization.k8s.io/v1
authorization.k8s.io/v1beta1
rbac.authorization.k8s.io/v1
rbac.authorization.k8s.io/v1beta1
As mentioned above, turning on RBAC is as simple as adding the below to your Kubernetes API server. But in order to still be able to administer your cluster, you will have to create and authorize a service account, all outlined below. First, create a service account (below I am using admin as the account name; feel free to use whatever you like).
kubectl create serviceaccount admin --namespace kube-system

# Verify the new account
kubectl get serviceAccounts -n kube-system admin -o yaml
In Kubernetes there are two types of permission assignments:
  1. Full cluster-wide
  2. Namespace only
We will be using the pre-defined cluster-admin role, which grants cluster-wide access. Note: Recent versions of Kubernetes include a list of pre-defined roles; in older versions you might need to create the role yourself. You can check all available roles by running the below; we will be using the cluster-admin role.
kubectl get clusterroles
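To see exactly what the pre-defined cluster-admin role grants (all verbs on all resources), you can describe it:
# Show the policy rules attached to the cluster-admin role.
kubectl describe clusterrole cluster-admin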
The below is only necessary if you followed parts 1-6 of this series. When using SSL, Kubernetes uses the certificate's CN= as the identity for authentication, meaning if cn=usera, then usera is the user we need to grant access rights to. In our case the certificates used CN=etcd-node. If you like, you can keep the name as CN=etcd-node and use that name to grant the required access below. But what if you want to change it? You can re-generate the certificates using the steps in Part 2, then copy (scp) all the *.pem files that get generated to /etc/kubernetes/ssl (make sure to stop all your master and worker nodes before doing so). I decided to modify/update the certificate definitions with the user name admin (like the below), then redistributed (scp) all the certificates. Note: I changed the cn= to be cn=admin and the CA CN to be CN=Kube-CA. Below is an updated certificate generating script you can use.
# Generate the CA private key
openssl genrsa -out ca-key.pem 2048
sed -i 's/^CN.*/CN                 = Kube-CA/g' cert.conf

# Generate the CA certificate.
openssl req -x509 -new -extensions v3_ca -key ca-key.pem -days 3650 \
-out ca.pem \
-subj '/C=US/ST=New York/L=New York/O=example.com/CN=Kube-CA' \
-config cert.conf

# Generate the server/client private key.
openssl genrsa -out etcd-node-key.pem 2048
sed -i 's/^CN.*/CN                 = admin/g' cert.conf

# Generate the server/client certificate request
# (the key already exists, so -newkey/-keyout are not needed).
openssl req -new -key etcd-node-key.pem \
-config cert.conf \
-subj '/C=US/ST=New York/L=New York/O=bnh.com/CN=admin' \
-outform pem -out etcd-node-req.pem
# Sign the server/client certificate request.
openssl x509 -req -in etcd-node-req.pem \
-CA ca.pem -CAkey ca-key.pem -CAcreateserial \
-out etcd-node.pem -days 3650 -extensions v3_req -extfile cert.conf
Below is the output of the newly generated certificate. Notice, I left the file name the same, as changing it would require too many changes. Notice the CN=Kube-CA and CN=admin.
openssl x509 -in /etc/kubernetes/ssl/etcd-node.pem -text -noout
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            d3:e7:f0:b1:01:32:e5:76
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=US, ST=NY, L=New York, O=Company1, OU=Ops, CN=Kube-CA
        Validity
            Not Before: Aug 16 18:40:00 2018 GMT
            Not After : Aug 13 18:40:00 2028 GMT
        Subject: C=US, ST=NY, L=New York, O=Company1, OU=Ops, CN=admin
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:e7:13:79:87:9c:99:5a:58:0f:3e:d0:0d:49:7d:
                    4e:4b:4e:58:e6:84:a3:70:5a:17:f6:ae:9b:3f:30:
                    3a:ef:53:bb:09:24:88:c3:2c:42:86:18:5c:0a:c0:
                    51:80:85:20:5a:33:17:42:49:31:7a:6a:09:ab:e7:
                    66:17:a0:8b:30:21:c3:27:f8:61:cb:03:a4:06:86:
                    29:cc:ec:de:e0:57:af:d2:d4:4e:af:72:93:0c:e7:
                    43:3e:48:6a:3b:eb:34:f0:74:71:62:d8:ae:ca:a7:
                    11:d5:01:23:e3:45:9e:c6:3e:94:e9:94:19:b6:ad:
                    63:e0:cf:9d:54:66:00:91:0b:43:dd:37:2a:c1:04:
                    75:28:61:82:2e:32:99:5b:43:d7:52:45:e9:d1:bf:
                    5a:9c:05:6a:ee:fd:ef:69:88:8e:c6:9e:1f:b3:6c:
                    13:79:91:b2:02:e6:7f:79:3a:46:48:8f:c3:7f:24:
                    be:89:fd:8b:1d:99:bd:56:be:df:32:e2:59:40:af:
                    f8:8d:5f:4d:3a:35:08:52:7a:03:46:75:0d:6d:db:
                    5a:55:58:76:1b:0f:92:1c:78:6d:61:1a:ab:69:68:
                    1a:00:24:3b:c1:4a:76:40:a6:d2:cf:17:ce:ee:65:
                    a5:d8:6e:8e:a1:2b:17:1e:3a:80:6a:b4:80:1d:dd:
                    f7:a3
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Non Repudiation, Key Encipherment
            X509v3 Extended Key Usage:
                TLS Web Client Authentication, TLS Web Server Authentication
            X509v3 Basic Constraints: critical
                CA:FALSE
            X509v3 Subject Key Identifier:
                FB:A2:70:18:F5:E7:07:C9:D8:D9:DF:A3:57:A4:FC:AE:D7:3E:29:EC
            X509v3 Subject Alternative Name:
                DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local, DNS:kube-apiserver, DNS:kube-admin, DNS:localhost, DNS:domain.com, DNS:kmaster1, DNS:kmaster2, DNS:kmaster3, DNS:kmaster1.local, DNS:kmaster2.local, DNS:kmaster3.local, DNS:kmaster1.domain.com, DNS:kmaster2.domain.com, DNS:kmaster3.domain.com, DNS:knode1, DNS:knode2, DNS:knode3, DNS:knode1.domain.com, DNS:knode2.domain.com, DNS:knode3.domain.com, IP Address:127.0.0.1, IP Address:0.0.0.0, IP Address:10.3.0.1, IP Address:10.3.0.10, IP Address:10.3.0.50, IP Address:172.20.0.1, IP Address:172.20.0.2, IP Address:172.20.0.11, IP Address:172.20.0.12, IP Address:172.20.0.13, IP Address:172.20.0.51, IP Address:172.20.0.52, IP Address:172.20.0.53, email:admin@example.com
    Signature Algorithm: sha256WithRSAEncryption
         56:83:63:7a:ac:06:12:54:b1:b1:3b:04:ff:d0:52:89:35:61:
         d3:85:37:4f:26:1c:73:43:fe:1b:da:20:28:2d:83:32:8a:15:
         2c:1d:20:7a:93:89:16:7b:4f:1f:da:ad:5d:ff:56:8f:6e:f1:
         e4:1b:a1:72:3f:a6:59:b5:8b:37:4f:ae:ad:21:e0:7e:01:59:
         dd:a2:86:c7:80:1b:cf:5c:09:0b:12:55:13:5d:a3:19:d6:44:
         94:af:f3:9d:d4:22:53:0e:b7:88:3a:20:9e:f4:7d:c2:3a:21:
         d9:c9:3a:fe:09:6e:ae:96:5c:7c:bf:ae:81:74:a4:c5:34:b2:
         a6:57:e8:86:39:2f:e6:d0:19:5d:ca:05:17:df:fc:21:30:5b:
         30:a2:a1:f8:eb:f4:b8:f2:fa:c5:50:14:d2:fd:be:8b:5d:f1:
         f2:9e:f2:7e:bf:d3:2d:59:47:9f:e3:50:4f:3e:6a:71:56:0a:
         52:3d:08:69:2a:ee:1d:1e:6a:be:f0:63:f1:0e:00:85:74:48:
         17:76:8b:2b:e8:e6:66:88:93:31:7d:39:b5:60:b5:fe:31:67:
         d0:4b:81:c4:41:11:83:6a:9a:be:c6:ed:6b:2e:d7:9e:07:43:
         8b:56:d8:8c:61:0e:8d:a2:56:70:4e:36:42:02:34:ed:77:01:
         3b:d2:0d:52
We now need to assign the cluster-admin role to the new admin service account we created. Note: If you didn't re-do your certificates, then use etcd-node instead of admin below.
kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole=cluster-admin \
--namespace=kube-system \
--user=admin

# If you kept the certificate created in Part 2, run the below.
kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole=cluster-admin \
--namespace=kube-system \
--user=etcd-node
Note: Without creating the clusterrolebinding, the kube-apiserver will not come online after enabling RBAC (below). To verify the cluster role binding got created as expected, just run the below.
kubectl get clusterrolebinding cluster-admin-binding -o yaml
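You can also check effective permissions directly with kubectl auth can-i (assuming your current credentials are allowed to impersonate users):
# Should return "yes" once the binding is in place.
kubectl auth can-i '*' '*' --as=admin

# Or, if you kept the Part 2 certificates:
kubectl auth can-i '*' '*' --as=etcd-node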
Now, let's enable RBAC in your API server; you do so by adding the below.
cat /etc/kubernetes/manifests/kube-apiserver.yaml
- apiserver   <<<<<---- add in this section of the file
... [snip]
- --authorization-mode=Node,RBAC
... [snip]
Also, replace the below line.
cat /etc/kubernetes/manifests/kube-apiserver.yaml
# From
    - --runtime-config=extensions/v1beta1/networkpolicies=true,extensions/v1beta1=true

# To
    - --runtime-config=extensions/v1beta1/networkpolicies=true,extensions/v1beta1=true,rbac.authorization.k8s.io/v1
Note: Make sure to create the below ClusterRole and ClusterRoleBinding; they allow the kube-proxy running on the worker nodes to generate the proper iptables rules for ClusterIP / service IP access.
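A minimal sketch of such a ClusterRole and ClusterRoleBinding is below, assuming kube-proxy authenticates as a user named kube-proxy (adjust the subject to match the CN in your kube-proxy certificate or kubeconfig):
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kube-proxier
rules:
# kube-proxy watches services and endpoints to build iptables rules.
- apiGroups: [""]
  resources: ["services", "endpoints"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kube-proxy-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-proxier
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kube-proxy
EOF
Alternatively, recent Kubernetes versions ship a pre-defined system:node-proxier ClusterRole you can bind instead of creating your own.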
To make sure the proper iptables rules got generated, run the below and look for your service IPs.
iptables -t nat -L

# Example of iptable rule with service ip.
iptables -t nat -L|grep 10.3
KUBE-MARK-MASQ  tcp  -- !10.20.0.0/20         10.3.0.1             /* default/kubernetes:https cluster IP */ tcp dpt:https
KUBE-SVC-NPX46M4PTMTKRN6Y  tcp  --  anywhere             10.3.0.1             /* default/kubernetes:https cluster IP */ tcp dpt:https
KUBE-MARK-MASQ  udp  -- !10.20.0.0/20         10.3.0.10            /* kube-system/kube-dns:dns cluster IP */ udp dpt:domain
Make sure your API server(s) restart properly. In many instances you will need a full master node restart; otherwise you might see many errors and the node will not work as expected. Now is a good time to update your CoreDNS configuration to work with RBAC. Below is an updated coredns.yaml which includes RBAC as part of the config.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
      addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        log
        errors
        health
        kubernetes cluster.local 10.20.0.0/20 10.3.0.0/21 {
            pods insecure
            upstream 8.8.4.4 8.8.8.8
            fallthrough
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30 cluster.local 10.20.0.0/20 10.3.0.0/21
        reload
        loadbalance
    }
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      serviceAccountName: coredns
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      containers:
      - name: coredns
        image: coredns/coredns:1.2.1
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.3.0.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
Just apply the configuration by running the below.
kubectl apply -f coredns.yaml
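To confirm CoreDNS came up and is resolving cluster names, you can run something like the below (busybox:1.28 is just an example image for a quick lookup):
kubectl -n kube-system get pods -l k8s-app=kube-dns

# Quick DNS resolution test from a throwaway pod.
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default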
Next, let's move on to node bootstrapping.

Auto SSL/TLS Node bootstrapping

If you run a Kubernetes cluster with SSL enabled, one of the issues you might run into (sooner or later) is generating node SSL certificates that work with the masters, as well as certificate rotation; more is explained below.

When we initially configured our Kubernetes cluster, we used/generated our own CA certificate. We also generated a certificate containing a list of valid Alt Names and valid Alt IPs; for example, in our case we used kmaster1-kmaster3 as well as knode1-knode3. Now what happens on day 2 if we would like to add worker node 4 (knode4)? We would normally have to re-generate all certificates to achieve that. We would also like to streamline / automate the creation of new additional worker nodes with SSL fully working. To address these issues, Kubernetes created a set of additional roles and a process, outlined in full below. Note: The process below assumes you are using Kubernetes version 1.10+, since there are many enhancements/changes in the recent versions. However, I will try to highlight some of the key changes.

Creating the bootstrap account(s)

Let's begin by creating a bootstrap service account (if it doesn't exist).
kubectl create serviceaccount kubelet-bootstrap --namespace kube-system
Now let's assign the node bootstrapper role to this user; you do so by running the below.
kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap
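
# Verify the binding was created.
kubectl get clusterrolebinding kubelet-bootstrap -o yaml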

Configuring / generating Node token(s)

Let me try to explain the process of joining a new node to the cluster without a pre-generated certificate.
  • The new node comes online.
  • It connects to one of the master API servers over SSL.
  • It authenticates using a token.
  • It sends a certificate request with its node name to the API server.
  • The master API server validates the token and certificate request.
  • The master API server (auto) approves the certificate request (by the master controller-manager).
  • The master API server signs the node certificate request (by the master controller-manager).
  • The master API server hands back the signed certificate to the node.
  • The new node creates a new node client certificate file.
  • The node creates a permanent kubeconfig.yaml file and comes online.
So now that you have a better understanding of the process, let's implement the changes required to make this happen. Note: In order for a new node to communicate with the master API server to request a certificate, it will still need the master CA certificate. First, let's generate a token that we will use as the initial (temporary) authentication; you do so by running the below.
head -c 16 /dev/urandom | od -An -t x | tr -d ' '
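The output is a random 32-character hex string, for example:
3af4fcd12f67ff3418fa0f1f0a43bb62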
Next, take the output and let's generate our CSV token file, which we will use later for initial authentication; the format is token,user,uid,"group1,group2,...". An example file would look like the below – save it as token-bootstrap-auth.csv.
cat /etc/kubernetes/ssl/token-bootstrap-auth.csv
3af4fcd12f67ff3418fa0f1f0a43bb62,kubelet-bootstrap,10001,"system:node-bootstrapper"
Next, we have to modify our kube-apiserver.yaml. Make sure to add Node first in the --authorization-mode line.
cat /etc/kubernetes/manifests/kube-apiserver.yaml
- apiserver <<<<<<<<<<-------- add to this section
... [snip]
- --authorization-mode=Node,RBAC
- --token-auth-file=/etc/kubernetes/ssl/token-bootstrap-auth.csv
... [snip]
Next, add the below two lines to your config.yaml.
cat /etc/kubernetes/config.yaml
... [snip]
tlsPrivateKeyFile:
... <<-- add below the above line
rotateCertificates: true
serverTLSBootstrap: true
... [snip]
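For reference, the surrounding config.yaml (a KubeletConfiguration) would then look something like the below sketch; the tlsCertFile/tlsPrivateKeyFile paths are just examples matching this series:
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
... [snip]
tlsCertFile: /etc/kubernetes/ssl/etcd-node.pem
tlsPrivateKeyFile: /etc/kubernetes/ssl/etcd-node-key.pem
rotateCertificates: true
serverTLSBootstrap: true
... [snip]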

Create a Node bootstrap file

On the new worker node, we need to create a bootstrap-kubeconfig.yaml; this file will be used during the node's initial bootstrap process, i.e. to request a certificate. Note: The token line below should contain the same token generated on the master in token-bootstrap-auth.csv.
cat /etc/kubernetes/ssl/bootstrap-kubeconfig.yaml
apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    certificate-authority: /etc/kubernetes/ssl/ca.pem
    server: https://172.20.0.11:443
contexts:
- context:
    cluster: local
    user: tls-bootstrap-token-user
  name: tls-bootstrap-token-user@kubernetes
current-context: tls-bootstrap-token-user@kubernetes
preferences: {}
users:
- name: tls-bootstrap-token-user
  user:
    token: 3af4fcd12f67ff3418fa0f1f0a43bb62
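If you prefer not to write the file by hand, you can generate the same kubeconfig with kubectl config (a sketch using the same server, token, and paths as above):
kubectl config set-cluster local \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --server=https://172.20.0.11:443 \
  --kubeconfig=/etc/kubernetes/ssl/bootstrap-kubeconfig.yaml

kubectl config set-credentials tls-bootstrap-token-user \
  --token=3af4fcd12f67ff3418fa0f1f0a43bb62 \
  --kubeconfig=/etc/kubernetes/ssl/bootstrap-kubeconfig.yaml

kubectl config set-context tls-bootstrap-token-user@kubernetes \
  --cluster=local --user=tls-bootstrap-token-user \
  --kubeconfig=/etc/kubernetes/ssl/bootstrap-kubeconfig.yaml

kubectl config use-context tls-bootstrap-token-user@kubernetes \
  --kubeconfig=/etc/kubernetes/ssl/bootstrap-kubeconfig.yaml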
We are almost done. The last thing we need to modify is the kubelet.service file.
cat /etc/systemd/system/kubelet.service
... [snip]
--kubeconfig=/etc/kubernetes/ssl/kubeconfig.yaml \  <<<<<---- add below this line
...
--bootstrap-kubeconfig=/etc/kubernetes/ssl/bootstrap-kubeconfig.yaml \
--cert-dir=/etc/kubernetes/ssl \
Make sure to remove /etc/kubernetes/ssl/kubeconfig.yaml (if you have one) by running the below; otherwise the bootstrap won't kick in.
rm /etc/kubernetes/ssl/kubeconfig.yaml
Now we are ready for prime time. Reload the kubelet service and start the process.
systemctl daemon-reload

# Get startup debug info output
journalctl -u kubelet -f &

systemctl enable kubelet && systemctl start kubelet
To verify all is working as expected, head over to the master and run the below.
kubectl get csr --all-namespaces -o wide
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-Wnmy0oFiycWKaT8LsWRv1EN7m333dv-ZVYz-Bd2X14w   1m        kubelet-bootstrap   Approved,Issued

# And now
kubectl describe csr node-csr-Wnmy0oFiycWKaT8LsWRv1EN7m333dv-ZVYz-Bd2X14w
Name:               node-csr-Wnmy0oFiycWKaT8LsWRv1EN7m333dv-ZVYz-Bd2X14w
Labels:             <none>
Annotations:        <none>
CreationTimestamp:  Wed, 22 Aug 2018 15:05:09 -0400
Requesting User:    kubelet-bootstrap
Status:             Approved,Issued
Subject:
         Common Name:    system:node:knode3
         Serial Number: 
         Organization:   system:nodes
Events:             <none>
You can also check for the new node, something like the below.
kubectl get nodes --all-namespaces -o wide
NAME       STATUS    ROLES     AGE       VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
kmaster1   Ready     <none>    22d       v1.11.1   172.20.0.11   <none>        CentOS Linux 7 (Core)   3.10.0-862.9.1.el7.x86_64    docker://18.6.0
kmaster2   Ready     <none>    21d       v1.11.1   172.20.0.12   <none>        CentOS Linux 7 (Core)   3.10.0-862.9.1.el7.x86_64    docker://18.6.0
kmaster3   Ready     <none>    21d       v1.11.1   172.20.0.13   <none>        CentOS Linux 7 (Core)   3.10.0-862.9.1.el7.x86_64    docker://18.6.0
knode1     Ready     <none>    9d        v1.11.2   172.20.0.51   <none>        CentOS Linux 7 (Core)   3.10.0-862.9.1.el7.x86_64    docker://18.6.0
knode2     Ready     <none>    9d        v1.11.2   172.20.0.52   <none>        CentOS Linux 7 (Core)   3.10.0-862.9.1.el7.x86_64    docker://18.6.0
knode3     Ready     <none>    1d        v1.11.2   172.20.0.53   <none>        CentOS Linux 7 (Core)   3.10.0-862.11.6.el7.x86_64   docker://18.6.0

# OR
kubectl get all  --all-namespaces -o wide
NAMESPACE     NAME                                     READY     STATUS    RESTARTS   AGE       IP            NODE
... [snip]
kube-system   pod/kube-proxy-knode3                    1/1       Running   7          1d        172.20.0.53   knode3
... [snip]
If you are running a cluster with an older Kubernetes version, you might need to add the below entries, which are now obsolete in version 1.11.
# cat /etc/systemd/system/kubelet.service - instead of the config.yaml entries
--feature-gates=RotateKubeletClientCertificate=true,RotateKubeletServerCertificate=true
By default Kubernetes certificates are generated with a 1-year expiration. To change that to a longer/shorter period, just set the below.
cat /etc/kubernetes/manifests/kube-controller-manager.yaml
# Change from the default of 1 year (8760h0m0s).
... [snip]
- ./hyperkube
- controller-manager
... [snip] <<<<<<<-------------- add below this
- --experimental-cluster-signing-duration=87600h0m0s
...[snip]
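To confirm the new signing duration took effect, check the expiry of a freshly issued node certificate; a sketch, assuming the kubelet stored its rotated client certificate as kubelet-client-current.pem under the --cert-dir used above:
# Prints the certificate's notAfter date.
openssl x509 -noout -enddate -in /etc/kubernetes/ssl/kubelet-client-current.pem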
Kubernetes pre-1.9 might require the additional steps outlined below; not tested, but feel free to try.
kubectl create clusterrolebinding node-client-auto-approve-csr \
  --clusterrole=system:certificates.k8s.io:certificatesigningrequests:nodeclient \
  --group=system:node-bootstrapper

kubectl create clusterrolebinding node-client-auto-renew-crt \
  --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeclient \
  --group=system:nodes

# Required auto-renewal (might not be needed after 1.7), for sure not required in latest 1.11.
kubectl create clusterrolebinding node-client-auto-approve-csr --clusterrole=approve-node-client-csr --group=system:bootstrappers
kubectl create clusterrolebinding node-client-auto-renew-crt --clusterrole=approve-node-client-renewal-csr --group=system:nodes
kubectl create clusterrolebinding node-server-auto-renew-crt --clusterrole=approve-node-server-renewal-csr --group=system:nodes

Generating a user certificate for cluster access

It is a good idea to generate a certificate for an admin (or regular) user to use while administering the cluster. The steps to do so are outlined below. First, let's prepare our user certificate request. Note: The example below uses a user account named usera.
openssl genrsa -out usera.pem 2048
openssl req -new -key usera.pem -out usera.csr -subj "/CN=usera"

crt_out=`cat usera.csr |base64 |tr -d '\n'`

cat << EOF > usera_auth-req.yaml
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: user-request-usera
spec:
  groups:
  - system:authenticated
  request: $crt_out
  usages:
  - digital signature
  - key encipherment
  - client auth
EOF
Request the usera certificate to be signed by the cluster.
kubectl apply -f usera_auth-req.yaml 

# Output should show the below.
certificatesigningrequest.certificates.k8s.io/user-request-usera created
Next, approve the usera certificate request; you do so by running the below.
kubectl certificate approve user-request-usera

# Output should show the below.
certificatesigningrequest.certificates.k8s.io/user-request-usera approved
Last, retrieve the signed certificate from the cluster; you do so by running the below.
kubectl get csr user-request-usera -o jsonpath='{.status.certificate}' | base64 -d > usera.crt
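You can also inspect the signed certificate itself to confirm the subject and validity period:
openssl x509 -in usera.crt -noout -subject -issuer -dates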
To check the certificate request status, just run the below.
kubectl get csr
# Expected output should be something like the below.
NAME                 AGE       REQUESTOR          CONDITION
user-request-usera   1m        system:unsecured   Approved,Issued
To assign privileges to this user (usera), run the below (by default the user has no privileges assigned). Note: The below assigns full cluster-admin rights to usera, which might not be the result you want; you should really only assign the user the role he really needs (use kubectl get clusterroles for a list of roles). Also note that a ClusterRoleBinding name must be unique, so use a new name rather than the cluster-admin-binding created earlier.
kubectl create clusterrolebinding usera-admin-binding \
--clusterrole=cluster-admin \
--user=usera
To set and use the newly created certificate for usera, just run the below. Note: You can always set KUBECONFIG=your_new_kubeconfig_config_file (in our case ~/.kube/config), and kubectl will use that.
kubectl --kubeconfig ~/.kube/config config set-cluster usera --insecure-skip-tls-verify=true --server=https://kmaster1.bnh.com
kubectl --kubeconfig ~/.kube/config config set-credentials usera --client-certificate=usera.crt --client-key=usera.pem --embed-certs=true
kubectl --kubeconfig ~/.kube/config config set-context usera --cluster=usera --user=usera
kubectl --kubeconfig ~/.kube/config config use-context usera
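To confirm the new context works, run something like the below; the second command should now execute as usera:
kubectl config get-contexts
kubectl get nodes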
Helpful commands to check your Kubernetes roles and bindings.
kubectl get roles
kubectl get rolebindings
kubectl get clusterroles
kubectl get clusterrolebindings
kubectl get clusterrole system:node -o yaml
Helpful online resources:
https://kubernetes.io/docs/reference/access-authn-authz/rbac/#command-line-utilities
https://kubernetes.io/docs/reference/kubectl/cheatsheet/
Effective RBAC – Jordan Liggitt, Red Hat
Other online resources that are definitely helpful, though maybe a bit outdated for the most recent Kubernetes versions:
https://sysdig.com/blog/kubernetes-security-rbac-tls/
https://medium.com/containerum/configuring-permissions-in-kubernetes-with-rbac-a456a9717d5d
https://medium.com/@toddrosner/kubernetes-tls-bootstrapping-cf203776abc7
In Part 8 I will show you how to install / configure Helm, Prometheus, Alertmanager, Grafana and Elasticsearch. You might also like the other related articles on Docker and Kubernetes / micro-services. Like what you're reading? Please provide feedback; any feedback is appreciated.