Installing, Configuring a 3 Node Kubernetes (Master) Cluster on CentOS 7.5 – Configuring Manifests and the Kubelet Service – Part 4



In Part 3 I described how to install and configure flanneld, the CNI plugin, and the Docker daemon. Below I continue with the installation and configuration of the Kubernetes manifests and the kubelet service.

This is Part 4 – Installing and configuring the Kubernetes manifests and the kubelet service.

Creating the Kubernetes master manifests

As you might know, every Kubernetes master consists of multiple components (processes); it usually includes the four processes below.

  1. API Server
  2. Controller Manager
  3. Kube Proxy
  4. Kube Scheduler

In the next step, we are going to create a Kubernetes manifest (YAML) file for each of the master server processes.
Note: The kubelet process reads the Kubernetes manifest directory at startup (and watches it for changes afterwards) and starts each of the components listed there.

So let's begin. Below I am listing each manifest's content; just create the below 4 files in /etc/kubernetes/manifests/.

Note: Make sure to replace/update the IP address for each master.
The below examples are from Master1.
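Since only the node's own IP differs between masters, you could generate the other masters' copies from Master1's files. Note that a blanket search-and-replace would corrupt the --etcd-servers list (it names all three IPs), so the substitution below is restricted to the lines that carry the node's own address. This is a sketch assuming this article's IPs; the `to_master2` helper name is my own:

```shell
# Rewrite only the self-referencing lines (advertise-address, --master,
# hostname-override) from Master1's IP to Master2's; leave --etcd-servers alone.
to_master2() {
    sed -E '/(advertise-address|--master=|hostname-override)/ s/172\.20\.0\.11/172.20.0.12/' "$1"
}

# Example usage, then copy the result to Master2's manifest directory:
# to_master2 kube-apiserver.yaml > /tmp/kube-apiserver.master2.yaml
```
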
cat /etc/kubernetes/manifests/kube-apiserver.yaml

apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-apiserver
    image: gcr.io/google_containers/hyperkube:v1.11.1
    command:
    - /hyperkube
    - apiserver
    - --bind-address=0.0.0.0
    - --etcd-servers=https://172.20.0.11:2379,https://172.20.0.12:2379,https://172.20.0.13:2379
    - --allow-privileged=true
    - --service-cluster-ip-range=10.3.0.0/21
    - --secure-port=443
    - --insecure-port=8080
    - --advertise-address=172.20.0.11
    - --storage-backend=etcd3
    - --storage-media-type=application/json
    - --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota
    - --etcd-certfile=/etc/kubernetes/ssl/etcd-node.pem
    - --etcd-keyfile=/etc/kubernetes/ssl/etcd-node-key.pem
    - --tls-cert-file=/etc/kubernetes/ssl/etcd-node.pem
    - --tls-private-key-file=/etc/kubernetes/ssl/etcd-node-key.pem
    - --kubelet-certificate-authority=/etc/kubernetes/ssl/ca.pem
    - --kubelet-client-certificate=/etc/kubernetes/ssl/etcd-node.pem
    - --kubelet-client-key=/etc/kubernetes/ssl/etcd-node-key.pem
    - --service-account-key-file=/etc/kubernetes/ssl/etcd-node-key.pem
    - --etcd-cafile=/etc/kubernetes/ssl/ca.pem
    - --client-ca-file=/etc/kubernetes/ssl/ca.pem
    - --runtime-config=extensions/v1beta1/networkpolicies=true,extensions/v1beta1=true
    - --anonymous-auth=false
    - --audit-log-path=/var/log/kubernetes/kube-apiserver-audit.log
    - --audit-log-maxage=30
    - --audit-log-maxbackup=3
    - --audit-log-maxsize=100
    - --v=3
    livenessProbe:
      httpGet:
        host: 127.0.0.1
        port: 8080
        path: /healthz
      initialDelaySeconds: 15
      timeoutSeconds: 15
    ports:
    - containerPort: 443
      hostPort: 443
      name: https
    - containerPort: 8080
      hostPort: 8080
      name: local
    volumeMounts:
    - mountPath: /etc/kubernetes/ssl
      name: ssl-certs-kubernetes
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: ssl-certs-host
      readOnly: true
    - mountPath: /var/log/kubernetes
      name: var-log-kubernetes
      readOnly: false
  volumes:
  - hostPath:
      path: /etc/kubernetes/ssl
    name: ssl-certs-kubernetes
  - hostPath:
      path: /usr/share/ca-certificates
    name: ssl-certs-host
  - hostPath:
      path: /var/log/kubernetes
    name: var-log-kubernetes

cat /etc/kubernetes/manifests/kube-controller-manager.yaml

apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
  namespace: kube-system
spec:
      hostNetwork: true
      containers:
      - name: kube-controller-manager
        image: gcr.io/google_containers/hyperkube:v1.11.1
        command:
        - ./hyperkube
        - controller-manager
        - --master=https://172.20.0.11:443
        - --kubeconfig=/etc/kubernetes/ssl/kubeconfig.yaml
        - --leader-elect=true
        - --allocate-node-cidrs=true
        - --cluster-cidr=10.20.0.0/20
        - --service-cluster-ip-range=10.3.0.0/21
        - --service-account-private-key-file=/etc/kubernetes/ssl/etcd-node-key.pem
        - --root-ca-file=/etc/kubernetes/ssl/ca.pem
        - --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem
        - --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem
        livenessProbe:
          httpGet:
            host: 127.0.0.1
            path: /healthz
            port: 10252  # Note: Using default port. Update if --port option is set differently.
          initialDelaySeconds: 15
          timeoutSeconds: 5
        volumeMounts:
        - mountPath: /etc/ssl/certs
          name: ssl-host
          readOnly: true
        - mountPath: /var/log/kube-controller-manager.log
          name: logfile
          readOnly: false
        - mountPath: /etc/kubernetes/ssl
          name: "kube-ssl"
          readOnly: true
      volumes:
      - hostPath:
          path: /usr/share/ca-certificates
        name: ssl-host
      - hostPath:
          path: /var/log/kube-controller-manager.log
        name: logfile
      - hostPath:
          path: "/etc/kubernetes/ssl"
        name: "kube-ssl"

cat /etc/kubernetes/manifests/kube-proxy-master.yaml

apiVersion: v1
kind: Pod
metadata:
  name: kube-proxy
  namespace: kube-system
  labels:
    tier: node
    k8s-app: kube-proxy
spec:
      hostNetwork: true
      containers:
      - name: kube-proxy
        image: gcr.io/google_containers/hyperkube:v1.11.1
        command:
        - ./hyperkube
        - proxy
        - --master=https://172.20.0.11:443
        - --kubeconfig=/etc/kubernetes/ssl/kubeconfig.yaml
        - --logtostderr=true
        - --proxy-mode=iptables
        - --masquerade-all
        - --hostname-override=172.20.0.11
        - --cluster-cidr=10.20.0.0/20
        - --v=3
        env:
          - name: NODE_NAME
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /etc/ssl/certs
          name: ssl-certs-host
          readOnly: true
        - name: kube-ssl
          mountPath: /etc/kubernetes/ssl
          readOnly: true
      tolerations:
      - key: "node-role.kubernetes.io/master"
        operator: "Exists"
        effect: "NoSchedule"
      volumes:
      - hostPath:
          path: /usr/share/ca-certificates
        name: ssl-certs-host
      - name: kube-ssl
        hostPath:
          path: /etc/kubernetes/ssl

cat /etc/kubernetes/manifests/kube-scheduler.yaml

apiVersion: v1
kind: Pod
metadata:
  name: kube-scheduler
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-scheduler
    image: gcr.io/google_containers/hyperkube:v1.11.1
    command:
    - ./hyperkube
    - scheduler
    - --master=https://172.20.0.11:443
    - --kubeconfig=/etc/kubernetes/ssl/kubeconfig.yaml
    - --address=0.0.0.0
    - --leader-elect=true
    - --v=3
    livenessProbe:
        httpGet:
          host: 127.0.0.1
          path: /healthz
          port: 10251  # Note: Using default port. Update if --port option is set differently.
        initialDelaySeconds: 15
        timeoutSeconds: 15
    securityContext:
      runAsNonRoot: true
      runAsUser: 65534
    volumeMounts:
    - mountPath: /var/log/kube-scheduler.log
      name: logfile
    - mountPath: /etc/kubernetes/ssl
      name: "kube-ssl"
      readOnly: true
  nodeSelector:
    node-role.kubernetes.io/master: ""
  volumes:
  - hostPath:
      path: /var/log/kube-scheduler.log
    name: logfile
  - hostPath:
      path: "/etc/kubernetes/ssl"
    name: "kube-ssl"
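With the four manifests in place, a quick sanity check saves a round of kubelet restarts. The helper below (a sketch; the `check_manifests` name is my own, and it assumes the four file names used above) just confirms each file exists and declares `kind: Pod`:

```shell
# Verify each expected static-pod manifest exists and parses as a Pod object.
check_manifests() {
    local dir="$1" f rc=0
    for f in kube-apiserver kube-controller-manager kube-proxy-master kube-scheduler; do
        if grep -q '^kind: Pod$' "$dir/$f.yaml" 2>/dev/null; then
            echo "OK      $f.yaml"
        else
            echo "MISSING $f.yaml"
            rc=1
        fi
    done
    return $rc
}

# Example usage:
# check_manifests /etc/kubernetes/manifests
```
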

Create your kubeconfig.yaml; think of this file as your authentication method.

cat /etc/kubernetes/ssl/kubeconfig.yaml
apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    server: https://172.20.0.11:443
    certificate-authority: /etc/kubernetes/ssl/ca.pem
users:
- name: kubelet
  user:
    client-certificate: /etc/kubernetes/ssl/etcd-node.pem
    client-key: /etc/kubernetes/ssl/etcd-node-key.pem
contexts:
- context:
    cluster: local
    user: kubelet
  name: kubelet-context   # context name is required by the kubeconfig schema
current-context: kubelet-context
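The kubeconfig only references the certificate files; nothing checks that they actually exist until a component fails to authenticate. A small helper (a sketch; `check_kubeconfig_paths` is my own name) to verify every cert/key path up front:

```shell
# Extract the certificate-authority / client-certificate / client-key paths
# from a kubeconfig and report any that do not exist on disk.
check_kubeconfig_paths() {
    local cfg="$1" rc=0 p
    for p in $(grep -E 'certificate-authority|client-certificate|client-key' "$cfg" \
               | grep -oE '/[A-Za-z0-9._/-]+'); do
        [ -f "$p" ] || { echo "missing: $p"; rc=1; }
    done
    return $rc
}

# Example usage:
# check_kubeconfig_paths /etc/kubernetes/ssl/kubeconfig.yaml
```
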

Create the config.yaml file. This file contains additional kubelet configuration.

cat /etc/kubernetes/config.yaml
address: 0.0.0.0
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: cgroupfs
cgroupsPerQOS: true
clusterDNS:
- 10.3.0.10
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kind: KubeletConfiguration
tlsCertFile: "/etc/kubernetes/ssl/etcd-node.pem"
tlsPrivateKeyFile: "/etc/kubernetes/ssl/etcd-node-key.pem"
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
port: 10250
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
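One setting above deserves a callout: with failSwapOn: true the kubelet will refuse to start while swap is enabled. Disabling swap now and across reboots could look like the sketch below; the fstab sed is a heuristic for the common `<dev> <mnt> swap ...` layout, so review the output before replacing /etc/fstab:

```shell
# Turn swap off immediately (kubelet with failSwapOn: true will not start otherwise):
# swapoff -a

# Comment out active swap entries so the change survives a reboot.
disable_fstab_swap() {
    sed -E 's@^([^#].*[[:space:]]swap[[:space:]].*)@#\1@' "$1"
}

# Example usage (inspect the result, then replace /etc/fstab):
# disable_fstab_swap /etc/fstab > /tmp/fstab.new
```
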

Finally, create your kubelet service file.

cat /etc/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=http://kubernetes.io/docs/

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/bin/kubelet \
--register-node=true \
--allow-privileged \
--hostname-override=kmaster1 \
--kubeconfig=/etc/kubernetes/ssl/kubeconfig.yaml \
--config=/etc/kubernetes/config.yaml \
--network-plugin=cni \
--cni-conf-dir=/etc/kubernetes/cni/net.d \
--lock-file=/var/run/lock/kubelet.lock \
--exit-on-lock-contention \
--logtostderr=true \
--v=2
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target

Create the kubelet working directory.

mkdir -p /var/lib/kubelet

We are finally ready to start the kubelet service.

systemctl daemon-reload

journalctl -u kubelet -f &
systemctl enable kubelet && systemctl start kubelet
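The kubelet can take a little while to pull images and come up, so rather than eyeballing the journal you could poll its health endpoint (127.0.0.1:10248, per the healthzBindAddress/healthzPort values set in config.yaml above). The retry helper is a generic sketch of my own:

```shell
# Run a command repeatedly until it succeeds, or give up after N tries
# (one-second pause between attempts).
wait_for() {
    local tries="$1" i; shift
    for i in $(seq 1 "$tries"); do
        "$@" && return 0
        sleep 1
    done
    return 1
}

# Example usage against the kubelet health endpoint from config.yaml:
# wait_for 30 curl -sf http://127.0.0.1:10248/healthz
```
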

To verify your pods are running/working, run the below.
If all is working properly, you should see something like the below output.

kubectl get all --all-namespaces -o wide           
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE       IP            NODE
kube-system   pod/kube-apiserver-kmaster1             1/1       Running   6          4d        172.20.0.11   kmaster1
kube-system   pod/kube-apiserver-kmaster2             1/1       Running   9          7d        172.20.0.12   kmaster2
kube-system   pod/kube-apiserver-kmaster3             1/1       Running   11         11d       172.20.0.13   kmaster3
kube-system   pod/kube-controller-manager-kmaster1    1/1       Running   6          4d        172.20.0.11   kmaster1
kube-system   pod/kube-controller-manager-kmaster2    1/1       Running   9          7d        172.20.0.12   kmaster2
kube-system   pod/kube-controller-manager-kmaster3    1/1       Running   11         11d       172.20.0.13   kmaster3
kube-system   pod/kube-proxy-kmaster1                 1/1       Running   6          4d        172.20.0.11   kmaster1
kube-system   pod/kube-proxy-kmaster2                 1/1       Running   5          7d        172.20.0.12   kmaster2
kube-system   pod/kube-proxy-kmaster3                 1/1       Running   6          7d        172.20.0.13   kmaster3
kube-system   pod/kube-scheduler-kmaster1             1/1       Running   6          4d        172.20.0.11   kmaster1
kube-system   pod/kube-scheduler-kmaster2             1/1       Running   9          7d        172.20.0.12   kmaster2
kube-system   pod/kube-scheduler-kmaster3             1/1       Running   11         11d       172.20.0.13   kmaster3

NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE       SELECTOR
default       service/kubernetes   ClusterIP   10.3.0.1     <none>        443/TCP         11d       

To run all Kubernetes processes (API server, scheduler, etc.) I used the hyperkube image version v1.11.1.
You can always replace/update to the latest image; to get a list of the available images, you can run something like the below.

# Current hyperkube image used
image: gcr.io/google_containers/hyperkube:v1.11.1

curl -k -s -X GET https://gcr.io/v2/google_containers/hyperkube/tags/list | jq -r '.tags[]'
0.14.1
dev
v0.14.1
v0.14.2
v0.15.0
... [snip]
v1.11.0-rc.1
v1.11.0-rc.2
v1.11.0-rc.3
v1.11.1
v1.11.1-beta.0
v1.11.2
... [snip]
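The raw tag list mixes dev, beta, and rc builds with stable releases. A small filter (a sketch; `stable_tags` is my own name, and the curl pipeline assumes jq is installed as above) makes picking an upgrade target easier:

```shell
# Keep only stable vX.Y.Z tags, in version order.
stable_tags() {
    grep -E '^v[0-9]+\.[0-9]+\.[0-9]+$' | sort -V
}

# Example usage, showing the five newest stable releases:
# curl -k -s https://gcr.io/v2/google_containers/hyperkube/tags/list \
#   | jq -r '.tags[]' | stable_tags | tail -n 5
```
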

Optional: you can add aliases for Docker management.

cat /etc/profile.d/aliases.sh
dps () { docker ps -a ; }
dl () { docker logs -f "$1" & }
drm () { docker stop "$1" && docker rm "$1"; }

If you have reached this point, you can pride yourself on having completed a 3 master node Kubernetes cluster.

In Part 5 I will continue by adding additional services like CoreDNS to the Kubernetes cluster. If you would like to skip that optional part and jump directly to Part 6 – adding worker nodes (coming soon) – click here.

You might also like – other related articles on Docker and Kubernetes / micro-services.

Like what you're reading? Please provide feedback; any feedback is appreciated.

Comment from ujnzxw:

Hi, thanks very much for your sharing!
It would be better if the other service files could be shared as well, such as:
kube-apiserver.service,
kube-proxy.service,
kube-scheduler.service,
kube-controller-manager.service

Comment from camer:

Hello Eli,

After all I got below error:
failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file “/var/lib/kubelet/config.yaml”, error: open /var/lib/kubelet/config.yaml: no such file or directory.

Do you have any idea?

Thanks!