



Installing, configuring 3 node Kubernetes(master) cluster on CentOS 7.5 – Creating kubernetes manifest and kubelet service
In Part 3 I described how to install and configure Flanneld, the CNI plugin and the Docker daemon. Below I continue with the installation and configuration of the Kubernetes manifests and the kubelet service.
- Part 1: Initial setup – bare-metal installation, configuration
- Part 2: Installing the Kubernetes VM’s
- Part 3: Installing and configuring Flanneld, CNI plugin and Docker
- Part 4: Installing and configuring kubernetes manifest and kubelet service
- Part 5: Adding CoreDNS as part of the Kubernetes cluster
- Part 6: Adding / Configuring Kubernetes worker nodes
- Part 7: Enabling / Configuring RBAC, TLS Node bootstrapping
- Part 8: Installing / Configuring Helm, Prometheus, Alertmanager, Grafana and Elasticsearch
Creating the Kubernetes master manifests
As you might know, every Kubernetes master consists of multiple components (processes); it usually includes the four processes below.
- API Server
- Controller Manager
- Kube Proxy
- Kube Scheduler
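The four components above will run as static pods: the kubelet watches a manifest directory (the staticPodPath set later in config.yaml) and starts a pod for every manifest it finds there. A quick sketch of the layout this series uses (the file names are this guide's convention, not a Kubernetes requirement):

```shell
# Static pod manifests the kubelet will manage; the kubelet's staticPodPath
# (configured below in config.yaml) points at this directory.
MANIFESTS=/etc/kubernetes/manifests
for f in kube-apiserver kube-controller-manager kube-proxy-master kube-scheduler; do
  echo "$MANIFESTS/$f.yaml"
done
```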
cat /etc/kubernetes/manifests/kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-apiserver
    image: gcr.io/google_containers/hyperkube:v1.11.1
    command:
    - /hyperkube
    - apiserver
    - --bind-address=0.0.0.0
    - --etcd-servers=https://172.20.0.11:2379,https://172.20.0.12:2379,https://172.20.0.13:2379
    - --allow-privileged=true
    - --service-cluster-ip-range=10.3.0.0/21
    - --secure-port=443
    - --insecure-port=8080
    - --advertise-address=172.20.0.11
    - --storage-backend=etcd3
    - --storage-media-type=application/json
    - --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota
    - --etcd-certfile=/etc/kubernetes/ssl/etcd-node.pem
    - --etcd-keyfile=/etc/kubernetes/ssl/etcd-node-key.pem
    - --tls-cert-file=/etc/kubernetes/ssl/etcd-node.pem
    - --tls-private-key-file=/etc/kubernetes/ssl/etcd-node-key.pem
    - --kubelet-certificate-authority=/etc/kubernetes/ssl/ca.pem
    - --kubelet-client-certificate=/etc/kubernetes/ssl/etcd-node.pem
    - --kubelet-client-key=/etc/kubernetes/ssl/etcd-node-key.pem
    - --service-account-key-file=/etc/kubernetes/ssl/etcd-node-key.pem
    - --etcd-cafile=/etc/kubernetes/ssl/ca.pem
    - --client-ca-file=/etc/kubernetes/ssl/ca.pem
    - --runtime-config=extensions/v1beta1/networkpolicies=true,extensions/v1beta1=true
    - --anonymous-auth=false
    - --audit-log-path=/var/log/kubernetes/kube-apiserver-audit.log
    - --audit-log-maxage=30
    - --audit-log-maxbackup=3
    - --audit-log-maxsize=100
    - --v=3
    livenessProbe:
      httpGet:
        host: 127.0.0.1
        port: 8080
        path: /healthz
      initialDelaySeconds: 15
      timeoutSeconds: 15
    ports:
    - containerPort: 443
      hostPort: 443
      name: https
    - containerPort: 8080
      hostPort: 8080
      name: local
    volumeMounts:
    - mountPath: /etc/kubernetes/ssl
      name: ssl-certs-kubernetes
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: ssl-certs-host
      readOnly: true
    - mountPath: /var/log/kubernetes
      name: var-log-kubernetes
      readOnly: false
  volumes:
  - hostPath:
      path: /etc/kubernetes/ssl
    name: ssl-certs-kubernetes
  - hostPath:
      path: /usr/share/ca-certificates
    name: ssl-certs-host
  - hostPath:
      path: /var/log/kubernetes
    name: var-log-kubernetes
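A manifest this long is easy to break while copying, so it is worth grepping it for the TLS-related flags before the kubelet tries to start it. A minimal sketch follows; it runs against a temporary copy so it works anywhere, but on the master you would point M at the real file instead:

```shell
# Sanity-check that a manifest contains the TLS flags the API server needs.
# Self-contained temp copy for illustration; on a real master set
# M=/etc/kubernetes/manifests/kube-apiserver.yaml instead.
M=$(mktemp)
cat > "$M" <<'EOF'
    - --etcd-cafile=/etc/kubernetes/ssl/ca.pem
    - --etcd-certfile=/etc/kubernetes/ssl/etcd-node.pem
    - --etcd-keyfile=/etc/kubernetes/ssl/etcd-node-key.pem
    - --client-ca-file=/etc/kubernetes/ssl/ca.pem
EOF
missing=0
for flag in --etcd-cafile --etcd-certfile --etcd-keyfile --client-ca-file; do
  grep -q -- "$flag" "$M" || { echo "MISSING: $flag"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all expected TLS flags present"
```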
cat /etc/kubernetes/manifests/kube-controller-manager.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-controller-manager
    image: gcr.io/google_containers/hyperkube:v1.11.1
    command:
    - ./hyperkube
    - controller-manager
    - --master=https://172.20.0.11:443
    - --kubeconfig=/etc/kubernetes/ssl/kubeconfig.yaml
    - --leader-elect=true
    - --allocate-node-cidrs=true
    - --cluster-cidr=10.20.0.0/20
    - --service-cluster-ip-range=10.3.0.0/21
    - --service-account-private-key-file=/etc/kubernetes/ssl/etcd-node-key.pem
    - --root-ca-file=/etc/kubernetes/ssl/ca.pem
    - --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem
    - --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem
    livenessProbe:
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10252 # Note: default port. Update if the --port option is set differently.
      initialDelaySeconds: 15
      timeoutSeconds: 5
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ssl-host
      readOnly: true
    - mountPath: /var/log/kube-controller-manager.log
      name: logfile
      readOnly: false
    - mountPath: /etc/kubernetes/ssl
      name: kube-ssl
      readOnly: true
  volumes:
  - hostPath:
      path: /usr/share/ca-certificates
    name: ssl-host
  - hostPath:
      path: /var/log/kube-controller-manager.log
    name: logfile
  - hostPath:
      path: /etc/kubernetes/ssl
    name: kube-ssl
cat /etc/kubernetes/manifests/kube-proxy-master.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-proxy
  namespace: kube-system
  labels:
    tier: node
    k8s-app: kube-proxy
spec:
  hostNetwork: true
  containers:
  - name: kube-proxy
    image: gcr.io/google_containers/hyperkube:v1.11.1
    command:
    - ./hyperkube
    - proxy
    - --master=https://172.20.0.11:443
    - --kubeconfig=/etc/kubernetes/ssl/kubeconfig.yaml
    - --logtostderr=true
    - --proxy-mode=iptables
    - --masquerade-all
    - --hostname-override=172.20.0.11
    - --cluster-cidr=10.20.0.0/20
    - --v=3
    env:
    - name: NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ssl-certs-host
      readOnly: true
    - name: kube-ssl
      mountPath: /etc/kubernetes/ssl
      readOnly: true
  tolerations:
  - key: "node-role.kubernetes.io/master"
    operator: "Exists"
    effect: "NoSchedule"
  volumes:
  - hostPath:
      path: /usr/share/ca-certificates
    name: ssl-certs-host
  - name: kube-ssl
    hostPath:
      path: /etc/kubernetes/ssl
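Since kube-proxy runs with --proxy-mode=iptables and --masquerade-all, service traffic is rewritten by NAT rules, which you can inspect on a running master with iptables -t nat -L KUBE-SERVICES -n (requires root). kube-proxy's default masquerade bit is 14 — the same value the kubelet config below sets as iptablesMasqueradeBit — which corresponds to fwmark 0x4000:

```shell
# kube-proxy marks packets to masquerade with a single fwmark bit.
# Masquerade bit 14 (see iptablesMasqueradeBit in config.yaml) => mark 0x4000.
MASQ_BIT=14
printf 'masquerade mark: 0x%x\n' $((1 << MASQ_BIT))
# -> masquerade mark: 0x4000
```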
cat /etc/kubernetes/manifests/kube-scheduler.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-scheduler
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-scheduler
    image: gcr.io/google_containers/hyperkube:v1.11.1
    command:
    - ./hyperkube
    - scheduler
    - --master=https://172.20.0.11:443
    - --kubeconfig=/etc/kubernetes/ssl/kubeconfig.yaml
    - --address=0.0.0.0
    - --leader-elect=true
    - --v=3
    livenessProbe:
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10251 # Note: default port. Update if the --port option is set differently.
      initialDelaySeconds: 15
      timeoutSeconds: 15
    securityContext:
      runAsNonRoot: true
      runAsUser: 65534
    volumeMounts:
    - mountPath: /var/log/kube-scheduler.log
      name: logfile
    - mountPath: /etc/kubernetes/ssl
      name: kube-ssl
      readOnly: true
  nodeSelector:
    node-role.kubernetes.io/master: ""
  volumes:
  - hostPath:
      path: /var/log/kube-scheduler.log
    name: logfile
  - hostPath:
      path: /etc/kubernetes/ssl
    name: kube-ssl
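With all four manifests in place, each component will answer on a local health endpoint once the kubelet starts it. The ports below are taken from the liveness probes above; the curl calls only succeed on a running master, so treat this as an illustrative check:

```shell
# Local health endpoints for the master components (ports per the probes above).
checked=""
for entry in 8080:kube-apiserver 10252:kube-controller-manager 10251:kube-scheduler; do
  port=${entry%%:*}; name=${entry#*:}
  checked="$checked $name"
  # Prints "ok" per component on a healthy master; fails harmlessly elsewhere.
  curl -s --max-time 2 "http://127.0.0.1:${port}/healthz" 2>/dev/null || echo "$name not reachable on :$port"
done
```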
Create your kubeconfig.yaml; think of this file as your authentication configuration.
cat /etc/kubernetes/ssl/kubeconfig.yaml
apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    server: https://172.20.0.11:443
    certificate-authority: /etc/kubernetes/ssl/ca.pem
users:
- name: kubelet
  user:
    client-certificate: /etc/kubernetes/ssl/etcd-node.pem
    client-key: /etc/kubernetes/ssl/etcd-node-key.pem
contexts:
- context:
    cluster: local
    user: kubelet
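Note that the context in this kubeconfig has no name, and no current-context is set, so kubectl commands that rely on a default context may complain. If you hit that, a possible addition (my guess at the intent, untested here) is:

```yaml
# Possible additions to kubeconfig.yaml: name the context and make it the default.
contexts:
- context:
    cluster: local
    user: kubelet
  name: local
current-context: local
```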
Create the config.yaml file. This file contains additional kubelet configuration.
cat /etc/kubernetes/config.yaml
address: 0.0.0.0
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: cgroupfs
cgroupsPerQOS: true
clusterDNS:
- 10.3.0.10
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kind: KubeletConfiguration
tlsCertFile: "/etc/kubernetes/ssl/etcd-node.pem"
tlsPrivateKeyFile: "/etc/kubernetes/ssl/etcd-node-key.pem"
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
port: 10250
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
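One value in this file worth double-checking is cgroupDriver: it must match the driver the Docker daemon was configured with in Part 3, or the kubelet will refuse to start. A small hedged check (paths as used in this series; it falls back to "unknown" when run off the master):

```shell
# Compare the kubelet's cgroupDriver with Docker's; the two must match.
WANT=$(grep -s '^cgroupDriver:' /etc/kubernetes/config.yaml | awk '{print $2}')
HAVE=$(docker info 2>/dev/null | awk -F': ' '/Cgroup Driver/ {print $2}')
echo "kubelet: ${WANT:-unknown}  docker: ${HAVE:-unknown}"
```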
Finally, create your kubelet service file.
cat /etc/systemd/system/kubelet.service

[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=http://kubernetes.io/docs/

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/bin/kubelet \
  --register-node=true \
  --allow-privileged \
  --hostname-override=kmaster1 \
  --kubeconfig=/etc/kubernetes/ssl/kubeconfig.yaml \
  --config=/etc/kubernetes/config.yaml \
  --network-plugin=cni \
  --cni-conf-dir=/etc/kubernetes/cni/net.d \
  --lock-file=/var/run/lock/kubelet.lock \
  --exit-on-lock-contention \
  --logtostderr=true \
  --v=2
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target

Create the kubelet working directory.
mkdir /var/lib/kubelet
We are finally ready to start the kubelet service.
systemctl daemon-reload
journalctl -u kubelet -f &
systemctl enable kubelet && systemctl start kubelet
To verify your pods are running and working, run the command below. If all is working properly, you should see output similar to the following.
kubectl get all --all-namespaces -o wide

NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE   IP            NODE
kube-system   pod/kube-apiserver-kmaster1                1/1     Running   6          4d    172.20.0.11   kmaster1
kube-system   pod/kube-apiserver-kmaster2                1/1     Running   9          7d    172.20.0.12   kmaster2
kube-system   pod/kube-apiserver-kmaster3                1/1     Running   11         11d   172.20.0.13   kmaster3
kube-system   pod/kube-controller-manager-kmaster1       1/1     Running   6          4d    172.20.0.11   kmaster1
kube-system   pod/kube-controller-manager-kmaster2       1/1     Running   9          7d    172.20.0.12   kmaster2
kube-system   pod/kube-controller-manager-kmaster3       1/1     Running   11         11d   172.20.0.13   kmaster3
kube-system   pod/kube-proxy-kmaster1                    1/1     Running   6          4d    172.20.0.11   kmaster1
kube-system   pod/kube-proxy-kmaster2                    1/1     Running   5          7d    172.20.0.12   kmaster2
kube-system   pod/kube-proxy-kmaster3                    1/1     Running   6          7d    172.20.0.13   kmaster3
kube-system   pod/kube-scheduler-kmaster1                1/1     Running   6          4d    172.20.0.11   kmaster1
kube-system   pod/kube-scheduler-kmaster2                1/1     Running   9          7d    172.20.0.12   kmaster2
kube-system   pod/kube-scheduler-kmaster3                1/1     Running   11         11d   172.20.0.13   kmaster3

NAMESPACE   NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE   SELECTOR
default     service/kubernetes   ClusterIP   10.3.0.1     <none>        443/TCP   11d   <none>

To run all the Kubernetes processes (API server, scheduler, etc.) I used the hyperkube image version v1.11.1. You can always replace/update to the latest image; to get a list of the available images, you can run something like the below.
# Current hyperkube image used
image: gcr.io/google_containers/hyperkube:v1.11.1

curl -k -s -X GET https://gcr.io/v2/google_containers/hyperkube/tags/list | jq -r '.tags[]'
0.14.1
dev
v0.14.1
v0.14.2
v0.15.0
... [snip]
v1.11.0-rc.1
v1.11.0-rc.2
v1.11.0-rc.3
v1.11.1
v1.11.1-beta.0
v1.11.2
... [snip]

Optional: you can add aliases for Docker management.
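If you only care about stable releases, you can filter the tag list with a regular expression instead of scrolling through rc and beta builds. The sketch below runs on a hard-coded sample of the tags above so it works offline; on the master you would pipe the real curl output through the same grep:

```shell
# Keep only stable vX.Y.Z tags, dropping dev/rc/beta builds.
printf '%s\n' 0.14.1 dev v1.11.0-rc.1 v1.11.1 v1.11.1-beta.0 v1.11.2 \
  | grep -E '^v[0-9]+\.[0-9]+\.[0-9]+$'
# -> v1.11.1
# -> v1.11.2
```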
cat /etc/profile.d/aliases.sh
dps () { docker ps -a ; }
dl () { docker logs -f $1 & }
drm () { docker stop $1 && docker rm $1; }
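With the aliases loaded (open a new login shell, or source /etc/profile.d/aliases.sh), usage looks like this; the function bodies are repeated here so the sketch stands alone:

```shell
# Same helpers as above, redefined locally so this snippet is self-contained.
dps () { docker ps -a ; }
dl () { docker logs -f "$1" & }
drm () { docker stop "$1" && docker rm "$1" ; }

# Example usage (requires Docker; container IDs are placeholders):
#   dps            # list all containers, running or not
#   dl  3f4c2a1b   # follow a container's logs in the background
#   drm 3f4c2a1b   # stop and remove a container
type dps >/dev/null && echo "helpers loaded"
```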
If you have reached this point, you can pride yourself on having completed a 3-master-node Kubernetes cluster.
In Part 5 I will continue by adding additional services, such as CoreDNS, to the Kubernetes cluster. If you would like to skip that optional part and jump directly to Part 6, adding worker nodes (coming soon), click here.
You might also like – other related articles on Docker, Kubernetes and micro-services.
Like what you’re reading? Please provide feedback; any feedback is appreciated.
Hi, thanks very much for your sharing!
It would be better if the other service files could be shared as well, such as:
kube-apiserver.service,
kube-proxy.service,
kube-scheduler.service,
kube-controller-manager.service
I have not tested this but feel free to give it a try.
To make things more generic, try to adjust or remove the hard-coded component properties, such as --advertise-address of the API server, --master of the controller, proxy and scheduler, and lastly --hostname-override of the proxy. It is also a good idea to add --bind-address=0.0.0.0 to all 4 components.
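One way to implement this suggestion is to keep a single template per component and substitute the node-specific values with sed. The __NODE_IP__ placeholder and the template layout below are purely illustrative (temp files are used so the sketch runs anywhere):

```shell
# Hypothetical templating sketch: substitute per-node values into a manifest.
NODE_IP=172.20.0.11
TMPL=$(mktemp) ; OUT=$(mktemp)
cat > "$TMPL" <<'EOF'
    - --bind-address=0.0.0.0
    - --advertise-address=__NODE_IP__
EOF
sed "s/__NODE_IP__/${NODE_IP}/g" "$TMPL" > "$OUT"
cat "$OUT"
```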
Hello Eli,
After following everything, I got the error below:
failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file “/var/lib/kubelet/config.yaml”, error: open /var/lib/kubelet/config.yaml: no such file or directory.
Do you have any idea?
Thanks!
Hi, I was really busy today and haven’t had a chance to look at it much. I am also not sure where you see the error, is it systemctl? But at first glance it looks like the error is related to a misconfiguration in the /etc/systemd/system/kubelet.service file. Did you have the below line in your kubelet.service? --config=/etc/kubernetes/config.yaml \ As you can see, the config.yaml in my configuration resides in /etc/kubernetes. As a workaround you can try creating/copying the file to /var/lib/kubelet and see if that works. However, you might run into other location-specific issues as I am not…
Hello Eli,
Thanks for the fast reply. I guess the new version of Kubernetes has some changes. I copied the config manually and, as you expected, we have more errors!
failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory
Thanks!
Hi,
It looks like your kubelet configuration is being set somewhere else, meaning it is not using the kubelet.service configuration, or it is being overwritten by some other configuration file under /etc.
I am only addressing bootstrapping in Part 7; at no point am I addressing the bootstrap config file, which is normally optional.
Try running find /etc -type f -name "kubelet*"
Or grep -ri kubelet /etc
See if the results point you to a conflicting configuration.
Hello Eli,
I appreciate your support. Here is the result:
/etc/selinux/targeted/contexts/files/file_contexts:/var/lib/kubelet(/.*)? system_u:object_r:container_file_t:s0
Binary file /etc/selinux/targeted/contexts/files/file_contexts.bin matches
/etc/selinux/targeted/contexts/files/file_contexts.pre:/var/lib/kubelet(/.*)? system_u:object_r:container_file_t:s0
/etc/selinux/targeted/active/file_contexts:/var/lib/kubelet(/.*)? system_u:object_r:container_file_t:s0
/etc/sysconfig/kubelet:KUBELET_EXTRA_ARGS=
/etc/systemd/system/kubelet.service:Description=kubelet: The Kubernetes Node Agent
/etc/systemd/system/kubelet.service:WorkingDirectory=/var/lib/kubelet
/etc/systemd/system/kubelet.service:ExecStart=/usr/bin/kubelet \
/etc/systemd/system/kubelet.service:--lock-file=/var/run/lock/kubelet.lock \
/etc/kubernetes/manifests/kube-apiserver.yaml: - --kubelet-certificate-authority=/etc/kubernetes/ssl/ca.pem
/etc/kubernetes/manifests/kube-apiserver.yaml: - --kubelet-client-certificate=/etc/kubernetes/ssl/etcd-node.pem
/etc/kubernetes/manifests/kube-apiserver.yaml: - --kubelet-client-key=/etc/kubernetes/ssl/etcd-node-key.pem
/etc/kubernetes/ssl/kubeconfig.yaml:- name: kubelet
/etc/kubernetes/ssl/kubeconfig.yaml: user: kubelet
/etc/kubernetes/config.yaml:apiVersion: kubelet.config.k8s.io/v1beta1
/etc/kubernetes/config.yaml:kind: KubeletConfiguration
Thanks!
Hi, From your output it looks like your configuration is coming from, and being overwritten by, the file below. /etc/sysconfig/kubelet With lines like KUBELET_EXTRA_ARGS… In my case the configuration was stored in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, which I had to disable. Please verify that all lines in the below file(s) are commented out, i.e. add a # at the beginning of each line. You can see an example in Part 2. /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. Also make sure to check your /etc/sysconfig/kubelet for similar content and comment out all the lines, or grep for other default system files overwriting your config somewhere under /etc. When done…