
Installing and configuring a 3-node Kubernetes master cluster on CentOS 7.5 – configuring the Kubernetes VMs

In Part 1 I described how to install and configure the bare-metal OS hosting all the Kubernetes VMs; below I continue with the installation and configuration of the Kubernetes VMs/masters. This is Part 2 – installing the Kubernetes VMs.

Installing the Kubernetes VM’s

  • Install CentOS 7.5 (1804) (CentOS-7-x86_64-Minimal-1804.iso) on 5 VirtualBox VMs.
  • 3 VMs will be configured as masters, and 2 will be configured as worker nodes.
    • The host names and IP addresses used in my configuration are below (feel free to come up with your own configuration schema). Configure each VM with the below resources.
      1. 1 virtual CPU is fine.
      2. At least 2 GB of RAM.
      3. At least 12 GB of HDD.
      4. Assign a network switch to the network port.
      Set the below on each Kubernetes VM (masters and workers). Disable SELinux by running the below.
      setenforce 0
      # Make the change persist across reboots
      sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
      Disable swap
      swapoff -a
      # Make the change persist across reboots by commenting out the swap line in /etc/fstab
      #/dev/mapper/centos-swap swap                    swap    defaults        0 0
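      The fstab edit can also be scripted with sed. A minimal sketch, shown here against a scratch copy first so you can inspect the result (the centos-swap device name assumes the default CentOS LVM layout; adjust to match your swap device):

```shell
# Try the edit on a scratch copy of a typical swap entry first
printf '/dev/mapper/centos-swap swap                    swap    defaults        0 0\n' > /tmp/fstab.sample
# Prefix any uncommented swap line with '#'
sed -i '/\sswap\s/ s/^[^#]/#&/' /tmp/fstab.sample
cat /tmp/fstab.sample
# Once satisfied, apply the same expression to the real file (a .bak backup is kept):
# sed -i.bak '/\sswap\s/ s/^[^#]/#&/' /etc/fstab
```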
      If you are behind a firewall or corporate proxy, add your proxy settings to /etc/yum.conf and /etc/environment (for an example, see Part 1).
      modprobe br_netfilter
      echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
      echo '1' > /proc/sys/net/ipv4/ip_forward
      # Run sysctl to verify
      sysctl -p
      Install Docker packages
      yum install -y yum-utils device-mapper-persistent-data lvm2
      yum-config-manager --add-repo
      yum install -y docker-ce
      Add kubernetes repo
      cat <<EOF > /etc/yum.repos.d/kubernetes.repo
      [kubernetes]
      name=Kubernetes
      baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
      enabled=1
      gpgcheck=1
      repo_gpgcheck=1
      gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
      EOF
      Install kubernetes and other related packages
      yum install -y kubelet kubeadm kubectl flannel epel-release conntrack jq vim bash-completion lsof screen git net-tools && yum update -y
      Since we are not going to use kubeadm for our configuration, comment out all entries in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, as in the example below.
      cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
      # Note: This dropin only works with kubeadm and kubelet v1.11+
      #Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
      # This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
      # This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
      # the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
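      The commenting step can be scripted as well. A sketch (the sed expression is my own; -i.bak keeps a backup copy of the drop-in):

```shell
# Comment out every active (non-comment, non-blank) line in the kubeadm drop-in
sed -i.bak '/^[[:space:]]*[^#[:space:]]/ s/^/#/' \
  /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# Reload systemd so the change takes effect
systemctl daemon-reload
```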
      Disable the system firewall by running the below; it will be partially managed by Kubernetes.
      systemctl disable firewalld.service
      systemctl stop firewalld
      Note: There are other options for dealing with firewall rules, for example enabling only the ports required by Kubernetes. Lastly, before continuing, reboot each VM instance.
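      As a sketch of that port-based alternative (the port list is an assumption based on common upstream defaults for the apiserver, etcd, kubelet, and flannel; verify against your component versions before relying on it):

```shell
# Open only the commonly required ports instead of disabling firewalld
firewall-cmd --permanent --add-port=6443/tcp        # kube-apiserver
firewall-cmd --permanent --add-port=2379-2380/tcp   # etcd client and peer traffic
firewall-cmd --permanent --add-port=10250-10252/tcp # kubelet, controller-manager, scheduler
firewall-cmd --permanent --add-port=8472/udp        # flannel VXLAN overlay
firewall-cmd --reload
```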

      Configuring etcd – kubernetes SSL certificates

      In order for all communication in Kubernetes to be secure, we need to create certificates. Note: to keep things as simple as possible, I will be using the same certificate key for all components; in production you might consider breaking those out into separate certificates.

      Creating Kubernetes SSL certificates

      Create a cert.conf file with the content below. Add other host names / IP addresses as needed, and make sure to replace the example values with your own domain name.
      [ req ]
      default_bits       = 2048
      prompt             = no
      default_md         = sha256
      distinguished_name = dn
      req_extensions     = v3_req
      x509_extensions    = v3_ca
      [ dn ]
      C                  = US
      ST                 = NY
      L                  = New York
      O                  = Company1
      OU                 = Ops
      CN                 = etcd-node
      [ v3_ca ]
      keyUsage = critical,keyCertSign, cRLSign
      basicConstraints = critical,CA:TRUE
      subjectKeyIdentifier = hash
      [ v3_req ]
      keyUsage = critical,digitalSignature, keyEncipherment, nonRepudiation
      extendedKeyUsage = clientAuth, serverAuth
      basicConstraints = critical,CA:FALSE
      subjectKeyIdentifier = hash
      subjectAltName = @alt_names
      [ alt_names ]
      DNS.1              = kubernetes
      DNS.2              = kubernetes.default
      DNS.3              = kubernetes.default.svc
      DNS.4              = kubernetes.default.svc.cluster.local
      DNS.5              = kube-apiserver
      DNS.6              = kube-admin
      DNS.7              = localhost
      DNS.8              =
      DNS.9              = kmaster1
      DNS.10              = kmaster2
      DNS.11              = kmaster3
      DNS.12              = kmaster1.local
      DNS.13              = kmaster2.local
      DNS.14              = kmaster3.local
      DNS.15              =
      DNS.16              =
      DNS.17              =
      DNS.18              = knode1
      DNS.19              = knode2
      DNS.20              = knode3
      DNS.21              =
      DNS.22              =
      DNS.23              =
      IP.1              =
      IP.2              =
      IP.3              =
      IP.4              =
      IP.5              =
      IP.6              =
      IP.7              =
      IP.8              =
      IP.9              =
      IP.10              =
      IP.11              =
      IP.12              =
      IP.13              =
      email              =
      Next, copy the below into a script file in the same directory as the cert.conf file.
      # Generate the CA private key
      openssl genrsa -out ca-key.pem 2048
      sed -i 's/^CN.*/CN                 = Etcd/g' cert.conf
      # Generate the CA certificate.
      openssl req -x509 -new -extensions v3_ca -key ca-key.pem -days 3650 \
      -out ca.pem \
      -subj '/C=US/ST=New York/L=New York/' \
      -config cert.conf
       # Generate the server/client private key.
      openssl genrsa -out etcd-node-key.pem 2048
      sed -i 's/^CN.*/CN                 = etcd-node/g' cert.conf
      # Generate the server/client certificate request.
      openssl req -new -key etcd-node-key.pem \
      -config cert.conf \
      -subj '/C=US/ST=New York/L=New York/' \
      -outform pem -out etcd-node-req.pem
      # Sign the server/client certificate request.
      openssl x509 -req -in etcd-node-req.pem \
      -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -out etcd-node.pem -days 3650 -extensions v3_req -extfile cert.conf
      Next, make the script executable and run it to create the certificates. The output will look something like the below.
      chmod +x
      Generating RSA private key, 2048 bit long modulus
      e is 65537 (0x10001)
      Generating RSA private key, 2048 bit long modulus
      e is 65537 (0x10001)
      Signature ok
      subject=/C=US/ST=NY/L=New York/O=Company1/OU=Ops/CN=etcd-node
      Getting CA Private Key
      To verify the certificate, run the below.
      openssl x509 -in etcd-node.pem -text -noout
      Once completed you should be left with the below list of files.
      ca-key.pem  ca.pem  cert.conf  etcd-node-key.pem  etcd-node-req.pem  etcd-node.pem
      From the above list of files we only need ca.pem, ca-key.pem, etcd-node.pem, and etcd-node-key.pem. Create the directory if it does not exist, and copy the certificate files into it.
      mkdir -p /etc/kubernetes/ssl
      ln -s /etc/kubernetes/ssl /etc/kubernetes/pki
      cp ca.pem ca-key.pem etcd-node.pem etcd-node-key.pem /etc/kubernetes/ssl
      Note: I named the most-used key/certificate pair etcd-node(-key).pem, since it is the first SSL certificate being used; feel free to rename it as you like (just remember to update all references).
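      Before moving on, it is worth confirming that the copied certificate actually chains to the CA and carries the expected SANs (paths assume the files copied above):

```shell
# Verify the node certificate against the CA that should have signed it
openssl verify -CAfile /etc/kubernetes/ssl/ca.pem /etc/kubernetes/ssl/etcd-node.pem
# Show the subjectAltName entries baked into the certificate
openssl x509 -in /etc/kubernetes/ssl/etcd-node.pem -noout -text | grep -A1 'Subject Alternative Name'
```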

      Etcd installation and configuration

      If you are behind a corporate firewall or proxy, add your proxy settings to /etc/environment, and also set them in /etc/systemd/system/docker.service.d/http-proxy.conf (create the directory if it does not exist; for an example, see Part 1).

      Install etcd on the 3 master servers

      Download the latest etcd release and configure it.
      mkdir -p /var/lib/etcd
      curl -# -LO
      tar xf etcd-v3.3.8-linux-amd64.tar.gz
      chown -Rh root:root etcd-v3.3.8-linux-amd64
      find etcd-v3.3.8-linux-amd64 -xdev -type f -exec chmod 0755 '{}' \;
      cp etcd-v3.3.8-linux-amd64/etcd* /usr/bin/
      Create an etcd service like the below (cat /etc/systemd/system/etcd.service). Note: on each master, replace the below values with the proper name/IP.
      1. --name=kmaster1
      2. --listen-peer-urls=
      3. --advertise-client-urls=
      4. --initial-advertise-peer-urls=
      5. --listen-client-urls=,,
      The example below is from kmaster1.
      [Unit]
      Description=Etcd Server

      [Service]
      ExecStart=/usr/bin/etcd \
      --name=kmaster1 \
      --cert-file=/etc/kubernetes/pki/etcd-node.pem \
      --key-file=/etc/kubernetes/pki/etcd-node-key.pem \
      --peer-cert-file=/etc/kubernetes/pki/etcd-node.pem \
      --peer-key-file=/etc/kubernetes/pki/etcd-node-key.pem \
      --trusted-ca-file=/etc/kubernetes/pki/ca.pem \
      --peer-trusted-ca-file=/etc/kubernetes/pki/ca.pem \
      --initial-advertise-peer-urls= \
      --listen-peer-urls= \
      --listen-client-urls=,, \
      --advertise-client-urls= \
      --initial-cluster-token=etcd-token \
      --initial-cluster=kmaster1=,kmaster2=,kmaster3= \
      --data-dir=/var/lib/etcd

      [Install]
      WantedBy=multi-user.target
      Create an etcd user
      useradd etcd
      chown -R etcd:etcd /var/lib/etcd
      Add network configuration on all 3 masters, by running the below.
      cat <<EOF > /etc/sysctl.d/k8s.conf
      net.bridge.bridge-nf-call-ip6tables = 1
      net.bridge.bridge-nf-call-iptables = 1
      net.bridge.bridge-nf-call-arptables = 1
      EOF
      # Reload stack
      sysctl --system
      Start and Enable etcd on all 3 masters.
      systemctl enable etcd && systemctl start etcd
      # Verify if etcd started.
      systemctl status etcd
      journalctl -u etcd
      Verify that etcd works across all nodes.
      etcdctl member list
      7c8d40e4de52c1b9: name=kmaster2 peerURLs= clientURLs= isLeader=false
      96e236888999c3ca: name=kmaster3 peerURLs= clientURLs= isLeader=true
      9e191dbdf076e744: name=kmaster1 peerURLs= clientURLs= isLeader=false
      Note: isLeader=true gets elected/updated automatically based on the last etcd leader election (in the case above it is kmaster3). Create the flannel network key in etcd by running the below.
      /usr/bin/etcdctl set / '{ "Network": "", "SubnetLen": 24, "Backend": { "Type": "vxlan", "VNI": 1 } }'
      Verify that the key got created.
      etcdctl get /
      { "Network": "", "SubnetLen": 24, "Backend": { "Type": "vxlan", "VNI": 1 } }
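      As one more sanity check, the v2-era etcdctl that ships with etcd v3.3 (the same client used for member list above) can report overall cluster health:

```shell
# Each member should report "healthy"; with TLS-only client URLs you may
# need to pass --ca-file/--cert-file/--key-file and --endpoints as well.
etcdctl cluster-health
```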
      In Part 3 we will continue with configuring Flannel and Docker. Like what you're reading? Please provide feedback; any feedback is appreciated.
April 4, 2019 6:49 am

Hello Eli, I have installed the latest version of etcd (v3.3.12) and configured kmaster1 as you said, but starting it via systemctl I got "etcd.service: main process exited, code=exited, status=1/FAILURE". However, I can start etcd by command successfully. Output of journalctl -u etcd.service:
Apr 04 06:45:10 localhost.localdomain systemd[1]: etcd.service: main process exited, code=exited, status=1/FAILURE
Apr 04 06:45:10 localhost.localdomain systemd[1]: Unit etcd.service entered failed state.
Apr 04 06:45:10 localhost.localdomain systemd[1]: etcd.service failed.
Apr 04 06:45:16 localhost.localdomain systemd[1]: etcd.service holdoff time over, scheduling restart.
Apr 04 06:45:16 localhost.localdomain systemd[1]: Stopped Etcd Server.
Apr 04 06:45:16 localhost.localdomain systemd[1]: Started Etcd Server.… Read more »

April 5, 2019 5:20 am
Reply to  Eli Kleinman

I do appreciate your fast reply and details 🙂

I have already checked the file owner, and it is "etcd".
However, I got the problem fixed on "kmaster1" by changing the path for "--cert*", "--key*", and "--peer*" in the "/etc/systemd/system/etcd.service" file from "pki" to "ssl".
I don't know the reason behind using the "pki" path, because we added the SSL certificate files in "/etc/kubernetes/ssl/*". Maybe it's a typo, or there is a reason you set the path to "/etc/kubernetes/pki/*".

However, I have a new problem with kmaster2 and kmaster3:
listen tcp bind: cannot assign requested address

Please let me know if I am in wrong way.


April 9, 2019 5:44 am
Reply to  Eli Kleinman

Hi Eli, I appreciate your help. Actually, I noticed the part where you wrote "Note: Replace on each master the below values with the proper name/ip" (--name=kmaster1, --listen-peer-urls=, --advertise-client-urls=), and I changed those following your instruction. Once I also changed the other IP addresses as you said, it works now 🙂 As a result, the isLeader=true field in your log belongs to kmaster3, but in my log:
7c8d40e4de52c1b9: name=kmaster2 peerURLs= clientURLs= isLeader=false
96e236888999c3ca: name=kmaster3 peerURLs= clientURLs= isLeader=false
9e191dbdf076e744: name=kmaster1 peerURLs= clientURLs= isLeader=true
Thanks for the suggestion; I wish I could use CoreOS in my case, but I have to use CentOS only.… Read more »

April 10, 2019 12:43 am
Reply to  Eli Kleinman

Hello Eli,

Thanks for the information. I will continue to the end; hopefully I will have less trouble with the rest ;).
Here are 3 files for etcd.service on each master:

Thanks again!
