Installing a Kubernetes 1.11 Cluster On CentOS 7.5(1804) The Manual Way – Install/configure the Kubernetes VM – Part 2


Installing and configuring a 3-node Kubernetes master cluster on CentOS 7.5 – configuring the Kubernetes VMs

In Part 1 I described how to install and configure the bare-metal OS hosting all the Kubernetes VMs; below I continue with the installation and configuration of all the Kubernetes VMs/masters.

This is Part 2 – Installing the Kubernetes VMs.

Installing the Kubernetes VMs

  • Install CentOS 7.5(1804) / (CentOS-7-x86_64-Minimal-1804.iso) on 5 VirtualBox VMs.
  • 3 VMs will be configured as masters, and 2 will be configured as worker nodes.
    • The host names and IP addresses used in my configuration are below (feel free to come up with your own configuration schema).

      Configure each VM with the below resources.

      1. 1 virtual CPU is fine.
      2. At least 2 GB of RAM.
      3. At least a 12 GB HDD.
      4. Assign a network switch to the network port.

      Set the below on each Kubernetes VM (masters and workers).
      Disable SELinux by running the below.
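A typical way to disable SELinux on CentOS 7 (a sketch; adjust to your own security policy):

```shell
# Disable SELinux for the running session
setenforce 0
# Make the change permanent across reboots
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
```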

      Disable swap
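Swap is commonly disabled like this on CentOS 7, since the kubelet refuses to start with swap enabled by default:

```shell
# Turn off swap immediately
swapoff -a
# Comment out any swap entries in /etc/fstab so it stays off after reboot
sed -i '/ swap / s/^/#/' /etc/fstab
```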

      If you are behind a firewall or corporate proxy, add your proxy to /etc/yum.conf and /etc/environment (for an example check out in part 1).

      Install Docker packages
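A minimal sketch, assuming the Docker package shipped in the CentOS 7 extras repository (which was the common choice for Kubernetes 1.11):

```shell
# Install Docker from the CentOS extras repo and enable it at boot
yum install -y docker
systemctl enable docker
```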

      Add kubernetes repo
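A sketch of the yum repo definition the Kubernetes project published for EL7 at the time (this repo has since been superseded, so verify the current URLs before use):

```shell
# Create the Kubernetes yum repository definition
cat <<'EOF' > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
```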

      Install kubernetes and other related packages
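A sketch of the package install; pin versions (e.g. `kubelet-1.11.0`) if you need a specific release:

```shell
# Install the Kubernetes node components and CNI plugins
yum install -y kubelet kubeadm kubectl kubernetes-cni
systemctl enable kubelet
```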

      Since we are not going to use kubeadm for our configuration, comment out all entries in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf like the below example.
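One way to comment out every active line in that drop-in file (a sketch):

```shell
# Prefix every non-comment line in the kubeadm drop-in with '#'
sed -i '/^[^#]/ s/^/#/' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# Reload systemd so the change takes effect
systemctl daemon-reload
```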

      Disable the system firewall by running the below; it will be partially managed by Kubernetes.
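Disabling firewalld on CentOS 7 looks like this:

```shell
# Stop the firewall now and keep it off across reboots
systemctl stop firewalld
systemctl disable firewalld
```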

      Note: There are other options to deal with firewall rules, for example only enabling the ports required by Kubernetes.

      Lastly, before continuing, reboot each VM instance.

      Configuring etcd – Kubernetes SSL certificates

      In order for all communication in Kubernetes to be secure, we need to create certificates.
      Note: To keep things as simple as possible, I will be using the same certificate key for all components; you might consider breaking those out in your production environment.

      Creating Kubernetes SSL certificates

      Create a cert.conf file with the content below. Add other host names/IP addresses as needed, and make sure to replace the domain with your own domain name.
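A sketch of such a cert.conf, an OpenSSL config with the subject alternative names for the masters. The IP addresses and example.com domain are hypothetical placeholders; substitute your own:

```shell
# Write an OpenSSL extension config listing the SANs for the cluster
cat <<'EOF' > cert.conf
[req]
req_extensions     = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[v3_req]
basicConstraints = CA:FALSE
keyUsage         = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName   = @alt_names
[alt_names]
DNS.1 = kmaster1.example.com
DNS.2 = kmaster2.example.com
DNS.3 = kmaster3.example.com
DNS.4 = kubernetes
DNS.5 = kubernetes.default
IP.1  = 172.20.0.11
IP.2  = 172.20.0.12
IP.3  = 172.20.0.13
IP.4  = 127.0.0.1
EOF
```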

      Next, copy the below into a new file in the same directory as the cert.conf file.

      Next, make the file executable and run it to create the certificates. The output should look something like the below.
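A minimal certificate-generation script consistent with the file list that follows (a hypothetical sketch: it creates a self-signed CA, then signs a single shared node certificate using the SANs from cert.conf):

```shell
#!/bin/bash
# Generate the CA key and a self-signed CA certificate
openssl genrsa -out ca-key.pem 2048
openssl req -x509 -new -nodes -key ca-key.pem -days 3650 \
        -out ca.pem -subj "/CN=etcd-ca"

# Generate the shared node key and a CSR, then sign it with the CA.
# The SANs are taken from the v3_req section of cert.conf.
openssl genrsa -out etcd-node-key.pem 2048
openssl req -new -key etcd-node-key.pem -out etcd-node.csr \
        -subj "/CN=etcd-node" -config cert.conf
openssl x509 -req -in etcd-node.csr -CA ca.pem -CAkey ca-key.pem \
        -CAcreateserial -out etcd-node.pem -days 3650 \
        -extensions v3_req -extfile cert.conf
```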

      To verify the certificate, run the below.
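For example, OpenSSL can print the certificate contents and check it against the CA:

```shell
# Show the certificate details (subject, SANs, validity)
openssl x509 -in etcd-node.pem -text -noout
# Confirm the node certificate chains to the CA
openssl verify -CAfile ca.pem etcd-node.pem
```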

      Once completed you should be left with the below list of files.

      From the list of files above, we only need ca.pem, ca-key.pem, etcd-node.pem and etcd-node-key.pem.

      Create the directory if it does not exist, and copy the certificate files.
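A sketch, assuming /etc/kubernetes/pki as the (hypothetical) target directory; use whatever layout the rest of your configuration references:

```shell
# Create the certificate directory and copy the four files we need
mkdir -p /etc/kubernetes/pki
cp ca.pem ca-key.pem etcd-node.pem etcd-node-key.pem /etc/kubernetes/pki/
```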

      Note: I named the most-used key/certificate pair etcd-node(-key).pem since it is the first SSL certificate being used; feel free to rename it as you wish (just remember to update all references).

      Etcd installation and configuration

      Add to /etc/environment

      If you are behind a corporate firewall or proxy, set the below in /etc/systemd/system/docker.service.d/http-proxy.conf (create the directory if it does not exist).
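A sketch of that drop-in, following the format documented for Docker's systemd proxy configuration; proxy.example.com:3128 is a hypothetical placeholder:

```shell
# Create the drop-in directory and the proxy environment for the Docker daemon
mkdir -p /etc/systemd/system/docker.service.d
cat <<'EOF' > /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1"
EOF
systemctl daemon-reload
```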

      Install etcd on the 3 master servers

      Download and configure the latest etcd.
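A sketch of fetching a release tarball from GitHub; v3.3.9 was current around Kubernetes 1.11, so substitute whatever release you want:

```shell
# Download an etcd release and install the binaries
ETCD_VER=v3.3.9
curl -L "https://github.com/etcd-io/etcd/releases/download/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz" \
     -o "etcd-${ETCD_VER}-linux-amd64.tar.gz"
tar xzf "etcd-${ETCD_VER}-linux-amd64.tar.gz"
cp "etcd-${ETCD_VER}-linux-amd64/etcd" "etcd-${ETCD_VER}-linux-amd64/etcdctl" /usr/local/bin/
```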

      Create an etcd service like the below.
      cat /etc/systemd/system/etcd.service
      Note: On each master, replace the below values with the proper name/IP.

      1. --name=kmaster1
      2. --listen-peer-urls=
      3. --advertise-client-urls=
      4. --initial-advertise-peer-urls=
      5. --listen-client-urls=,,

      The example below is from master1.
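A sketch of such a unit for kmaster1, assuming etcd 3.x flags, hypothetical master IPs 172.20.0.11–13, and certificates under the hypothetical path /etc/kubernetes/pki:

```shell
# Write the etcd systemd unit for kmaster1 (adjust name/IPs per master)
cat <<'EOF' > /etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=notify
User=etcd
ExecStart=/usr/local/bin/etcd \
  --name=kmaster1 \
  --data-dir=/var/lib/etcd \
  --listen-client-urls=https://172.20.0.11:2379,https://127.0.0.1:2379 \
  --advertise-client-urls=https://172.20.0.11:2379 \
  --listen-peer-urls=https://172.20.0.11:2380 \
  --initial-advertise-peer-urls=https://172.20.0.11:2380 \
  --initial-cluster=kmaster1=https://172.20.0.11:2380,kmaster2=https://172.20.0.12:2380,kmaster3=https://172.20.0.13:2380 \
  --initial-cluster-token=etcd-cluster-1 \
  --initial-cluster-state=new \
  --cert-file=/etc/kubernetes/pki/etcd-node.pem \
  --key-file=/etc/kubernetes/pki/etcd-node-key.pem \
  --peer-cert-file=/etc/kubernetes/pki/etcd-node.pem \
  --peer-key-file=/etc/kubernetes/pki/etcd-node-key.pem \
  --trusted-ca-file=/etc/kubernetes/pki/ca.pem \
  --peer-trusted-ca-file=/etc/kubernetes/pki/ca.pem \
  --client-cert-auth \
  --peer-client-cert-auth
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
```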

      Create an etcd user
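Creating a system user for etcd and giving it the data directory typically looks like this:

```shell
# Create a no-login system user and hand it the etcd data directory
useradd -r -s /sbin/nologin etcd
mkdir -p /var/lib/etcd
chown -R etcd:etcd /var/lib/etcd
```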

      Add the network configuration on all 3 masters by running the below.

      Start and enable etcd on all 3 masters.
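With systemd this is:

```shell
# Start etcd now and at every boot, then check its state
systemctl enable etcd
systemctl start etcd
systemctl status etcd
```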

      Verify that etcd works across all nodes.
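Since client connections are TLS-only, etcdctl needs the certificates; a sketch using the v2 API (the default in etcd 3.3) and the hypothetical cert paths from earlier:

```shell
# Query cluster health over TLS; each member should report "healthy"
etcdctl --endpoints=https://127.0.0.1:2379 \
        --ca-file=/etc/kubernetes/pki/ca.pem \
        --cert-file=/etc/kubernetes/pki/etcd-node.pem \
        --key-file=/etc/kubernetes/pki/etcd-node-key.pem \
        cluster-health
```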

      Note: The isLeader=true flag is assigned automatically based on the most recent etcd leader election (in the case above it is kmaster3).

      Create the flannel network key in etcd by running the below.

      Verify that the key got created.
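The two steps above can be sketched as follows; the 10.20.0.0/16 CIDR is a hypothetical choice, and the /coreos.com/network/config key is the location flannel reads by default:

```shell
# Shared TLS flags for etcdctl (v2 API, the etcd 3.3 default)
TLS="--endpoints=https://127.0.0.1:2379 \
     --ca-file=/etc/kubernetes/pki/ca.pem \
     --cert-file=/etc/kubernetes/pki/etcd-node.pem \
     --key-file=/etc/kubernetes/pki/etcd-node-key.pem"

# Store the flannel network configuration
etcdctl $TLS set /coreos.com/network/config \
        '{ "Network": "10.20.0.0/16", "Backend": { "Type": "vxlan" } }'

# Read it back to verify the key was created
etcdctl $TLS get /coreos.com/network/config
```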

      In Part 3 we will continue with configuring Flannel and Docker.

      You might also like – other related articles on Docker, Kubernetes, and microservices.

      Like what you’re reading? Please provide feedback; any feedback is appreciated.
