Using Kubernetes Cluster For Your Private Cloud Orchestration – Part 2


Kubernetes cluster installation and configuration on Ubuntu 17.04

This is the second post in this series on how to install and configure your own Kubernetes cluster. In part one I went over configuring a Kubernetes cluster using Minikube, a simplified process. In this post I am moving to the next step: installing and configuring a cluster using kubeadm and kubelet.

First, let's make sure we have the latest OS (Ubuntu) bits.

apt-get update && apt-get upgrade
apt-get install -y apt-transport-https

Now we are ready to start with the Kubernetes installation.
Let's add and configure the Kubernetes repo.

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF

Now, let's install the Kubernetes binaries.

apt-get update
apt-get install -y kubelet kubeadm

Note: I wasn't able to use the stock Kubernetes installation due to a bug in version 1.7.1, which is part of the Ubuntu 17.04 repo; more details are outlined below. To manually install the latest version just follow these steps – Installing Minikube – Kubectl

Initializing Kubernetes

To initialize Kubernetes you can just run kubeadm with the init option, something like the below.

kubeadm init

Depending on which driver/options (CNI or CNM) you are using for your Docker networking, you might need to pass more specific options; more on that below.

There are many network options to choose from in a Docker/Kubernetes installation. Below I will describe some of these options.

  1. Flannel only
  2. Calico only with Calico Policy
  3. Flannel with Calico Policy
  4. Weave Net (Not discussed below)
  5. Cilium (Not discussed below)
  6. Contrail, based on OpenContrail (Not discussed below)

Note: To make the API server listen on a specific IP/interface, add the following option to the init process.

kubeadm init --apiserver-advertise-address=<ip-address>

Using the flannel driver

If you are planning on using the flannel driver, make sure to add the --pod-network-cidr option.
Your output will look something like below.

kubeadm init
# For flannel use.
kubeadm init --pod-network-cidr=10.244.0.0/16
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.2
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] WARNING: docker version is greater than the most recently validated version. Docker version: 17.06.0-ce. Max validated version: 1.12
[preflight] Starting the kubelet service
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [your-kub-host kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.10.10.50]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 54.501595 seconds
[token] Using token: 0e80f0.289b222173a39a53
[apiconfig] Created RBAC rules
Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token 7ec887.b4312a332eeddbb0 10.10.10.50:6443

To be able to manipulate and connect to the cluster locally, run the below (or copy the config to another node).

cp /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
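If you are running as root on the master, an alternative to copying the file is to point kubectl at admin.conf via the KUBECONFIG environment variable (a minimal sketch; the path is the standard kubeadm location used above):

```shell
# Point kubectl at the kubeadm-generated admin config instead of copying it.
# (Assumes you are root on the master node; the file is root-readable only.)
export KUBECONFIG=/etc/kubernetes/admin.conf
```

This only lasts for the current shell session; add it to your shell profile to make it permanent.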

In my case I am working with only one physical node, so in order to continue with the rest of the steps the below role change is required.

kubectl taint nodes --all node-role.kubernetes.io/master-

Note: By default, your cluster will not schedule pods on the master for security reasons. If you want to be able to schedule pods on the master, e.g. for a single-machine Kubernetes cluster for development, you will have to run the above.

Next, you have to decide which network driver you are going to use.
Below I will first show you how to use the flannel driver. I will then destroy the configuration and re-do it with the Calico driver.

Flannel driver configuration

Let's jump right in.
First we need to get the flannel YAML configuration files.
Tip: You can specify the flannel YAML web URL directly if you have Internet access.

For flannel we also need the RBAC policy file.
To apply the flannel driver just run the below (or specify a local yml file if you don't have web access).

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel-rbac.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Current pods with Flannel network and policy engine.

kubectl get pods --all-namespaces
NAMESPACE     NAME                                   READY     STATUS    RESTARTS   AGE
kube-system   etcd-your-kub-host                     1/1       Running   0          9m
kube-system   kube-apiserver-your-kub-host           1/1       Running   0          9m
kube-system   kube-controller-manager-your-kub-host  1/1       Running   0          9m
kube-system   kube-dns-2425271678-gsq8d              3/3       Running   1          10m
kube-system   kube-flannel-ds-z5hbz                  2/2       Running   0          10m
kube-system   kube-proxy-knhtz                       1/1       Running   0          10m
kube-system   kube-scheduler-your-kub-host           1/1       Running   0          9m

Next, verify that the node is ready to join the cluster by running the below.

kubectl get nodes
NAME            STATUS     AGE       VERSION
your-kub-host   NotReady   9m        v1.7.1

You are now ready to join the cluster.
Note: This would normally run on the nodes (not the master as in my case).

kubeadm join --token 7ec887.b4312a332eeddbb0 10.10.10.50:6443

Note: In stock Kubernetes (version 1.7.1) on Ubuntu 17.04 there is a bug; when trying to join the cluster you will get something like the error below.

[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] WARNING: docker version is greater than the most recently validated version. Docker version: 17.06.0-ce. Max validated version: 1.12
[preflight] WARNING: hostname "" could not be reached
[preflight] WARNING: hostname "" lookup : no such host
[preflight] Some fatal errors occurred:
        hostname "" a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
        Port 10250 is in use
        /etc/kubernetes/manifests is not empty
        /var/lib/kubelet is not empty
        /etc/kubernetes/pki/ca.crt already exists
        /etc/kubernetes/kubelet.conf already exists
[preflight] If you know what you are doing, you can skip pre-flight checks with `--skip-preflight-checks`
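The hostname failure above comes from the DNS-1123 subdomain check; the regex shown in the error output can be tried locally with grep -E. A small sketch (the pattern is copied from the output above; the `^`/`$` anchors are added here so the whole string must match, and `is_valid_hostname` is just an illustrative helper):

```shell
# DNS-1123 subdomain pattern from the kubeadm error output, anchored so the
# entire hostname must match (unanchored, any substring would pass).
dns1123='^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$'

is_valid_hostname() {
  printf '%s' "$1" | grep -Eq "$dns1123"
}

is_valid_hostname "example.com" && echo "example.com: valid"
is_valid_hostname "Bad_Host"    || echo "Bad_Host: invalid"
```

An empty hostname, as in the error above, fails the check too, which is one reason kubeadm aborts.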

For the time being, you have the below two options:

  1. Add --skip-preflight-checks to the join options
  2. Upgrade/install the binary from Google directly as outlined in part 1 – Installing Minikube / Kubectl

Installing/adding the Kubernetes dashboard (optional).

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml

To access the dashboard, just run the below.

kubectl proxy

Now go to http://localhost:8001/ui to access the dashboard.

To find out what your kubernetes-dashboard IP is, run the below.

kubectl get svc --all-namespaces
NAMESPACE     NAME                   CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
default       kubernetes             10.96.0.1       <none>        443/TCP         7m
kube-system   calico-etcd            10.96.232.136   <none>        6666/TCP        6m
kube-system   kube-dns               10.96.0.10      <none>        53/UDP,53/TCP   7m
kube-system   kubernetes-dashboard   10.100.139.19   <none>        80/TCP          6m

Tip: Your dashboard will normally only be available from within your cluster; to see how to get around this click here – kubectl proxy options

I created a quick and simple script to bring up a fresh cluster; the script can be found at the end of this post.

Cluster tear down (Flannel) – Remove all nodes / pods

To play with the Calico network driver, I am going to tear down the cluster, then re-create with the Calico network driver.

The simple steps below should do it.

kubectl delete deployment kube-dns --namespace=kube-system
kubectl delete deployment kubernetes-dashboard --namespace=kube-system
kubectl delete -f kube-flannel.yml 
kubectl delete -f kube-flannel-rbac.yml 
kubectl drain your-kub-host --delete-local-data --force --ignore-daemonsets
kubeadm reset

Kubernetes using the Calico network driver

First let's initialize/re-create the cluster (same steps as above).
For complete output check the flannel configuration above; as the steps are the same I am not going to repeat them.

kubeadm init
[..] snip
cp /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
kubectl taint nodes --all node-role.kubernetes.io/master-

Now we need to install/apply the Calico driver.

kubectl apply -f http://docs.projectcalico.org/v2.3/getting-started/kubernetes/installation/hosted/kubeadm/1.6/calico.yaml

That's it; after a few minutes, your cluster is ready to be used.
You will need to join the node to the cluster, same as above.
Optionally, you can add the dashboard, same as above.

Your current pods with Calico network and policy engine.

kubectl get pods --all-namespaces
NAMESPACE     NAME                                        READY     STATUS    RESTARTS   AGE
default       nginx-1803751077-r9whg                      1/1       Running   0          10d
kube-system   calico-etcd-6k7b1                           1/1       Running   0          10d
kube-system   calico-node-9sqtv                           2/2       Running   0          10d
kube-system   calico-policy-controller-1727037546-s0fst   1/1       Running   0          10d
kube-system   etcd-your-kub-host                          1/1       Running   0          10d
kube-system   kube-apiserver-your-kub-host                1/1       Running   0          10d
kube-system   kube-controller-manager-your-kub-host       1/1       Running   0          10d
kube-system   kube-dns-2425271678-x1wrp                   3/3       Running   0          10d
kube-system   kube-proxy-1z3hj                            1/1       Running   0          10d
kube-system   kube-scheduler-your-kub-host                1/1       Running   0          10d
kube-system   kubernetes-dashboard-3313488171-1bgd2       1/1       Running   0          10d

Cluster tear down (Calico) – Remove all nodes / pods

To tear down the cluster just run the below.

kubectl delete deployment kube-dns --namespace=kube-system
kubectl delete deployment calico-policy-controller --namespace=kube-system
kubectl delete deployment kubernetes-dashboard --namespace=kube-system
kubectl delete -f calico.yaml
kubectl drain your-kub-host --delete-local-data --force --ignore-daemonsets
kubeadm reset

Common helpful tips in your Kubernetes cluster

Get logs flannel, calico

# Get logs - flannel
kubectl logs kube-flannel-ds-ss4sz --namespace=kube-system kube-flannel

# Get logs - calico (pod/container names taken from the Calico pod listing above)
kubectl logs calico-node-9sqtv --namespace=kube-system calico-node

List all nodes

kubectl get nodes

List all pods in all namespaces

kubectl get pods --all-namespaces

Delete all pods in one namespace

kubectl delete pod --all --namespace=kube-system

To list all deployments

kubectl get deployments --all-namespaces

To list all services

kubectl get svc --all-namespaces

List config contexts

kubectl config get-contexts

Kubectl Cheatsheet

Other helpful tips

Controlling your cluster from machines other than the master

scp root@<master-ip>:/etc/kubernetes/admin.conf .
kubectl --kubeconfig ./admin.conf get nodes

# Proxying API Server to localhost
scp root@<master-ip>:/etc/kubernetes/admin.conf .
kubectl --kubeconfig ./admin.conf proxy

Installing a sample application

The sock-shop microservices demo is a sample application that can be used to test/play with Kubernetes.

kubectl create namespace sock-shop
kubectl apply -n sock-shop -f "https://github.com/microservices-demo/microservices-demo/blob/master/deploy/kubernetes/complete-demo.yaml?raw=true"

Get front-end service ip.

kubectl -n sock-shop get svc front-end

Get pod info for the sock-shop application, then clean up by deleting the namespace.

kubectl get pods -n sock-shop
kubectl delete namespace sock-shop

Appendix – Kubernetes Script

Flannel script

token=`kubeadm init --pod-network-cidr=10.244.0.0/16|grep 10.10.10.50|tail -1|awk '{n = 3; for (--n; n >= 0; n--){ printf "%s\t",$(NF-n)} print ""}'`
cp /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
sleep 5
kubectl taint nodes --all node-role.kubernetes.io/master-
kubectl get pods --all-namespaces
sleep 2
kubectl apply -f kube-flannel-rbac.yml
kubectl apply -f kube-flannel.yml
## Or get remote directly...
##kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel-rbac.yml
##kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
sleep 5
kubectl get pods --all-namespaces
kubectl get nodes

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml

# Add --skip-preflight-checks if your client is version 1.7.1; alternatively,
# downgrade to 1.7.0 or upgrade to 1.7.2, which has the issue fixed.
kubeadm join --skip-preflight-checks $token
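The awk one-liner at the top of the script grabs the last three fields of the kubeadm init join line (the --token flag, the token itself, and the host:port). A quick illustration on a hypothetical sample line (token and address are placeholders matching the output earlier in this post):

```shell
# Hypothetical join line as printed at the end of `kubeadm init`.
line='  kubeadm join --token 7ec887.b4312a332eeddbb0 10.10.10.50:6443'

# Same awk as in the script: keep only the last three whitespace-separated fields.
args=$(printf '%s\n' "$line" | awk '{n = 3; for (--n; n >= 0; n--){ printf "%s\t",$(NF-n)} print ""}')
echo "$args"
```

`kubeadm join $args` then reassembles the full join command on the node.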

Calico script

token=`kubeadm init |grep 10.10.10.50|tail -1|awk '{n = 3; for (--n; n >= 0; n--){ printf "%s\t",$(NF-n)} print ""}'`
cp /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
kubectl taint nodes --all node-role.kubernetes.io/master-

while [[ `kubectl get pods --all-namespaces 2>&1` == *"No resources found."* ]]; do
  kubectl get pods --all-namespaces
  echo "-----------
  Coming up... Please stand by..."
  sleep 1
done

sleep 2
while [ `kubectl get pods --all-namespaces|grep Run|awk '{print $4}'|wc -l` -lt 5 ]; do
  kubectl get pods --all-namespaces
  echo "-----------
  Please wait... still in process..."
  sleep 2
done

kubectl apply -f calico.yaml
### Or get remote directly...
## kubectl apply -f http://docs.projectcalico.org/v2.3/getting-started/kubernetes/installation/hosted/kubeadm/1.6/calico.yaml
while [ `kubectl get pods --all-namespaces|grep Run|awk '{print $4}'|wc -l` -lt 7 ]; do
  kubectl get pods --all-namespaces
  echo "-----------
  Please wait... still in process..."
  sleep 5
done

kubectl get nodes
kubectl get pods --all-namespaces

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml
kubectl get pods --all-namespaces

kubeadm join --skip-preflight-checks $token
kubectl get nodes
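The wait loops in both scripts poll until enough pods report a Running status. The counting pipeline they rely on can be demonstrated on a hypothetical, trimmed sample of `kubectl get pods --all-namespaces` output:

```shell
# Trimmed, hypothetical pod listing; only lines whose STATUS contains "Run"
# survive the grep, so Pending pods are not counted.
pods='NAMESPACE     NAME                        READY   STATUS    RESTARTS   AGE
kube-system   etcd-your-kub-host          1/1     Running   0          9m
kube-system   kube-dns-2425271678-gsq8d   3/3     Pending   0          9m
kube-system   kube-proxy-knhtz            1/1     Running   0          9m'

# tr strips the padding some wc implementations add.
running=$(printf '%s\n' "$pods" | grep Run | awk '{print $4}' | wc -l | tr -d ' ')
echo "$running"   # prints 2
```

The loops compare this count against an expected minimum (5 for flannel, 7 for Calico) before moving on.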

Compile Kubernetes yourself

git clone https://github.com/kubernetes/kubernetes
cd kubernetes
make quick-release

You might also like:
Master Index – Related Posts To Docker, Kubernetes And Micro-Services.

What tools/scripts are you using to manage your Kubernetes cluster? Please let me know in the comments below.
