Configuring a Kubernetes 3-Node Cluster on CoreOS – Kubelet, rkt, CNI – Part 4


In the previous post I went through creating the Kubernetes manifests; below I continue with the final kubelet, rkt and CNI configuration.

I divided the Kubernetes configuration into the parts outlined below (still a work in progress).

Note: An up-to-date example is available on my GitHub project page, or you can generate your own Kubernetes configuration with the Kubernetes generator, also available on my GitHub page.

This is part 4 – finalizing the kubelet configuration to use rkt and Flannel+CNI.

Required CNI configuration files

Next, let's create the CNI configuration that will be used by rkt.

mkdir -p /etc/kubernetes/cni/net.d

cat /etc/kubernetes/cni/net.d/10-containernet.conf

{
    "name": "podnet1",
    "type": "flannel",
    "delegate": {
        "forceAddress": true,
        "isDefaultGateway": true,
        "hairpinMode": true
    }
}
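
A quick way to sanity-check the JSON before the kubelet picks it up is to pretty-print it (jq ships with Container Linux; if your image lacks it, any JSON validator will do):

jq . /etc/kubernetes/cni/net.d/10-containernet.conf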

If you plan to run Docker alongside Flannel, create the file below to ensure Docker does not conflict with Flannel's network configuration.
cat /etc/kubernetes/cni/docker_opts_cni.env

DOCKER_OPT_BIP=""
DOCKER_OPT_IPMASQ=""
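
To make Docker actually pick up these variables, one approach (mirroring the CoreOS flannel docs) is a systemd drop-in for the docker service; the drop-in file name below is a suggestion:

cat /etc/systemd/system/docker.service.d/40-flannel.conf

[Unit]
Requires=flanneld.service
After=flanneld.service

[Service]
EnvironmentFile=/etc/kubernetes/cni/docker_opts_cni.env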

Extra Kubernetes rkt services

The configuration below uses only the CoreOS rkt container engine; the following services are required for Rocket (rkt) to work properly.

Create the following files in /etc/systemd/system.
cat /etc/systemd/system/rkt-api-tcp.socket

[Unit]
Description=rkt api service socket
PartOf=rkt-api.service

[Socket]
ListenStream=127.0.0.1:15441
ListenStream=[::1]:15441
Service=rkt-api.service
BindIPv6Only=both

[Install]
WantedBy=sockets.target
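
Once the socket is enabled and started (covered further below), you can confirm it is listening on port 15441 with something like:

ss -tln | grep 15441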

cat /etc/systemd/system/rkt-api.service

[Unit]
Description=rkt api service
Documentation=http://github.com/rkt/rkt
After=network.target rkt-api-tcp.socket
Requires=rkt-api-tcp.socket

[Service]
ExecStart=/usr/bin/rkt api-service --local-config=/etc/rkt

[Install]
WantedBy=multi-user.target

cat /etc/systemd/system/rkt-gc.service

[Unit]
Description=Garbage Collection for rkt

[Service]
Environment=GRACE_PERIOD=24h
Type=oneshot
ExecStart=/usr/bin/rkt gc --grace-period=${GRACE_PERIOD}

cat /etc/systemd/system/rkt-gc.timer

[Unit]
Description=Periodic Garbage Collection for rkt

[Timer]
OnActiveSec=0s
OnUnitActiveSec=12h

[Install]
WantedBy=multi-user.target

cat /etc/systemd/system/rkt-metadata.service

[Unit]
Description=rkt metadata service
Documentation=http://github.com/rkt/rkt
After=network.target rkt-metadata.socket
Requires=rkt-metadata.socket

[Service]
ExecStart=/usr/bin/rkt metadata-service

[Install]
WantedBy=multi-user.target

cat /etc/systemd/system/rkt-metadata.socket

[Unit]
Description=rkt metadata service socket
PartOf=rkt-metadata.service

[Socket]
ListenStream=/run/rkt/metadata-svc.sock
SocketMode=0660
SocketUser=root
SocketGroup=root
RemoveOnStop=true

[Install]
WantedBy=sockets.target

Lastly, let's create the kubelet service file.
cat /etc/systemd/system/kubelet.service

[Unit]
Description=The primary agent to run pods
Documentation=http://kubernetes.io/docs/admin/kubelet/
Requires=etcd-member.service
After=flanneld.service

[Service]
Slice=system.slice
Environment=KUBELET_IMAGE_TAG=v1.8.0_coreos.0
ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/usr/bin/mkdir -p /var/log/containers
ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/usr/bin/mkdir -p /opt/cni/bin
ExecStartPre=/usr/bin/mkdir -p /var/lib/cni
Environment="RKT_RUN_ARGS=--uuid-file-save=/var/run/kubelet-pod.uuid \
  --net=host \
  --set-env-file=/etc/environment \
  --volume var-lib-rkt,kind=host,source=/var/lib/rkt \
  --mount volume=var-lib-rkt,target=/var/lib/rkt \
  --volume dns,kind=host,source=/etc/resolv.conf \
  --mount volume=dns,target=/etc/resolv.conf \
  --volume var-lib-cni,kind=host,source=/var/lib/cni \
  --mount volume=var-lib-cni,target=/var/lib/cni \
  --volume var-log,kind=host,source=/var/log \
  --mount volume=var-log,target=/var/log \
  --volume container,kind=host,source=/var/log/containers \
  --mount volume=container,target=/var/log/containers \
  --volume rkt,kind=host,source=/usr/bin/rkt \
  --mount volume=rkt,target=/usr/bin/rkt"
ExecStart=/usr/lib/coreos/kubelet-wrapper \
  --address="0.0.0.0" \
  --register-node=true \
  --container-runtime=rkt \
  --pod-manifest-path=/etc/kubernetes/manifests \
  --allow-privileged=true \
  --cert-dir=/etc/kubernetes/ssl \
  --tls-cert-file=/etc/kubernetes/ssl/worker.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/worker-key.pem \
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml \
  --cluster-dns=10.3.0.10 \
  --cluster-domain=cluster.local \
  --network-plugin=cni \
  --cni-conf-dir=/etc/kubernetes/cni/net.d \
  --lock-file=/var/run/lock/kubelet.lock \
  --exit-on-lock-contention \
  --cloud-provider="" \
  --cadvisor-port=4194 \
  --cgroups-per-qos=true \
  --cgroup-root=/ \
  --hostname-override=coreos1
ExecStop=-/usr/bin/rkt stop --uuid-file=/var/run/kubelet-pod.uuid
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
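
Once the kubelet is up (we start it further below), a quick liveness check is its healthz endpoint; 10248 is the kubelet's default healthz port, adjust if you override it:

curl http://127.0.0.1:10248/healthz
ok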

Ignition config
To provision the same units at first boot, add the below to your Ignition/Container Linux Config.

systemd:
  units:
    - name: rkt-api.service
      enabled: true
      contents: |
        [Unit]
        Description=rkt api service
        Documentation=http://github.com/rkt/rkt
        After=network.target rkt-api-tcp.socket
        Requires=rkt-api-tcp.socket

        [Service]
        ExecStart=/usr/bin/rkt api-service --local-config=/etc/rkt

        [Install]
        WantedBy=multi-user.target

    - name: rkt-gc.service
      enabled: true
      contents: |
        [Unit]
        Description=Garbage Collection for rkt

        [Service]
        Environment=GRACE_PERIOD=24h
        Type=oneshot
        ExecStart=/usr/bin/rkt gc --grace-period=${GRACE_PERIOD}

    - name: rkt-gc.timer
      enabled: true
      contents: |
        [Unit]
        Description=Periodic Garbage Collection for rkt

        [Timer]
        OnActiveSec=0s
        OnUnitActiveSec=12h

        [Install]
        WantedBy=multi-user.target

    - name: rkt-metadata.service
      enabled: true
      contents: |
        [Unit]
        Description=rkt metadata service
        Documentation=http://github.com/rkt/rkt
        After=network.target rkt-metadata.socket
        Requires=rkt-metadata.socket

        [Service]
        ExecStart=/usr/bin/rkt metadata-service

        [Install]
        WantedBy=multi-user.target

    - name: rkt-metadata.socket
      enabled: true
      contents: |
        [Unit]
        Description=rkt metadata service socket
        PartOf=rkt-metadata.service

        [Socket]
        ListenStream=/run/rkt/metadata-svc.sock
        SocketMode=0660
        SocketUser=root
        SocketGroup=root
        RemoveOnStop=true

        [Install]
        WantedBy=sockets.target

    - name: kubelet.service
      enabled: false
      contents: |
          [Unit]
          Description=The primary agent to run pods
          Documentation=http://kubernetes.io/docs/admin/kubelet/
          Requires=etcd-member.service
          After=flanneld.service
          
          [Service]
          Slice=system.slice
          Environment=KUBELET_IMAGE_TAG=v1.8.0_coreos.0
          ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests
          ExecStartPre=/usr/bin/mkdir -p /var/log/containers
          ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/cni/net.d
          ExecStartPre=/usr/bin/mkdir -p /opt/cni/bin
          ExecStartPre=/usr/bin/mkdir -p /var/lib/cni
          Environment="RKT_RUN_ARGS=--uuid-file-save=/var/run/kubelet-pod.uuid \
            --net=host \
            --set-env-file=/etc/environment \
            --volume var-lib-rkt,kind=host,source=/var/lib/rkt \
            --mount volume=var-lib-rkt,target=/var/lib/rkt \
            --volume dns,kind=host,source=/etc/resolv.conf \
            --mount volume=dns,target=/etc/resolv.conf \
            --volume var-lib-cni,kind=host,source=/var/lib/cni \
            --mount volume=var-lib-cni,target=/var/lib/cni \
            --volume var-log,kind=host,source=/var/log \
            --mount volume=var-log,target=/var/log \
            --volume container,kind=host,source=/var/log/containers \
            --mount volume=container,target=/var/log/containers \
            --volume rkt,kind=host,source=/usr/bin/rkt \
            --mount volume=rkt,target=/usr/bin/rkt"
          ExecStart=/usr/lib/coreos/kubelet-wrapper \
            --address="0.0.0.0" \
            --register-node=true \
            --container-runtime=rkt \
            --pod-manifest-path=/etc/kubernetes/manifests \
            --allow-privileged=true \
            --cert-dir=/etc/kubernetes/ssl \
            --tls-cert-file=/etc/kubernetes/ssl/worker.pem \
            --tls-private-key-file=/etc/kubernetes/ssl/worker-key.pem \
            --client-ca-file=/etc/kubernetes/ssl/ca.pem \
            --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml \
            --cluster-dns=10.3.0.10 \
            --cluster-domain=cluster.local \
            --network-plugin=cni \
            --cni-conf-dir=/etc/kubernetes/cni/net.d \
            --lock-file=/var/run/lock/kubelet.lock \
            --exit-on-lock-contention \
            --cloud-provider="" \
            --cadvisor-port=4194 \
            --cgroups-per-qos=true \
            --cgroup-root=/ \
            --hostname-override=coreos1
          ExecStop=-/usr/bin/rkt stop --uuid-file=/var/run/kubelet-pod.uuid
          Restart=always
          RestartSec=10
          
          [Install]
          WantedBy=multi-user.target
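
If you keep the above as a Container Linux Config (YAML), remember it has to be transpiled into Ignition JSON with the ct (config transpiler) tool before boot; the file names below are placeholders:

ct --in-file node1.yaml --out-file node1.json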

Enabling and starting the services

First, let's enable and start all the prerequisite services.

systemctl daemon-reload
systemctl enable rkt-api-tcp.socket  rkt-api.service  rkt-gc.service  \
rkt-gc.timer  rkt-metadata.service  rkt-metadata.socket
systemctl start rkt-api-tcp.socket  rkt-api.service  rkt-gc.service  \
rkt-gc.timer  rkt-metadata.service  rkt-metadata.socket

Finally, we are ready to start the kubelet service, which you do by running the below.
Tip: To watch the logs and see whether things work as expected, run journalctl -u kubelet -f in another window.

systemctl enable kubelet
systemctl start kubelet

For the next step you might need to download the kubectl utility, which you can do by running the below.

mkdir /opt/bin && cd /opt/bin
curl -o kubectl https://storage.googleapis.com/kubernetes-release/release/v1.8.0/bin/linux/amd64/kubectl
chmod +x /opt/bin/kubectl
export PATH=$PATH:/opt/bin
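
With kubectl in place, a quick first check is that the nodes registered themselves; each node should report a Ready status, something like:

kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
coreos1   Ready     <none>    2d        v1.8.0+coreos.0
coreos2   Ready     <none>    1d        v1.8.0+coreos.0
coreos3   Ready     <none>    1d        v1.8.0+coreos.0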

Now verify that all services are running by issuing the command below. If things are OK you should see output like the following.

kubectl get po --all-namespaces -o wide
NAMESPACE     NAME                              READY     STATUS    RESTARTS   AGE       IP          NODE
kube-system   kube-apiserver-coreos1            1/1       Running   0          2d        10.0.2.11   coreos1
kube-system   kube-apiserver-coreos2            1/1       Running   0          1d        10.0.2.12   coreos2
kube-system   kube-apiserver-coreos3            1/1       Running   0          1d        10.0.2.13   coreos3
kube-system   kube-controller-manager-coreos1   1/1       Running   0          2d        10.0.2.11   coreos1
kube-system   kube-controller-manager-coreos2   1/1       Running   0          1d        10.0.2.12   coreos2
kube-system   kube-controller-manager-coreos3   1/1       Running   0          1d        10.0.2.13   coreos3
kube-system   kube-proxy-coreos1                1/1       Running   0          2d        10.0.2.11   coreos1
kube-system   kube-proxy-coreos2                1/1       Running   0          1d        10.0.2.12   coreos2
kube-system   kube-proxy-coreos3                1/1       Running   0          1d        10.0.2.13   coreos3
kube-system   kube-scheduler-coreos1            1/1       Running   0          2d        10.0.2.11   coreos1
kube-system   kube-scheduler-coreos2            1/1       Running   0          1d        10.0.2.12   coreos2
kube-system   kube-scheduler-coreos3            1/1       Running   0          1d        10.0.2.13   coreos3

For complete ct/Ignition-ready example files, click here for Node 1, Node 2 and Node 3.

Tips and Tricks / Troubleshooting

You might need to manually fetch the stage1-coreos image; I struggled with this for a while.

rkt image fetch coreos.com/rkt/stage1-coreos:1.29.0

You should also verify that the rkt api-service is running, otherwise the kubelet rkt service will fail to start.

ps -ef |grep rkt|grep api
root       626     1  0 Oct31 ?        00:17:57 /usr/bin/rkt api-service --local-config=/etc/rkt
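
The same check via systemd:

systemctl is-active rkt-api.service
active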

Verifying and using Etcd

etcdctl cluster-health
member 829c4dcf6567e22f is healthy: got healthy result from https://10.0.2.13:2379
member 8ad2e1df4dc66f9a is healthy: got healthy result from https://10.0.2.12:2379
member b12eaa0af14319e0 is healthy: got healthy result from https://10.0.2.11:2379
cluster is healthy

etcdctl ls /coreos.com/network
/coreos.com/network/config
/coreos.com/network/subnets

etcdctl get /coreos.com/network/config
{ "Network": "10.0.0.0/21", "SubnetLen": 24, "Backend": { "Type": "vxlan", "VNI": 1 } }

To re-join an existing Etcd member
First, get the member list by running the below.

etcdctl member list
1d9b68db3dfbbb61: name=coreos3 peerURLs=https://10.0.2.13:2380 clientURLs=https://10.0.2.13:2379 isLeader=false
8ad2e1df4dc66f9a: name=coreos2 peerURLs=https://10.0.2.12:2380 clientURLs=https://10.0.2.12:2379 isLeader=false
b12eaa0af14319e0: name=coreos1 peerURLs=https://10.0.2.11:2380 clientURLs=https://10.0.2.11:2379 isLeader=true

Next, remove the member; in the below example it is coreos3.

etcdctl member remove 1d9b68db3dfbbb61 
Removed member 1d9b68db3dfbbb61 from cluster

Now, re-add the member to the cluster.

etcdctl member add coreos3 https://10.0.2.13:2380
Added member named coreos3 with ID 805576726f49436d to cluster

ETCD_NAME="coreos3"
ETCD_INITIAL_CLUSTER="coreos3=https://10.0.2.13:2380,coreos2=https://10.0.2.12:2380,coreos1=https://10.0.2.11:2380"
ETCD_INITIAL_CLUSTER_STATE="existing"

At the next Etcd startup on coreos3 it will re-join the cluster cleanly.
Note: Make sure to change the etcd-member config from "new" to "existing" (i.e. --initial-cluster-state="existing").
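
One way to apply those variables on Container Linux is a systemd drop-in for etcd-member (the drop-in file name is a suggestion), followed by a restart:

cat /etc/systemd/system/etcd-member.service.d/30-rejoin.conf

[Service]
Environment=ETCD_NAME=coreos3
Environment=ETCD_INITIAL_CLUSTER_STATE=existing
Environment="ETCD_INITIAL_CLUSTER=coreos3=https://10.0.2.13:2380,coreos2=https://10.0.2.12:2380,coreos1=https://10.0.2.11:2380"

systemctl daemon-reload && systemctl restart etcd-member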

Tips and Testing

Testing the Rocket (rkt) engine.

rkt run --interactive kinvolk.io/aci/busybox:1.24

Fetch a Docker image into the rkt inventory / image list

rkt --insecure-options=image fetch docker://rastasheep/ubuntu-sshd

List the images; the busybox image should now appear

rkt image list
ID			NAME							SIZE	IMPORT TIME	LAST USED
sha512-f0e24f10d3c2	quay.io/coreos/etcd:v3.2.8				69MiB	2 weeks ago	2 weeks ago
sha512-05a6acfda824	quay.io/coreos/flannel:v0.9.0				99MiB	2 weeks ago	2 weeks ago
sha512-e50b77423452	coreos.com/rkt/stage1-coreos:1.29.0			211MiB	3 days ago	3 days ago
sha512-0b2741935820	quay.io/coreos/hyperkube:v1.8.0_coreos.0		514MiB	3 days ago	3 days ago
sha512-140375b2a2bd	kinvolk.io/aci/busybox:1.24				2.2MiB	2 days ago	2 days ago
[..] snip

Run the fetched Docker image with rkt on the podnet1 network

rkt run --dns 8.8.8.8 --interactive --debug registry-1.docker.io/rastasheep/ubuntu-sshd:latest --net=podnet1

List the running pods

rkt list
UUID		APP			IMAGE NAME					STATE	CREATED		STARTED		NETWORKS
0d927861	kube-apiserver		quay.io/coreos/hyperkube:v1.8.0_coreos.0	running	2 days ago	2 days ago	
57d51105	etcd			quay.io/coreos/etcd:v3.2.8			running	2 days ago	2 days ago	
5b55b2b9	kube-proxy		quay.io/coreos/hyperkube:v1.8.0_coreos.0	running	2 days ago	2 days ago	
84cdbd78	hyperkube		quay.io/coreos/hyperkube:v1.8.0_coreos.0	running	2 hours ago	2 hours ago	
8c8a66e1	kube-scheduler		quay.io/coreos/hyperkube:v1.8.0_coreos.0	running	2 days ago	2 days ago	
bd3afcff	kube-controller-manager	quay.io/coreos/hyperkube:v1.8.0_coreos.0	running	2 days ago	2 days ago	
ede8bf63	hyperkube		quay.io/coreos/hyperkube:v1.8.0_coreos.0	exited	1 day ago	1 day ago	
f1645450	my-nginx		registry-1.docker.io/library/nginx:latest	running	1 day ago	1 day ago	
f6befb10	flannel			quay.io/coreos/flannel:v0.9.0			running	1 day ago	1 day ago	
[..] snip
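
To get a shell inside one of the running apps, pass the pod UUID and app name from the listing above to rkt enter:

rkt enter --app=my-nginx f1645450 /bin/bash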

Cleaning orphaned pods/images, if you see errors like the below.
Tip: Normally this is not needed, as the rkt gc service will take care of it over time.

list: 5 error(s) encountered when listing pods:
list: ----------------------------------------
list: Unable to read pod 1be2f6a1-c6de-48f3-a1f6-a6f17fbe9920 manifest:
  error reading pod manifest
[..] snip

# Run to clean
rkt gc --grace-period=0s

Running two copies of nginx as a deployment

kubectl run my-nginx --image=registry-1.docker.io/library/nginx:latest --image-pull-policy=Never --replicas=2 --port=80
kubectl expose deployment my-nginx --port=80 --type=LoadBalancer

# Show status
kubectl get po -o wide --all-namespaces
NAMESPACE     NAME                              READY     STATUS    RESTARTS   AGE       IP          NODE
default       my-nginx-7d69cc96fc-jndtj         1/1       Running   0          1d        10.0.1.43   coreos1
default       my-nginx-7d69cc96fc-tszj6         1/1       Running   0          1d        10.0.6.50   coreos3
kubectl get svc -o wide --all-namespaces
NAMESPACE   NAME         TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE       SELECTOR
default     my-nginx     LoadBalancer   10.3.6.151   <pending>     80:31535/TCP   1d        run=my-nginx
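
A quick functional test is to curl the service, either on its cluster IP from a node or on the NodePort of any node (IPs taken from the outputs above):

curl -s http://10.3.6.151 | head -4
curl -s http://10.0.2.11:31535 | head -4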

Describe the full pod details

kubectl describe po my-nginx-7d69cc96fc-jndtj
Name:           my-nginx-7d69cc96fc-jndtj
Namespace:      default
Node:           coreos1/10.0.2.11
Start Time:     Wed, 01 Nov 2017 18:19:23 +0000
Labels:         pod-template-hash=3825775297
                run=my-nginx
[..] snip
Flannel / CNI Tips

For every rkt container started on the podnet1 network, the IP allocation is recorded in the locations below.

more /var/lib/cni/networks/podnet1/*
::::::::::::::
/var/lib/cni/networks/podnet1/10.0.1.43
::::::::::::::
k8s_3003e3d5-bf31-11e7-9702-080027079313
::::::::::::::
/var/lib/cni/networks/podnet1/last_reserved_ip
::::::::::::::
10.0.1.43
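
Flannel also records this node's assigned subnet and MTU in an env file, which is what the CNI flannel delegate reads; on this setup it looks something like:

cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.0.0.0/21
FLANNEL_SUBNET=10.0.1.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true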

Tip: When rkt is used with the --net option without a network name, it will use/create a default network in the private range 172.16.28.0/24.

Optionally, check out the next part – adding Ingress, kube-dns and kube-dashboard – in part 5, or jump to part 6 – automating the Kubernetes deployment (coming soon).

You might also like – other articles related to Docker and Kubernetes / micro-services.

Like what you're reading? Please provide feedback; any feedback is appreciated.
