Configuring Kubernetes 3 Node Cluster On CoreOS Kubelet, RKT, CNI – Part 4


In the previous post I went through creating the Kubernetes manifests; below I continue with the final kubelet, rkt, and CNI configuration.

I divided the Kubernetes configuration into the parts outlined below (still a work in progress).

Note: An up-to-date example is available on my GitHub project page, or generate your own Kubernetes configuration with the Kubernetes generator available here on my GitHub page.

This is part 4 – finalizing the kubelet configuration to use rkt and Flannel+CNI.

Required CNI configuration files

Next, let's create the CNI configuration which will be used by rkt.

cat /etc/kubernetes/cni/net.d/10-containernet.conf
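The file contents are not shown above; as a sketch, a Flannel-delegated CNI config typically looks like the below. The network name podnet1 matches the one referenced later in this post; adjust it to your setup.

```json
{
  "name": "podnet1",
  "type": "flannel",
  "delegate": {
    "isDefaultGateway": true
  }
}
```

The flannel CNI plugin reads the subnet leased to this host by Flannel and delegates the actual interface setup to the bridge plugin.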

If you plan to run Docker alongside Flannel, create the file below to ensure Docker does not conflict with Flannel.
cat /etc/kubernetes/cni/docker_opts_cni.env
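Following the CoreOS convention, this env file blanks Docker's own bridge and IP-masquerade options so Flannel can manage them instead (verify against the CoreOS docs for your release):

```ini
DOCKER_OPT_BIP=""
DOCKER_OPT_IPMASQ=""
```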

Extra kubernetes rkt services

The configuration below uses only the CoreOS rkt container engine; the following services are required for Rocket (rkt) to work properly.

Create the below files in /etc/systemd/system.
cat /etc/systemd/system/rkt-api-tcp.socket

cat /etc/systemd/system/rkt-api.service

cat /etc/systemd/system/rkt-gc.service

cat /etc/systemd/system/rkt-gc.timer

cat /etc/systemd/system/rkt-metadata.service

cat /etc/systemd/system/rkt-metadata.socket
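These units are essentially the ones shipped with upstream rkt. As a sketch, the api socket and service pair looks roughly like the below (port 15441 is rkt's default api-service port; verify the unit contents against your rkt version):

```ini
# /etc/systemd/system/rkt-api-tcp.socket
[Unit]
Description=rkt api service socket
PartOf=rkt-api.service

[Socket]
ListenStream=127.0.0.1:15441
Service=rkt-api.service

# /etc/systemd/system/rkt-api.service
[Unit]
Description=rkt api service
Documentation=https://github.com/rkt/rkt
After=network.target rkt-api-tcp.socket
Requires=rkt-api-tcp.socket

[Service]
ExecStart=/usr/bin/rkt api-service
```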

Lastly, let's create the kubelet service file.
cat /etc/systemd/system/kubelet.service
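A trimmed sketch of a kubelet unit using the CoreOS kubelet-wrapper with the rkt runtime and CNI; the image tag, cluster DNS IP, and exact flag set here are placeholders, so use the values from your own manifests:

```ini
[Unit]
Description=Kubernetes Kubelet
Requires=rkt-api.service
After=rkt-api.service

[Service]
Environment=KUBELET_IMAGE_TAG=v1.9.6_coreos.0
ExecStart=/usr/lib/coreos/kubelet-wrapper \
  --container-runtime=rkt \
  --rkt-path=/usr/bin/rkt \
  --pod-manifest-path=/etc/kubernetes/manifests \
  --network-plugin=cni \
  --cni-conf-dir=/etc/kubernetes/cni/net.d \
  --allow-privileged=true \
  --cluster-dns=10.3.0.10 \
  --cluster-domain=cluster.local
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
```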

Ignition config
To use an Ignition config, just add the below to your Ignition config.
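As a sketch, in a Container Linux Config (the YAML that ct transpiles to Ignition JSON) the same pieces look roughly like this; unit contents are abbreviated here, so paste in your full unit files:

```yaml
systemd:
  units:
    - name: rkt-api.service
      enabled: true
    - name: kubelet.service
      enabled: true
storage:
  files:
    - path: /etc/kubernetes/cni/net.d/10-containernet.conf
      filesystem: root
      mode: 0644
      contents:
        inline: |
          { "name": "podnet1", "type": "flannel", "delegate": { "isDefaultGateway": true } }
```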

Enabling and starting the services

First, let's enable/start all the prerequisite services.
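For example, using the unit names created above:

```shell
sudo systemctl daemon-reload
sudo systemctl enable rkt-api-tcp.socket rkt-api.service rkt-gc.timer rkt-metadata.socket
sudo systemctl start rkt-api-tcp.socket rkt-gc.timer rkt-metadata.socket
```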

Finally, we are ready to start the kubelet service; you do so by running the below.
Tip: To see the system logs and if things work as expected, just run journalctl -u kubelet -f in another window.
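Something along these lines:

```shell
sudo systemctl enable kubelet.service
sudo systemctl start kubelet.service

# In another window, follow the kubelet logs:
journalctl -u kubelet -f
```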

For the next step you might need to download the kubectl utility; you do so by running the below.
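A sketch of fetching kubectl on Container Linux; the version below is a placeholder (match it to your cluster), and /opt/bin is used because /usr is read-only:

```shell
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.9.6/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mkdir -p /opt/bin && sudo mv kubectl /opt/bin/
```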

Now verify all services are running by running the below. If things are OK, you should see something like the below output.
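For example (output omitted here; the node names will be your coreos1–3 hosts):

```shell
kubectl get nodes
kubectl get pods --all-namespaces -o wide
systemctl status rkt-api kubelet --no-pager
```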

For complete ct/Ignition-ready example files, click here for Node 1, Node 2 and Node 3.

Tips and Tricks / Troubleshooting

You might need to manually fetch the stage1-coreos image; I struggled with this for a while.
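On Container Linux the stage1 image ships on disk; a manual fetch looks roughly like the below (the path is the usual Container Linux location, but verify it on your release):

```shell
sudo rkt fetch /usr/lib/rkt/stage1-images/stage1-coreos.aci --insecure-options=image
```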

You should also verify that the rkt api-service is running; otherwise the kubelet rkt service will fail to start.
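A quick check, starting the service if it is not active:

```shell
systemctl is-active rkt-api.service || sudo systemctl start rkt-api.service
journalctl -u rkt-api -n 20 --no-pager
```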

Verifying and using Etcd

To re-join an existing Etcd member
First, get the member list by running the below.

Next, remove the member; in the below example it is coreos3.

Now, re-add the member to the cluster.

At the next coreos3 Etcd startup it will re-join the cluster cleanly.
Note: Make sure to change the etcd-member config from "new" to "existing" (i.e. --initial-cluster-state="existing").
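The three steps above, sketched with etcdctl; the member ID and peer URL are placeholders for your coreos3 node:

```shell
# 1. List the members and note the ID of coreos3
etcdctl member list

# 2. Remove the stale coreos3 member (ID below is a placeholder)
etcdctl member remove 2b9f216f4c9a7a0a

# 3. Re-add it with its peer URL, then restart etcd-member on coreos3
etcdctl member add coreos3 http://192.168.10.13:2380
```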

Tips and Testing

Testing the Rocket (rkt) engine.

Fetch a Docker image into the rkt inventory / image list
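For example (--insecure-options=image skips signature verification, which Docker Hub images need):

```shell
rkt --insecure-options=image fetch docker://busybox
```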

List the busybox image
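For example:

```shell
rkt image list
```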

Run the busybox image with rkt
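For example, an interactive shell in busybox; podnet1 is the network name defined earlier in this post:

```shell
sudo rkt run --net=podnet1 --interactive docker://busybox --exec /bin/sh
```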

List the running images
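For example:

```shell
rkt list
```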

Cleaning orphaned images, if you see an error like the below.
Tip: Normally this is not needed, as the rkt gc service will do this over time.
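A manual cleanup can be forced like so (a zero grace period collects exited pods immediately):

```shell
sudo rkt gc --grace-period=0s
sudo rkt image gc
```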

Running two copies of nginx in a pod
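The original commands are not shown; one way to do this with the kubectl of that era is a deployment with two replicas (the name my-nginx is arbitrary):

```shell
kubectl run my-nginx --image=nginx --replicas=2 --port=80
kubectl get pods -o wide
```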

Describe the full pod details
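For example (substitute a pod name from "kubectl get pods"):

```shell
# <pod-name> is a placeholder for one of your running pods
kubectl describe pod <pod-name>
```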

Flannel / CNI Tips

For every rkt container started with the option to use podnet1, the IP allocations will be listed in the below locations.
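With the host-local IPAM that the bridge plugin uses by default, the leases typically land under these paths (the directory name assumes the podnet1 network; Flannel's own host lease is in subnet.env):

```shell
ls /var/lib/cni/networks/podnet1/
cat /run/flannel/subnet.env
```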

Tip: By using rkt with the --net option without a network name, it will use/create a default network with a private address range.

Optional, check out the next part, adding Ingress, kube-dns and kube-dashboard – in part 5 or jump to part 6 – Automate the Kubernetes deployment(coming soon).

You might also like – other articles related to Docker and Kubernetes / micro-services.

Like what you're reading? Please provide feedback; any feedback is appreciated.
