Adding Ingress, DNS and/or Dashboard to Kubernetes
In the previous post I went through how to finalize the kubelet configuration to use RKT and Flannel/CNI. Below are examples of how to configure and use Træfik as an Ingress controller, as well as how to configure kube-dns and kube-dashboard (coming soon).
I divided the configuration into the parts outlined below (still a work in progress).
- Part 1: Initial setup – getting CoreOS, preparing SSL certificates, etc.
- Part 2: Configure the etcd key-value store and Flannel.
- Part 3: Configure the Kubernetes manifests for the controller, API server, scheduler and proxy.
- Part 4: Finalize the kubelet configuration to use RKT and Flannel+CNI.
- Part 5: Optional – configure Ingress, kube-dns and kube-dashboard.
- Part 6: Automate the Kubernetes deployment, etcd, kubelet, rkt, flannel, cni and ssl.
Note: An up-to-date example is available on my GitHub project page, or generate your own Kubernetes configuration with the Kubernetes generator available here on my GitHub page.
This is Part 5: Optional – configure Ingress, kube-dns and kube-dashboard.
The kube-dashboard example is still in the works and is therefore missing from the configuration below; I hope to update this post once I get a chance.
Simple Kubernetes kube-dns Configuration
To create the DNS pod, run the below using the kube-dns.yaml shown further down.
```
kubectl create -f kube-dns.yaml
serviceaccount "kube-dns" created
configmap "kube-dns" created
service "kube-dns" created
deployment "kube-dns" created

# Verify status
kubectl get po --all-namespaces -o wide
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE   IP           NODE
kube-system   kube-apiserver-coreos1            1/1     Running   0          2h    172.0.2.11   coreos1
kube-system   kube-apiserver-coreos2            1/1     Running   0          2h    172.0.2.12   coreos2
kube-system   kube-apiserver-coreos3            1/1     Running   0          2h    172.0.2.13   coreos3
kube-system   kube-controller-manager-coreos1   1/1     Running   0          2h    172.0.2.11   coreos1
kube-system   kube-controller-manager-coreos2   1/1     Running   0          2h    172.0.2.12   coreos2
kube-system   kube-controller-manager-coreos3   1/1     Running   0          2h    172.0.2.13   coreos3
kube-system   kube-dns-5d9f945775-qcqx9         3/3     Running   0          12s   10.20.1.4    worker1
kube-system   kube-proxy-coreos1                1/1     Running   0          2h    172.0.2.11   coreos1
kube-system   kube-proxy-coreos2                1/1     Running   0          2h    172.0.2.12   coreos2
kube-system   kube-proxy-coreos3                1/1     Running   0          2h    172.0.2.13   coreos3
kube-system   kube-proxy-worker1                1/1     Running   0          1h    172.0.2.51   worker1
kube-system   kube-scheduler-coreos1            1/1     Running   0          2h    172.0.2.11   coreos1
kube-system   kube-scheduler-coreos2            1/1     Running   0          2h    172.0.2.12   coreos2
kube-system   kube-scheduler-coreos3            1/1     Running   0          2h    172.0.2.13   coreos3
```
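Once the pod reports 3/3 Running, a quick way to confirm that cluster DNS actually resolves is to run a throwaway busybox pod and query the API service name. This is a minimal sketch, assuming the busybox:1.28 image can be pulled on your nodes.

```
# Hypothetical sanity check; the dns-test pod is removed automatically when it exits.
kubectl run dns-test --image=busybox:1.28 --rm -it --restart=Never \
  -- nslookup kubernetes.default.svc.cluster.local
# The server reported by nslookup should be the kube-dns clusterIP (10.3.0.10 in this setup).
```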
To scale kube-dns out to additional replicas (for example onto additional nodes), run the below.
```
kubectl scale deployment/kube-dns --replicas=2 -n kube-system
```
To remove the DNS pod, run the below.
```
kubectl delete -f kube-dns.yaml
# or
kubectl delete deployment kube-dns -n kube-system
kubectl delete service kube-dns -n kube-system
kubectl delete configmap kube-dns -n kube-system
kubectl delete serviceaccount kube-dns -n kube-system
```
To troubleshoot, check the kube-dns pod logs with something like the below.
```
kubectl logs `kubectl get po --all-namespaces -o wide | grep kube-dns | grep worker1 | awk '{print $2}'` -n kube-system kubedns -f
```
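The kube-dns pod runs three containers (kubedns, dnsmasq and sidecar, per the manifest below), so the other two logs are often worth a look as well; replace the hypothetical <kube-dns-pod-name> placeholder with the actual pod name.

```
# Tail the dnsmasq and sidecar containers of the same pod.
kubectl logs <kube-dns-pod-name> -n kube-system -c dnsmasq -f
kubectl logs <kube-dns-pod-name> -n kube-system -c sidecar -f
```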
I struggled with the below error for a while.
```
waiting for services and endpoints to be initialized from apiserver...
```
Until I noticed that the PodCIDR was being created incorrectly, causing bad iptables rules on the worker node. Redoing the cluster cleared the bad PodCIDR assignment.
An example entry is below (whether it is good or bad depends on your network); watch for the correct Pod CIDR.
```
journalctl -u kubelet -f
...
Dec 13 20:14:37 coreos3 kubelet-wrapper[9895]: I1213 20:14:37.950577    9895 kubelet_network.go:276] Setting Pod CIDR:  -> 10.0.0.0/24
```
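To spot a wrong PodCIDR without tailing the kubelet logs, you can also ask the API server what each node was assigned and compare it against the Flannel network you configured; a small sketch:

```
# Lists every node together with the Pod CIDR the controller-manager assigned to it.
kubectl get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR
```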
Kube DNS configuration files
cat /etc/kubernetes/ssl/worker-kubeconfig.yaml
```
apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    server: https://10.3.0.1:443
    certificate-authority: /etc/kubernetes/ssl/ca.pem
users:
- name: kubelet
  user:
    client-certificate: /etc/kubernetes/ssl/worker.pem
    client-key: /etc/kubernetes/ssl/worker-key.pem
contexts:
- context:
    cluster: local
    user: kubelet
```
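To sanity-check that the worker certificates referenced in this kubeconfig are accepted by the API server, a curl like the one below (run on the worker node, using the same cert paths) should return "ok" from the /healthz endpoint. This is just a hedged sketch, not part of the original setup.

```
# Verifies the client cert/key against the API server's CA-signed endpoint.
curl --cacert /etc/kubernetes/ssl/ca.pem \
     --cert /etc/kubernetes/ssl/worker.pem \
     --key /etc/kubernetes/ssl/worker-key.pem \
     https://10.3.0.1:443/healthz
```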
Note: Make sure to append the ca.pem at the end of the worker.pem; an example is below.
```
-----BEGIN CERTIFICATE-----
MIIFQTCCBCmgAwIBAgIJAIeb0H3YfptEMA0GCSqGSIb3DQEBCwUAMFwxCzAJBgNV
...[..] snip
-----END CERTIFICATE-----
# And below add your CA cert. (remove this line)
-----BEGIN CERTIFICATE-----
MIIDfTCCAmWgAwIBAgIJAKboSpp9s2ZLMA0GCSqGSIb3DQEBCwUAMFwxCzAJBgNV
...[..] snip
-----END CERTIFICATE-----
```
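One way to do the append and then sanity-check the resulting chain is shown below; back up worker.pem first, since this modifies it in place. This is a hedged sketch, assuming openssl is available on the node.

```
# Keep a backup, append the CA cert, then verify the chain against the CA.
cp /etc/kubernetes/ssl/worker.pem /etc/kubernetes/ssl/worker.pem.bak
cat /etc/kubernetes/ssl/ca.pem >> /etc/kubernetes/ssl/worker.pem
openssl verify -CAfile /etc/kubernetes/ssl/ca.pem /etc/kubernetes/ssl/worker.pem
```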
cat kube-dns.yaml
Note: Replace example.com with your domain and IP Address.
```
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-dns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  stubDomains: |
    {"example.com": ["1.2.3.4"]}
  upstreamNameservers: |
    ["8.8.8.8", "8.8.4.4"]
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: KubeDNS
  name: kube-dns
  namespace: kube-system
spec:
  clusterIP: "10.3.0.10"
  ports:
  - name: dns
    port: 53
    protocol: UDP
    targetPort: 53
  - name: dns-tcp
    port: 53
    protocol: TCP
    targetPort: 53
  selector:
    k8s-app: kube-dns
  sessionAffinity: None
  type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
  name: kube-dns
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  strategy:
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ""
      creationTimestamp: ~
      labels:
        k8s-app: kube-dns
    spec:
      containers:
      - args:
        - "--domain=cluster.local."
        - "--dns-port=10053"
        - "--kube-master-url=https://10.3.0.1:443"
        - "--config-dir=/kube-dns-config"
        - "--kubecfg-file=/etc/kubernetes/ssl/worker-kubeconfig.yaml"
        - "--v=9"
        env: ~
        image: "gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7"
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /healthcheck/kubedns
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        name: kubedns
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
        - containerPort: 10055
          name: metrics
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          initialDelaySeconds: 3
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        volumeMounts:
        - mountPath: /kube-dns-config
          name: kube-dns-config
        - mountPath: /etc/ssl/certs
          name: ssl-certs-host
          readOnly: true
        - mountPath: /etc/kubernetes/ssl
          name: kube-ssl
          readOnly: true
        - mountPath: /etc/kubernetes/ssl/worker-kubeconfig.yaml
          name: kubeconfig
          readOnly: true
      - args:
        - "-v=2"
        - "-logtostderr"
        - "-configDir=/etc/k8s/dns/dnsmasq-nanny"
        - "-restartDnsmasq=true"
        - "--"
        - "-k"
        - "--cache-size=1000"
        - "--log-facility=-"
        - "--no-resolv"
        - "--server=/cluster.local/127.0.0.1#10053"
        - "--server=/in-addr.arpa/127.0.0.1#10053"
        - "--server=/ip6.arpa/127.0.0.1#10053"
        - "--log-queries"
        image: "gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7"
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /healthcheck/dnsmasq
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 30
        name: dnsmasq
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        resources:
          requests:
            cpu: 150m
            memory: 20Mi
        volumeMounts:
        - mountPath: /etc/k8s/dns/dnsmasq-nanny
          name: kube-dns-config
      - args:
        - "--v=2"
        - "--logtostderr"
        - "--probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,A"
        - "--probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,A"
        image: "gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.7"
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /metrics
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 30
        name: sidecar
        ports:
        - containerPort: 10054
          name: metrics
          protocol: TCP
        resources:
          requests:
            cpu: 10m
            memory: 20Mi
      dnsPolicy: Default
      restartPolicy: Always
      serviceAccount: kube-dns
      serviceAccountName: kube-dns
      terminationGracePeriodSeconds: 30
      tolerations:
      - key: CriticalAddonsOnly
        operator: Exists
      volumes:
      - configMap:
          defaultMode: 420
          name: kube-dns
          optional: true
        name: kube-dns-config
      - hostPath:
          path: /usr/share/ca-certificates
        name: ssl-certs-host
      - hostPath:
          path: /etc/kubernetes/ssl
        name: kube-ssl
      - hostPath:
          path: /etc/kubernetes/ssl/worker-kubeconfig.yaml
        name: kubeconfig
```
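After applying the manifest, the service clusterIP (10.3.0.10) should match the --cluster-dns value passed to the kubelet and show up as the nameserver inside pods. A quick check, with <any-running-pod> as a hypothetical placeholder for one of your pods:

```
# Confirm the kube-dns service IP, then confirm pods actually use it as their nameserver.
kubectl -n kube-system get svc kube-dns
kubectl exec <any-running-pod> -- cat /etc/resolv.conf
```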
Below is a quick and simple example of how to use Træfik with Nginx as an Ingress controller. Once I get around to it, I hope to update the configuration below with a more complex/useful example.
Simple Kubernetes Træfik Ingress Configuration
You will need to create the pod configuration files below.
This creates two Nginx instances (replicas).
cat nginx-deployment.yaml
```
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```
cat nginx-ingres.yaml
```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginxingress
spec:
  rules:
  - host: coreos1.domain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nginxsvc
          servicePort: 80
```
cat nginx-svc.yaml
```
apiVersion: v1
kind: Service
metadata:
  labels:
    name: nginxsvc
  name: nginxsvc
spec:
  ports:
  - port: 80
  selector:
    app: nginx
  type: ClusterIP
```
A very simple Traefik example is below; another example, using Docker, is available here.
cat traefik.toml
```
[web]
address = ":8181"
ReadOnly = true

[kubernetes]
# Kubernetes server endpoint
endpoint = "http://localhost:8080"
namespaces = ["default","kube-system"]
```
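The endpoint above assumes an unauthenticated API listener on localhost:8080. If that insecure port is not exposed where Traefik runs, one possible workaround (a sketch, not part of the original setup) is to let kubectl proxy provide such an endpoint locally:

```
# Expose an unauthenticated local API endpoint on port 8080 for Traefik to talk to.
kubectl proxy --port=8080 &
```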
Download Traefik; version 1.4.2 was the latest at the time of this writing.
```
wget -O traefik https://github.com/containous/traefik/releases/download/v1.4.2/traefik_linux-amd64
chmod u+x traefik
```
To create the configuration, run the below.
```
kubectl create -f nginx-deployment.yaml
kubectl create -f nginx-svc.yaml
kubectl create -f nginx-ingres.yaml

# Then start traefik
./traefik -c traefik.toml &
```
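Before testing, it can help to confirm that the Ingress object was created and that the service actually has endpoints backing it; for example:

```
# The ingress should list the host rule, and nginxsvc should show two pod endpoints.
kubectl get ingress nginxingress
kubectl get endpoints nginxsvc
```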
Note: Ideally, Traefik itself should run in a pod; I hope to update this post with such an example soon.
Tip: To scale from 2 to 3 pods.
```
kubectl scale --replicas=3 deployment nginx-deployment
```
To quickly test:
```
curl http://coreos1.domain.com

# Verify the new pods
kubectl get po --all-namespaces -o wide
```
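If coreos1.domain.com does not yet resolve to the node where Traefik is running, you can still test by forcing the hostname to that node's IP (172.0.2.11 for coreos1 in this lab); a hedged example:

```
# Resolve the hostname to the node IP locally for this request only.
curl --resolve coreos1.domain.com:80:172.0.2.11 http://coreos1.domain.com
# Or simply send the Host header so Traefik matches the ingress rule.
curl -H "Host: coreos1.domain.com" http://172.0.2.11
```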
To access the Traefik dashboard.
```
http://coreos1.domain.com:8181
```
To verify the service
```
kubectl describe svc nginxsvc
```
To remove / destroy the configuration.
```
kubectl delete -f nginx-ingres.yaml
kubectl delete -f nginx-svc.yaml
kubectl delete -f nginx-deployment.yaml
```
Adding Kube-Dashboard
How-to coming soon.
Check out the next part – Part 6 – Automate the Kubernetes deployment(coming soon).
You might also like – other articles related to Docker, Kubernetes and micro-services.
Like what you're reading? Please provide feedback; any feedback is appreciated.
Thanks for the guide. However, Traefik is not suited as an Ingress controller in Kubernetes due to it not having proper TLS cert handling for the Ingress resources. See
https://github.com/containous/traefik/issues/378
https://github.com/containous/traefik/issues/2438
for more info.
Really sorry for the late response; I had a serious issue and many comments were marked as SPAM.
As I am sure you know, Traefik now fully supports TLS.
Please provide a kube-dashboard example.
Hi and welcome to my Blog.
This post is a bit outdated; CoreOS is now owned by Red Hat and many things have changed. However, you can find the official kube-dashboard repository at https://github.com/kubernetes/dashboard, or apply it directly like so: kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml.
Alternatively, you can use Helm to install the dashboard or the full Prometheus stack (as well as many other packages); I have a full how-to here: http://www.devtech101.com/2018/09/04/deploying-helm-tiller-prometheus-alertmanager-grafana-elasticsearch-on-your-kubernetes-cluster/
I hope to update the Red Hat/CoreOS installation as well as the GitHub kube-auto-generator repository in the future.
I hope this helps.
Eli