Configuring Kubernetes Træfik Ingress Controller, DNS, Dashboard – Part 5


Adding Ingress, DNS and/or Dashboard to Kubernetes

In the previous post I went through how to finalize the kubelet configuration to use RKT and Flannel/CNI. Below are examples of how to configure and use Træfik as an Ingress Controller, as well as the kube-dns configuration; a kube-dashboard example is coming soon.

I divided the configuration into parts, outlined below (still a work in progress).

Note: An up-to-date example is available on my GitHub project page, or generate your own Kubernetes configuration with the Kubernetes generator available here on my GitHub page.

This is part 5 (optional) – configure Ingress, kube-dns and kube-dashboard

The kube-dashboard example is still in the works and is therefore missing from the configuration below; I hope to update this post once I get a chance.

Simple Kubernetes kube-dns Configuration

To create your DNS pod, just run the below using the kube-dns.yaml listed further down.

kubectl create -f kube-dns.yaml
serviceaccount "kube-dns" created
configmap "kube-dns" created
service "kube-dns" created
deployment "kube-dns" created

# Verify status 
kubectl get po --all-namespaces -o wide
NAMESPACE     NAME                              READY     STATUS    RESTARTS   AGE       IP           NODE
kube-system   kube-apiserver-coreos1            1/1       Running   0          2h        172.0.2.11   coreos1
kube-system   kube-apiserver-coreos2            1/1       Running   0          2h        172.0.2.12   coreos2
kube-system   kube-apiserver-coreos3            1/1       Running   0          2h        172.0.2.13   coreos3
kube-system   kube-controller-manager-coreos1   1/1       Running   0          2h        172.0.2.11   coreos1
kube-system   kube-controller-manager-coreos2   1/1       Running   0          2h        172.0.2.12   coreos2
kube-system   kube-controller-manager-coreos3   1/1       Running   0          2h        172.0.2.13   coreos3
kube-system   kube-dns-5d9f945775-qcqx9         3/3       Running   0          12s       10.20.1.4    worker1
kube-system   kube-proxy-coreos1                1/1       Running   0          2h        172.0.2.11   coreos1
kube-system   kube-proxy-coreos2                1/1       Running   0          2h        172.0.2.12   coreos2
kube-system   kube-proxy-coreos3                1/1       Running   0          2h        172.0.2.13   coreos3
kube-system   kube-proxy-worker1                1/1       Running   0          1h        172.0.2.51   worker1
kube-system   kube-scheduler-coreos1            1/1       Running   0          2h        172.0.2.11   coreos1
kube-system   kube-scheduler-coreos2            1/1       Running   0          2h        172.0.2.12   coreos2
kube-system   kube-scheduler-coreos3            1/1       Running   0          2h        172.0.2.13   coreos3

To scale to additional replicas, run the below.

kubectl scale deployment/kube-dns --replicas=2 -n kube-system
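
To confirm the additional replica came up, you can check the deployment and its pods (the k8s-app=kube-dns label comes from the kube-dns.yaml shown further down).

kubectl get deployment kube-dns -n kube-system
kubectl get po -n kube-system -l k8s-app=kube-dns -o wide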

To remove the DNS pod, run the below.

kubectl delete -f kube-dns.yaml
# or
kubectl delete deployment kube-dns -n kube-system
kubectl delete service kube-dns -n kube-system
kubectl delete configmap kube-dns -n kube-system
kubectl delete serviceaccount kube-dns -n kube-system

To troubleshoot, you can tail the kube-dns pod logs by running something like the below.

kubectl logs `kubectl get po --all-namespaces -o wide|grep kube-dns|grep worker1|awk '{print $2}'` -n kube-system kubedns -f
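
To confirm resolution actually works end to end, a quick sketch is to resolve a service name from a throwaway pod. The busybox:1.28 image and the dnstest pod name are just assumptions here; some busybox tags have known nslookup quirks.

# Hypothetical one-off test pod; it is removed when the command exits
kubectl run -it --rm dnstest --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default.svc.cluster.local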

I struggled with the below error for a while.

waiting for services and endpoints to be initialized from apiserver...

Until I noticed that the PodCIDR was being created incorrectly, causing bad iptables rules on the worker node. Redoing the cluster cleared the bad PodCIDR.
An example entry is below; whether it is good or bad depends on your network, so watch for the right Pod CIDR.

journalctl -u kubelet -f
...
Dec 13 20:14:37 coreos3 kubelet-wrapper[9895]: I1213 20:14:37.950577    9895 kubelet_network.go:276] Setting Pod CIDR:  -> 10.0.0.0/24
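
To see what PodCIDR each node was actually assigned, something like the below helps spot a bad entry early.

kubectl get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR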

Kube DNS configuration files

cat /etc/kubernetes/ssl/worker-kubeconfig.yaml

apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    server: https://10.3.0.1:443
    certificate-authority: /etc/kubernetes/ssl/ca.pem
users:
- name: kubelet
  user:
    client-certificate: /etc/kubernetes/ssl/worker.pem
    client-key: /etc/kubernetes/ssl/worker-key.pem
contexts:
- context:
    cluster: local
    user: kubelet
  name: kubelet-context
current-context: kubelet-context
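
As a quick sanity check of the kubeconfig itself (a sketch; it assumes the machine you run it on has kubectl installed and can reach the API server on 10.3.0.1:443), run the below.

kubectl --kubeconfig=/etc/kubernetes/ssl/worker-kubeconfig.yaml get nodes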

Note: Make sure to append the ca.pem at the end of the worker.pem. An example is below.

-----BEGIN CERTIFICATE-----
MIIFQTCCBCmgAwIBAgIJAIeb0H3YfptEMA0GCSqGSIb3DQEBCwUAMFwxCzAJBgNV
...[..] snip
-----END CERTIFICATE-----
# And below add your CA cert. (remove this line)
-----BEGIN CERTIFICATE-----
MIIDfTCCAmWgAwIBAgIJAKboSpp9s2ZLMA0GCSqGSIb3DQEBCwUAMFwxCzAJBgNV
...[..] snip
-----END CERTIFICATE-----
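
Something like the below does the append, using the paths from this setup; back up the original first.

cp -p /etc/kubernetes/ssl/worker.pem /etc/kubernetes/ssl/worker.pem.bak
cat /etc/kubernetes/ssl/ca.pem >> /etc/kubernetes/ssl/worker.pem

# Optional: verify the certificate still validates against the CA
openssl verify -CAfile /etc/kubernetes/ssl/ca.pem /etc/kubernetes/ssl/worker.pem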

cat kube-dns.yaml
Note: Replace example.com and the stub-domain IP address with your own domain and DNS server.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-dns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  stubDomains: |
    {"example.com": ["1.2.3.4"]}
  upstreamNameservers: |
    ["8.8.8.8", "8.8.4.4"]
--- 
apiVersion: v1
kind: Service
metadata: 
  labels: 
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: KubeDNS
  name: kube-dns
  namespace: kube-system
spec: 
  clusterIP: "10.3.0.10"
  ports: 
    - 
      name: dns
      port: 53
      protocol: UDP
      targetPort: 53
    - 
      name: dns-tcp
      port: 53
      protocol: TCP
      targetPort: 53
  selector: 
    k8s-app: kube-dns
  sessionAffinity: None
  type: ClusterIP
--- 
apiVersion: extensions/v1beta1
kind: Deployment
metadata: 
  labels: 
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
  name: kube-dns
  namespace: kube-system
spec: 
  replicas: 1
  selector: 
    matchLabels: 
      k8s-app: kube-dns
  strategy: 
    rollingUpdate: 
      maxSurge: 10%
      maxUnavailable: 0
    type: RollingUpdate
  template: 
    metadata: 
      annotations: 
        scheduler.alpha.kubernetes.io/critical-pod: ""
      labels: 
        k8s-app: kube-dns
    spec: 
      containers: 
        - 
          args: 
            - "--domain=cluster.local."
            - "--dns-port=10053"
            - "--kube-master-url=https://10.3.0.1:443"
            - "--config-dir=/kube-dns-config"
            - "--kubecfg-file=/etc/kubernetes/ssl/worker-kubeconfig.yaml"
            - "--v=9"
          image: "gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7"
          livenessProbe: 
            failureThreshold: 5
            httpGet: 
              path: /healthcheck/kubedns
              port: 10054
              scheme: HTTP
            initialDelaySeconds: 60
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 5
          name: kubedns
          ports: 
            - 
              containerPort: 10053
              name: dns-local
              protocol: UDP
            - 
              containerPort: 10053
              name: dns-tcp-local
              protocol: TCP
            - 
              containerPort: 10055
              name: metrics
              protocol: TCP
          readinessProbe: 
            failureThreshold: 3
            httpGet: 
              path: /readiness
              port: 8081
              scheme: HTTP
            initialDelaySeconds: 3
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 5
          resources: 
            limits: 
              memory: 170Mi
            requests: 
              cpu: 100m
              memory: 70Mi
          volumeMounts: 
            - 
              mountPath: /kube-dns-config
              name: kube-dns-config
            - 
              mountPath: /etc/ssl/certs
              name: ssl-certs-host
              readOnly: true
            - 
              mountPath: /etc/kubernetes/ssl
              name: kube-ssl
              readOnly: true
            - 
              mountPath: /etc/kubernetes/ssl/worker-kubeconfig.yaml
              name: kubeconfig
              readOnly: true
        - 
          args: 
            - "-v=2"
            - "-logtostderr"
            - "-configDir=/etc/k8s/dns/dnsmasq-nanny"
            - "-restartDnsmasq=true"
            - "--"
            - "-k"
            - "--cache-size=1000"
            - "--log-facility=-"
            - "--no-resolv"
            - "--server=/cluster.local/127.0.0.1#10053"
            - "--server=/in-addr.arpa/127.0.0.1#10053"
            - "--server=/ip6.arpa/127.0.0.1#10053"
            - "--log-queries"
          image: "gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7"
          livenessProbe: 
            failureThreshold: 5
            httpGet: 
              path: /healthcheck/dnsmasq
              port: 10054
              scheme: HTTP
            initialDelaySeconds: 60
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 30
          name: dnsmasq
          ports: 
            - 
              containerPort: 53
              name: dns
              protocol: UDP
            - 
              containerPort: 53
              name: dns-tcp
              protocol: TCP
          resources: 
            requests: 
              cpu: 150m
              memory: 20Mi
          volumeMounts: 
            - 
              mountPath: /etc/k8s/dns/dnsmasq-nanny
              name: kube-dns-config
        - 
          args: 
            - "--v=2"
            - "--logtostderr"
            - "--probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,A"
            - "--probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,A"
          image: "gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.7"
          livenessProbe: 
            failureThreshold: 5
            httpGet: 
              path: /metrics
              port: 10054
              scheme: HTTP
            initialDelaySeconds: 60
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 30
          name: sidecar
          ports: 
            - 
              containerPort: 10054
              name: metrics
              protocol: TCP
          resources: 
            requests: 
              cpu: 10m
              memory: 20Mi
      dnsPolicy: Default
      restartPolicy: Always
      serviceAccount: kube-dns
      serviceAccountName: kube-dns
      terminationGracePeriodSeconds: 30
      tolerations: 
        - 
          key: CriticalAddonsOnly
          operator: Exists
      volumes: 
        - 
          configMap: 
            defaultMode: 420
            name: kube-dns
            optional: true
          name: kube-dns-config
        - 
          hostPath: 
            path: /usr/share/ca-certificates
          name: ssl-certs-host
        - 
          hostPath: 
            path: /etc/kubernetes/ssl
          name: kube-ssl
        - 
          hostPath: 
            path: /etc/kubernetes/ssl/worker-kubeconfig.yaml
          name: kubeconfig
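
Once the pod is running, you can also query the cluster DNS service directly from a worker node. 10.3.0.10 is the clusterIP defined in the Service above; host.example.com is just a hypothetical record behind the stub domain, and this assumes dig is installed and kube-proxy is routing service IPs on that node.

dig @10.3.0.10 kubernetes.default.svc.cluster.local +short
dig @10.3.0.10 host.example.com +short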

Below is a quick and simple example of how to use Træfik as an Ingress Controller in front of Nginx. Once I get around to it, I hope to update the below configuration with a more complex/useful example.

Simple Kubernetes Træfik Ingress Configuration

You will need to create the below pod configuration files.

This creates two Nginx instances (replicas).
cat nginx-deployment.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

cat nginx-ingres.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginxingress
spec:
  rules:
  - host: coreos1.domain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nginxsvc
          servicePort: 80

cat nginx-svc.yaml

apiVersion: v1
kind: Service
metadata:
  labels:
    name: nginxsvc
  name: nginxsvc
spec:
  ports:    
    - port: 80
  selector:
    app: nginx
  type: ClusterIP

A very simple Traefik example is below; another example using Docker is available here.

cat traefik.toml

[web]
address = ":8181"
ReadOnly = true

[kubernetes]
# Kubernetes server endpoint
endpoint = "http://localhost:8080"
namespaces = ["default","kube-system"]

Download Traefik; version 1.4.2 was the latest at the time of this writing.

wget -O traefik https://github.com/containous/traefik/releases/download/v1.4.2/traefik_linux-amd64
chmod u+x traefik
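
A quick check that the binary runs (the version subcommand exists in the Traefik 1.x CLI):

./traefik version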

To create the configuration, run the below.

kubectl create -f nginx-deployment.yaml
kubectl create -f nginx-svc.yaml
kubectl create -f nginx-ingres.yaml
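
To verify everything was created, run the below.

kubectl get deployment,svc,ingress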

Then start Traefik.
./traefik -c traefik.toml &

Note: Ideally, Traefik itself should run in a pod; I hope to update this post with such an example soon.

Tip: To scale from 2 to 3 replicas, run the below.

kubectl scale --replicas=3 deployment nginx-deployment
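
Then verify the third pod came up (the app=nginx label is set in the deployment above).

kubectl get po -l app=nginx -o wide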

To quickly test, run the below.

curl http://coreos1.domain.com

# Verify the new pods
kubectl get po --all-namespaces -o wide
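
If DNS for coreos1.domain.com is not set up yet, you can still exercise the Ingress rule by passing the Host header yourself. The IP below assumes Traefik is running on coreos1 (172.0.2.11 in the earlier output).

curl -H "Host: coreos1.domain.com" http://172.0.2.11/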

To access the Traefik dashboard, browse to the below.

http://coreos1.domain.com:8181

To verify the service, run the below.

kubectl describe svc nginxsvc
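
It is also worth checking the Ingress resource itself, which shows the host rule and backend that Traefik picked up.

kubectl get ing
kubectl describe ing nginxingress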

To remove/destroy the configuration, run the below.

kubectl delete -f nginx-ingres.yaml
kubectl delete -f nginx-svc.yaml
kubectl delete -f nginx-deployment.yaml

Adding Kube-dashboard

How-to coming soon.

Check out the next part – Part 6 – Automate the Kubernetes deployment(coming soon).

You might also like – Other articles related to Docker Kubernetes / micro-service.

Like what you’re reading? Please provide feedback; any feedback is appreciated.

Joakim
Guest
Joakim

Thanks for the guide. However, Traefik is not suited as an Ingress Controller in Kubernetes, due to it not having proper TLS cert handling for the Ingress resources. See

https://github.com/containous/traefik/issues/378
https://github.com/containous/traefik/issues/2438

for more info.

Zainal Abidin:

Please provide a kube-dashboard example.