DevTech101

In the previous post I went through how to configure etcd and Flannel; below I continue with the kubelet manifest configuration. I divided the Kubernetes configuration into the parts outlined below (still a work in progress). Note: An up-to-date example is available on my GitHub project page, or you can generate your own Kubernetes configuration with the Kubernetes generator, also available on my GitHub page.

This is part 3 – Configure Kubernetes manifests for controller, api, scheduler and proxy.

Copy the Kubernetes certificates For the most part I will re-use the SSL certificates generated in part 1, but for the kubelet service I am going to use the same certificates under a different location and name. Note: I created /etc/kubernetes/ssl, since this directory is the default used/mounted/specified in /usr/lib/coreos/kubelet-wrapper. Ultimately I could have added /var/lib/etcd/ssl to the mount options and re-used the certificates as-is, but I decided to leave the defaults and copy the certificates under a new name of worker*. For the manifests themselves I used /var/lib/etcd/ssl; more on that below. Create the required Kubernetes directories.
mkdir -p /etc/kubernetes/ssl
mkdir -p /etc/kubernetes/manifests
cp /var/lib/etcd/ssl/* /etc/kubernetes/ssl
mv /etc/kubernetes/ssl/etcd-node.pem /etc/kubernetes/ssl/worker.pem
mv /etc/kubernetes/ssl/etcd-node-key.pem /etc/kubernetes/ssl/worker-key.pem
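Before continuing, it is worth a quick sanity check that the renamed worker certificate and key still form a matching pair. The sketch below demonstrates the check against a throwaway self-signed pair so it can run anywhere; on your node, run the same two openssl commands against /etc/kubernetes/ssl/worker.pem and worker-key.pem instead.

```shell
# Verify a certificate and key belong together by comparing their moduli.
# (Swap in /etc/kubernetes/ssl/worker.pem and worker-key.pem on a real node.)
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
  -keyout "$dir/worker-key.pem" -out "$dir/worker.pem" -days 1 2>/dev/null
cert_mod=$(openssl x509 -noout -modulus -in "$dir/worker.pem")
key_mod=$(openssl rsa -noout -modulus -in "$dir/worker-key.pem")
if [ "$cert_mod" = "$key_mod" ]; then
  echo "cert/key match"
else
  echo "MISMATCH"
fi
rm -rf "$dir"
```

If the two moduli differ, the kubelet TLS handshake will fail later, so it is cheaper to catch a copy/rename mistake here.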

Create the Kubernetes manifests (controller, api, proxy, scheduler)

Next, we are going to create the Kubernetes manifests. The way Kubernetes works, once you start the kubelet service with the proper parameters (more on the parameters in part 4), the kubelet process will automatically run all the manifests/services; in our case we are using hyperkube for the controller, API server, proxy and scheduler. There are many ways to grab a sample manifest; below are the manifests that worked for me. If you would like to generate sample manifests yourself, you can do so by following the steps below. First, download a copy of matchbox with git.
git clone https://github.com/coreos/matchbox.git
Next, run the below and you will get a copy of all the manifests.
cd matchbox
./scripts/dev/get-bootkube
./bin/bootkube render --asset-dir=examples/assets --api-servers=https://node1.example.com:443 --api-server-alt-names=DNS=node1.example.com --etcd-servers=https://node1.example.com:2379
Writing asset: examples/assets/manifests/kube-scheduler.yaml
Writing asset: examples/assets/manifests/kube-scheduler-disruption.yaml
Writing asset: examples/assets/manifests/kube-controller-manager-disruption.yaml
Writing asset: examples/assets/manifests/kube-dns-deployment.yaml
Writing asset: examples/assets/manifests/pod-checkpointer.yaml
Writing asset: examples/assets/manifests/kube-system-rbac-role-binding.yaml
Writing asset: examples/assets/manifests/kube-controller-manager.yaml
[..] snip

# Note: You can also run get-coreos to fetch the CoreOS image, but it is not needed for the manifests.
./scripts/get-coreos stable 1576.1.0 ./examples/assets
Below are the manifests I used. Let's create the kube-controller-manager. cat /etc/kubernetes/manifests/kube-controller-manager.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-controller-manager
    image: quay.io/coreos/hyperkube:v1.8.0_coreos.0
    command:
    - ./hyperkube
    - controller-manager
    - --master=http://127.0.0.1:8080
    - --leader-elect=true
    - --allocate-node-cidrs=true
    - --cluster-cidr=10.0.0.0/21
    - --service-account-private-key-file=/var/lib/etcd/ssl/etcd-node-key.pem
    - --root-ca-file=/var/lib/etcd/ssl/ca.pem
    livenessProbe:
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10252  # Note: Using default port. Update if --port option is set differently.
      initialDelaySeconds: 15
      timeoutSeconds: 5
    volumeMounts:
    - mountPath: /var/lib/etcd/ssl
      name: secrets
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: ssl-host
      readOnly: true
    - mountPath: /var/log/kube-controller-manager.log
      name: logfile
      readOnly: false
  volumes:
  - hostPath:
      path: /var/lib/etcd/ssl
    name: secrets
  - hostPath:
      path: /usr/share/ca-certificates
    name: ssl-host
  - hostPath:
      path: /var/log/kube-controller-manager.log
    name: logfile
Let's create the kube-apiserver. cat /etc/kubernetes/manifests/kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-apiserver
    image: quay.io/coreos/hyperkube:v1.8.0_coreos.0
    command:
    - /hyperkube
    - apiserver
    - --bind-address=0.0.0.0
    - --etcd-servers=https://10.0.2.11:2379,https://10.0.2.12:2379,https://10.0.2.13:2379
    - --allow-privileged=true
    - --service-cluster-ip-range=10.3.0.0/21
    - --secure-port=443
    - --insecure-port=8080
    - --advertise-address=10.0.2.12
    - --storage-backend=etcd3
    - --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota
    - --etcd-certfile=/var/lib/etcd/ssl/etcd-node.pem
    - --etcd-keyfile=/var/lib/etcd/ssl/etcd-node-key.pem
    - --tls-cert-file=/var/lib/etcd/ssl/etcd-node.pem
    - --tls-private-key-file=/var/lib/etcd/ssl/etcd-node-key.pem
    - --kubelet-client-certificate=/var/lib/etcd/ssl/etcd-node.pem
    - --kubelet-client-key=/var/lib/etcd/ssl/etcd-node-key.pem
    - --service-account-key-file=/var/lib/etcd/ssl/etcd-node-key.pem
    - --etcd-cafile=/var/lib/etcd/ssl/ca.pem
    - --tls-ca-file=/var/lib/etcd/ssl/ca.pem
    - --client-ca-file=/var/lib/etcd/ssl/ca.pem
    - --runtime-config=extensions/v1beta1/networkpolicies=true,extensions/v1beta1=true
    - --anonymous-auth=false
    - --audit-log-path=/var/log/kubernetes/kube-apiserver-audit.log
    - --audit-log-maxage=30
    - --audit-log-maxbackup=3
    - --audit-log-maxsize=100
    - --v=3
    livenessProbe:
      httpGet:
        host: 127.0.0.1
        port: 8080
        path: /healthz
      initialDelaySeconds: 15
      timeoutSeconds: 15
    ports:
    - containerPort: 443
      hostPort: 443
      name: https
    - containerPort: 8080
      hostPort: 8080
      name: local
    volumeMounts:
    - mountPath: /var/lib/etcd/ssl
      name: ssl-certs-kubernetes
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: ssl-certs-host
      readOnly: true
    - mountPath: /var/log/kubernetes
      name: var-log-kubernetes
      readOnly: false
  volumes:
  - hostPath:
      path: /var/lib/etcd/ssl
    name: ssl-certs-kubernetes
  - hostPath:
      path: /usr/share/ca-certificates
    name: ssl-certs-host
  - hostPath:
      path: /var/log/kubernetes
    name: var-log-kubernetes
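Once the kubelet picks up this manifest and starts the pod (part 4 covers starting the kubelet), you can sanity-check the API server the same way its liveness probe does, and again over the secure port using the certificates referenced in the manifest. The address 10.0.2.12 matches the --advertise-address above; adjust for your node.

```shell
# Local health check, same endpoint the liveness probe uses
curl http://127.0.0.1:8080/healthz
# expected response: ok

# Secure port, presenting the client certificate pair from the manifest
curl --cacert /var/lib/etcd/ssl/ca.pem \
     --cert /var/lib/etcd/ssl/etcd-node.pem \
     --key /var/lib/etcd/ssl/etcd-node-key.pem \
     https://10.0.2.12:443/version
```

The second call should return a JSON version block; a TLS error here usually points at the certificate paths or the CA.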
Let's create the kube-scheduler. cat /etc/kubernetes/manifests/kube-scheduler.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-scheduler
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-scheduler
    image: quay.io/coreos/hyperkube:v1.8.0_coreos.0
    command:
    - ./hyperkube
    - scheduler
    - --master=http://127.0.0.1:8080
    - --address=0.0.0.0
    - --leader-elect=true
    - --v=3
    livenessProbe:
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10251  # Note: Using default port. Update if --port option is set differently.
      initialDelaySeconds: 15
      timeoutSeconds: 15
    nodeSelector:
      node-role.kubernetes.io/master: ""
    securityContext:
      runAsNonRoot: true
      runAsUser: 65534
    volumeMounts:
    - mountPath: /var/log/kube-scheduler.log
      name: logfile
  volumes:
  - hostPath:
      path: /var/log/kube-scheduler.log
    name: logfile
Let's create the kube-proxy. cat /etc/kubernetes/manifests/kube-proxy.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-proxy
  namespace: kube-system
  labels:
    tier: node
    k8s-app: kube-proxy
spec:
  hostNetwork: true
  containers:
  - name: kube-proxy
    image: quay.io/coreos/hyperkube:v1.8.0_coreos.0
    command:
    - ./hyperkube
    - proxy
    - --master=http://127.0.0.1:8080
    - --logtostderr=true
    - --proxy-mode=iptables
    - --hostname-override=10.0.2.12
    - --cluster-cidr=10.0.0.0/21
    - --v=3
    env:
    - name: NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ssl-certs-host
      readOnly: true
    - name: etc-kubernetes
      mountPath: /var/lib/etcd/ssl
      readOnly: true
  tolerations:
  - key: "node-role.kubernetes.io/master"
    operator: "Exists"
    effect: "NoSchedule"
  volumes:
  - hostPath:
      path: /usr/share/ca-certificates
    name: ssl-certs-host
  - name: etc-kubernetes
    hostPath:
      path: /etc/kubernetes
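With all four manifests in place, once the kubelet starts the static pods you can poll the controller-manager and scheduler the same way their liveness probes do. The ports below are the ones declared in the probes above (10252 for the controller-manager, 10251 for the scheduler).

```shell
# Poll the healthz endpoints declared in the manifests above
for port in 10252 10251; do
  echo -n "port ${port}: "
  curl -s "http://127.0.0.1:${port}/healthz" || echo -n "not responding"
  echo
done
```

Each healthy component answers with "ok"; a connection refused usually means the kubelet has not (yet) started that pod.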
Create the kubelet.env and worker-kubeconfig.yaml files. cat /etc/kubernetes/kubelet.env
KUBELET_IMAGE_URL=quay.io/coreos/hyperkube
KUBELET_VERSION=v1.8.0_coreos.0
cat /etc/kubernetes/worker-kubeconfig.yaml
apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    server: https://10.0.2.12:443
    certificate-authority: /etc/kubernetes/ssl/ca.pem
users:
- name: kubelet
  user:
    client-certificate: /etc/kubernetes/ssl/worker.pem
    client-key: /etc/kubernetes/ssl/worker-key.pem
contexts:
- context:
    cluster: local
    user: kubelet
  name: kubelet-context
current-context: kubelet-context
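The same kubeconfig can be used to verify connectivity from the command line. A sketch, assuming kubectl is installed on the node and the file's context is set as the current context (the server address matches the file above):

```shell
# Talk to the API server using the worker credentials
kubectl --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml get componentstatuses
kubectl --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml get nodes
```

If these fail with a certificate error, re-check the worker.pem/worker-key.pem copy from the beginning of this post.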
Ignition config If you are using an Ignition config, just add the below to your Ignition config file. Note: the YAML shown is in Container Linux Config format, which the config transpiler (ct) converts into Ignition JSON.
storage:
  files:
    - path: /etc/kubernetes/kubelet.env
      filesystem: root
      mode: 0644
      contents:
        inline: |
          KUBELET_IMAGE_URL=quay.io/coreos/hyperkube
          KUBELET_VERSION=v1.8.0_coreos.0
    - path: /etc/kubernetes/worker-kubeconfig.yaml
      filesystem: root
      mode: 0644
      contents:
        inline: |
          apiVersion: v1
          kind: Config
          clusters:
          - name: local
            cluster:
              server: https://10.0.2.11:443
              certificate-authority: /etc/kubernetes/ssl/ca.pem
          users:
          - name: kubelet
            user:
              client-certificate: /etc/kubernetes/ssl/worker.pem
              client-key: /etc/kubernetes/ssl/worker-key.pem
          contexts:
          - context:
              cluster: local
              user: kubelet
            name: kubelet-context
          current-context: kubelet-context
    - path: /etc/kubernetes/manifests/kube-apiserver.yaml
      filesystem: root
      mode: 0644
      contents:
        inline: |
          apiVersion: v1
          kind: Pod
          metadata:
            name: kube-apiserver
            namespace: kube-system
          spec:
            hostNetwork: true
            containers:
            - name: kube-apiserver
              image: quay.io/coreos/hyperkube:v1.8.0_coreos.0
              command:
              - /hyperkube
              - apiserver
              - --bind-address=0.0.0.0
              - --etcd-servers=https://10.0.2.11:2379,https://10.0.2.12:2379,https://10.0.2.13:2379
              - --allow-privileged=true
              - --service-cluster-ip-range=10.3.0.0/21
              - --secure-port=443
              - --insecure-port=8080
              - --advertise-address=10.0.2.12
              - --storage-backend=etcd3
              - --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota
              - --etcd-certfile=/var/lib/etcd/ssl/etcd-node.pem
              - --etcd-keyfile=/var/lib/etcd/ssl/etcd-node-key.pem
              - --tls-cert-file=/var/lib/etcd/ssl/etcd-node.pem
              - --tls-private-key-file=/var/lib/etcd/ssl/etcd-node-key.pem
              - --kubelet-client-certificate=/var/lib/etcd/ssl/etcd-node.pem
              - --kubelet-client-key=/var/lib/etcd/ssl/etcd-node-key.pem
              - --service-account-key-file=/var/lib/etcd/ssl/etcd-node-key.pem
              - --etcd-cafile=/var/lib/etcd/ssl/ca.pem
              - --tls-ca-file=/var/lib/etcd/ssl/ca.pem
              - --client-ca-file=/var/lib/etcd/ssl/ca.pem
              - --runtime-config=extensions/v1beta1/networkpolicies=true,extensions/v1beta1=true
              - --anonymous-auth=false
              - --audit-log-path=/var/log/kubernetes/kube-apiserver-audit.log
              - --audit-log-maxage=30
              - --audit-log-maxbackup=3
              - --audit-log-maxsize=100
              - --v=3
              livenessProbe:
                httpGet:
                  host: 127.0.0.1
                  port: 8080
                  path: /healthz
                initialDelaySeconds: 15
                timeoutSeconds: 15
              ports:
              - containerPort: 443
                hostPort: 443
                name: https
              - containerPort: 8080
                hostPort: 8080
                name: local
              volumeMounts:
              - mountPath: /var/lib/etcd/ssl
                name: ssl-certs-kubernetes
                readOnly: true
              - mountPath: /etc/ssl/certs
                name: ssl-certs-host
                readOnly: true
              - mountPath: /var/log/kubernetes
                name: var-log-kubernetes
                readOnly: false
            volumes:
            - hostPath:
                path: /var/lib/etcd/ssl
              name: ssl-certs-kubernetes
            - hostPath:
                path: /usr/share/ca-certificates
              name: ssl-certs-host
            - hostPath:
                path: /var/log/kubernetes
              name: var-log-kubernetes
    - path: /etc/kubernetes/manifests/kube-controller-manager.yaml
      filesystem: root
      mode: 0644
      contents:
        inline: |
          apiVersion: v1
          kind: Pod
          metadata:
            name: kube-controller-manager
            namespace: kube-system
          spec:
            hostNetwork: true
            containers:
            - name: kube-controller-manager
              image: quay.io/coreos/hyperkube:v1.8.0_coreos.0
              command:
              - ./hyperkube
              - controller-manager
              - --master=http://127.0.0.1:8080
              - --leader-elect=true
              - --allocate-node-cidrs=true
              - --cluster-cidr=10.0.0.0/21
              - --service-account-private-key-file=/var/lib/etcd/ssl/etcd-node-key.pem
              - --root-ca-file=/var/lib/etcd/ssl/ca.pem
              livenessProbe:
                httpGet:
                  host: 127.0.0.1
                  path: /healthz
                  port: 10252  # Note: Using default port. Update if --port option is set differently.
                initialDelaySeconds: 15
                timeoutSeconds: 5
              volumeMounts:
              - mountPath: /var/lib/etcd/ssl
                name: secrets
                readOnly: true
              - mountPath: /etc/ssl/certs
                name: ssl-host
                readOnly: true
              - mountPath: /var/log/kube-controller-manager.log
                name: logfile
                readOnly: false
            volumes:
            - hostPath:
                path: /var/lib/etcd/ssl
              name: secrets
            - hostPath:
                path: /usr/share/ca-certificates
              name: ssl-host
            - hostPath:
                path: /var/log/kube-controller-manager.log
              name: logfile
    - path: /etc/kubernetes/manifests/kube-proxy.yaml
      filesystem: root
      mode: 0644
      contents:
        inline: |
          apiVersion: v1
          kind: Pod
          metadata:
            name: kube-proxy
            namespace: kube-system
            labels:
              tier: node
              k8s-app: kube-proxy
          spec:
            hostNetwork: true
            containers:
            - name: kube-proxy
              image: quay.io/coreos/hyperkube:v1.8.0_coreos.0
              command:
              - ./hyperkube
              - proxy
              - --master=http://127.0.0.1:8080
              - --logtostderr=true
              - --proxy-mode=iptables
              - --hostname-override=10.0.2.12
              - --cluster-cidr=10.0.0.0/21
              - --v=3
              env:
              - name: NODE_NAME
                valueFrom:
                  fieldRef:
                    fieldPath: spec.nodeName
              securityContext:
                privileged: true
              volumeMounts:
              - mountPath: /etc/ssl/certs
                name: ssl-certs-host
                readOnly: true
              - name: etc-kubernetes
                mountPath: /var/lib/etcd/ssl
                readOnly: true
            tolerations:
            - key: "node-role.kubernetes.io/master"
              operator: "Exists"
              effect: "NoSchedule"
            volumes:
            - hostPath:
                path: /usr/share/ca-certificates
              name: ssl-certs-host
            - name: etc-kubernetes
              hostPath:
                path: /etc/kubernetes
    - path: /etc/kubernetes/manifests/kube-scheduler.yaml
      filesystem: root
      mode: 0644
      contents:
        inline: |
          apiVersion: v1
          kind: Pod
          metadata:
            name: kube-scheduler
            namespace: kube-system
          spec:
            hostNetwork: true
            containers:
            - name: kube-scheduler
              image: quay.io/coreos/hyperkube:v1.8.0_coreos.0
              command:
              - ./hyperkube
              - scheduler
              - --master=http://127.0.0.1:8080
              - --address=0.0.0.0
              - --leader-elect=true
              - --v=3
              livenessProbe:
                httpGet:
                  host: 127.0.0.1
                  path: /healthz
                  port: 10251  # Note: Using default port. Update if --port option is set differently.
                initialDelaySeconds: 15
                timeoutSeconds: 15
              nodeSelector:
                node-role.kubernetes.io/master: ""
              securityContext:
                runAsNonRoot: true
                runAsUser: 65534
              volumeMounts:
              - mountPath: /var/log/kube-scheduler.log
                name: logfile
            volumes:
            - hostPath:
                path: /var/log/kube-scheduler.log
              name: logfile
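As noted above, the YAML is in Container Linux Config format; before a node can consume it at boot it has to be transpiled into Ignition JSON, which also validates the config. A sketch using the CoreOS Config Transpiler (ct); the file names are placeholders and the flags are from the ct release I used, so they may differ between versions.

```shell
# Transpile (and implicitly validate) the Container Linux Config
ct --in-file container-linux-config.yaml --out-file ignition.json

# The resulting Ignition JSON can be sanity-checked separately
jq . ignition.json > /dev/null && echo "valid JSON"
```

A transpile error here almost always means a YAML indentation problem inside one of the inline manifests above.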
You are now ready to move on to the next step, finalizing the kubelet configuration (rkt, Flannel + CNI), in part 4. You might also like other articles related to Docker and Kubernetes / micro-services.
Like what you’re reading? Please provide feedback; any feedback is appreciated.
DanielG, December 12, 2017:

This is the tutorial I was looking for! I like that you provide the ignition file components.

For this particular phase of the tutorial, it would be nice if you included a “this is how you test this” section like you did at the end of the previous sections.

Also, I thought I would point out that the ignition code on this page doesn’t include the certificates that you are reusing from etcd.
