Installing, configuring Prometheus and Grafana

Below I am continuing with options for installing Prometheus and Grafana. This is Part 2; in Part 1 I described what it takes to install Helm and Tiller, as well as the SSL/TLS configuration. It has been a while since then, as I didn't have a chance to complete the Prometheus & Grafana installation steps. Let's first upgrade Helm to v2.11.0 (in my original testing, the earlier release had bugs which are now supposed to be fixed).

Upgrading Helm and Tiller

# Download the Helm v2.11.0 client and point the helm symlink at the new binary.
cd /usr/local/bin
curl -o helm-v2.11.0-linux-amd64.tar.gz https://storage.googleapis.com/kubernetes-helm/helm-v2.11.0-linux-amd64.tar.gz
tar zxf helm-v2.11.0-linux-amd64.tar.gz
mv linux-amd64/helm helm-v2.11.0
# -f replaces the symlink left over from the previous Helm version.
ln -sf helm-v2.11.0 helm
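You can confirm the symlink now picks up the new binary; the client line should report v2.11.0.
helm version --client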
Next, let's update the chart repositories.
helm repo update 
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. Happy Helming!
Let's try to list the installed releases.
helm ls --tls
Error: incompatible versions client[v2.11.0] server[v2.9.1]
The client and server versions no longer match, so let's upgrade Tiller.
helm init --upgrade
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been upgraded to the current version.
Happy Helming!
Hooray, it works again.
helm version --tls
Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}

helm ls --tls
NAME            REVISION        UPDATED                         STATUS          CHART                           APP VERSION     NAMESPACE
monitoring      1               Wed Sep  5 11:35:23 2018        DEPLOYED        prometheus-7.0.2                2.3.2           default 

Installing / configuring Prometheus and Grafana

There are many ways you can install Prometheus and Grafana. Below I am focusing on two methods.
  1. A simple, easy installation using the CoreOS Operator Helm workflow.
  2. A semi-simple, slightly more involved, manual installation of the separate stock Prometheus and Grafana charts.

Simple and easy, using the CoreOS Operator

This method is really simple (if all works OK). First we have to add the CoreOS repo.
helm repo add coreos https://s3-eu-west-1.amazonaws.com/coreos-charts/stable/

# You can verify the content, i.e. prometheus-operator, by running the below; the results should look something like the following.
helm search coreos/prometheus-operator     
NAME                          CHART VERSION    APP VERSION    DESCRIPTION                                                
coreos/prometheus-operator    0.0.29           0.20.0         Provides easy monitoring definitions for Kubernetes servi...
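If you want to review the chart's tunables before installing, you can dump its default values; a quick example (the output file name is just my choice):
# Dump the chart's default values; useful as a starting point for a custom values file.
helm inspect values coreos/prometheus-operator > prometheus-operator-values.yaml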
OK, let's move on to the installation. The below will install the CoreOS Prometheus Operator.
helm install coreos/prometheus-operator --wait --name prometheus-operator --namespace monitoring --tls
Next, let's install the stack. To keep things simple, you can just run the below, meaning no persistent volumes and no ingress configuration (good for a lab, etc.).
helm install coreos/kube-prometheus --name kube-prometheus --set global.rbacEnable=true --namespace monitoring --tls
Or you can run something like the below. Note: the below configuration uses an ingress controller as well as persistent volumes; more on creating persistent volumes below, in the manual configuration.
helm install coreos/kube-prometheus --name kube-prometheus --namespace monitoring --tls \
--set global.rbacEnable=true \
--set alertmanager.ingress.enabled=true,alertmanager.ingress.hosts[0]=alertmanager.bnh.com,alertmanager.storageSpec.volumeClaimTemplate.spec.storageClassName=rook-block,alertmanager.storageSpec.volumeClaimTemplate.spec.accessModes[0]=ReadWriteOnce,alertmanager.storageSpec.volumeClaimTemplate.spec.resources.requests.storage=2Gi \
--set grafana.adminPassword=password,grafana.ingress.enabled=true,grafana.ingress.hosts[0]=grafana.bnh.com \
--set prometheus.ingress.enabled=true,prometheus.ingress.hosts[0]=prometheus.bnh.com,prometheus.storageSpec.volumeClaimTemplate.spec.storageClassName=rook-block,prometheus.storageSpec.volumeClaimTemplate.spec.accessModes[0]=ReadWriteOnce,prometheus.storageSpec.volumeClaimTemplate.spec.resources.requests.storage=2Gi
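If that one-liner gets unwieldy, the same settings can also be kept in a values file. Below is a sketch of an equivalent file; the value paths simply mirror the --set keys above, and the hostnames, password, and rook-block storage class are my lab values.
# custom-values.yaml -- equivalent to the long --set string above (file name is my choice).
global:
  rbacEnable: true
alertmanager:
  ingress:
    enabled: true
    hosts:
    - alertmanager.bnh.com
  storageSpec:
    volumeClaimTemplate:
      spec:
        storageClassName: rook-block
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 2Gi
grafana:
  adminPassword: password
  ingress:
    enabled: true
    hosts:
    - grafana.bnh.com
prometheus:
  ingress:
    enabled: true
    hosts:
    - prometheus.bnh.com
  storageSpec:
    volumeClaimTemplate:
      spec:
        storageClassName: rook-block
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 2Gi
Then install with:
helm install coreos/kube-prometheus --name kube-prometheus --namespace monitoring --tls -f custom-values.yaml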
After a little while you should be up and running; you can check by running the below.
kubectl get pods  -n monitoring
NAME                                                  READY     STATUS    RESTARTS   AGE
alertmanager-kube-prometheus-0                        2/2       Running   0          15m
kube-prometheus-exporter-kube-state-54959bf8d-h5jd6   2/2       Running   0          15m
kube-prometheus-exporter-node-4n5sk                   1/1       Running   0          15m
kube-prometheus-exporter-node-52b4z                   1/1       Running   0          15m
kube-prometheus-exporter-node-5l9cm                   1/1       Running   0          15m
kube-prometheus-exporter-node-6b9n8                   1/1       Running   0          15m
kube-prometheus-exporter-node-6w29f                   1/1       Running   0          15m
kube-prometheus-exporter-node-qrvfb                   0/1       Pending   0          15m
kube-prometheus-grafana-f869c754-dstzv                2/2       Running   0          15m
prometheus-kube-prometheus-0                          3/3       Running   1          15m
prometheus-operator-858c485-jmq69                     1/1       Running   0          21m
Now all you have to do is forward your port. The way I like to do this is to use kubectl port-forward to expose the pod locally on port 3000, then use SSH to forward that port to my local PC to test that all works; I can then point my browser at localhost:3000.
# Note: prometheus-kube-prometheus-0 is your Prometheus pod_name; the Prometheus container listens on 9090.
kubectl --namespace monitoring port-forward prometheus-kube-prometheus-0 3000:9090
ssh -R 3000:localhost:3000 my_pc

# Access like the below in your browser.
http://localhost:3000
While at it, you can also install Heapster and the Kubernetes dashboard like the below.
# Installing heapster
helm install stable/heapster --name my-heapster --set rbac.create=true --tls

# Installing the kubernetes dashboard
helm install stable/kubernetes-dashboard --name=kubernetes-dashboard --namespace monitoring --set ingress.enabled=true,rbac.clusterAdminRole=true --tls
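To log in to the dashboard you need a bearer token. Since rbac.clusterAdminRole=true binds the dashboard's service account to cluster-admin, its token is enough; a hedged example of pulling it out (the service account name follows the release name, so adjust if yours differs):
# Print the bearer token of the dashboard's service account.
kubectl -n monitoring get secret \
  $(kubectl -n monitoring get sa kubernetes-dashboard -o jsonpath='{.secrets[0].name}') \
  -o jsonpath='{.data.token}' | base64 --decode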
Of course, all of them can be port-forwarded, or you can use an ingress controller. Below is a confirmation of the installation.
helm list --tls
NAME                    REVISION        UPDATED                         STATUS          CHART                           APP VERSION     NAMESPACE
kube-prometheus         1               Mon Oct 22 16:43:04 2018        DEPLOYED        kube-prometheus-0.0.105                         monitoring
kubernetes-dashboard    1               Mon Oct 22 16:58:10 2018        DEPLOYED        kubernetes-dashboard-0.7.3      1.10.0          monitoring
my-heapster             1               Mon Oct 22 16:55:20 2018        DEPLOYED        heapster-0.3.1                  1.5.2           default  
prometheus-operator     1               Mon Oct 22 16:37:08 2018        DEPLOYED        prometheus-operator-0.0.29      0.20.0          monitoring
Cleaning up.
helm del --purge kubernetes-dashboard --tls
helm del --purge my-heapster --tls
helm del --purge prometheus-operator --tls
helm del --purge kube-prometheus --tls

# Verify all is removed.
helm list --tls
kubectl get all -n monitoring
Below are a few screenshots of the included CoreOS dashboards: Kubernetes cluster status, Kubernetes nodes, and Kubernetes capacity planning.

Manual stock installation of stable/prometheus and stable/grafana

Going this route, we have to pre-stage persistent volumes, so let's get that out of the way first. In production you should use a NAS mount or something like Ceph, DRBD, etc., but for demonstration purposes local directories are good enough. Let's create a few directories on each node.
# master1:
mkdir -p /mnt/vol10 /mnt/vol11 /mnt/vol12 /mnt/vol13 /mnt/vol14 /mnt/vol15
# master2:
mkdir -p /mnt/vol10 /mnt/vol11 /mnt/vol12 /mnt/vol13 /mnt/vol14 /mnt/vol15
# master3:
mkdir -p /mnt/vol10 /mnt/vol11 /mnt/vol12 /mnt/vol13 /mnt/vol14 /mnt/vol15
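If you have SSH access to the nodes, a small loop saves the repetition; a sketch assuming the host names from my cluster:
# Create the six mount points on every node that may host a local volume.
for host in kmaster1 kmaster2 kmaster3 knode1 knode2 knode3; do
  ssh $host mkdir -p /mnt/vol10 /mnt/vol11 /mnt/vol12 /mnt/vol13 /mnt/vol14 /mnt/vol15
done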
Next, let's create the persistent volumes; create the below file.
cat volume_create.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: monitor-pv10
spec:
  capacity:
    storage: 2Gi
  # volumeMode field requires BlockVolume Alpha feature gate to be enabled.
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/vol10
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - kmaster1
          - kmaster2
          - kmaster3
          - knode1
          - knode2
          - knode3
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: monitor-pv11
spec:
  capacity:
    storage: 2Gi
  # volumeMode field requires BlockVolume Alpha feature gate to be enabled.
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/vol11
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - kmaster1
          - kmaster2
          - kmaster3
          - knode1
          - knode2
          - knode3
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: monitor-pv12
spec:
  capacity:
    storage: 2Gi
  # volumeMode field requires BlockVolume Alpha feature gate to be enabled.
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/vol12
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - kmaster1
          - kmaster2
          - kmaster3
          - knode1
          - knode2
          - knode3
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: monitor-pv13
spec:
  capacity:
    storage: 2Gi
  # volumeMode field requires BlockVolume Alpha feature gate to be enabled.
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/vol13
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - kmaster1
          - kmaster2
          - kmaster3
          - knode1
          - knode2
          - knode3
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: monitor-pv14
spec:
  capacity:
    storage: 2Gi
  # volumeMode field requires BlockVolume Alpha feature gate to be enabled.
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/vol14
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - kmaster1
          - kmaster2
          - kmaster3
          - knode1
          - knode2
          - knode3
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: monitor-pv15
spec:
  capacity:
    storage: 2Gi
  # volumeMode field requires BlockVolume Alpha feature gate to be enabled.
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/vol15
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - kmaster1
          - kmaster2
          - kmaster3
          - knode1
          - knode2
          - knode3
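Since the six PersistentVolume definitions differ only in their name and path, you could also generate the file instead of copy-pasting; a minimal bash sketch:
# Generate volume_create.yaml: one PV per /mnt/volNN, identical except for name and path.
for i in 10 11 12 13 14 15; do
cat <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: monitor-pv$i
spec:
  capacity:
    storage: 2Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/vol$i
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - kmaster1
          - kmaster2
          - kmaster3
          - knode1
          - knode2
          - knode3
---
EOF
done > volume_create.yaml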
Then run the below to create the PVs.
kubectl apply -f volume_create.yaml
To verify, run the below. Note: these volumes can only be used once; after one use you have to delete and re-create the volume.
kubectl get pv --all-namespaces
NAME           CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM     STORAGECLASS    REASON    AGE
monitor-pv10   2Gi        RWO            Delete           Available             local-storage             1h
monitor-pv11   2Gi        RWO            Delete           Available             local-storage             1h
monitor-pv12   2Gi        RWO            Delete           Available             local-storage             1h
monitor-pv13   2Gi        RWO            Delete           Available             local-storage             1h
monitor-pv14   2Gi        RWO            Delete           Available             local-storage             1h
monitor-pv15   2Gi        RWO            Delete           Available             local-storage             1h
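Once a claim releases one of these local PVs it will show up as Released (or Failed) rather than Available again; to reuse it, delete and re-create the PV object, for example:
# Re-create a used PV. Note: any data left on the backing directory is not wiped automatically.
kubectl delete pv monitor-pv10
kubectl apply -f volume_create.yaml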
We are now ready to move to the next step.

Installing Prometheus

Now, let's first install stable/prometheus; you do so by running the below.
helm install stable/prometheus --tls --name monitoring \
--set global.rbacEnable=true \
--set server.persistentVolume.storageClass=local-storage \
--set server.persistentVolume.size=1Gi \
--set alertmanager.persistentVolume.storageClass=local-storage \
--set alertmanager.persistentVolume.size=1Gi
Verify that all services are up before continuing to Grafana. Note: since no --namespace was passed to helm install, the release lands in the default namespace.
kubectl get pods
Note: You can access Prometheus by forwarding port 9090, something like the below.
export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace default port-forward $POD_NAME 9090
ssh -R 9090:localhost:9090 my_pc

# The Prometheus server can be accessed via port 80 on the following DNS name from within your cluster:
monitoring-prometheus-server.default.svc.cluster.local
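You can also sanity-check that in-cluster DNS name from a throwaway pod; a sketch (the busybox image and the /-/healthy endpoint of Prometheus 2.x are my assumptions):
# Hit the Prometheus health endpoint through the cluster service from a temporary pod.
kubectl run dns-test --rm -it --image=busybox --restart=Never -- \
  wget -qO- http://monitoring-prometheus-server.default.svc.cluster.local/-/healthy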
To see how the Prometheus server allocated the PVs, run the below.
kubectl get pv --all-namespaces
NAME           CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                                         STORAGECLASS    REASON    AGE
monitor-pv10   2Gi        RWO            Delete           Bound     default/monitoring-prometheus-server         local-storage             24m
monitor-pv11   2Gi        RWO            Delete           Bound     default/monitoring-prometheus-alertmanager   local-storage             24m

Installing Grafana

Let's fetch stable/grafana; we do this so we can modify the installation options while still installing stable/grafana from the repo.
helm fetch stable/grafana --untar
Now let's inspect grafana/values.yaml and make some modifications. First, let's modify the persistence: section; I used something like the below. Note: without this change, all Grafana settings and dashboards will be lost once the pod restarts.
# look for the persistence: section
persistence:
  accessModes:
  - ReadWriteOnce
  enabled: true
  size: 2Gi
  storageClassName: local-storage
  # annotations: {}
  # subPath: ""
  # existingClaim:
To set a Grafana admin user and password, make sure to set something like the below.
adminUser: admin
adminPassword: Your_Pa$worD
To get the in-cluster Prometheus server URL, run something like the below (this is needed in the next section).
echo http://$(kubectl get service --namespace default -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}").default.svc.cluster.local

# In my case the output looks like the below.
http://monitoring-prometheus-server.default.svc.cluster.local
Let's also modify the datasources: section; I used something like the below. Note: without the below change, no datasources will be configured in Grafana once it comes up.
datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
    - name: Prometheus
      type: prometheus
      url: http://monitoring-prometheus-server.default.svc.cluster.local
      access: proxy
      isDefault: true
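Later, once Grafana is running (see the install step below), you can confirm the datasource was actually provisioned through Grafana's HTTP API; a sketch assuming the admin credentials above and a local port-forward on 3000:
# List provisioned datasources through the Grafana API.
curl -s -u 'admin:Your_Pa$worD' http://localhost:3000/api/datasources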
To add the Grafana Kubernetes dashboard plugin, also modify the plugins: section; I used something like the below.
# adding the kubernetes dashboard plugin
plugins:
   #- digrich-bubblechart-panel
   #- grafana-clock-panel
   - grafana-kubernetes-app
We are now ready to install the Grafana chart; you do so by running the below.
helm install -f grafana/values.yaml --debug stable/grafana --tls --name grafana-charts
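Once the release is up, Grafana can be reached the same way as Prometheus above, by port-forwarding its pod; a sketch (the app=grafana label selector may vary by chart version):
# Forward the Grafana pod's web port (3000) to the local machine, then browse to localhost:3000.
export POD_NAME=$(kubectl get pods --namespace default -l "app=grafana" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace default port-forward $POD_NAME 3000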
Cleaning up: removing Prometheus and Grafana.
helm del --purge monitoring --tls
helm del --purge grafana-charts --tls
Helpful links:
  - List of Grafana properties
  - Install Prometheus Operator
  - Install Prometheus and Grafana
  - Kubernetes monitoring with Prometheus

Like what you're reading? Please provide feedback; any feedback is appreciated.