Monitoring is the practice of keeping an eye on a service, and it includes things like log checking and metric monitoring. For this we have tools like Prometheus and Grafana.
Prometheus and Grafana
Prometheus is a tool that monitors the metrics of a system and also provides some visuals to show stats. For this to work, the metrics need to be exposed, which is done by exporters that are available for many services. Prometheus stores what it scrapes as a time-series database.
Grafana provides a great variety of visuals: charts, graphs and more. It is mainly used for its interactive visualizations, which is why the combination of Grafana and Prometheus is preferred over most others. Grafana relies purely on the data it gets from Prometheus.
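To make the exporter idea concrete, here is a minimal sketch of a prometheus.yml scrape configuration; the second target address is only a made-up example of a machine running node_exporter:
global:
  scrape_interval: 15s              # how often Prometheus scrapes each target
scrape_configs:
  - job_name: 'prometheus'          # Prometheus scraping its own metrics endpoint
    static_configs:
      - targets: ['localhost:9090']
  - job_name: 'node'                # a node_exporter exposing host metrics (example address)
    static_configs:
      - targets: ['192.168.99.101:9100']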
We are going to integrate Prometheus and Grafana and perform the following tasks:
- Deploy them as pods on top of Kubernetes using resources such as Deployments, ReplicaSets, Pods and Services.
- Make their data persistent.
- Expose both of them to the outside world.
Dockerfile
We are going to build custom images for Prometheus and Grafana.
Prometheus :-
FROM centos:8
RUN yum install wget -y
RUN wget https://github.com/prometheus/prometheus/releases/download/v2.19.0/prometheus-2.19.0.linux-amd64.tar.gz
RUN tar -xvf prometheus-2.19.0.linux-amd64.tar.gz
WORKDIR prometheus-2.19.0.linux-amd64/
EXPOSE 9090
CMD ./prometheus
Grafana :-
FROM centos:8
RUN yum install wget -y
RUN wget https://dl.grafana.com/oss/release/grafana-7.0.3.linux-amd64.tar.gz
RUN tar -zxvf grafana-7.0.3.linux-amd64.tar.gz
WORKDIR grafana-7.0.3/bin/
EXPOSE 3000
CMD ./grafana-server
To build the image :-
docker build -t <image-name>:<tag> .
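For example, assuming each Dockerfile sits in its own directory and you push to your own Docker Hub account (the deployments below pull the images mykgod/prometheus and mykgod/grafana), the build and push steps would look roughly like this:
# directory names are assumptions; use wherever you saved each Dockerfile
docker build -t mykgod/prometheus ./prometheus/
docker push mykgod/prometheus
docker build -t mykgod/grafana ./grafana/
docker push mykgod/grafana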
We need to deploy these services on Kubernetes and make their data permanent. So, first we need to create a PVC (PersistentVolumeClaim) for each of them.
PVC
PVC stands for PersistentVolumeClaim. As Kubernetes users we request storage through a PVC, which behind the scenes uses PV (PersistentVolume) resources to claim the storage for us. A PVC can be backed by either statically or dynamically provisioned storage.
For more on PVCs: https://kubernetes.io/docs/concepts/storage/persistent-volumes/
Here we mount the directory that contains the data which must remain persistent, so that even if a pod later goes down due to some fault, the data survives.
PVC for Grafana :-
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: graf-pvc
  labels:
    env: production
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
PVC for Prometheus :-
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prom-pvc
  labels:
    env: production
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
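Assuming the two manifests above are saved as prom-pvc.yml and graf-pvc.yml (file names are my own choice), they can be applied and verified like this:
kubectl apply -f prom-pvc.yml
kubectl apply -f graf-pvc.yml
kubectl get pvc    # both claims should eventually show STATUS "Bound"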
Deploying on k8s
Prometheus :-
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prom-deploy
  labels:
    env: production
spec:
  replicas: 1
  selector:
    matchLabels:
      env: production
  template:
    metadata:
      name: prom-pod
      labels:
        env: production
    spec:
      containers:
        - name: prom-con
          image: mykgod/prometheus
          volumeMounts:
            - name: prom-persistent-storage
              mountPath: "/prometheus-2.19.0.linux-amd64/data/"
          ports:
            - containerPort: 9090
              name: prom-pod
      volumes:
        - name: prom-persistent-storage
          persistentVolumeClaim:
            claimName: prom-pvc
Grafana :-
apiVersion: apps/v1
kind: Deployment
metadata:
  name: graf-deploy
  labels:
    env: production
spec:
  replicas: 1
  selector:
    matchLabels:
      env: production
  template:
    metadata:
      name: graf-pod
      labels:
        env: production
    spec:
      containers:
        - name: graf-con
          image: mykgod/grafana
          volumeMounts:
            - name: graf-persistent-storage
              mountPath: /grafana-7.0.3/data
          ports:
            - containerPort: 3000
              name: graf-pod
      volumes:
        - name: graf-persistent-storage
          persistentVolumeClaim:
            claimName: graf-pvc
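Assuming the deployment manifests are saved as prom-deploy.yml and graf-deploy.yml (again, file names are my own), they can be applied and checked like this:
kubectl apply -f prom-deploy.yml
kubectl apply -f graf-deploy.yml
kubectl get deployments
kubectl get pods -l env=production    # both pods should reach the Running state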
Service
We have exposed our deployments through the following service.
apiVersion: v1
kind: Service
metadata:
  name: service-monitor-1
spec:
  selector:
    env: production
  type: NodePort
  ports:
    - port: 9090
      protocol: TCP
      name: port-prom
    - port: 3000
      protocol: TCP
      name: port-graf
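Note that both deployments carry the same env: production label, so this single service selects both pods and exposes both ports. Assuming the manifest is saved as service.yml, apply it and note the NodePorts Kubernetes assigns:
kubectl apply -f service.yml
kubectl get svc service-monitor-1    # PORT(S) shows 9090:<nodePort>/TCP,3000:<nodePort>/TCP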
Testing
Now it is time to test our services…
kubectl get all
Check the NodePorts assigned to the service by the above command, then open a web browser at <node-IP>:<nodePort> to see the web UIs of Grafana and Prometheus.
Now, let's make some changes to the prometheus.yml file lying inside our Prometheus pod.
kubectl get pods
kubectl exec -it <prometheus_pod_name> -- bash


Here I edited prometheus.yml and added another system as a scrape target for metrics monitoring.
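A sketch of the kind of entry that could be appended to the scrape_configs section of prometheus.yml; the job name and target address are placeholders for whatever system you want to monitor:
  - job_name: 'new-system'                   # hypothetical name for the extra target
    static_configs:
      - targets: ['192.168.99.102:9100']     # placeholder IP:port of the new exporter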
kill -HUP 1
# this sends SIGHUP to process 1 (prometheus), telling it to reload its configuration
Now we can see our metrics through Prometheus and Grafana.
We have our precious monitoring data in our Prometheus and Grafana servers… so let's delete a pod: the Prometheus one, the Grafana one, or both.
kubectl get pods
kubectl delete pods <prom_pod_name>
kubectl delete pods <graf_pod_name>
Refresh the browser pages and we can see that the changes are still there and everything is working fine. We can also check that the pods have been rebuilt.
Conclusion
We successfully deployed Grafana and Prometheus on top of a Kubernetes cluster, made their data permanent using PVCs, and exposed both services to the outside world.



