Grafana server
Grafana is your monitoring dashboard.
Prometheus has its own web UI for handling its functionality and notifications, but it is not as powerful or as popular as Grafana for visualizing the metrics gathered from its sources. To give your monitoring stack a Grafana-based metrics dashboard, follow this part to deploy it as a Kustomize project based on the official installation documentation for this tool.
Kustomize project folders for Grafana
Generate the folder structure of the Kustomize project for your Grafana server setup:
```sh
$ mkdir -p $HOME/k8sprjs/monitoring/components/server-grafana/resources
```

Grafana server persistent storage claim
You must make the 2 GiB LVM persistent volume, created in the first part of this monitoring stack deployment procedure, available to your Grafana server. As usual, configure a persistent volume claim for connecting Grafana to it:
Create a file named `server-grafana.persistentvolumeclaim.yaml` under `resources/`:

```sh
$ touch $HOME/k8sprjs/monitoring/components/server-grafana/resources/server-grafana.persistentvolumeclaim.yaml
```

Declare the `PersistentVolumeClaim` for Grafana in `server-grafana.persistentvolumeclaim.yaml`:

```yaml
# Grafana server claim of persistent storage
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: server-grafana
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: local-path
  volumeName: monitoring-ssd-grafana-data
  resources:
    requests:
      storage: 1.9G
```
Grafana server StatefulSet
Grafana needs to store some data, so you should deploy this observability tool with a StatefulSet resource:
Produce a `server-grafana.statefulset.yaml` file under the `resources` path:

```sh
$ touch $HOME/k8sprjs/monitoring/components/server-grafana/resources/server-grafana.statefulset.yaml
```

Declare the `StatefulSet` for your Grafana server in `server-grafana.statefulset.yaml`:

```yaml
# Grafana server StatefulSet for a regular pod
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: server-grafana
spec:
  replicas: 1
  serviceName: server-grafana
  template:
    spec:
      automountServiceAccountToken: false
      securityContext:
        fsGroup: 472
        supplementalGroups:
        - 0
      containers:
      - name: server
        image: grafana/grafana-dev:12.4.0-21524955964
        imagePullPolicy: IfNotPresent
        ports:
        - name: server
          containerPort: 3000
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          successThreshold: 1
          initialDelaySeconds: 10
          periodSeconds: 30
          timeoutSeconds: 2
          httpGet:
            path: /robots.txt
            port: 3000
            scheme: HTTP
        livenessProbe:
          failureThreshold: 3
          successThreshold: 1
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 1
          tcpSocket:
            port: 3000
        resources:
          requests:
            cpu: 250m
            memory: 128Mi
        volumeMounts:
        - mountPath: /var/lib/grafana
          name: grafana-storage
      volumes:
      - name: grafana-storage
        persistentVolumeClaim:
          claimName: server-grafana
```

This `StatefulSet` for your Grafana server has a number of particularities in its `spec.template.spec` block, some inherited from the YAML specified in Grafana's Kubernetes deployment documentation:

- `automountServiceAccountToken`: set to `false` here because Grafana will get the information it needs from Prometheus, not directly from the Kubernetes API.

- `securityContext`: the same `securityContext` specified in the official Grafana documentation, applied at the pod level of this Grafana `StatefulSet`:
    - `fsGroup`: ensures that the Grafana filesystem used in the pod is owned by the group identified by the GID `472`.
    - `supplementalGroups`: a list of GIDs applied to the first process run in each container of the pod. Notice that, in this case, it is the `root` group (GID `0`).

- `server` container: the only container running in this pod.
    - The `image` is the Alpine-based version of Grafana Open Source (there is also an Enterprise edition of Grafana). The particularity here is that, as the Grafana documentation recommends, it is better to use an image from the `grafana/grafana-dev` repository rather than the main one, pinned to a specific tagged version such as `12.4.0-21524955964`. This avoids unexpected issues when a new commit entering the main Grafana branch could disrupt the Grafana instance running in your setup. In other words, Grafana recommends keeping total control of the version run in your Kubernetes cluster. Where possible (when version tags are available), you have already seen this criterion applied in previous deployments of this guide.
    - In `ports`, just one port, named `server`, is configured to open the standard Grafana port `3000`.
    - The `readinessProbe` and `livenessProbe` sections are mostly like the ones you have already seen in other deployments. On one hand, the `readinessProbe` checks the `/robots.txt` path served by Grafana on port `3000` through an `HTTP` connection `scheme`. On the other, the `livenessProbe` is configured to get a response from the `tcpSocket` also listening on port `3000`.

- `volumeMounts` section: only mounts the storage for Grafana's data.
    - `/var/lib/grafana`: the default path where Grafana stores its own data.
Grafana server Service
Declare the Service object for your Grafana server like this:
Create a file named `server-grafana.service.yaml` under `resources`:

```sh
$ touch $HOME/k8sprjs/monitoring/components/server-grafana/resources/server-grafana.service.yaml
```

Declare the `Service` for your Grafana server in `server-grafana.service.yaml`:

```yaml
# Grafana server headless service
apiVersion: v1
kind: Service
metadata:
  name: server-grafana
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '3000'
spec:
  type: ClusterIP
  clusterIP: None
  ports:
  - name: server
    port: 3000
    targetPort: server
    protocol: TCP
```

The main thing to notice in this particular headless service is that Grafana also exposes metrics that Prometheus can scrape. Therefore, the required `prometheus.io` annotations are set in the `metadata` block.
Service’s absolute internal FQDN
As a component of the monitoring stack, this headless service is going to be placed under the monitoring namespace. This means that its absolute Fully Qualified Domain Name (FQDN) will be:
`server-grafana.monitoring.svc.homelab.cluster`

Grafana server Kustomize project
With all the previous components declared, put them together in a Kustomize subproject for your Grafana setup:
Create a `kustomization.yaml` file in the `server-grafana` folder:

```sh
$ touch $HOME/k8sprjs/monitoring/components/server-grafana/kustomization.yaml
```

Declare the `Kustomization` object for your Grafana server setup in `kustomization.yaml`:

```yaml
# Grafana server setup
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

labels:
- pairs:
    app: server-grafana
  includeSelectors: true
  includeTemplates: true

resources:
- resources/server-grafana.persistentvolumeclaim.yaml
- resources/server-grafana.service.yaml
- resources/server-grafana.statefulset.yaml

replicas:
- name: server-grafana
  count: 1

images:
- name: grafana/grafana-dev
  newTag: 12.4.0-21524955964
```

The only particularity to highlight from this `kustomization.yaml` is the fact that there is neither a `configMapGenerator` nor a `secretGenerator` block in it. This implies that Grafana is deployed with its default configuration.
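Should you later want to customize Grafana instead of relying on its defaults, the usual Kustomize approach would be to add a `configMapGenerator` block producing a ConfigMap from a custom `grafana.ini` file. The following is only a sketch of what such a block could look like; the generator name and the `configs/grafana.ini` file are hypothetical and not part of this deployment:

```yaml
# Hypothetical addition to kustomization.yaml: generate a ConfigMap
# from a custom grafana.ini (file name assumed, not part of this setup).
configMapGenerator:
- name: server-grafana-config
  files:
  - configs/grafana.ini
```

The generated ConfigMap would also have to be mounted in the `StatefulSet`'s pod template (Grafana reads its configuration from `/etc/grafana/grafana.ini` by default), something this setup deliberately avoids to keep the default configuration.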
Validating the Kustomize YAML output
Like in previous guides, you should validate this Kustomize project’s complete YAML output:
Execute `kubectl kustomize`, redirecting its output to a `server-grafana.k.output.yaml` file (or to a text editor):

```sh
$ kubectl kustomize $HOME/k8sprjs/monitoring/components/server-grafana > server-grafana.k.output.yaml
```

The resulting `server-grafana.k.output.yaml` should look like the YAML below:

```yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/port: "3000"
    prometheus.io/scrape: "true"
  labels:
    app: server-grafana
  name: server-grafana
spec:
  clusterIP: None
  ports:
  - name: server
    port: 3000
    protocol: TCP
    targetPort: server
  selector:
    app: server-grafana
  type: ClusterIP
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: server-grafana
  name: server-grafana
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1.9G
  storageClassName: local-path
  volumeName: monitoring-ssd-grafana-data
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: server-grafana
  name: server-grafana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: server-grafana
  serviceName: server-grafana
  template:
    metadata:
      labels:
        app: server-grafana
    spec:
      automountServiceAccountToken: false
      containers:
      - image: grafana/grafana-dev:12.4.0-21524955964
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          tcpSocket:
            port: 3000
          timeoutSeconds: 1
        name: server
        ports:
        - containerPort: 3000
          name: server
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /robots.txt
            port: 3000
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 30
          successThreshold: 1
          timeoutSeconds: 2
        resources:
          requests:
            cpu: 250m
            memory: 128Mi
        volumeMounts:
        - mountPath: /var/lib/grafana
          name: grafana-storage
      securityContext:
        fsGroup: 472
        supplementalGroups:
        - 0
      volumes:
      - name: grafana-storage
        persistentVolumeClaim:
          claimName: server-grafana
```

Beyond checking the correctness of all the values, names and paths, there is nothing special to say about this particular output.
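As an extra sanity check, you could count the resource kinds present in the rendered output. This is just a sketch: it assumes you run it where the `server-grafana.k.output.yaml` file generated above sits, and each kind should appear exactly once:

```shell
#!/bin/sh
# Print how many times each expected resource kind appears in a rendered
# Kustomize output file (each should appear exactly once in this project).
check_kinds() {
    for kind in Service PersistentVolumeClaim StatefulSet; do
        printf '%s: %s\n' "${kind}" "$(grep -c "^kind: ${kind}\$" "$1")"
    done
}
# Usage (after generating the file):
# check_kinds server-grafana.k.output.yaml
```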
Do not deploy this Grafana server project on its own
This Grafana server cannot be deployed on its own because it is missing several elements, such as the persistent volume required by the PVC or the Traefik ingress that enables access to it. All the missing elements will be declared and put together with the other components in the final part of this G035 chapter.
Relevant system paths
Folders in kubectl client system
- `$HOME/k8sprjs/monitoring`
- `$HOME/k8sprjs/monitoring/components`
- `$HOME/k8sprjs/monitoring/components/server-grafana`
- `$HOME/k8sprjs/monitoring/components/server-grafana/resources`
Files in kubectl client system
- `$HOME/k8sprjs/monitoring/components/server-grafana/kustomization.yaml`
- `$HOME/k8sprjs/monitoring/components/server-grafana/resources/server-grafana.persistentvolumeclaim.yaml`
- `$HOME/k8sprjs/monitoring/components/server-grafana/resources/server-grafana.service.yaml`
- `$HOME/k8sprjs/monitoring/components/server-grafana/resources/server-grafana.statefulset.yaml`