This post installs Grafana (UI) on your Kubernetes cluster and connects it to your existing backends:
- Mimir (metrics)
- Loki (logs)
- Tempo (traces)
Note: Grafana can connect to Mimir even if Mimir has no data yet. If you haven’t configured Prometheus remote_write → Mimir, queries like up will return No data. Use vector(1) to confirm the datasource works even when storage is empty.
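You can confirm that behavior against the Mimir gateway before Grafana is even installed. The following is a minimal sketch, assuming the lgtm-mimir-gateway service from the previous posts and a throwaway curlimages/curl pod (add an X-Scope-OrgID header if you enabled multi-tenancy in Mimir):
# Query vector(1) through the Mimir gateway from inside the cluster
kubectl -n observability run mimir-check --rm -i --restart=Never --image=curlimages/curl -- \
  curl -sG http://lgtm-mimir-gateway.observability.svc:80/prometheus/api/v1/query \
  --data-urlencode 'query=vector(1)'
# Expected: {"status":"success", ... "result":[{"metric":{},"value":[<timestamp>,"1"]}]}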
This post builds on the LGTM backends (Mimir, Loki, Tempo) set up in the previous posts.
Lab context
- Kubernetes (bare metal)
- Ingress controller: Traefik
- Namespace: observability
- StorageClass (example): csi-rbd-sc (Ceph RBD CSI)
Existing services (from previous posts):
- Mimir query endpoint (Prometheus API): http://lgtm-mimir-gateway.observability.svc:80/prometheus
- Mimir remote_write endpoint: http://lgtm-mimir-gateway.observability.svc:80/api/v1/push
- Loki gateway: http://lgtm-loki-gateway.observability.svc:80
- Tempo API: http://lgtm-tempo.observability.svc:3200
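Before pointing Grafana at these URLs, you can spot-check Loki and Tempo from inside the cluster as well. A minimal sketch, assuming the service names above and a throwaway curlimages/curl pod:
# Loki via its gateway: list known log labels (an empty list is fine if nothing ships logs yet)
kubectl -n observability run loki-check --rm -i --restart=Never --image=curlimages/curl -- \
  curl -s http://lgtm-loki-gateway.observability.svc:80/loki/api/v1/labels

# Tempo readiness on its HTTP port
kubectl -n observability run tempo-check --rm -i --restart=Never --image=curlimages/curl -- \
  curl -s http://lgtm-tempo.observability.svc:3200/ready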
What you’ll get
- Grafana deployed via Helm (grafana/grafana)
- Persistent Grafana storage (PVC) so it survives pod restarts
- Optional Traefik Ingress (grafana.maksonlee.com)
- Datasources auto-provisioned:
  - Mimir (Prometheus)
  - Loki
  - Tempo
- Curl-based verification commands (no browser required)
Prerequisites (already done in previous posts)
You should already have:
- Namespace observability
- Mimir/Loki/Tempo installed and Running
Quick checks:
kubectl get ns observability
kubectl -n observability get svc lgtm-mimir-gateway
kubectl -n observability get svc lgtm-loki-gateway
kubectl -n observability get svc lgtm-tempo
- Helm repo (skip if you already added it)
You likely already added the Grafana repo when installing Mimir/Loki/Tempo.
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
# Optional: list chart versions
helm search repo grafana/grafana --versions | head
- Create the Grafana admin Secret (recommended)
Don’t hardcode admin credentials in values.yaml. Create a Secret and reference it via admin.existingSecret.
Create an idempotent Secret:
kubectl -n observability create secret generic lgtm-grafana-admin \
--from-literal=admin-user='admin' \
--from-literal=admin-password='REPLACE_ME_STRONG' \
--dry-run=client -o yaml | kubectl apply -f -
Verify:
kubectl -n observability get secret lgtm-grafana-admin
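If you need the password again later (for the login page or for curl against the Grafana API), you can read it back from the Secret:
# Decode the stored admin password (uses the Secret and key names created above)
kubectl -n observability get secret lgtm-grafana-admin \
  -o jsonpath='{.data.admin-password}' | base64 -d; echo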
- Create grafana-values.yaml
Create ~/lgtm/grafana-values.yaml (or wherever you store LGTM values files).
replicas: 1

service:
  type: ClusterIP
  port: 80
  targetPort: 3000

admin:
  existingSecret: lgtm-grafana-admin
  userKey: admin-user
  passwordKey: admin-password

# Persist Grafana data (sqlite DB, plugins, etc.)
persistence:
  enabled: true
  type: pvc
  storageClassName: csi-rbd-sc   # change if your StorageClass differs
  accessModes:
    - ReadWriteOnce
  size: 10Gi

# Optional: expose via Traefik Ingress
ingress:
  enabled: true
  ingressClassName: traefik
  hosts:
    - grafana.maksonlee.com
  path: /
  pathType: Prefix
  # If you have TLS configured in Traefik, enable this:
  # tls:
  #   - secretName: grafana-tls
  #     hosts:
  #       - grafana.maksonlee.com

# If Traefik is not terminating TLS for grafana.maksonlee.com yet,
# keep root_url as http://... to avoid redirect loops.
grafana.ini:
  server:
    domain: grafana.maksonlee.com
    root_url: https://grafana.maksonlee.com

# Provision datasources (Mimir/Loki/Tempo)
# NOTE: this must be a MAP, not a string block (no "|-")
datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
      - name: Mimir
        uid: mimir
        type: prometheus
        access: proxy
        url: http://lgtm-mimir-gateway.observability.svc:80/prometheus
        isDefault: true
        editable: true
      - name: Loki
        uid: loki
        type: loki
        access: proxy
        url: http://lgtm-loki-gateway.observability.svc:80
        editable: true
      - name: Tempo
        uid: tempo
        type: tempo
        access: proxy
        url: http://lgtm-tempo.observability.svc:3200
        editable: true
- Install Grafana
helm -n observability upgrade --install lgtm-grafana grafana/grafana \
-f ~/lgtm/grafana-values.yaml
Optional: pin a chart version for reproducibility:
helm search repo grafana/grafana --versions | head
# then add: --version <X.Y.Z>
Optional: render-check the chart when debugging values:
helm template -n observability lgtm-grafana grafana/grafana \
-f ~/lgtm/grafana-values.yaml > /tmp/grafana-rendered.yaml
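Once the release is installed, you can wait for the rollout to complete before verifying. This assumes the chart's default naming for the lgtm-grafana release:
# Block until the Grafana Deployment is rolled out (3-minute timeout)
kubectl -n observability rollout status deploy/lgtm-grafana --timeout=180s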
- Verify the deployment
Pods / Service / Ingress
kubectl -n observability get pods | grep grafana
kubectl -n observability get svc | grep grafana
kubectl -n observability get ingress | grep grafana
You should see:
- a Grafana pod Running
- an lgtm-grafana Service (ClusterIP)
- an lgtm-grafana Ingress pointing to your MetalLB IP (example: 192.168.0.98)
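To double-check that the datasources were provisioned as a map (not a string block), you can inspect the rendered config. This assumes the chart stores grafana.ini and the provisioning file in a ConfigMap named after the release:
# Show the provisioned datasources block from the Grafana ConfigMap
kubectl -n observability get configmap lgtm-grafana -o yaml | grep -A 8 'datasources.yaml'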
Port-forward + curl
Start the port-forward:
kubectl -n observability port-forward svc/lgtm-grafana 3000:80
You should see:
Forwarding from 127.0.0.1:3000 -> 3000
Forwarding from [::1]:3000 -> 3000
In another terminal, verify Grafana responds (login page):
curl -I http://127.0.0.1:3000/login
Expected (example):
HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8
Optional: verify you get HTML content:
curl -fsS http://127.0.0.1:3000/login | head
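With the port-forward still running, you can also confirm Grafana is healthy and that the three datasources were registered, using the Grafana HTTP API and the admin credentials from the Secret:
# Grafana health endpoint (no auth required)
curl -fsS http://127.0.0.1:3000/api/health

# Read the admin password back from the Secret created earlier
GRAFANA_PASS=$(kubectl -n observability get secret lgtm-grafana-admin \
  -o jsonpath='{.data.admin-password}' | base64 -d)

# List the provisioned datasources (expect Mimir, Loki, and Tempo entries)
curl -fsS -u "admin:${GRAFANA_PASS}" http://127.0.0.1:3000/api/datasources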