This post installs Grafana Mimir (metrics backend) on Kubernetes, using Ceph RGW (S3-compatible) for long-term object storage.
Why Mimir first (and why no Kafka here)?
Helm chart v6 can run Mimir in a newer “ingest storage” architecture that introduces Kafka as a core dependency. That’s great at scale, but it adds more moving parts and resource usage. In this homelab-first series (and because we install the rest of LGTM in separate posts), we keep Mimir in the classic architecture: Kafka disabled, ingest storage disabled, and everything sized to single replicas. The goal is: install → verify → move on without operating Kafka yet.
Why install Mimir instead of “just using kube-prometheus-stack”?
kube-prometheus-stack gives you Prometheus (storage + query), but Prometheus's local TSDB is not designed to be a long-term, highly available, object-storage-backed metrics platform. In this series, Prometheus is the collector (it scrapes your cluster) and Mimir is the long-term backend (durable storage in Ceph S3, longer retention, and a single query endpoint that Grafana can use). We will later enable remote_write from kube-prometheus-stack → Mimir, so you get the best of both: Prometheus for scraping, Mimir for storage and query at scale.
We install Mimir first (instead of the full LGTM stack) to keep the initial setup simple and verifiable.
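As a preview of that later remote_write step, a kube-prometheus-stack values snippet would look roughly like this (a minimal sketch only; the exact values and the upgrade command are covered in a follow-up post):
prometheus:
  prometheusSpec:
    remoteWrite:
      - url: http://lgtm-mimir-gateway.observability.svc:80/api/v1/push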
This post is based on
Lab context
- Kubernetes (bare metal)
- Ingress controller: Traefik
- MetalLB LB IP: 192.168.0.98
- Namespace: observability
- Ceph RGW (S3)
  - Endpoint (HAProxy + Let's Encrypt): https://ceph.maksonlee.com:443
What you’ll get
- Mimir deployed via Helm (grafana/mimir-distributed)
- Mimir stores data in Ceph S3 buckets:
  - blocks → lgtm-mimir-blocks
  - ruler → lgtm-mimir-ruler
  - alertmanager → lgtm-mimir-alertmanager
- Mimir gateway endpoints:
  - remote_write: http://lgtm-mimir-gateway.observability.svc:80/api/v1/push
  - query (Prometheus datasource): http://lgtm-mimir-gateway.observability.svc:80/prometheus
Note: Mimir does not create buckets automatically. Create buckets first.
- Create the namespace
kubectl create namespace observability --dry-run=client -o yaml | kubectl apply -f -
- Add Helm repos
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
- Prepare Ceph S3 (create buckets + credentials Secret)
Create a Ceph RGW user for LGTM object storage
You need an RGW S3 user (access key + secret key). This user will own the buckets used by Mimir (and later Loki/Tempo).
Run on a Ceph admin host.
sudo cephadm shell -- radosgw-admin user create \
--uid="lgtm" \
--display-name="LGTM Stack"
This command prints JSON that contains something like:
- keys[0].access_key
- keys[0].secret_key
Copy those values somewhere safe (you’ll use them in s3cmd and the Kubernetes Secret).
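If you need to print them again later (or want to extract them non-interactively), something like this works; it assumes jq is installed on the admin host:
sudo cephadm shell -- radosgw-admin user info --uid="lgtm" \
  | jq -r '.keys[0] | .access_key, .secret_key'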
Install s3cmd on a client machine
Use any Linux host that can reach ceph.maksonlee.com:443.
sudo apt update
sudo apt install -y s3cmd
Configure s3cmd for Ceph RGW (path-style)
s3cmd --configure
Use values like:
- Access Key / Secret Key: your Ceph RGW user keys
- S3 Endpoint: ceph.maksonlee.com:443
- Use HTTPS: Yes
- Region: us-east-1
Recommended ~/.s3cfg for RGW (path-style):
host_base = ceph.maksonlee.com:443
host_bucket = ceph.maksonlee.com:443
use_https = True
signature_v2 = False
Create buckets (Mimir)
s3cmd mb s3://lgtm-mimir-blocks
s3cmd mb s3://lgtm-mimir-ruler
s3cmd mb s3://lgtm-mimir-alertmanager
Verify:
s3cmd ls | grep lgtm-mimir
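Optionally, a quick write/list/delete round trip confirms the credentials and path-style addressing work end to end (the object name here is arbitrary):
echo "lgtm smoke test" > /tmp/lgtm-s3-test.txt
s3cmd put /tmp/lgtm-s3-test.txt s3://lgtm-mimir-blocks/smoke-test.txt
s3cmd ls s3://lgtm-mimir-blocks/
s3cmd del s3://lgtm-mimir-blocks/smoke-test.txt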
Create a reusable Kubernetes Secret for Ceph S3 credentials
We store S3 credentials in a single Secret (in observability) so we can reuse it across the LGTM posts (Mimir / Loki / Tempo).
kubectl -n observability create secret generic ceph-s3-credentials \
--from-literal=AWS_ACCESS_KEY_ID='REPLACE_ME' \
--from-literal=AWS_SECRET_ACCESS_KEY='REPLACE_ME' \
--dry-run=client -o yaml | kubectl apply -f -
Verify (shows keys, not values):
kubectl -n observability describe secret ceph-s3-credentials
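To double-check the stored access key matches what radosgw-admin printed (this prints the key in plain text to your terminal):
kubectl -n observability get secret ceph-s3-credentials \
  -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d; echo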
- Install Mimir (Ceph S3 + homelab scale, classic architecture)
Why “path-style” matters for Ceph RGW behind one hostname/cert
If you use virtual-host-style addressing (<bucket>.ceph.maksonlee.com), TLS and DNS often fail unless you have wildcard DNS plus a wildcard certificate covering the bucket subdomains.
With a single endpoint like ceph.maksonlee.com:443, path-style is the safe default:
- https://ceph.maksonlee.com/<bucket>/<object>
So we set:
- bucket_lookup_type: path
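You can sanity-check that the single Let's Encrypt certificate covers path-style URLs before touching Mimir. An unauthenticated request should fail with an S3 error (typically HTTP 403), not a TLS error:
curl -sS -o /dev/null -w '%{http_code}\n' https://ceph.maksonlee.com/lgtm-mimir-blocks/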
Create mimir-values.yaml (uses the Secret, no keys in the file)
Create/replace ~/lgtm/mimir-values.yaml:
global:
  # Inject the shared S3 creds Secret into all Mimir pods as env vars
  extraEnvFrom:
    - secretRef:
        name: ceph-s3-credentials
  # Bump this string if you rotate the Secret and want a rollout on helm upgrade
  podAnnotations:
    cephS3CredentialsVersion: "0"

minio:
  enabled: false

# Chart v6: keep classic architecture (no ingest storage / no Kafka)
kafka:
  enabled: false

# Homelab sizing: single replica components
distributor:
  replicas: 1
querier:
  replicas: 1
query_frontend:
  replicas: 1
query_scheduler:
  replicas: 1
compactor:
  replicas: 1
ruler:
  replicas: 1
ingester:
  replicas: 1
  zoneAwareReplication:
    enabled: false
store_gateway:
  replicas: 1
  zoneAwareReplication:
    enabled: false
alertmanager:
  replicas: 1
  zoneAwareReplication:
    enabled: false

mimir:
  structuredConfig:
    # Single-tenant mode (no X-Scope-OrgID needed)
    multitenancy_enabled: false
    no_auth_tenant: homelab

    # Chart v6 classic mode (required when kafka.enabled=false)
    ingest_storage:
      enabled: false

    ingester:
      push_grpc_method_enabled: true
      ring:
        replication_factor: 1
    store_gateway:
      sharding_ring:
        replication_factor: 1
    alertmanager:
      sharding_ring:
        replication_factor: 1

    # Common S3 config (Ceph RGW endpoint)
    common:
      storage:
        backend: s3
        s3:
          endpoint: ceph.maksonlee.com:443
          region: us-east-1
          # Read from env (provided by global.extraEnvFrom Secret)
          access_key_id: "${AWS_ACCESS_KEY_ID}"
          secret_access_key: "${AWS_SECRET_ACCESS_KEY}"
          # HTTPS with public cert (Let's Encrypt)
          insecure: false
          # Force path-style for a single hostname/cert
          bucket_lookup_type: path

    # Separate buckets (required)
    blocks_storage:
      s3:
        bucket_name: lgtm-mimir-blocks
    ruler_storage:
      s3:
        bucket_name: lgtm-mimir-ruler
    alertmanager_storage:
      s3:
        bucket_name: lgtm-mimir-alertmanager
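Before installing, you can optionally confirm the chart renders cleanly with these values (helm template renders locally without touching the cluster):
helm -n observability template lgtm-mimir grafana/mimir-distributed \
  --version 6.0.5 \
  -f ~/lgtm/mimir-values.yaml > /dev/null && echo "values render OK"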
Install
helm -n observability upgrade --install lgtm-mimir grafana/mimir-distributed \
--version 6.0.5 \
-f ~/lgtm/mimir-values.yaml
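The first rollout can take a few minutes while images pull. The following waits for all pods of the release to become Ready; it assumes the chart applies the standard app.kubernetes.io/instance label (adjust if your labels differ):
kubectl -n observability wait --for=condition=Ready pod \
  -l app.kubernetes.io/instance=lgtm-mimir --timeout=10m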
- Verify the deployment
Pods and Services
kubectl -n observability get pods | grep mimir
kubectl -n observability get svc | grep mimir
Helm notes (endpoints)
helm -n observability get notes lgtm-mimir
You should see:
- remote_write: http://lgtm-mimir-gateway.observability.svc:80/api/v1/push
- read/query: http://lgtm-mimir-gateway.observability.svc:80/prometheus
Confirm the rendered config contains your S3 settings
List ConfigMaps:
kubectl -n observability get configmap | grep mimir
Dump it (replace <CONFIGMAP_NAME>):
kubectl -n observability get configmap <CONFIGMAP_NAME> -o yaml | sed -n '1,260p'
You should find:
- endpoint: ceph.maksonlee.com:443
- bucket_lookup_type: path
- bucket names under blocks_storage, ruler_storage, alertmanager_storage
- ${AWS_ACCESS_KEY_ID} / ${AWS_SECRET_ACCESS_KEY} in the config (not raw keys)
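A quicker spot check than reading the whole dump (same <CONFIGMAP_NAME> placeholder as above):
kubectl -n observability get configmap <CONFIGMAP_NAME> -o yaml \
  | grep -E 'endpoint:|bucket_lookup_type:|bucket_name:'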
Quick health checks via port-forward
kubectl -n observability port-forward svc/lgtm-mimir-gateway 8080:80
In another terminal:
curl -fsS http://127.0.0.1:8080/ready; echo
curl -fsS http://127.0.0.1:8080/prometheus/api/v1/status/buildinfo | head
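With nothing ingested yet, a query API call should still return an empty successful response, which confirms the gateway → query-frontend → querier path works:
curl -fsS 'http://127.0.0.1:8080/prometheus/api/v1/label/__name__/values'; echo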