Install Grafana Tempo on Kubernetes with Ceph RGW S3 Storage

This post installs Grafana Tempo (traces backend) on Kubernetes, using Ceph RGW (S3-compatible) for long-term trace storage.

If you already installed Mimir (metrics) and Loki (logs), Tempo is the last backend you need before you wire everything into Grafana and start doing real logs ↔ traces ↔ metrics correlation.


Deployment scale note (Homelab-sized Tempo)

This is a homelab / first bring-up installation:

  • Single Tempo pod (lgtm-tempo-0, from a one-replica StatefulSet)
  • No HA / no horizontal scale
  • S3-backed (Ceph RGW) so you still get durable trace block storage

If you want full production scale, you typically move to a distributed Tempo deployment (multiple components, multiple replicas, HA rings, caching, scaling knobs, etc.). This post intentionally keeps it simple: install → verify → move on.
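
For reference, the distributed variant ships as a separate Helm chart; once the grafana repo from step 2 is added, you can list it with:

helm search repo grafana/tempo-distributed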



Lab context

  • Kubernetes (bare metal)
  • Ingress controller: Traefik
  • Namespace: observability
  • Ceph RGW (S3)
  • Endpoint (HAProxy + Let’s Encrypt): https://ceph.maksonlee.com:443

What you’ll get

  • Tempo deployed via Helm (grafana/tempo)
  • Tempo stores trace blocks in Ceph S3:
    • traces → lgtm-tempo-traces
  • Homelab-friendly install:
    • single Tempo pod (lgtm-tempo-0)
  • S3 keys stored in a Kubernetes Secret (no hardcoding keys in the values file)
  • Path-style S3 requests enabled (best default for Ceph RGW behind one hostname/cert)

Tempo endpoints

After install, you will have an lgtm-tempo Service in the observability namespace.

Common endpoints:

  • Tempo HTTP API (Grafana datasource)
    http://lgtm-tempo.observability.svc:3200
  • OTLP ingest (recommended; example below)
    OTLP gRPC: lgtm-tempo.observability.svc:4317
    OTLP HTTP: http://lgtm-tempo.observability.svc:4318

Optional ingest/query protocols (also exposed by the Service):

  • Jaeger Thrift UDP: 6831/6832
  • Jaeger gRPC: 14250
  • Jaeger Thrift HTTP: 14268
  • Zipkin: 9411
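
To point an application at these ingest endpoints, most OpenTelemetry SDKs honor the standard OTEL_* environment variables. A minimal sketch (the service name is a placeholder; for gRPC, switch the endpoint to port 4317 and the protocol to grpc):

export OTEL_SERVICE_NAME=my-app
export OTEL_EXPORTER_OTLP_ENDPOINT=http://lgtm-tempo.observability.svc:4318
export OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf

The 3200 endpoint is the URL you will later paste into Grafana's Tempo data source.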

Note: Tempo does not create buckets automatically. Create the bucket first.


  1. Create the namespace (skip if it already exists)
kubectl create namespace observability --dry-run=client -o yaml | kubectl apply -f -

  2. Add Helm repos
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update

  3. Prepare Ceph S3 (create Tempo bucket)

Tempo needs one bucket for trace blocks.

Because you already configured s3cmd in the Mimir post, you only need to create Tempo's bucket here.

s3cmd mb s3://lgtm-tempo-traces
s3cmd ls | grep lgtm-tempo
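
In case you need to recreate that s3cmd setup, the relevant part of ~/.s3cfg for a path-style Ceph RGW endpoint looks roughly like this (keys are placeholders; host_bucket matching host_base is what keeps s3cmd on path-style requests):

# ~/.s3cfg (excerpt)
access_key = REPLACE_ME
secret_key = REPLACE_ME
host_base = ceph.maksonlee.com:443
host_bucket = ceph.maksonlee.com:443
use_https = True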

  4. Store S3 credentials in Kubernetes (skip if the Secret already exists)

Tempo can read S3 keys from environment variables. We store them in a Kubernetes Secret and inject them into the Tempo pod.

If you already created this Secret for Mimir/Loki, reuse it.

Create (or apply) the Secret:

kubectl -n observability create secret generic ceph-s3-credentials \
  --from-literal=AWS_ACCESS_KEY_ID='REPLACE_ME' \
  --from-literal=AWS_SECRET_ACCESS_KEY='REPLACE_ME' \
  --dry-run=client -o yaml | kubectl apply -f -

Verify (shows keys, not values):

kubectl -n observability describe secret ceph-s3-credentials
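
If you need to double-check the stored values rather than just the key names, you can decode them:

kubectl -n observability get secret ceph-s3-credentials \
  -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d; echo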

  5. Install Tempo (Ceph S3 + Secret-based creds + path-style)

Why “path-style” matters for Ceph RGW

If you use virtual-host style requests like:

https://<bucket>.ceph.maksonlee.com/object

you typically need wildcard DNS and a wildcard TLS certificate covering bucket subdomains.

With a single endpoint like ceph.maksonlee.com:443, path-style is the safe default:

https://ceph.maksonlee.com/<bucket>/object

So we explicitly set:

forcepathstyle: true

Why we set config.expand-env=true

We want to reference:

  • ${AWS_ACCESS_KEY_ID}
  • ${AWS_SECRET_ACCESS_KEY}

inside the Tempo S3 config, and have Tempo expand them at runtime.

Tempo only expands env vars if -config.expand-env=true is passed, so we set that via Helm.

Create tempo-values.yaml

# Tempo -> Ceph RGW (S3-compatible)
# Homelab-sized monolithic Tempo (single replica)

replicas: 1

tempo:
  # Inject S3 credentials into the pod environment
  extraEnvFrom:
    - secretRef:
        name: ceph-s3-credentials

  # IMPORTANT:
  # grafana/tempo chart expects extraArgs as a MAP (object), not a LIST.
  # If you set it as a list, Helm can generate args like "-0=...":
  #   flag provided but not defined: -0
  extraArgs:
    config.expand-env: "true"

  storage:
    trace:
      backend: s3
      s3:
        bucket: lgtm-tempo-traces

        endpoint: ceph.maksonlee.com:443
        region: us-east-1

        # Pull keys from env vars (injected from Secret)
        access_key: ${AWS_ACCESS_KEY_ID}
        secret_key: ${AWS_SECRET_ACCESS_KEY}

        # HTTPS with public cert (Let's Encrypt)
        insecure: false

        # Force path-style for Ceph RGW behind a single hostname/cert
        forcepathstyle: true

Install

helm -n observability upgrade --install lgtm-tempo grafana/tempo -f tempo-values.yaml
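
Optionally, wait for the rollout and confirm the expand-env flag actually landed in the container args (the StatefulSet name matches this release; if args comes back empty, inspect the pod spec instead):

kubectl -n observability rollout status statefulset/lgtm-tempo
kubectl -n observability get statefulset lgtm-tempo \
  -o jsonpath='{.spec.template.spec.containers[0].args}'; echo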

  6. Verify the deployment

Pods

kubectl -n observability get pods | grep tempo

Expected:

  • lgtm-tempo-0 1/1 Running

Logs

kubectl -n observability logs -f lgtm-tempo-0

You want to see:

  • Tempo starting successfully
  • OTLP receivers listening (4317/4318)
  • “Tempo started”

If you see this warning, it is fine:

  • metrics-generator … disabled

That only means the metrics-generator feature is off (not required to store/query traces).

Service ports

kubectl -n observability get svc | grep tempo
kubectl -n observability get svc lgtm-tempo -o wide

You should see:

  • 3200/TCP (Tempo HTTP API)
  • 4317/TCP (OTLP gRPC)
  • 4318/TCP (OTLP HTTP)
  • plus optional Jaeger/Zipkin ports
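
You can also confirm the Service answers from inside the cluster with a throwaway curl pod (curlimages/curl is one convenient image; any image with curl works):

kubectl -n observability run tmp-curl --rm -it --restart=Never \
  --image=curlimages/curl -- \
  curl -fsS http://lgtm-tempo.observability.svc:3200/ready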

  7. Quick health checks (Tempo API)

Port-forward the Tempo HTTP API:

kubectl -n observability port-forward svc/lgtm-tempo 3200:3200

Then:

curl -fsS http://127.0.0.1:3200/ready; echo
curl -fsS http://127.0.0.1:3200/api/status/buildinfo | head
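
For an end-to-end smoke test, you can push a single span over OTLP HTTP and read it back by trace ID. This is a sketch, not part of the install: it assumes openssl and GNU date on your workstation, and it port-forwards 4318 alongside 3200.

kubectl -n observability port-forward svc/lgtm-tempo 3200:3200 4318:4318

Then, in another shell:

# Random but well-formed IDs (trace ID: 32 hex chars, span ID: 16 hex chars)
TRACE_ID=$(openssl rand -hex 16)
SPAN_ID=$(openssl rand -hex 8)
START=$(date +%s%N)
END=$((START + 1000000))   # 1 ms later

curl -fsS -X POST http://127.0.0.1:4318/v1/traces \
  -H 'Content-Type: application/json' \
  -d "{\"resourceSpans\":[{
        \"resource\":{\"attributes\":[{\"key\":\"service.name\",\"value\":{\"stringValue\":\"smoke-test\"}}]},
        \"scopeSpans\":[{\"spans\":[{
          \"traceId\":\"${TRACE_ID}\",\"spanId\":\"${SPAN_ID}\",
          \"name\":\"smoke-span\",\"kind\":1,
          \"startTimeUnixNano\":\"${START}\",\"endTimeUnixNano\":\"${END}\"}]}]}]}"

# Read it back (may take a few seconds to become queryable)
curl -fsS "http://127.0.0.1:3200/api/traces/${TRACE_ID}" | head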
