This post adds OpenTelemetry metrics to your Backstage backend and makes them scrapable by Prometheus Operator (kube-prometheus-stack) via a ServiceMonitor. It follows Backstage’s official OpenTelemetry setup approach: an instrumentation file preloaded with --require.
What you’ll get

- Backstage backend exposes Prometheus metrics at http://<pod-ip>:9464/metrics
- Prometheus Operator discovers it through a ServiceMonitor
- Add dependencies
Install the OpenTelemetry packages the instrumentation file needs:
yarn --cwd packages/backend add \
@opentelemetry/auto-instrumentations-node \
@opentelemetry/exporter-prometheus \
@opentelemetry/exporter-trace-otlp-http \
@opentelemetry/sdk-node
- Create packages/backend/src/instrumentation.js
Create this file exactly.
const { isMainThread } = require('node:worker_threads');

// Only start the SDK in the main thread; worker threads would otherwise
// try to bind the same Prometheus port and fail with EADDRINUSE.
if (isMainThread) {
  const { NodeSDK } = require('@opentelemetry/sdk-node');
  const {
    getNodeAutoInstrumentations,
  } = require('@opentelemetry/auto-instrumentations-node');
  const { PrometheusExporter } = require('@opentelemetry/exporter-prometheus');

  // Pull-based exporter: serves metrics on http://0.0.0.0:9464/metrics
  const prometheusExporter = new PrometheusExporter({
    host: '0.0.0.0',
    port: 9464,
    endpoint: '/metrics',
  });

  const sdk = new NodeSDK({
    metricReader: prometheusExporter,
    instrumentations: [getNodeAutoInstrumentations()],
  });

  sdk.start();
}
- Local dev: ensure your backend start command preloads the instrumentation
This step is for local development (running the backend directly via Backstage CLI). Your backend start command must include --require ./src/instrumentation.js so the instrumentation loads before the backend code runs.
packages/backend/package.json
Add/update the start script inside your existing "scripts" object (don’t replace the whole object).
"start": "backstage-cli package start --require ./src/instrumentation.js"
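For context, the scripts object ends up looking roughly like this (the entries other than start are placeholders for whatever your package.json already contains):

```json
{
  "scripts": {
    "start": "backstage-cli package start --require ./src/instrumentation.js",
    "build": "backstage-cli package build",
    "lint": "backstage-cli package lint"
  }
}
```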
- Ensure your Docker images include instrumentation.js
- .dockerignore
Your repo ignores packages/*/src, so you must whitelist this one file.
diff --git a/.dockerignore b/.dockerignore
index 05edb62..0ebde81 100644
--- a/.dockerignore
+++ b/.dockerignore
@@ -3,6 +3,7 @@
.yarn/install-state.gz
node_modules
packages/*/src
+!packages/backend/src/instrumentation.js
packages/*/node_modules
plugins
*.local.yaml
- packages/backend/Dockerfile
Copy the file into the image and preload it in CMD.
diff --git a/packages/backend/Dockerfile b/packages/backend/Dockerfile
index ee80245..ee3191b 100644
--- a/packages/backend/Dockerfile
+++ b/packages/backend/Dockerfile
@@ -67,4 +67,10 @@ COPY --chown=node:node examples ./examples
COPY --chown=node:node packages/backend/dist/bundle.tar.gz app-config*.yaml ./
RUN tar xzf bundle.tar.gz && rm bundle.tar.gz
-CMD ["node", "packages/backend", "--config", "app-config.yaml", "--config", "app-config.production.yaml"]
+# Copy OpenTelemetry instrumentation entrypoint (Prometheus exporter, etc.)
+# NOTE: Your .dockerignore must whitelist this file:
+# !packages/backend/src/instrumentation.js
+COPY --chown=node:node packages/backend/src/instrumentation.js ./instrumentation.js
+
+# Load instrumentation before the backend starts (Backstage OpenTelemetry tutorial pattern)
+CMD ["node", "--require", "./instrumentation.js", "packages/backend", "--config", "app-config.yaml", "--config", "app-config.production.yaml"]
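If you want to convince yourself what --require does before rebuilding the image, a throwaway two-file demo shows the load order (the file names here are made up for the demo; none of them are part of the repo):

```shell
# Throwaway demo of node --require load order.
tmp=$(mktemp -d)
echo 'console.log("instrumentation loaded first")' > "$tmp/pre.js"
echo 'console.log("app entrypoint runs second")'  > "$tmp/main.js"

# --require preloads pre.js before main.js runs, which is exactly how the
# CMD above preloads ./instrumentation.js before packages/backend.
node --require "$tmp/pre.js" "$tmp/main.js"
# → instrumentation loaded first
# → app entrypoint runs second

rm -rf "$tmp"
```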
- Dockerfile.multi
Dockerfile.multi is used for your multi-stage Docker build (copy what you need into the final image, then run a slim runtime stage).
diff --git a/Dockerfile.multi b/Dockerfile.multi
index 966f51f..76901b9 100644
--- a/Dockerfile.multi
+++ b/Dockerfile.multi
@@ -8,6 +8,11 @@ COPY .yarnrc.yml ./
COPY packages packages
+# IMPORTANT:
+# We delete package sources below, but we need this file later in the final image.
+# Copy it out now before the "find ... rm -rf" runs.
+COPY packages/backend/src/instrumentation.js /app/instrumentation.js
+
# Comment this out if you don't have any internal plugins
COPY plugins plugins
@@ -104,10 +109,14 @@ COPY --chown=node:node app-config*.yaml ./
# This will include the examples, if you don't need these simply remove this line
COPY --chown=node:node examples ./examples
+# Copy OpenTelemetry instrumentation entrypoint into WORKDIR (/app)
+COPY --from=packages --chown=node:node /app/instrumentation.js ./instrumentation.js
+
# This switches many Node.js dependencies to production mode.
ENV NODE_ENV=production
# This disables node snapshot for Node 20 to work with the Scaffolder
ENV NODE_OPTIONS="--no-node-snapshot"
-CMD ["node", "packages/backend", "--config", "app-config.yaml", "--config", "app-config.production.yaml"]
+# Load instrumentation before the backend starts
+CMD ["node", "--require", "./instrumentation.js", "packages/backend", "--config", "app-config.yaml", "--config", "app-config.production.yaml"]
- Test the image locally (before touching Kubernetes)
Build and run with both ports:
docker build -f packages/backend/Dockerfile -t homelab-backstage:otel .
docker run --rm -p 7007:7007 -p 9464:9464 homelab-backstage:otel
Verify:
curl -I http://127.0.0.1:7007/
curl -s http://127.0.0.1:9464/metrics | head
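If you are unsure whether what curl returned is really Prometheus exposition format, a crude shell check can validate the shape. Both the helper and the sample payload below are illustrative, not part of the repo, and real metric names depend on which auto-instrumentations fire:

```shell
# Crude validator: comment lines start with '#', sample lines look like
# name{labels} value. A smoke test, not a full parser.
is_prom_text() {
  awk '
    /^#/ { next }                                              # HELP/TYPE comments
    /^[A-Za-z_:][A-Za-z0-9_:]*([{][^}]*[}])? +[-+0-9.eE]+/ { ok = 1; next }
    NF   { bad = 1 }
    END  { exit (ok && !bad) ? 0 : 1 }
  '
}

# Hypothetical sample of what the exporter serves.
printf '%s\n' \
  '# HELP target_info Target metadata' \
  '# TYPE target_info gauge' \
  'target_info{service_name="backstage"} 1' | is_prom_text \
  && echo "looks like Prometheus exposition format"
# → looks like Prometheus exposition format
```

Against the real endpoint: `curl -s http://127.0.0.1:9464/metrics | is_prom_text && echo ok`.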
- Kubernetes: expose the metrics port + add ServiceMonitor
Update kubernetes/homelab-backstage.yaml
This adds:
- container port 9464
- service port metrics (9464)
- --require ./instrumentation.js in args (because this YAML overrides the Dockerfile CMD)
diff --git a/kubernetes/homelab-backstage.yaml b/kubernetes/homelab-backstage.yaml
index 9875145..b128a6c 100644
--- a/kubernetes/homelab-backstage.yaml
+++ b/kubernetes/homelab-backstage.yaml
@@ -37,6 +37,9 @@ spec:
ports:
- name: http
containerPort: 7007
+ - name: metrics
+ containerPort: 9464
+
envFrom:
- secretRef:
name: backstage-env
@@ -48,8 +51,11 @@ spec:
subPath: app-config.k8s.yaml
# Start Backstage with your existing configs + the k8s override
+ # IMPORTANT: add --require ./instrumentation.js because this YAML overrides Dockerfile CMD.
command: ['node']
args:
+ - '--require'
+ - './instrumentation.js'
- 'packages/backend'
- '--config'
- 'app-config.yaml'
@@ -80,6 +86,9 @@ spec:
- name: http
port: 80
targetPort: 7007
+ - name: metrics
+ port: 9464
+ targetPort: 9464
---
apiVersion: networking.k8s.io/v1
Add kubernetes/backstage-servicemonitor.yaml
Create this file exactly.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: backstage
  namespace: observability
  labels:
    release: kps
spec:
  namespaceSelector:
    matchNames:
      - backstage
  selector:
    matchLabels:
      app: homelab-backstage
  endpoints:
    - port: metrics
      path: /metrics
      interval: 30s
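Note that selector.matchLabels matches labels on the Service object itself, not on the pods. A sketch of the Service metadata this ServiceMonitor assumes (check your actual kubernetes/homelab-backstage.yaml; only the relevant fields are shown):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: homelab-backstage
  namespace: backstage
  labels:
    app: homelab-backstage   # matched by the ServiceMonitor's selector.matchLabels
spec:
  ports:
    - name: metrics          # matched by the ServiceMonitor's endpoints[].port
      port: 9464
      targetPort: 9464
```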
Add it to kubernetes/kustomization.yaml
diff --git a/kubernetes/kustomization.yaml b/kubernetes/kustomization.yaml
index 7b3d823..0422fdc 100644
--- a/kubernetes/kustomization.yaml
+++ b/kubernetes/kustomization.yaml
@@ -10,6 +10,7 @@ resources:
- eso-vault-sa.yaml
- secretstore-vault-backstage.yaml
- externalsecret-backstage-env.yaml
+ - backstage-servicemonitor.yaml
generatorOptions:
disableNameSuffixHash: false
- Verify Prometheus is scraping it
- Quick in-cluster scrape test
kubectl -n observability run curl --rm -i --restart=Never \
--image=curlimages/curl \
--command -- sh -lc \
'curl -sS http://homelab-backstage.backstage.svc:9464/metrics | head'
If you see target_info and other metrics, the endpoint is reachable from inside the cluster.
- Prometheus UI → Targets
Go to Status → Targets and confirm the target created from ServiceMonitor/backstage is UP.
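You can also ask the Prometheus HTTP API directly after a port-forward (the service name is an assumption; use whatever your kube-prometheus-stack release created). The check below runs against a trimmed sample response so you can see the shape it matches:

```shell
# Against a live Prometheus (names are assumptions for your release):
#   kubectl -n observability port-forward svc/<your-kps-prometheus-svc> 9090
#   curl -s http://127.0.0.1:9090/api/v1/targets
#
# Trimmed, hypothetical sample of the JSON you'd get back:
response='{"data":{"activeTargets":[{"labels":{"job":"backstage"},"health":"up"}]}}'

# Crude check that a backstage target exists and is healthy.
case "$response" in
  *'"job":"backstage"'*'"health":"up"'*) echo "backstage target is up" ;;
  *) echo "backstage target missing or down" ;;
esac
# → backstage target is up
```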