This post shows how I deployed ThingsBoard Community Edition (microservices, hybrid DB) on my existing 3-node bare-metal Kubernetes cluster, using:
- Kubernetes v1.34 on Ubuntu 24.04 (kubeadm)
- kube-vip for API HA
- MetalLB (L2) for LoadBalancer IPs
- Traefik as the Ingress controller with a wildcard `*.maksonlee.com` certificate from cert-manager
- Ceph RBD as the default StorageClass
- ThingsBoard CE in microservices mode with a hybrid database:
  - PostgreSQL for entities
  - Cassandra for time-series
  - Kafka + Zookeeper + Valkey
- HTTPS UI: `https://tb.maksonlee.com` via Traefik
- MQTT over TLS: `mqtts://mqtt.maksonlee.com:8883` via MetalLB, with TLS terminated inside the MQTT transport (ready for mTLS later)
Everything runs on three bare-metal Ubuntu 24.04 nodes on my LAN.
Why MQTT over TLS doesn’t go through Traefik
In theory I could run MQTT over TLS through Traefik as a TCP router on port 8883, but I intentionally keep MQTT out of the HTTP ingress path:
- MQTT is raw TCP, not HTTP – it's simpler to expose it directly via a `LoadBalancer` Service on port 8883 than to add TCP entryPoints and routers in Traefik.
- TLS (and later mTLS) terminates inside ThingsBoard – the `tb-mqtt-transport` pod sees the full TLS handshake and client certificates, which makes X.509 / mTLS device auth easier to manage without extra Traefik config.
- Separate failure domains – Traefik issues (ingress, middlewares, HTTP routing) don't affect MQTT; MetalLB sends 8883 traffic straight to the MQTT transport.
If you prefer, you can front MQTTs with Traefik TCP routers and TLS passthrough, but this guide keeps the MQTT path as a simple LoadBalancer directly to tb-mqtt-transport.
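For reference, that alternative would look roughly like the following. This is a sketch only, not applied in this guide: it assumes you add a `mqtts` TCP entryPoint on :8883 to Traefik's static configuration, and a Service that actually exposes 8883 (the upstream manifests don't define one, as we'll see later).

apiVersion: traefik.io/v1alpha1
kind: IngressRouteTCP
metadata:
  name: tb-mqtts-route      # hypothetical name
  namespace: thingsboard
spec:
  entryPoints:
    - mqtts                 # assumed TCP entryPoint bound to :8883
  routes:
    - match: HostSNI(`mqtt.maksonlee.com`)
      services:
        - name: tb-mqtt-transport
          port: 8883
  tls:
    passthrough: true       # Traefik forwards raw TLS; the pod still terminates it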
- Cluster & Network Overview
Kubernetes cluster
Three bare-metal nodes, each control-plane + worker:
k8s-1.maksonlee.com 192.168.0.99
k8s-2.maksonlee.com 192.168.0.100
k8s-3.maksonlee.com 192.168.0.101
Key settings:
- OS: Ubuntu Server 24.04 on all nodes
- Container runtime: containerd (`SystemdCgroup = true`)
- Kubernetes: v1.34 from `pkgs.k8s.io`
- CNI: Calico
- Pod CIDR: `10.244.0.0/16`
- Service CIDR: `10.96.0.0/12` (kubeadm default)
Control-plane endpoint:
- kube-vip VIP: `192.168.0.97`
- DNS: `k8s.maksonlee.com` → `192.168.0.97:6443`
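A quick sanity check that the VIP answers (on default kubeadm clusters `/livez` is readable anonymously; adjust if you've tightened RBAC):

# Expect "ok" from the API server behind the kube-vip VIP
curl -k https://k8s.maksonlee.com:6443/livez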
MetalLB + Traefik
MetalLB:
- Mode: L2
- IP pool: `192.168.0.98-192.168.0.98` (a single address)
Traefik:
- Namespace: `traefik`
- Service type: `LoadBalancer` with `spec.loadBalancerIP: 192.168.0.98`
- `ingressClassName: traefik`
- Default TLS certificate: wildcard `*.maksonlee.com` via cert-manager (Cloudflare DNS-01)
Later I share the same IP 192.168.0.98 with a dedicated MQTTs LoadBalancer Service using MetalLB’s allow-shared-ip annotation.
Ceph RBD as default StorageClass
From a previous post I already had an external Ceph Squid v19 cluster (ceph.maksonlee.com) and the Ceph CSI RBD driver configured. My default StorageClass is:
kubectl get storageclass
NAME PROVISIONER DEFAULT
csi-rbd-sc rbd.csi.ceph.com Yes
Any PVC that omits storageClassName lands on Ceph RBD.
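If you want to double-check which class is the default, the marker is an annotation on the StorageClass itself:

# Should print "true" for the default class
kubectl get sc csi-rbd-sc \
  -o jsonpath='{.metadata.annotations.storageclass\.kubernetes\.io/is-default-class}'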
DNS
On my LAN DNS (or /etc/hosts on clients and nodes):
192.168.0.97 k8s.maksonlee.com
192.168.0.98 app1.maksonlee.com app2.maksonlee.com tb.maksonlee.com mqtt.maksonlee.com
192.168.0.99 k8s-1.maksonlee.com k8s-1
192.168.0.100 k8s-2.maksonlee.com k8s-2
192.168.0.101 k8s-3.maksonlee.com k8s-3
192.168.0.98 is a shared MetalLB IP:
- `80 / 443` → Traefik (HTTP / HTTPS ingress)
- `8883` → ThingsBoard MQTTs (TCP) via MetalLB
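Worth a quick check from a LAN client before going further:

# Both app names should resolve to the shared MetalLB IP 192.168.0.98
getent hosts tb.maksonlee.com mqtt.maksonlee.com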
- Clone ThingsBoard CE K8S Repo (Minikube Flavor)
On k8s-1 (admin node with kubectl):
cd ~
git clone https://github.com/thingsboard/thingsboard-ce-k8s.git
cd thingsboard-ce-k8s/minikube
This directory contains:
- `thirdparty.yml` – Postgres, Cassandra, Kafka, Zookeeper, Valkey
- `thingsboard.yml` – TB transports + Web UI
- `tb-node.yml` – TB core node
- `tb-transport-configmap.yml`, `tb-node-configmap.yml`, `tb-kafka-configmap.yml`, `tb-cache-configmap.yml`
- `routes.yml` – original Ingress (nginx-style)
- Scripts: `k8s-install-tb.sh`, `k8s-deploy-resources.sh`, `k8s-deploy-thirdparty.sh`
- DB configs: `postgres/`, `hybrid/`
I run ThingsBoard in hybrid mode and rely on Ceph RBD as the default StorageClass.
- Configure Hybrid DB (Postgres + 3-Node Cassandra)
Set .env
Edit .env in thingsboard-ce-k8s/minikube:
vi .env
Key values:
DATABASE=hybrid
CASSANDRA_REPLICATION_FACTOR=3
- `DATABASE=hybrid` → TB uses:
  - PostgreSQL (entities)
  - Cassandra (time-series)
- `CASSANDRA_REPLICATION_FACTOR=3` → matches the Cassandra StatefulSet (3 replicas) and my 3-node cluster.
Important: this does not change the number of Cassandra pods; it only sets the replication factor. The Cassandra StatefulSet still defaults to 1 replica if you don't touch `cassandra.yml`.
Update cassandra.yml for 3 replicas + smaller CPU request
The upstream minikube/cassandra.yml has:
- `replicas: 1`
- `limits.cpu: 1000m` and `requests.cpu: 1000m`
On my 3-node lab, I run 3 Cassandra pods and cut the CPU request down to 250m so the scheduler will actually place all three.
I only show the diff:
 apiVersion: apps/v1
 kind: StatefulSet
 metadata:
   name: cassandra
   namespace: thingsboard
   labels:
     app: cassandra
 spec:
   serviceName: cassandra
-  replicas: 1
+  replicas: 3
   selector:
     matchLabels:
       app: cassandra
@@ -...@@
       containers:
         - name: cassandra
           image: cassandra:5.0.4
@@ -...@@
           resources:
             limits:
               cpu: "1000m"
               memory: 2Gi
             requests:
-              cpu: "1000m"
+              cpu: "250m"
               memory: 2Gi
Apply it before running the installer:
kubectl apply -f cassandra.yml
Now the installer will bring up a 3-node ring (cassandra-0/1/2), and RF=3 actually makes sense.
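Once the three pods are Running (after the installer in the next section), you can verify the ring and the keyspace replication from inside any Cassandra pod (assuming the default thingsboard keyspace name):

# All three nodes should report "UN" (Up/Normal)
kubectl -n thingsboard exec cassandra-0 -- nodetool status

# Replication settings of the ThingsBoard keyspace (after DB setup has run)
kubectl -n thingsboard exec cassandra-0 -- cqlsh -e "DESCRIBE KEYSPACE thingsboard"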
- Install Third-Party Stack and Initialize DB
The install script handles third-party services plus DB initialization for ThingsBoard.
From thingsboard-ce-k8s/minikube:
./k8s-install-tb.sh --loadDemo
What this does:
- `kubectl apply -f tb-namespace.yml`
- Sets the current context namespace to `thingsboard`
- `kubectl apply -f thirdparty.yml` (Postgres, Cassandra, Kafka, Zookeeper, Valkey)
- Waits for `statefulset/zookeeper`, `statefulset/tb-kafka`, `statefulset/tb-valkey`
- Launches a `tb-db-setup` pod from `database-setup.yml` and runs it with `INSTALL_TB=true` and `LOAD_DEMO=true`
- Deletes the `tb-db-setup` pod when finished
Watch pods:
kubectl get pods -n thingsboard -w
Target state (third-party only, before TB microservices):
NAME READY STATUS RESTARTS AGE
postgres-… 1/1 Running 0 …
cassandra-0 1/1 Running 0 …
cassandra-1 1/1 Running 0 …
cassandra-2 1/1 Running 0 …
tb-kafka-0 1/1 Running 0 …
tb-valkey-0 1/1 Running 0 …
zookeeper-0 1/1 Running 0 …
zookeeper-1 1/1 Running 0 …
zookeeper-2 1/1 Running 0 …
Check PVCs:
kubectl get pvc -n thingsboard
They should all be bound to csi-rbd-sc (Ceph RBD), e.g.:
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS
postgres-pv-claim Bound pvc-… 5Gi RWO csi-rbd-sc
cassandra-data-cassandra-0 Bound pvc-… 1Gi RWO csi-rbd-sc
cassandra-data-cassandra-1 Bound pvc-… 1Gi RWO csi-rbd-sc
cassandra-data-cassandra-2 Bound pvc-… 1Gi RWO csi-rbd-sc
… (Kafka, ZK, Valkey PVCs) …
- Deploy ThingsBoard Microservices
Next, deploy TB node, transports, and ingress-related resources.
./k8s-deploy-resources.sh
This script:
- Ensures the namespace `thingsboard` exists
- Sets the current context namespace to `thingsboard`
- Applies the DB ConfigMap from `hybrid/tb-node-db-configmap.yml`
- Applies:
  - `tb-cache-configmap.yml`
  - `tb-kafka-configmap.yml`
  - `tb-node-configmap.yml`
  - `tb-transport-configmap.yml`
  - `thingsboard.yml`
  - `tb-node.yml`
  - `routes.yml` (we'll replace this in the next step)
Check the pods:
kubectl get pods -n thingsboard
Expected (after everything is up):
NAME READY STATUS RESTARTS AGE
cassandra-0 1/1 Running 0 …
cassandra-1 1/1 Running 0 …
cassandra-2 1/1 Running 0 …
postgres-56c4dbcd55-mvf4x 1/1 Running 0 …
tb-coap-transport-0 1/1 Running 0 …
tb-http-transport-0 1/1 Running 0 …
tb-js-executor-… 1/1 Running 0 …
tb-kafka-0 1/1 Running 0 …
tb-mqtt-transport-0 1/1 Running 0 …
tb-node-0 1/1 Running 0 …
tb-valkey-0 1/1 Running 0 …
tb-web-ui-… 1/1 Running 0 …
zookeeper-0 1/1 Running 0 …
zookeeper-1 1/1 Running 0 …
zookeeper-2 1/1 Running 0 …
- Replace Upstream Ingress with Traefik Version
The upstream routes.yml is written for nginx ingress (regex annotations). On this cluster I use Traefik with wildcard *.maksonlee.com, so I replaced it with a simple Traefik-friendly Ingress for tb.maksonlee.com.
Final routes.yml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tb-ingress
  namespace: thingsboard
spec:
  ingressClassName: traefik
  rules:
    - host: tb.maksonlee.com
      http:
        paths:
          # Device HTTP transport API
          - path: /api/v1/
            pathType: Prefix
            backend:
              service:
                name: tb-http-transport
                port:
                  number: 8080
          # Core REST API
          - path: /api/
            pathType: Prefix
            backend:
              service:
                name: tb-node
                port:
                  number: 8080
          # Swagger UI
          - path: /swagger
            pathType: Prefix
            backend:
              service:
                name: tb-node
                port:
                  number: 8080
          # Webjars
          - path: /webjars
            pathType: Prefix
            backend:
              service:
                name: tb-node
                port:
                  number: 8080
          # OpenAPI v2/v3
          - path: /v2/
            pathType: Prefix
            backend:
              service:
                name: tb-node
                port:
                  number: 8080
          - path: /v3/
            pathType: Prefix
            backend:
              service:
                name: tb-node
                port:
                  number: 8080
          # Rule node static resources
          - path: /static/rulenode/
            pathType: Prefix
            backend:
              service:
                name: tb-node
                port:
                  number: 8080
          - path: /assets/help/
            pathType: Prefix
            backend:
              service:
                name: tb-node
                port:
                  number: 8080
          # OAuth2 callbacks
          - path: /oauth2/
            pathType: Prefix
            backend:
              service:
                name: tb-node
                port:
                  number: 8080
          - path: /login/oauth2/
            pathType: Prefix
            backend:
              service:
                name: tb-node
                port:
                  number: 8080
          # Everything else → Web UI
          - path: /
            pathType: Prefix
            backend:
              service:
                name: tb-web-ui
                port:
                  number: 8080
Apply:
kubectl apply -f routes.yml
kubectl get ingress -n thingsboard
Result:
NAME CLASS HOSTS ADDRESS PORTS AGE
tb-ingress traefik tb.maksonlee.com 192.168.0.98 80 …
Quick HTTP(S) checks:
curl -k -I https://tb.maksonlee.com/
curl -k -I https://tb.maksonlee.com/api/
curl -k -I "https://tb.maksonlee.com/api/v1/"
curl -k -I https://tb.maksonlee.com/swagger
curl -k -I https://tb.maksonlee.com/webjars/
curl -k -I https://tb.maksonlee.com/oauth2/
curl -k -I https://tb.maksonlee.com/login/oauth2/
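Because the installer ran with --loadDemo, you can also exercise the REST API end to end with the stock demo tenant account (default demo credentials; skip this if you've already changed or removed the demo data):

# Should return a JSON body with a JWT "token" field
curl -s -X POST https://tb.maksonlee.com/api/auth/login \
  -H "Content-Type: application/json" \
  -d '{"username":"tenant@thingsboard.org","password":"tenant"}'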
- Create MQTT TLS Secret (`tb-mqtts-tls`)
The MQTT pod will mount a TLS secret named tb-mqtts-tls in namespace thingsboard.
I created this before patching thingsboard.yml, so the pod can mount it immediately.
Recommended: cert-manager Certificate
Assuming you already have a working ClusterIssuer (e.g. letsencrypt-prod):
tb-mqtts-cert.yaml:
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: tb-mqtts-cert
  namespace: thingsboard
spec:
  secretName: tb-mqtts-tls
  dnsNames:
    - mqtt.maksonlee.com
  issuerRef:
    kind: ClusterIssuer
    name: letsencrypt-prod
Apply and verify:
kubectl apply -f tb-mqtts-cert.yaml
kubectl -n thingsboard get certificate tb-mqtts-cert
kubectl -n thingsboard get secret tb-mqtts-tls
The secret should be type: kubernetes.io/tls with tls.crt + tls.key.
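To double-check what cert-manager actually issued (subject, issuer, validity window), you can decode the secret in place:

# Inspect the issued certificate without extracting any files
kubectl -n thingsboard get secret tb-mqtts-tls \
  -o jsonpath='{.data.tls\.crt}' | base64 -d \
  | openssl x509 -noout -subject -issuer -dates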
Alternative: manual TLS secret
If you already have a cert/key pair:
kubectl -n thingsboard create secret tls tb-mqtts-tls \
--cert=/path/to/tls.crt \
--key=/path/to/tls.key
- Modify `thingsboard.yml` for MQTT TLS & COAP Service
The upstream thingsboard.yml:
- Runs `tb-mqtt-transport` only on plain 1883
- Exposes COAP as a `LoadBalancer` on UDP 5683
On this cluster I wanted:
- MQTT over TLS (port 8883) terminated inside `tb-mqtt-transport`
- COAP kept internal (`ClusterIP`) so it doesn't consume my only MetalLB IP
I only show the diff for minikube/thingsboard.yml (header comments removed).
Add TLS volume, port 8883, and SSL env to tb-mqtt-transport
 apiVersion: apps/v1
 kind: StatefulSet
 metadata:
   name: tb-mqtt-transport
   namespace: thingsboard
@@ -70,6 +70,10 @@ spec:
         - name: tb-mqtt-transport-config
           configMap:
             name: tb-mqtt-transport-config
             items:
               - key: conf
                 path: tb-mqtt-transport.conf
               - key: logback
                 path: logback.xml
+        # TLS secret for MQTT over TLS
+        - name: mqtts-tls
+          secret:
+            secretName: tb-mqtts-tls
       containers:
         - name: server
           imagePullPolicy: Always
           image: thingsboard/tb-mqtt-transport:4.2.1
           ports:
             - containerPort: 1883
               name: mqtt
+            - containerPort: 8883
+              name: mqtts
           env:
             - name: TB_SERVICE_ID
               valueFrom:
                 fieldRef:
                   fieldPath: metadata.name
             - name: MQTT_BIND_ADDRESS
               value: "0.0.0.0"
             - name: MQTT_BIND_PORT
               value: "1883"
             - name: MQTT_TIMEOUT
               value: "10000"
+
+            # Enable MQTT over TLS inside the pod
+            - name: MQTT_SSL_ENABLED
+              value: "true"
+            - name: MQTT_SSL_BIND_ADDRESS
+              value: "0.0.0.0"
+            - name: MQTT_SSL_BIND_PORT
+              value: "8883"
+            - name: MQTT_SSL_CREDENTIALS_TYPE
+              value: "PEM"
+            - name: MQTT_SSL_PEM_CERT
+              value: "/etc/tls/tls.crt"
+            - name: MQTT_SSL_PEM_KEY
+              value: "/etc/tls/tls.key"
@@ -112,6 +132,9 @@ spec:
           volumeMounts:
             - mountPath: /config
               name: tb-mqtt-transport-config
+            - mountPath: /etc/tls
+              name: mqtts-tls
+              readOnly: true
           readinessProbe:
             periodSeconds: 20
             tcpSocket:
               port: 1883
The tb-mqtt-transport Service remains ClusterIP on port 1883 for internal use; the external 8883 will come from a separate LoadBalancer Service.
Make COAP internal only (ClusterIP)
 apiVersion: v1
 kind: Service
 metadata:
   name: tb-coap-transport
   namespace: thingsboard
 spec:
-  type: LoadBalancer
+  type: ClusterIP
   selector:
     app: tb-coap-transport
   ports:
     - port: 5683
       name: coap
       protocol: UDP
Apply the changes and restart MQTT transport:
kubectl apply -f thingsboard.yml
kubectl rollout restart statefulset tb-mqtt-transport -n thingsboard
kubectl rollout status statefulset tb-mqtt-transport -n thingsboard
Sanity check inside the pod:
kubectl -n thingsboard exec -it tb-mqtt-transport-0 -- ls -l /etc/tls
You should see tls.crt and tls.key.
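You can also confirm the transport answers TLS on 8883 before exposing it externally. One convenient option (the alpine/openssl image is just an assumption; any image with openssl works):

# TLS handshake against the pod IP from a throwaway pod
POD_IP=$(kubectl -n thingsboard get pod tb-mqtt-transport-0 -o jsonpath='{.status.podIP}')
kubectl -n thingsboard run tls-check --rm -i --restart=Never \
  --image=alpine/openssl -- s_client -connect ${POD_IP}:8883 -brief </dev/null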
- Share MetalLB IP for Traefik + MQTTs
I want:
- `tb.maksonlee.com:443` → Traefik → TB Web UI / REST
- `mqtt.maksonlee.com:8883` → MetalLB → `tb-mqtt-transport:8883`
Both using 192.168.0.98.
MetalLB allows multiple Services to share one IP if they carry the same `metallb.universe.tf/allow-shared-ip` annotation value and don't conflict on ports.
Annotate the Traefik Service
Edit the Traefik Service in namespace traefik:
kubectl edit svc traefik -n traefik
Ensure metadata and spec include:
metadata:
  name: traefik
  namespace: traefik
  annotations:
    metallb.universe.tf/allow-shared-ip: ip-192-168-0-98
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.0.98
  ports:
    - name: web
      port: 80
      targetPort: 80
    - name: websecure
      port: 443
      targetPort: 443
  selector:
    app.kubernetes.io/name: traefik
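If you'd rather not hand-edit the Service, the annotation can be applied non-interactively:

kubectl -n traefik annotate svc traefik \
  metallb.universe.tf/allow-shared-ip=ip-192-168-0-98 --overwrite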
Create tb-mqtts-lb Service
Now add a LoadBalancer Service for MQTTs that shares the same IP:
tb-mqtts-lb.yaml:
apiVersion: v1
kind: Service
metadata:
  name: tb-mqtts-lb
  namespace: thingsboard
  annotations:
    metallb.universe.tf/allow-shared-ip: ip-192-168-0-98
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.0.98
  externalTrafficPolicy: Cluster
  selector:
    app: tb-mqtt-transport
  ports:
    - name: mqtts
      port: 8883
      targetPort: 8883
      protocol: TCP
Apply and verify:
kubectl apply -f tb-mqtts-lb.yaml
kubectl -n thingsboard get svc tb-mqtts-lb
Expected:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
tb-mqtts-lb LoadBalancer 10.110.249.6 192.168.0.98 8883:3xxxx/TCP …
At this point, externally:
- `tb.maksonlee.com` → `192.168.0.98:443` → Traefik → TB HTTP(S)
- `mqtt.maksonlee.com` → `192.168.0.98:8883` → MetalLB → `tb-mqtt-transport:8883`
Traefik is not in the MQTT path; TLS terminates inside the TB MQTT pod.
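A quick handshake check from any LAN client confirms the Let's Encrypt certificate is served straight by the MQTT transport:

# Subject should be CN=mqtt.maksonlee.com, issued by Let's Encrypt
openssl s_client -connect mqtt.maksonlee.com:8883 \
  -servername mqtt.maksonlee.com -brief </dev/null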
- Test MQTT over TLS
From a machine that trusts Let’s Encrypt (or whatever CA you used):
mosquitto_pub \
-h mqtt.maksonlee.com \
-p 8883 \
-t "v1/devices/me/telemetry" \
-m '{"temp":25}' \
--cafile /etc/ssl/certs/ca-certificates.crt \
-u YOUR_ACCESS_TOKEN
Basic subscribe test:
mosquitto_sub \
-h mqtt.maksonlee.com \
-p 8883 \
-t "v1/devices/me/attributes" \
--cafile /etc/ssl/certs/ca-certificates.crt \
-u YOUR_ACCESS_TOKEN
Right now this uses the standard access-token auth. Because TLS terminates inside tb-mqtt-transport, I can later enable:
- MQTT X.509 client auth
- mTLS using the same Smallstep CA / device cert pipeline I already use for ESP32 + ThingsBoard
…without touching Traefik.
- Summary
In this setup:
- The cluster is a 3-node bare-metal Kubernetes v1.34 cluster with:
  - kube-vip API VIP (`k8s.maksonlee.com` → `192.168.0.97`)
  - MetalLB L2 with a single IP (`192.168.0.98`)
  - Traefik Ingress with wildcard `*.maksonlee.com` via cert-manager
- Ceph RBD (`csi-rbd-sc`) is the default `StorageClass`, and all TB stateful data (Postgres, 3-node Cassandra ring, Kafka, Zookeeper, Valkey, logs) sits on RBD.
- ThingsBoard CE runs in microservices + hybrid DB mode:
  - PostgreSQL for entities
  - Cassandra (RF=3) for time-series
  - Kafka/Zookeeper/Valkey for messaging and caching
- Web UI and API via: https://tb.maksonlee.com
- MQTT over TLS via: mqtts://mqtt.maksonlee.com:8883, with TLS terminated inside the TB MQTT transport, using a cert-manager-managed secret (`tb-mqtts-tls`) and a shared MetalLB IP (`192.168.0.98`) between Traefik and MQTT.