This post documents a practical, works-in-real-life upgrade path for a single-node Ceph lab (MON/MGR/OSD/RGW/Dashboard all on one Ubuntu 24.04 host) from Ceph Squid (19.x) to Ceph Tentacle (20.x) using cephadm-managed containers.
It also shows how to switch from Ubuntu’s apt cephadm (19.x) to an upstream, manually downloaded cephadm (20.2.0) after the cluster finishes upgrading.
Lab/test only: a single node provides no redundancy. Warnings like “pools have no replicas configured” are expected.
This upgrade guide assumes you already have the single-node Squid (v19) cephadm lab from my earlier post.
That post covers: single-node bootstrap on Ubuntu 24.04 (MON/MGR/OSD/RGW/Dashboard), replica=1 pool defaults, and HAProxy + Let’s Encrypt (DNS-01) for S3 (443) and Dashboard (8443).
Target
- Ceph: Tentacle 20.2.0
- OS: Ubuntu 24.04
- Topology: single host (example hostname: `ceph`)
- Deployment: cephadm (containers)
Why this “mixed tooling” situation happens on Ubuntu 24.04
On Ubuntu 24.04, it’s common that:
- your cluster daemons run as container images (so you can upgrade to Ceph 20.x),
- but the host’s `cephadm` package from Ubuntu stays on 19.x.
That’s not ideal long-term. After a major upgrade, you should align host-side tooling (cephadm) to the same major release.
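One quick way to spot the mismatch is to compare the major version the host’s `cephadm` reports against what the cluster daemons run. A minimal sketch using sample version strings (the hashes are placeholders; on a real host you would populate these from `cephadm version` and `sudo cephadm shell -- ceph versions` instead):

```shell
# Hypothetical sample strings; on a real host, populate from:
#   host_ver="$(cephadm version)"
#   cluster_ver="$(sudo cephadm shell -- ceph versions)"
host_ver='cephadm version 19.2.0 (somehash) squid (stable)'
cluster_ver='ceph version 20.2.0 (somehash) tentacle (stable)'

# Extract the major version number from each string.
host_major="$(printf '%s\n' "$host_ver" | grep -oE '[0-9]+\.[0-9]+\.[0-9]+' | head -1 | cut -d. -f1)"
cluster_major="$(printf '%s\n' "$cluster_ver" | grep -oE '[0-9]+\.[0-9]+\.[0-9]+' | head -1 | cut -d. -f1)"

if [ "$host_major" != "$cluster_major" ]; then
  echo "mixed tooling: host cephadm v$host_major vs cluster v$cluster_major"
fi
```

With the apt package still on Squid and the containers upgraded to Tentacle, this prints the mismatch warning; after the switch at the end of this guide, it should print nothing.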
- Single-node requirement: you must have a standby MGR
`ceph orch upgrade` requires an active and a standby mgr. On single-host clusters, this is the #1 blocker.
Check current mgr status:
```shell
sudo cephadm shell -- ceph mgr stat
sudo cephadm shell -- ceph orch ps --daemon-type mgr
```

If you see:
- `num_standby: 0`, or
- applying two mgr daemons fails with `Error EINVAL: Cannot place more than one mgr per host`

…then you need to allow mgr co-location on a single host.
Allow co-located mgr daemons on a single host
Set this once:
```shell
sudo cephadm shell -- ceph config set mgr mgr_standby_modules false
sudo cephadm shell -- ceph config get mgr mgr_standby_modules
```

Expected output: `false`
Deploy 2 mgr daemons on the same host
Do this inside cephadm shell (avoids confusion about file paths between host and container):
```shell
sudo cephadm shell
cat <<'EOF' | ceph orch apply -i -
service_type: mgr
placement:
  hosts:
    - ceph
  count_per_host: 2
EOF
exit
```

- Start the upgrade to Tentacle (20.2.0)
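Before kicking off the upgrade, it’s worth confirming the standby mgr actually came up, since `ceph orch upgrade start` needs it. A minimal sketch of checking `num_standby` from `ceph mgr stat` output; the JSON below is a sample (epoch and active name are placeholders), and on a live cluster you would capture it with `stat_json="$(sudo cephadm shell -- ceph mgr stat)"`:

```shell
# Sample output; on a live cluster:
#   stat_json="$(sudo cephadm shell -- ceph mgr stat)"
stat_json='{"epoch": 42, "available": true, "active_name": "ceph.abcdef", "num_standby": 1}'

# Pull num_standby out of the JSON without needing jq.
standbys="$(printf '%s' "$stat_json" | grep -oE '"num_standby": *[0-9]+' | grep -oE '[0-9]+')"

if [ "$standbys" -ge 1 ]; then
  echo "standby mgr present: safe to start the upgrade"
else
  echo "no standby mgr yet: the upgrade will refuse to start"
fi
```

The same one-liner works in a watch loop if the second mgr takes a minute to deploy.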
Start by version:
```shell
sudo cephadm shell -- ceph orch upgrade start --ceph-version 20.2.0
```

- Monitor upgrade progress

```shell
sudo cephadm shell -- ceph orch upgrade status
sudo cephadm shell -- ceph versions
sudo cephadm shell -- ceph -s
```

Watch upgrade activity/logs (Ctrl-C is fine; it just stops watching):
```shell
sudo cephadm shell -- ceph -W cephadm
```

- Single-node caveat: upgrade may stall at the only OSD
On a single-node/single-OSD lab, cephadm safety checks may refuse to stop the only OSD and you’ll see something like:
- Upgrade: unsafe to stop osd(s) at this time (…) PGs are or would become offline
This is expected in a 1-OSD cluster.
Practical lab workaround: redeploy the OSD to the target image
Find the daemon name:
```shell
sudo cephadm shell -- ceph orch ps --daemon-type osd
```

Redeploy it (example: osd.0):

```shell
sudo cephadm shell -- ceph orch daemon redeploy osd.0 --image quay.io/ceph/ceph:v20.2.0
```

Then re-check:

```shell
sudo cephadm shell -- ceph orch upgrade status
sudo cephadm shell -- ceph versions
```

- Verify completion
```shell
sudo cephadm shell -- ceph orch upgrade status
sudo cephadm shell -- ceph versions
sudo cephadm shell -- ceph -s
```

Expected:

- upgrade status reports `There are no upgrades in progress currently.`
- `ceph versions` shows 20.2.0 for mon/mgr/osd/rgw
- `ceph mgr stat` shows `num_standby: 1` (two mgr daemons: one active + one standby)
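If you want to script that check: `ceph versions` returns JSON whose `overall` section should contain exactly one distinct version string once every daemon is on Tentacle. A sketch against sample output (the hash and daemon count are placeholders; on a live cluster, capture it with `versions_json="$(sudo cephadm shell -- ceph versions)"`):

```shell
# Sample output; on a live cluster:
#   versions_json="$(sudo cephadm shell -- ceph versions)"
versions_json='{"overall": {"ceph version 20.2.0 (somehash) tentacle (stable)": 5}}'

# Count distinct version strings; 1 means every daemon runs the same release.
distinct="$(printf '%s' "$versions_json" | grep -oE 'ceph version [0-9]+\.[0-9]+\.[0-9]+' | sort -u | wc -l)"
echo "distinct versions: $distinct"
```

Anything greater than 1 means some daemon (usually the OSD in this lab) is still on the old image.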
- Switch Ubuntu apt `cephadm` (v19) to a manually downloaded `cephadm` (v20)
Once daemons are fully on Tentacle, update the host-side cephadm tool.
- Protect container runtime packages (optional)
This prevents accidental apt autoremove surprises:
```shell
sudo apt-mark manual docker.io containerd runc
```

- Purge Ubuntu’s `cephadm`

```shell
sudo apt purge -y cephadm
sudo apt autoremove --dry-run
```

- Install `cephadm` 20.2.0 (manual download)
Download and install the Tentacle cephadm binary:
```shell
cd /tmp
curl --silent --remote-name --location https://download.ceph.com/rpm-tentacle/el9/noarch/cephadm
chmod +x cephadm
sudo install -m 0755 cephadm /usr/local/sbin/cephadm
```

Verify:

```shell
command -v cephadm
cephadm version
sudo cephadm shell -- ceph -s
```

Final checklist
```shell
sudo cephadm shell -- ceph orch upgrade status
sudo cephadm shell -- ceph versions
sudo cephadm shell -- ceph mgr stat
cephadm version
```

You want:

- no upgrade in progress
- all daemons on 20.2.0
- `num_standby: 1`
- host `cephadm` is 20.2.0