In my previous post,
I set up:
- A single-node Ceph Squid v19 cluster on Ubuntu 24.04
- RADOS Gateway (RGW) S3 on port 8081
- HAProxy with SSL termination on ceph.maksonlee.com:443 for S3 and ceph.maksonlee.com:8443 for the Dashboard
That setup is perfect for testing S3 object storage.
But Ceph is more than S3. It can also expose a POSIX-compatible shared filesystem called CephFS, backed by the same RADOS cluster.
In this post, we’ll add CephFS support to that existing single-node Ceph cluster and mount it from a separate client.
What Is CephFS?
CephFS is Ceph’s distributed file system:
- It stores file data as objects in the same RADOS cluster used by RBD and RGW.
- It has Metadata Servers (MDS) to manage directories, filenames, permissions, and hierarchy.
- Clients see it as a normal POSIX filesystem: you can mkdir, cp, rm, and run applications on it.
If RGW is “S3-compatible object storage”, then CephFS is “a shared network filesystem with POSIX semantics” — useful for:
- Shared home directories
- Build artifacts
- Scratch space for clusters and labs
In this post, we only care about a simple lab: one node, one OSD, and one MDS.
Why Use CephFS Instead of Native NFS
On a single server, exporting an ext4/XFS directory over NFS is usually faster and simpler than CephFS. CephFS has extra layers (RADOS + MDS), so there is some overhead.
CephFS starts to make sense when you move beyond “one NAS box”:
- Scale-out: add OSDs and MDS daemons to grow capacity and performance, instead of scaling a single NFS server vertically.
- No single storage node: data is replicated across the Ceph cluster; losing one node or disk doesn’t instantly take the filesystem down.
- Unified storage: the same cluster provides RBD (block), RGW (S3), and CephFS (file) on the same OSDs and policies.
So: NFS wins for “one box + a few clients”; CephFS wins when you want scale-out, resilience, and one storage backend for block/object/file. In this post we’re still on a single node, but the workflow is the same as what you’d use later on a multi-node cluster.
Lab Assumptions
This guide assumes you’ve already followed the All-in-One Ceph S3 post and have:
- Hostname: ceph
- IP: 192.168.0.81
- DNS: ceph.maksonlee.com
- Single-node Ceph Squid v19 cluster installed via cephadm
- One OSD on /dev/sdb
- RGW running on port 8081
- Dashboard exposed at https://ceph.maksonlee.com:8443 via HAProxy
We will reuse the same cluster and add:
- A CephFS filesystem named cephfs
- An MDS daemon on ceph
- A CephFS client user
- A mount on /mnt/cephfs from a separate Ubuntu client (test.maksonlee.com in my lab)
Create a CephFS Filesystem
On the Ceph node (ceph), create the filesystem and MDS:
sudo cephadm shell -- ceph fs volume create cephfs --placement="1 ceph"
Verify:
sudo cephadm shell -- ceph fs ls
sudo cephadm shell -- ceph orch ps --service_name mds.cephfs
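If everything came up, ceph fs status gives a compact summary: one active MDS plus the metadata and data pools that fs volume create made for you. The exact layout varies by release, and the cephfs.cephfs.meta / cephfs.cephfs.data names below assume the default pool naming, so treat this as a rough sketch:
sudo cephadm shell -- ceph fs status cephfs
# cephfs - 0 clients
# RANK  STATE   MDS                 ACTIVITY
#  0    active  cephfs.ceph.xxxxxx  Reqs: 0/s
# POOL                TYPE      USED  AVAIL
# cephfs.cephfs.meta  metadata  ...   ...
# cephfs.cephfs.data  data      ...   ...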
Create a CephFS Client User
Create a CephFS client with RW access:
sudo cephadm shell -- ceph fs authorize cephfs client.cephfs / rw
Export keyring and secret to /etc/ceph:
sudo cephadm shell -- ceph auth get client.cephfs \
| sudo tee /etc/ceph/ceph.client.cephfs.keyring > /dev/null
sudo cephadm shell -- ceph auth print-key client.cephfs \
| sudo tee /etc/ceph/client.cephfs.secret > /dev/null
sudo chmod 600 /etc/ceph/ceph.client.cephfs.keyring /etc/ceph/client.cephfs.secret
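Before copying anything to the client, it's worth eyeballing the exported keyring. The caps that fs authorize grants are scoped to this one filesystem; on recent releases they look roughly like the comments below (exact cap strings can differ between versions):
sudo cat /etc/ceph/ceph.client.cephfs.keyring
# [client.cephfs]
#         key = AQD...==
#         caps mds = "allow rw fsname=cephfs"
#         caps mon = "allow r fsname=cephfs"
#         caps osd = "allow rw tag cephfs data=cephfs"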
Prepare the Client
Install Ceph client tools
On the client:
sudo apt update
sudo apt install -y ceph-common
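Optionally confirm that the tools and the in-kernel CephFS client are available; the ceph module ships with the stock Ubuntu 24.04 kernel, so this should just work:
ceph --version
modinfo ceph | head -n 3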
Copy ceph.conf and client credentials
In my lab:
- Ceph node: ceph.maksonlee.com
- Ceph SSH user: administrator
- Client host: test.maksonlee.com
- Client user: ubuntu
On the Ceph node:
sudo scp /etc/ceph/ceph.conf \
ubuntu@test.maksonlee.com:/home/ubuntu/
sudo scp /etc/ceph/ceph.client.cephfs.keyring \
ubuntu@test.maksonlee.com:/home/ubuntu/
sudo scp /etc/ceph/client.cephfs.secret \
ubuntu@test.maksonlee.com:/home/ubuntu/
On the client:
ls
# ceph.conf ceph.client.cephfs.keyring client.cephfs.secret
sudo mv ~/ceph.conf /etc/ceph/ceph.conf
sudo mv ~/ceph.client.cephfs.keyring /etc/ceph/ceph.client.cephfs.keyring
sudo mv ~/client.cephfs.secret /etc/ceph/client.cephfs.secret
sudo chmod 600 /etc/ceph/ceph.client.cephfs.keyring /etc/ceph/client.cephfs.secret
Adjust usernames/hosts to match your environment.
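If a later mount hangs or fails, the first thing to rule out is connectivity: the client needs to reach the monitor on port 3300 (msgr2) and/or 6789 (legacy msgr1). An optional quick check, assuming netcat is installed on the client:
nc -zv ceph.maksonlee.com 3300
nc -zv ceph.maksonlee.com 6789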
Mount CephFS (Kernel Client)
On the client, create a mount point:
sudo mkdir -p /mnt/cephfs
Mount using the kernel client:
sudo mount -t ceph ceph.maksonlee.com:/ /mnt/cephfs \
-o name=cephfs,secretfile=/etc/ceph/client.cephfs.secret
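If the client can't resolve ceph.maksonlee.com, the same mount works with the monitor IP from the lab assumptions instead of the hostname:
sudo mount -t ceph 192.168.0.81:/ /mnt/cephfs \
  -o name=cephfs,secretfile=/etc/ceph/client.cephfs.secret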
Quick test:
echo "hello from cephfs" | sudo tee /mnt/cephfs/hello.txt
ls -l /mnt/cephfs
df -h /mnt/cephfs
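Back on the Ceph node, the MDS should now report a connected client, which confirms the kernel mount is really talking to the cluster (output approximate):
sudo cephadm shell -- ceph fs status cephfs
# cephfs - 1 clients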
Make It Persistent with fstab
Edit /etc/fstab on the client and add this single line:
ceph.maksonlee.com:/ /mnt/cephfs ceph name=cephfs,secretfile=/etc/ceph/client.cephfs.secret,_netdev 0 2
If you prefer a one-liner to append it:
sudo sh -c 'echo "ceph.maksonlee.com:/ /mnt/cephfs ceph name=cephfs,secretfile=/etc/ceph/client.cephfs.secret,_netdev 0 2" >> /etc/fstab'
Then test:
sudo umount /mnt/cephfs
sudo mount -a
df -h /mnt/cephfs
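To be confident the entry really works at boot, reboot the client and check that the mount comes back by itself; _netdev tells systemd to wait for the network before attempting it:
sudo reboot
# after the client is back up:
findmnt /mnt/cephfs
df -h /mnt/cephfs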