Ceph

Unified, software-defined distributed storage providing block, object, and file interfaces at exabyte scale; the de facto standard backend for OpenStack and a common choice for Kubernetes.

Overview

Ceph is a unified distributed storage system that provides block (RBD), object (RGW/S3), and file (CephFS) storage from a single cluster. It uses the CRUSH algorithm for deterministic data placement, eliminating the need for a central lookup table: any client can compute where data lives from the cluster map alone. Ceph is the most widely deployed storage backend for OpenStack and is commonly used with Kubernetes via the Rook operator.
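The placement idea can be illustrated with a toy sketch. Below, weighted rendezvous (highest-random-weight) hashing stands in for CRUSH's weighted pseudo-random placement; the OSD names and weights are invented for illustration, and this is not Ceph's actual algorithm or API.

```python
import hashlib
import math

# Hypothetical cluster map: OSD name -> weight (invented for illustration).
OSDS = {"osd.0": 1.0, "osd.1": 1.0, "osd.2": 2.0, "osd.3": 1.0}

def place(obj: str, replicas: int = 3) -> list[str]:
    """Pick `replicas` OSDs for an object with weighted rendezvous hashing,
    a simplified stand-in for CRUSH. The result depends only on the object
    name and the cluster map, so every client computes the same placement
    without consulting a central lookup table."""
    def score(osd: str) -> float:
        digest = hashlib.sha256(f"{obj}:{osd}".encode()).digest()
        u = (int.from_bytes(digest[:8], "big") + 1) / 2**64  # uniform in (0, 1]
        return OSDS[osd] / -math.log(u)  # weighted: bigger OSDs win more often
    return sorted(OSDS, key=score, reverse=True)[:replicas]
```

Adding an OSD to the map changes only the placements that hash onto the new device, which mirrors CRUSH's minimal-data-movement property during cluster changes.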

Key Facts

| Attribute | Detail |
| --- | --- |
| Repository | github.com/ceph/ceph |
| Stars | ~14k+ ⭐ |
| Latest version | v20.2.1 "Tentacle" (April 6, 2026) |
| Languages | C++, Python |
| License | LGPL-2.1 / LGPL-3.0 |
| Governance | Community, with Red Hat / IBM backing |

Evaluation

| Pros | Cons |
| --- | --- |
| Unified: block + object + file storage | Complex to deploy and operate |
| CRUSH: no SPOF, near-linear scaling | High resource requirements (RAM, CPU) |
| Self-healing, automatic rebalancing | Tuning required for optimal performance |
| Industry standard (OpenStack, K8s) | Erasure coding historically slower (improved in Tentacle) |
| Proven at exabyte scale | OSD recovery can saturate the network |
| Cephadm for lifecycle management | |
| FastEC in v20: better EC performance | |
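The Cephadm workflow noted above looks roughly like this. A sketch, assuming a fresh host with a container runtime and the cephadm binary installed; the monitor IP and host name are placeholders:

```shell
# Bootstrap a new single-node cluster; 192.168.1.10 is a placeholder monitor IP.
cephadm bootstrap --mon-ip 192.168.1.10

# Add more hosts, then let the orchestrator consume their unused disks as OSDs.
ceph orch host add host2
ceph orch apply osd --all-available-devices

# Check cluster health and running daemons.
ceph -s
ceph orch ps
```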

Key Features (Tentacle v20.2)

| Feature | Detail |
| --- | --- |
| FastEC | Major erasure-coding performance improvement for small I/O |
| SMB Manager | Integrated Samba/CephFS SMB shares with Active Directory support |
| SeaStore (preview) | Next-generation OSD object store for NVMe devices |
| Multi-cluster dashboard | Manage multiple Ceph clusters from one UI |
| OAuth 2.0 | Dashboard authentication |
| NVMe/TCP gateways | NVMe-oF target support |
| RBD transient locks | Better exclusive-lock handling |
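For intuition on what erasure coding trades against replication (the territory FastEC speeds up), here is a toy sketch: k data chunks plus one XOR parity chunk, analogous to a k+1 EC profile. Real Ceph EC pools use Reed-Solomon-style plugins (jerasure, ISA-L) and support multiple parity chunks; the function names here are invented.

```python
import functools

def xor(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length chunks."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data: bytes, k: int) -> list[bytes]:
    """Split data into k padded chunks and append one XOR parity chunk.
    Storage overhead is (k + 1) / k, versus 3x for triple replication."""
    size = -(-len(data) // k)  # ceil(len / k)
    chunks = [data[i * size:(i + 1) * size].ljust(size, b"\0") for i in range(k)]
    return chunks + [functools.reduce(xor, chunks)]

def recover(stripe: list[bytes], lost: int) -> bytes:
    """Rebuild the chunk at index `lost` by XOR-ing every surviving chunk."""
    survivors = [c for i, c in enumerate(stripe) if i != lost]
    return functools.reduce(xor, survivors)
```

Losing any single chunk (data or parity) is recoverable here, but losing two is not, which is why production EC profiles typically use two or more parity chunks.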

Storage Interfaces

| Interface | Protocol | Use case |
| --- | --- | --- |
| RBD | Block (RADOS Block Device) | VM disks, Kubernetes PVs, databases |
| RGW | Object (S3 / Swift API) | Backups, media, data lakes |
| CephFS | File (POSIX) | Shared filesystems, NFS replacement |
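All three interfaces are provisioned from the same cluster with the stock CLI tools. A sketch, assuming a healthy cluster and an admin keyring; the pool, image, user, and volume names are placeholders:

```shell
# Block: a pool plus a 10 GiB RBD image (--size is in MiB).
ceph osd pool create vms
rbd pool init vms
rbd create vms/disk1 --size 10240

# Object: an RGW user whose generated keys work with any S3 client.
radosgw-admin user create --uid demo --display-name "Demo User"

# File: a CephFS volume for shared POSIX access.
ceph fs volume create sharedfs
```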
