Ceph — Benchmarks

Scope

Performance characteristics, scaling limits, and resource consumption for Ceph.

RADOS Bench Results

Operation          3 OSD (HDD)         3 OSD (SSD)     12 OSD (NVMe)
Seq Write          300-500 MB/s        1-2 GB/s        5-10 GB/s
Seq Read           400-600 MB/s        1.5-3 GB/s      8-15 GB/s
Random 4K Write    500-1,000 IOPS      10k-30k IOPS    100k+ IOPS
Random 4K Read     1,000-2,000 IOPS    20k-50k IOPS    200k+ IOPS
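Numbers like the above come from `rados bench`, which ships with Ceph. A minimal sketch of a comparable run, assuming a throwaway pool named `bench` (hypothetical name) on a live cluster:

```shell
# Create a scratch pool for benchmarking (pg_num chosen for a small test cluster).
ceph osd pool create bench 64 64

# Sequential write for 60 s: 4 MB objects, 16 concurrent ops.
# --no-cleanup keeps the written objects so the read tests below have data.
rados bench -p bench 60 write -b 4M -t 16 --no-cleanup

# Sequential and random reads against the objects written above.
rados bench -p bench 60 seq -t 16
rados bench -p bench 60 rand -t 16

# Remove the benchmark objects, then the pool.
rados -p bench cleanup
ceph osd pool delete bench bench --yes-i-really-really-mean-it
```

For the random 4K rows, rerun the write phase with -b 4K. Results vary with replication factor, network, and journal/WAL placement, so treat the table as order-of-magnitude guidance.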

RBD (Block) Performance

Block Size        Throughput          IOPS       Latency (P99)
4K random         N/A                 10k-50k    1-5 ms
64K sequential    500 MB/s-2 GB/s     N/A        2-10 ms
1M sequential     1-5 GB/s            N/A        5-20 ms
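RBD latency percentiles such as the P99 column are typically measured with fio's librbd engine, which talks to the image directly without mapping a kernel block device. A sketch, assuming a pool `bench` and an image `bench-img` (both hypothetical names) already exist and fio was built with rbd support:

```shell
# 4K random write, queue depth 32, 60 s time-based run.
# fio's "clat percentiles" output includes the 99th percentile latency.
fio --name=rbd-4k-randwrite \
    --ioengine=rbd --pool=bench --rbdname=bench-img \
    --rw=randwrite --bs=4k --iodepth=32 --numjobs=1 \
    --runtime=60 --time_based --direct=1 --group_reporting
```

Swap --rw and --bs (e.g. --rw=write --bs=1m) to reproduce the sequential rows; the throughput figures in the table correspond to fio's aggregate bandwidth line.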

Scaling Limits

Dimension          Limit                      Notes
OSDs per cluster   10,000+                    Tested at this scale by CERN
Total capacity     Exabytes                   Capacity scales linearly with OSD count
Pools              Bounded by PG budget       Every pool's PGs count against the per-OSD PG target; tune pg_num per pool
PGs per OSD        100-200 recommended        Performance degrades beyond ~300
Objects per PG     Millions                   No hard per-PG limit
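The PG budget above drives pool sizing. The classic Ceph sizing rule is (OSD count x target PGs per OSD) / pool replica count, rounded up to a power of two. A small sketch of that arithmetic:

```python
def recommended_pg_count(num_osds: int,
                         target_pgs_per_osd: int = 100,
                         replica_count: int = 3) -> int:
    """Classic Ceph pg_num sizing rule:
    (OSDs * target PGs per OSD) / replica count,
    rounded up to the nearest power of two."""
    raw = num_osds * target_pgs_per_osd / replica_count
    pg_num = 1
    while pg_num < raw:
        pg_num *= 2
    return pg_num

# 12 NVMe OSDs, 3x replication, 100 PGs/OSD target:
# raw = 12 * 100 / 3 = 400, rounded up to 512.
print(recommended_pg_count(12))  # 512
```

Modern clusters can delegate this to the pg_autoscaler, but the manual rule is still useful for sanity-checking what the autoscaler picks.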