SigNoz — Benchmarks
Performance characteristics, capacity planning data, and scale limits for SigNoz.
vs ELK Stack
| Metric | SigNoz (ClickHouse) | ELK Stack | Advantage |
|---|---|---|---|
| Log ingestion speed | Baseline | ~2.5x slower | SigNoz 2.5x faster |
| Resource consumption | Baseline | ~2x more | SigNoz 50% less |
| Aggregate query speed | Baseline | ~13x slower | SigNoz up to 13x faster |
| Ingestion capacity | 10+ TB/day | Similar | Comparable |
| Compression ratio | 10–30x (columnar) | 1.5x (Lucene) | SigNoz 7–20x better |
Source: SigNoz vendor benchmarks. Cross-validated against ClickHouse engineering blog data on columnar efficiency.
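The "7–20x better" compression advantage is simply the ratio of the two quoted compression ranges. A quick arithmetic check:

```python
# Derived advantage: SigNoz columnar compression (10-30x, per vendor
# benchmarks) divided by Lucene's ~1.5x gives the relative improvement.
signoz_compression = (10, 30)  # ClickHouse columnar range
elk_compression = 1.5          # Lucene segment compression

advantage = tuple(round(c / elk_compression, 1) for c in signoz_compression)
print(advantage)  # (6.7, 20.0) -> roughly "7-20x better"
```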
High Cardinality Handling
| Aspect | Detail |
|---|---|
| Approach | Columnar storage — no inverted-index explosion |
| Impact | Adding a dimension with billions of unique values is trivial |
| Best for | Logs and traces with rich metadata |
| Caution | Avoid high-cardinality attributes as metric labels |
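The caution about metric labels follows from how time-series databases work: each unique label combination is a separate series. A minimal sketch (the label names and counts are illustrative, not from SigNoz):

```python
# Each unique label combination creates a separate metric time series.
# A bounded label set (endpoint x status) stays manageable; adding an
# unbounded attribute like user_id multiplies the active series count.
endpoints, statuses = 50, 5
bounded_series = endpoints * statuses          # 250 series: fine
user_ids = 1_000_000
unbounded_series = bounded_series * user_ids   # 250,000,000 series: avoid
print(bounded_series, unbounded_series)
```

High-cardinality values belong on logs and traces (where columnar storage absorbs them), not on metric labels.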
Capacity Planning
Resource Matrix (from SigNoz Official Docs)
| Component | Small (< 10 GB/day) | Medium (10–50 GB/day) | Large (50–200 GB/day) |
|---|---|---|---|
| OTel Collectors | 1 replica, 1 CPU, 2 GB | 2 replicas, 2 CPU, 4 GB | 4+ replicas, 4 CPU, 8 GB |
| Query Service | 1 replica, 0.5 CPU, 1 GB | 2 replicas, 1 CPU, 2 GB | 2 replicas, 2 CPU, 4 GB |
| ClickHouse | 1 node, 4 CPU, 16 GB | 2 shards × 2 replicas, 8 CPU, 32 GB | 4+ shards × 2 replicas, 16 CPU, 64 GB |
| ZooKeeper / Keeper | 1 node, 0.5 CPU, 1 GB | 3 nodes, 1 CPU, 2 GB | 3 nodes, 2 CPU, 4 GB |
| PostgreSQL | 1 node, 0.5 CPU, 1 GB | Managed DB (RDS) | Managed DB (RDS) |
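The matrix above can be encoded as a simple tier lookup for automation. A minimal sketch: the tier boundaries (10, 50, 200 GB/day) come from the table, but the dict keys and function name are ours, and the boundary handling at exactly 10/50 GB/day is an assumption:

```python
# Tier lookup from the SigNoz resource matrix. Only two components are
# shown per tier for brevity; extend with the remaining table rows.
TIERS = [
    (10,  {"otel_collectors": "1 replica, 1 CPU, 2 GB",
           "clickhouse": "1 node, 4 CPU, 16 GB"}),
    (50,  {"otel_collectors": "2 replicas, 2 CPU, 4 GB",
           "clickhouse": "2 shards x 2 replicas, 8 CPU, 32 GB"}),
    (200, {"otel_collectors": "4+ replicas, 4 CPU, 8 GB",
           "clickhouse": "4+ shards x 2 replicas, 16 CPU, 64 GB"}),
]

def tier_for(daily_gb: float) -> dict:
    for upper_bound, spec in TIERS:
        if daily_gb < upper_bound:   # "Small" is < 10 GB/day, and so on
            return spec
    return TIERS[-1][1]              # beyond 200 GB/day: start from Large

print(tier_for(25)["clickhouse"])
```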
Cloud Instance Recommendations
| Cloud | General Purpose (Collectors, Query Service) | Compute-Optimized (ClickHouse) |
|---|---|---|
| AWS | T3 family+ (Intel), T4g+ (ARM) | C5+ (Intel), C6g/C7g+ (ARM) |
| GCP | E2 family+ | C3 / C3D+ |
Storage Sizing
| Signal | Daily Volume | 15-Day Retention | 30-Day Retention |
|---|---|---|---|
| Logs (10:1 compression) | 50 GB raw/day | ~75 GB disk | ~150 GB disk |
| Traces (15:1 compression) | 20 GB raw/day | ~20 GB disk | ~40 GB disk |
| Metrics (30:1 compression) | 5 GB raw/day | ~2.5 GB disk | ~5 GB disk |
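The table's disk figures follow one formula: raw daily volume divided by the compression ratio, multiplied by retention days. A minimal sketch that reproduces the rows above:

```python
def disk_gb(raw_gb_per_day: float, compression: float, retention_days: int) -> float:
    """Disk needed = (daily raw volume / compression ratio) * retention."""
    return raw_gb_per_day / compression * retention_days

# Reproduce the sizing table rows:
print(disk_gb(50, 10, 15))   # logs, 15-day retention   -> 75.0 GB
print(disk_gb(20, 15, 30))   # traces, 30-day retention -> 40.0 GB
print(disk_gb(5, 30, 30))    # metrics, 30-day retention -> 5.0 GB
```

Plug in your own measured compression ratios once real data is flowing; the table's 10:1/15:1/30:1 ratios are typical, not guaranteed.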
Scale Limits
| Dimension | Practical Limit | Notes |
|---|---|---|
| Daily ingestion | 10+ TB/day | Requires multi-shard ClickHouse |
| Active time series | 10M+ | ClickHouse handles high cardinality well |
| Concurrent queries | 50–100 | Depends on ClickHouse node count |
| Trace span retention | 15–90 days typical | Limited by storage cost |
| Log retention | 15–90 days typical | Managed via ClickHouse TTLs |
- System tables growth: ClickHouse's `query_log` and `zookeeper_log` can grow rapidly. Monitor them and set TTLs.
- ClickHouse parts merges: Under very high ingestion, ensure sufficient CPU for background merges.
- ZooKeeper latency: In multi-shard setups, ZooKeeper latency directly impacts replication lag.
Caveats
- Benchmarks are from SigNoz vendor testing and ClickHouse engineering publications.
- Actual performance varies significantly based on data patterns, cardinality, and query complexity.
- Managed ClickHouse providers may exhibit different resource profiles.
Sources