# Benchmarks

## Test Conditions
- Stack versions: Mimir 3.0.x, Loki 3.7.x, Tempo 2.x, Pyroscope 1.20.x, Grafana 12.4.x
- Date: April 2026
- Note: Benchmarks reflect both published Grafana Labs data and community reports. Individual component benchmarks are covered in detail in the Grafana Benchmarks folder.
## Per-Component Throughput

| Component | Metric | Benchmark | Conditions |
|---|---|---|---|
| Mimir | Active series | 1B+ | Documented by Grafana Labs |
| Mimir | Ingestion rate | 30M+ samples/sec | Multi-tenant, microservices mode |
| Mimir | Storage efficiency | ~1.3 bytes/sample | Compressed TSDB blocks |
| Loki | Ingestion rate | 1 TB+/day | Label-indexed, object storage |
| Loki | Compression ratio | 10–20:1 | Snappy/GZIP on chunks |
| Loki | Query (label-filtered) | Sub-second | When label cardinality < 10k |
| Tempo | Ingestion rate | 100M+ spans/day | Parquet format, object storage |
| Tempo | Trace ID lookup | < 200 ms | Direct lookup |
| Tempo | TraceQL search | 1–30 s | Depends on time range and selectivity |
| Pyroscope | Profiles ingested | Millions/hour | Low overhead (typically < 1% CPU) |
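The Mimir figures above combine into a back-of-envelope storage estimate. A minimal sketch, assuming a 15-second scrape interval and the ~1.3 bytes/sample figure from the table (both inputs are illustrative, not measured):

```python
# Back-of-envelope Mimir storage estimate from the benchmark figures above.
# Scrape interval and bytes/sample are illustrative assumptions.

def mimir_storage_gb_per_day(active_series: int,
                             scrape_interval_s: float = 15.0,
                             bytes_per_sample: float = 1.3) -> float:
    """Compressed TSDB bytes written per day, in GB."""
    samples_per_day = active_series * (86_400 / scrape_interval_s)
    return samples_per_day * bytes_per_sample / 1e9

# 1M active series at 15s intervals: 5,760 samples/series/day,
# so roughly 5.76B samples/day * 1.3 B/sample ≈ 7.5 GB/day compressed.
print(round(mimir_storage_gb_per_day(1_000_000), 1))  # → 7.5
```

At the quoted 1B+ active series scale, the same arithmetic lands near 7.5 TB/day of compressed TSDB blocks, which is why object storage economics dominate the cost discussion below.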
## Stack-Level Benchmarks

### End-to-End Latency (Ingestion → Queryable)

| Signal | Typical Latency | Notes |
|---|---|---|
| Metrics (recent) | < 15 seconds | Queried from ingester memory |
| Metrics (historical) | < 5 seconds (cold) | Object storage + cache |
| Logs (recent) | < 10 seconds | Queried from ingester memory |
| Logs (historical) | 1–30 seconds | Depends on time range and label selectivity |
| Traces (by ID) | < 500 ms | Bloom filter lookup |
| Traces (search) | 2–60 seconds | Object storage scan |
### Scale Limits (Documented Production)

| Organization | Scale | Source |
|---|---|---|
| Salesforce | 70M metrics/min, 120k alerts/min | GrafanaCON |
| Maersk | Enterprise-wide centralized observability | GrafanaCON case study |
| Grafana Cloud | Multi-billion active series globally | Grafana Labs |
## Cost Comparison: LGTM vs Alternatives

### At 1M Active Series + 100 GB/day Logs + 50M Spans/day

| Stack | Estimated Monthly Cost | Operational Burden | Vendor Lock-in |
|---|---|---|---|
| Self-hosted LGTM | $1,000–3,000 | High (4+ backends) | Low |
| Grafana Cloud Pro | $1,500–4,000 | Low (managed) | Low-Medium |
| SigNoz (self-hosted) | $500–1,500 | Medium (single binary) | Low |
| Datadog | $5,000–17,000 | Very Low (SaaS) | High |
| New Relic | $2,500–8,000 | Very Low (SaaS) | Medium |
| ELK + Prometheus + Jaeger | $2,000–6,000 | Very High (3 stacks) | Low |
Costs are rough estimates as of mid-2026; they vary significantly by cloud provider, retention, and configuration.

## Why LGTM Is Cheaper
- Object storage costs pennies — S3 Standard: ~$0.023/GB/month vs EBS gp3: ~$0.08/GB/month
- Label-only indexing (Loki) — 10–100x less storage than Elasticsearch full-text indexing
- No index (Tempo) — traces stored as Parquet on object storage, no expensive index cluster
- Compression — Loki achieves 10–20:1 compression on log chunks
- Open source — no per-host, per-user, or per-GB licensing fees
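The object-storage and compression bullets above can be made concrete with a quick monthly-cost calculation at the per-GB prices quoted. A hypothetical sketch; the retention and compression inputs are illustrative assumptions:

```python
# Monthly storage cost comparison at the per-GB prices quoted above.
# Retention and compression figures are illustrative assumptions.

S3_PER_GB = 0.023   # S3 Standard, $/GB/month (quoted above)
EBS_PER_GB = 0.08   # EBS gp3, $/GB/month (quoted above)

def monthly_storage_cost(raw_gb_per_day: float, retention_days: int,
                         compression_ratio: float, price_per_gb: float) -> float:
    """Cost of holding retention_days of compressed data at a flat $/GB rate."""
    stored_gb = raw_gb_per_day * retention_days / compression_ratio
    return stored_gb * price_per_gb

# 100 GB/day of raw logs, 30-day retention, 15:1 Loki chunk compression
# -> 200 GB stored.
s3 = monthly_storage_cost(100, 30, 15, S3_PER_GB)
ebs = monthly_storage_cost(100, 30, 15, EBS_PER_GB)
print(f"S3: ${s3:.2f}/mo  EBS gp3: ${ebs:.2f}/mo")  # → S3: $4.60/mo  EBS gp3: $16.00/mo
```

The per-GB gap compounds with Loki's 10–20:1 compression, which is why log storage is a rounding error in the LGTM estimates above compared with index-heavy stacks.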
## Caveats
- These benchmarks assume microservices mode for production loads
- Loki performance degrades severely with high label cardinality (> 50k active streams)
- Tempo TraceQL search is slower than indexed alternatives (Jaeger/Elasticsearch) but dramatically cheaper
- Object storage latency varies by cloud provider and region — always co-locate in the same AZ
- Memcached caching is essential for production query performance — without it, queries hit object storage directly
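The cardinality caveat above is worth quantifying: Loki creates one stream per unique label combination, so the worst-case active stream count is the product of each label's value count. A hypothetical sketch (the label names and counts are invented for illustration):

```python
# Estimate Loki's worst-case active stream count as the product of
# per-label value counts. Labels and counts here are hypothetical.
from math import prod

def max_streams(label_cardinalities: dict[str, int]) -> int:
    """Upper bound on active streams for a given label schema."""
    return prod(label_cardinalities.values())

# A restrained label set stays far below the ~50k danger zone noted above:
safe = max_streams({"app": 20, "env": 3, "level": 4})
# One high-cardinality label (e.g. a per-user ID) multiplies everything:
risky = max_streams({"app": 20, "env": 3, "level": 4, "user_id": 10_000})
print(safe, risky)  # → 240 2400000
```

This is why the usual guidance is to keep identifiers like user or request IDs in the log line (searchable via filters) rather than in labels.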
## Sources

| URL | Source Kind | Authority | Date |
|---|---|---|---|
| https://grafana.com/docs/mimir/latest/references/architecture/ | docs | primary | 2026-04-10 |
| https://grafana.com/docs/loki/latest/get-started/overview/ | docs | primary | 2026-04-10 |
| https://grafana.com/docs/tempo/latest/getting-started/ | docs | primary | 2026-04-10 |
| https://grafana.com/pricing/ | docs | primary | 2026-04-10 |
| https://grafana.com/about/events/grafanacon/ | conference | primary | 2026-04-10 |