
Benchmarks

Test Conditions

  • Grafana Version tested: v12.x (current stable line, 2026)
  • LGTM Stack Components: Mimir, Loki, Tempo (latest stable)
  • Date: April 2026
  • Note: Grafana itself is primarily a visualization layer. Performance is almost always bottlenecked by the underlying data source, not Grafana's rendering engine. The benchmarks below reflect both Grafana UI limits and backend throughput.

Grafana Server Performance

Dashboard Rendering

| Metric | Observation | Source |
|---|---|---|
| Panels per dashboard | No hard limit; practical limit ~25–30 before browser sluggishness | Grafana Docs |
| Recommended panel count | 8–12 (overview), 15–20 (detailed) | Community best practices |
| Data points per panel | Rendering degrades above ~10k points; use `maxDataPoints` to cap | Grafana Docs |
| Dashboard load time target | < 3 seconds at the 95th percentile | Industry SRE standard |
| Concurrent viewers | Grafana server itself handles hundreds; bottleneck is query load on backends | Grafana Docs |
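To see why the ~10k-point threshold is easy to hit, consider a back-of-the-envelope calculation. The 24-hour window, 15-second interval, and helper name below are illustrative assumptions, not figures from the benchmarks above:

```python
# Rough estimate of how many data points a time-series panel requests
# before any capping. Interval and window are hypothetical examples.
def points_per_panel(range_seconds: int, interval_seconds: int, series: int) -> int:
    """Raw data points a panel pulls: samples per series times series count."""
    return (range_seconds // interval_seconds) * series

# One series over 24h at 15s resolution:
single = points_per_panel(24 * 3600, 15, series=1)   # 5760 points

# Two series already exceed the ~10k rendering threshold:
double = points_per_panel(24 * 3600, 15, series=2)   # 11520 points

print(single, double)
```

This is why capping with `maxDataPoints` (which makes Grafana request a coarser step from the data source) matters even for modest dashboards.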

Query Performance Guidelines

| Query Type | Acceptable p99 | Concern Threshold |
|---|---|---|
| Simple PromQL (1 series) | < 200ms | > 500ms |
| Moderate PromQL (10–50 series) | < 1s | > 2s |
| Complex PromQL (100+ series, range) | < 5s | > 10s |
| LogQL (label-filtered) | < 2s | > 5s |
| LogQL (full scan, large window) | < 30s | > 60s |
| TraceQL (by trace ID) | < 500ms | > 2s |
| TraceQL (attribute search) | < 10s | > 30s |
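As a rough illustration of the three PromQL tiers, the hypothetical queries below (metric and label names are placeholders, not from the source) show how series count and range evaluation drive cost:

```promql
# Simple: one series, instant lookup
up{job="api"}

# Moderate: rate over a range window, aggregated across a few dozen series
sum by (instance) (rate(http_requests_total{job="api"}[5m]))

# Complex: hundreds of series, range evaluation, ratio of two aggregations
sum by (pod) (rate(http_requests_total{status=~"5.."}[5m]))
  / sum by (pod) (rate(http_requests_total[5m]))
```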

Mimir Benchmarks (Metrics)

Mimir is designed for hyperscale Prometheus metrics:

| Metric | Benchmark | Conditions |
|---|---|---|
| Active series | 1 billion+ | Documented by Grafana Labs |
| Ingestion rate | 30M+ samples/sec | Large-scale production deployments |
| Query throughput | Thousands of concurrent PromQL queries | With query-frontend sharding |
| Storage efficiency | 1.2–1.5 bytes per sample (compressed TSDB blocks on object storage) | With compaction |
| Ingester flush interval | 2 hours (default TSDB block size) | Configurable |
| Replication factor | 3 (default for durability) | Configurable |
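The 1.2–1.5 bytes/sample figure makes object-storage sizing easy to estimate. The sketch below assumes a 15s scrape interval and 13-month retention (both assumptions, not benchmark inputs):

```python
# Back-of-the-envelope Mimir object-storage estimate using the
# ~1.2-1.5 bytes/sample figure. Scrape interval and retention are assumed.
def tsdb_storage_bytes(active_series: int, scrape_interval_s: int,
                       retention_days: int, bytes_per_sample: float) -> float:
    samples_per_day = active_series * (86_400 / scrape_interval_s)
    return samples_per_day * retention_days * bytes_per_sample

# 1M active series, 15s scrape, ~13 months (395 days), 1.3 bytes/sample:
total = tsdb_storage_bytes(1_000_000, 15, 395, 1.3)
print(f"{total / 1e12:.2f} TB")  # roughly 3 TB of compacted blocks
```

Note this covers compacted block storage only; replication inflates ingester-side (not object-storage) footprint.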

Mimir Cost Efficiency

| Scale | Estimated Infra Cost (Self-Hosted) | Notes |
|---|---|---|
| 100k active series | ~$50–100/mo | Monolithic mode, minimal nodes |
| 1M active series | ~$200–500/mo | Microservices mode recommended |
| 10M active series | ~$1,000–3,000/mo | Full microservices, HA |
| 100M+ active series | $5,000–20,000+/mo | Enterprise-grade infra |

Loki Benchmarks (Logs)

| Metric | Benchmark | Conditions |
|---|---|---|
| Ingestion rate | 1 TB+/day | Documented in production at scale |
| Query performance | Label-filtered: sub-second; full scan: depends heavily on time range and volume | Label cardinality is the primary factor |
| Compression ratio | 10–20:1 (Snappy/GZIP on chunks) | Varies by log structure |
| Storage cost | Up to 90% cheaper than Elasticsearch for the same data volume | Due to label-only indexing |
| Max active streams | Configurable per tenant (default: 5,000) | Set via `max_global_streams_per_user` |
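The compression ratio translates directly into retained-storage estimates. The sketch below assumes 1 TB/day of raw ingest, 15:1 compression, and 30-day retention (all illustrative figures within the ranges above):

```python
# Rough Loki object-storage footprint from raw ingest volume and the
# 10-20:1 compression ratio. All inputs here are illustrative assumptions.
def loki_storage_gb(ingest_gb_per_day: float, compression_ratio: float,
                    retention_days: int) -> float:
    return ingest_gb_per_day / compression_ratio * retention_days

# 1 TB/day raw, 15:1 compression, 30-day retention:
print(loki_storage_gb(1000, 15, 30))  # ~2000 GB (~2 TB) retained
```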

Loki Cardinality Guidelines

| Label Strategy | Active Streams | Impact |
|---|---|---|
| Ideal (namespace, pod, job) | < 10k | Optimal performance |
| Moderate (+ container, node) | 10k–50k | Acceptable |
| High cardinality (+ request ID) | 50k–500k+ | Performance degrades, ingester memory spikes |

Critical: Never use user IDs, request IDs, or IP addresses as Loki labels. Use them in log content and filter with LogQL pipe expressions.
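In LogQL terms, the rule looks like this (label names and the request ID are hypothetical):

```logql
# Good: low-cardinality label selector; high-cardinality values are
# filtered out of log content with a pipe expression
{namespace="prod", job="api"} |= "request_id=abc123"

# Bad: request_id as a label creates one stream per request,
# exploding active stream count and ingester memory
{namespace="prod", job="api", request_id="abc123"}
```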

Tempo Benchmarks (Traces)

| Metric | Benchmark | Conditions |
|---|---|---|
| Ingestion rate | 100M+ spans/day | Production at Grafana Labs scale |
| Trace ID lookup | < 200ms typical | Direct trace ID queries |
| TraceQL search | Seconds to tens of seconds | Depends on time range and attribute selectivity |
| Storage cost | Significantly cheaper than Jaeger + Elasticsearch | No index required; object storage only |
| Parquet block size | Configurable, typically 100–500 MB | Larger blocks improve search, increase flush latency |
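The two query classes above look quite different in practice. A trace ID lookup goes straight to the block containing that ID, while an attribute search must scan Parquet blocks across the selected time range; the hypothetical query below (service name and attributes are placeholders) is the slower kind:

```traceql
# Attribute search: scans blocks, so narrow the time range and use
# selective attributes to stay near the lower end of the benchmark
{ resource.service.name = "checkout" && span.http.status_code >= 500 }
```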

Comparison: LGTM Cost vs Alternatives

| Stack | Metrics (1M series) | Logs (100 GB/day) | Traces (50M spans/day) | Total Estimated |
|---|---|---|---|---|
| Self-hosted LGTM | $200–500/mo | $300–800/mo | $200–500/mo | $700–1,800/mo |
| Grafana Cloud Pro | $500–1,000/mo | $500–1,500/mo | $300–800/mo | $1,300–3,300/mo |
| Datadog | $1,500–5,000/mo | $2,000–8,000/mo | $1,000–4,000/mo | $4,500–17,000/mo |
| New Relic (Full Platform) | $1,000–3,000/mo | $1,500–5,000/mo | included | $2,500–8,000/mo |

Costs are rough estimates as of mid-2026 and vary significantly by configuration, data volume, retention, and provider.

Caveats

  • Grafana UI performance depends heavily on the browser — Chrome performs best for large dashboards
  • Backend query performance is 90% determined by the data source, not Grafana
  • Loki and Tempo are optimized for object storage — running them on local disk undermines cost benefits
  • Mimir benchmarks assume proper recording rules for expensive queries
  • All cost estimates assume reasonable retention (15–30 days for logs/traces, 13 months for metrics)

Sources

| URL | Source Kind | Authority | Date |
|---|---|---|---|
| https://grafana.com/docs/grafana/latest/best-practices/ | docs | primary | 2026-04-10 |
| https://grafana.com/docs/mimir/latest/references/architecture/ | docs | primary | 2026-04-10 |
| https://grafana.com/docs/loki/latest/get-started/overview/ | docs | primary | 2026-04-10 |
| https://grafana.com/docs/tempo/latest/getting-started/ | docs | primary | 2026-04-10 |
| https://grafana.com/pricing/ | docs | primary | 2026-04-10 |