Questions

Answered

How does Monoscope differ from OpenObserve?

Both use S3-native storage, but Monoscope adds LLM-powered natural language querying and scheduled AI agents for anomaly detection. Monoscope's backend is Haskell; OpenObserve's is Rust. Monoscope uses its own TimeFusion TSDB; OpenObserve uses its own Parquet-based engine.

What is TimeFusion?

TimeFusion is a purpose-built, open-source (MIT-licensed) time-series database written in Rust. It uses Apache DataFusion as its query engine and Delta Lake for ACID transactions on S3, and it exposes the PostgreSQL wire protocol, so any Postgres client can query it. It has been benchmarked at 500K+ events/sec per instance.
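
Because TimeFusion speaks the PostgreSQL wire protocol, ordinary Postgres tooling should work against it. A minimal sketch using `psycopg2` as the client; the table name (`otel_logs`), column names, and DSN are illustrative assumptions, not TimeFusion's documented schema:

```python
# Hypothetical time-bucketed aggregation over an assumed `otel_logs` table.
QUERY = """
SELECT date_trunc('minute', timestamp) AS minute,
       count(*) AS events
FROM otel_logs
WHERE timestamp > now() - interval '1 hour'
GROUP BY minute
ORDER BY minute;
"""

def fetch_event_counts(dsn: str):
    """Run the aggregation against a TimeFusion endpoint over the Postgres protocol."""
    import psycopg2  # any Postgres driver works; psycopg2 is just one example
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            cur.execute(QUERY)
            return cur.fetchall()
```

The same query would work from `psql`, a BI tool, or any other Postgres-compatible client, which is the practical payoff of the wire-protocol choice.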

Can I use my own S3 bucket with Monoscope Cloud?

Yes. The BYOS (Bring Your Own Storage) plan at $199/month lets you point Monoscope Cloud at your own S3 bucket, giving unlimited data retention at no extra cost.

How does Monoscope handle multi-tenancy?

Projects provide tenant isolation. Each project has its own API key, S3 partitioning, and access controls. Multi-tenant workspace support is on the roadmap.
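
The per-project isolation described above could map onto object-store layout roughly like the sketch below. The prefix scheme is a hypothetical illustration, not Monoscope's documented layout:

```python
from datetime import datetime, timezone

def object_key(project_id: str, signal: str, ts: datetime) -> str:
    """Build an S3 key partitioned by project, signal type, and UTC day.

    Scoping every key under the project ID means bucket policies or
    IAM prefix conditions can enforce tenant isolation at the storage
    layer, independent of application-level API keys.
    """
    day = ts.astimezone(timezone.utc).strftime("%Y/%m/%d")
    return f"projects/{project_id}/{signal}/{day}/events.parquet"

key = object_key("proj_123", "logs", datetime(2025, 1, 15, tzinfo=timezone.utc))
# key == "projects/proj_123/logs/2025/01/15/events.parquet"
```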

Open

How does the Haskell backend handle production reliability?

Haskell is uncommon for production observability platforms. How does the team handle operational concerns like debugging, profiling, and incident response in a Haskell codebase?

What LLM provider powers the natural language query engine?

The documentation mentions LLM-powered queries but does not specify which provider or model. Is it OpenAI, Anthropic, or self-hosted? Can users configure the LLM backend?

How does TimeFusion handle schema evolution?

As OTel attributes change (new fields added, old fields deprecated), how does the Delta Lake schema adapt? Is there a migration mechanism or is it schema-on-read?
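
For context on what an answer might look like: Delta Lake's additive schema evolution ("schema merging") amounts to unioning new columns into the existing schema while rejecting type conflicts, and deprecated columns simply stop appearing in new data while remaining readable. A pure-Python sketch of that merge rule, not TimeFusion's actual implementation:

```python
def merge_schema(current: dict, incoming: dict) -> dict:
    """Union incoming fields into the current schema, Delta-Lake style.

    New fields are appended; existing fields must keep their type.
    Fields absent from incoming data are retained, so old rows stay
    queryable after a field is deprecated.
    """
    merged = dict(current)
    for field, dtype in incoming.items():
        if field in merged and merged[field] != dtype:
            raise TypeError(f"type conflict on {field!r}: {merged[field]} vs {dtype}")
        merged[field] = dtype
    return merged

schema = {"timestamp": "timestamp", "body": "string"}
schema = merge_schema(schema, {"timestamp": "timestamp", "trace_id": "string"})
# schema == {"timestamp": "timestamp", "body": "string", "trace_id": "string"}
```

Whether TimeFusion relies on this merge behavior, on schema-on-read, or on an explicit migration step is exactly what the question above is asking.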

What is the production readiness level at v0.5.0?

Being pre-1.0, what are the known limitations? Is there a stability guarantee or backwards compatibility commitment for the v0.x releases?

How does the AI agent scheduler interact with the LLM at scale?

If many users schedule AI agents that all trigger simultaneously, how is LLM API rate limiting and cost managed?
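
One common answer to this kind of fan-out is a shared token bucket in front of the LLM API, so simultaneous agent triggers are smoothed into a bounded request rate and deferred runs are retried later. A minimal sketch of that pattern, purely illustrative rather than Monoscope's actual scheduler:

```python
import time

class TokenBucket:
    """Cap the aggregate LLM request rate across all scheduled agents."""

    def __init__(self, rate: float, capacity: float, now=None):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self._now = now or time.monotonic  # injectable clock for testing
        self._last = self._now()

    def try_acquire(self, cost: float = 1.0) -> bool:
        """Spend `cost` tokens if available; otherwise the agent run is deferred."""
        t = self._now()
        self.tokens = min(self.capacity, self.tokens + (t - self._last) * self.rate)
        self._last = t
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

# Deterministic demo with an injected clock: a burst of two succeeds,
# the third is deferred, and a later trigger succeeds after refill.
clock = iter([0.0, 0.0, 0.0, 0.0, 10.0])
bucket = TokenBucket(rate=1.0, capacity=2.0, now=lambda: next(clock))
results = [bucket.try_acquire() for _ in range(4)]
# results == [True, True, False, True]
```

A per-tenant cost budget layered on top of the global bucket would address the cost-management half of the question; how Monoscope actually handles either remains open.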