Logs, metrics, and traces, plus platform intelligence.
Celeris is OpenTelemetry-native, exports to your existing providers, and adds an AI agent that understands the full stack to explain and act—fast.
Celeris AI sees deploys, configs, flags—not just telemetry
OpenTelemetry in. Any provider out.
Standards-first instrumentation. Route signals to your existing providers. Keep your stack, add platform intelligence.
Your Workloads
Auto-instrumented
Celeris OTel Gateway
Collector + Context
Destinations
Your existing stack
Auto-instrumentation Setup
Standards-first. No proprietary agents required.
observability:
  otel:
    endpoint: "${CELERIS_OTEL_ENDPOINT}"
    protocol: "grpc"
    auto_instrument: true
    service_name: "checkout-api"
→ Celeris auto-injects OTel SDKs at deploy time. Just follow the service.name naming convention.
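For comparison, this is the same pattern the upstream OpenTelemetry Operator uses on Kubernetes: a pod annotation opts a workload into SDK injection at admission time. A sketch of the upstream mechanism, not Celeris-specific syntax:

```yaml
# Upstream OpenTelemetry Operator equivalent: annotate the pod template
# and the operator injects the language SDK when the pod is created.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkout-api
spec:
  template:
    metadata:
      annotations:
        instrumentation.opentelemetry.io/inject-python: "true"
```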
See the system, not just charts.
Interactive topology that matches your actual application model. Understand dependencies, health, and changes at a glance.
API Gateway
Owner: Team Platform
SLO: 99.95%
Alerts are the start. Context and action are the finish.
When something breaks, Celeris AI explains the full picture and suggests safe actions—with approval workflows built in.
Alert Timeline
checkout-api p95 exceeded 500ms threshold
payments service error rate at 2.1%
Resolved • Duration: 12m
Impacted Services
Celeris AI Analysis
What Changed
What Broke
Slow queries on the orders_db.orders table are causing an N+1 pattern.
The new query path introduced in v42 lacks an index for the user_id filter.
Evidence
Recommendation
Add an index on orders(user_id, created_at), or roll back v42 to restore performance.
Actions
Revert checkout-api to v41
Turn off new-checkout-flow flag
Add 2 replicas to checkout-api
Enable 100% trace sampling for 1 hour
Why Celeris AI produces better answers
Because Celeris AI sees the full stack graph, not just raw telemetry:
→ AI responses are grounded in platform truth: the deploys, configs, flags, and ownership behind the signals.
Everything you need for day-to-day debugging.
Logs, metrics, and traces with correlation baked in. Jump between signals with context preserved.
Already using a provider? Celeris exports everything via OpenTelemetry.
SLOs that connect to owners and releases.
Define service level objectives that map to your application graph. Know who to alert and what changed.
SLO Builder
Team Commerce Platform
On-call: @commerce-oncall
Burn Rate Monitor
⚠ Fast burn detected (4.2x)
Alert Routing
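The burn-rate multiplier shown above is just the observed error rate divided by the SLO's error budget. A minimal sketch, assuming a request-based availability SLO (the function name and numbers are illustrative):

```python
def burn_rate(error_rate: float, slo_target: float) -> float:
    """How fast the error budget is being consumed, as a multiple of the
    sustainable rate. 1.0x means the budget lasts exactly the SLO window."""
    # Error budget is the allowed failure fraction, e.g. 0.0005 for 99.95%.
    budget = 1.0 - slo_target
    return error_rate / budget

# A 99.95% SLO with a 0.21% observed error rate burns budget at 4.2x.
print(round(burn_rate(0.0021, 0.9995), 1))  # → 4.2
```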
Measure impact like an experiment—using real signals.
Run experiments and analyze outcomes using the same telemetry you already collect. No separate analytics SDK required.
Experiment Setup
new-checkout-flow
2 variants • 50/50 allocation
Results
Treatment improves p95 latency by 8% but shows 12% higher DB cost due to new query pattern. Recommend: enable for EU region only where latency improvement has highest impact.
These are experiment insights, not full product analytics. For deeper analysis, integrate with your analytics provider.
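As an illustration of how a result like the latency comparison above falls out of raw telemetry, here is a nearest-rank p95 over per-variant latency samples. A sketch only; the sample values are hypothetical:

```python
import math

def p95(samples):
    """Nearest-rank 95th percentile of latency samples (ms)."""
    ranked = sorted(samples)
    rank = math.ceil(0.95 * len(ranked))  # nearest-rank method
    return ranked[rank - 1]

# Hypothetical per-request latencies (ms) for each variant.
control = [120, 130, 135, 140, 150, 155, 160, 180, 200, 500]
treatment = [115, 120, 125, 130, 140, 145, 150, 170, 190, 460]

improvement = (p95(control) - p95(treatment)) / p95(control)
print(f"p95 improvement: {improvement:.0%}")  # → p95 improvement: 8%
```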
Connect performance to cost.
See cost signals overlaid on your service graph. Know which endpoints drive spend.
This week's forecast
Driver: egress from checkout-api
Ask Celeris AI:
Works with your stack.
Export via OpenTelemetry to your existing providers. No lock-in, no migration required.
APM Providers
Datadog, New Relic, etc.
Log Aggregators
Splunk, Elastic, etc.
SIEM
Security logs export
Data Warehouses
Long-term analytics
OpenTelemetry Protocol (OTLP) — industry standard, no proprietary agents
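Routing signals onward is standard OpenTelemetry Collector configuration. A sketch assuming the contrib distribution's datadog and splunk_hec exporters; the endpoint and environment variable names are placeholders:

```yaml
receivers:
  otlp:
    protocols:
      grpc: {}

exporters:
  datadog:
    api:
      key: "${DD_API_KEY}"
  splunk_hec:
    token: "${SPLUNK_HEC_TOKEN}"
    endpoint: "https://splunk.example.com:8088/services/collector"

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [datadog]
    logs:
      receivers: [otlp]
      exporters: [splunk_hec]
```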
Bring your tools. Add platform intelligence.
OTel-native signals, export anywhere, AI-guided context and action.
Works with your existing observability stack. No migration required.