Compute • Services

Services that scale, secure, and heal themselves.

Deploy web services, workers, and functions with zero-trust defaults, multi-signal autoscaling, and built-in reliability — guided by Celeris AI.

Get Started
HTTP + gRPC · Scale-to-zero or min replicas · Zero trust by default · Multi-region + traffic control

Web Service

Always-on endpoints for HTTP/gRPC

Clients → orders-api → Response (GET /v1/orders, POST /v2/checkout)
42ms p95 · 320 RPS · 3 replicas
Scale policy: RPS + CPU
APIs + websites · gRPC internal services · Realtime endpoints

If it's user-facing traffic → Web Service.

Worker

Background processing driven by backlog

Queue → billing-worker → Side effects
Lag: 12,400 · Throughput: 820/s · 8 replicas
Scale policy: backlog + latency
Email + billing · ETL + indexing · Queue pipelines

If it's backlog-driven → Worker.

Function

Short-lived compute for bursts and hooks

Events → resize-image() → Result
Invocations: 2.1k/min · Avg duration: 120ms · Concurrency: 60
Scale policy: scale-to-zero (idle: 0 replicas)
Webhooks · Image transforms · Scheduled tasks

If it's short bursts → Function.


Web services handle live traffic

Always-on endpoints that respond to user requests in real time.

Workers drain backlog

Background processors that consume queues at their own pace.

Functions burst on events

Short-lived compute that spins up, executes, and disappears.

1. Deploy anything containerized

Bring your container image from any registry. Go, Node, Python, Java, Rust — if it runs in a container, it runs on Celeris.

Any container registry (GHCR, ECR, GCR, Docker Hub)
Build from source with integrated CI
Dockerfile, Buildpacks, or Nixpacks
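
In blueprint form (the same YAML the AI Deploy Assistant generates below), a prebuilt container needs only the image line; the build block for source builds is a hypothetical sketch, not a documented schema:

# Prebuilt image from any registry
image: "ghcr.io/acme/orders:1.7.2"

# Or build from source (illustrative fields)
build:
  repo: "github.com/acme/orders"
  builder: "buildpacks"         # or "dockerfile" / "nixpacks"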
2. Internal by default, internet when you choose

Services are private by default. Expose them to the internet explicitly via the gateway with automatic TLS and domain routing.

Private endpoints for internal services
Public with automatic TLS termination
Custom domains with wildcard support
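
As a sketch in the same blueprint format (the domain field is illustrative; services stay private unless you opt in):

ingress:
  public: true                   # private by default: no public ingress
  protocols: ["grpc", "http"]
  domain: "orders.example.com"   # hypothetical field; TLS is provisioned automatically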
3. Autoscale from real signals

Scale based on requests, CPU, memory, latency, Kafka lag, or custom metrics. Replicas adjust smoothly without jarring jumps.

RPS, CPU, memory, latency signals
Queue lag (Kafka, SQS, custom)
Per-region independent scaling
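
The scaling block from the blueprint below shows the shape: pick a signal and a target, and replicas follow. The worker variant is illustrative; kafka_lag is not a documented metric name.

scaling:
  min: 2
  max: 20
  metric: "rps"        # or "cpu", "memory", "latency"
  target: 100          # target value for the chosen metric

# Worker variant (hypothetical metric name)
scaling:
  metric: "kafka_lag"
  target: 1000         # scale out while lag exceeds the target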
4. Traffic control with revisions

Split traffic between revisions for safe rollouts. Use canary, blue/green, or weighted routing with automatic health gates.

Canary with automatic promotion
Traffic weights (10/90 → 50/50)
Safe rollout gates with error budgets
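
The rollout block from the blueprint below captures this directly: traffic steps up through canary weights and only advances while the gates hold.

rollout:
  strategy: "canary"
  steps: [1, 5, 25, 100]        # percent of traffic at each step
  gates: ["smoke", "p99<200ms"] # promotion halts if a gate fails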
5. Zero Trust service-to-service

Every service gets an identity. Calls between services are blocked by default until you create an explicit allow rule.

Identity-based mTLS authentication
Default deny between services
Authorize by service, route, method
6. Reliability built-in

Retries, timeouts, circuit breakers, outlier ejection, and failover are configured by default. Your services stay resilient.

Smart retries with exponential backoff
Circuit breakers prevent cascades
Health-based routing and failover
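
There is nothing to configure to get the defaults, but tuning them might look like this sketch; the reliability block and its field names are illustrative (the 50% threshold matches the AI Notes below):

reliability:                    # hypothetical block
  retries:
    maxAttempts: 3
    backoff: "exponential"
  timeout: "2s"
  circuitBreaker:
    errorRate: "50%"            # breaker trips at 50% errors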
7. Global by design

Deploy the same service to multiple regions. Requests route to the closest region. If one fails, traffic shifts automatically.

Active-active or primary-secondary
Locality-aware routing
Automatic cross-region failover
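
In the blueprint below, multi-region is just a list of regions; the topology field sketched here is an illustrative extension:

regions:
  - "us-east-1"
  - "eu-west-1"
topology: "active-active"       # hypothetical field; or "primary-secondary"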
AI Deploy Assistant

Tell Celeris what you want.
It builds the blueprint.

Describe your service in plain language. Celeris AI asks clarifying questions, simulates outcomes, and generates a deployment-ready configuration.

Celeris Agent

AI-powered provisioning assistant

Hi! I can help you provision services, workers, or functions. What would you like to deploy today?


Ready to deploy
service: "orders-api" type: "web-service" image: "ghcr.io/acme/orders:1.7.2" regions: - "us-east-1" - "eu-west-1" scaling: min: 2 max: 20 metric: "rps" target: 100 ingress: public: true protocols: ["grpc", "http"] rateLimit: "200/key/min" rollout: strategy: "canary" steps: [1, 5, 25, 100] gates: ["smoke", "p99<200ms"]

AI Notes

Active-active deployment recommended for your latency requirements. Circuit breaker will trip at 50% error rate. Estimated cost: $45-80/month depending on traffic.

Autoscaling Lab

Multi-signal scaling that feels controllable

Scale on requests, CPU, latency, queue lag, or custom metrics. See exactly how your service will respond to load.

Estimated cost/mo: ~$45 · Cold start risk: Low · Queue drain time: ~2s · SLO achievable: 99.9%
Zero Trust & Policy

Explicit connectivity, identity-based

Every service gets an identity. Traffic between services is blocked until you create an allow rule. Security becomes a product, not a warning.

Policy Rules

Define who can call what. Default: deny all.

allow: gateway → orders-api (all)
allow: orders-api → billing (GET /v2/*)
allow: billing → postgres (scoped creds)
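
The same three rules, expressed as a blueprint-style policy document (a sketch; the schema is illustrative):

policy:
  default: "deny"               # nothing talks to anything until allowed
  allow:
    - from: "gateway"
      to: "orders-api"          # all routes
    - from: "orders-api"
      to: "billing"
      routes: ["GET /v2/*"]
    - from: "billing"
      to: "postgres"
      credentials: "scoped"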

Default deny · Identity-based mTLS · Auto credential rotation · Full audit trail

Reliability Toolkit

Built-in resilience patterns

Retries, timeouts, circuit breakers, outlier ejection, and failover are configured by default, and each one protects your service from a different failure mode.

Global Services

Multi-region without complexity

Deploy the same service to multiple regions. Requests route to the closest region, and if one fails, traffic reroutes automatically.

Developer Experience

Config as code + PR safety + preview envs

Store service blueprints in Git. Every PR gets a preview environment. Risky changes require approval. Rollback is instant.

Terminal
$ celeris services create orders-api \
    --image ghcr.io/acme/orders:1.7.2 \
    --regions us-east-1,eu-west-1 \
    --min-replicas 2 \
    --public
Service orders-api created
Endpoint: orders-api.acme.celeris.io
Regions: us-east-1, eu-west-1
Revision: orders-api-00001

$ celeris deploy --canary 5%
Canary rollout started: 5% traffic to orders-api-00002
feat/update-billing-endpoint · Preview: pr-128

Preview deployed: pr-128.orders-api.preview.celeris.io

Tests passed (42/42)
Isolated policies applied
Approval required (risky change detected)
Observability & Export

Full-stack context for better AI help

OpenTelemetry native. Export logs, metrics, and traces to your existing tools. AI helps you triage incidents faster.

Datadog · Grafana Cloud · New Relic · Honeycomb · Elastic · Custom OTLP
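
Wiring an exporter might look like this sketch; the telemetry block and endpoint are illustrative:

telemetry:                      # hypothetical block
  logs: true
  metrics: true
  traces: true
  exporter: "otlp"              # OpenTelemetry-native
  endpoint: "https://otlp.example.com:4317"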

Summarize incident · Identify regression · Suggest rollback · Draft remediation PR

Run services like you built the platform yourself.

Autoscaling, zero trust, reliability, and global delivery — configured in minutes with Celeris AI.