Full-stack observability for AI agents. Trace reasoning chains, catch quality drift, attribute token costs, and detect PII exposure — before your customers do.
When an agent makes a bad decision, you have no idea why. When costs spike, there is no attribution. When quality drifts, nobody notices until users leave.
Everything you need to run AI agents in production with confidence.
Every agent decision is captured as a span. LLM calls, tool invocations, memory retrievals, and custom reasoning steps — all linked in a navigable tree.
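A rough sketch of the span model, assuming a hypothetical `lantern` Python SDK with a `span()` context manager (module and method names here are illustrative, not a published API):

```python
import lantern  # hypothetical SDK module

# Nested `with` blocks become parent/child spans in the trace tree.
with lantern.span("handle-request", kind="agent"):
    with lantern.span("plan", kind="reasoning") as plan:
        steps = ["search", "summarize"]          # stand-in agent logic
        plan.set_attribute("step_count", len(steps))
    with lantern.span("tool:search", kind="tool") as tool:
        tool.set_attribute("query", "observability for agents")
```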
Built-in scorers for relevance, toxicity, and latency. Register custom scorers for domain-specific checks. Baseline snapshots detect regressions automatically.
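Registering a domain-specific scorer might look like this (the `@lantern.scorer` decorator and the trace accessor are assumptions for illustration):

```python
import lantern  # hypothetical SDK module

# Hypothetical: runs against each completed trace, returns 0.0-1.0.
@lantern.scorer(name="cites-sources")
def cites_sources(trace) -> float:
    answer = trace.output_text or ""  # assumed accessor on the trace
    return 1.0 if "http" in answer else 0.0
```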
Scan every prompt and response for emails, phone numbers, SSNs, credit card numbers, and more. Flag or auto-redact before PII reaches your logs. SOC 2-ready.
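A sketch of what detection settings could look like (the `lantern.configure` call and its keys are illustrative assumptions):

```python
import lantern  # hypothetical SDK module

lantern.configure(
    pii={
        "detectors": ["email", "phone", "ssn", "credit_card"],
        "action": "redact",  # or "flag" to annotate without masking
    }
)
```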
Per-span cost tracking by model, agent, and workflow. See exactly where your budget goes — down to individual tool calls. No more surprise invoices.
Wrap your Anthropic or OpenAI client with one function call. Every API call is traced automatically — zero config, zero code changes to your agent logic.
Zero-code instrumentation. Point your agent's base URL at the Lantern proxy — every LLM call is traced automatically. Works with any language or framework.
Native SDKs for both Python and TypeScript with auto-instrumentation for Anthropic and OpenAI. Three lines of code to full visibility.
Configure Slack, PagerDuty, email, and webhook alert channels. Get notified when quality drops, costs spike, or agents regress. Test channels before going live.
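An alert rule sketch, assuming a hypothetical `lantern.alerts.create` API (the rule syntax and channel identifiers are illustrative):

```python
import lantern  # hypothetical SDK module

# Hypothetical: notify on-call when relevance drops below baseline.
lantern.alerts.create(
    name="relevance-regression",
    condition="relevance.p95 < 0.80 for 15m",  # illustrative rule syntax
    channels=["slack:#agent-ops", "pagerduty:agents-oncall"],
)
```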
Create teams, invite members, assign roles. Scope agent visibility per team so each group sees only what they need. Full RBAC with owner, admin, and member roles.
See which services, SDK versions, and exporters are sending traces. Filter by source, environment, or agent. Know exactly where your data comes from.
Add Lantern to your existing agent code in under a minute. No infrastructure changes required.
Auto-instrument your Anthropic or OpenAI client. Full trace capture with zero config.
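For example, in Python (the `lantern` module and its `init` / `instrument_anthropic` helpers are illustrative assumptions, not a published API):

```python
import lantern                  # hypothetical SDK module
from anthropic import Anthropic

lantern.init(api_key="lt_...")  # hypothetical: start the tracer
lantern.instrument_anthropic()  # hypothetical: patch the Anthropic client

client = Anthropic()            # every messages.create() is now traced
```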
Wrap your LLM client with one function call. Every API call is traced automatically.
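A sketch of the wrapper approach, assuming a hypothetical `lantern.wrap` helper (shown here with the real OpenAI Python client):

```python
import lantern  # hypothetical SDK module
from openai import OpenAI

# Hypothetical wrapper: returns the same client with tracing attached.
client = lantern.wrap(OpenAI())

# Call sites stay unchanged; this request is captured as a span.
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello, agent!"}],
)
```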
Point your base URL at the Lantern proxy. Works with any language — no code changes.
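With the proxy, the only change is the client's base URL (the proxy hostname below is illustrative; use the one from your Lantern deployment):

```python
from openai import OpenAI

# Route requests through the Lantern proxy instead of api.openai.com.
client = OpenAI(base_url="https://proxy.lantern.example/v1")

# The proxy forwards the call upstream and records the trace.
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello, agent!"}],
)
```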
Traces flow to the Lantern dashboard in real time. Click through reasoning chains, inspect spans, and track quality metrics.
No credit card required. Upgrade when your agents are in production.
Choose the plan that fits your compliance, scale, and support needs.
| Feature | Community | Team | Team+ | Enterprise |
|---|---|---|---|---|
| Monthly trace limit | Unlimited (self-hosted) | 1M | 5M | Unlimited |
| Agents | Unlimited | Unlimited | Unlimited | Unlimited |
| TypeScript SDK | ✓ | ✓ | ✓ | ✓ |
| Python SDK | ✓ | ✓ | ✓ | ✓ |
| Auto-instrumentation (Anthropic, OpenAI) | ✓ | ✓ | ✓ | ✓ |
| Dashboard (traces, metrics, sources) | ✓ | ✓ | ✓ | ✓ |
| Custom evaluation scorers | ✓ | ✓ | ✓ | ✓ |
| Self-hosted deployment | ✓ | ✓ | ✓ | ✓ |
| Managed cloud ingest | — | ✓ | ✓ | ✓ |
| PII detection + redaction | — | ✓ | ✓ | ✓ |
| Alerting (Slack, PagerDuty, webhooks) | — | ✓ | ✓ | ✓ |
| Team-scoped RBAC | — | ✓ | ✓ | ✓ |
| Cost forecasting + budgets | — | ✓ | ✓ | ✓ |
| Google + GitHub OAuth | — | ✓ | ✓ | ✓ |
| Quality scorecards + SLA | — | ✓ | ✓ | ✓ |
| Regression detection | — | ✓ | ✓ | ✓ |
| SOC 2 / HIPAA / GDPR audit export | — | — | — | ✓ |
| SSO / SAML (Okta, Azure AD) | — | — | — | ✓ |
| Magic Link email auth | — | — | — | ✓ |
| Custom trace retention | — | — | — | ✓ |
| LLM Proxy (zero-code tracing) | — | — | — | ✓ |
| Dedicated support + SLA | — | — | — | ✓ |
Deploy Lantern in under a minute. Self-host for free or start a managed cloud trial.