Now in Public Beta

Know what your agents are doing in production

Full-stack observability for AI agents. Trace reasoning chains, catch quality drift, attribute token costs, and detect PII exposure — before your customers do.

[Dashboard preview: Traces (24h) 12,847 · Success Rate 98.2% · Cost (24h) $14.32 — Recent traces: support-triage 480ms ok · data-pipeline 835ms ok · code-reviewer 51ms err · research-asst 418ms ok]
Trusted by engineering teams building with AI
The Problem

AI agents are black boxes in production

When an agent makes a bad decision, you have no idea why. When costs spike, there is no attribution. When quality drifts, nobody notices until users leave.

Without Lantern

  • Agent reasoning is invisible — you see inputs and outputs, nothing in between
  • No way to trace why an agent chose a specific tool or ignored context
  • Token costs are a monthly surprise on the invoice
  • Quality degrades silently after model updates or prompt changes
  • PII flows through prompts and responses with zero visibility
  • Debugging production issues means reading logs and guessing

With Lantern

  • Every reasoning step is a span — LLM calls, tool use, retrieval, custom logic
  • Click through the full decision tree to see why the agent did what it did
  • Token costs attributed per agent, per workflow, per span, per day
  • Automated quality scoring with baseline regression alerts
  • PII detection scans every prompt and response in real time
  • Traces link directly to the reasoning step that caused the issue
Platform Capabilities

Enterprise-grade agent observability

Everything you need to run AI agents in production with confidence.

🔍

Reasoning Chain Tracing

Every agent decision is captured as a span. LLM calls, tool invocations, memory retrievals, and custom reasoning steps — all linked in a navigable tree.

llm Classify ticket urgency 151ms
tool route_ticket 51ms
retr Similar past incidents 84ms
llm Generate action plan 201ms
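The span tree above can be sketched with a minimal data model. This is an illustration of the concept only — the `Span` class below is a stand-in, not Lantern's actual schema:

```python
from dataclasses import dataclass, field

# Illustrative span model -- a stand-in, not Lantern's actual schema.
@dataclass
class Span:
    kind: str          # "llm", "tool", "retrieval", ...
    name: str
    duration_ms: int
    children: list = field(default_factory=list)

# The four-step reasoning chain from the example above.
root = Span("agent", "support-triage", 0, [
    Span("llm", "Classify ticket urgency", 151),
    Span("tool", "route_ticket", 51),
    Span("retrieval", "Similar past incidents", 84),
    Span("llm", "Generate action plan", 201),
])

def total_ms(span: Span) -> int:
    """Sum durations across the whole subtree."""
    return span.duration_ms + sum(total_ms(c) for c in span.children)

print(total_ms(root))  # 151 + 51 + 84 + 201 = 487
```

Because every step is a node in one tree, walking from the root to any leaf reconstructs the full decision path.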

Automated Quality Scoring

Built-in scorers for relevance, toxicity, and latency. Register custom scorers for domain-specific checks. Baseline snapshots detect regressions automatically.

Relevance
94%
Toxicity
0.2%
P50
312ms
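Baseline regression detection reduces to comparing current scores against a snapshot. A minimal sketch — the scorer names and tolerance here are assumptions, not Lantern's actual API:

```python
# Illustrative regression check -- scorer names and tolerance are
# assumptions, not Lantern's actual API.
baseline = {"relevance": 0.94, "helpfulness": 0.91}
current  = {"relevance": 0.88, "helpfulness": 0.92}

def regressions(baseline: dict, current: dict, tolerance: float = 0.03) -> list:
    """Return scorers whose current value dropped more than
    `tolerance` below the baseline snapshot."""
    return [name for name, base in baseline.items()
            if current.get(name, 0.0) < base - tolerance]

print(regressions(baseline, current))  # ['relevance']
```

A drop beyond the tolerance fires an alert; scores within it are treated as normal variance.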
🔒

PII Detection

Scan every prompt and response for emails, phone numbers, SSNs, credit cards, and more. Flag or auto-redact before PII reaches your logs. SOC2-ready.

EMAIL j***@acme.com REDACTED
SSN ***-**-4589 REDACTED
PHONE (555) ***-**12 FLAGGED
CC **** **** **** 3782 REDACTED
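The redaction step can be sketched with two regexes. Lantern's detectors cover more entity types and edge cases; this only illustrates the flag-and-replace idea:

```python
import re

# Illustrative redaction pass -- a sketch of the idea, not
# Lantern's actual detectors.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each detected entity with a [TYPE REDACTED] marker."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(redact("Reach john@acme.com, SSN 123-45-4589"))
# Reach [EMAIL REDACTED], SSN [SSN REDACTED]
```

Running this before log write-out means raw PII never lands on disk.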
💰

Token Cost Attribution

Per-span cost tracking by model, agent, and workflow. See exactly where your budget goes — down to individual tool calls. No more surprise invoices.

GPT-4o
$8.42
Claude 3.5
$4.18
Embeddings
$1.72
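Per-span attribution is token counts times per-model rates. A sketch of the arithmetic — the rates below are made-up examples, not current provider pricing:

```python
# Illustrative per-span cost math -- the rates are made-up examples,
# not current provider pricing.
RATES_PER_1K = {  # (input, output) USD per 1K tokens
    "gpt-4o":     (0.0025, 0.0100),
    "claude-3.5": (0.0030, 0.0150),
}

def span_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Attribute a dollar cost to one span from its token counts."""
    in_rate, out_rate = RATES_PER_1K[model]
    return input_tokens / 1000 * in_rate + output_tokens / 1000 * out_rate

spans = [("gpt-4o", 800, 120), ("claude-3.5", 500, 90)]
total = sum(span_cost(*s) for s in spans)
print(f"${total:.5f}")  # ~ $0.00605 attributed across two spans
```

Summing span costs up the tree gives per-workflow and per-agent totals, so a daily figure like $14.32 decomposes all the way down to individual tool calls.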
🔌

Auto-Instrumentation

Wrap your Anthropic or OpenAI client with one function call. Every API call is traced automatically — zero config, zero code changes to your agent logic.

import { wrapAnthropic } from '@lantern-ai/sdk';

// One line — full trace capture
const client = wrapAnthropic(new Anthropic(), {
  projectId: 'my-agent'
});
🎯

LLM Proxy

Zero-code instrumentation. Point your agent's base URL at the Lantern proxy — every LLM call is traced automatically. Works with any language or framework.

# No code changes needed
export ANTHROPIC_BASE_URL=\
  https://proxy.lantern.ai/v1
💻

Python & TypeScript SDKs

Native SDKs for both Python and TypeScript with auto-instrumentation for Anthropic and OpenAI. Three lines of code to full visibility.

# Python
pip install lantern-ai

// TypeScript
npm install @lantern-ai/sdk
🔔

Alert Channels

Configure Slack, PagerDuty, email, and webhook alert channels. Get notified when quality drops, costs spike, or agents regress. Test channels before going live.
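A webhook channel ultimately posts a JSON body to an incoming-webhook URL. A sketch of what firing one looks like — the field names here are assumptions, not Lantern's actual alert schema:

```python
import json

# Illustrative alert payload -- field names are assumptions, not
# Lantern's actual webhook schema.
def build_alert(agent: str, metric: str, value: float, threshold: float) -> str:
    """Render a JSON body suitable for a Slack-style incoming webhook."""
    return json.dumps({
        "text": (
            f":rotating_light: {agent}: {metric} hit {value} "
            f"(threshold {threshold})"
        )
    })

payload = build_alert("support-triage", "error_rate", 0.042, 0.02)
print(payload)
```

The "test before going live" step sends exactly this kind of payload so you can verify the channel delivers before a real incident depends on it.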

👥

Team Management

Create teams, invite members, assign roles. Scope agent visibility per team so each group sees only what they need. Full RBAC with owner, admin, and member roles.

📈

Data Source Tracking

See which services, SDK versions, and exporters are sending traces. Filter by source, environment, or agent. Know exactly where your data comes from.

Integration

Three lines of code. Full visibility.

Add Lantern to your existing agent code in under a minute. No infrastructure changes required.

TypeScript SDK

Auto-instrument your Anthropic or OpenAI client. Full trace capture with zero config.

npm install @lantern-ai/sdk
Python SDK

Wrap your LLM client with one function call. Every API call is traced automatically.

pip install lantern-ai
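The mechanism behind "wrap your client with one function call" is method interception. A minimal sketch of that wrapper pattern using a stand-in client — not the real Lantern SDK:

```python
import time

# Stand-in client to show the wrapper pattern behind
# auto-instrumentation -- not the real Lantern SDK.
class FakeLLMClient:
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

captured_spans = []

def wrap(client):
    """Intercept `complete` so every call records a span."""
    original = client.complete
    def traced(prompt: str) -> str:
        start = time.perf_counter()
        result = original(prompt)
        captured_spans.append({
            "name": "complete",
            "duration_ms": (time.perf_counter() - start) * 1000,
        })
        return result
    client.complete = traced
    return client

client = wrap(FakeLLMClient())
client.complete("classify this ticket")
print(len(captured_spans))  # 1
```

Because only the client object changes, the agent logic that calls it stays untouched — which is what makes the integration a handful of lines.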

See everything your agents do

Traces flow to the Lantern dashboard in real time. Click through reasoning chains, inspect spans, and track quality metrics.

support-triage · production · 480ms · $0.0066 · success
data-pipeline · production · 835ms · $0.0048 · success
code-reviewer · staging · 51ms · $0.0000 · error
research-assistant · dev · 418ms · $0.0008 · success
Status: Success · Duration: 480ms · Tokens: 1,047 · Cost: $0.0066
Reasoning Chain (4 spans)
llm call Classify ticket urgency 151ms
tool call route_ticket 51ms
retrieval Similar past incidents 84ms
llm call Generate action plan 201ms
Pricing

Start free, scale with confidence

No credit card required. Upgrade when your agents are in production.

Community
Self-hosted, open source
$0 forever
MIT License
  • Full trace capture (SDK)
  • SQLite + Postgres storage
  • Dashboard (traces, metrics, sources)
  • Custom eval scorers
  • Unlimited agents
  • Unlimited traces (self-hosted)
  • Community support
View on GitHub
Team+
For high-volume agent workloads
$599 / month
Up to 5M traces/month
  • Everything in Team
  • 5x trace volume
  • Unlimited agents
  • 90-day trace retention
  • Priority email support
Contact Sales
Enterprise
For regulated, high-scale workloads
Custom
Unlimited traces
  • Everything in Team+
  • SOC2 / HIPAA / GDPR audit export
  • PagerDuty integration
  • SSO / SAML (Okta, Azure AD)
  • Magic Link email auth
  • Custom trace retention
  • LLM Proxy (zero-code tracing)
  • Dedicated support + SLA
Contact Sales
Feature Comparison

Full breakdown by tier

Choose the plan that fits your compliance, scale, and support needs.

Feature | Community | Team | Team+ | Enterprise
Monthly trace limit | Unlimited (self-hosted) | 1M | 5M | Unlimited
Agents | Unlimited | Unlimited | Unlimited | Unlimited
TypeScript SDK
Python SDK
Auto-instrumentation (Anthropic, OpenAI)
Dashboard (traces, metrics, sources)
Custom evaluation scorers
Self-hosted deployment
Managed cloud ingest
PII detection + redaction
Alerting (Slack, PagerDuty, webhooks)
Team-scoped RBAC
Cost forecasting + budgets
Google + GitHub OAuth
Quality scorecards + SLA
Regression detection
SOC2 / HIPAA / GDPR audit export
SSO / SAML (Okta, Azure AD)
Magic Link email auth
Custom trace retention
LLM Proxy (zero-code tracing)
Dedicated support + SLA

Stop guessing what your agents are doing

Deploy Lantern in under a minute. Self-host for free or start a managed cloud trial.

Start Free Trial
View Source on GitHub