v0.2 shipped with guardrails + live refresh

Debug AI agents locally, step by step.

AgentDbg is the local-first debugger for AI agents. Add @trace, run your workflow, and inspect a clean timeline of LLM calls, tool calls, errors, and loop warnings in minutes.

No cloud. No accounts. No telemetry. Everything stays on your machine.

LangChain / LangGraph · OpenAI Agents SDK · CrewAI · Framework-agnostic core

Built for development-time clarity, not observability overhead.

If observability answers aggregate production questions, AgentDbg answers a sharper one while you build: what exactly happened in this run, and where did it go wrong?

Agent failures are expensive when they are opaque.

Most teams still debug agents with print statements, scattered logs, and reruns that do not reproduce the same behavior. AgentDbg gives you one timeline with the full evidence chain.

"Why did it call that tool?"

Inspect tool arguments, results, status, and errors without stitching logs together manually.

"It worked yesterday."

Trace each run as a standalone artifact so regressions are easier to reason about and communicate.

"Why is it still running?"

Loop warnings and run guardrails expose runaway behavior before it turns into a budget surprise.

One run. One timeline. Clear evidence.

See every event in chronological order with expandable payloads and metadata. The viewer live-refreshes while your run is active.

What gets captured

Event          Evidence you get
LLM_CALL       Model, prompt, response, usage
TOOL_CALL      Tool name, args, result, status
ERROR          Exception + stack trace
LOOP_WARNING   Repeated-pattern warning + evidence IDs

Timeline preview

Real run, local viewer, no cloud dependency.

[Screenshot: AgentDbg timeline UI preview]

Stop runaway runs before they burn budget.

Guardrails can abort runs when loops are detected or thresholds are exceeded. AgentDbg preserves the timeline up to the abort point so you can fix the root cause immediately.

Guardrails in one decorator

@trace(stop_on_loop=True, max_llm_calls=50, max_duration_s=120)

Available controls: stop_on_loop, max_llm_calls, max_tool_calls, max_events, max_duration_s.

Guardrails in action

[Screenshot: guardrails and loop-abort behavior in the AgentDbg UI]

Use your current stack.

The core stays framework-agnostic. Integrations are added only where they reduce instrumentation friction for your team.

LangChain / LangGraph

Callback-based integration for LLM and tool lifecycle events.

OpenAI Agents SDK

TracingProcessor integration for generation spans, function calls, and handoffs.

CrewAI

Execution-hook adapter for runtime visibility with active run context.

From the blog

Practical playbooks with author bylines, table of contents, and interlinked guides for quick-start, troubleshooting, and production workflows.

Complete LangChain Debugging Workflow

Development-to-production checklist for tracing LangChain agents with clear failure evidence.

Customer Support Agent Tutorial

End-to-end implementation of a production-ready support agent with guardrails and escalation paths.

Debugging Pitfalls and Fixes

The most expensive failure patterns in agent systems and the fastest way to diagnose each one.

From install to first useful timeline in under 10 minutes.

Three commands. One decorator. Immediate evidence.

1. Install

   Install from PyPI.

   pip install agentdbg

2. Instrument

   Add @trace to your run entrypoint.

   from agentdbg import trace

3. Run + view

   Run your app, then open the viewer.

   agentdbg view

7 fixed event types for stable timelines
3 optional integrations shipped
0 cloud accounts required

Questions teams ask before they install

Is this a production monitoring tool?

No. AgentDbg is a development-time debugger for understanding single-run behavior, not a production monitoring dashboard.

Does trace data leave my machine?

No. Traces are stored locally, and redaction is applied by default before payloads are written.

Do I have to adopt a specific framework?

No. Core instrumentation is framework-agnostic. Integrations are optional adapters for existing stacks.

Can it stop runaway runs?

Yes. Guardrails can stop looping or threshold-breaching runs and preserve full evidence to the abort point.

Debug your next agent run with evidence, not guesswork.

Start local, stay fast, and ship with fewer surprises.