Monitor, debug, improve, and secure your AI agents in production. Catch hallucinations, latency spikes, and security issues—all with one line of code.
Complete visibility and security for every agent execution, from the first LLM call to the final response.
Capture every LLM call, tool use, and decision your agent makes. Understand exactly how your agent processes each request from start to finish.
Detect hallucinations, factual errors, and security issues before they reach users. Built-in detectors for grounding, consistency, and suspicious patterns.
Track success rates, latency percentiles, error patterns, and security anomalies. Get real-time alerts when issues arise in production.
Monitor token usage and API costs down to the penny. Set budgets, get alerts, and optimize spend.
Add a decorator to your agent function. That's it. Auto-instrumentation handles the rest.
Works with OpenAI, Anthropic, LangChain, LlamaIndex, and custom frameworks. One SDK for all your AI agents.
Add the MindReef SDK to your project with pip. Configure with your API key.
$ pip install mindreef
$ export MINDREEF_API_KEY=mr_live_xxx
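Prefer configuring in code? A minimal sketch is below; mindreef.init and its api_key parameter are assumptions for illustration rather than documented SDK surface, and the environment variable above is all the quickstart requires.

import os

import mindreef

# Hypothetical explicit initialization; here it simply reads the same env var.
mindreef.init(api_key=os.getenv("MINDREEF_API_KEY"))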
Add the @trace decorator to your agent function. LLM calls are captured automatically.
from openai import AsyncOpenAI

from mindreef import trace, patch_openai

patch_openai()  # Auto-instrument every OpenAI call
client = AsyncOpenAI()

@trace
async def my_agent(query: str) -> str:
    response = await client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": query}],
    )
    return response.choices[0].message.content
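To generate your first trace, call the decorated agent like any other async function. The snippet below is a minimal sketch that assumes the code above lives in the same module; the query string is just a placeholder.

import asyncio

# Invoke the traced agent once; the run shows up as a trace in the dashboard.
if __name__ == "__main__":
    print(asyncio.run(my_agent("Summarize today's support tickets")))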
View every trace in real-time. Drill into spans, catch security issues, detect hallucinations, and optimize performance.
Start small, scale as you grow.
For individuals and small projects
For growing teams and production apps
For large-scale deployments
Start monitoring in under 5 minutes. No credit card required.