Core Concepts
Understanding the fundamental building blocks of MindReef will help you get the most out of the platform. This guide covers the key concepts you'll encounter.
Traces
A trace represents a single end-to-end execution of your AI agent. When a user sends a query to your agent and receives a response, everything that happens in between is captured in one trace.
What's in a Trace?
A trace contains one or more spans arranged in a hierarchy. It has a unique ID, start/end timestamps, status (success, error, or in progress), and metadata like the agent name and user ID.
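As a concrete illustration, a trace record could look something like this, shown as a Python dict (the field names are assumptions for illustration, not MindReef's actual schema):

```python
# A hypothetical trace record. Field names are assumptions, not
# MindReef's actual schema; the real shape may differ.
trace = {
    "trace_id": "tr_9f8a2c",               # unique ID
    "agent": "support-bot",                # agent name (metadata)
    "user_id": "user_1234",                # user ID (metadata)
    "status": "success",                   # success | error | in_progress
    "started_at": "2024-05-01T12:00:00Z",  # start timestamp
    "ended_at": "2024-05-01T12:00:03Z",    # end timestamp
    "spans": [...],                        # one or more spans in a hierarchy
}
```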
Spans
A span represents a single operation within a trace. Spans are nested to show the parent-child relationships between operations. For example, an agent trace might contain spans for the following (see the sketch after this list):
- The top-level agent function
- An LLM call to understand the query
- A tool call to search a database
- Another LLM call to synthesize the response
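Here is a minimal sketch of that hierarchy in code, assuming a decorator-style SDK where @trace wraps the top-level agent function and @span wraps each nested operation (the import path and decorator usage are assumptions):

```python
from mindreef import trace, span  # import path is an assumption

@span
def understand_query(query: str) -> str:
    ...  # LLM call that interprets the user's query

@span
def search_database(intent: str) -> list:
    ...  # tool call that searches your database

@span
def synthesize_response(results: list) -> str:
    ...  # second LLM call that composes the final answer

@trace
def handle_query(query: str) -> str:
    # Top-level agent function: everything below lands in one trace,
    # with the three spans above nested under it.
    intent = understand_query(query)
    results = search_database(intent)
    return synthesize_response(results)
```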
Span Types
MindReef automatically categorizes spans by type (a tagging sketch follows this list):
- Agent: Top-level agent execution
- LLM: Calls to language models (OpenAI, Anthropic, etc.)
- Tool: Tool or function calls
- Retrieval: Vector database or search operations
- Custom: Any other operation you want to track
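Automatic categorization covers the common cases; if you want to label an operation yourself, it might look like this (the `type` keyword is an assumption about the decorator's signature):

```python
from mindreef import span  # import path is an assumption

@span(type="retrieval")  # hypothetical keyword for setting the span type explicitly
def fetch_similar_docs(query_embedding: list[float]) -> list[str]:
    ...  # query your vector store here
```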
Context Propagation
MindReef uses Python's contextvars to automatically propagate trace context through your code. This means spans created in nested function calls, async operations, and even thread pools are correctly associated with their parent trace.
How It Works
When you use the @trace or @span decorators, MindReef stores the current trace context in a context variable. Child spans automatically inherit this context, creating the correct hierarchy without any manual ID passing.
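The underlying mechanism is the standard-library contextvars module. Here is a minimal standalone sketch of the pattern (a toy illustration, not MindReef's actual implementation):

```python
import contextvars
import functools
import uuid

# The currently active span ID; None when no span is open.
_current_span = contextvars.ContextVar("current_span", default=None)

def span(func):
    """Toy decorator showing the propagation pattern (not MindReef's code)."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        parent_id = _current_span.get()     # whichever span is open right now
        span_id = uuid.uuid4().hex[:8]
        print(f"span {span_id} parent={parent_id} fn={func.__name__}")
        token = _current_span.set(span_id)  # children now inherit this span
        try:
            return func(*args, **kwargs)
        finally:
            _current_span.reset(token)      # restore the previous parent
    return wrapper

@span
def child():
    pass

@span
def parent():
    child()  # linked to parent automatically; no IDs passed by hand

parent()  # prints parent=None for parent(), then the parent's ID for child()
```

Because asyncio uses contextvars for task-local state, the same pattern carries across await boundaries without any extra bookkeeping.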
Agents
In MindReef, an agent is a logical grouping of related traces. Each agent has its own dashboard showing aggregated metrics, recent traces, and alerts. You typically have one agent per distinct AI capability in your application.
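For instance, a product with a support chatbot and a document summarizer would typically be set up as two agents. With a decorator-style SDK that might look like this (the `agent` keyword is an assumption):

```python
from mindreef import trace  # import path is an assumption

@trace(agent="support-bot")     # traces roll up to the support-bot dashboard
def answer_ticket(question: str) -> str:
    ...

@trace(agent="doc-summarizer")  # a separate agent with its own metrics and alerts
def summarize(document: str) -> str:
    ...
```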
Events
Events are discrete occurrences within a span that you want to log. Unlike spans, events don't have a duration. They're useful for recording things like user feedback, errors, or significant decisions within a span.
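Assuming the SDK exposes a handle to the active span, recording an event might look like this (`current_span` and `add_event` are hypothetical names):

```python
from mindreef import current_span  # hypothetical accessor for the active span

def record_feedback(rating: int) -> None:
    # Events are point-in-time: a name plus attributes, no duration.
    current_span().add_event("user_feedback", {"rating": rating})
```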
Hallucination Scores
MindReef automatically analyzes LLM outputs and assigns hallucination scores based on how well the output is grounded in the provided context. Scores range from 0 (completely ungrounded) to 1 (fully supported by context).
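For intuition about what the scale measures, here is a deliberately naive toy function (illustrative only, not MindReef's scoring model): it counts the fraction of output sentences whose words all appear in the context.

```python
def naive_grounding_score(output_sentences: list[str], context: str) -> float:
    """Fraction of output sentences fully covered by the context's vocabulary.

    A crude stand-in for real grounding analysis, for intuition only.
    """
    context_words = set(context.lower().split())
    supported = sum(
        1 for s in output_sentences
        if set(s.lower().split()) <= context_words
    )
    return supported / len(output_sentences) if output_sentences else 0.0

# Prints 1.0: both output sentences are fully supported by the context.
print(naive_grounding_score(
    ["the sky is blue", "water is wet"],
    "the sky is blue and water is wet",
))
```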
Next Steps
Now that you understand the core concepts, learn how to:
- Add custom spans for granular instrumentation
- Configure hallucination detection thresholds
- Set up alerts based on metrics