Integrations
MindReef provides auto-instrumentation for popular LLM providers and frameworks. Enable with a single function call to automatically capture all LLM interactions.
OpenAI
Automatically capture all OpenAI API calls including chat completions, embeddings, and function calls.
import openai

from mindreef.integrations import patch_openai

# Call once at startup
patch_openai()

# All OpenAI calls are now automatically traced
response = openai.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)
Captured data:
- Model name and parameters
- Input messages and system prompts
- Output content and finish reason
- Token counts (prompt, completion, total)
- Latency and cost estimation
- Function/tool calls and results
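For example, a request that defines a tool is captured along with the tool calls the model returns. The get_weather tool below is a placeholder used only to illustrate the shape of the captured data:
import openai

from mindreef.integrations import patch_openai

patch_openai()

# The tool definition and any tool_calls the model returns are part of
# the captured trace (get_weather is a placeholder tool)
response = openai.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
)
print(response.choices[0].message.tool_calls)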
Anthropic
Capture Claude API calls with full message history and tool use tracking.
import anthropic

from mindreef.integrations import patch_anthropic

# Call once at startup
patch_anthropic()

# All Anthropic calls are now automatically traced
client = anthropic.Anthropic()
response = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello!"}],
)
Captured data:
- Model name and parameters
- Input messages with content blocks
- Output content and stop reason
- Token usage statistics
- Tool use requests and results
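Tool use is captured the same way. A sketch using Anthropic's tools parameter, again with a placeholder get_weather tool:
import anthropic

from mindreef.integrations import patch_anthropic

patch_anthropic()

client = anthropic.Anthropic()
# The tool definition and any tool_use content blocks in the response
# are part of the captured trace (get_weather is a placeholder tool)
response = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=[{
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "input_schema": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }],
)
print(response.stop_reason)  # "tool_use" when the model invokes the tool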
Streaming Support
Both integrations fully support streaming responses. Token counts and content are accumulated and recorded when the stream completes:
import openai

from mindreef.integrations import patch_openai

patch_openai()

# Streaming is automatically handled
stream = openai.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write a story"}],
    stream=True,
)
for chunk in stream:
    # delta.content can be None on some chunks, so guard the print
    print(chunk.choices[0].delta.content or "", end="")
# Full response is captured when the stream completes
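MindReef records token counts for you when the stream finishes. Independently of that, the OpenAI API can also report usage inside the stream itself if you pass stream_options={"include_usage": True}; in that case the final chunk carries usage and has an empty choices list:
stream = openai.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write a story"}],
    stream=True,
    stream_options={"include_usage": True},  # final chunk reports usage
)
for chunk in stream:
    if chunk.choices:
        print(chunk.choices[0].delta.content or "", end="")
    elif chunk.usage:
        # The usage-only final chunk has no choices
        print(f"\n[total tokens: {chunk.usage.total_tokens}]")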
Async Support
Async clients are fully supported with the same integration:
from openai import AsyncOpenAI

from mindreef.integrations import patch_openai

patch_openai()
client = AsyncOpenAI()

async def ask(question: str):
    response = await client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content
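The coroutine can then be driven from any synchronous entry point:
import asyncio

# Run the traced async call; the awaited request is captured as usual
print(asyncio.run(ask("Hello!")))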
Disabling Auto-Instrumentation
You can selectively disable instrumentation if needed:
from mindreef.integrations import patch_openai, unpatch_openai
# Enable
patch_openai()
# Disable
unpatch_openai()
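If you only want tracing around a particular block of code, the two calls compose into a simple context manager. The helper below is a sketch, not part of the MindReef API, and it assumes patch_openai() and unpatch_openai() are safe to call repeatedly:
from contextlib import contextmanager

from mindreef.integrations import patch_openai, unpatch_openai

@contextmanager
def traced_openai():
    # Hypothetical helper: instrument OpenAI only inside the with-block
    patch_openai()
    try:
        yield
    finally:
        unpatch_openai()

with traced_openai():
    ...  # calls here are traced; calls outside are not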
Coming Soon
We're actively working on integrations for:
- LangChain
- LlamaIndex
- CrewAI
- Cohere
- Google Vertex AI
Contact us if you need a specific integration.