# Anthropic Integration
Integrate Brokle with Anthropic's Claude models to capture traces, monitor performance, and track costs across all your Claude API calls.
## Supported Features
| Feature | Supported | Notes |
|---|---|---|
| Messages API | ✅ | Full support |
| Streaming | ✅ | With TTFT metrics |
| Tool Use | ✅ | Function calls traced |
| Vision | ✅ | Image inputs supported |
| System Prompts | ✅ | Captured in traces |
| Token Counting | ✅ | Input/output tokens |
| Cost Tracking | ✅ | Automatic calculation |
## Quick Start

### Install Dependencies

```bash
# Python
pip install brokle anthropic
```

```bash
# JavaScript / TypeScript
npm install brokle brokle-anthropic @anthropic-ai/sdk
```

### Wrap the Client
```python
from brokle import Brokle, wrap_anthropic
import anthropic

# Initialize Brokle
brokle = Brokle(api_key="bk_...")

# Wrap the Anthropic client
client = wrap_anthropic(anthropic.Anthropic(), brokle=brokle)
```

```typescript
import { Brokle } from 'brokle';
import { wrapAnthropic } from 'brokle-anthropic';
import Anthropic from '@anthropic-ai/sdk';

// Initialize Brokle
const brokle = new Brokle({ apiKey: 'bk_...' });

// Wrap the Anthropic client
const client = wrapAnthropic(new Anthropic(), { brokle });
```

### Make Traced Calls
```python
# All calls are automatically traced
message = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "What is AI observability?"}
    ]
)
print(message.content[0].text)

# Ensure traces are sent
brokle.flush()
```

```typescript
// All calls are automatically traced
const message = await client.messages.create({
  model: 'claude-3-opus-20240229',
  max_tokens: 1024,
  messages: [
    { role: 'user', content: 'What is AI observability?' }
  ]
});
console.log(message.content[0].text);

// Ensure traces are sent
await brokle.shutdown();
```

## Model Support
### Claude 3 Family
| Model | Model ID | Context | Best For |
|---|---|---|---|
| Opus | claude-3-opus-20240229 | 200K | Complex tasks, analysis |
| Sonnet | claude-3-sonnet-20240229 | 200K | Balanced performance |
| Haiku | claude-3-haiku-20240307 | 200K | Fast, simple tasks |
### Claude 3.5 Family
| Model | Model ID | Context | Best For |
|---|---|---|---|
| Sonnet | claude-3-5-sonnet-20241022 | 200K | Best overall performance |
## Streaming
Streaming is fully supported with time-to-first-token (TTFT) metrics:
```python
# Streaming with automatic tracing
with client.messages.stream(
    model="claude-3-sonnet-20240229",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Write a haiku about coding"}
    ]
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)

    # Get the final message
    message = stream.get_final_message()
```

```typescript
// Streaming with automatic tracing
const stream = client.messages.stream({
  model: 'claude-3-sonnet-20240229',
  max_tokens: 1024,
  messages: [
    { role: 'user', content: 'Write a haiku about coding' }
  ]
});

for await (const event of stream) {
  if (event.type === 'content_block_delta' && event.delta.type === 'text_delta') {
    process.stdout.write(event.delta.text);
  }
}

// Get the final message
const message = await stream.finalMessage();
```

Streaming traces capture:
| Metric | Description |
|---|---|
| `time_to_first_token` | Time until the first chunk is received |
| `streaming_duration` | Total time across all chunks |
| `chunks_count` | Number of streaming events |
| `aggregated_output` | Complete response text |
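For intuition, `time_to_first_token` is the delay between sending the request and receiving the first content chunk. A minimal sketch measuring the same quantity client-side, continuing with the wrapped `client` from above (the local variable names here are illustrative, not Brokle fields, and `streaming_duration` below is approximated as total elapsed time):

```python
import time

start = time.perf_counter()
ttft = None  # local illustration only, not a Brokle API

with client.messages.stream(
    model="claude-3-sonnet-20240229",
    max_tokens=256,
    messages=[{"role": "user", "content": "Write a haiku about coding"}],
) as stream:
    for text in stream.text_stream:
        if ttft is None:
            # Delay from request start to the first text chunk
            ttft = time.perf_counter() - start
        print(text, end="", flush=True)

print(f"\ntime_to_first_token: {ttft:.3f}s")
print(f"streaming_duration: {time.perf_counter() - start:.3f}s")
```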
## Tool Use
Claude's function calling is automatically traced:
```python
tools = [
    {
        "name": "get_weather",
        "description": "Get current weather for a location",
        "input_schema": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "City name"
                }
            },
            "required": ["location"]
        }
    }
]

message = client.messages.create(
    model="claude-3-sonnet-20240229",
    max_tokens=1024,
    tools=tools,
    messages=[
        {"role": "user", "content": "What's the weather in Paris?"}
    ]
)

# Tool use is captured in the trace
for block in message.content:
    if block.type == "tool_use":
        print(f"Tool: {block.name}")
        print(f"Input: {block.input}")
```

```typescript
const tools = [
  {
    name: 'get_weather',
    description: 'Get current weather for a location',
    input_schema: {
      type: 'object',
      properties: {
        location: {
          type: 'string',
          description: 'City name'
        }
      },
      required: ['location']
    }
  }
];

const message = await client.messages.create({
  model: 'claude-3-sonnet-20240229',
  max_tokens: 1024,
  tools,
  messages: [
    { role: 'user', content: "What's the weather in Paris?" }
  ]
});

// Tool use is captured in the trace
message.content.forEach(block => {
  if (block.type === 'tool_use') {
    console.log('Tool:', block.name);
    console.log('Input:', block.input);
  }
});
```

Tool traces include:
- Tool definitions
- Tool call inputs
- Tool result handling (see the sketch below)
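Claude only returns the tool call; your code runs the tool and reports the result back. A hedged sketch of that round-trip, continuing the Python example above (`lookup_weather` is a hypothetical helper; the `tool_result` content block is Anthropic's standard pattern):

```python
# Find the tool call Claude made
tool_use = next(b for b in message.content if b.type == "tool_use")

# Run the tool (hypothetical helper returning a string)
weather = lookup_weather(tool_use.input["location"])

# Send the result back; the follow-up call is traced like any other
followup = client.messages.create(
    model="claude-3-sonnet-20240229",
    max_tokens=1024,
    tools=tools,
    messages=[
        {"role": "user", "content": "What's the weather in Paris?"},
        {"role": "assistant", "content": message.content},
        {
            "role": "user",
            "content": [
                {
                    "type": "tool_result",
                    "tool_use_id": tool_use.id,
                    "content": weather,
                }
            ],
        },
    ],
)
print(followup.content[0].text)
```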
## Vision
Image inputs are supported:
```python
import base64

# From file
with open("image.png", "rb") as f:
    image_data = base64.standard_b64encode(f.read()).decode("utf-8")

message = client.messages.create(
    model="claude-3-sonnet-20240229",
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "image",
                    "source": {
                        "type": "base64",
                        "media_type": "image/png",
                        "data": image_data
                    }
                },
                {
                    "type": "text",
                    "text": "What's in this image?"
                }
            ]
        }
    ]
)
```

```typescript
import fs from 'fs';

const imageData = fs.readFileSync('image.png').toString('base64');

const message = await client.messages.create({
  model: 'claude-3-sonnet-20240229',
  max_tokens: 1024,
  messages: [
    {
      role: 'user',
      content: [
        {
          type: 'image',
          source: {
            type: 'base64',
            media_type: 'image/png',
            data: imageData
          }
        },
        {
          type: 'text',
          text: "What's in this image?"
        }
      ]
    }
  ]
});
```

Image data is stored in traces by default. Enable masking if you don't want to store image content.
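A minimal sketch of one way to do that, reusing the `capture_input` option from the Configuration Options section below (assuming it disables content capture while leaving token, latency, and cost metrics intact):

```python
from brokle import Brokle, wrap_anthropic
import anthropic

brokle = Brokle(api_key="bk_...")

# Don't store message content (including base64 image data) in traces;
# assumption: metrics such as tokens and latency are still recorded
client = wrap_anthropic(
    anthropic.Anthropic(),
    brokle=brokle,
    capture_input=False,
)
```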
## System Prompts
System prompts are captured separately for easy analysis:
```python
message = client.messages.create(
    model="claude-3-sonnet-20240229",
    max_tokens=1024,
    system="You are a helpful AI assistant specializing in Python.",
    messages=[
        {"role": "user", "content": "How do I read a file?"}
    ]
)
```

The trace captures:
- System prompt as a separate field
- User messages
- Assistant response
- Token breakdown (system vs user vs output; see the note below)
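For reference, the aggregate numbers come straight from the SDK's `usage` object on each response; the finer system-vs-user split is derived in the trace, since Anthropic's API does not return it as a separate field:

```python
# Aggregate token counts reported by the Anthropic API on every response
print(message.usage.input_tokens)   # covers system prompt + user messages
print(message.usage.output_tokens)  # assistant output
```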
## Multi-Turn Conversations
Track conversation context:
```python
# Create a session for the conversation
with brokle.start_as_current_span(name="conversation") as session:
    session.update_trace(
        user_id="user_123",
        session_id="session_456"
    )

    messages = []

    # First turn
    messages.append({"role": "user", "content": "Hi, I'm learning Python"})
    response = client.messages.create(
        model="claude-3-sonnet-20240229",
        max_tokens=1024,
        messages=messages
    )
    messages.append({"role": "assistant", "content": response.content[0].text})

    # Second turn
    messages.append({"role": "user", "content": "How do I handle exceptions?"})
    response = client.messages.create(
        model="claude-3-sonnet-20240229",
        max_tokens=1024,
        messages=messages
    )
```

```typescript
// Create a session for the conversation
const span = brokle.startSpan({
  name: 'conversation',
  attributes: {
    userId: 'user_123',
    sessionId: 'session_456'
  }
});

const messages = [];

// First turn
messages.push({ role: 'user', content: "Hi, I'm learning Python" });
let response = await client.messages.create({
  model: 'claude-3-sonnet-20240229',
  max_tokens: 1024,
  messages
});
messages.push({ role: 'assistant', content: response.content[0].text });

// Second turn
messages.push({ role: 'user', content: 'How do I handle exceptions?' });
response = await client.messages.create({
  model: 'claude-3-sonnet-20240229',
  max_tokens: 1024,
  messages
});

span.end();
```

## Cost Tracking
Brokle automatically calculates costs based on Anthropic's pricing:
| Model | Input (per 1M tokens) | Output (per 1M tokens) |
|---|---|---|
| Claude 3 Opus | $15.00 | $75.00 |
| Claude 3.5 Sonnet | $3.00 | $15.00 |
| Claude 3 Sonnet | $3.00 | $15.00 |
| Claude 3 Haiku | $0.25 | $1.25 |
Costs are tracked per trace and aggregated in the dashboard.
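To sanity-check a dashboard figure, the same arithmetic can be reproduced by hand from a response's `usage` object (prices hard-coded from the table above; Claude 3.5 Sonnet shown):

```python
# Per-1M-token prices from the table above (Claude 3.5 Sonnet)
INPUT_PRICE_PER_M = 3.00
OUTPUT_PRICE_PER_M = 15.00

usage = message.usage  # returned on every Messages API response
cost = (
    usage.input_tokens * INPUT_PRICE_PER_M
    + usage.output_tokens * OUTPUT_PRICE_PER_M
) / 1_000_000
print(f"Estimated cost: ${cost:.6f}")
```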
## Error Handling
Errors are automatically captured:
```python
try:
    message = client.messages.create(
        model="claude-3-sonnet-20240229",
        max_tokens=1024,
        messages=[{"role": "user", "content": "Hello"}]
    )
except anthropic.RateLimitError as e:
    # Error captured in trace:
    # - status: "error"
    # - error_type: "RateLimitError"
    # - error_message: "Rate limit exceeded"
    print(f"Rate limited: {e}")
except anthropic.APIError as e:
    print(f"API error: {e}")
```

## Async Support
Full async support in Python:
```python
from brokle import AsyncBrokle, wrap_anthropic
import anthropic

brokle = AsyncBrokle(api_key="bk_...")
client = wrap_anthropic(anthropic.AsyncAnthropic(), brokle=brokle)

async def chat():
    message = await client.messages.create(
        model="claude-3-sonnet-20240229",
        max_tokens=1024,
        messages=[{"role": "user", "content": "Hello!"}]
    )
    return message.content[0].text

# With async streaming
async def stream_chat():
    async with client.messages.stream(
        model="claude-3-sonnet-20240229",
        max_tokens=1024,
        messages=[{"role": "user", "content": "Hello!"}]
    ) as stream:
        async for text in stream.text_stream:
            print(text, end="")
```

## Configuration Options
```python
from brokle import Brokle, wrap_anthropic
import anthropic

brokle = Brokle(
    api_key="bk_...",
    environment="production",
    sample_rate=1.0,
    debug=False
)

client = wrap_anthropic(
    anthropic.Anthropic(),
    brokle=brokle,
    # Integration-specific options
    capture_input=True,    # Capture message content
    capture_output=True,   # Capture response content
    capture_system=True,   # Capture system prompts
)
```

## Best Practices
### 1. Add User Context

```python
with brokle.start_as_current_span(name="claude_chat") as span:
    span.update_trace(user_id="user_123")
    message = client.messages.create(...)
```

### 2. Use Descriptive Names

```python
with brokle.start_as_current_span(name="customer_support_response") as span:
    span.set_attribute("ticket_id", ticket.id)
    message = client.messages.create(...)
```

### 3. Handle Shutdown

```python
import atexit

atexit.register(brokle.shutdown)
```

## Troubleshooting
### Missing Traces
- Verify both the Brokle and Anthropic API keys are set
- Check that `brokle.flush()` is called before the process exits
- Enable debug logging: `Brokle(debug=True)`
### Token Count Mismatch
Anthropic returns exact token counts in each response's `usage` object. If the numbers look mismatched, make sure you're running a current version of the official SDK.
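If you want to verify the numbers yourself, recent versions of the Python SDK can pre-count input tokens for comparison against what the API reports back (a sketch; `count_tokens` may not be available on older SDK versions):

```python
msgs = [{"role": "user", "content": "Hello"}]

# Pre-count the input tokens...
count = client.messages.count_tokens(
    model="claude-3-5-sonnet-20241022",
    messages=msgs,
)

# ...then compare against what the API reports on a real call
message = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=64,
    messages=msgs,
)
print(count.input_tokens, message.usage.input_tokens)
```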
### Streaming Not Captured
Use the official streaming methods:

- Python: `client.messages.stream()`
- JavaScript: `client.messages.stream()`