Core Concepts
Understand the fundamental concepts behind Brokle's AI observability platform
Before diving into implementation, it helps to understand the foundational concepts that power Brokle. These concepts form the building blocks for monitoring, evaluating, and improving your AI applications.
The Observability Stack
Brokle provides a complete observability stack for AI applications, built around the concepts below.
Key Concepts
- Traces: end-to-end request journeys through your AI application
- Spans: individual operations within a trace, such as LLM calls, retrievals, and tool invocations
- Sessions: groups of related traces for conversation and user-journey tracking
- Evaluations: quality scoring and feedback for AI outputs
- Cost Analytics: spending tracked across models, providers, and features
How They Connect
Understanding how these concepts relate helps you design effective observability (a code sketch of the hierarchy follows the list below):
Hierarchy
- Sessions group multiple traces from the same user or conversation
- Traces capture complete request-response cycles
- Spans represent individual operations within a trace
- Evaluations attach quality metrics to spans or traces
- Cost Analytics aggregate spending across all levels
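The sketch below shows one way this hierarchy could map onto code with manual instrumentation. Only `Brokle(api_key=...)` appears elsewhere on this page; the `start_session`, `start_trace`, `span`, and `score` helpers are illustrative names for the purpose of this sketch, not confirmed SDK methods.

```python
from brokle import Brokle

client = Brokle(api_key="bk_...")

# NOTE: the context managers below are assumed names, shown for illustration only.
with client.start_session(user_id="user_123") as session:        # Session: groups related traces
    with session.start_trace(name="support-question") as trace:  # Trace: one request/response cycle
        with trace.span(name="retrieve-docs", type="retrieval"): # Span: a single operation
            ...  # vector search, database lookup, etc.
        with trace.span(name="generate-answer", type="llm"):
            ...  # the LLM call that produces the reply
        trace.score(name="helpfulness", value=0.9)                # Evaluation attached to the trace
```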
Core Principles
1. Minimal Code Changes
Brokle integrates with your existing code without major refactoring:
```python
from brokle import Brokle, wrap_openai
import openai

client = Brokle(api_key="bk_...")
# Wrap the OpenAI client so every call through it is traced automatically
openai_client = wrap_openai(openai.OpenAI(), brokle=client)

# Your existing code works unchanged
response = openai_client.chat.completions.create(...)
```

2. OpenTelemetry Compatible
Brokle is built on OpenTelemetry standards (see the example after this list), ensuring:
- Interoperability with existing observability tools
- Standardized trace and span formats
- Easy integration with your infrastructure
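Because traces and spans follow OpenTelemetry conventions, spans produced with the standard OTel Python SDK can be routed into the same pipeline. The snippet below uses the official `opentelemetry-sdk` and OTLP/HTTP exporter; the endpoint URL and auth header are placeholders, not documented Brokle values.

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Point the exporter at your collector or backend (placeholder endpoint and token).
exporter = OTLPSpanExporter(
    endpoint="https://collector.example.com/v1/traces",
    headers={"Authorization": "Bearer bk_..."},
)

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)

# Spans created with the plain OTel API now flow through the same pipeline.
tracer = trace.get_tracer("my-app")
with tracer.start_as_current_span("custom-operation"):
    ...
```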
3. Privacy First
- Sensitive data masking options (see the sketch after this list)
- On-premise deployment available
- No data leaves your infrastructure (self-hosted)
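As one illustration of masking, sensitive values can be scrubbed in the application before a prompt is sent or recorded. This is a generic sketch, not Brokle's built-in masking API, and it reuses the wrapped `openai_client` from the snippet above.

```python
import re

def mask_pii(text: str) -> str:
    """Redact obvious email addresses before the text is sent or logged anywhere."""
    return re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[EMAIL]", text)

prompt = mask_pii("Summarise the ticket opened by jane.doe@example.com")
response = openai_client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
```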
4. Real-Time Insights
- Live trace streaming
- Instant cost calculations
- Real-time quality scoring
All concepts support both automatic instrumentation (via integrations) and manual instrumentation (via SDK) for maximum flexibility.
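To make the two modes concrete: the wrapper shown earlier covers automatic instrumentation, while manual instrumentation adds spans for work the integrations cannot see. The `client.span(...)` context manager below is an assumed name, used only to illustrate the pattern.

```python
from brokle import Brokle, wrap_openai
import openai

client = Brokle(api_key="bk_...")

# Automatic: every call through the wrapped client is recorded for you.
openai_client = wrap_openai(openai.OpenAI(), brokle=client)

# Manual: record a non-LLM step so it appears alongside the automatic spans.
# `client.span(...)` is an illustrative name, not a confirmed SDK method.
with client.span(name="load-context", type="tool"):
    context = open("notes.txt").read()  # e.g. fetch documents to ground the prompt
```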
Quick Start Path
Based on your needs, here's where to start:
| Goal | Start With |
|---|---|
| Monitor LLM calls | Traces |
| Debug complex workflows | Spans |
| Track conversations | Sessions |
| Measure AI quality | Evaluations |
| Control spending | Cost Analytics |