Changelog
Stay up to date with the latest features, improvements, and fixes in Brokle.
Version 2.5.0
Released on November 28, 2025
Span-Level Cost Attribution
Track costs at the individual span level. See exactly how much each LLM call costs within complex chains and agents.
Prompt Playground Improvements
New side-by-side comparison mode for A/B testing prompts. Test multiple variants simultaneously and compare outputs.
LangGraph Integration
First-class support for LangGraph state machines. Visualize graph execution paths and debug complex agent workflows.
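To ground this, here is a minimal LangGraph state machine of the kind this visualization targets. The LangGraph API shown is standard; the Brokle callback handler in the comments is a hypothetical name used purely for illustration, not a confirmed integration point.

```python
# Minimal LangGraph graph; the Brokle hook shown in comments is hypothetical.
from typing import TypedDict

from langgraph.graph import StateGraph, START, END


class State(TypedDict):
    question: str
    answer: str


def answer_node(state: State) -> dict:
    # Any LLM call made here would appear as a child span of this graph node.
    return {"answer": f"echo: {state['question']}"}


graph = StateGraph(State)
graph.add_node("answer", answer_node)
graph.add_edge(START, "answer")
graph.add_edge("answer", END)
app = graph.compile()

# Hypothetical tracing hook (illustrative name only):
# from brokle.langgraph import BrokleCallbackHandler
# app.invoke({"question": "hi"}, config={"callbacks": [BrokleCallbackHandler()]})
print(app.invoke({"question": "hi"}))
```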
Trace Loading Performance
3x faster loading for large traces. Improved pagination and lazy loading for traces with thousands of spans.
Enhanced Search Filters
New filters for model name, cost range, and token count. Save and share custom filter presets with your team.
Python SDK Async Support
Improved async/await support in the Python SDK. Better integration with FastAPI and async frameworks.
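As a rough illustration of async tracing in a FastAPI app: the `brokle` package name, the `Brokle` client, and the `observe` decorator below are assumed names for illustration only, not confirmed SDK APIs; check the SDK reference for the actual interfaces.

```python
# Hypothetical sketch: brokle, Brokle, and observe are illustrative names only.
from fastapi import FastAPI
from brokle import Brokle, observe  # hypothetical imports

app = FastAPI()
client = Brokle()  # hypothetical client; assumed to read its API key from the environment

@app.get("/answer")
@observe(name="answer-endpoint")  # hypothetical async-aware tracing decorator
async def answer(q: str) -> dict:
    # Awaited calls inside the handler are assumed to attach to the active trace.
    completion = await client.chat.create(model="gpt-4o-mini", input=q)  # hypothetical async call
    return {"answer": completion}
```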
Streaming Response Capture
Fixed an issue where streaming responses could be truncated in certain edge cases.
Evaluation Score Display
Fixed incorrect rounding of evaluation scores in the dashboard.
Time Zone Handling
Fixed time zone display issues in trace timestamps for non-UTC users.
Previous Releases
Version 2.4.0
Released on October 15, 2025
Create custom evaluation functions with Python. Define your own quality metrics and scoring logic (see the sketch after this release's notes).
Full support for Claude 3.5 Sonnet including function calling and tool use tracing.
Fixed SDK initialization issues and improved error handling for rate-limited APIs.
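A custom evaluation function from the 2.4.0 notes above is ultimately just a Python scoring function. The sketch below shows one; the registration call is left as a commented-out, hypothetical example since the exact API is not documented here.

```python
# A custom evaluator: plain Python that maps an output to a score in [0, 1].
def keyword_coverage(output: str, expected_keywords: list[str]) -> float:
    """Fraction of expected keywords that appear in the model output."""
    if not expected_keywords:
        return 1.0
    text = output.lower()
    hits = sum(1 for kw in expected_keywords if kw.lower() in text)
    return hits / len(expected_keywords)

# Hypothetical registration call (illustrative only; the real API may differ):
# brokle.evaluations.register("keyword_coverage", keyword_coverage)
print(keyword_coverage("Paris is the capital of France", ["paris", "capital"]))  # 1.0
```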
Version 2.3.0
Released on September 1, 2025
Visual diff view for comparing prompt versions. See exactly what changed between deployments.
Comments on traces, shared annotations, and team activity feed for better collaboration.
Fixed dashboard refresh issues and improved WebSocket connection stability.
Version 2.2.0
Released on July 20, 2025
Native OTLP ingestion endpoint. Send traces using any OpenTelemetry-compatible SDK (see the configuration sketch after this release's notes).
New analytics dashboard with provider breakdown, daily spend trends, and cost alerts.
50% reduction in SDK overhead. Improved batching and compression for high-volume tracing.
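The OTLP ingestion endpoint from the 2.2.0 notes above works with standard OpenTelemetry tooling. Below is a minimal Python configuration sketch; the endpoint URL and Authorization header are placeholders, since the actual ingestion URL and auth scheme are not specified here.

```python
# Standard OpenTelemetry setup pointed at an OTLP/HTTP traces endpoint.
# The endpoint URL and auth header below are placeholders, not documented values.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

exporter = OTLPSpanExporter(
    endpoint="https://otlp.brokle.example/v1/traces",           # placeholder URL
    headers={"Authorization": "Bearer <YOUR_BROKLE_API_KEY>"},  # placeholder auth
)
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("example-app")
with tracer.start_as_current_span("llm-call"):
    pass  # make your LLM call here; the span is exported via OTLP
```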
Product Roadmap
See what we're working on next.
Real-time Alerting
Set up alerts for cost thresholds, error rates, latency spikes, and quality score drops.
AI-Powered Debugging
Automatic root cause analysis for failed traces. Get suggestions for fixing prompt issues.
Dataset Management
Create and manage evaluation datasets directly from production traces.
Fine-tuning Integration
Export curated traces for fine-tuning. Direct integration with OpenAI and Anthropic fine-tuning APIs.
Multi-Model Routing Analytics
Detailed analytics for model routing decisions. Optimize cost vs quality tradeoffs.
Compliance Reports
Automated compliance reporting and audit trail exports.
Have a feature request? We'd love to hear from you!
Submit Feature Request