Overview

LLM Observability allows you to monitor, troubleshoot, and improve your agentic applications. With the LLM Observability SDK for Python, you can monitor the health and quality of single-agent or multi-agent systems built on the OpenAI Agents SDK, LangGraph, or CrewAI.

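Instrumentation typically starts by enabling the SDK at your application's entry point. The snippet below is a minimal sketch, assuming the `ddtrace` package and its `LLMObs.enable()` entry point; the `ml_app` name is an illustrative placeholder, and `DD_API_KEY` and `DD_SITE` are expected in the environment.

```python
# Minimal sketch: enable LLM Observability at application startup.
# Assumes ddtrace is installed (pip install ddtrace) and that DD_API_KEY
# and DD_SITE are set in the environment; "my-agent-app" is a placeholder.
from ddtrace.llmobs import LLMObs

LLMObs.enable(
    ml_app="my-agent-app",   # logical application name used to group traces
    agentless_enabled=True,  # send data directly, without a local Datadog Agent
)

# Run your OpenAI Agents SDK, LangGraph, or CrewAI workload as usual;
# supported integrations are traced once LLM Observability is enabled.
```
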
For your agentic applications, LLM Observability allows you to:

  • Monitor error rates, latency buildup, and cost
  • Visualize agent decisions, such as which tools were used or which agent a task was handed off to
  • Trace and troubleshoot end-to-end requests across agent executions (see the sketch after this list)
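
As a sketch of what end-to-end tracing can look like, the snippet below uses the SDK's function decorators to mark an agent and a tool it calls so both appear as spans in the resulting trace. The decorators come from `ddtrace.llmobs.decorators`; the `triage_agent` and `lookup_order` functions are hypothetical examples, not part of the SDK.

```python
# Hedged sketch: manually instrument an agent and a tool so the tool call
# shows up as a child span of the agent span in LLM Observability.
# Assumes LLMObs.enable() has already been called (see the example above);
# the function names and logic are illustrative only.
from ddtrace.llmobs.decorators import agent, tool

@tool
def lookup_order(order_id: str) -> str:
    # A tool span is created around this call.
    return f"Order {order_id}: shipped"

@agent
def triage_agent(question: str) -> str:
    # An agent span wraps the decision logic and any tool calls it makes.
    if "order" in question.lower():
        return lookup_order(order_id="A-1001")
    return "Handing off to the billing agent."

print(triage_agent("Where is my order?"))
```
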
Join the Preview

LLM Observability's Graph-based Visualization for Agentic Systems is in Preview. Request access to participate.