---
title: Agent Monitoring
description: Monitor the health and quality of your agentic applications with Datadog LLM Observability.
---

# Agent Monitoring

{% callout %}
# Important note for users on the following Datadog site: app.ddog-gov.com

{% alert level="danger" %}
This product is not supported for your selected [Datadog site](https://docs.datadoghq.com/getting_started/site).
{% /alert %}

{% /callout %}

## Overview{% #overview %}

LLM Observability allows you to monitor, troubleshoot, and improve your agentic applications. With the LLM Observability SDK for Python, you can monitor the health and quality of single-agent or multi-agent systems built on [OpenAI Agents SDK](https://docs.datadoghq.com/llm_observability/setup/auto_instrumentation?tab=python#openai-agents), [LangGraph](https://docs.datadoghq.com/llm_observability/setup/auto_instrumentation?tab=python#langgraph), or [CrewAI](https://docs.datadoghq.com/llm_observability/setup/auto_instrumentation?tab=python#crew-ai).

For your agentic applications, LLM Observability allows you to:

- Monitor error rates, latency buildup, and cost
- Visualize agent decisions, such as which tools were used or which agent a task was handed off to
- Trace and troubleshoot end-to-end requests across agent executions
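As a minimal sketch of what instrumentation can look like, the snippet below enables the SDK in agentless mode and traces a toy agent with the `agent` and `tool` decorators from `ddtrace`. The application name `my-agentic-app` and the functions shown are placeholders, not part of any real system; a real setup requires a valid `DD_API_KEY` in the environment.

```python
import os

from ddtrace.llmobs import LLMObs
from ddtrace.llmobs.decorators import agent, tool

# Enable LLM Observability in agentless mode (no local Datadog Agent needed).
# "my-agentic-app" is a placeholder application name.
LLMObs.enable(
    ml_app="my-agentic-app",
    api_key=os.environ["DD_API_KEY"],
    site="datadoghq.com",
    agentless_enabled=True,
)

@tool
def search_web(query: str) -> str:
    # Hypothetical tool; the decorator records a tool span with
    # the function's inputs and outputs.
    return f"results for {query}"

@agent
def run_agent(prompt: str) -> str:
    # The agent span nests the tool span, so the handoff from
    # agent to tool appears in the trace view.
    return search_web(prompt)
```

Once enabled, each call to `run_agent` produces a trace in LLM Observability showing the agent span, the nested tool span, and their latencies and errors.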

## Further reading{% #further-reading %}

- [Monitor your OpenAI agents with Datadog LLM Observability](https://www.datadoghq.com/blog/openai-agents-llm-observability/)
- [Monitor, troubleshoot, and improve AI agents with Datadog](https://www.datadoghq.com/blog/monitor-ai-agents/)
- [Monitor agents built on Amazon Bedrock with Datadog LLM Observability](https://www.datadoghq.com/blog/llm-observability-bedrock-agents/)
