---
title: Cost
description: Monitor your LLM tokens and costs.
breadcrumbs: Docs > LLM Observability > Monitoring > Cost
---

# Cost

{% callout %}
# Important note for users on the following Datadog site: app.ddog-gov.com

{% alert level="danger" %}
This product is not supported for your selected [Datadog site](https://docs.datadoghq.com/getting_started/site).
{% /alert %}

{% /callout %}

{% image
   source="https://datadog-docs.imgix.net/images/llm_observability/Cost_LLMO.642ee5a44d5f9d78be42ca7c253848bc.png?auto=format"
   alt="Cost view for an app in LLM Observability." /%}

Datadog LLM Observability automatically calculates an estimated cost for each LLM request, using providers' public pricing models and token counts annotated on LLM/embedding spans.

By aggregating this information across traces and applications, you can gain insight into the usage patterns of your LLM models and their impact on overall spending.

Use cases:

- See and understand where LLM spend is coming from, at the model, request, and application levels
- Track changes in token usage and cost over time to proactively guard against higher bills in the future
- Correlate LLM cost with overall application performance, model versions, model providers, and prompt details in a single view

## Setting up cost monitoring{% #setting-up-cost-monitoring %}

Datadog provides two ways to monitor your LLM costs:

- Automatic: Use supported LLM providers' public pricing rates
- Manual: For custom pricing rates, self-hosted models, or unsupported providers, manually supply your own cost values.

### Automatic{% #automatic %}

If your LLM requests involve any of the listed supported providers, Datadog automatically calculates the cost of each request based on the following:

- Token counts attached to the LLM/embedding span, provided by either [auto-instrumentation](https://docs.datadoghq.com/llm_observability/instrumentation/auto_instrumentation) or manual user annotation
- The model provider's public pricing rates
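Conceptually, the automatic estimate multiplies the annotated token counts by the provider's per-token rate. A minimal sketch of that arithmetic, using made-up pricing rates (not Datadog's actual pricing table):

```python
# Illustrative sketch of automatic cost estimation: token counts annotated
# on the LLM span are multiplied by per-million-token pricing rates.
# The model name and rates below are hypothetical examples.

PRICING = {
    "example-model": {"input": 2.50, "output": 10.00},  # $ per 1M tokens
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> dict:
    """Return estimated input/output/total cost in dollars."""
    rates = PRICING[model]
    input_cost = input_tokens / 1_000_000 * rates["input"]
    output_cost = output_tokens / 1_000_000 * rates["output"]
    return {
        "estimated_input_cost": input_cost,
        "estimated_output_cost": output_cost,
        "estimated_total_cost": input_cost + output_cost,
    }

cost = estimate_cost("example-model", input_tokens=1_000, output_tokens=500)
print(cost["estimated_total_cost"])  # 0.0075 (dollars)
```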

### Manual{% #manual %}

To manually supply cost information, follow the instrumentation steps described in the [SDK Reference](https://docs.datadoghq.com/llm_observability/instrumentation/sdk/?tab=python#monitoring-costs) or [API](https://docs.datadoghq.com/llm_observability/instrumentation/api/#metrics). When supplying cost values manually (for example, setting `total_cost`), provide them in dollars; Datadog stores them internally as nanodollars.
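The dollar-to-nanodollar relationship can be sketched as follows (1 dollar = 10⁹ nanodollars); the helper names are illustrative, not part of the SDK:

```python
# Cost values are supplied in dollars but stored as nanodollars
# (1 dollar = 1,000,000,000 nanodollars). Helper names are illustrative.

NANODOLLARS_PER_DOLLAR = 1_000_000_000

def dollars_to_nanodollars(dollars: float) -> int:
    return round(dollars * NANODOLLARS_PER_DOLLAR)

def nanodollars_to_dollars(nanodollars: int) -> float:
    return nanodollars / NANODOLLARS_PER_DOLLAR

# A $0.0075 request is stored as 7,500,000 nanodollars.
print(dollars_to_nanodollars(0.0075))  # 7500000
```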

{% alert level="info" %}
If you provide partial cost information, Datadog tries to estimate missing information. For example, if you supply a total cost but not input/output cost values, Datadog uses provider pricing and token values annotated on your span to compute the input/output cost values. This can cause a mismatch between your manually provided total cost and the sum of Datadog's computed input/output costs. Datadog always displays your provided total cost as-is, even if these values differ.
{% /alert %}
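The backfill behavior described above can be sketched as follows. The rates and field names here are illustrative assumptions, not Datadog's implementation:

```python
# Illustrative sketch: a user supplies only total_cost; input/output costs
# are backfilled from token counts and (hypothetical) provider rates.
# The user-provided total is preserved as-is, even if it differs from the
# sum of the backfilled parts.

RATES = {"input": 2.50, "output": 10.00}  # hypothetical $ per 1M tokens

def fill_missing_costs(span_metrics: dict) -> dict:
    metrics = dict(span_metrics)
    if "input_cost" not in metrics:
        metrics["input_cost"] = metrics["input_tokens"] / 1e6 * RATES["input"]
    if "output_cost" not in metrics:
        metrics["output_cost"] = metrics["output_tokens"] / 1e6 * RATES["output"]
    # The provided total_cost is never overwritten.
    return metrics

span = {"input_tokens": 1_000, "output_tokens": 500, "total_cost": 0.01}
filled = fill_missing_costs(span)
# Backfilled parts sum to 0.0075, but the displayed total stays 0.01.
```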

## Supported providers{% #supported-providers %}

Datadog automatically calculates the cost of LLM requests made to the following supported providers using the publicly available pricing information from their official documentation.

{% alert level="info" %}
Datadog only supports monitoring costs for text-based models.
{% /alert %}

Datadog supports estimated costs for [800+ models](https://github.com/pydantic/genai-prices/tree/main?tab=readme-ov-file#providers), including models from OpenAI, Hugging Face, Gemini, and Anthropic, as well as models served through OpenRouter.

## Metrics{% #metrics %}

You can find cost metrics in [LLM Observability Metrics](https://docs.datadoghq.com/llm_observability/monitoring/metrics/#llm-cost-metrics). The unit for LLM Observability estimated cost metrics is **nanodollars**.

The cost metrics include a `source` tag to indicate where the value originated:

- `source:auto` — automatically calculated
- `source:user` — manually provided

## View costs in LLM Observability{% #view-costs-in-llm-observability %}

View your app in LLM Observability and select **Cost** on the left. The *Cost view* features:

- A high-level overview of your LLM usage over time including **Total Cost**, **Cost Change**, **Total Tokens**, and **Token Change**
- **Breakdown by Token Type**: A breakdown of token usage, along with associated costs
- **Breakdown by Provider/Model** or **Prompt ID/Version**: Cost and token usage broken down by LLM provider and model, or by individual prompts or prompt versions (powered by [Prompt Tracking](https://docs.datadoghq.com/llm_observability/monitoring/prompt_tracking))
- **Most Expensive LLM Calls**: A list of your most expensive requests

{% image
   source="https://datadog-docs.imgix.net/images/llm_observability/cost_tracking_trace.4e2dbd697bee43eca51ea91d3e35a5ba.png?auto=format"
   alt="Cost data in trace detail." /%}

Cost data is also available within your application's traces and spans, allowing you to understand cost at both the request (trace) and operation (span) level. Click any trace or span to open a detailed side-panel view that includes cost metrics for the full trace and for each individual LLM call. At the top of the trace view, a banner shows aggregated cost information for the full trace, including estimated cost and total tokens. Hover over these values to reveal a breakdown of input and output tokens and costs.

Selecting an individual LLM span shows similar cost metrics specific to that LLM request.

To query cost-related data on the Traces page, use the **Cost** facets in the left panel.

Alternatively, query the following span attributes directly:

- `@metrics.input_tokens` / `@metrics.estimated_input_cost`
- `@metrics.output_tokens` / `@metrics.estimated_output_cost`
- `@metrics.total_tokens` / `@metrics.estimated_total_cost`
- `@metrics.non_cached_input_tokens` / `@metrics.estimated_non_cached_input_cost`
- `@metrics.cache_read_input_tokens` / `@metrics.estimated_cache_read_input_cost`
- `@metrics.cache_write_input_tokens` / `@metrics.estimated_cache_write_input_cost`
- `@metrics.reasoning_output_tokens` / `@metrics.estimated_reasoning_output_cost`
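Because these attributes are stored in nanodollars, a dollar budget must be converted before it is used in a search query. A small sketch, assuming standard Datadog search syntax against the `@metrics.estimated_total_cost` attribute listed above (verify the exact facet behavior in your account):

```python
# Convert a dollar threshold to nanodollars and build a Traces search query
# for requests exceeding it. The query follows standard Datadog search
# syntax; treat the exact facet behavior as an assumption to verify.

def cost_query(threshold_dollars: float) -> str:
    nanodollars = round(threshold_dollars * 1_000_000_000)
    return f"@metrics.estimated_total_cost:>{nanodollars}"

# Find requests estimated to cost more than one cent.
print(cost_query(0.01))  # @metrics.estimated_total_cost:>10000000
```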
