Azure OpenAI

Overview

Monitor, troubleshoot, and evaluate your LLM-powered applications, such as chatbots or data extraction tools, using Azure OpenAI.

If you are building LLM applications, use LLM Observability to investigate the root cause of issues, monitor operational performance, and evaluate the quality, privacy, and safety of your LLM applications.

See the LLM Observability tracing view video for an example of how you can investigate a trace.

Azure OpenAI enables development of copilots and generative AI applications using OpenAI’s library of models. Use the Datadog integration to track the performance and usage of the Azure OpenAI API and deployments.

Setup

LLM Observability: End-to-end visibility into your LLM application using Azure OpenAI

You can enable LLM Observability in different environments. Follow the appropriate setup based on your scenario:

Installation for Python

If you do not have the Datadog Agent:
  1. Install the ddtrace package:
  pip install ddtrace
  2. Start your application with the following command, enabling Agentless mode:
  DD_SITE=<YOUR_DATADOG_SITE> DD_API_KEY=<YOUR_API_KEY> DD_LLMOBS_ENABLED=1 DD_LLMOBS_AGENTLESS_ENABLED=1 DD_LLMOBS_ML_APP=<YOUR_ML_APP_NAME> ddtrace-run python <YOUR_APP>.py
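
Alternatively, you can enable LLM Observability in code instead of through environment variables by calling ddtrace's LLMObs.enable() helper. A minimal sketch for Agentless mode, using placeholder values for the site, API key, and ML app name:

from ddtrace.llmobs import LLMObs

# Enable LLM Observability in Agentless mode (no Datadog Agent required).
# Replace the placeholder values with your own configuration.
LLMObs.enable(
    ml_app="<YOUR_ML_APP_NAME>",
    api_key="<YOUR_API_KEY>",
    site="<YOUR_DATADOG_SITE>",
    agentless_enabled=True,
)
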
If you already have the Datadog Agent installed:
  1. Make sure the Agent is running and that APM and StatsD are enabled. For example, use the following command with Docker:
docker run -d \
  --cgroupns host \
  --pid host \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -v /proc/:/host/proc/:ro \
  -v /sys/fs/cgroup/:/host/sys/fs/cgroup:ro \
  -e DD_API_KEY=<DATADOG_API_KEY> \
  -p 127.0.0.1:8126:8126/tcp \
  -p 127.0.0.1:8125:8125/udp \
  -e DD_DOGSTATSD_NON_LOCAL_TRAFFIC=true \
  -e DD_APM_ENABLED=true \
  gcr.io/datadoghq/agent:latest
  2. If you haven’t already, install the ddtrace package:
  pip install ddtrace
  3. Start your application using the ddtrace-run command to automatically enable tracing:
  DD_SITE=<YOUR_DATADOG_SITE> DD_API_KEY=<YOUR_API_KEY> DD_LLMOBS_ENABLED=1 DD_LLMOBS_ML_APP=<YOUR_ML_APP_NAME> ddtrace-run python <YOUR_APP>.py

Note: If the Agent is running on a custom host or port, set DD_AGENT_HOST and DD_TRACE_AGENT_PORT accordingly.
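
For example (with placeholder host and port values), the startup command might look like:

DD_AGENT_HOST=<AGENT_HOSTNAME> DD_TRACE_AGENT_PORT=<AGENT_PORT> DD_SITE=<YOUR_DATADOG_SITE> DD_LLMOBS_ENABLED=1 DD_LLMOBS_ML_APP=<YOUR_ML_APP_NAME> ddtrace-run python <YOUR_APP>.py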

If you are running LLM Observability in a serverless environment (Azure Functions):

Enable LLM Observability by setting the following environment variables:

DD_SITE=<YOUR_DATADOG_SITE> DD_API_KEY=<YOUR_API_KEY> DD_LLMOBS_ENABLED=1 DD_LLMOBS_ML_APP=<YOUR_ML_APP_NAME>

Note: In serverless environments, Datadog automatically flushes spans when the Azure function finishes running.
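
If you prefer to enable LLM Observability in code rather than through app settings, a minimal sketch of an Azure Functions handler (the function name and route here are hypothetical) might look like the following:

import azure.functions as func
from ddtrace.llmobs import LLMObs

# Enable LLM Observability once at import time. In serverless environments,
# spans are flushed automatically when the function finishes running.
LLMObs.enable(
    ml_app="<YOUR_ML_APP_NAME>",
    api_key="<YOUR_API_KEY>",
    site="<YOUR_DATADOG_SITE>",
    agentless_enabled=True,
)

app = func.FunctionApp()

@app.route(route="chat", auth_level=func.AuthLevel.ANONYMOUS)
def chat(req: func.HttpRequest) -> func.HttpResponse:
    # Calls to Azure OpenAI made here are traced automatically.
    return func.HttpResponse("ok")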

Automatic Azure OpenAI tracing

The Azure OpenAI integration is automatically enabled when LLM Observability is configured. This captures latency, errors, input and output messages, as well as token usage for Azure OpenAI calls.

The following methods are traced for both synchronous and asynchronous Azure OpenAI operations:

  • AzureOpenAI().completions.create()
  • AsyncAzureOpenAI().completions.create()
  • AzureOpenAI().chat.completions.create()
  • AsyncAzureOpenAI().chat.completions.create()

No additional setup is required for these methods.
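
For example, once your application runs under ddtrace-run (or LLM Observability is enabled in code), a standard chat completion call is traced without extra instrumentation. The endpoint, deployment name, and prompt below are placeholders:

import os
from openai import AzureOpenAI

# Standard Azure OpenAI client; replace the endpoint and deployment placeholders.
client = AzureOpenAI(
    azure_endpoint="https://<YOUR_RESOURCE>.openai.azure.com",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

# This call is captured as an LLM Observability span, including latency,
# errors, input/output messages, and token usage.
response = client.chat.completions.create(
    model="<YOUR_DEPLOYMENT_NAME>",
    messages=[{"role": "user", "content": "Summarize our return policy."}],
)
print(response.choices[0].message.content)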

Validation

Validate that LLM Observability is properly capturing spans by checking your application logs for successful span creation. You can also run the following command to check the status of the ddtrace integration:

ddtrace-run --info

Look for the following message to confirm the setup:

Agent error: None

Debugging

If you encounter issues during setup, enable debug logging by passing the --debug flag:

ddtrace-run --debug python <YOUR_APP>.py

This displays any errors related to data transmission or instrumentation, including issues with Azure OpenAI traces.

Infrastructure Monitoring: Metrics and visibility on your Azure OpenAI resources

If you haven’t already, set up the Microsoft Azure integration first. There are no other installation steps.

Data Collected

Metrics

azure.cognitiveservices_accounts.active_tokens
(gauge)
Total tokens minus cached tokens over a period of time. Applies to PTU and PTU-managed deployments. Use this metric to understand your tokens-per-second (TPS) or tokens-per-minute (TPM) utilization for PTUs and compare it to your target TPS or TPM benchmarks for your scenarios.
azure.cognitiveservices_accounts.azure_open_ai_requests
(count)
Number of calls made to the Azure OpenAI API over a period of time. Applies to PTU, PTU-Managed, and Pay-as-you-go deployments.
azure.cognitiveservices_accounts.blocked_volume
(count)
Number of calls made to the Azure OpenAI API and rejected by a content filter applied over a period of time. You can add a filter or apply splitting by the following dimensions: ModelDeploymentName, ModelName, and TextType.
azure.cognitiveservices_accounts.generated_completion_tokens
(count)
Number of generated completion tokens from an OpenAI model.
azure.cognitiveservices_accounts.processed_fine_tuned_training_hours
(count)
Number of training hours processed on an OpenAI fine-tuned model.
azure.cognitiveservices_accounts.harmful_volume_detected
(count)
Number of calls made to the Azure OpenAI API that were detected as harmful (in both block mode and annotate mode) by a content filter applied over a period of time.
azure.cognitiveservices_accounts.processed_prompt_tokens
(count)
Number of prompt tokens processed on an OpenAI model.
azure.cognitiveservices_accounts.processed_inference_tokens
(count)
Number of inference tokens processed on an OpenAI model.
azure.cognitiveservices_accounts.prompt_token_cache_match_rate
(gauge)
Percentage of the prompt tokens that hit the cache.
Shown as percent
azure.cognitiveservices_accounts.provisioned_managed_utilization
(gauge)
Utilization % for a provisioned-managed deployment, calculated as (PTUs consumed / PTUs deployed) x 100. When utilization is greater than or equal to 100%, calls are throttled and error code 429 is returned.
Shown as percent
azure.cognitiveservices_accounts.provisioned_managed_utilization_v2
(gauge)
Utilization % for a provisioned-managed deployment, calculated as (PTUs consumed / PTUs deployed) x 100. When utilization is greater than or equal to 100%, calls are throttled and error code 429 is returned.
Shown as percent
azure.cognitiveservices_accounts.time_to_response
(gauge)
Recommended latency (responsiveness) measure for streaming requests. Applies to PTU and PTU-managed deployments. Calculated as time taken for the first response to appear after a user sends a prompt, as measured by the API gateway.
Shown as millisecond
azure.cognitiveservices_accounts.total_volume_sent_for_safety_check
(count)
Number of calls made to the Azure OpenAI API and detected by a content filter applied over a period of time.

Service Checks

The Azure OpenAI integration does not include any service checks.

Events

The Azure OpenAI integration does not include any events.

Troubleshooting

Need help? Contact Datadog support.