Configuration

Overview

You can configure your LLM applications on the LLM Observability Settings page to optimize their performance and security.

  • Evaluations: Enables Datadog to assess your LLM application on dimensions like Quality, Security, and Safety, so you can measure the effectiveness of your application’s responses and maintain high standards for both performance and user safety. For more information about evaluations, see Terms and Concepts.
  • Topics: Helps identify irrelevant input for the topic relevancy out-of-the-box evaluation, ensuring your LLM application stays focused on its intended purpose.

Connect your account

Connect your OpenAI account to LLM Observability with your OpenAI API key. LLM Observability uses the GPT-4o mini model for Evaluations.

  1. In Datadog, navigate to LLM Observability > Settings > Integrations.
  2. Select Connect on the OpenAI tile.
  3. Follow the instructions on the tile.
    • Provide your OpenAI API key. Ensure that this key has write permission for model capabilities.
  4. Enable Use this API key to evaluate your LLM applications.
[Image: The OpenAI configuration tile in LLM Observability, listing instructions for configuring OpenAI and providing your OpenAI API key.]
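
Before connecting the key, you can optionally confirm that it can reach the GPT-4o mini model. The sketch below is a minimal, illustrative check using the official `openai` Python package; the `OPENAI_API_KEY` environment variable name is an assumption for this example, not something the tile requires.

```python
# Minimal sketch: confirm the OpenAI API key you plan to connect can call
# gpt-4o-mini (the model LLM Observability uses for evaluations).
# Assumes `pip install openai` and the key exported as OPENAI_API_KEY.
import os

from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "ping"}],
    max_tokens=5,
)
print(response.choices[0].message.content)
```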

Connect your Azure OpenAI account to LLM Observability with your Azure OpenAI API key. We strongly recommend using the GPT-4o mini model for Evaluations.

  1. In Datadog, navigate to LLM Observability > Settings > Integrations.
  2. Select Connect on the Azure OpenAI tile.
  3. Follow the instructions on the tile.
    • Provide your Azure OpenAI API key. Ensure that this key has write permission for model capabilities.
    • Provide the Resource Name, Deployment ID, and API version to complete the integration.
[Image: The Azure OpenAI configuration tile in LLM Observability, listing instructions for configuring Azure OpenAI and providing your API key, Resource Name, Deployment ID, and API version.]
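
If you want to sanity-check these values before entering them on the tile, the sketch below calls the deployment directly with the official `openai` Python package. The environment variable names and the API version shown are illustrative assumptions; substitute the values from your Azure resource.

```python
# Minimal sketch: verify the Azure OpenAI API key, resource name, deployment ID,
# and API version before providing them on the tile. Assumes `pip install openai`;
# the environment variable names and API version below are illustrative.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",  # use the API version you plan to provide
    azure_endpoint=f"https://{os.environ['AZURE_OPENAI_RESOURCE']}.openai.azure.com",
)
response = client.chat.completions.create(
    model=os.environ["AZURE_OPENAI_DEPLOYMENT"],  # your deployment ID (for example, a GPT-4o mini deployment)
    messages=[{"role": "user", "content": "ping"}],
    max_tokens=5,
)
print(response.choices[0].message.content)
```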

Connect your Anthropic account to LLM Observability with your Anthropic API key. LLM Observability uses the Haiku model for Evaluations.

  1. In Datadog, navigate to LLM Observability > Settings > Integrations.
  2. Select Connect on the Anthropic tile.
  3. Follow the instructions on the tile.
    • Provide your Anthropic API key. Ensure that this key has write permission for model capabilities.
[Image: The Anthropic configuration tile in LLM Observability, listing instructions for configuring Anthropic and providing your Anthropic API key.]
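
To check the key before connecting it, you can make a small test call with the official `anthropic` Python package, as in the sketch below. The model ID shown is an illustrative Haiku-family ID; use whichever Haiku model your account exposes.

```python
# Minimal sketch: confirm the Anthropic API key you plan to connect can call a
# Haiku model. Assumes `pip install anthropic` and the key exported as
# ANTHROPIC_API_KEY; the model ID below is illustrative.
import os

import anthropic

client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])
message = client.messages.create(
    model="claude-3-haiku-20240307",
    max_tokens=5,
    messages=[{"role": "user", "content": "ping"}],
)
print(message.content[0].text)
```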

Connect your Amazon Bedrock account to LLM Observability with your AWS account. LLM Observability uses the Haiku model for Evaluations.

  1. In Datadog, navigate to LLM Observability > Settings > Integrations.
  2. Select Connect on the Amazon Bedrock tile.
  3. Follow the instructions on the tile.
[Image: The Amazon Bedrock configuration tile in LLM Observability, listing instructions for configuring Amazon Bedrock.]
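
To confirm that the AWS account you connect can invoke a Haiku model through Bedrock, you can run a quick test with `boto3`, as sketched below. The region and model ID are illustrative assumptions; use the ones enabled in your account.

```python
# Minimal sketch: confirm the AWS credentials can invoke a Haiku model on
# Amazon Bedrock. Assumes `pip install boto3`, AWS credentials in the
# environment, and that the region and model ID below are enabled for you.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[{"role": "user", "content": [{"text": "ping"}]}],
    inferenceConfig={"maxTokens": 5},
)
print(response["output"]["message"]["content"][0]["text"])
```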

Select and enable evaluations

  1. Navigate to LLM Observability > Settings > Evaluations.
  2. Click on the evaluation you want to enable.
  3. Select OpenAI, Azure OpenAI, Anthropic, or Amazon Bedrock as your LLM provider.
  4. Select the account you want to run the evaluation on.
  5. Assign the LLM application you want to run the evaluation on.

After you click Save, LLM Observability uses the LLM account you connected to power the evaluation you enabled.

For more information about evaluations, see Terms and Concepts.

Estimated Token Usage

LLM Observability provides metrics to help you monitor and manage the token usage associated with the evaluations it runs. Use the following metrics to track the LLM resources these evaluations consume:

  • ml_obs.estimated_usage.llm.input.tokens
  • ml_obs.estimated_usage.llm.output.tokens
  • ml_obs.estimated_usage.llm.total.tokens

Each of these metrics has ml_app, model_server, model_provider, model_name, and evaluation_name tags, allowing you to pinpoint specific applications, models, and evaluations contributing to your usage.
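
For example, you can query these metrics through the Datadog API to see which applications and evaluations drive token consumption. The sketch below uses the official `datadog-api-client` Python package; the query string and one-hour window are illustrative.

```python
# Minimal sketch: query the last hour of estimated evaluation token usage,
# grouped by application and evaluation. Assumes `pip install datadog-api-client`
# and DD_API_KEY / DD_APP_KEY (and DD_SITE, if needed) set in the environment.
import time

from datadog_api_client import ApiClient, Configuration
from datadog_api_client.v1.api.metrics_api import MetricsApi

now = int(time.time())
with ApiClient(Configuration()) as api_client:
    api = MetricsApi(api_client)
    response = api.query_metrics(
        _from=now - 3600,
        to=now,
        query="sum:ml_obs.estimated_usage.llm.total.tokens{*} by {ml_app,evaluation_name}",
    )
    for series in getattr(response, "series", []):
        print(series.scope, series.pointlist[-1])
```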

Provide topics for topic relevancy

Providing topics allows you to use the topic relevancy evaluation.

  1. Go to LLM Observability > Applications.
  2. Select the application you want to add topics for.
  3. At the bottom of the left sidebar, select Configuration.
  4. Add topics in the pop-up modal.

Topics can contain multiple words and should be as specific and descriptive as possible. For example, for an LLM application designed for incident management, add “observability”, “software engineering”, or “incident resolution”. If your application handles customer inquiries for an e-commerce store, you can use “Customer questions about purchasing furniture on an e-commerce store”.

Further Reading