Overview
Monitor, troubleshoot, and evaluate your LLM-powered applications, such as chatbots or data extraction tools, using Google Gemini.
If you are building LLM applications, use Datadog’s LLM Observability to investigate the root cause of issues, monitor operational performance, and evaluate the quality, privacy, and safety of your LLM applications.
See the LLM Observability tracing view video for an example of how you can investigate a trace.
Setup
LLM Observability: Get end-to-end visibility into your LLM application using Google Gemini
You can enable LLM Observability in different environments. Follow the appropriate setup based on your scenario:
Installation for Python
If you do not have the Datadog Agent:
- Install the ddtrace package:
pip install ddtrace
- Start your application with the following command, enabling Agentless mode:
DD_SITE=<YOUR_DATADOG_SITE> DD_API_KEY=<YOUR_API_KEY> DD_LLMOBS_ENABLED=1 DD_LLMOBS_AGENTLESS_ENABLED=1 DD_LLMOBS_ML_APP=<YOUR_ML_APP_NAME> ddtrace-run python <YOUR_APP>.py
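If you prefer to launch the app from a wrapper script rather than exporting the variables in your shell, the same configuration can be sketched in Python. The site, API key, app name, and script name below are placeholders, not values from this integration:

```python
import os
import subprocess

# Hedged sketch: launch the app under ddtrace-run with Agentless mode enabled.
# All values in angle brackets or lowercase names are placeholders.
env = {
    **os.environ,
    "DD_SITE": "datadoghq.com",
    "DD_API_KEY": "<YOUR_API_KEY>",
    "DD_LLMOBS_ENABLED": "1",
    "DD_LLMOBS_AGENTLESS_ENABLED": "1",
    "DD_LLMOBS_ML_APP": "my-gemini-app",
}
cmd = ["ddtrace-run", "python", "app.py"]
# subprocess.run(cmd, env=env)  # uncomment to actually launch the app
```

This is equivalent to the shell command above; the environment dictionary is passed to the child process so no variables leak into your interactive session.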
If you already have the Datadog Agent installed:
- Make sure the Agent is running and that APM and StatsD are enabled. For example, use the following command with Docker:
docker run -d \
--cgroupns host \
--pid host \
-v /var/run/docker.sock:/var/run/docker.sock:ro \
-v /proc/:/host/proc/:ro \
-v /sys/fs/cgroup/:/host/sys/fs/cgroup:ro \
-e DD_API_KEY=<DATADOG_API_KEY> \
-p 127.0.0.1:8126:8126/tcp \
-p 127.0.0.1:8125:8125/udp \
-e DD_DOGSTATSD_NON_LOCAL_TRAFFIC=true \
-e DD_APM_ENABLED=true \
gcr.io/datadoghq/agent:latest
- If you haven’t already, install the ddtrace package:
pip install ddtrace
- Start your application using the ddtrace-run command to automatically enable tracing:
DD_SITE=<YOUR_DATADOG_SITE> DD_API_KEY=<YOUR_API_KEY> DD_LLMOBS_ENABLED=1 DD_LLMOBS_ML_APP=<YOUR_ML_APP_NAME> ddtrace-run python <YOUR_APP>.py
Note: If the Agent is running on a custom host or port, set DD_AGENT_HOST and DD_TRACE_AGENT_PORT accordingly.
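As a minimal sketch, these two variables can also be set from a wrapper script, assuming it runs before ddtrace initializes. The host name here is a placeholder; 8126 is the default APM port:

```python
import os

# Hedged sketch: point the tracer at a custom Agent host and port.
# "agent.internal" is a placeholder host name, not a value from this doc.
os.environ.setdefault("DD_AGENT_HOST", "agent.internal")
os.environ.setdefault("DD_TRACE_AGENT_PORT", "8126")
```

setdefault is used so values already exported in the shell take precedence over the in-code defaults.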
If you are running LLM Observability in a serverless environment:
Enable LLM Observability by setting the following environment variables:
DD_SITE=<YOUR_DATADOG_SITE> DD_API_KEY=<YOUR_API_KEY> DD_LLMOBS_ENABLED=1 DD_LLMOBS_ML_APP=<YOUR_ML_APP_NAME>
Note: In serverless environments, Datadog automatically flushes spans when the serverless function finishes running.
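For illustration, a serverless handler therefore needs no flush logic of its own. The sketch below assumes an AWS Lambda-style handler with the environment variables above set on the function; the event shape and function names are placeholders:

```python
# Hedged sketch of a serverless handler. With DD_LLMOBS_ENABLED=1 set on the
# function, spans are flushed automatically when the invocation ends, so the
# handler contains no Datadog-specific code.
def handler(event, context):
    prompt = event.get("prompt", "")
    # ... call Google Gemini here; the call is traced if ddtrace is enabled ...
    return {"statusCode": 200, "body": f"received {len(prompt)} chars"}
```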
Automatic Google Gemini tracing
The Google Gemini integration provides automatic tracing for the Google AI Python SDK’s content generation calls. This captures latency, errors, input and output messages, as well as token usage for Google Gemini operations.
The following methods are traced for both synchronous and asynchronous Google Gemini operations:
- Generating content (including streamed calls): model.generate_content(), model.generate_content_async()
- Chat messages: chat.send_message(), chat.send_message_async()
No additional setup is required for these methods.
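As a hedged illustration, application code that uses the Google AI Python SDK contains no Datadog-specific calls; tracing comes entirely from running under ddtrace-run. The snippet assumes the google-generativeai package, a GEMINI_API_KEY environment variable, and a model name, all of which are placeholders for your own setup:

```python
import os


def summarize(text: str) -> str:
    # Imported lazily so this module loads even where the SDK is not installed.
    # Install with: pip install google-generativeai (assumption, not from this doc).
    import google.generativeai as genai

    genai.configure(api_key=os.environ["GEMINI_API_KEY"])
    model = genai.GenerativeModel("gemini-1.5-flash")
    # generate_content() is one of the auto-traced methods: when this app runs
    # under ddtrace-run with DD_LLMOBS_ENABLED=1, latency, errors, messages,
    # and token usage are captured without any instrumentation code here.
    response = model.generate_content(f"Summarize: {text}")
    return response.text
```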
Validation
Validate that LLM Observability is properly capturing spans by checking your application logs for successful span creation. You can also run the following command to check the status of the ddtrace integration:
ddtrace-run --info
Look for the following message to confirm the setup:
Agent error: None
Debugging
If you encounter issues during setup, enable debug logging by passing the --debug flag:
ddtrace-run --debug python <YOUR_APP>.py
This displays any errors related to data transmission or instrumentation, including issues with Google Gemini traces.
Data Collected
Metrics
The Google Gemini integration does not include any metrics.
Service Checks
The Google Gemini integration does not include any service checks.
Events
The Google Gemini integration does not include any events.
Troubleshooting
Need help? Contact Datadog support.