Supported OS
Monitor, troubleshoot, and evaluate your LLM-powered applications (for example, chatbots or data extraction tools) built using LangChain.
Use LLM Observability to investigate the root cause of issues, monitor operational performance, and evaluate the quality, privacy, and safety of your LLM applications.
See the LLM Observability tracing view video for an example of how you can investigate a trace.
Get cost estimation, prompt and completion sampling, error tracking, performance metrics, and more out of LangChain Python library requests using Datadog metrics, APM, and logs.
You can enable LLM Observability in different environments. Follow the appropriate setup based on your scenario:
Install the ddtrace package:
pip install ddtrace
Start your application with the following command, enabling Agentless mode:
DD_SITE=<YOUR_DATADOG_SITE> DD_API_KEY=<YOUR_API_KEY> DD_LLMOBS_ENABLED=1 DD_LLMOBS_AGENTLESS_ENABLED=1 DD_LLMOBS_ML_APP=<YOUR_ML_APP_NAME> ddtrace-run python <YOUR_APP>.py
Make sure the Agent is running and that APM and StatsD are enabled. For example, use the following command with Docker:
docker run -d \
--cgroupns host \
--pid host \
-v /var/run/docker.sock:/var/run/docker.sock:ro \
-v /proc/:/host/proc/:ro \
-v /sys/fs/cgroup/:/host/sys/fs/cgroup:ro \
-e DD_API_KEY=<DATADOG_API_KEY> \
-p 127.0.0.1:8126:8126/tcp \
-p 127.0.0.1:8125:8125/udp \
-e DD_DOGSTATSD_NON_LOCAL_TRAFFIC=true \
-e DD_APM_ENABLED=true \
gcr.io/datadoghq/agent:latest
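Once the container is running, you can confirm that the Agent's trace port (8126) is reachable from the application host. The helper below is an illustrative stdlib check, not a Datadog tool:

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Check the Agent's APM (trace) port published in the docker run command above.
print(port_reachable("127.0.0.1", 8126))
```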
Install the ddtrace package if it isn't already installed:
pip install ddtrace
Start your application using the ddtrace-run command to automatically enable tracing:
DD_SITE=<YOUR_DATADOG_SITE> DD_API_KEY=<YOUR_API_KEY> DD_LLMOBS_ENABLED=1 DD_LLMOBS_ML_APP=<YOUR_ML_APP_NAME> ddtrace-run python <YOUR_APP>.py
Note: If the Agent is running on a custom host or port, set DD_AGENT_HOST and DD_TRACE_AGENT_PORT accordingly.
Install the Datadog-Python and Datadog-Extension Lambda layers as part of your AWS Lambda setup.
Enable LLM Observability by setting the following environment variables:
DD_SITE=<YOUR_DATADOG_SITE> DD_API_KEY=<YOUR_API_KEY> DD_LLMOBS_ENABLED=1 DD_LLMOBS_ML_APP=<YOUR_ML_APP_NAME>
Note: In serverless environments, Datadog automatically flushes spans when the Lambda function finishes running.
LangChain integration is automatically enabled when LLM Observability is configured. This captures latency, errors, input/output messages, and token usage for LangChain operations.
The following methods are traced for both synchronous and asynchronous LangChain operations:
- llm.invoke(), llm.ainvoke()
- chat_model.invoke(), chat_model.ainvoke()
- chain.invoke(), chain.ainvoke(), chain.batch(), chain.abatch()
- OpenAIEmbeddings.embed_documents(), OpenAIEmbeddings.embed_query()
No additional setup is required for these methods.
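Conceptually, the integration wraps each of these entry points so that every call records its latency, error status, and inputs/outputs as a span. The sketch below is a simplified illustration of that wrapping pattern using only the standard library; it is not ddtrace's actual implementation:

```python
import functools
import time

def traced(func):
    """Record duration and error status around a call (conceptual sketch)."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        span = {"name": func.__qualname__, "error": 0}
        start = time.perf_counter_ns()
        try:
            return func(*args, **kwargs)
        except Exception:
            span["error"] = 1
            raise
        finally:
            # ddtrace submits span data to the Agent; here we just print it.
            span["duration_ns"] = time.perf_counter_ns() - start
            print(span)
    return wrapper

@traced
def invoke(prompt):
    # Stand-in for a LangChain llm.invoke() call.
    return f"echo: {prompt}"

invoke("hello")
```

Because the wrapping happens inside the library's own methods, your application code calls `invoke()` exactly as before.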
Validate that LLM Observability is properly capturing spans by checking your application logs for successful span creation. You can also run the following command to check the status of the ddtrace integration:
ddtrace-run --info
Look for the following message to confirm the setup:
Agent error: None
If you encounter issues during setup, enable debug logging by passing the --debug flag:
ddtrace-run --debug
This displays any errors related to data transmission or instrumentation, including issues with LangChain traces.
Enable APM and StatsD in your Datadog Agent. For example, in Docker:
docker run -d --cgroupns host \
--pid host \
-v /var/run/docker.sock:/var/run/docker.sock:ro \
-v /proc/:/host/proc/:ro \
-v /sys/fs/cgroup/:/host/sys/fs/cgroup:ro \
-e DD_API_KEY=<DATADOG_API_KEY> \
-p 127.0.0.1:8126:8126/tcp \
-p 127.0.0.1:8125:8125/udp \
-e DD_DOGSTATSD_NON_LOCAL_TRAFFIC=true \
-e DD_APM_ENABLED=true \
gcr.io/datadoghq/agent:latest
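The 8125/udp port published above accepts metrics in the plain-text DogStatsD datagram format (`name:value|type|#tags`). The snippet below sketches what a client sends over the wire; in practice the official datadog Python client builds and sends these datagrams for you:

```python
import socket

def format_metric(name: str, value: float, mtype: str, tags: list[str]) -> str:
    """Build a DogStatsD datagram, e.g. 'my.metric:1|g|#env:staging'."""
    payload = f"{name}:{value}|{mtype}"
    if tags:
        payload += "|#" + ",".join(tags)
    return payload

datagram = format_metric("example.metric", 42, "g", ["env:staging"])
print(datagram)  # example.metric:42|g|#env:staging

# Fire-and-forget over UDP to the Agent's DogStatsD port.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(datagram.encode("utf-8"), ("127.0.0.1", 8125))
sock.close()
```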
Install the Datadog APM Python library.
pip install "ddtrace>=1.17"
Prefix your LangChain Python application command with ddtrace-run.
DD_SERVICE="my-service" DD_ENV="staging" DD_API_KEY=<DATADOG_API_KEY> ddtrace-run python <your-app>.py
Note: If the Agent uses a non-default hostname or port, also set DD_AGENT_HOST, DD_TRACE_AGENT_PORT, or DD_DOGSTATSD_PORT.
See the APM Python library documentation for advanced usage and all available configuration options.
To enable log prompt and completion sampling, set the DD_LANGCHAIN_LOGS_ENABLED=1 environment variable. By default, 10% of traced requests emit logs containing the prompts and completions.
To adjust the log sample rate, see the APM library documentation.
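The 10% default behaves like any rate-based sampler: each traced request independently passes or fails a coin flip weighted by the sample rate. A minimal illustrative sketch of that decision (not ddtrace's internals):

```python
import random

def should_sample(rate: float = 0.1) -> bool:
    """Return True for roughly `rate` of calls."""
    return random.random() < rate

# Over many requests, about 10% are sampled.
sampled = sum(should_sample(0.1) for _ in range(100_000))
print(sampled)  # close to 10,000
```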
Note: Logs submission requires DD_API_KEY to be specified when running ddtrace-run.
Validate that the APM Python library can communicate with your Agent using:
ddtrace-run --info
You should see the following output:
Agent error: None
Pass the --debug flag to ddtrace-run to enable debug logging.
ddtrace-run --debug
This displays any errors sending data:
ERROR:ddtrace.internal.writer.writer:failed to send, dropping 1 traces to intake at http://localhost:8126/v0.5/traces after 3 retries ([Errno 61] Connection refused)
WARNING:ddtrace.vendor.dogstatsd:Error submitting packet: [Errno 61] Connection refused, dropping the packet and closing the socket
DEBUG:ddtrace.contrib._trace_utils_llm.py:sent 2 logs to 'http-intake.logs.datadoghq.com'
| Metric | Type | Description |
| --- | --- | --- |
| langchain.request.duration | gauge | Request duration distribution. Shown as nanoseconds. |
| langchain.request.error | count | Number of errors. Shown as errors. |
| langchain.tokens.completion | gauge | Number of tokens used in the completion of a response. Shown as tokens. |
| langchain.tokens.prompt | gauge | Number of tokens used in the prompt of a request. Shown as tokens. |
| langchain.tokens.total | gauge | Total number of tokens used in a request and response. Shown as tokens. |
| langchain.tokens.total_cost | count | Estimated cost in USD based on token usage. Shown as dollars. |
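The langchain.tokens.total_cost metric is derived from token counts multiplied by per-model prices. An illustrative calculation of that arithmetic (the prices below are made up for the example; real rates come from each provider's price list):

```python
# Hypothetical per-1K-token prices in USD; actual rates vary by model and provider.
PRICES = {"prompt": 0.0015, "completion": 0.002}

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate request cost in USD from token usage."""
    return (prompt_tokens / 1000) * PRICES["prompt"] + \
           (completion_tokens / 1000) * PRICES["completion"]

print(round(estimate_cost(530, 170), 6))  # 0.001135
```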
The LangChain integration does not include any events.
The LangChain integration does not include any service checks.
Need help? Contact Datadog support.