
Datadog Application Performance Monitoring (APM) gives deep visibility into your applications with out-of-the-box performance dashboards for web services, queues, and databases to monitor requests, errors, and latency. Distributed traces seamlessly correlate to browser sessions, logs, profiles, synthetic checks, network, processes, and infrastructure metrics across hosts, containers, proxies, and serverless functions. Navigate directly from investigating a slow trace to identifying the specific line of code causing performance bottlenecks with code hotspots.

For an introduction to terminology used in Datadog APM, see APM Terms and Concepts.

Getting started

As you transition from monoliths to microservices, setting up Datadog APM across hosts, containers, or serverless functions takes just minutes.

Beta: Single Step APM Instrumentation - Enable APM instrumentation when you install the Datadog Agent to get started quickly with application performance monitoring. This option automatically instruments your services with no code changes required. For more information, read Single Step APM Instrumentation.

Read Application Instrumentation to start using Datadog APM.

Add the Datadog Tracing Library for your environment and language, including tracing a proxy, tracing AWS Lambda functions, or using automatic or custom instrumentation.
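To make the idea of instrumentation concrete, here is a minimal conceptual sketch of what a tracing library does under the hood: each traced block of code becomes a span carrying a name, tags, timing information, and a parent reference, and nested spans form a trace tree. The `MiniTracer` class and its field names are illustrative only, not the ddtrace API.

```python
import time
import uuid
from contextlib import contextmanager

# Hypothetical minimal tracer: each traced block becomes a span with a
# name, tags, timing, and a parent id, so nested calls form a trace tree.
class MiniTracer:
    def __init__(self):
        self.finished = []   # spans that would be flushed to the backend
        self._stack = []     # currently open spans (innermost last)

    @contextmanager
    def trace(self, name, **tags):
        span = {
            "span_id": uuid.uuid4().hex[:16],
            "parent_id": self._stack[-1]["span_id"] if self._stack else None,
            "name": name,
            "tags": tags,
            "start": time.monotonic(),
        }
        self._stack.append(span)
        try:
            yield span
        finally:
            span["duration"] = time.monotonic() - span["start"]
            self._stack.pop()
            self.finished.append(span)

tracer = MiniTracer()

# A request span wrapping a database span, as custom instrumentation
# around two pieces of application code would produce.
with tracer.trace("web.request", resource="GET /checkout"):
    with tracer.trace("db.query", statement="SELECT 1"):
        time.sleep(0.01)
```

Real tracing libraries add context propagation across threads, processes, and network hops, but the span lifecycle is the same.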

Control and manage the data flowing into and retained by Datadog

APM Lifecycle

Traces start in your instrumented applications and flow into Datadog. For high-throughput services, you can view and control ingestion using Ingestion Controls. All ingested traces are available for live search and analytics for 15 minutes. Custom tag-based retention filters let you keep exactly the traces that matter to your business, retaining them for 15 days of search and analytics.

Trace Retention and Ingestion
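The behavior of a tag-based retention filter can be sketched as a predicate over traces: keep everything that matches a tag query, plus a small sample of the rest. The function names, trace shape, and fallback rate below are illustrative assumptions, not Datadog's implementation.

```python
import random

# Hypothetical sketch of a tag-based retention filter: retain 100% of
# traces matching a tag query, and a small random sample of the rest.
def make_retention_filter(tag_key, tag_value, fallback_rate=0.01,
                          rng=random.random):
    def keep(trace):
        # trace is assumed to look like {"tags": {...}, ...}
        if trace["tags"].get(tag_key) == tag_value:
            return True  # retained for 15 days of search and analytics
        return rng() < fallback_rate
    return keep

# Keep every trace for the checkout endpoint, sample everything else.
keep_checkout = make_retention_filter("resource", "POST /checkout")
```

In Datadog itself, retention filters are configured in the UI rather than in code; the sketch only shows the matching logic.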

Generate custom metrics from spans

Generate metrics with 15-month retention from all ingested spans to create and monitor key business and performance indicators over time.

Generate Custom Metrics from ingested spans
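Generating a metric from spans amounts to aggregating a span attribute (such as duration) grouped by a tag. The sketch below assumes a simple span shape and computes hit count and p95 latency per resource; it is an illustration of the aggregation, not Datadog's metric pipeline.

```python
from collections import defaultdict
from statistics import quantiles

# Sketch: turn a batch of ingested spans into per-resource metrics
# (hit count and 95th-percentile latency). Span shape is hypothetical.
def spans_to_metrics(spans):
    by_resource = defaultdict(list)
    for span in spans:
        by_resource[span["resource"]].append(span["duration_ms"])
    metrics = {}
    for resource, durations in by_resource.items():
        metrics[resource] = {
            "hits": len(durations),
            "p95_ms": quantiles(durations, n=20)[-1],  # 95th percentile
        }
    return metrics
```

In Datadog, such span-based metrics get the standard 15-month metric retention, so they can back dashboards and monitors long after the underlying traces age out.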

Correlate traces with other telemetry

View your application logs side-by-side with the trace for a single distributed request with automatic trace-id injection. Link between real user sessions and traces to see the exact traces that correspond to user experiences and reported issues. Link synthetic tests to traces to find the root cause of failures across frontend, network, and backend requests.

Connect Logs And Traces
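Trace-id injection works by stamping every log record with the id of the currently active trace, so logs and traces can be joined on that field. Here is a minimal sketch using Python's stdlib logging and a context variable; the variable and field names are illustrative, not the ddtrace integration.

```python
import io
import logging
from contextvars import ContextVar

# Hypothetical context variable holding the active trace id.
current_trace_id = ContextVar("current_trace_id", default=None)

# A logging filter that stamps each record with the active trace id,
# mimicking what automatic trace-id injection does.
class TraceIdFilter(logging.Filter):
    def filter(self, record):
        record.trace_id = current_trace_id.get() or "-"
        return True

buf = io.StringIO()
handler = logging.StreamHandler(buf)
handler.setFormatter(logging.Formatter("%(message)s trace_id=%(trace_id)s"))
handler.addFilter(TraceIdFilter())

logger = logging.getLogger("app")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Inside a traced request, the tracer would set the id; we set it by hand.
current_trace_id.set("8f2a6c0d9b3e4f1a")
logger.info("payment processed")
```

With the id present on both sides, a log backend can pivot from any log line to the full distributed trace and back.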

Explore live and indexed traces

Search your ingested traces by any tag, live for 15 minutes. Analyze performance by any tag on any span during an outage to identify impacted users or transactions. View maps showing request flows and other visualizations to help you understand what your code is doing and where its performance can be improved.
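Analyzing performance "by any tag on any span" is, conceptually, a group-by over spans. As an illustration of the kind of breakdown used during an outage, the sketch below computes error rate per value of an arbitrary tag; the span shape and tag name are assumptions.

```python
from collections import defaultdict

# Sketch: break down error rate by an arbitrary span tag (for example
# "customer_tier") to see which group an outage is impacting.
def error_rate_by_tag(spans, tag_key):
    totals = defaultdict(int)
    errors = defaultdict(int)
    for span in spans:
        group = span["tags"].get(tag_key, "unknown")
        totals[group] += 1
        if span.get("error"):
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}
```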

Gain deep insight into your services

Understand service dependencies with a service map auto-generated from your traces, displayed alongside service performance metrics and monitor alert statuses.

Monitor service metrics for requests, errors, and latency percentiles. Analyze individual database queries or endpoints correlated with infrastructure.

Service Page

Monitor service performance and compare between versions for rolling, blue/green, shadow, or canary deployments.

Versions on the Service Page

Profile your production code

Improve application latency and optimize compute resources with always-on production profiling to pinpoint the lines of code consuming the most CPU, memory, or I/O.
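Datadog's continuous profiler runs in production with low overhead; as a local illustration of the kind of data CPU profiling surfaces, Python's stdlib cProfile attributes time to individual functions. The `slow_hash` workload below is a made-up example.

```python
import cProfile
import io
import pstats

# A deliberately CPU-heavy function standing in for a real hot path.
def slow_hash(data):
    total = 0
    for b in data:
        total = (total * 31 + b) % 1_000_003
    return total

profiler = cProfile.Profile()
profiler.enable()
slow_hash(bytes(range(256)) * 2000)
profiler.disable()

# Render the top functions by cumulative time, as a profiler UI would.
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
report = out.getvalue()
```

A continuous profiler produces the same kind of per-function attribution, but aggregated over time and correlated with traces so you can jump from a slow span to the hot code path.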


Further Reading