APM & Continuous Profiler

Datadog APM & Continuous Profiler gives deep visibility into your applications, with out-of-the-box performance dashboards for web services, queues, and databases to monitor requests, errors, and latency. Distributed traces seamlessly correlate to browser sessions, logs, profiles, synthetic checks, network, processes, and infrastructure metrics across hosts, containers, proxies, and serverless functions. Navigate directly from investigating a slow trace to the specific line of code causing the performance bottleneck with Code Hotspots.

Journey of a trace

Trace Journey

Traces start in your instrumented applications and flow into Datadog, which ingests 100% of traces up to 50 traces per second per APM host. For high-throughput services, you can view and adjust this rate with Ingestion Controls. All ingested traces are available for live search and analytics for 15 minutes. Use custom tag-based retention filters to keep exactly the traces that matter for your business; retained traces remain available for search and analytics for 15 days.
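The ingestion model above (accept everything up to a per-host, per-second budget) can be illustrated with a toy sliding-window rate limiter. This is a conceptual sketch only, not Datadog's implementation; the class and field names are invented for illustration.

```python
from collections import deque

class RateLimitedIngester:
    """Toy illustration of per-host trace ingestion capped at N traces/sec.

    Not Datadog's implementation; it only demonstrates the idea of
    ingesting 100% of traffic up to a per-second budget.
    """

    def __init__(self, max_per_second=50):
        self.max_per_second = max_per_second
        self.window = deque()  # timestamps of recently ingested traces
        self.ingested = []

    def offer(self, trace, now):
        # Drop timestamps older than one second from the sliding window.
        while self.window and now - self.window[0] >= 1.0:
            self.window.popleft()
        if len(self.window) < self.max_per_second:
            self.window.append(now)
            self.ingested.append(trace)
            return True
        return False  # over budget: this trace is not ingested this second

ingester = RateLimitedIngester(max_per_second=50)
# 60 traces arriving within the same second: only the first 50 fit the budget.
results = [ingester.offer({"id": i}, now=0.5) for i in range(60)]
```

Of the 60 offered traces, exactly 50 are accepted; the rest are over budget for that second.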

Send traces to Datadog

As you transition from monoliths to microservices, setting up Datadog APM across hosts, containers, or serverless functions takes just minutes.

Add the Datadog tracing library following the instructions for your environment and language, whether you are tracing a proxy, tracing across AWS Lambda functions and hosts, or using automatic instrumentation, dd-trace-api, OpenTracing, or OpenTelemetry.
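Conceptually, a tracing library wraps your functions to record a span (timing, resource name, error status) around each call. The pure-Python decorator below is an illustrative sketch of that idea, not the dd-trace API; real tracing libraries also propagate trace context across process and service boundaries.

```python
import functools
import time

SPANS = []  # in a real tracer, finished spans are flushed to the Datadog Agent

def trace(service, resource):
    """Minimal sketch of what automatic instrumentation does: record a
    span with timing and an error flag around each call. Illustrative only."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.monotonic()
            error = False
            try:
                return fn(*args, **kwargs)
            except Exception:
                error = True
                raise
            finally:
                SPANS.append({
                    "service": service,
                    "resource": resource,
                    "duration_s": time.monotonic() - start,
                    "error": error,
                })
        return wrapper
    return decorator

@trace(service="checkout", resource="process_order")
def process_order(order_id):
    return f"processed {order_id}"

result = process_order(42)
```

Each call to `process_order` now leaves behind a span describing the work done, which is the raw material for the dashboards and analytics described below.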


Explore Datadog APM

Now that you’ve configured your application to send traces to Datadog, start getting insights into your application performance:

Service map

Understand service dependencies with an auto-generated service map from your traces alongside service performance metrics and monitor alert statuses.

Service performance dashboards

Monitor service metrics for requests, errors, and latency percentiles. Analyze individual database queries or endpoints correlated with infrastructure.

Service Page

Continuous Profiler

Improve application latency and optimize compute resources with always-on production profiling to pinpoint the lines of code consuming the most CPU, memory, or I/O.
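To see the kind of data a profiler collects, here is a local analogue using Python's stdlib `cProfile`. This is not the Datadog Continuous Profiler (which runs always-on in production with low overhead); it only shows how profiling attributes CPU time to specific functions.

```python
import cProfile
import io
import pstats

def busy(n):
    # Deliberately CPU-heavy loop so it shows up as a hotspot.
    return sum(i * i for i in range(n))

profiler = cProfile.Profile()
profiler.enable()
busy(100_000)
profiler.disable()

# Summarize which functions consumed the most cumulative time.
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
report = out.getvalue()
```

The report names `busy` among the top consumers, the same kind of answer the Continuous Profiler gives for production workloads.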


Live search

Search 100% of your traces by any tag, live with no sampling, for 15 minutes.

Live Search

Live analytics

Analyze performance live by any tag on any span for 15 minutes, for example to identify impacted users or transactions during an outage.
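Aggregating spans by an arbitrary tag, as Live Analytics does, amounts to a group-by over span tags. A toy sketch with invented tag names (`customer.id`, `http.status_code` follow common tagging conventions but the data here is hypothetical):

```python
from collections import Counter

# Hypothetical spans observed during an outage window.
spans = [
    {"tags": {"customer.id": "a", "http.status_code": 500}},
    {"tags": {"customer.id": "a", "http.status_code": 200}},
    {"tags": {"customer.id": "b", "http.status_code": 500}},
    {"tags": {"customer.id": "c", "http.status_code": 200}},
]

def errors_by_tag(spans, tag):
    """Count error spans (status >= 500) grouped by an arbitrary tag."""
    counts = Counter()
    for span in spans:
        if span["tags"].get("http.status_code", 0) >= 500:
            counts[span["tags"].get(tag)] += 1
    return dict(counts)

impacted = errors_by_tag(spans, "customer.id")
```

Grouping the error spans by `customer.id` immediately shows which customers are affected.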

Live Analytics

Deployment tracking

Monitor service performance and compare it between versions during rolling, blue/green, shadow, or canary deployments.
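The core comparison behind deployment tracking is a per-version aggregate, for example error rate by `version` tag. A toy sketch over hypothetical spans:

```python
def error_rate_by_version(spans):
    """Compare error rates between deployed versions (e.g. canary vs stable)."""
    totals, errors = {}, {}
    for span in spans:
        v = span["version"]
        totals[v] = totals.get(v, 0) + 1
        errors[v] = errors.get(v, 0) + (1 if span["error"] else 0)
    return {v: errors[v] / totals[v] for v in totals}

spans = [
    {"version": "v1", "error": False},
    {"version": "v1", "error": False},
    {"version": "v1", "error": False},
    {"version": "v1", "error": True},
    {"version": "v2", "error": True},
    {"version": "v2", "error": True},
]
rates = error_rate_by_version(spans)
# In this hypothetical data, v2 (the canary) errors far more often than v1,
# a clear signal to halt or roll back the deployment.
```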

Versions on the Service Page

Trace retention and ingestion

Retain the traces that matter most to you with tag-based retention filters and perform analytics on all indexed spans for 15 days.
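A tag-based retention filter is essentially a predicate over the tags of a trace's spans: keep the trace if any span matches. A toy sketch (illustrative only, not Datadog's retention engine; tag names are examples):

```python
def make_retention_filter(required_tags):
    """Toy tag-based retention filter: keep a trace if any of its spans
    carries all of the required tag values."""
    def keep(trace):
        return any(
            all(span.get("tags", {}).get(k) == v for k, v in required_tags.items())
            for span in trace["spans"]
        )
    return keep

# Example: retain only traces that touched the checkout service with an error.
keep_checkout_errors = make_retention_filter({"service": "checkout", "error": "true"})

traces = [
    {"id": 1, "spans": [{"tags": {"service": "checkout", "error": "true"}}]},
    {"id": 2, "spans": [{"tags": {"service": "search", "error": "true"}}]},
]
retained = [t["id"] for t in traces if keep_checkout_errors(t)]
```

Only the first trace matches the filter and would be indexed for the 15-day retention window.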

Trace Retention and Ingestion

Generate custom metrics from all spans

Generate metrics with 15-month retention from all ingested spans to create and monitor key business and performance indicators.
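Deriving a metric from spans means reducing raw span data to a single number per interval, for example a latency percentile. The nearest-rank computation below is a toy sketch; a real pipeline would emit the result as a timeseries with long-term retention rather than compute it in place.

```python
import math

def latency_percentile(spans, pct):
    """Toy nearest-rank percentile over span durations, as one example of
    a metric that could be derived from ingested spans."""
    durations = sorted(s["duration_ms"] for s in spans)
    rank = max(0, math.ceil(pct / 100 * len(durations)) - 1)
    return durations[rank]

# Hypothetical span durations in milliseconds.
spans = [{"duration_ms": d} for d in [12, 15, 20, 22, 30, 41, 55, 60, 120, 500]]
p50 = latency_percentile(spans, 50)
p95 = latency_percentile(spans, 95)
```

Emitting p50 and p95 per minute turns high-volume span data into a compact indicator you can alert on and retain for months.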

Generate Custom Metrics from ingested spans

Connect logs and distributed traces

View your application logs side-by-side with the trace for a single distributed request with automatic trace-id injection.
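Trace-id injection works by stamping every log record emitted during a request with the active trace's id, so logs and traces can be joined on that field later. A sketch using Python's stdlib `logging` (the ambient `CURRENT_TRACE_ID` is a stand-in for the per-request context a real tracer maintains):

```python
import io
import logging

# Stand-in for the ambient trace context a real tracer tracks per request.
CURRENT_TRACE_ID = "abc123"

class TraceIdFilter(logging.Filter):
    """Stamp every log record with the active trace id so logs and
    traces can be correlated later."""
    def filter(self, record):
        record.trace_id = CURRENT_TRACE_ID
        return True

stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("%(message)s dd.trace_id=%(trace_id)s"))
logger = logging.getLogger("app")
logger.addFilter(TraceIdFilter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("charging card")
line = stream.getvalue().strip()
```

Every log line now carries a `dd.trace_id` field, which is what lets a log pipeline display logs side-by-side with the matching trace.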

Connect Logs And Traces

Connect Real User Monitoring and traces

Link between real user sessions and traces to see the exact traces that correspond to user experiences and reported issues.

Connect RUM sessions and traces

Connect synthetic test data and traces

Link simulated API tests to traces to find the root cause of failures across frontend, network, and backend requests.

Synthetic tests

Further Reading