Datadog APM & Distributed Tracing gives deep visibility into your applications with out-of-the-box performance dashboards for web services, queues, and databases to monitor requests, errors, and latency. Distributed traces seamlessly correlate to browser sessions, logs, synthetic checks, network, processes, and infrastructure metrics across hosts, containers, proxies, and serverless functions. Ingest 100% of your traces with no sampling, search and analyze them live for the last 15 minutes, and use tag-based retention filters to keep traces that matter for your business for 15 days.
Traces start in your instrumented applications and flow into Datadog, which ingests 100% of traces, up to 50 traces per second per APM host. For high-throughput services, you can view and adjust ingestion rates with Ingestion Controls. All ingested traces are available for live search and analytics for 15 minutes, and you can use custom tag-based retention filters to keep exactly the traces that matter for your business, with search and analytics on them for 15 days.
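The sampling decision behind ingestion controls is made deterministically inside the tracing libraries, so every service in a distributed trace agrees on whether to keep it. A minimal sketch of that idea, with an illustrative hash constant (the real samplers live in the Datadog tracing libraries and are not configured this way):

```python
# Sketch of deterministic, rate-based trace sampling (illustrative only;
# the actual sampler is implemented inside the Datadog tracing libraries).
KNUTH_FACTOR = 1111111111111111111  # multiplicative-hash constant (assumption)
MAX_UINT64 = 2**64

def keep_trace(trace_id: int, sample_rate: float) -> bool:
    """Deterministically decide whether to keep a trace at the given rate.

    Hashing the trace id means every tracer sampling at the same rate
    reaches the same keep/drop decision for the same trace.
    """
    return (trace_id * KNUTH_FACTOR) % MAX_UINT64 <= sample_rate * MAX_UINT64
```

Because the decision depends only on the trace id and the rate, a trace is never half-kept: either all of its spans are sampled or none are.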
As you transition from monoliths to microservices, setting up Datadog APM across hosts, containers, or serverless functions takes just minutes.
Add the Datadog Tracing Library by following the instructions for your environment and language, whether you are tracing a proxy or tracing across AWS Lambda functions and hosts, and whether you use automatic instrumentation, dd-trace-api, OpenTracing, or OpenTelemetry.
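Whatever library you choose, instrumentation boils down to wrapping operations in spans that record a name, timing, and errors, then exporting them. A minimal, self-contained sketch of that mechanism (the decorator, span fields, and in-memory exporter here are illustrative, not the Datadog library's API):

```python
# Illustrative sketch of what an instrumentation library does under the hood:
# wrap a function in a span recording its name, duration, and error status.
import functools
import time
import uuid

SPANS = []  # stand-in for the exporter that would ship spans to the Datadog Agent

def traced(operation: str):
    """Decorator that records a span for each call to the wrapped function."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            span = {"trace_id": uuid.uuid4().hex, "name": operation,
                    "start": time.time(), "error": False}
            try:
                return fn(*args, **kwargs)
            except Exception:
                span["error"] = True  # failed calls are tagged as errors
                raise
            finally:
                span["duration"] = time.time() - span["start"]
                SPANS.append(span)  # a real tracer sends this to the Agent
        return wrapper
    return decorator

@traced("web.request")
def handle_request():
    return "ok"
```

In practice automatic instrumentation applies this wrapping for you across supported frameworks, so most services need no manual span code at all.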
Now that you’ve configured your application to send traces to Datadog, start getting insights into your application performance:
Understand service dependencies with a service map auto-generated from your traces, shown alongside service performance metrics and monitor alert statuses.
Monitor service metrics for requests, errors, and latency percentiles. Drill down into database queries or endpoints, correlated with the underlying infrastructure.
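Latency percentiles are the core of these service metrics: p50 describes the typical request while p95/p99 expose the slow tail that averages hide. A small sketch of a nearest-rank percentile rollup over request latencies (Datadog computes these for you from spans; the function here is illustrative):

```python
# Sketch: the kind of latency rollup shown on a service's performance page.
def percentile(sorted_values, p):
    """Nearest-rank percentile over an ascending list of latencies."""
    idx = max(0, int(round(p / 100 * len(sorted_values))) - 1)
    return sorted_values[idx]

# Ten sample request latencies in milliseconds, including two slow outliers.
latencies_ms = sorted([12, 15, 11, 240, 13, 14, 12, 16, 13, 900])
p50 = percentile(latencies_ms, 50)  # typical request
p99 = percentile(latencies_ms, 99)  # tail latency
```

Here the p50 is 13 ms while the p99 is 900 ms, which is exactly why tail percentiles, not averages, drive latency alerting.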
Search 100% of your traces by any tag, live with no sampling, for 15 minutes.
Analyze performance by any tag on any span live for 15 minutes during an outage to identify impacted users or transactions.
Retain the traces that matter most to you with tag-based retention filters and perform analytics on all indexed spans for 15 days.
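A retention filter is conceptually a tag predicate applied to ingested spans: spans whose tags match the filter are indexed for 15 days. Filters are configured in the Datadog UI, not in code; this sketch (with illustrative tag names) only shows the matching idea:

```python
# Sketch of tag-based retention: keep spans whose tags match a filter.
# Retention filters are actually configured in the Datadog UI.
def retention_filter(required_tags: dict):
    """Build a predicate that keeps spans carrying all of the given tags."""
    def keep(span: dict) -> bool:
        tags = span.get("tags", {})
        return all(tags.get(k) == v for k, v in required_tags.items())
    return keep

# Hypothetical filter: retain every error from the checkout service.
keep_checkout_errors = retention_filter({"service": "checkout", "error": "true"})
```

Because filters key off tags rather than a flat sampling rate, you retain the business-critical traces (checkout errors, premium-tier users) instead of a random slice of traffic.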
Generate metrics with 15-month retention from all ingested spans to create and monitor key business and performance indicators.
Monitor service performance and compare between versions for rolling, blue/green, shadow, or canary deployments.
View your application logs side-by-side with the trace for a single distributed request with automatic trace-id injection.
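Trace-id injection works by stamping the active trace's id onto every log record, so the logs backend can join logs to traces. The Datadog libraries do this automatically when log injection is enabled; the sketch below shows the mechanism with the standard `logging` module and a hypothetical `current_trace_id()` helper:

```python
# Sketch of trace-id injection into logs. A real tracer reads the id from the
# active span context; current_trace_id() here is a placeholder.
import io
import logging

def current_trace_id() -> str:
    return "abc123"  # placeholder; a real tracer returns the live trace id

class TraceIdFilter(logging.Filter):
    def filter(self, record):
        record.trace_id = current_trace_id()  # attach the id to every record
        return True

stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("%(message)s dd.trace_id=%(trace_id)s"))
logger = logging.getLogger("app")
logger.addHandler(handler)
logger.addFilter(TraceIdFilter())
logger.setLevel(logging.INFO)

logger.info("charging card")
```

With the id present on each line, selecting a span in Datadog can pull up exactly the log lines emitted during that request, and vice versa.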
Link between real user sessions and traces to see the exact traces that correspond to user experiences and reported issues.
Link simulated API tests to traces to find the root cause of failures across frontend, network and backend requests.
Improve code efficiency with an always-on production profiler that pinpoints the lines of code consuming the most CPU, memory, or I/O.
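The continuous profiler collects this data in production with low overhead via sampling. To see the shape of the output it produces, here is a one-off CPU profile using only the Python standard library (an illustration of the data, not how the Datadog profiler is enabled):

```python
# Sketch: a one-off CPU profile with the standard library, showing the kind of
# per-function CPU breakdown a continuous profiler gathers in production.
import cProfile
import io
import pstats

def busy():
    """A deliberately CPU-bound function for the profiler to catch."""
    return sum(i * i for i in range(100_000))

profiler = cProfile.Profile()
profiler.enable()
busy()
profiler.disable()

out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
report = out.getvalue()  # table of call counts and time per function
```

The report attributes time to `busy` and its generator expression, which is the same question ("which code costs the most?") the production profiler answers continuously, correlated with traces.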
Additional helpful documentation, links, and articles: