Datadog APM & Distributed Tracing gives deep visibility into your applications, with out-of-the-box performance dashboards for web services, queues, and databases that monitor requests, errors, and latency. Distributed traces seamlessly correlate with browser sessions, logs, synthetic checks, and network, process, and infrastructure metrics across hosts, containers, proxies, and serverless functions. During an outage, search 100% of ingested traces live with no sampling, while Datadog intelligently retains traces that represent errors, high latency, or unique code paths for later analysis.
As you transition from monoliths to microservices, setting up Datadog APM across hosts, containers or serverless functions takes just minutes.
1. Install and configure the Datadog Agent in AWS, GCP, Azure, Kubernetes, ECS, Fargate, PCF, Heroku, on-premises environments, and more.
2. Add a tracing library to your application or proxy service to start sending traces to the Datadog Agent.
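For a Python service on a Linux host, the two steps above look roughly like the following sketch. The API key is a placeholder, and the install-script URL reflects Datadog's documented one-liner; prefer the in-app install instructions for your exact platform and site.

```shell
# Step 1: install the Datadog Agent (placeholder API key; check the
# in-app instructions for the current script and your DD_SITE value).
DD_API_KEY="<YOUR_API_KEY>" DD_SITE="datadoghq.com" bash -c \
  "$(curl -L https://install.datadoghq.com/scripts/install_script_agent7.sh)"

# Step 2: add the tracing library and run the app under ddtrace-run,
# which auto-instruments supported frameworks and libraries.
pip install ddtrace
ddtrace-run python app.py
```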
Now that you’ve configured your application to send traces to Datadog, start getting insights into your application performance:
Understand service dependencies with a service map auto-generated from your traces, shown alongside service performance metrics and monitor alert statuses.
Monitor service metrics for requests, errors, and latency percentiles. Drill down into database queries or endpoints and correlate them with infrastructure metrics.
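Datadog computes latency percentiles such as p50, p95, and p99 server-side from your spans; the following stdlib-only sketch just illustrates what a nearest-rank percentile over a batch of request latencies means (the function and sample data are illustrative, not part of any Datadog API):

```python
from math import ceil

def percentile(latencies_ms, q):
    """Nearest-rank percentile: smallest value with at least q% of samples at or below it."""
    data = sorted(latencies_ms)
    rank = max(1, ceil(q / 100 * len(data)))
    return data[rank - 1]

# 100 request latencies of 1..100 ms
samples = list(range(1, 101))
print(percentile(samples, 50), percentile(samples, 95), percentile(samples, 99))
# → 50 95 99
```

A p99 of 99 ms here means 99% of requests completed in 99 ms or less; alerting on p95/p99 rather than the average surfaces tail latency that averages hide.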
Monitor service performance and compare between versions for rolling, blue/green, shadow, or canary deployments.
Search for any span by any tag across 100% of your ingested traces, live with no sampling, for 15 minutes.
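In Datadog this search happens in the Trace Explorer over your ingested spans; the in-memory spans and `search` helper below are purely illustrative, sketching the tag-filter idea of matching every requested tag:

```python
# Hypothetical in-memory spans; real spans carry these as tags set by
# your instrumentation and are queried in Datadog's Trace Explorer.
spans = [
    {"service": "checkout", "tags": {"env": "prod", "customer.tier": "enterprise"}},
    {"service": "checkout", "tags": {"env": "staging", "customer.tier": "free"}},
    {"service": "auth", "tags": {"env": "prod", "customer.tier": "enterprise"}},
]

def search(spans, filters):
    """Return spans whose tags match every key=value pair in filters."""
    return [s for s in spans if all(s["tags"].get(k) == v for k, v in filters.items())]

hits = search(spans, {"env": "prod", "customer.tier": "enterprise"})
print([s["service"] for s in hits])
# → ['checkout', 'auth']
```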
View your application logs side by side with the trace for a single distributed request, thanks to automatic trace ID injection.
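With ddtrace, enabling `DD_LOGS_INJECTION=true` adds `dd.trace_id`/`dd.span_id` fields to your logs automatically. The stdlib-only sketch below fakes that correlation with a hard-coded trace ID and Python's `logging` formatter, just to show what the injected field buys you:

```python
import io
import logging

# Hypothetical trace ID for illustration; ddtrace supplies the real one
# from the active span when log injection is enabled.
TRACE_ID = "1234567890"

stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter(
    "%(levelname)s [dd.trace_id=%(trace_id)s] %(message)s"))
log = logging.getLogger("payments")
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info("charging card", extra={"trace_id": TRACE_ID})
print(stream.getvalue().strip())
# → INFO [dd.trace_id=1234567890] charging card
```

Because every log line carries the trace ID, Datadog can pivot from a single distributed request to exactly the log lines it produced.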
Trace across AWS Lambda and hosts to view complete traces across your hybrid infrastructure.
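Cross-service tracing works by propagating the trace context in request headers. The header names below are Datadog's documented propagation headers; the `inject`/`extract` helpers are an illustrative sketch, not the ddtrace API (which handles propagation automatically):

```python
# Minimal sketch of trace context crossing a process boundary.

def inject(headers, trace_id, parent_id):
    """Write the current trace context into outgoing request headers."""
    headers["x-datadog-trace-id"] = str(trace_id)
    headers["x-datadog-parent-id"] = str(parent_id)
    return headers

def extract(headers):
    """Read the trace context from incoming request headers."""
    return int(headers["x-datadog-trace-id"]), int(headers["x-datadog-parent-id"])

# Service A injects its context into an outgoing request...
outgoing = inject({}, trace_id=42, parent_id=7)
# ...and the downstream Lambda or host extracts it to join the same trace.
print(extract(outgoing))
# → (42, 7)
```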
Analyze performance by application, infrastructure, or custom tags such as datacenter, availability zone, deployment version, domain, user, checkout amount, customer, and more.
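Custom tags come from your instrumentation; in ddtrace that is roughly `with tracer.trace("checkout") as span: span.set_tag(...)`. The minimal `Span` class below is an illustrative stand-in, not the real library, showing how business attributes end up queryable on a span:

```python
# Illustrative stand-in for a tracing span; the real ddtrace span
# exposes a set_tag(key, value) method with the same shape.
class Span:
    def __init__(self, name):
        self.name = name
        self.tags = {}

    def set_tag(self, key, value):
        self.tags[key] = value

span = Span("checkout")
span.set_tag("availability_zone", "us-east-1a")
span.set_tag("customer.tier", "enterprise")
span.set_tag("checkout.amount", 42.50)
print(sorted(span.tags))
# → ['availability_zone', 'checkout.amount', 'customer.tier']
```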
Link simulated API tests to traces to find the root cause of failures across frontend, network and backend requests.
Improve code efficiency with an always-on production profiler that pinpoints the lines of code consuming the most CPU, memory, or I/O.
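Datadog's Continuous Profiler does this in production with low overhead; purely to illustrate the concept, the sketch below uses Python's stdlib `cProfile` to attribute CPU time to a hypothetical hot function:

```python
import cProfile
import io
import pstats

def slow():
    # Hypothetical hot path: dominates CPU time in this toy program.
    return sum(i * i for i in range(100_000))

pr = cProfile.Profile()
pr.enable()
slow()
pr.disable()

out = io.StringIO()
# Rank functions by cumulative CPU time and show the top three.
pstats.Stats(pr, stream=out).sort_stats("cumulative").print_stats(3)
print(out.getvalue())
```

The report ranks `slow` at the top, which is the same question a continuous profiler answers, continuously and per line, in production.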
Mix and match instrumentation approaches: automatic instrumentation, the dd-trace API, OpenTracing, and OpenTelemetry exporters all interoperate seamlessly.
Additional helpful documentation, links, and articles: