",t};e.buildCustomizationMenuUi=t;function n(e){let t='
",t}function s(e){let n=e.filter.currentValue||e.filter.defaultValue,t='${e.filter.label}
`,e.filter.options.forEach(s=>{let o=s.id===n;t+=``}),t+="${e.filter.label}
APM Investigator is in Preview. To request access, contact Datadog Support.
The APM Investigator helps you diagnose and resolve application latency issues through a guided, step-by-step workflow. It consolidates analysis tools into a single interface so you can identify root causes and take action.
To start an investigation, launch it from an APM service page or a resource page.
To trigger the latency analysis, select two zones on the point plot: a zone of slow spans and a zone of normal (baseline) spans.
This comparison between the slow and normal spans drives all subsequent analysis.
The investigator identifies whether latency originates from your service or its downstream dependencies (services, databases, third-party APIs).
Analysis approach: The investigator compares trace data from both your selected slow and normal periods. To find the service responsible for the latency increase, it compares:
Execution time: Compares each service's “self-time”, defined as the time spent on its own processing, excluding waits for downstream dependencies, across the two datasets. The service with the largest absolute latency increase is the primary focus.
Request volume: Compares the number of inbound requests each service receives across the two datasets.
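To make the self-time comparison concrete, the following TypeScript sketch is illustrative only, not Datadog's implementation: it derives each span's self-time by subtracting its direct children's durations, aggregates it per service, and ranks services by the absolute increase between the slow and normal trace sets. All types and function names here are hypothetical.

```typescript
// Illustrative sketch only (hypothetical types and names, not Datadog's implementation).
interface Span {
  spanId: string;
  parentId?: string;
  service: string;
  duration: number; // ms
}

// Self-time per service for one trace: each span's duration minus the time spent
// in its direct children (assumes children do not overlap, for simplicity).
function selfTimeByService(trace: Span[]): Map<string, number> {
  const childTime = new Map<string, number>();
  for (const span of trace) {
    if (span.parentId !== undefined) {
      childTime.set(span.parentId, (childTime.get(span.parentId) ?? 0) + span.duration);
    }
  }
  const perService = new Map<string, number>();
  for (const span of trace) {
    const self = Math.max(0, span.duration - (childTime.get(span.spanId) ?? 0));
    perService.set(span.service, (perService.get(span.service) ?? 0) + self);
  }
  return perService;
}

// Rank services by the absolute increase in average self-time between the two datasets.
function rankBottlenecks(slowTraces: Span[][], normalTraces: Span[][]): Array<[string, number]> {
  const averageSelfTime = (traces: Span[][]): Map<string, number> => {
    const totals = new Map<string, number>();
    for (const trace of traces) {
      for (const [service, ms] of selfTimeByService(trace)) {
        totals.set(service, (totals.get(service) ?? 0) + ms);
      }
    }
    for (const [service, ms] of totals) {
      totals.set(service, ms / Math.max(traces.length, 1));
    }
    return totals;
  };
  const slow = averageSelfTime(slowTraces);
  const normal = averageSelfTime(normalTraces);
  return [...slow.entries()]
    .map(([service, ms]): [string, number] => [service, ms - (normal.get(service) ?? 0)])
    .sort((a, b) => b[1] - a[1]); // largest self-time increase first
}
```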
Based on this comprehensive analysis, the investigator recommends a service as the likely latency bottleneck. Expand the latency bottleneck section to see details about the comparison between slow and normal traces. A table surfaces the changes in self-time and in the number of inbound requests by service.
The following example shows two side-by-side flame graphs that compare slow traces against healthy traces in more detail. Use the arrows to cycle through example traces and click View to open the trace in a full-page view.
To investigate recent changes to a service, click the + icon that appears when you hover over a row to add it as context for your investigation.
The investigator then helps you determine if recent deployments on the service or the latency bottleneck service caused the latency increase.
The Recent changes section surfaces recent deployments on the service under investigation and on the identified latency bottleneck service.
Analysis approach: The APM Investigator analyzes this data in the background and flags this section as relevant to examine when a deployment occurred around the time of the latency increase you are investigating.
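As a rough illustration of that background check, the following hypothetical TypeScript sketch flags deployments that landed shortly before the latency increase. The Deployment shape, the look-back window, and the function name are assumptions, not Datadog's actual heuristics.

```typescript
// Hypothetical sketch: flag deployments that landed shortly before the latency increase.
interface Deployment {
  service: string;
  version: string;
  deployedAt: number; // epoch ms
}

function deploymentsNearLatencyIncrease(
  deployments: Deployment[],
  latencyIncreaseStart: number,      // epoch ms when latency began climbing
  windowMs: number = 30 * 60 * 1000, // look back 30 minutes (illustrative default)
): Deployment[] {
  return deployments.filter(
    (d) => d.deployedAt >= latencyIncreaseStart - windowMs && d.deployedAt <= latencyIncreaseStart,
  );
}
```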
The investigator also uses Tag Analysis to help you discover shared attributes that distinguish slow traces from healthy traces. Tag Analysis highlights dimensions with significant distribution differences between slow and normal datasets.
The section surfaces the tags that differ most between slow and normal traces, such as org_id, kubernetes_cluster, or datacenter.name. The APM Investigator only surfaces this section when dimensions show significant differentiation that is worth examining.
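To picture how such a comparison can work, the following TypeScript sketch scores each tag by how much its value distribution differs between slow and normal spans. The scoring (total variation distance) and the threshold are illustrative assumptions, not Datadog's actual Tag Analysis algorithm.

```typescript
// Illustrative sketch only: score tags by the divergence of their value distributions.
type Tags = Record<string, string>;

// Share of each value a tag takes across a set of spans.
function valueShares(spans: Tags[], tag: string): Map<string, number> {
  const counts = new Map<string, number>();
  let total = 0;
  for (const span of spans) {
    const value = span[tag];
    if (value === undefined) continue;
    counts.set(value, (counts.get(value) ?? 0) + 1);
    total++;
  }
  for (const [value, count] of counts) {
    counts.set(value, count / Math.max(total, 1));
  }
  return counts;
}

function divergentTags(slowSpans: Tags[], normalSpans: Tags[], threshold = 0.3): Array<[string, number]> {
  const allTags = new Set<string>();
  for (const span of [...slowSpans, ...normalSpans]) {
    Object.keys(span).forEach((tag) => allTags.add(tag));
  }
  const scored: Array<[string, number]> = [];
  for (const tag of allTags) {
    const slow = valueShares(slowSpans, tag);
    const normal = valueShares(normalSpans, tag);
    const values = new Set([...slow.keys(), ...normal.keys()]);
    let distance = 0;
    for (const value of values) {
      distance += Math.abs((slow.get(value) ?? 0) - (normal.get(value) ?? 0));
    }
    distance /= 2; // total variation distance in [0, 1]
    if (distance >= threshold) scored.push([tag, distance]);
  }
  return scored.sort((a, b) => b[1] - a[1]); // most divergent tags first
}
```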
Above the point plot, you can find a preview of how many end users, accounts, and application pages (for example, /checkout) are affected by the problem. This information is collected if you have enabled the connection between RUM and traces.
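If RUM and traces are not connected yet, the setup looks roughly like the following browser RUM SDK sketch. The identifiers, service name, and URL are placeholders; check the RUM SDK documentation for the exact options supported by your SDK version before copying this.

```typescript
// Minimal sketch, assuming the browser RUM SDK (@datadog/browser-rum). Values are placeholders.
import { datadogRum } from '@datadog/browser-rum';

datadogRum.init({
  applicationId: '<RUM_APPLICATION_ID>',
  clientToken: '<CLIENT_TOKEN>',
  site: 'datadoghq.com',
  service: 'frontend',
  // Inject trace headers on requests to these backends so RUM events link to APM traces.
  allowedTracingUrls: ['https://api.example.com'],
});
```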
The investigator consolidates findings from all analysis steps (latency bottleneck, recent changes, and tag analysis) to generate a root cause hypothesis. For example, “a deployment of this downstream service introduced the latency increase”.
The APM Investigator helps reduce Mean Time to Resolution (MTTR) by accelerating issue diagnosis and response through automated trace and change data analysis.