The Continuous Profiler can compare two profiles or profile aggregations with each other to help you identify code performance improvements, regressions, and structural changes. You can compare a profile with another specific profile, or with an aggregation of profiles over a chosen time frame and set of tags.
This helps you see whether the service is taking more or less time, using more or less memory, making more or fewer allocations, throwing more or fewer exceptions, or running more or less code and making more or fewer calls than it did in the past.
Comparisons work best when the application is experiencing a similar workload (total requests) as it was in the past.
Some common scenarios for using comparison are:
- Comparing the two latest deployments. For example, verify whether the latest deployed fix lowers the number of memory allocations a method makes.
- Comparing two distinct time periods. For example, compare today's CPU consumption to last week's: which methods got better or worse in terms of CPU consumption?
- Comparing two different sets of tags. For example, compare profiles between different environments, availability zones, pods, canaries, or other custom Datadog tags, as shown in the sketch after this list.
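For tag-based comparisons to work, the service must report those tags with its profiles. Below is a minimal sketch using the Python ddtrace profiler; the service, environment, version, and tag values are placeholders, not values from this documentation:

```python
# Minimal sketch: tag a Python service so its profiles can be
# compared by version, environment, or custom tag in the Datadog UI.
# Assumes the ddtrace library; all names below are example values.
from ddtrace.profiling import Profiler

profiler = Profiler(
    service="checkout-service",   # hypothetical service name
    env="staging",                # compare staging against prod profiles
    version="v1.4.2",             # compare this deploy against v1.4.1
    tags={"availability-zone": "us-east-1a"},  # custom tag to compare on
)
profiler.start()

# ... application code runs here; profiles upload continuously ...
```

With two deployments tagged with distinct `version` values, each version can be selected as Profile A or Profile B in the comparison view.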
You can open different types of comparisons from different places in the UI.
On the Profiler Search view, select a profile from the list. Click Compare to open the comparison view. By default, the selected profile is shown as Profile B. For Profile A, select an aggregation time frame and tags, or a specific profile ID.
Select the metric you want to compare (the list varies based on code language). This can be helpful, for example, for looking at allocation spikes while investigating CPU profiles.
Take note of the legend colors: they identify structural changes in your code between versions, time ranges, or canaries, and how those changes affect performance.
Hover over methods in the profile to see specific metrics about the methods that are taking more or less time, or making more or fewer allocations, than in the compared profile.
On the Aggregation view, select a service to see its aggregated profile for a particular metric (for example, wall time) over the selected time frame. Then click Compare to compare it to the aggregated profile of another version.
Switch between Side-by-Side and Combined to find the view that is most helpful to you.
Side-by-side comparison is helpful when you want to retain the context of both the A and B profiles. In this mode, the flame graph on the left represents the profile scoped to the tags and time range of query A, while the flame graph on the right represents the profile scoped to the tags and time range of query B.
Methods highlighted in blue on the left flame graph were not seen running in profile B during the period the profile was captured, or among the set of tags queried. Similarly, methods highlighted in purple on the right flame graph were not seen in profile A.
The Combined comparison mode is helpful when you want to look at code performance changes in a single view. It computes one flame graph that averages method timings across A and B and shows the averaged difference in method timings between the two queries.
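To make the averaged-difference idea concrete, here is a hypothetical sketch (an illustration of the concept only, not Datadog's actual computation) that averages per-method timings across the profiles matched by each query and diffs the averages:

```python
# Illustration of the combined-mode concept: average each method's
# timings over the profiles matched by query A and query B, then
# report the difference. Timings are made-up CPU milliseconds.
a_profiles = {"parse_request": [120, 130, 110], "render": [40, 45, 42]}
b_profiles = {"parse_request": [90, 95, 85],    "render": [60, 58, 61]}

def mean(xs):
    return sum(xs) / len(xs)

for method in sorted(a_profiles.keys() | b_profiles.keys()):
    avg_a = mean(a_profiles.get(method, [0]))
    avg_b = mean(b_profiles.get(method, [0]))
    delta = avg_b - avg_a  # positive: slower in B; negative: faster in B
    print(f"{method}: A={avg_a:.0f}ms  B={avg_b:.0f}ms  diff={delta:+.0f}ms")
```

In this toy example, `parse_request` got faster between A and B while `render` regressed, which is the kind of per-method shift the combined flame graph surfaces.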
Removed methods are highlighted in green and revealed when you hover over the method frame. Added code is highlighted in red.
Helpful documentation, links, and articles: