Use an integration monitor to check whether an installed integration is running. For more detailed monitoring, use a metric monitor to track specific information about an integration.
To create an integration monitor in Datadog:
Create an integration metric monitor by following the instructions in the metric monitor documentation. Using the integration metric monitor type ensures the monitor can be selected by the integration monitor type facet on the Manage Monitors page.
Note: To configure an integration monitor, ensure that the integration submits metrics or service checks.
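As a rough sketch, an integration metric monitor can also be created programmatically through the Monitors API. The metric name (`postgresql.connections`), threshold, and keys below are illustrative assumptions, not values from this page:

```python
# Sketch: create an integration metric monitor via the Datadog Monitors API.
# The metric, threshold, message, and keys are placeholder assumptions.
import json
from urllib import request

def build_monitor_payload(metric, threshold):
    """Assemble the JSON body for POST /api/v1/monitor."""
    return {
        "type": "metric alert",
        "query": f"avg(last_5m):avg:{metric}{{*}} > {threshold}",
        "name": f"High {metric}",
        "message": "Value is elevated.",
        "options": {"thresholds": {"critical": threshold}},
    }

def create_monitor(payload, api_key, app_key, site="api.datadoghq.com"):
    """Send the payload to the Monitors API (network call; not run here)."""
    req = request.Request(
        f"https://{site}/api/v1/monitor",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "DD-API-KEY": api_key,
            "DD-APPLICATION-KEY": app_key,
        },
    )
    return request.urlopen(req)

payload = build_monitor_payload("postgresql.connections", 100)
```

The same payload shape works for updating an existing monitor; only the endpoint and HTTP method change.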
If there is only one check for the integration, no selection is necessary. Otherwise, select the check for your monitor.
Select the scope to monitor by choosing host names, tags, or All Monitored Hosts. If you need to exclude certain hosts, use the second field to list names or tags.
The first field uses AND logic: all listed host names and tags must be present on a host for it to be included. The second (exclusion) field uses OR logic: any host with a listed host name or tag is excluded.
In this section, choose between a Check Alert or a Cluster Alert:
A check alert tracks consecutive statuses submitted per check grouping and compares them to your thresholds.
Set up the check alert:
Trigger a separate alert for each <GROUP> reporting your check.
Check grouping is specified either from a list of known groupings or by you. For integration monitors, the per-check grouping is explicitly known. For example, the Postgres integration is tagged with db, host, and port.
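For illustration, the sketch below assembles a per-group check-alert query of the general shape used by service check monitors; the check name `postgres.can_connect` and the exact query shape are assumptions here, not taken from this page:

```python
# Sketch: build a check-alert query string grouped per check tag.
# The query shape and check name are assumptions for illustration.
def check_alert_query(check, scope, groups, n):
    """Assemble a grouped check-alert query string."""
    by = ",".join(f'"{g}"' for g in groups)
    return f'"{check}".over("{scope}").by({by}).last({n}).count_by_status()'

query = check_alert_query("postgres.can_connect", "*", ["db", "host", "port"], 3)
```

With this grouping, each distinct (db, host, port) combination alerts independently.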
Trigger the alert after selected consecutive failures: <NUMBER>
Each check run submits a single status of OK, WARN, CRITICAL, or UNKNOWN. Choose how many consecutive runs with the CRITICAL status trigger a notification. For example, your database might have a single blip where the connection fails. If you set this value to > 1, the blip is ignored, but a problem with more than one consecutive failure triggers a notification.
If the integration check reports an UNKNOWN status, choose Do not notify or Notify for Unknown status. If enabled, a state transition to UNKNOWN triggers a notification. On the monitor status page, the status bar of a group in the UNKNOWN state uses NODATA grey. The overall status of the monitor stays OK.
Resolve the alert after selected consecutive successes: <NUMBER>
Choose how many consecutive runs with the OK status resolve the alert.
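The trigger and resolve thresholds above behave like a small state machine over the stream of check statuses. A minimal sketch, with illustrative thresholds (only CRITICAL and OK are handled; WARN and UNKNOWN are ignored for simplicity):

```python
# Sketch of check-alert trigger/resolve behavior with consecutive-run thresholds.
class CheckAlert:
    """Alert after fail_n consecutive CRITICALs; resolve after ok_n consecutive OKs."""

    def __init__(self, fail_n=2, ok_n=2):
        self.fail_n, self.ok_n = fail_n, ok_n
        self.fails = self.oks = 0
        self.alerting = False

    def submit(self, status):
        if status == "CRITICAL":
            self.fails += 1
            self.oks = 0
            if self.fails >= self.fail_n:
                self.alerting = True
        elif status == "OK":
            self.oks += 1
            self.fails = 0
            if self.oks >= self.ok_n:
                self.alerting = False
        return self.alerting

m = CheckAlert(fail_n=2, ok_n=2)
m.submit("CRITICAL")   # a single blip: below the threshold, no alert
m.submit("OK")         # counter resets
m.submit("CRITICAL")
m.submit("CRITICAL")   # two consecutive failures: now alerting
```

Setting `fail_n` above 1 is what makes the single-blip failure described above get ignored.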
A cluster alert calculates the percent of checks in a given status and compares it to your thresholds.
Set up a cluster alert:
Decide whether or not to group your checks according to a tag. Ungrouped calculates the status percentage across all sources. Grouped calculates the status percentage on a per-group basis.
Select the percentage for the alert threshold.
Each check tagged with a distinct combination of tags is considered to be a distinct check in the cluster. Only the status of the last check of each combination of tags is taken into account in the cluster percentage calculation.
For example, a cluster check monitor grouped by environment can alert if more than 70% of the checks on any of the environments submit a CRITICAL status, and warn if more than 50% of the checks on any of the environments submit a WARN status.
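The percentage calculation described above can be sketched as follows: only the last status per distinct tag combination counts, and percentages are computed per group (here a hypothetical `env` tag, with sample data invented for illustration):

```python
# Sketch of the cluster-alert percentage calculation.
from collections import defaultdict

def cluster_status_pct(checks, group_key, status):
    """checks: ordered list of (tags_dict, status) submissions.
    Keep only the last status per distinct tag combination,
    then compute the percentage in `status` per group."""
    last = {}
    for tags, st in checks:
        last[tuple(sorted(tags.items()))] = (tags[group_key], st)
    per_group = defaultdict(list)
    for grp, st in last.values():
        per_group[grp].append(st)
    return {g: 100 * sum(s == status for s in sts) / len(sts)
            for g, sts in per_group.items()}

checks = [
    ({"env": "prod", "host": "a"}, "CRITICAL"),
    ({"env": "prod", "host": "b"}, "CRITICAL"),
    ({"env": "prod", "host": "c"}, "OK"),
    ({"env": "prod", "host": "a"}, "OK"),  # supersedes host a's earlier CRITICAL
]
pct = cluster_status_pct(checks, "env", "CRITICAL")
# prod: 1 of 3 distinct hosts is CRITICAL, so roughly 33% -- below a 70% threshold
```

Note how host a's earlier CRITICAL is discarded: only the most recent submission per tag combination enters the calculation, exactly as described above.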
See the Monitor configuration documentation for information on No data, Auto resolve, and New group delay options.
For detailed instructions on the Configure notifications and automations section, see the Notifications page.