The Apache Spark receiver allows for collection of Apache Spark metrics and access to the Spark Overview dashboard. Configure the receiver according to the specifications of the latest version of the apachesparkreceiver.
For more information, see the OpenTelemetry project documentation for the Apache Spark receiver.
To collect Apache Spark metrics with OpenTelemetry for use with Datadog:

1. Configure the Apache Spark receiver in your OpenTelemetry Collector configuration, as shown in the sketch below.
2. Ensure the Collector is configured to export metrics to Datadog.

See the Apache Spark receiver documentation for detailed configuration options and requirements.
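As a concrete starting point, a minimal Collector configuration might look like the following sketch. The endpoint and collection interval are placeholder values assuming a local Spark UI on port 4040; consult the apachesparkreceiver documentation for the full set of options.

```yaml
receivers:
  apachespark:
    # Placeholder: the Spark application's monitoring REST API (Spark UI).
    endpoint: http://localhost:4040
    # Placeholder: how often to poll the Spark API.
    collection_interval: 60s

exporters:
  datadog:
    api:
      # Read the Datadog API key from the environment.
      key: ${env:DD_API_KEY}

service:
  pipelines:
    metrics:
      receivers: [apachespark]
      exporters: [datadog]
```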
| OTel metric | Datadog metric | Description | Filter | Transform |
|---|---|---|---|---|
| spark.driver.block_manager.disk.usage | spark.driver.disk_used | Disk space used by the BlockManager. | | × 1048576 |
| spark.driver.block_manager.memory.usage | spark.driver.memory_used | Memory usage for the driver's BlockManager. | | × 1048576 |
| spark.driver.dag_scheduler.stage.count | spark.stage.count | Number of stages the DAGScheduler is either running or needs to run. | | |
| spark.executor.disk.usage | spark.executor.disk_used | Disk space used by this executor for RDD storage. | | |
| spark.executor.disk.usage | spark.rdd.disk_used | Disk space used by this executor for RDD storage. | | |
| spark.executor.memory.usage | spark.executor.memory_used | Storage memory used by this executor. | | |
| spark.executor.memory.usage | spark.rdd.memory_used | Storage memory used by this executor. | | |
| spark.job.stage.active | spark.job.num_active_stages | Number of active stages in this job. | | |
| spark.job.stage.result | spark.job.num_completed_stages | Number of stages with a specific result in this job. | job_result: completed | |
| spark.job.stage.result | spark.job.num_failed_stages | Number of stages with a specific result in this job. | job_result: failed | |
| spark.job.stage.result | spark.job.num_skipped_stages | Number of stages with a specific result in this job. | job_result: skipped | |
| spark.job.task.active | spark.job.num_tasks{status: running} | Number of active tasks in this job. | | |
| spark.job.task.result | spark.job.num_skipped_tasks | Number of tasks with a specific result in this job. | job_result: skipped | |
| spark.job.task.result | spark.job.num_failed_tasks | Number of tasks with a specific result in this job. | job_result: failed | |
| spark.job.task.result | spark.job.num_completed_tasks | Number of tasks with a specific result in this job. | job_result: completed | |
| spark.stage.io.records | spark.stage.input_records | Number of records written and read in this stage. | direction: in | |
| spark.stage.io.records | spark.stage.output_records | Number of records written and read in this stage. | direction: out | |
| spark.stage.io.size | spark.stage.input_bytes | Amount of data written and read in this stage. | direction: in | |
| spark.stage.io.size | spark.stage.output_bytes | Amount of data written and read in this stage. | direction: out | |
| spark.stage.shuffle.io.read.size | spark.stage.shuffle_read_bytes | Amount of data read in shuffle operations in this stage. | | |
| spark.stage.shuffle.io.records | spark.stage.shuffle_read_records | Number of records written or read in shuffle operations in this stage. | direction: in | |
| spark.stage.shuffle.io.records | spark.stage.shuffle_write_records | Number of records written or read in shuffle operations in this stage. | direction: out | |
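To read the table: the Filter column selects data points by attribute value (for example, only points where job_result is completed), and the Transform column rescales the value. Assuming the × 1048576 transform is a mebibyte-to-byte conversion, an OTel reading of spark.driver.block_manager.disk.usage = 2 would surface in Datadog as spark.driver.disk_used = 2 × 1048576 = 2097152 bytes.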
See OpenTelemetry Metrics Mapping for more information.
Additional helpful documentation, links, and articles: