This endpoint is currently experimental and restricted to Datadog internal use only. Retrieve resource recommendations for a Spark job. The caller (Spark Gateway or DJM UI) provides a service name and shard identifier, and SPA returns structured recommendations for driver and executor resources.
Arguments
Path Parameters
Name
Type
Description
shard [required]
string
The shard tag for a Spark job, which differentiates jobs within the same service that have different resource needs.
service [required]
string
The service name for a Spark job.
Query Strings
Name
Type
Description
bypass_cache
string
When set, the recommendation service does not use its metrics cache.
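The sketch below shows one way a caller might fetch a shard-scoped recommendation. It is illustrative only: the endpoint path is not listed on this page, so the URL comes from a placeholder environment variable, and the service and shard values are hypothetical. The DD-API-KEY and DD-APPLICATION-KEY headers are the standard Datadog API authentication headers.

```python
# Illustrative request for the shard-scoped recommendation endpoint.
# SPA_RECOMMENDATION_URL is a placeholder template such as ".../{service}/{shard}";
# substitute the real path for this endpoint.
import os
import requests

service = "my-spark-service"  # hypothetical service name
shard = "nightly-batch"       # hypothetical shard tag

url = os.environ["SPA_RECOMMENDATION_URL"].format(service=service, shard=shard)

response = requests.get(
    url,
    headers={
        "DD-API-KEY": os.environ["DD_API_KEY"],
        "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
    },
    params={"bypass_cache": "true"},  # optional: skip the metrics cache
    timeout=30,
)
response.raise_for_status()

# The response document nests the recommendation under data.attributes.
attributes = response.json()["data"]["attributes"]
driver_estimation = attributes["driver"]["estimation"]
executor_estimation = attributes["executor"]["estimation"]
```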
JSON:API resource object for SPA Recommendation. Includes type, optional ID, and resource attributes with structured recommendations.
attributes [required]
object
Attributes of the SPA Recommendation resource. Contains recommendations for both driver and executor components.
confidence_level
double
driver [required]
object
Resource recommendation for a single Spark component (driver or executor). Contains estimation data used to patch Spark job specs.
estimation [required]
object
Recommended resource values for a Spark driver or executor, derived from recent real usage metrics. Used by SPA to propose more efficient pod sizing.
cpu
object
CPU usage statistics derived from historical Spark job metrics. Provides multiple estimates so users can choose between conservative and cost-saving risk profiles.
max
int64
Maximum CPU usage observed for the job, expressed in millicores. This represents the upper bound of usage.
p75
int64
75th percentile of CPU usage (millicores). Represents a cost-saving configuration while covering most workloads.
p95
int64
95th percentile of CPU usage (millicores). Balances performance and cost, providing a safer margin than p75.
ephemeral_storage
int64
Recommended ephemeral storage allocation (in MiB). Derived from job temporary storage patterns.
heap
int64
Recommended JVM heap size (in MiB).
memory
int64
Recommended total memory allocation (in MiB). Includes both heap and overhead.
overhead
int64
Recommended JVM overhead (in MiB). Computed as total memory - heap.
executor [required]
object
Resource recommendation for a single Spark component (driver or executor). Contains estimation data used to patch Spark job specs.
estimation [required]
object
Recommended resource values for a Spark driver or executor, derived from recent real usage metrics. Used by SPA to propose more efficient pod sizing.
cpu
object
CPU usage statistics derived from historical Spark job metrics. Provides multiple estimates so users can choose between conservative and cost-saving risk profiles.
max
int64
Maximum CPU usage observed for the job, expressed in millicores. This represents the upper bound of usage.
p75
int64
75th percentile of CPU usage (millicores). Represents a cost-saving configuration while covering most workloads.
p95
int64
95th percentile of CPU usage (millicores). Balances performance and cost, providing a safer margin than p75.
ephemeral_storage
int64
Recommended ephemeral storage allocation (in MiB). Derived from job temporary storage patterns.
heap
int64
Recommended JVM heap size (in MiB).
memory
int64
Recommended total memory allocation (in MiB). Includes both heap and overhead.
overhead
int64
Recommended JVM overhead (in MiB). Computed as total memory - heap.
id
string
Resource identifier for the recommendation. Optional in responses.
type [required]
enum
JSON:API resource type for Spark Pod Autosizing recommendations. Identifies the Recommendation resource returned by SPA.
Allowed enum values: recommendation
JSON:API document containing a single Recommendation resource. Returned by SPA when the Spark Gateway requests recommendations.
Field
Type
Description
data [required]
object
JSON:API resource object for SPA Recommendation. Includes type, optional ID, and resource attributes with structured recommendations.
attributes [required]
object
Attributes of the SPA Recommendation resource. Contains recommendations for both driver and executor components.
confidence_level
double
driver [required]
object
Resource recommendation for a single Spark component (driver or executor). Contains estimation data used to patch Spark job specs.
estimation [required]
object
Recommended resource values for a Spark driver or executor, derived from recent real usage metrics. Used by SPA to propose more efficient pod sizing.
cpu
object
CPU usage statistics derived from historical Spark job metrics. Provides multiple estimates so users can choose between conservative and cost-saving risk profiles.
max
int64
Maximum CPU usage observed for the job, expressed in millicores. This represents the upper bound of usage.
p75
int64
75th percentile of CPU usage (millicores). Represents a cost-saving configuration while covering most workloads.
p95
int64
95th percentile of CPU usage (millicores). Balances performance and cost, providing a safer margin than p75.
ephemeral_storage
int64
Recommended ephemeral storage allocation (in MiB). Derived from job temporary storage patterns.
heap
int64
Recommended JVM heap size (in MiB).
memory
int64
Recommended total memory allocation (in MiB). Includes both heap and overhead.
overhead
int64
Recommended JVM overhead (in MiB). Computed as total memory - heap.
executor [required]
object
Resource recommendation for a single Spark component (driver or executor). Contains estimation data used to patch Spark job specs.
estimation [required]
object
Recommended resource values for a Spark driver or executor, derived from recent real usage metrics. Used by SPA to propose more efficient pod sizing.
cpu
object
CPU usage statistics derived from historical Spark job metrics. Provides multiple estimates so users can choose between conservative and cost-saving risk profiles.
max
int64
Maximum CPU usage observed for the job, expressed in millicores. This represents the upper bound of usage.
p75
int64
75th percentile of CPU usage (millicores). Represents a cost-saving configuration while covering most workloads.
p95
int64
95th percentile of CPU usage (millicores). Balances performance and cost, providing a safer margin than p75.
ephemeral_storage
int64
Recommended ephemeral storage allocation (in MiB). Derived from job temporary storage patterns.
heap
int64
Recommended JVM heap size (in MiB).
memory
int64
Recommended total memory allocation (in MiB). Includes both heap and overhead.
overhead
int64
Recommended JVM overhead (in MiB). Computed as total memory - heap.
id
string
Resource identifier for the recommendation. Optional in responses.
type [required]
enum
JSON:API resource type for Spark Pod Autosizing recommendations. Identifies the Recommendation resource returned by SPA.
Allowed enum values: recommendation
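The percentile fields exist so that callers can choose their own risk profile; the API does not mandate one. The sketch below is one possible way to turn an estimation object from the response above into Kubernetes-style resource quantities. The helper name and the default profile are assumptions for illustration, not part of the API.

```python
# Illustrative conversion of an estimation object (as documented above) into
# Kubernetes-style resource strings. Choosing p95, p75, or max is caller policy.
def to_k8s_resources(estimation: dict, cpu_profile: str = "p95") -> dict:
    cpu_millicores = estimation["cpu"][cpu_profile]      # millicores
    memory_mib = estimation["memory"]                    # MiB, heap + overhead
    ephemeral_mib = estimation.get("ephemeral_storage")  # MiB, may be absent

    resources = {
        "cpu": f"{cpu_millicores}m",
        "memory": f"{memory_mib}Mi",
    }
    if ephemeral_mib is not None:
        resources["ephemeral-storage"] = f"{ephemeral_mib}Mi"
    return resources

# For example, a cost-saving executor sizing:
# to_k8s_resources(executor_estimation, cpu_profile="p75")
```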
This endpoint is currently experimental and restricted to Datadog internal use only. Retrieve resource recommendations for a Spark job when no shard is specified. The caller (Spark Gateway or DJM UI) provides only a service name, and SPA returns structured recommendations for driver and executor resources. The shard-scoped version of this endpoint should be preferred where possible, as it gives more accurate results.
Arguments
Path Parameters
Name
Type
Description
service [required]
string
The service name for a Spark job.
Query Strings
Name
Type
Description
bypass_cache
string
When set, the recommendation service does not use its metrics cache.
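As with the shard-scoped variant, the sketch below is illustrative: the URL is a placeholder template containing only the service segment, and the authentication headers are the standard Datadog API headers.

```python
# Illustrative request for the service-only recommendation endpoint.
# SPA_SERVICE_RECOMMENDATION_URL is a placeholder template such as ".../{service}".
import os
import requests

url = os.environ["SPA_SERVICE_RECOMMENDATION_URL"].format(service="my-spark-service")

response = requests.get(
    url,
    headers={
        "DD-API-KEY": os.environ["DD_API_KEY"],
        "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
    },
    params={"bypass_cache": "true"},  # optional: skip the metrics cache
    timeout=30,
)
response.raise_for_status()
recommendation = response.json()["data"]
```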
JSON:API resource object for SPA Recommendation. Includes type, optional ID, and resource attributes with structured recommendations.
attributes [required]
object
Attributes of the SPA Recommendation resource. Contains recommendations for both driver and executor components.
confidence_level
double
driver [required]
object
Resource recommendation for a single Spark component (driver or executor). Contains estimation data used to patch Spark job specs.
estimation [required]
object
Recommended resource values for a Spark driver or executor, derived from recent real usage metrics. Used by SPA to propose more efficient pod sizing.
cpu
object
CPU usage statistics derived from historical Spark job metrics. Provides multiple estimates so users can choose between conservative and cost-saving risk profiles.
max
int64
Maximum CPU usage observed for the job, expressed in millicores. This represents the upper bound of usage.
p75
int64
75th percentile of CPU usage (millicores). Represents a cost-saving configuration while covering most workloads.
p95
int64
95th percentile of CPU usage (millicores). Balances performance and cost, providing a safer margin than p75.
ephemeral_storage
int64
Recommended ephemeral storage allocation (in MiB). Derived from job temporary storage patterns.
heap
int64
Recommended JVM heap size (in MiB).
memory
int64
Recommended total memory allocation (in MiB). Includes both heap and overhead.
overhead
int64
Recommended JVM overhead (in MiB). Computed as total memory - heap.
executor [required]
object
Resource recommendation for a single Spark component (driver or executor). Contains estimation data used to patch Spark job specs.
estimation [required]
object
Recommended resource values for a Spark driver or executor, derived from recent real usage metrics. Used by SPA to propose more efficient pod sizing.
cpu
object
CPU usage statistics derived from historical Spark job metrics. Provides multiple estimates so users can choose between conservative and cost-saving risk profiles.
max
int64
Maximum CPU usage observed for the job, expressed in millicores. This represents the upper bound of usage.
p75
int64
75th percentile of CPU usage (millicores). Represents a cost-saving configuration while covering most workloads.
p95
int64
95th percentile of CPU usage (millicores). Balances performance and cost, providing a safer margin than p75.
ephemeral_storage
int64
Recommended ephemeral storage allocation (in MiB). Derived from job temporary storage patterns.
heap
int64
Recommended JVM heap size (in MiB).
memory
int64
Recommended total memory allocation (in MiB). Includes both heap and overhead.
overhead
int64
Recommended JVM overhead (in MiB). Computed as total memory - heap.
id
string
Resource identifier for the recommendation. Optional in responses.
type [required]
enum
JSON:API resource type for Spark Pod Autosizing recommendations. Identifies the Recommendation resource returned by SPA.
Allowed enum values: recommendation
JSON:API document containing a single Recommendation resource. Returned by SPA when the Spark Gateway requests recommendations.
Field
Type
Description
data [required]
object
JSON:API resource object for SPA Recommendation. Includes type, optional ID, and resource attributes with structured recommendations.
attributes [required]
object
Attributes of the SPA Recommendation resource. Contains recommendations for both driver and executor components.
confidence_level
double
driver [required]
object
Resource recommendation for a single Spark component (driver or executor). Contains estimation data used to patch Spark job specs.
estimation [required]
object
Recommended resource values for a Spark driver or executor, derived from recent real usage metrics. Used by SPA to propose more efficient pod sizing.
cpu
object
CPU usage statistics derived from historical Spark job metrics. Provides multiple estimates so users can choose between conservative and cost-saving risk profiles.
max
int64
Maximum CPU usage observed for the job, expressed in millicores. This represents the upper bound of usage.
p75
int64
75th percentile of CPU usage (millicores). Represents a cost-saving configuration while covering most workloads.
p95
int64
95th percentile of CPU usage (millicores). Balances performance and cost, providing a safer margin than p75.
ephemeral_storage
int64
Recommended ephemeral storage allocation (in MiB). Derived from job temporary storage patterns.
heap
int64
Recommended JVM heap size (in MiB).
memory
int64
Recommended total memory allocation (in MiB). Includes both heap and overhead.
overhead
int64
Recommended JVM overhead (in MiB). Computed as total memory - heap.
executor [required]
object
Resource recommendation for a single Spark component (driver or executor). Contains estimation data used to patch Spark job specs.
estimation [required]
object
Recommended resource values for a Spark driver or executor, derived from recent real usage metrics. Used by SPA to propose more efficient pod sizing.
cpu
object
CPU usage statistics derived from historical Spark job metrics. Provides multiple estimates so users can choose between conservative and cost-saving risk profiles.
max
int64
Maximum CPU usage observed for the job, expressed in millicores. This represents the upper bound of usage.
p75
int64
75th percentile of CPU usage (millicores). Represents a cost-saving configuration while covering most workloads.
p95
int64
95th percentile of CPU usage (millicores). Balances performance and cost, providing a safer margin than p75.
ephemeral_storage
int64
Recommended ephemeral storage allocation (in MiB). Derived from job temporary storage patterns.
heap
int64
Recommended JVM heap size (in MiB).
memory
int64
Recommended total memory allocation (in MiB). Includes both heap and overhead.
overhead
int64
Recommended JVM overhead (in MiB). Computed as total memory - heap.
id
string
Resource identifier for the recommendation. Optional in responses.
type [required]
enum
JSON:API resource type for Spark Pod Autosizing recommendations. Identifies the Recommendation resource returned by SPA.
Allowed enum values: recommendation
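Because memory is documented as heap plus overhead (overhead = memory - heap), a caller that patches Spark job specs directly could split an estimation across the standard Spark memory settings. The mapping below is a sketch of one such policy and is not defined by this API; the helper name and the rounding of millicores up to whole cores are assumptions.

```python
# Illustrative mapping of an estimation onto standard Spark configuration keys,
# using the documented relationship memory = heap + overhead.
def to_spark_conf(estimation: dict, role: str, cpu_profile: str = "p95") -> dict:
    heap_mib = estimation["heap"]
    overhead_mib = estimation["memory"] - heap_mib  # equals estimation["overhead"]
    cores = max(1, -(-estimation["cpu"][cpu_profile] // 1000))  # ceil(millicores / 1000)

    return {
        f"spark.{role}.memory": f"{heap_mib}m",             # JVM heap, MiB
        f"spark.{role}.memoryOverhead": f"{overhead_mib}m",  # non-heap overhead, MiB
        f"spark.{role}.cores": str(cores),
    }

# For example:
# to_spark_conf(driver_estimation, role="driver")
# to_spark_conf(executor_estimation, role="executor", cpu_profile="p75")
```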