allocation_policy
Type: STRUCT
Provider name: allocationPolicy
Description: Compute resource allocation for all TaskGroups in the Job.
instances
UNORDERED_LIST_STRUCT
instances
block_project_ssh_keys
BOOLEAN
blockProjectSshKeys
Set this field as true if you want Batch to block project-level SSH keys from accessing this job’s VMs. Alternatively, you can configure the job to specify a VM instance template that blocks project-level SSH keys. In either case, Batch blocks project-level SSH keys while creating the VMs for this job. Batch allows project-level SSH keys for a job’s VMs only if all the following are true: + This field is undefined or set to false. + The job’s VM instance template (if any) doesn’t block project-level SSH keys. Notably, you can override this behavior by manually updating a VM to block or allow project-level SSH keys. For more information about blocking project-level SSH keys, see the Compute Engine documentation: https://cloud.google.com/compute/docs/connect/restrict-ssh-keys#block-keys
install_gpu_drivers
BOOLEAN
installGpuDrivers
Set this field to true if you want Batch to install GPU drivers on your behalf for the GPUs specified in policy.accelerators or instance_template. Default is false. For Container-Optimized Image cases, Batch will install the accelerator driver following milestones of https://cloud.google.com/container-optimized-os/docs/release-notes. For non-Container-Optimized Image cases, Batch installs drivers following https://github.com/GoogleCloudPlatform/compute-gpu-installation/blob/main/linux/install_gpu_driver.py.
install_ops_agent
BOOLEAN
installOpsAgent
instance_template
STRING
instanceTemplate
policy
STRUCT
policy
accelerators
UNORDERED_LIST_STRUCT
accelerators
count
INT64
count
driver_version
STRING
driverVersion
install_gpu_drivers
BOOLEAN
installGpuDrivers
type
STRING
type
To list available accelerator types, see gcloud compute accelerator-types list.
boot_disk
STRUCT
bootDisk
disk_interface
STRING
diskInterface
image
STRING
image
* batch-debian: use Batch Debian images.
* batch-cos: use Batch Container-Optimized images.
* batch-hpc-rocky: use Batch HPC Rocky Linux images.
size_gb
INT64
sizeGb
Disk size in GB. Non-Boot Disk: If the type specifies a persistent disk, this field is ignored if data_source is set as image or snapshot. If the type specifies a local SSD, this field should be a multiple of 375 GB; otherwise, the final size will be the next greater multiple of 375 GB. Boot Disk: Batch will calculate the boot disk size based on the source image and task requirements if you do not specify the size. If both this field and the boot_disk_mib field in the task spec’s compute_resource are defined, Batch will only honor this field. Also, this field should be no smaller than the source disk’s size when the data_source is set as snapshot or image. For example, if you set an image as the data_source field and the image’s default disk size is 30 GB, you can only use this field to make the disk larger than or equal to 30 GB.
snapshot
STRING
snapshot
type
STRING
type
Disk type as shown in gcloud compute disk-types list. For example, local SSD uses type “local-ssd”. Persistent disks and boot disks use “pd-balanced”, “pd-extreme”, “pd-ssd” or “pd-standard”. If not specified, “pd-standard” will be used as the default type for non-boot disks, and “pd-balanced” will be used as the default type for boot disks.
disks
UNORDERED_LIST_STRUCT
disks
device_name
STRING
deviceName
existing_disk
STRING
existingDisk
new_disk
STRUCT
newDisk
disk_interface
STRING
diskInterface
image
STRING
image
* batch-debian: use Batch Debian images.
* batch-cos: use Batch Container-Optimized images.
* batch-hpc-rocky: use Batch HPC Rocky Linux images.
size_gb
INT64
sizeGb
Disk size in GB. Non-Boot Disk: If the type specifies a persistent disk, this field is ignored if data_source is set as image or snapshot. If the type specifies a local SSD, this field should be a multiple of 375 GB; otherwise, the final size will be the next greater multiple of 375 GB. Boot Disk: Batch will calculate the boot disk size based on the source image and task requirements if you do not specify the size. If both this field and the boot_disk_mib field in the task spec’s compute_resource are defined, Batch will only honor this field. Also, this field should be no smaller than the source disk’s size when the data_source is set as snapshot or image. For example, if you set an image as the data_source field and the image’s default disk size is 30 GB, you can only use this field to make the disk larger than or equal to 30 GB.
snapshot
STRING
snapshot
type
STRING
type
Disk type as shown in gcloud compute disk-types list. For example, local SSD uses type “local-ssd”. Persistent disks and boot disks use “pd-balanced”, “pd-extreme”, “pd-ssd” or “pd-standard”. If not specified, “pd-standard” will be used as the default type for non-boot disks, and “pd-balanced” will be used as the default type for boot disks.
machine_type
STRING
machineType
min_cpu_platform
STRING
minCpuPlatform
provisioning_model
STRING
provisioningModel
PROVISIONING_MODEL_UNSPECIFIED - Unspecified.
STANDARD - Standard VM.
SPOT - SPOT VM.
PREEMPTIBLE - Preemptible VM (PVM). The SPOT model above is the preferred model for preemptible VM instances: the old preemptible VM model (indicated by this field) has been migrated to use the SPOT model as the underlying technology, but it will still be supported.
reservation
STRING
reservation
location
STRUCT
location
allowed_locations
UNORDERED_LIST_STRING
allowedLocations
network
STRUCT
network
If you define an instance template in the InstancePolicyOrTemplate field, Batch will use the network settings in the instance template instead of this field.
network_interfaces
UNORDERED_LIST_STRUCT
networkInterfaces
network
STRING
network
no_external_ip_address
BOOLEAN
noExternalIpAddress
subnetwork
STRING
subnetwork
placement
STRUCT
placement
collocation
STRING
collocation
max_distance
INT64
maxDistance
service_account
STRUCT
serviceAccount
email
STRING
email
scopes
UNORDERED_LIST_STRING
scopes
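These allocation_policy attributes mirror the AllocationPolicy message of the Batch API. As a rough sketch of where the values originate, the example below (assuming the google-cloud-batch Python client; the machine type, image alias, and disk size are illustrative choices, not defaults) builds an instance policy with Spot provisioning and a boot disk:

```python
from google.cloud import batch_v1

def build_allocation_policy() -> batch_v1.AllocationPolicy:
    # Instance policy: machine shape, provisioning model, and boot disk.
    policy = batch_v1.AllocationPolicy.InstancePolicy()
    policy.machine_type = "n2-standard-2"   # surfaces as policy.machine_type / machineType
    policy.provisioning_model = batch_v1.AllocationPolicy.ProvisioningModel.SPOT

    boot_disk = batch_v1.AllocationPolicy.Disk()
    boot_disk.image = "batch-debian"        # one of the image aliases listed above
    boot_disk.size_gb = 50                  # must be >= the source image's disk size
    policy.boot_disk = boot_disk

    # Wrap the policy; an instance template name could be supplied here instead.
    instances = batch_v1.AllocationPolicy.InstancePolicyOrTemplate()
    instances.policy = policy

    allocation_policy = batch_v1.AllocationPolicy()
    allocation_policy.instances = [instances]
    return allocation_policy
```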
ancestors
Type: UNORDERED_LIST_STRING
create_time
Type: TIMESTAMP
Provider name: createTime
Description: Output only. When the Job was created.
gcp_status
Type: STRUCT
Provider name: status
Description: Output only. Job status. It is read only for users.
run_duration
STRING
runDuration
state
STRING
state
STATE_UNSPECIFIED - Job state unspecified.
QUEUED - Job is admitted (validated and persisted) and waiting for resources.
SCHEDULED - Job is scheduled to run as soon as resource allocation is ready. The resource allocation may happen at a later time but with a high chance to succeed.
RUNNING - Resource allocation has been successful. At least one Task in the Job is RUNNING.
SUCCEEDED - All Tasks in the Job have finished successfully.
FAILED - At least one Task in the Job has failed.
DELETION_IN_PROGRESS - The Job will be deleted, but has not been deleted yet. Typically this is because resources used by the Job are still being cleaned up.
CANCELLATION_IN_PROGRESS - The Job cancellation is in progress because the resources used by the Job are still being cleaned up.
CANCELLED - The Job has been cancelled, the task executions were stopped and the resources were cleaned up.
status_events
UNORDERED_LIST_STRUCT
statusEvents
description
STRING
description
event_time
TIMESTAMP
eventTime
task_execution
STRUCT
taskExecution
exit_code
INT32
exitCode
task_state
STRING
taskState
STATE_UNSPECIFIED - Unknown state.
PENDING - The Task is created and waiting for resources.
ASSIGNED - The Task is assigned to at least one VM.
RUNNING - The Task is running.
FAILED - The Task has failed.
SUCCEEDED - The Task has succeeded.
UNEXECUTED - The Task has not been executed when the Job finishes.
type
STRING
type
labels
Type: UNORDERED_LIST_STRING
logs_policy
Type: STRUCT
Provider name: logsPolicy
Description: Log preservation policy for the Job.
cloud_logging_option
STRUCT
cloudLoggingOption
When the destination is set to CLOUD_LOGGING, you can optionally set this field to configure additional settings for Cloud Logging.
use_generic_task_monitored_resource
BOOLEAN
useGenericTaskMonitoredResource
Set this field to true to change the monitored resource type for Cloud Logging logs generated by this Batch job from the batch.googleapis.com/Job type to the formerly used generic_task type.
destination
STRING
destination
DESTINATION_UNSPECIFIED - (Default) Logs are not preserved.
CLOUD_LOGGING - Logs are streamed to Cloud Logging. Optionally, you can configure additional settings in the cloudLoggingOption field.
PATH - Logs are saved to the file path specified in the logsPath field.
logs_path
STRING
logsPath
When the destination is set to PATH, you must set this field to the path where you want logs to be saved. This path can point to a local directory on the VM or (if configured) a directory under the mount path of any Cloud Storage bucket, network file system (NFS), or writable persistent disk that is mounted to the job. For example, if the job has a bucket with mountPath set to /mnt/disks/my-bucket, you can write logs to the root directory of the remotePath of that bucket by setting this field to /mnt/disks/my-bucket/.
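To illustrate how the logs_policy attributes are produced, here is a minimal sketch (again assuming the google-cloud-batch Python client) that streams task logs to Cloud Logging; switching destination to PATH with a logs_path under a mounted volume is the alternative described above:

```python
from google.cloud import batch_v1

# Minimal sketch: preserve task logs in Cloud Logging.
job = batch_v1.Job()
job.logs_policy = batch_v1.LogsPolicy()
job.logs_policy.destination = batch_v1.LogsPolicy.Destination.CLOUD_LOGGING
# The resulting resource reports logs_policy.destination as "CLOUD_LOGGING".
```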
name
Type: STRING
Provider name: name
Description: Output only. Job name. For example: “projects/123456/locations/us-central1/jobs/job01”.
notifications
Type: UNORDERED_LIST_STRUCT
Provider name: notifications
Description: Notification configurations.
message
STRUCT
message
new_job_state
STRING
newJobState
STATE_UNSPECIFIED - Job state unspecified.
QUEUED - Job is admitted (validated and persisted) and waiting for resources.
SCHEDULED - Job is scheduled to run as soon as resource allocation is ready. The resource allocation may happen at a later time but with a high chance to succeed.
RUNNING - Resource allocation has been successful. At least one Task in the Job is RUNNING.
SUCCEEDED - All Tasks in the Job have finished successfully.
FAILED - At least one Task in the Job has failed.
DELETION_IN_PROGRESS - The Job will be deleted, but has not been deleted yet. Typically this is because resources used by the Job are still being cleaned up.
CANCELLATION_IN_PROGRESS - The Job cancellation is in progress because the resources used by the Job are still being cleaned up.
CANCELLED - The Job has been cancelled, the task executions were stopped and the resources were cleaned up.
new_task_state
STRING
newTaskState
STATE_UNSPECIFIED - Unknown state.
PENDING - The Task is created and waiting for resources.
ASSIGNED - The Task is assigned to at least one VM.
RUNNING - The Task is running.
FAILED - The Task has failed.
SUCCEEDED - The Task has succeeded.
UNEXECUTED - The Task has not been executed when the Job finishes.
type
STRING
type
TYPE_UNSPECIFIED - Unspecified.
JOB_STATE_CHANGED - Notify users that the job state has changed.
TASK_STATE_CHANGED - Notify users that the task state has changed.
pubsub_topic
STRING
pubsubTopic
The Pub/Sub topic for notifications, in the format projects/{project}/topics/{topic}. Notably, if you want to specify a Pub/Sub topic that is in a different project than the job, your administrator must grant your project’s Batch service agent permission to publish to that topic. For more information about configuring Pub/Sub notifications for a job, see https://cloud.google.com/batch/docs/enable-notifications.
organization_id
Type: STRING
parent
Type: STRING
priority
Type: INT64
Provider name: priority
Description: Priority of the Job. The valid value range is [0, 100). Default value is 0. Higher value indicates higher priority. A job with higher priority value is more likely to run earlier if all other requirements are satisfied.
project_id
Type: STRING
project_number
Type: STRING
resource_name
Type: STRING
tags
Type: UNORDERED_LIST_STRING
task_groups
Type: UNORDERED_LIST_STRUCT
Provider name: taskGroups
Description: Required. TaskGroups in the Job. Only one TaskGroup is supported now.
name
STRING
name
parallelism
INT64
parallelism
permissive_ssh
BOOLEAN
permissiveSsh
require_hosts_file
BOOLEAN
requireHostsFile
run_as_non_root
BOOLEAN
runAsNonRoot
scheduling_policy
STRING
schedulingPolicy
SCHEDULING_POLICY_UNSPECIFIED - Unspecified.
AS_SOON_AS_POSSIBLE - Run Tasks as soon as resources are available. Tasks might be executed in parallel depending on parallelism and task_count values.
IN_ORDER - Run Tasks sequentially with increased task index.
task_count
INT64
taskCount
task_count_per_node
INT64
taskCountPerNode
task_environments
UNORDERED_LIST_STRUCT
taskEnvironments
encrypted_variables
STRUCT
encryptedVariables
cipher_text
STRING
cipherText
The value of the cipherText response from the encrypt method.
key_name
STRING
keyName
task_spec
STRUCT
taskSpec
compute_resource
STRUCT
computeResource
boot_disk_mib
INT64
bootDiskMib
cpu_milli
INT64
cpuMilli
cpuMilli defines the amount of CPU resources per task in milliCPU units. For example, 1000 corresponds to 1 vCPU per task. If undefined, the default value is 2000. If you also define the VM’s machine type using the machineType field in InstancePolicy or inside the instanceTemplate in the InstancePolicyOrTemplate field, make sure the CPU resources for both fields are compatible with each other and with how many tasks you want to allow to run on the same VM at the same time. For example, if you specify the n2-standard-2 machine type, which has 2 vCPUs each, you are recommended to set cpuMilli to no more than 2000, or to run two tasks on the same VM if you set cpuMilli to 1000 or less.
memory_mib
INT64
memoryMib
memoryMib defines the amount of memory per task in MiB units. If undefined, the default value is 2000. If you also define the VM’s machine type using the machineType field in InstancePolicy or inside the instanceTemplate in the InstancePolicyOrTemplate field, make sure the memory resources for both fields are compatible with each other and with how many tasks you want to allow to run on the same VM at the same time. For example, if you specify the n2-standard-2 machine type, which has 8 GiB each, you are recommended to set memoryMib to no more than 8192, or to run two tasks on the same VM if you set memoryMib to 4096 or less.
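As a worked illustration of the sizing guidance above (a hedged sketch assuming the google-cloud-batch Python client and the n2-standard-2 example), the per-task requests below are sized so that two tasks fit on each 2-vCPU, 8-GiB VM:

```python
from google.cloud import batch_v1

# Two tasks per n2-standard-2 VM: 2 vCPUs = 2000 milliCPU and 8 GiB = 8192 MiB per VM.
resources = batch_v1.ComputeResource()
resources.cpu_milli = 1000    # 1 vCPU per task  -> at most 2 tasks per VM
resources.memory_mib = 4096   # 4 GiB per task   -> at most 2 tasks per VM

task = batch_v1.TaskSpec()
task.compute_resource = resources

policy = batch_v1.AllocationPolicy.InstancePolicy()
policy.machine_type = "n2-standard-2"
```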
environment
STRUCT
environment
encrypted_variables
STRUCT
encryptedVariables
cipher_text
STRING
cipherText
The value of the cipherText response from the encrypt method.
key_name
STRING
keyName
lifecycle_policies
UNORDERED_LIST_STRUCT
lifecyclePolicies
action
STRING
action
ACTION_UNSPECIFIED - Action unspecified.
RETRY_TASK - Action that tasks in the group will be scheduled to re-execute.
FAIL_TASK - Action that tasks in the group will be stopped immediately.
action_condition
STRUCT
actionCondition
exit_codes
UNORDERED_LIST_INT32
exitCodes
max_retry_count
INT32
maxRetryCount
max_run_duration
STRING
maxRunDuration
Maximum run duration of the task, formatted as a number of seconds followed by s, for example 3600s for 1 hour. The field accepts any value between 0 and the maximum listed for the Duration field type at https://protobuf.dev/reference/protobuf/google.protobuf/#duration; however, the actual maximum run time for a job will be limited to the maximum run time for a job listed at https://cloud.google.com/batch/quotas#max-job-duration.
runnables
UNORDERED_LIST_STRUCT
runnables
For a task to succeed, all of its script and container runnables each must meet at least one of the following conditions: + The runnable exited with a zero status. + The runnable didn’t finish, but you enabled its background subfield. + The runnable exited with a non-zero status, but you enabled its ignore_exit_status subfield.
always_run
BOOLEAN
alwaysRun
background
BOOLEAN
background
Set this field to true to configure a background runnable. Background runnables are allowed to continue running in the background while the task executes subsequent runnables. For example, background runnables are useful for providing services to other runnables or providing debugging-support tools like SSH servers. Specifically, background runnables are killed automatically (if they have not already exited) a short time after all foreground runnables have completed. Even though this is likely to result in a non-zero exit status for the background runnable, these automatic kills are not treated as task failures.
barrier
STRUCT
barrier
name
STRING
name
container
STRUCT
container
block_external_network
BOOLEAN
blockExternalNetwork
This field is not applicable to container runnables that specify network settings in the container.options field.
commands
UNORDERED_LIST_STRING
commands
Overrides the CMD specified in the container. If there is an ENTRYPOINT (either in the container image or with the entrypoint field below) then these commands are appended as arguments to the ENTRYPOINT.
enable_image_streaming
BOOLEAN
enableImageStreaming
When enableImageStreaming is set to true, the container runtime is containerd instead of Docker. Additionally, this container runnable only supports the following container subfields: imageUri, commands[], entrypoint, and volumes[]; any other container subfields are ignored. For more information about the requirements and limitations for using Image streaming with Batch, see the image-streaming sample on GitHub.
entrypoint
STRING
entrypoint
Overrides the ENTRYPOINT specified in the container.
image_uri
STRING
imageUri
options
STRING
options
Arbitrary additional options to include in the docker run command when running this container, for example --network host. For the --volume option, use the volumes field for the container.
password
STRING
password
The password for logging in to the Docker registry that contains the image. For increased security, specify the password using a Secret Manager secret in the format projects/*/secrets/*/versions/*. Warning: If you specify the password using plain text, you risk the password being exposed to any users who can view the job or its logs. To avoid this risk, specify a secret that contains the password instead. Learn more about Secret Manager and using Secret Manager with Batch.
username
STRING
username
The username for logging in to the Docker registry that contains the image. You can specify it either in plain text or as a Secret Manager secret in the format projects/*/secrets/*/versions/*. However, using a secret is recommended for enhanced security. Caution: If you specify the username using plain text, you risk the username being exposed to any users who can view the job or its logs. To avoid this risk, specify a secret that contains the username instead. Learn more about Secret Manager and using Secret Manager with Batch.
volumes
UNORDERED_LIST_STRING
volumes
Volumes to mount (bind mount) from the host machine into the container, formatted to match the --volume option for the docker run command, for example /foo:/bar or /foo:/bar:ro. If the TaskSpec.Volumes field is specified but this field is not, Batch will mount each volume from the host machine to the container with the same mount path by default. In this case, the default mount option for containers will be read-only (ro) for existing persistent disks and read-write (rw) for other volume types, regardless of the original mount options specified in TaskSpec.Volumes. If you need different mount settings, you can explicitly configure them in this field.
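For context on how the container subfields above are typically set, here is a minimal sketch (google-cloud-batch Python client; the image, entrypoint, commands, and volume mapping are illustrative assumptions):

```python
from google.cloud import batch_v1

# Minimal container runnable sketch.
runnable = batch_v1.Runnable()
runnable.container = batch_v1.Runnable.Container()
runnable.container.image_uri = "gcr.io/google-containers/busybox"
runnable.container.entrypoint = "/bin/sh"                    # overrides the image ENTRYPOINT
runnable.container.commands = ["-c", "echo Hello world"]     # appended as ENTRYPOINT arguments
runnable.container.volumes = ["/mnt/disks/share:/data:rw"]   # same syntax as docker run --volume
```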
environment
STRUCT
environment
encrypted_variables
STRUCT
encryptedVariables
cipher_text
STRING
cipherText
The value of the cipherText response from the encrypt method.
key_name
STRING
keyName
gcp_display_name
STRING
displayName
ignore_exit_status
BOOLEAN
ignoreExitStatus
Set this field to true to allow the task to continue executing its other runnables even if this runnable fails.
script
STRUCT
script
path
STRING
path
The path to a script file that is accessible from the host VM(s). Unless the script file supports the default #!/bin/sh shell interpreter, you must specify an interpreter by including a [shebang line](https://en.wikipedia.org/wiki/Shebang_(Unix)) as the first line of the file. For example, to execute the script using bash, include #!/bin/bash as the first line of the file. Alternatively, to execute the script using Python3, include #!/usr/bin/env python3 as the first line of the file.
text
STRING
text
The text of a script. Unless the script text supports the default #!/bin/sh shell interpreter, you must specify an interpreter by including a [shebang line](https://en.wikipedia.org/wiki/Shebang_(Unix)) at the beginning of the text. For example, to execute the script using bash, include #!/bin/bash\n at the beginning of the text. Alternatively, to execute the script using Python3, include #!/usr/bin/env python3\n at the beginning of the text.
timeout
STRING
timeout
volumes
UNORDERED_LIST_STRUCT
volumes
device_name
STRING
deviceName
gcs
STRUCT
gcs
remote_path
STRING
remotePath
mount_options
UNORDERED_LIST_STRING
mountOptions
Mount options vary based on the type of storage volume:
* For a Cloud Storage bucket, all the mount options provided by the gcsfuse tool are supported.
* For an existing persistent disk, all mount options provided by the mount command except writing are supported. This is due to restrictions of multi-writer mode.
* For any other disk or a Network File System (NFS), all the mount options provided by the mount command are supported.
mount_path
STRING
mountPath
nfs
STRUCT
nfs
remote_path
STRING
remotePath
server
STRING
server
uid
Type: STRING
Provider name: uid
Description: Output only. A system generated unique ID for the Job.
update_time
Type: TIMESTAMP
Provider name: updateTime
Description: Output only. The last time the Job was updated.
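Most of the remaining attributes (task_groups, task_spec, runnables, labels, name, uid, create_time, gcp_status) are populated from the job submitted to the Batch API. The hedged end-to-end sketch below (google-cloud-batch Python client; the project, region, job ID, machine type, and resource sizes are placeholders) creates a small script job whose fields correspond to the attributes documented on this page:

```python
from google.cloud import batch_v1

def create_sample_job(project_id: str, region: str, job_id: str) -> batch_v1.Job:
    """Submit a minimal script job; field names mirror the attributes above."""
    client = batch_v1.BatchServiceClient()

    # Runnable: a script with an explicit shebang line, as described for script.text.
    runnable = batch_v1.Runnable()
    runnable.script = batch_v1.Runnable.Script()
    runnable.script.text = "#!/bin/bash\necho Hello from task ${BATCH_TASK_INDEX}"

    # Task spec: runnables plus per-task compute resources, retries, and a run-duration limit.
    task = batch_v1.TaskSpec()
    task.runnables = [runnable]
    task.compute_resource = batch_v1.ComputeResource(cpu_milli=1000, memory_mib=1024)
    task.max_retry_count = 2
    task.max_run_duration = "3600s"          # surfaces as max_run_duration

    # Task group: becomes task_groups[0]; only one TaskGroup is supported.
    group = batch_v1.TaskGroup()
    group.task_count = 4                     # surfaces as task_count
    group.task_spec = task

    # Allocation and logs policies, as sketched earlier on this page.
    policy = batch_v1.AllocationPolicy.InstancePolicy(machine_type="e2-standard-4")
    instances = batch_v1.AllocationPolicy.InstancePolicyOrTemplate(policy=policy)
    allocation_policy = batch_v1.AllocationPolicy(instances=[instances])

    job = batch_v1.Job()
    job.task_groups = [group]
    job.allocation_policy = allocation_policy
    job.labels = {"env": "testing"}          # surfaces as labels
    job.logs_policy = batch_v1.LogsPolicy(
        destination=batch_v1.LogsPolicy.Destination.CLOUD_LOGGING
    )

    request = batch_v1.CreateJobRequest(
        parent=f"projects/{project_id}/locations/{region}",
        job_id=job_id,                       # name becomes projects/.../jobs/<job_id>
        job=job,
    )
    return client.create_job(request)
```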