---
title: gcp_dataflow_job
description: Datadog, the leading service for cloud-scale monitoring.
---

# gcp_dataflow_job{% #gcp_dataflow_job %}

## `ancestors`{% #ancestors %}

**Type**: `UNORDERED_LIST_STRING`

## `client_request_id`{% #client_request_id %}

**Type**: `STRING`**Provider name**: `clientRequestId`**Description**: The client's unique identifier of the job, re-used across retried attempts. If this field is set, the service will ensure its uniqueness. The request to create a job will fail if the service has knowledge of a previously submitted job with the same client's ID and job name. The caller may use this field to ensure idempotence of job creation across retried attempts to create a job. By default, the field is empty and, in that case, the service ignores it.
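
As an illustration of the idempotence described above, a minimal sketch using the Dataflow REST API through `google-api-python-client`; the project, region, job name, and request ID are placeholders and the rest of the job body is elided.

```python
# Sketch: reuse the same clientRequestId across retried CreateJob calls so the
# service returns the already-created job instead of creating a duplicate.
# Assumes google-api-python-client and Application Default Credentials.
from googleapiclient.discovery import build

dataflow = build("dataflow", "v1b3")

job_body = {
    "name": "nightly-etl",                        # placeholder job name
    "clientRequestId": "nightly-etl-2024-06-01",  # stable ID reused on retry
    # ... remaining Job fields (type, environment, steps, and so on)
}

job = (
    dataflow.projects()
    .locations()
    .jobs()
    .create(projectId="my-project", location="us-central1", body=job_body)
    .execute()
)
print(job["id"], job.get("currentState"))
```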

## `create_time`{% #create_time %}

**Type**: `TIMESTAMP`**Provider name**: `createTime`**Description**: The timestamp when the job was initially created. Immutable and set by the Cloud Dataflow service.

## `created_from_snapshot_id`{% #created_from_snapshot_id %}

**Type**: `STRING`**Provider name**: `createdFromSnapshotId`**Description**: If this is specified, the job's initial state is populated from the given snapshot.

## `current_state`{% #current_state %}

**Type**: `STRING`**Provider name**: `currentState`**Description**: The current state of the job. Jobs are created in the `JOB_STATE_STOPPED` state unless otherwise specified. A job in the `JOB_STATE_RUNNING` state may asynchronously enter a terminal state. After a job has reached a terminal state, no further state updates may be made. This field might be mutated by the Dataflow service; callers cannot mutate it.**Possible values**:

- `JOB_STATE_UNKNOWN` - The job's run state isn't specified.
- `JOB_STATE_STOPPED` - `JOB_STATE_STOPPED` indicates that the job has not yet started to run.
- `JOB_STATE_RUNNING` - `JOB_STATE_RUNNING` indicates that the job is currently running.
- `JOB_STATE_DONE` - `JOB_STATE_DONE` indicates that the job has successfully completed. This is a terminal job state. This state may be set by the Cloud Dataflow service, as a transition from `JOB_STATE_RUNNING`. It may also be set via a Cloud Dataflow `UpdateJob` call, if the job has not yet reached a terminal state.
- `JOB_STATE_FAILED` - `JOB_STATE_FAILED` indicates that the job has failed. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from `JOB_STATE_RUNNING`.
- `JOB_STATE_CANCELLED` - `JOB_STATE_CANCELLED` indicates that the job has been explicitly cancelled. This is a terminal job state. This state may only be set via a Cloud Dataflow `UpdateJob` call, and only if the job has not yet reached another terminal state.
- `JOB_STATE_UPDATED` - `JOB_STATE_UPDATED` indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from `JOB_STATE_RUNNING`.
- `JOB_STATE_DRAINING` - `JOB_STATE_DRAINING` indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow `UpdateJob` call, but only as a transition from `JOB_STATE_RUNNING`. Jobs that are draining may only transition to `JOB_STATE_DRAINED`, `JOB_STATE_CANCELLED`, or `JOB_STATE_FAILED`.
- `JOB_STATE_DRAINED` - `JOB_STATE_DRAINED` indicates that the job has been drained. A drained job terminated by stopping pulling from its input sources and processing any data that remained in-flight when draining was requested. This state is a terminal state, may only be set by the Cloud Dataflow service, and only as a transition from `JOB_STATE_DRAINING`.
- `JOB_STATE_PENDING` - `JOB_STATE_PENDING` indicates that the job has been created but is not yet running. Jobs that are pending may only transition to `JOB_STATE_RUNNING`, or `JOB_STATE_FAILED`.
- `JOB_STATE_CANCELLING` - `JOB_STATE_CANCELLING` indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to `JOB_STATE_CANCELLED` or `JOB_STATE_FAILED`.
- `JOB_STATE_QUEUED` - `JOB_STATE_QUEUED` indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to `JOB_STATE_PENDING` or `JOB_STATE_CANCELLED`.
- `JOB_STATE_RESOURCE_CLEANING_UP` - `JOB_STATE_RESOURCE_CLEANING_UP` indicates that the batch job's associated resources are currently being cleaned up after a successful run. Currently, this is an opt-in feature; please reach out to the Cloud support team if you are interested.
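
For reference, the terminal states listed above can be captured in a small helper; this is an illustrative sketch, not part of the Dataflow API.

```python
# Terminal states per the list above: once a job reaches one of these, no
# further state updates are made. Illustrative helper only.
TERMINAL_STATES = {
    "JOB_STATE_DONE",
    "JOB_STATE_FAILED",
    "JOB_STATE_CANCELLED",
    "JOB_STATE_UPDATED",
    "JOB_STATE_DRAINED",
}

def is_terminal(state: str) -> bool:
    """True if the job can no longer change state."""
    return state in TERMINAL_STATES
```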

## `current_state_time`{% #current_state_time %}

**Type**: `TIMESTAMP`**Provider name**: `currentStateTime`**Description**: The timestamp associated with the current state.

## `environment`{% #environment %}

**Type**: `STRUCT`**Provider name**: `environment`**Description**: Optional. The environment for the job.

- `cluster_manager_api_service`**Type**: `STRING`**Provider name**: `clusterManagerApiService`**Description**: The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com".
- `dataset`**Type**: `STRING`**Provider name**: `dataset`**Description**: Optional. The dataset for the current project where various workflow related tables are stored. The supported resource type is: Google BigQuery: bigquery.googleapis.com/{dataset}
- `debug_options`**Type**: `STRUCT`**Provider name**: `debugOptions`**Description**: Optional. Any debugging options to be supplied to the job.
  - `data_sampling`**Type**: `STRUCT`**Provider name**: `dataSampling`**Description**: Configuration options for sampling elements from a running pipeline.
    - `behaviors`**Type**: `UNORDERED_LIST_STRING`**Provider name**: `behaviors`**Description**: List of given sampling behaviors to enable. For example, specifying behaviors = [ALWAYS_ON] samples in-flight elements but does not sample exceptions. Multiple behaviors can be specified, like behaviors = [ALWAYS_ON, EXCEPTIONS] for both periodic sampling and exception sampling. If DISABLED is in the list, sampling is disabled and the other given behaviors are ignored. Ordering does not matter.
  - `enable_hot_key_logging`**Type**: `BOOLEAN`**Provider name**: `enableHotKeyLogging`**Description**: Optional. When true, enables the logging of the literal hot key to the user's Cloud Logging.
- `experiments`**Type**: `UNORDERED_LIST_STRING`**Provider name**: `experiments`**Description**: The list of experiments to enable. This field should be used for SDK related experiments and not for service related experiments. The proper field for service related experiments is service_options.
- `flex_resource_scheduling_goal`**Type**: `STRING`**Provider name**: `flexResourceSchedulingGoal`**Description**: Optional. Which Flexible Resource Scheduling mode to run in.**Possible values**:
  - `FLEXRS_UNSPECIFIED` - Run in the default mode.
  - `FLEXRS_SPEED_OPTIMIZED` - Optimize for lower execution time.
  - `FLEXRS_COST_OPTIMIZED` - Optimize for lower cost.
- `service_account_email`**Type**: `STRING`**Provider name**: `serviceAccountEmail`**Description**: Optional. Identity to run virtual machines as. Defaults to the default account.
- `service_kms_key_name`**Type**: `STRING`**Provider name**: `serviceKmsKeyName`**Description**: Optional. If set, contains the Cloud KMS key identifier used to encrypt data at rest, AKA a Customer Managed Encryption Key (CMEK). Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
- `service_options`**Type**: `UNORDERED_LIST_STRING`**Provider name**: `serviceOptions`**Description**: Optional. The list of service options to enable. This field should be used for service related experiments only. These experiments, when graduating to GA, should be replaced by dedicated fields or become default (i.e. always on).
- `shuffle_mode`**Type**: `STRING`**Provider name**: `shuffleMode`**Description**: Output only. The shuffle mode used for the job.**Possible values**:
  - `SHUFFLE_MODE_UNSPECIFIED` - Shuffle mode information is not available.
  - `VM_BASED` - Shuffle is done on the worker VMs.
  - `SERVICE_BASED` - Shuffle is done on the service side.
- `streaming_mode`**Type**: `STRING`**Provider name**: `streamingMode`**Description**: Optional. Specifies the Streaming Engine message processing guarantees. Reduces cost and latency but might result in duplicate messages committed to storage. Designed to run simple mapping streaming ETL jobs at the lowest cost. For example, Change Data Capture (CDC) to BigQuery is a canonical use case. For more information, see [Set the pipeline streaming mode](https://cloud.google.com/dataflow/docs/guides/streaming-modes).**Possible values**:
  - `STREAMING_MODE_UNSPECIFIED` - Run in the default mode.
  - `STREAMING_MODE_EXACTLY_ONCE` - In this mode, message deduplication is performed against persistent state to make sure each message is processed and committed to storage exactly once.
  - `STREAMING_MODE_AT_LEAST_ONCE` - Message deduplication is not performed. Messages might be processed multiple times, and the results are applied multiple times. Note: Setting this value also enables Streaming Engine and Streaming Engine resource-based billing.
- `temp_storage_prefix`**Type**: `STRING`**Provider name**: `tempStoragePrefix`**Description**: The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME}" to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- `use_streaming_engine_resource_based_billing`**Type**: `BOOLEAN`**Provider name**: `useStreamingEngineResourceBasedBilling`**Description**: Output only. Whether the job uses the Streaming Engine resource-based billing model.
- `worker_pools`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `workerPools`**Description**: The worker pools. At least one "harness" worker pool must be specified in order for the job to have workers.
  - `autoscaling_settings`**Type**: `STRUCT`**Provider name**: `autoscalingSettings`**Description**: Settings for autoscaling of this WorkerPool.
    - `algorithm`**Type**: `STRING`**Provider name**: `algorithm`**Description**: The algorithm to use for autoscaling.**Possible values**:
      - `AUTOSCALING_ALGORITHM_UNKNOWN` - The algorithm is unknown, or unspecified.
      - `AUTOSCALING_ALGORITHM_NONE` - Disable autoscaling.
      - `AUTOSCALING_ALGORITHM_BASIC` - Increase worker count over time to reduce job execution time.
    - `max_num_workers`**Type**: `INT32`**Provider name**: `maxNumWorkers`**Description**: The maximum number of workers to cap scaling at.
  - `data_disks`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `dataDisks`**Description**: Data disks that are used by a VM in this workflow.
    - `disk_type`**Type**: `STRING`**Provider name**: `diskType`**Description**: Disk storage type, as defined by Google Compute Engine. This must be a disk type appropriate to the project and zone in which the workers will run. If unknown or unspecified, the service will attempt to choose a reasonable default. For example, the standard persistent disk type is a resource name typically ending in "pd-standard". If SSD persistent disks are available, the resource name typically ends with "pd-ssd". The actual valid values are defined the Google Compute Engine API, not by the Cloud Dataflow API; consult the Google Compute Engine documentation for more information about determining the set of available disk types for a particular project and zone. Google Compute Engine Disk types are local to a particular project in a particular zone, and so the resource name will typically look something like this: compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard
    - `mount_point`**Type**: `STRING`**Provider name**: `mountPoint`**Description**: Directory in a VM where disk is mounted.
    - `size_gb`**Type**: `INT32`**Provider name**: `sizeGb`**Description**: Size of disk in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
  - `default_package_set`**Type**: `STRING`**Provider name**: `defaultPackageSet`**Description**: The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language.**Possible values**:
    - `DEFAULT_PACKAGE_SET_UNKNOWN` - The default set of packages to stage is unknown, or unspecified.
    - `DEFAULT_PACKAGE_SET_NONE` - Indicates that no packages should be staged at the worker unless explicitly specified by the job.
    - `DEFAULT_PACKAGE_SET_JAVA` - Stage packages typically useful to workers written in Java.
    - `DEFAULT_PACKAGE_SET_PYTHON` - Stage packages typically useful to workers written in Python.
  - `disk_size_gb`**Type**: `INT32`**Provider name**: `diskSizeGb`**Description**: Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
  - `disk_source_image`**Type**: `STRING`**Provider name**: `diskSourceImage`**Description**: Fully qualified source image for disks.
  - `disk_type`**Type**: `STRING`**Provider name**: `diskType`**Description**: Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default.
  - `ip_configuration`**Type**: `STRING`**Provider name**: `ipConfiguration`**Description**: Configuration for VM IPs.**Possible values**:
    - `WORKER_IP_UNSPECIFIED` - The configuration is unknown, or unspecified.
    - `WORKER_IP_PUBLIC` - Workers should have public IP addresses.
    - `WORKER_IP_PRIVATE` - Workers should have private IP addresses.
  - `kind`**Type**: `STRING`**Provider name**: `kind`**Description**: The kind of the worker pool; currently only `harness` and `shuffle` are supported.
  - `machine_type`**Type**: `STRING`**Provider name**: `machineType`**Description**: Machine type (e.g. "n1-standard-1"). If empty or unspecified, the service will attempt to choose a reasonable default.
  - `network`**Type**: `STRING`**Provider name**: `network`**Description**: Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
  - `num_threads_per_worker`**Type**: `INT32`**Provider name**: `numThreadsPerWorker`**Description**: The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming).
  - `num_workers`**Type**: `INT32`**Provider name**: `numWorkers`**Description**: Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default.
  - `on_host_maintenance`**Type**: `STRING`**Provider name**: `onHostMaintenance`**Description**: The action to take on host maintenance, as defined by the Google Compute Engine API.
  - `packages`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `packages`**Description**: Packages to be installed on workers.
    - `location`**Type**: `STRING`**Provider name**: `location`**Description**: The resource to read the package from. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket} bucket.storage.googleapis.com/
    - `name`**Type**: `STRING`**Provider name**: `name`**Description**: The name of the package.
  - `sdk_harness_container_images`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `sdkHarnessContainerImages`**Description**: Set of SDK harness containers needed to execute this pipeline. This will only be set in the Fn API path. For non-cross-language pipelines this should have only one entry. Cross-language pipelines will have two or more entries.
    - `capabilities`**Type**: `UNORDERED_LIST_STRING`**Provider name**: `capabilities`**Description**: The set of capabilities enumerated in the above Environment proto. See also [beam_runner_api.proto](https://github.com/apache/beam/blob/master/model/pipeline/src/main/proto/org/apache/beam/model/pipeline/v1/beam_runner_api.proto)
    - `container_image`**Type**: `STRING`**Provider name**: `containerImage`**Description**: A docker container image that resides in Google Container Registry.
    - `environment_id`**Type**: `STRING`**Provider name**: `environmentId`**Description**: Environment ID for the Beam runner API proto Environment that corresponds to the current SDK Harness.
    - `use_single_core_per_container`**Type**: `BOOLEAN`**Provider name**: `useSingleCorePerContainer`**Description**: If true, recommends the Dataflow service to use only one core per SDK container instance with this image. If false (or unset) recommends using more than one core per SDK container instance with this image for efficiency. Note that Dataflow service may choose to override this property if needed.
  - `subnetwork`**Type**: `STRING`**Provider name**: `subnetwork`**Description**: Subnetwork to which VMs will be assigned, if desired. Expected to be of the form "regions/REGION/subnetworks/SUBNETWORK".
  - `taskrunner_settings`**Type**: `STRUCT`**Provider name**: `taskrunnerSettings`**Description**: Settings passed through to Google Compute Engine workers when using the standard Dataflow task runner. Users should ignore this field.
    - `alsologtostderr`**Type**: `BOOLEAN`**Provider name**: `alsologtostderr`**Description**: Whether to also send taskrunner log info to stderr.
    - `base_task_dir`**Type**: `STRING`**Provider name**: `baseTaskDir`**Description**: The location on the worker for task-specific subdirectories.
    - `base_url`**Type**: `STRING`**Provider name**: `baseUrl`**Description**: The base URL for the taskrunner to use when accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/".
    - `commandlines_file_name`**Type**: `STRING`**Provider name**: `commandlinesFileName`**Description**: The file to store preprocessing commands in.
    - `continue_on_exception`**Type**: `BOOLEAN`**Provider name**: `continueOnException`**Description**: Whether to continue taskrunner if an exception is hit.
    - `dataflow_api_version`**Type**: `STRING`**Provider name**: `dataflowApiVersion`**Description**: The API version of endpoint, e.g. "v1b3"
    - `harness_command`**Type**: `STRING`**Provider name**: `harnessCommand`**Description**: The command to launch the worker harness.
    - `language_hint`**Type**: `STRING`**Provider name**: `languageHint`**Description**: The suggested backend language.
    - `log_dir`**Type**: `STRING`**Provider name**: `logDir`**Description**: The directory on the VM to store logs.
    - `log_to_serialconsole`**Type**: `BOOLEAN`**Provider name**: `logToSerialconsole`**Description**: Whether to send taskrunner log info to Google Compute Engine VM serial console.
    - `log_upload_location`**Type**: `STRING`**Provider name**: `logUploadLocation`**Description**: Indicates where to put logs. If this is not specified, the logs will not be uploaded. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
    - `oauth_scopes`**Type**: `UNORDERED_LIST_STRING`**Provider name**: `oauthScopes`**Description**: The OAuth2 scopes to be requested by the taskrunner in order to access the Cloud Dataflow API.
    - `parallel_worker_settings`**Type**: `STRUCT`**Provider name**: `parallelWorkerSettings`**Description**: The settings to pass to the parallel worker harness.
      - `base_url`**Type**: `STRING`**Provider name**: `baseUrl`**Description**: The base URL for accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/".
      - `reporting_enabled`**Type**: `BOOLEAN`**Provider name**: `reportingEnabled`**Description**: Whether to send work progress updates to the service.
      - `service_path`**Type**: `STRING`**Provider name**: `servicePath`**Description**: The Cloud Dataflow service path relative to the root URL, for example, "dataflow/v1b3/projects".
      - `shuffle_service_path`**Type**: `STRING`**Provider name**: `shuffleServicePath`**Description**: The Shuffle service path relative to the root URL, for example, "shuffle/v1beta1".
      - `temp_storage_prefix`**Type**: `STRING`**Provider name**: `tempStoragePrefix`**Description**: The prefix of the resources the system should use for temporary storage. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
      - `worker_id`**Type**: `STRING`**Provider name**: `workerId`**Description**: The ID of the worker running this pipeline.
    - `streaming_worker_main_class`**Type**: `STRING`**Provider name**: `streamingWorkerMainClass`**Description**: The streaming worker main class name.
    - `task_group`**Type**: `STRING`**Provider name**: `taskGroup`**Description**: The UNIX group ID on the worker VM to use for tasks launched by taskrunner; e.g. "wheel".
    - `task_user`**Type**: `STRING`**Provider name**: `taskUser`**Description**: The UNIX user ID on the worker VM to use for tasks launched by taskrunner; e.g. "root".
    - `temp_storage_prefix`**Type**: `STRING`**Provider name**: `tempStoragePrefix`**Description**: The prefix of the resources the taskrunner should use for temporary storage. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
    - `vm_id`**Type**: `STRING`**Provider name**: `vmId`**Description**: The ID string of the VM.
    - `workflow_file_name`**Type**: `STRING`**Provider name**: `workflowFileName`**Description**: The file to store the workflow in.
  - `teardown_policy`**Type**: `STRING`**Provider name**: `teardownPolicy`**Description**: Sets the policy for determining when to turn down the worker pool. Allowed values are: `TEARDOWN_ALWAYS`, `TEARDOWN_ON_SUCCESS`, and `TEARDOWN_NEVER`. `TEARDOWN_ALWAYS` means workers are always torn down regardless of whether the job succeeds. `TEARDOWN_ON_SUCCESS` means workers are torn down if the job succeeds. `TEARDOWN_NEVER` means the workers are never torn down. If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user's project until they are explicitly terminated by the user. Because of this, Google recommends using the `TEARDOWN_ALWAYS` policy except for small, manually supervised test jobs. If unknown or unspecified, the service will attempt to choose a reasonable default.**Possible values**:
    - `TEARDOWN_POLICY_UNKNOWN` - The teardown policy isn't specified, or is unknown.
    - `TEARDOWN_ALWAYS` - Always teardown the resource.
    - `TEARDOWN_ON_SUCCESS` - Teardown the resource on success. This is useful for debugging failures.
    - `TEARDOWN_NEVER` - Never teardown the resource. This is useful for debugging and development.
  - `worker_harness_container_image`**Type**: `STRING`**Provider name**: `workerHarnessContainerImage`**Description**: Required. Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.
  - `zone`**Type**: `STRING`**Provider name**: `zone`**Description**: Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default.
- `worker_region`**Type**: `STRING`**Provider name**: `workerRegion`**Description**: Optional. The Compute Engine region ([https://cloud.google.com/compute/docs/regions-zones/regions-zones](https://cloud.google.com/compute/docs/regions-zones/regions-zones)) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, default to the control plane's region.
- `worker_zone`**Type**: `STRING`**Provider name**: `workerZone`**Description**: Optional. The Compute Engine zone ([https://cloud.google.com/compute/docs/regions-zones/regions-zones](https://cloud.google.com/compute/docs/regions-zones/regions-zones)) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity.
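
To show how these fields fit together, here is a hypothetical fragment of an environment block as it might be submitted with a job; the keys mirror the provider names above and every value is a placeholder.

```python
# Hypothetical environment fragment; keys mirror the provider names above.
environment = {
    "serviceAccountEmail": "dataflow-runner@my-project.iam.gserviceaccount.com",
    "serviceKmsKeyName": (
        "projects/my-project/locations/us-central1/"
        "keyRings/my-ring/cryptoKeys/my-key"
    ),
    "flexResourceSchedulingGoal": "FLEXRS_COST_OPTIMIZED",
    "tempStoragePrefix": "storage.googleapis.com/my-bucket/tmp",
    "workerRegion": "us-west1",            # mutually exclusive with workerZone
    "debugOptions": {"enableHotKeyLogging": True},
}
```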

## `execution_info`{% #execution_info %}

**Type**: `STRUCT`**Provider name**: `executionInfo`**Description**: Deprecated.

## `id`{% #id %}

**Type**: `STRING`**Provider name**: `id`**Description**: The unique ID of this job. This field is set by the Dataflow service when the job is created, and is immutable for the life of the job.

## `job_metadata`{% #job_metadata %}

**Type**: `STRUCT`**Provider name**: `jobMetadata`**Description**: This field is populated by the Dataflow service to support filtering jobs by the metadata values provided here. Populated for ListJobs and all GetJob views SUMMARY and higher.

- `big_table_details`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `bigTableDetails`**Description**: Identification of a Cloud Bigtable source used in the Dataflow job.
  - `instance_id`**Type**: `STRING`**Provider name**: `instanceId`**Description**: InstanceId accessed in the connection.
  - `project_id`**Type**: `STRING`**Provider name**: `projectId`**Description**: ProjectId accessed in the connection.
  - `table_id`**Type**: `STRING`**Provider name**: `tableId`**Description**: TableId accessed in the connection.
- `bigquery_details`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `bigqueryDetails`**Description**: Identification of a BigQuery source used in the Dataflow job.
  - `dataset`**Type**: `STRING`**Provider name**: `dataset`**Description**: Dataset accessed in the connection.
  - `project_id`**Type**: `STRING`**Provider name**: `projectId`**Description**: Project accessed in the connection.
  - `query`**Type**: `STRING`**Provider name**: `query`**Description**: Query used to access data in the connection.
  - `table`**Type**: `STRING`**Provider name**: `table`**Description**: Table accessed in the connection.
- `datastore_details`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `datastoreDetails`**Description**: Identification of a Datastore source used in the Dataflow job.
  - `namespace`**Type**: `STRING`**Provider name**: `namespace`**Description**: Namespace used in the connection.
  - `project_id`**Type**: `STRING`**Provider name**: `projectId`**Description**: ProjectId accessed in the connection.
- `file_details`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `fileDetails`**Description**: Identification of a File source used in the Dataflow job.
  - `file_pattern`**Type**: `STRING`**Provider name**: `filePattern`**Description**: File Pattern used to access files by the connector.
- `pubsub_details`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `pubsubDetails`**Description**: Identification of a Pub/Sub source used in the Dataflow job.
  - `subscription`**Type**: `STRING`**Provider name**: `subscription`**Description**: Subscription used in the connection.
  - `topic`**Type**: `STRING`**Provider name**: `topic`**Description**: Topic accessed in the connection.
- `sdk_version`**Type**: `STRUCT`**Provider name**: `sdkVersion`**Description**: The SDK version used to run the job.
  - `bugs`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `bugs`**Description**: Output only. Known bugs found in this SDK version.
    - `severity`**Type**: `STRING`**Provider name**: `severity`**Description**: Output only. How severe the SDK bug is.**Possible values**:
      - `SEVERITY_UNSPECIFIED` - A bug of unknown severity.
      - `NOTICE` - A minor bug that may reduce reliability or performance for some jobs. Impact will be minimal or non-existent for most jobs.
      - `WARNING` - A bug that has some likelihood of causing performance degradation, data loss, or job failures.
      - `SEVERE` - A bug with extremely significant impact. Jobs may fail erroneously, performance may be severely degraded, and data loss may be very likely.
    - `type`**Type**: `STRING`**Provider name**: `type`**Description**: Output only. Describes the impact of this SDK bug.**Possible values**:
      - `TYPE_UNSPECIFIED` - Unknown issue with this SDK.
      - `GENERAL` - Catch-all for SDK bugs that don't fit in the below categories.
      - `PERFORMANCE` - Using this version of the SDK may result in degraded performance.
      - `DATALOSS` - Using this version of the SDK may cause data loss.
    - `uri`**Type**: `STRING`**Provider name**: `uri`**Description**: Output only. Link to more information on the bug.
  - `sdk_support_status`**Type**: `STRING`**Provider name**: `sdkSupportStatus`**Description**: The support status for this SDK version.**Possible values**:
    - `UNKNOWN` - Cloud Dataflow is unaware of this version.
    - `SUPPORTED` - This is a known version of an SDK, and is supported.
    - `STALE` - A newer version of the SDK family exists, and an update is recommended.
    - `DEPRECATED` - This version of the SDK is deprecated and will eventually be unsupported.
    - `UNSUPPORTED` - Support for this SDK version has ended and it should no longer be used.
  - `version`**Type**: `STRING`**Provider name**: `version`**Description**: The version of the SDK used to run the job.
  - `version_display_name`**Type**: `STRING`**Provider name**: `versionDisplayName`**Description**: A readable string describing the version of the SDK.
- `spanner_details`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `spannerDetails`**Description**: Identification of a Spanner source used in the Dataflow job.
  - `database_id`**Type**: `STRING`**Provider name**: `databaseId`**Description**: DatabaseId accessed in the connection.
  - `instance_id`**Type**: `STRING`**Provider name**: `instanceId`**Description**: InstanceId accessed in the connection.
  - `project_id`**Type**: `STRING`**Provider name**: `projectId`**Description**: ProjectId accessed in the connection.
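
As one way to use this metadata, here is a sketch that lists jobs in a region and flags those whose SDK version is no longer current, based on the `sdk_version` fields above; it assumes `google-api-python-client`, and the project and region are placeholders.

```python
# Sketch: flag jobs whose SDK support status indicates an update is needed,
# using jobMetadata.sdkVersion from the ListJobs response.
from googleapiclient.discovery import build

dataflow = build("dataflow", "v1b3")
resp = (
    dataflow.projects()
    .locations()
    .jobs()
    .list(projectId="my-project", location="us-central1")
    .execute()
)

for job in resp.get("jobs", []):
    sdk = job.get("jobMetadata", {}).get("sdkVersion", {})
    status = sdk.get("sdkSupportStatus", "UNKNOWN")
    if status in ("STALE", "DEPRECATED", "UNSUPPORTED"):
        print(f"{job['name']}: SDK {sdk.get('version')} is {status}")
```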

## `labels`{% #labels %}

**Type**: `UNORDERED_LIST_STRING`

## `location`{% #location %}

**Type**: `STRING`**Provider name**: `location`**Description**: Optional. The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that contains this job.

## `name`{% #name %}

**Type**: `STRING`**Provider name**: `name`**Description**: Optional. The user-specified Dataflow job name. Only one active job with a given name can exist in a project within one region at any given time. Jobs in different regions can have the same name. If a caller attempts to create a job with the same name as an active job that already exists, the attempt returns the existing job. The name must match the regular expression `[a-z]([-a-z0-9]{0,1022}[a-z0-9])?`
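
A quick client-side check of a proposed name against the regular expression quoted above; this is illustrative only, and the service remains the source of truth.

```python
# Validate a proposed Dataflow job name against the pattern from the
# description above before submitting a CreateJob request.
import re

JOB_NAME_RE = re.compile(r"[a-z]([-a-z0-9]{0,1022}[a-z0-9])?")

def is_valid_job_name(name: str) -> bool:
    return JOB_NAME_RE.fullmatch(name) is not None

assert is_valid_job_name("nightly-etl-v2")
assert not is_valid_job_name("Nightly_ETL")  # uppercase and underscore rejected
```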

## `organization_id`{% #organization_id %}

**Type**: `STRING`

## `parent`{% #parent %}

**Type**: `STRING`

## `pipeline_description`{% #pipeline_description %}

**Type**: `STRUCT`**Provider name**: `pipelineDescription`**Description**: Preliminary field: The format of this data may change at any time. A description of the user pipeline and stages through which it is executed. Created by Cloud Dataflow service. Only retrieved with JOB_VIEW_DESCRIPTION or JOB_VIEW_ALL.

- `display_data`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `displayData`**Description**: Pipeline level display data.
  - `bool_value`**Type**: `BOOLEAN`**Provider name**: `boolValue`**Description**: Contains value if the data is of a boolean type.
  - `duration_value`**Type**: `STRING`**Provider name**: `durationValue`**Description**: Contains value if the data is of duration type.
  - `float_value`**Type**: `FLOAT`**Provider name**: `floatValue`**Description**: Contains value if the data is of float type.
  - `int64_value`**Type**: `INT64`**Provider name**: `int64Value`**Description**: Contains value if the data is of int64 type.
  - `java_class_value`**Type**: `STRING`**Provider name**: `javaClassValue`**Description**: Contains value if the data is of java class type.
  - `key`**Type**: `STRING`**Provider name**: `key`**Description**: The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system.
  - `label`**Type**: `STRING`**Provider name**: `label`**Description**: An optional label to display in a dax UI for the element.
  - `namespace`**Type**: `STRING`**Provider name**: `namespace`**Description**: The namespace for the key. This is usually a class name or programming language namespace (i.e. python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering.
  - `short_str_value`**Type**: `STRING`**Provider name**: `shortStrValue`**Description**: A possible additional shorter value to display. For example a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip.
  - `str_value`**Type**: `STRING`**Provider name**: `strValue`**Description**: Contains value if the data is of string type.
  - `timestamp_value`**Type**: `TIMESTAMP`**Provider name**: `timestampValue`**Description**: Contains value if the data is of timestamp type.
  - `url`**Type**: `STRING`**Provider name**: `url`**Description**: An optional full URL.
- `execution_pipeline_stage`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `executionPipelineStage`**Description**: Description of each stage of execution of the pipeline.
  - `component_source`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `componentSource`**Description**: Collections produced and consumed by component transforms of this stage.
    - `name`**Type**: `STRING`**Provider name**: `name`**Description**: Dataflow service generated name for this source.
    - `original_transform_or_collection`**Type**: `STRING`**Provider name**: `originalTransformOrCollection`**Description**: User name for the original user transform or collection with which this source is most closely associated.
    - `user_name`**Type**: `STRING`**Provider name**: `userName`**Description**: Human-readable name for this transform; may be user or system generated.
  - `component_transform`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `componentTransform`**Description**: Transforms that comprise this execution stage.
    - `name`**Type**: `STRING`**Provider name**: `name`**Description**: Dataflow service generated name for this source.
    - `original_transform`**Type**: `STRING`**Provider name**: `originalTransform`**Description**: User name for the original user transform with which this transform is most closely associated.
    - `user_name`**Type**: `STRING`**Provider name**: `userName`**Description**: Human-readable name for this transform; may be user or system generated.
  - `id`**Type**: `STRING`**Provider name**: `id`**Description**: Dataflow service generated id for this stage.
  - `input_source`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `inputSource`**Description**: Input sources for this stage.
    - `name`**Type**: `STRING`**Provider name**: `name`**Description**: Dataflow service generated name for this source.
    - `original_transform_or_collection`**Type**: `STRING`**Provider name**: `originalTransformOrCollection`**Description**: User name for the original user transform or collection with which this source is most closely associated.
    - `size_bytes`**Type**: `INT64`**Provider name**: `sizeBytes`**Description**: Size of the source, if measurable.
    - `user_name`**Type**: `STRING`**Provider name**: `userName`**Description**: Human-readable name for this source; may be user or system generated.
  - `kind`**Type**: `STRING`**Provider name**: `kind`**Description**: Type of transform this stage is executing.**Possible values**:
    - `UNKNOWN_KIND` - Unrecognized transform type.
    - `PAR_DO_KIND` - ParDo transform.
    - `GROUP_BY_KEY_KIND` - Group By Key transform.
    - `FLATTEN_KIND` - Flatten transform.
    - `READ_KIND` - Read transform.
    - `WRITE_KIND` - Write transform.
    - `CONSTANT_KIND` - Constructs from a constant value, such as with Create.of.
    - `SINGLETON_KIND` - Creates a Singleton view of a collection.
    - `SHUFFLE_KIND` - Opening or closing a shuffle session, often as part of a GroupByKey.
  - `name`**Type**: `STRING`**Provider name**: `name`**Description**: Dataflow service generated name for this stage.
  - `output_source`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `outputSource`**Description**: Output sources for this stage.
    - `name`**Type**: `STRING`**Provider name**: `name`**Description**: Dataflow service generated name for this source.
    - `original_transform_or_collection`**Type**: `STRING`**Provider name**: `originalTransformOrCollection`**Description**: User name for the original user transform or collection with which this source is most closely associated.
    - `size_bytes`**Type**: `INT64`**Provider name**: `sizeBytes`**Description**: Size of the source, if measurable.
    - `user_name`**Type**: `STRING`**Provider name**: `userName`**Description**: Human-readable name for this source; may be user or system generated.
  - `prerequisite_stage`**Type**: `UNORDERED_LIST_STRING`**Provider name**: `prerequisiteStage`**Description**: Other stages that must complete before this stage can run.
- `original_pipeline_transform`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `originalPipelineTransform`**Description**: Description of each transform in the pipeline and collections between them.
  - `display_data`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `displayData`**Description**: Transform-specific display data.
    - `bool_value`**Type**: `BOOLEAN`**Provider name**: `boolValue`**Description**: Contains value if the data is of a boolean type.
    - `duration_value`**Type**: `STRING`**Provider name**: `durationValue`**Description**: Contains value if the data is of duration type.
    - `float_value`**Type**: `FLOAT`**Provider name**: `floatValue`**Description**: Contains value if the data is of float type.
    - `int64_value`**Type**: `INT64`**Provider name**: `int64Value`**Description**: Contains value if the data is of int64 type.
    - `java_class_value`**Type**: `STRING`**Provider name**: `javaClassValue`**Description**: Contains value if the data is of java class type.
    - `key`**Type**: `STRING`**Provider name**: `key`**Description**: The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system.
    - `label`**Type**: `STRING`**Provider name**: `label`**Description**: An optional label to display in a dax UI for the element.
    - `namespace`**Type**: `STRING`**Provider name**: `namespace`**Description**: The namespace for the key. This is usually a class name or programming language namespace (i.e. python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering.
    - `short_str_value`**Type**: `STRING`**Provider name**: `shortStrValue`**Description**: A possible additional shorter value to display. For example a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip.
    - `str_value`**Type**: `STRING`**Provider name**: `strValue`**Description**: Contains value if the data is of string type.
    - `timestamp_value`**Type**: `TIMESTAMP`**Provider name**: `timestampValue`**Description**: Contains value if the data is of timestamp type.
    - `url`**Type**: `STRING`**Provider name**: `url`**Description**: An optional full URL.
  - `id`**Type**: `STRING`**Provider name**: `id`**Description**: SDK generated id of this transform instance.
  - `input_collection_name`**Type**: `UNORDERED_LIST_STRING`**Provider name**: `inputCollectionName`**Description**: User names for all collection inputs to this transform.
  - `kind`**Type**: `STRING`**Provider name**: `kind`**Description**: Type of transform.**Possible values**:
    - `UNKNOWN_KIND` - Unrecognized transform type.
    - `PAR_DO_KIND` - ParDo transform.
    - `GROUP_BY_KEY_KIND` - Group By Key transform.
    - `FLATTEN_KIND` - Flatten transform.
    - `READ_KIND` - Read transform.
    - `WRITE_KIND` - Write transform.
    - `CONSTANT_KIND` - Constructs from a constant value, such as with Create.of.
    - `SINGLETON_KIND` - Creates a Singleton view of a collection.
    - `SHUFFLE_KIND` - Opening or closing a shuffle session, often as part of a GroupByKey.
  - `name`**Type**: `STRING`**Provider name**: `name`**Description**: User provided name for this transform instance.
  - `output_collection_name`**Type**: `UNORDERED_LIST_STRING`**Provider name**: `outputCollectionName`**Description**: User names for all collection outputs to this transform.
- `step_names_hash`**Type**: `STRING`**Provider name**: `stepNamesHash`**Description**: A hash value of the submitted pipeline portable graph step names, if one exists.
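
The `prerequisite_stage` links above effectively describe a dependency graph over execution stages; the following sketch derives a run order from them, with hypothetical stage IDs and edges.

```python
# Illustrative only: order execution stages so that every stage appears after
# the stages named in its prerequisiteStage list.
from graphlib import TopologicalSorter

prerequisites = {
    "S03": ["S01", "S02"],  # hypothetical stage IDs
    "S02": ["S01"],
    "S01": [],
}

print(list(TopologicalSorter(prerequisites).static_order()))
# ['S01', 'S02', 'S03']
```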

## `project_id`{% #project_id %}

**Type**: `STRING`

## `project_number`{% #project_number %}

**Type**: `STRING`

## `region_id`{% #region_id %}

**Type**: `STRING`

## `replace_job_id`{% #replace_job_id %}

**Type**: `STRING`**Provider name**: `replaceJobId`**Description**: If this job is an update of an existing job, this field is the job ID of the job it replaced. When sending a `CreateJobRequest`, you can update a job by specifying it here. The job named here is stopped, and its intermediate state is transferred to this job.
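
A sketch of the update flow described above: the replacement job is created with `replaceJobId` pointing at the running job. It assumes `google-api-python-client`; all identifiers are placeholders and the updated job definition is elided.

```python
# Sketch: submit a replacement job; the job named in replaceJobId is stopped
# and its intermediate state is transferred to the new job.
from googleapiclient.discovery import build

dataflow = build("dataflow", "v1b3")
job_body = {
    "name": "nightly-etl",  # placeholder
    "replaceJobId": "2024-06-01_00_00_00-1234567890123456789",  # placeholder
    # ... updated Job definition
}
new_job = (
    dataflow.projects()
    .locations()
    .jobs()
    .create(projectId="my-project", location="us-central1", body=job_body)
    .execute()
)
print(new_job["id"])
```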

## `replaced_by_job_id`{% #replaced_by_job_id %}

**Type**: `STRING`**Provider name**: `replacedByJobId`**Description**: If another job is an update of this job (and thus, this job is in `JOB_STATE_UPDATED`), this field contains the ID of that job.

## `requested_state`{% #requested_state %}

**Type**: `STRING`**Provider name**: `requestedState`**Description**: The job's requested state. Applies to `UpdateJob` requests. Set `requested_state` with `UpdateJob` requests to switch between the states `JOB_STATE_STOPPED` and `JOB_STATE_RUNNING`. You can also use `UpdateJob` requests to change a job's state from `JOB_STATE_RUNNING` to `JOB_STATE_CANCELLED`, `JOB_STATE_DONE`, or `JOB_STATE_DRAINED`. These states irrevocably terminate the job if it hasn't already reached a terminal state. This field has no effect on `CreateJob` requests.**Possible values**:

- `JOB_STATE_UNKNOWN` - The job's run state isn't specified.
- `JOB_STATE_STOPPED` - `JOB_STATE_STOPPED` indicates that the job has not yet started to run.
- `JOB_STATE_RUNNING` - `JOB_STATE_RUNNING` indicates that the job is currently running.
- `JOB_STATE_DONE` - `JOB_STATE_DONE` indicates that the job has successfully completed. This is a terminal job state. This state may be set by the Cloud Dataflow service, as a transition from `JOB_STATE_RUNNING`. It may also be set via a Cloud Dataflow `UpdateJob` call, if the job has not yet reached a terminal state.
- `JOB_STATE_FAILED` - `JOB_STATE_FAILED` indicates that the job has failed. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from `JOB_STATE_RUNNING`.
- `JOB_STATE_CANCELLED` - `JOB_STATE_CANCELLED` indicates that the job has been explicitly cancelled. This is a terminal job state. This state may only be set via a Cloud Dataflow `UpdateJob` call, and only if the job has not yet reached another terminal state.
- `JOB_STATE_UPDATED` - `JOB_STATE_UPDATED` indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from `JOB_STATE_RUNNING`.
- `JOB_STATE_DRAINING` - `JOB_STATE_DRAINING` indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow `UpdateJob` call, but only as a transition from `JOB_STATE_RUNNING`. Jobs that are draining may only transition to `JOB_STATE_DRAINED`, `JOB_STATE_CANCELLED`, or `JOB_STATE_FAILED`.
- `JOB_STATE_DRAINED` - `JOB_STATE_DRAINED` indicates that the job has been drained. A drained job terminated by stopping pulling from its input sources and processing any data that remained in-flight when draining was requested. This state is a terminal state, may only be set by the Cloud Dataflow service, and only as a transition from `JOB_STATE_DRAINING`.
- `JOB_STATE_PENDING` - `JOB_STATE_PENDING` indicates that the job has been created but is not yet running. Jobs that are pending may only transition to `JOB_STATE_RUNNING`, or `JOB_STATE_FAILED`.
- `JOB_STATE_CANCELLING` - `JOB_STATE_CANCELLING` indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to `JOB_STATE_CANCELLED` or `JOB_STATE_FAILED`.
- `JOB_STATE_QUEUED` - `JOB_STATE_QUEUED` indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to `JOB_STATE_PENDING` or `JOB_STATE_CANCELLED`.
- `JOB_STATE_RESOURCE_CLEANING_UP` - `JOB_STATE_RESOURCE_CLEANING_UP` indicates that the batch job's associated resources are currently being cleaned up after a successful run. Currently, this is an opt-in feature; please reach out to the Cloud support team if you are interested.
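
A sketch of switching a running job to a terminal state through an `UpdateJob` call, as described above; it assumes `google-api-python-client`, and the identifiers are placeholders.

```python
# Sketch: request cancellation of a running job by setting requestedState.
# Per the description above, JOB_STATE_DRAINED or JOB_STATE_DONE can also be
# requested from JOB_STATE_RUNNING.
from googleapiclient.discovery import build

dataflow = build("dataflow", "v1b3")
dataflow.projects().locations().jobs().update(
    projectId="my-project",
    location="us-central1",
    jobId="2024-06-01_00_00_00-1234567890123456789",  # placeholder job ID
    body={"requestedState": "JOB_STATE_CANCELLED"},
).execute()
```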

## `resource_name`{% #resource_name %}

**Type**: `STRING`

## `runtime_updatable_params`{% #runtime_updatable_params %}

**Type**: `STRUCT`**Provider name**: `runtimeUpdatableParams`**Description**: This field may ONLY be modified at runtime using the projects.jobs.update method to adjust job behavior. This field has no effect when specified at job creation.

- `max_num_workers`**Type**: `INT32`**Provider name**: `maxNumWorkers`**Description**: The maximum number of workers to cap autoscaling at. This field is currently only supported for Streaming Engine jobs.
- `min_num_workers`**Type**: `INT32`**Provider name**: `minNumWorkers`**Description**: The minimum number of workers to scale down to. This field is currently only supported for Streaming Engine jobs.
- `worker_utilization_hint`**Type**: `DOUBLE`**Provider name**: `workerUtilizationHint`**Description**: Target worker utilization, compared against the aggregate utilization of the worker pool by autoscaler, to determine upscaling and downscaling when absent other constraints such as backlog. For more information, see [Update an existing pipeline](https://cloud.google.com/dataflow/docs/guides/updating-a-pipeline).
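
A sketch of adjusting the autoscaling bounds on a running job via `projects.jobs.update`; it assumes `google-api-python-client`, the `updateMask` value shown is an assumption about how these fields are addressed, and all identifiers are placeholders.

```python
# Sketch: adjust autoscaling bounds on a running Streaming Engine job by
# updating runtimeUpdatableParams. The updateMask format is an assumption.
from googleapiclient.discovery import build

dataflow = build("dataflow", "v1b3")
dataflow.projects().locations().jobs().update(
    projectId="my-project",
    location="us-central1",
    jobId="2024-06-01_00_00_00-1234567890123456789",  # placeholder job ID
    updateMask=(
        "runtime_updatable_params.max_num_workers,"
        "runtime_updatable_params.min_num_workers"
    ),
    body={"runtimeUpdatableParams": {"minNumWorkers": 2, "maxNumWorkers": 20}},
).execute()
```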

## `satisfies_pzi`{% #satisfies_pzi %}

**Type**: `BOOLEAN`**Provider name**: `satisfiesPzi`**Description**: Output only. Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.

## `satisfies_pzs`{% #satisfies_pzs %}

**Type**: `BOOLEAN`**Provider name**: `satisfiesPzs`**Description**: Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.

## `service_resources`{% #service_resources %}

**Type**: `STRUCT`**Provider name**: `serviceResources`**Description**: Output only. Resources used by the Dataflow Service to run the job.

- `zones`**Type**: `UNORDERED_LIST_STRING`**Provider name**: `zones`**Description**: Output only. List of Cloud Zones being used by the Dataflow Service for this job. Example: us-central1-c

## `stage_states`{% #stage_states %}

**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `stageStates`**Description**: This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.

- `current_state_time`**Type**: `TIMESTAMP`**Provider name**: `currentStateTime`**Description**: The time at which the stage transitioned to this state.
- `execution_stage_name`**Type**: `STRING`**Provider name**: `executionStageName`**Description**: The name of the execution stage.
- `execution_stage_state`**Type**: `STRING`**Provider name**: `executionStageState`**Description**: Executions stage states allow the same set of values as JobState.**Possible values**:
  - `JOB_STATE_UNKNOWN` - The job's run state isn't specified.
  - `JOB_STATE_STOPPED` - `JOB_STATE_STOPPED` indicates that the job has not yet started to run.
  - `JOB_STATE_RUNNING` - `JOB_STATE_RUNNING` indicates that the job is currently running.
  - `JOB_STATE_DONE` - `JOB_STATE_DONE` indicates that the job has successfully completed. This is a terminal job state. This state may be set by the Cloud Dataflow service, as a transition from `JOB_STATE_RUNNING`. It may also be set via a Cloud Dataflow `UpdateJob` call, if the job has not yet reached a terminal state.
  - `JOB_STATE_FAILED` - `JOB_STATE_FAILED` indicates that the job has failed. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from `JOB_STATE_RUNNING`.
  - `JOB_STATE_CANCELLED` - `JOB_STATE_CANCELLED` indicates that the job has been explicitly cancelled. This is a terminal job state. This state may only be set via a Cloud Dataflow `UpdateJob` call, and only if the job has not yet reached another terminal state.
  - `JOB_STATE_UPDATED` - `JOB_STATE_UPDATED` indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from `JOB_STATE_RUNNING`.
  - `JOB_STATE_DRAINING` - `JOB_STATE_DRAINING` indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow `UpdateJob` call, but only as a transition from `JOB_STATE_RUNNING`. Jobs that are draining may only transition to `JOB_STATE_DRAINED`, `JOB_STATE_CANCELLED`, or `JOB_STATE_FAILED`.
  - `JOB_STATE_DRAINED` - `JOB_STATE_DRAINED` indicates that the job has been drained. A drained job terminated by stopping pulling from its input sources and processing any data that remained in-flight when draining was requested. This state is a terminal state, may only be set by the Cloud Dataflow service, and only as a transition from `JOB_STATE_DRAINING`.
  - `JOB_STATE_PENDING` - `JOB_STATE_PENDING` indicates that the job has been created but is not yet running. Jobs that are pending may only transition to `JOB_STATE_RUNNING`, or `JOB_STATE_FAILED`.
  - `JOB_STATE_CANCELLING` - `JOB_STATE_CANCELLING` indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to `JOB_STATE_CANCELLED` or `JOB_STATE_FAILED`.
  - `JOB_STATE_QUEUED` - `JOB_STATE_QUEUED` indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to `JOB_STATE_PENDING` or `JOB_STATE_CANCELLED`.
  - `JOB_STATE_RESOURCE_CLEANING_UP` - `JOB_STATE_RESOURCE_CLEANING_UP` indicates that the batch job's associated resources are currently being cleaned up after a successful run. Currently, this is an opt-in feature; please reach out to the Cloud support team if you are interested.

## `start_time`{% #start_time %}

**Type**: `TIMESTAMP`**Provider name**: `startTime`**Description**: The timestamp when the job was started (transitioned to JOB_STATE_PENDING). Flexible resource scheduling jobs are started with some delay after job creation, so start_time is unset before start and is updated when the job is started by the Cloud Dataflow service. For other jobs, start_time always equals create_time and is immutable and set by the Cloud Dataflow service.
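
Because `start_time` can lag `create_time` for flexible resource scheduling or queued jobs, the gap between the two is a rough measure of queueing delay. An illustrative helper follows; it assumes RFC 3339 timestamps with fractional seconds, such as `2024-06-01T00:00:00.000000Z`.

```python
# Illustrative: seconds a job spent between creation and start, or None if it
# has not started yet. Assumes RFC 3339 timestamps with fractional seconds.
from datetime import datetime

RFC3339 = "%Y-%m-%dT%H:%M:%S.%fZ"

def queue_delay_seconds(job: dict) -> float | None:
    start = job.get("startTime")
    if not start:
        return None
    created = datetime.strptime(job["createTime"], RFC3339)
    started = datetime.strptime(start, RFC3339)
    return (started - created).total_seconds()
```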

## `steps`{% #steps %}

**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `steps`**Description**: Exactly one of step or steps_location should be specified. The top-level steps that constitute the entire job. Only retrieved with JOB_VIEW_ALL.

- `kind`**Type**: `STRING`**Provider name**: `kind`**Description**: The kind of step in the Cloud Dataflow job.
- `name`**Type**: `STRING`**Provider name**: `name`**Description**: The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job.

## `steps_location`{% #steps_location %}

**Type**: `STRING`**Provider name**: `stepsLocation`**Description**: The Cloud Storage location where the steps are stored.

## `tags`{% #tags %}

**Type**: `UNORDERED_LIST_STRING`

## `temp_files`{% #temp_files %}

**Type**: `UNORDERED_LIST_STRING`**Provider name**: `tempFiles`**Description**: A set of files the system should be aware of that are used for temporary storage. These temporary files will be removed on job completion. No duplicates are allowed. No file patterns are supported. The supported files are: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}

## `type`{% #type %}

**Type**: `STRING`**Provider name**: `type`**Description**: Optional. The type of Dataflow job.**Possible values**:

- `JOB_TYPE_UNKNOWN` - The type of the job is unspecified, or unknown.
- `JOB_TYPE_BATCH` - A batch job with a well-defined end point: data is read, data is processed, data is written, and the job is done.
- `JOB_TYPE_STREAMING` - A continuously streaming job with no end: data is read, processed, and written continuously.

## `zone_id`{% #zone_id %}

**Type**: `STRING`
