---
title: Getting Started with Datadog
description: Datadog, the leading service for cloud-scale monitoring.
breadcrumbs: Docs > Infrastructure > Datadog Resource Catalog
---

# gcp_aiplatform_pipeline_job{% #gcp_aiplatform_pipeline_job %}

## `ancestors`{% #ancestors %}

**Type**: `UNORDERED_LIST_STRING`

## `create_time`{% #create_time %}

**Type**: `TIMESTAMP`**Provider name**: `createTime`**Description**: Output only. Pipeline creation time.

## `encryption_spec`{% #encryption_spec %}

**Type**: `STRUCT`**Provider name**: `encryptionSpec`**Description**: Customer-managed encryption key spec for a pipelineJob. If set, this PipelineJob and all of its sub-resources will be secured by this key.

- `kms_key_name`**Type**: `STRING`**Provider name**: `kmsKeyName`**Description**: Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: `projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key`. The key needs to be in the same region as where the compute resource is created.
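The documented KMS key format can be checked client-side before a job is submitted. The helper below is a hypothetical sketch, not part of the google-cloud-aiplatform SDK; it validates and splits the documented `projects/{project}/locations/{region}/keyRings/{ring}/cryptoKeys/{key}` form:

```python
import re

# Hypothetical validator for the documented Cloud KMS key name format;
# this is illustration only, not an official SDK helper.
KMS_KEY_RE = re.compile(
    r"^projects/(?P<project>[^/]+)"
    r"/locations/(?P<location>[^/]+)"
    r"/keyRings/(?P<key_ring>[^/]+)"
    r"/cryptoKeys/(?P<key>[^/]+)$"
)

def parse_kms_key_name(kms_key_name: str) -> dict:
    """Split a customer-managed encryption key name into its components."""
    match = KMS_KEY_RE.match(kms_key_name)
    if match is None:
        raise ValueError(f"not a valid Cloud KMS key name: {kms_key_name!r}")
    return match.groupdict()
```

A check like this catches malformed key names early; note the key must be in the same region as the compute resource it protects.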

## `end_time`{% #end_time %}

**Type**: `TIMESTAMP`**Provider name**: `endTime`**Description**: Output only. Pipeline end time.

## `error`{% #error %}

**Type**: `STRUCT`**Provider name**: `error`**Description**: Output only. The error that occurred during pipeline execution. Only populated when the pipeline's state is FAILED or CANCELLED.

- `code`**Type**: `INT32`**Provider name**: `code`**Description**: The status code, which should be an enum value of google.rpc.Code.
- `message`**Type**: `STRING`**Provider name**: `message`**Description**: A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
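When reading this struct from a decoded response, the numeric `code` can be made human-readable by mapping it through the public google.rpc.Code enum values. A minimal sketch (the helper name is ours, not an SDK function):

```python
# The numeric values follow the public google.rpc.Code enum.
GOOGLE_RPC_CODE_NAMES = {
    0: "OK", 1: "CANCELLED", 2: "UNKNOWN", 3: "INVALID_ARGUMENT",
    4: "DEADLINE_EXCEEDED", 5: "NOT_FOUND", 6: "ALREADY_EXISTS",
    7: "PERMISSION_DENIED", 8: "RESOURCE_EXHAUSTED", 9: "FAILED_PRECONDITION",
    10: "ABORTED", 11: "OUT_OF_RANGE", 12: "UNIMPLEMENTED", 13: "INTERNAL",
    14: "UNAVAILABLE", 15: "DATA_LOSS", 16: "UNAUTHENTICATED",
}

def describe_error(error: dict) -> str:
    """Render the error struct (`code`, `message`) as a single readable line."""
    name = GOOGLE_RPC_CODE_NAMES.get(error.get("code"), "UNRECOGNIZED")
    return f"{name}: {error.get('message', '')}"
```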

## `gcp_display_name`{% #gcp_display_name %}

**Type**: `STRING`**Provider name**: `displayName`**Description**: The display name of the Pipeline. The name can be up to 128 characters long and can consist of any UTF-8 characters.

## `job_detail`{% #job_detail %}

**Type**: `STRUCT`**Provider name**: `jobDetail`**Description**: Output only. The details of the pipeline run. Not available in the list view.

- `pipeline_context`**Type**: `STRUCT`**Provider name**: `pipelineContext`**Description**: Output only. The context of the pipeline.
  - `create_time`**Type**: `TIMESTAMP`**Provider name**: `createTime`**Description**: Output only. Timestamp when this Context was created.
  - `description`**Type**: `STRING`**Provider name**: `description`**Description**: Description of the Context.
  - `etag`**Type**: `STRING`**Provider name**: `etag`**Description**: An eTag used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
  - `gcp_display_name`**Type**: `STRING`**Provider name**: `displayName`**Description**: User provided display name of the Context. May be up to 128 Unicode characters.
  - `name`**Type**: `STRING`**Provider name**: `name`**Description**: Immutable. The resource name of the Context.
  - `parent_contexts`**Type**: `UNORDERED_LIST_STRING`**Provider name**: `parentContexts`**Description**: Output only. A list of resource names of Contexts that are parents of this Context. A Context may have at most 10 parent_contexts.
  - `schema_title`**Type**: `STRING`**Provider name**: `schemaTitle`**Description**: The title of the schema describing the metadata. Schema title and version are expected to be registered in earlier Create Schema calls, and both are used together as unique identifiers to identify schemas within the local metadata store.
  - `schema_version`**Type**: `STRING`**Provider name**: `schemaVersion`**Description**: The version of the schema in schema_name to use. Schema title and version are expected to be registered in earlier Create Schema calls, and both are used together as unique identifiers to identify schemas within the local metadata store.
  - `update_time`**Type**: `TIMESTAMP`**Provider name**: `updateTime`**Description**: Output only. Timestamp when this Context was last updated.
- `pipeline_run_context`**Type**: `STRUCT`**Provider name**: `pipelineRunContext`**Description**: Output only. The context of the current pipeline run.
  - `create_time`**Type**: `TIMESTAMP`**Provider name**: `createTime`**Description**: Output only. Timestamp when this Context was created.
  - `description`**Type**: `STRING`**Provider name**: `description`**Description**: Description of the Context.
  - `etag`**Type**: `STRING`**Provider name**: `etag`**Description**: An eTag used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
  - `gcp_display_name`**Type**: `STRING`**Provider name**: `displayName`**Description**: User provided display name of the Context. May be up to 128 Unicode characters.
  - `name`**Type**: `STRING`**Provider name**: `name`**Description**: Immutable. The resource name of the Context.
  - `parent_contexts`**Type**: `UNORDERED_LIST_STRING`**Provider name**: `parentContexts`**Description**: Output only. A list of resource names of Contexts that are parents of this Context. A Context may have at most 10 parent_contexts.
  - `schema_title`**Type**: `STRING`**Provider name**: `schemaTitle`**Description**: The title of the schema describing the metadata. Schema title and version are expected to be registered in earlier Create Schema calls, and both are used together as unique identifiers to identify schemas within the local metadata store.
  - `schema_version`**Type**: `STRING`**Provider name**: `schemaVersion`**Description**: The version of the schema in schema_name to use. Schema title and version are expected to be registered in earlier Create Schema calls, and both are used together as unique identifiers to identify schemas within the local metadata store.
  - `update_time`**Type**: `TIMESTAMP`**Provider name**: `updateTime`**Description**: Output only. Timestamp when this Context was last updated.
- `task_details`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `taskDetails`**Description**: Output only. The runtime details of the tasks under the pipeline.
  - `create_time`**Type**: `TIMESTAMP`**Provider name**: `createTime`**Description**: Output only. Task create time.
  - `end_time`**Type**: `TIMESTAMP`**Provider name**: `endTime`**Description**: Output only. Task end time.
  - `error`**Type**: `STRUCT`**Provider name**: `error`**Description**: Output only. The error that occurred during task execution. Only populated when the task's state is FAILED or CANCELLED.
    - `code`**Type**: `INT32`**Provider name**: `code`**Description**: The status code, which should be an enum value of google.rpc.Code.
    - `message`**Type**: `STRING`**Provider name**: `message`**Description**: A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
  - `execution`**Type**: `STRUCT`**Provider name**: `execution`**Description**: Output only. The execution metadata of the task.
    - `create_time`**Type**: `TIMESTAMP`**Provider name**: `createTime`**Description**: Output only. Timestamp when this Execution was created.
    - `description`**Type**: `STRING`**Provider name**: `description`**Description**: Description of the Execution.
    - `etag`**Type**: `STRING`**Provider name**: `etag`**Description**: An eTag used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
    - `gcp_display_name`**Type**: `STRING`**Provider name**: `displayName`**Description**: User provided display name of the Execution. May be up to 128 Unicode characters.
    - `name`**Type**: `STRING`**Provider name**: `name`**Description**: Output only. The resource name of the Execution.
    - `schema_title`**Type**: `STRING`**Provider name**: `schemaTitle`**Description**: The title of the schema describing the metadata. Schema title and version are expected to be registered in earlier Create Schema calls, and both are used together as unique identifiers to identify schemas within the local metadata store.
    - `schema_version`**Type**: `STRING`**Provider name**: `schemaVersion`**Description**: The version of the schema in `schema_title` to use. Schema title and version are expected to be registered in earlier Create Schema calls, and both are used together as unique identifiers to identify schemas within the local metadata store.
    - `state`**Type**: `STRING`**Provider name**: `state`**Description**: The state of this Execution. This is a property of the Execution, and does not imply or capture any ongoing process. This property is managed by clients (such as Vertex AI Pipelines) and the system does not prescribe or check the validity of state transitions.**Possible values**:
      - `STATE_UNSPECIFIED` - Unspecified Execution state
      - `NEW` - The Execution is new
      - `RUNNING` - The Execution is running
      - `COMPLETE` - The Execution has finished running
      - `FAILED` - The Execution has failed
      - `CACHED` - The Execution completed through Cache hit.
      - `CANCELLED` - The Execution was cancelled.
    - `update_time`**Type**: `TIMESTAMP`**Provider name**: `updateTime`**Description**: Output only. Timestamp when this Execution was last updated.
  - `executor_detail`**Type**: `STRUCT`**Provider name**: `executorDetail`**Description**: Output only. The detailed execution info.
    - `container_detail`**Type**: `STRUCT`**Provider name**: `containerDetail`**Description**: Output only. The detailed info for a container executor.
      - `failed_main_jobs`**Type**: `UNORDERED_LIST_STRING`**Provider name**: `failedMainJobs`**Description**: Output only. The names of the previously failed CustomJob for the main container executions. The list includes all the attempts in chronological order.
      - `failed_pre_caching_check_jobs`**Type**: `UNORDERED_LIST_STRING`**Provider name**: `failedPreCachingCheckJobs`**Description**: Output only. The names of the previously failed CustomJob for the pre-caching-check container executions. This job will be available if the PipelineJob.pipeline_spec specifies the `pre_caching_check` hook in the lifecycle events. The list includes all the attempts in chronological order.
      - `main_job`**Type**: `STRING`**Provider name**: `mainJob`**Description**: Output only. The name of the CustomJob for the main container execution.
      - `pre_caching_check_job`**Type**: `STRING`**Provider name**: `preCachingCheckJob`**Description**: Output only. The name of the CustomJob for the pre-caching-check container execution. This job will be available if the PipelineJob.pipeline_spec specifies the `pre_caching_check` hook in the lifecycle events.
    - `custom_job_detail`**Type**: `STRUCT`**Provider name**: `customJobDetail`**Description**: Output only. The detailed info for a custom job executor.
      - `failed_jobs`**Type**: `UNORDERED_LIST_STRING`**Provider name**: `failedJobs`**Description**: Output only. The names of the previously failed CustomJob. The list includes all the attempts in chronological order.
      - `job`**Type**: `STRING`**Provider name**: `job`**Description**: Output only. The name of the CustomJob.
  - `parent_task_id`**Type**: `INT64`**Provider name**: `parentTaskId`**Description**: Output only. The id of the parent task if the task is within a component scope. Empty if the task is at the root level.
  - `pipeline_task_status`**Type**: `UNORDERED_LIST_STRUCT`**Provider name**: `pipelineTaskStatus`**Description**: Output only. A list of task statuses. This field keeps a record of the task status as it evolves over time.
    - `error`**Type**: `STRUCT`**Provider name**: `error`**Description**: Output only. The error that occurred during the state. May be set when the state is any of the non-final states (PENDING/RUNNING/CANCELLING) or the FAILED state. If the state is FAILED, the error here is final and will not be retried. If the state is a non-final state, the error indicates a system error that is being retried.
      - `code`**Type**: `INT32`**Provider name**: `code`**Description**: The status code, which should be an enum value of google.rpc.Code.
      - `message`**Type**: `STRING`**Provider name**: `message`**Description**: A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
    - `state`**Type**: `STRING`**Provider name**: `state`**Description**: Output only. The state of the task.**Possible values**:
      - `STATE_UNSPECIFIED` - Unspecified.
      - `PENDING` - Specifies pending state for the task.
      - `RUNNING` - Specifies task is being executed.
      - `SUCCEEDED` - Specifies task completed successfully.
      - `CANCEL_PENDING` - Specifies Task cancel is in pending state.
      - `CANCELLING` - Specifies task is being cancelled.
      - `CANCELLED` - Specifies task was cancelled.
      - `FAILED` - Specifies task failed.
      - `SKIPPED` - Specifies task was skipped due to cache hit.
      - `NOT_TRIGGERED` - Specifies that the task was not triggered because the task's trigger policy is not satisfied. The trigger policy is specified in the `condition` field of PipelineJob.pipeline_spec.
    - `update_time`**Type**: `TIMESTAMP`**Provider name**: `updateTime`**Description**: Output only. Update time of this status.
  - `start_time`**Type**: `TIMESTAMP`**Provider name**: `startTime`**Description**: Output only. Task start time.
  - `state`**Type**: `STRING`**Provider name**: `state`**Description**: Output only. State of the task.**Possible values**:
    - `STATE_UNSPECIFIED` - Unspecified.
    - `PENDING` - Specifies pending state for the task.
    - `RUNNING` - Specifies task is being executed.
    - `SUCCEEDED` - Specifies task completed successfully.
    - `CANCEL_PENDING` - Specifies Task cancel is in pending state.
    - `CANCELLING` - Specifies task is being cancelled.
    - `CANCELLED` - Specifies task was cancelled.
    - `FAILED` - Specifies task failed.
    - `SKIPPED` - Specifies task was skipped due to cache hit.
    - `NOT_TRIGGERED` - Specifies that the task was not triggered because the task's trigger policy is not satisfied. The trigger policy is specified in the `condition` field of PipelineJob.pipeline_spec.
  - `task_id`**Type**: `INT64`**Provider name**: `taskId`**Description**: Output only. The system generated ID of the task.
  - `task_name`**Type**: `STRING`**Provider name**: `taskName`**Description**: Output only. The user specified name of the task that is defined in pipeline_spec.
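As a sketch of how `job_detail` can be consumed, the hypothetical helper below (ours, not an SDK function) walks `taskDetails` in a decoded API response, using the camelCase provider names as returned on the wire, and collects the name and error message of every task in the FAILED state:

```python
def failed_tasks(job_detail: dict) -> list:
    """Return (taskName, error message) pairs for tasks in the FAILED state."""
    failures = []
    for task in job_detail.get("taskDetails", []):
        if task.get("state") == "FAILED":
            # `error` is only populated for FAILED or CANCELLED tasks.
            message = task.get("error", {}).get("message", "")
            failures.append((task.get("taskName"), message))
    return failures
```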

## `labels`{% #labels %}

**Type**: `UNORDERED_LIST_STRING`

## `name`{% #name %}

**Type**: `STRING`**Provider name**: `name`**Description**: Output only. The resource name of the PipelineJob.

## `network`{% #network %}

**Type**: `STRING`**Provider name**: `network`**Description**: The full name of the Compute Engine [network](https://cloud.google.com/compute/docs/networks-and-firewalls#networks) to which the Pipeline Job's workload should be peered. For example, `projects/12345/global/networks/myVPC`. The [format](https://cloud.google.com/compute/docs/reference/rest/v1/networks/insert) is `projects/{project}/global/networks/{network}`, where `{project}` is a project number, as in `12345`, and `{network}` is a network name. Private services access must already be configured for the network. The Pipeline Job applies the network configuration to the Google Cloud resources it launches, such as Vertex AI Training or Dataflow jobs. If left unspecified, the workload is not peered with any network.
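The expected network form can be validated like any other resource name. A hypothetical check (not an SDK call), assuming `{project}` is numeric as the description states:

```python
import re

# Documented form: projects/{project}/global/networks/{network},
# where {project} is a project number such as 12345.
NETWORK_RE = re.compile(
    r"^projects/(?P<project>\d+)/global/networks/(?P<network>[^/]+)$"
)

def parse_network(name: str) -> tuple:
    """Return (project number, network name), or raise on a malformed value."""
    match = NETWORK_RE.match(name)
    if match is None:
        raise ValueError(f"not a valid network name: {name!r}")
    return match.group("project"), match.group("network")
```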

## `organization_id`{% #organization_id %}

**Type**: `STRING`

## `parent`{% #parent %}

**Type**: `STRING`

## `preflight_validations`{% #preflight_validations %}

**Type**: `BOOLEAN`**Provider name**: `preflightValidations`**Description**: Optional. Whether to do component level validations before job creation.

## `project_id`{% #project_id %}

**Type**: `STRING`

## `project_number`{% #project_number %}

**Type**: `STRING`

## `region_id`{% #region_id %}

**Type**: `STRING`

## `reserved_ip_ranges`{% #reserved_ip_ranges %}

**Type**: `UNORDERED_LIST_STRING`**Provider name**: `reservedIpRanges`**Description**: A list of names for the reserved IP ranges under the VPC network that can be used for this Pipeline Job's workload. If set, the workload is deployed within the provided IP ranges; otherwise, it is deployed to any IP range under the provided VPC network. Example: `['vertex-ai-ip-range']`.

## `resource_name`{% #resource_name %}

**Type**: `STRING`

## `runtime_config`{% #runtime_config %}

**Type**: `STRUCT`**Provider name**: `runtimeConfig`**Description**: Runtime config of the pipeline.

- `failure_policy`**Type**: `STRING`**Provider name**: `failurePolicy`**Description**: Represents the failure policy of a pipeline. By default, a pipeline continues to run until no more tasks can be executed, also known as PIPELINE_FAILURE_POLICY_FAIL_SLOW. If a pipeline is set to PIPELINE_FAILURE_POLICY_FAIL_FAST, it stops scheduling new tasks once a task has failed; any already-scheduled tasks run to completion.**Possible values**:
  - `PIPELINE_FAILURE_POLICY_UNSPECIFIED` - Default value, and follows fail slow behavior.
  - `PIPELINE_FAILURE_POLICY_FAIL_SLOW` - Indicates that the pipeline should continue to run until all possible tasks have been scheduled and completed.
  - `PIPELINE_FAILURE_POLICY_FAIL_FAST` - Indicates that the pipeline should stop scheduling new tasks after a task has failed.
- `gcs_output_directory`**Type**: `STRING`**Provider name**: `gcsOutputDirectory`**Description**: Required. A path in a Cloud Storage bucket, which will be treated as the root output directory of the pipeline. It is used by the system to generate the paths of output artifacts. The artifact paths are generated with a sub-path pattern `{job_id}/{task_id}/{output_key}` under the specified output directory. The service account specified in this pipeline must have the `storage.objects.get` and `storage.objects.create` permissions for this bucket.
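The `{job_id}/{task_id}/{output_key}` sub-path pattern under `gcs_output_directory` can be illustrated with a small helper. This is for illustration only; the service generates these paths itself, and the helper is not part of any SDK:

```python
def artifact_path(gcs_output_directory: str, job_id: str,
                  task_id: str, output_key: str) -> str:
    """Build an output artifact path using the documented sub-path pattern."""
    root = gcs_output_directory.rstrip("/")  # tolerate a trailing slash
    return f"{root}/{job_id}/{task_id}/{output_key}"
```

For example, with a hypothetical root of `gs://my-bucket/pipeline-root`, the artifact for output `model` of task `42` in job `job-1` would land under `gs://my-bucket/pipeline-root/job-1/42/model`.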

## `schedule_name`{% #schedule_name %}

**Type**: `STRING`**Provider name**: `scheduleName`**Description**: Output only. The schedule resource name. Only returned if the Pipeline is created by Schedule API.

## `service_account`{% #service_account %}

**Type**: `STRING`**Provider name**: `serviceAccount`**Description**: The service account that the pipeline workload runs as. If not specified, the Compute Engine default service account in the project is used; see [https://cloud.google.com/compute/docs/access/service-accounts#default_service_account](https://cloud.google.com/compute/docs/access/service-accounts#default_service_account). Users starting the pipeline must have the `iam.serviceAccounts.actAs` permission on this service account.

## `start_time`{% #start_time %}

**Type**: `TIMESTAMP`**Provider name**: `startTime`**Description**: Output only. Pipeline start time.

## `state`{% #state %}

**Type**: `STRING`**Provider name**: `state`**Description**: Output only. The detailed state of the job.**Possible values**:

- `PIPELINE_STATE_UNSPECIFIED` - The pipeline state is unspecified.
- `PIPELINE_STATE_QUEUED` - The pipeline has been created or resumed, and processing has not yet begun.
- `PIPELINE_STATE_PENDING` - The service is preparing to run the pipeline.
- `PIPELINE_STATE_RUNNING` - The pipeline is in progress.
- `PIPELINE_STATE_SUCCEEDED` - The pipeline completed successfully.
- `PIPELINE_STATE_FAILED` - The pipeline failed.
- `PIPELINE_STATE_CANCELLING` - The pipeline is being cancelled. From this state, the pipeline may only go to either PIPELINE_STATE_SUCCEEDED, PIPELINE_STATE_FAILED or PIPELINE_STATE_CANCELLED.
- `PIPELINE_STATE_CANCELLED` - The pipeline has been cancelled.
- `PIPELINE_STATE_PAUSED` - The pipeline has been stopped, and can be resumed.
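When polling this field, it helps to distinguish terminal states from states that may still change. The sets below follow the value descriptions above (the helpers themselves are a sketch, not SDK utilities); per the PIPELINE_STATE_CANCELLING note, a cancelling pipeline can only move to one of the terminal states:

```python
# States from which the pipeline can no longer change, per the value list above.
TERMINAL_STATES = {
    "PIPELINE_STATE_SUCCEEDED",
    "PIPELINE_STATE_FAILED",
    "PIPELINE_STATE_CANCELLED",
}

def is_terminal(state: str) -> bool:
    """True when polling can stop: the state will not change again."""
    return state in TERMINAL_STATES

def valid_after_cancelling(state: str) -> bool:
    """True for the only states reachable from PIPELINE_STATE_CANCELLING."""
    return state in TERMINAL_STATES
```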

## `tags`{% #tags %}

**Type**: `UNORDERED_LIST_STRING`

## `template_metadata`{% #template_metadata %}

**Type**: `STRUCT`**Provider name**: `templateMetadata`**Description**: Output only. Pipeline template metadata. Fields are populated if PipelineJob.template_uri is from a supported template registry.

- `version`**Type**: `STRING`**Provider name**: `version`**Description**: The version_name in the artifact registry. Always present in the output if the PipelineJob.template_uri is from a supported template registry. Format is `sha256:abcdef123456…`.

## `template_uri`{% #template_uri %}

**Type**: `STRING`**Provider name**: `templateUri`**Description**: A template URI from which the PipelineJob.pipeline_spec, if empty, is downloaded. Currently, only URIs from the Vertex Template Registry & Gallery are supported. See [https://cloud.google.com/vertex-ai/docs/pipelines/create-pipeline-template](https://cloud.google.com/vertex-ai/docs/pipelines/create-pipeline-template).

## `update_time`{% #update_time %}

**Type**: `TIMESTAMP`**Provider name**: `updateTime`**Description**: Output only. Timestamp when this PipelineJob was most recently updated.

## `zone_id`{% #zone_id %}

**Type**: `STRING`
