---
title: gcp_aiplatform_index_endpoint
description: Reference for the gcp_aiplatform_index_endpoint resource fields collected by Datadog.
breadcrumbs: Docs > Infrastructure > Datadog Resource Catalog
---

# gcp_aiplatform_index_endpoint{% #gcp_aiplatform_index_endpoint %}

## `ancestors`{% #ancestors %}

**Type**: `UNORDERED_LIST_STRING`

## `create_time`{% #create_time %}

**Type**: `TIMESTAMP`
**Provider name**: `createTime`
**Description**: Output only. Timestamp when this IndexEndpoint was created.

## `deployed_indexes`{% #deployed_indexes %}

**Type**: `UNORDERED_LIST_STRUCT`
**Provider name**: `deployedIndexes`
**Description**: Output only. The indexes deployed in this endpoint.

- `automatic_resources` **Type**: `STRUCT` **Provider name**: `automaticResources` **Description**: Optional. A description of resources that the DeployedIndex uses, which to a large degree are decided by Vertex AI, and optionally allows only a modest additional configuration. If min_replica_count is not set, the default value is 2 (no SLA is provided when min_replica_count=1). If max_replica_count is not set, the default value is min_replica_count. The max allowed replica count is 1000.
  - `max_replica_count` **Type**: `INT32` **Provider name**: `maxReplicaCount` **Description**: Immutable. The maximum number of replicas this DeployedIndex may be deployed on as the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale to that many replicas is guaranteed (barring service outages). If traffic increases beyond what the replicas at maximum can handle, a portion of the traffic will be dropped. If this value is not provided, no upper bound for scaling under heavy traffic is assumed, though Vertex AI may be unable to scale beyond a certain replica number.
  - `min_replica_count` **Type**: `INT32` **Provider name**: `minReplicaCount` **Description**: Immutable. The minimum number of replicas this DeployedIndex will always be deployed on. If traffic against it increases, it may dynamically be deployed onto more replicas, up to max_replica_count, and as traffic decreases, some of these extra replicas may be freed. If the requested value is too large, the deployment will error.
- `create_time` **Type**: `TIMESTAMP` **Provider name**: `createTime` **Description**: Output only. Timestamp when the DeployedIndex was created.
- `dedicated_resources` **Type**: `STRUCT` **Provider name**: `dedicatedResources` **Description**: Optional. A description of resources that are dedicated to the DeployedIndex, and that need a higher degree of manual configuration. The field min_replica_count must be set to a value strictly greater than 0, or else validation will fail. We don't provide SLA when min_replica_count=1. If max_replica_count is not set, the default value is min_replica_count. The max allowed replica count is 1000. Available machine types for SMALL shard: e2-standard-2 and all machine types available for MEDIUM and LARGE shard. Available machine types for MEDIUM shard: e2-standard-16 and all machine types available for LARGE shard. Available machine types for LARGE shard: e2-highmem-16, n2d-standard-32. n1-standard-16 and n1-standard-32 are still available, but we recommend e2-standard-16 and e2-highmem-16 for cost efficiency.
  - `autoscaling_metric_specs` **Type**: `UNORDERED_LIST_STRUCT` **Provider name**: `autoscalingMetricSpecs` **Description**: Immutable. The metric specifications that override a resource utilization metric's target value (CPU utilization, accelerator duty cycle, and so on; the target defaults to 60 if not set). At most one entry is allowed per metric. If machine_spec.accelerator_count is above 0, autoscaling is based on both the CPU utilization and accelerator duty cycle metrics, scaling up when either metric exceeds its target value and scaling down when both metrics are under their target values. The default target value is 60 for both metrics. If machine_spec.accelerator_count is 0, autoscaling is based on the CPU utilization metric only, with a default target value of 60 if not explicitly set. For example, for Online Prediction, to override the target CPU utilization to 80, set autoscaling_metric_specs.metric_name to `aiplatform.googleapis.com/prediction/online/cpu/utilization` and autoscaling_metric_specs.target to `80`.
    - `metric_name` **Type**: `STRING` **Provider name**: `metricName` **Description**: Required. The resource metric name. Supported metrics for Online Prediction: `aiplatform.googleapis.com/prediction/online/accelerator/duty_cycle`, `aiplatform.googleapis.com/prediction/online/cpu/utilization`
    - `target` **Type**: `INT32` **Provider name**: `target` **Description**: The target resource utilization in percentage (1% - 100%) for the given metric; once the real usage deviates from the target by a certain percentage, the machine replicas change. The default value is 60 (representing 60%) if not provided.
  - `machine_spec` **Type**: `STRUCT` **Provider name**: `machineSpec` **Description**: Required. Immutable. The specification of a single machine being used.
    - `accelerator_count` **Type**: `INT32` **Provider name**: `acceleratorCount` **Description**: The number of accelerators to attach to the machine.
    - `accelerator_type` **Type**: `STRING` **Provider name**: `acceleratorType` **Description**: Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count. **Possible values**:
      - `ACCELERATOR_TYPE_UNSPECIFIED` - Unspecified accelerator type, which means no accelerator.
      - `NVIDIA_TESLA_K80` - Deprecated: Nvidia Tesla K80 GPU has reached end of support, see [https://cloud.google.com/compute/docs/eol/k80-eol](https://cloud.google.com/compute/docs/eol/k80-eol).
      - `NVIDIA_TESLA_P100` - Nvidia Tesla P100 GPU.
      - `NVIDIA_TESLA_V100` - Nvidia Tesla V100 GPU.
      - `NVIDIA_TESLA_P4` - Nvidia Tesla P4 GPU.
      - `NVIDIA_TESLA_T4` - Nvidia Tesla T4 GPU.
      - `NVIDIA_TESLA_A100` - Nvidia Tesla A100 GPU.
      - `NVIDIA_A100_80GB` - Nvidia A100 80GB GPU.
      - `NVIDIA_L4` - Nvidia L4 GPU.
      - `NVIDIA_H100_80GB` - Nvidia H100 80GB GPU.
      - `NVIDIA_H100_MEGA_80GB` - Nvidia H100 Mega 80GB GPU.
      - `NVIDIA_H200_141GB` - Nvidia H200 141GB GPU.
      - `TPU_V2` - TPU v2.
      - `TPU_V3` - TPU v3.
      - `TPU_V4_POD` - TPU v4.
      - `TPU_V5_LITEPOD` - TPU v5.
    - `machine_type` **Type**: `STRING` **Provider name**: `machineType` **Description**: Immutable. The type of the machine. See the [list of machine types supported for prediction](https://cloud.google.com/vertex-ai/docs/predictions/configure-compute#machine-types) and the [list of machine types supported for custom training](https://cloud.google.com/vertex-ai/docs/training/configure-compute#machine-types). For DeployedModel this field is optional, and the default value is `n1-standard-2`. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
    - `reservation_affinity` **Type**: `STRUCT` **Provider name**: `reservationAffinity` **Description**: Optional. Immutable. Configuration controlling how this resource pool consumes reservation.
      - `key` **Type**: `STRING` **Provider name**: `key` **Description**: Optional. Corresponds to the label key of a reservation resource. To target a SPECIFIC_RESERVATION by name, use `compute.googleapis.com/reservation-name` as the key and specify the name of your reservation as its value.
      - `reservation_affinity_type` **Type**: `STRING` **Provider name**: `reservationAffinityType` **Description**: Required. Specifies the reservation affinity type. **Possible values**:
        - `TYPE_UNSPECIFIED` - Default value. This should not be used.
        - `NO_RESERVATION` - Do not consume from any reserved capacity, only use on-demand.
        - `ANY_RESERVATION` - Consume any reservation available, falling back to on-demand.
        - `SPECIFIC_RESERVATION` - Consume from a specific reservation. When chosen, the reservation must be identified via the `key` and `values` fields.
      - `values` **Type**: `UNORDERED_LIST_STRING` **Provider name**: `values` **Description**: Optional. Corresponds to the label values of a reservation resource. This must be the full resource name of the reservation or reservation block.
    - `tpu_topology` **Type**: `STRING` **Provider name**: `tpuTopology` **Description**: Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
  - `max_replica_count` **Type**: `INT32` **Provider name**: `maxReplicaCount` **Description**: Immutable. The maximum number of replicas this DeployedIndex may be deployed on as the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale to that many replicas is guaranteed (barring service outages). If traffic increases beyond what the replicas at maximum can handle, a portion of the traffic will be dropped. If this value is not provided, min_replica_count is used as the default value. The value of this field impacts the charge against Vertex CPU and GPU quotas. Specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type).
  - `min_replica_count` **Type**: `INT32` **Provider name**: `minReplicaCount` **Description**: Required. Immutable. The minimum number of machine replicas this DeployedIndex will always be deployed on. This value must be greater than or equal to 1. If traffic increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed.
  - `required_replica_count` **Type**: `INT32` **Provider name**: `requiredReplicaCount` **Description**: Optional. Number of required available replicas for the deployment to succeed. This field is only needed when partial deployment/mutation is desired. If set, the deploy/mutate operation will succeed once available_replica_count reaches required_replica_count, and the rest of the replicas will be retried. If not set, the default required_replica_count will be min_replica_count.
  - `spot` **Type**: `BOOLEAN` **Provider name**: `spot` **Description**: Optional. If true, schedule the deployment workload on [spot VMs](https://cloud.google.com/kubernetes-engine/docs/concepts/spot-vms).
- `deployed_index_auth_config` **Type**: `STRUCT` **Provider name**: `deployedIndexAuthConfig` **Description**: Optional. If set, authentication is enabled for the private endpoint.
  - `auth_provider` **Type**: `STRUCT` **Provider name**: `authProvider` **Description**: Defines the authentication provider that the DeployedIndex uses.
    - `allowed_issuers` **Type**: `UNORDERED_LIST_STRING` **Provider name**: `allowedIssuers` **Description**: A list of allowed JWT issuers. Each entry must be a valid Google service account, in the following format: `service-account-name@project-id.iam.gserviceaccount.com`
    - `audiences` **Type**: `UNORDERED_LIST_STRING` **Provider name**: `audiences` **Description**: The list of JWT [audiences](https://tools.ietf.org/html/draft-ietf-oauth-json-web-token-32#section-4.1.3) that are allowed to access. A JWT containing any of these audiences will be accepted.
- `deployment_group` **Type**: `STRING` **Provider name**: `deploymentGroup` **Description**: Optional. The deployment group can be no longer than 64 characters (e.g. 'test', 'prod'). If not set, the 'default' deployment group is used. Creating `deployment_groups` with `reserved_ip_ranges` is a recommended practice when the peered network has multiple peering ranges; this creates your deployments from predictable IP spaces for easier traffic administration. Also, one deployment_group (except 'default') can only be used with the same reserved_ip_ranges, which means that if the deployment_group has been used with reserved_ip_ranges: [a, b, c], using it with [a, b] or [d, e] is disallowed. Note: we only support up to 5 deployment groups (not including 'default').
- `enable_access_logging` **Type**: `BOOLEAN` **Provider name**: `enableAccessLogging` **Description**: Optional. If true, private endpoint's access logs are sent to Cloud Logging. These logs are like standard server access logs, containing information like timestamp and latency for each MatchRequest. Note that logs may incur a cost, especially if the deployed index receives a high queries per second rate (QPS). Estimate your costs before enabling this option.
- `gcp_display_name` **Type**: `STRING` **Provider name**: `displayName` **Description**: The display name of the DeployedIndex. If not provided upon creation, the Index's display_name is used.
- `id` **Type**: `STRING` **Provider name**: `id` **Description**: Required. The user specified ID of the DeployedIndex. The ID can be up to 128 characters long and must start with a letter and only contain letters, numbers, and underscores. The ID must be unique within the project it is created in.
- `index` **Type**: `STRING` **Provider name**: `index` **Description**: Required. The name of the Index this is the deployment of. We may refer to this Index as the DeployedIndex's "original" Index.
- `index_sync_time` **Type**: `TIMESTAMP` **Provider name**: `indexSyncTime` **Description**: Output only. The DeployedIndex may depend on various data on its original Index. Additionally, when certain changes to the original Index are being made (e.g. when what the Index contains is being changed), the DeployedIndex may be asynchronously updated in the background to reflect these changes. If this timestamp's value is at least the Index.update_time of the original Index, this DeployedIndex and the original Index are in sync. If this timestamp is older, then to see which updates this DeployedIndex already contains (and which it does not), one must list the operations that are running on the original Index. Only the successfully completed Operations with update_time equal to or before this sync time are contained in this DeployedIndex.
- `private_endpoints` **Type**: `STRUCT` **Provider name**: `privateEndpoints` **Description**: Output only. Provides paths for users to send requests directly to the deployed index services running on Cloud via private services access. This field is populated if network is configured.
  - `match_grpc_address` **Type**: `STRING` **Provider name**: `matchGrpcAddress` **Description**: Output only. The IP address used to send match gRPC requests.
  - `psc_automated_endpoints` **Type**: `UNORDERED_LIST_STRUCT` **Provider name**: `pscAutomatedEndpoints` **Description**: Output only. PscAutomatedEndpoints is populated when private service connect is enabled and PscAutomationConfig is set.
    - `match_address` **Type**: `STRING` **Provider name**: `matchAddress` **Description**: IP address created by the automated forwarding rule.
    - `network` **Type**: `STRING` **Provider name**: `network` **Description**: Corresponding network in pscAutomationConfigs.
    - `project_id` **Type**: `STRING` **Provider name**: `projectId` **Description**: Corresponding project_id in pscAutomationConfigs.
  - `service_attachment` **Type**: `STRING` **Provider name**: `serviceAttachment` **Description**: Output only. The name of the service attachment resource. Populated if private service connect is enabled.
- `psc_automation_configs` **Type**: `UNORDERED_LIST_STRUCT` **Provider name**: `pscAutomationConfigs` **Description**: Optional. If set for a PSC deployed index, a PSC connection will be automatically created after deployment is done, and the endpoint information is populated in private_endpoints.psc_automated_endpoints.
  - `network` **Type**: `STRING` **Provider name**: `network` **Description**: Required. The full name of the Google Compute Engine [network](https://cloud.google.com/compute/docs/networks-and-firewalls#networks). [Format](https://cloud.google.com/compute/docs/reference/rest/v1/networks/insert): `projects/{project}/global/networks/{network}`, where {project} is a project number, as in '12345', and {network} is the network name.
  - `project_id` **Type**: `STRING` **Provider name**: `projectId` **Description**: Required. Project ID used to create the forwarding rule.
- `reserved_ip_ranges` **Type**: `UNORDERED_LIST_STRING` **Provider name**: `reservedIpRanges` **Description**: Optional. A list of reserved IP ranges under the VPC network that can be used for this DeployedIndex. If set, the index is deployed within the provided IP ranges. Otherwise, the index might be deployed to any IP ranges under the provided VPC network. The value should be the name of the address ([https://cloud.google.com/compute/docs/reference/rest/v1/addresses](https://cloud.google.com/compute/docs/reference/rest/v1/addresses)). Example: ['vertex-ai-ip-range']. For more information about subnets and network IP ranges, see [https://cloud.google.com/vpc/docs/subnets#manually_created_subnet_ip_ranges](https://cloud.google.com/vpc/docs/subnets#manually_created_subnet_ip_ranges).
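
The deployed-index fields above correspond one-to-one to keys in the REST resource's JSON, keyed by the provider names. The following Python sketch assembles a minimal example body with dedicated resources and an autoscaling override; every concrete value (project number, index ID, machine type) is a hypothetical placeholder, not a recommendation:

```python
import json

# Sketch of a DeployedIndex JSON body using the fields documented above.
# All concrete values (project number, index ID, machine type) are hypothetical.
deployed_index = {
    "id": "my_deployed_index",  # up to 128 chars; must start with a letter
    "index": "projects/12345/locations/us-central1/indexes/678",
    "dedicatedResources": {
        "machineSpec": {"machineType": "e2-standard-16"},
        "minReplicaCount": 2,   # SLA is only provided when this is >= 2
        "maxReplicaCount": 10,  # upper bound for autoscaling; drives quota charges
        "autoscalingMetricSpecs": [
            {
                # Override the default CPU utilization target of 60%.
                "metricName": "aiplatform.googleapis.com/prediction/online/cpu/utilization",
                "target": 80,
            }
        ],
    },
    "enableAccessLogging": False,
}

print(json.dumps(deployed_index, indent=2))
```

Note that `dedicatedResources` and `automaticResources` describe alternative resource models, so a real body would carry only one of the two.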

## `description`{% #description %}

**Type**: `STRING`
**Provider name**: `description`
**Description**: The description of the IndexEndpoint.

## `enable_private_service_connect`{% #enable_private_service_connect %}

**Type**: `BOOLEAN`
**Provider name**: `enablePrivateServiceConnect`
**Description**: Optional. Deprecated: If true, expose the IndexEndpoint via private service connect. Only one of the fields, network or enable_private_service_connect, can be set.

## `encryption_spec`{% #encryption_spec %}

**Type**: `STRUCT`
**Provider name**: `encryptionSpec`
**Description**: Immutable. Customer-managed encryption key spec for an IndexEndpoint. If set, this IndexEndpoint and all sub-resources of this IndexEndpoint will be secured by this key.

- `kms_key_name` **Type**: `STRING` **Provider name**: `kmsKeyName` **Description**: Required. The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: `projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key`. The key needs to be in the same region as where the compute resource is created.
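
As a hedged illustration of the documented key-name form (the regex below is an assumption for client-side sanity checking, not part of the API), a quick format check can be written in Python:

```python
import re

# Matches the documented CMEK resource form:
# projects/<project>/locations/<region>/keyRings/<ring>/cryptoKeys/<key>
KMS_KEY_RE = re.compile(
    r"^projects/[^/]+/locations/[^/]+/keyRings/[^/]+/cryptoKeys/[^/]+$"
)

def is_valid_kms_key_name(name: str) -> bool:
    """Return True if `name` matches the documented kms_key_name shape."""
    return bool(KMS_KEY_RE.match(name))

print(is_valid_kms_key_name(
    "projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key"
))  # True
```

This only checks shape; whether the key exists and is in the same region as the compute resource is enforced server-side.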

## `etag`{% #etag %}

**Type**: `STRING`
**Provider name**: `etag`
**Description**: Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.

## `gcp_display_name`{% #gcp_display_name %}

**Type**: `STRING`
**Provider name**: `displayName`
**Description**: Required. The display name of the IndexEndpoint. The name can be up to 128 characters long and can consist of any UTF-8 characters.

## `labels`{% #labels %}

**Type**: `UNORDERED_LIST_STRING`

## `name`{% #name %}

**Type**: `STRING`
**Provider name**: `name`
**Description**: Output only. The resource name of the IndexEndpoint.

## `network`{% #network %}

**Type**: `STRING`
**Provider name**: `network`
**Description**: Optional. The full name of the Google Compute Engine [network](https://cloud.google.com/compute/docs/networks-and-firewalls#networks) to which the IndexEndpoint should be peered. Private services access must already be configured for the network. If left unspecified, the Endpoint is not peered with any network. network and private_service_connect_config are mutually exclusive. [Format](https://cloud.google.com/compute/docs/reference/rest/v1/networks/insert): `projects/{project}/global/networks/{network}`, where {project} is a project number, as in '12345', and {network} is the network name.
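
Given the format above, constructing the peered-network name can be sketched in Python (the project number and network name below are hypothetical):

```python
def network_resource_name(project_number: str, network: str) -> str:
    """Build the documented full network name:
    projects/{project}/global/networks/{network}, where {project} is a
    project number (e.g. '12345') and {network} is the network name."""
    return f"projects/{project_number}/global/networks/{network}"

print(network_resource_name("12345", "my-vpc"))
# projects/12345/global/networks/my-vpc
```

Note that the documented format expects the project *number*, not the project ID.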

## `organization_id`{% #organization_id %}

**Type**: `STRING`

## `parent`{% #parent %}

**Type**: `STRING`

## `private_service_connect_config`{% #private_service_connect_config %}

**Type**: `STRUCT`
**Provider name**: `privateServiceConnectConfig`
**Description**: Optional. Configuration for private service connect. network and private_service_connect_config are mutually exclusive.

- `enable_private_service_connect` **Type**: `BOOLEAN` **Provider name**: `enablePrivateServiceConnect` **Description**: Required. If true, expose the IndexEndpoint via private service connect.
- `project_allowlist` **Type**: `UNORDERED_LIST_STRING` **Provider name**: `projectAllowlist` **Description**: A list of Projects from which the forwarding rule will target the service attachment.
- `service_attachment` **Type**: `STRING` **Provider name**: `serviceAttachment` **Description**: Output only. The name of the generated service attachment resource. This is only populated if the endpoint is deployed with PrivateServiceConnect.
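
Since `network` and `private_service_connect_config` are documented as mutually exclusive, a client-side guard can be sketched in Python (field names follow the provider names above; the endpoint body is hypothetical):

```python
# Hedged sketch enforcing the documented constraint that `network` and
# `privateServiceConnectConfig` are mutually exclusive on an IndexEndpoint body.
def validate_endpoint_networking(endpoint: dict) -> None:
    has_network = bool(endpoint.get("network"))
    has_psc = bool(endpoint.get("privateServiceConnectConfig"))
    if has_network and has_psc:
        raise ValueError(
            "network and privateServiceConnectConfig are mutually exclusive"
        )

endpoint = {
    "displayName": "my-index-endpoint",  # hypothetical
    "privateServiceConnectConfig": {
        "enablePrivateServiceConnect": True,
        "projectAllowlist": ["my-consumer-project"],  # hypothetical project
    },
}
validate_endpoint_networking(endpoint)  # passes: only the PSC config is set
```

The server performs the authoritative validation; this guard just fails fast before a request is sent.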

## `project_id`{% #project_id %}

**Type**: `STRING`

## `project_number`{% #project_number %}

**Type**: `STRING`

## `public_endpoint_domain_name`{% #public_endpoint_domain_name %}

**Type**: `STRING`
**Provider name**: `publicEndpointDomainName`
**Description**: Output only. If public_endpoint_enabled is true, this field will be populated with the domain name to use for this index endpoint.

## `public_endpoint_enabled`{% #public_endpoint_enabled %}

**Type**: `BOOLEAN`
**Provider name**: `publicEndpointEnabled`
**Description**: Optional. If true, the deployed index will be accessible through a public endpoint.

## `region_id`{% #region_id %}

**Type**: `STRING`

## `resource_name`{% #resource_name %}

**Type**: `STRING`

## `satisfies_pzi`{% #satisfies_pzi %}

**Type**: `BOOLEAN`
**Provider name**: `satisfiesPzi`
**Description**: Output only. Reserved for future use.

## `satisfies_pzs`{% #satisfies_pzs %}

**Type**: `BOOLEAN`
**Provider name**: `satisfiesPzs`
**Description**: Output only. Reserved for future use.

## `tags`{% #tags %}

**Type**: `UNORDERED_LIST_STRING`

## `update_time`{% #update_time %}

**Type**: `TIMESTAMP`
**Provider name**: `updateTime`
**Description**: Output only. Timestamp when this IndexEndpoint was last updated. This timestamp is not updated when the endpoint's DeployedIndexes are updated, e.g. due to updates of the original Indexes they are the deployments of.

## `zone_id`{% #zone_id %}

**Type**: `STRING`
