analysis_instance_schema_uri
Type: STRING
Provider name: analysisInstanceSchemaUri
Description: URI of a YAML schema file describing the format of a single instance that you want TensorFlow Data Validation (TFDV) to analyze. If this field is empty, all feature data types are inferred from predict_instance_schema_uri, meaning that TFDV uses the data in exactly the same format (data types) as the prediction request/response. If there are any data type differences between the predict instance and the TFDV instance, this field can be used to override the schema. For models trained with Vertex AI, this field must be set, as all fields in the predict instance are formatted as strings.
ancestors
Type: UNORDERED_LIST_STRING
bigquery_tables
Type: UNORDERED_LIST_STRUCT
Provider name: bigqueryTables
Description: Output only. The BigQuery tables created for the job in the customer project. Customers can run their own queries and analysis on them. There can be at most 4 log tables: 1. Training data logging predict request/response 2. Serving data logging predict request/response
bigquery_table_path
Type: STRING
Provider name: bigqueryTablePath
Description: The BigQuery table created to store logs. Customers can run their own queries and analysis on it. Format: bq://.model_deployment_monitoring_._
log_source
Type: STRING
Provider name: logSource
Description: The source of log.
Possible values:
LOG_SOURCE_UNSPECIFIED - Unspecified source.
TRAINING - Logs coming from Training dataset.
SERVING - Logs coming from Serving traffic.
log_type
Type: STRING
Provider name: logType
Description: The type of log.
Possible values:
LOG_TYPE_UNSPECIFIED - Unspecified type.
PREDICT - Predict logs.
EXPLAIN - Explain logs.
request_response_logging_schema_version
Type: STRING
Provider name: requestResponseLoggingSchemaVersion
Description: Output only. The schema version of the request/response logging BigQuery table. Defaults to v1 if unset.
create_time
Type: TIMESTAMP
Provider name: createTime
Description: Output only. Timestamp when this ModelDeploymentMonitoringJob was created.
enable_monitoring_pipeline_logs
Type: BOOLEAN
Provider name: enableMonitoringPipelineLogs
Description: If true, the scheduled monitoring pipeline logs are sent to Google Cloud Logging, including pipeline status and detected anomalies. Note that these logs incur costs, which are subject to Cloud Logging pricing.
encryption_spec
Type: STRUCT
Provider name: encryptionSpec
Description: Customer-managed encryption key spec for a ModelDeploymentMonitoringJob. If set, this ModelDeploymentMonitoringJob and all sub-resources of this ModelDeploymentMonitoringJob will be secured by this key.
kms_key_name
Type: STRING
Provider name: kmsKeyName
Description: Required. The Cloud KMS resource identifier of the customer-managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
endpoint
Type: STRING
Provider name: endpoint
Description: Required. Endpoint resource name. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
error
Type: STRUCT
Provider name: error
Description: Output only. Only populated when the job’s state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.
code
Type: INT32
Provider name: code
Description: The status code, which should be an enum value of google.rpc.Code.
message
Type: STRING
Provider name: message
Description: A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
gcp_display_name
Type: STRING
Provider name: displayName
Description: Required. The user-defined display name of the ModelDeploymentMonitoringJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.
labels
Type: UNORDERED_LIST_STRING
latest_monitoring_pipeline_metadata
Type: STRUCT
Provider name: latestMonitoringPipelineMetadata
Description: Output only. Latest triggered monitoring pipeline metadata.
gcp_status
Type: STRUCT
Provider name: status
Description: The status of the most recent monitoring pipeline.
code
Type: INT32
Provider name: code
Description: The status code, which should be an enum value of google.rpc.Code.
message
Type: STRING
Provider name: message
Description: A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
run_time
Type: TIMESTAMP
Provider name: runTime
Description: The time of the most recent monitoring pipeline run related to this job.
log_ttl
Type: STRING
Provider name: logTtl
Description: The TTL of the BigQuery tables in user projects that store logs. A day is the basic unit of the TTL, and the ceiling of TTL/86400 (one day) is taken. For example, { second: 3600 } indicates a TTL of 1 day.
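To make the rounding rule concrete, here is a minimal sketch in plain Python (not tied to any SDK) of the ceil(TTL/86400) computation described above:

```python
import math

SECONDS_PER_DAY = 86400

def ttl_in_days(ttl_seconds: int) -> int:
    """Round a TTL given in seconds up to whole days: ceil(TTL / 86400)."""
    return math.ceil(ttl_seconds / SECONDS_PER_DAY)

# { second: 3600 } maps to 1 day, matching the example in the description.
assert ttl_in_days(3600) == 1
# 25 hours (90000 seconds) also rounds up, to 2 days.
assert ttl_in_days(90000) == 2
```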
logging_sampling_strategy
Type: STRUCT
Provider name: loggingSamplingStrategy
Description: Required. Sampling strategy for logging.
random_sample_config
Type: STRUCT
Provider name: randomSampleConfig
Description: Random sample config. Will support more sampling strategies later.
sample_rate
Type: DOUBLE
Provider name: sampleRate
Description: Sample rate, in the range (0, 1].
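As a rough illustration of how these sampling fields nest, a logging sampling strategy could look like the JSON-style sketch below. Field names follow the provider names documented above; the 0.1 rate is an arbitrary example value, not a recommended setting.

```python
# Hypothetical request-logging sampling strategy, using the provider
# field names documented above. sampleRate must lie in (0, 1].
logging_sampling_strategy = {
    "randomSampleConfig": {
        "sampleRate": 0.1,  # log roughly 10% of prediction requests
    }
}
```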
model_deployment_monitoring_objective_configs
Type: UNORDERED_LIST_STRUCT
Provider name: modelDeploymentMonitoringObjectiveConfigs
Description: Required. The config for monitoring objectives. This is a per-DeployedModel config; each DeployedModel needs to be configured separately (see the sketch after this field’s sub-entries below).
deployed_model_id
Type: STRING
Provider name: deployedModelId
Description: The DeployedModel ID of the objective config.
objective_config
Type: STRUCT
Provider name: objectiveConfig
Description: The objective config for the model monitoring job of this deployed model.
explanation_config
Type: STRUCT
Provider name: explanationConfig
Description: The config for integrating with Vertex Explainable AI.
enable_feature_attributes
Type: BOOLEAN
Provider name: enableFeatureAttributes
Description: Whether to analyze the Vertex Explainable AI feature attribution scores. If set to true, Vertex AI logs the feature attributions from the explain response and performs skew/drift detection on them.
explanation_baseline
Type: STRUCT
Provider name: explanationBaseline
Description: Predictions generated by the BatchPredictionJob using the baseline dataset.
bigquery
Type: STRUCT
Provider name: bigquery
Description: BigQuery location for BatchExplain output.
output_uri
Type: STRING
Provider name: outputUri
Description: Required. BigQuery URI to a project or table, up to 2000 characters long. When only the project is specified, the Dataset and Table are created. When the full table reference is specified, the Dataset must exist and the table must not exist. Accepted forms: * BigQuery path. For example: bq://projectId or bq://projectId.bqDatasetId or bq://projectId.bqDatasetId.bqTableId.
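As a loose sanity check of the three accepted forms listed above (a sketch only, not the service’s actual validation), one could classify a bq:// URI like this:

```python
import re

# Accepted forms from the description: bq://projectId,
# bq://projectId.bqDatasetId, or bq://projectId.bqDatasetId.bqTableId.
_BQ_URI = re.compile(r"^bq://[^./]+(\.[^./]+){0,2}$")

def bq_uri_depth(uri: str) -> int:
    """Return 1, 2, or 3 for project, dataset, or table URIs; raise otherwise."""
    if len(uri) > 2000 or not _BQ_URI.match(uri):
        raise ValueError(f"not an accepted BigQuery URI: {uri!r}")
    return uri[len("bq://"):].count(".") + 1

assert bq_uri_depth("bq://my-project") == 1
assert bq_uri_depth("bq://my-project.my_dataset.my_table") == 3
```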
gcs
Type: STRUCT
Provider name: gcs
Description: Cloud Storage location for BatchExplain output.
output_uri_prefix
Type: STRING
Provider name: outputUriPrefix
Description: Required. Google Cloud Storage URI to the output directory. If the URI doesn’t end with ‘/’, a ‘/’ is automatically appended. The directory is created if it doesn’t exist.
prediction_format
Type: STRING
Provider name: predictionFormat
Description: The storage format of the predictions generated by the BatchPrediction job.
Possible values:
PREDICTION_FORMAT_UNSPECIFIED - Should not be set.
JSONL - Predictions are in JSONL files.
BIGQUERY - Predictions are in BigQuery.
prediction_drift_detection_config
Type: STRUCT
Provider name: predictionDriftDetectionConfig
Description: The config for drift of prediction data.
default_drift_threshold
Type: STRUCT
Provider name: defaultDriftThreshold
Description: Drift anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
value
Type: DOUBLE
Provider name: value
Description: Specify a threshold value that can trigger the alert. If this threshold config is for feature distribution distance: 1. For categorical features, the distribution distance is calculated by the L-infinity norm. 2. For numerical features, the distribution distance is calculated by the Jensen–Shannon divergence. Each feature must have a non-zero threshold if it needs to be monitored; otherwise, no alert is triggered for that feature.
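To make the two distance measures concrete, the sketch below (plain Python/NumPy, written for illustration rather than taken from the service; the exact formulation used by Vertex AI may differ) computes the L-infinity distance for categorical distributions and a Jensen–Shannon divergence for binned numerical distributions, and compares them to an example threshold value:

```python
import numpy as np

def l_infinity_distance(p: np.ndarray, q: np.ndarray) -> float:
    """L-infinity norm between two categorical distributions (largest per-category gap)."""
    return float(np.max(np.abs(p - q)))

def _kl(a: np.ndarray, b: np.ndarray) -> float:
    mask = a > 0
    return float(np.sum(a[mask] * np.log(a[mask] / b[mask])))

def js_divergence(p: np.ndarray, q: np.ndarray) -> float:
    """Jensen-Shannon divergence (natural-log variant) between two binned distributions."""
    m = 0.5 * (p + q)
    return 0.5 * _kl(p, m) + 0.5 * _kl(q, m)

baseline = np.array([0.7, 0.2, 0.1])  # e.g. category frequencies at training time
current = np.array([0.5, 0.3, 0.2])   # category frequencies in recent predictions
threshold = 0.15                      # example threshold value

if l_infinity_distance(baseline, current) > threshold:
    print("drift anomaly: distance exceeds the configured threshold")
```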
training_dataset
Type: STRUCT
Provider name: trainingDataset
Description: Training dataset for models. This field has to be set only if TrainingPredictionSkewDetectionConfig is specified.
bigquery_source
Type: STRUCT
Provider name: bigquerySource
Description: The BigQuery table of the unmanaged Dataset used to train this Model.
input_uri
Type: STRING
Provider name: inputUri
Description: Required. BigQuery URI to a table, up to 2000 characters long. Accepted forms: * BigQuery path. For example: bq://projectId.bqDatasetId.bqTableId.
data_format
Type: STRING
Provider name: dataFormat
Description: Data format of the dataset, only applicable if the input is from Google Cloud Storage. The possible formats are: “tf-record”: The source file is a TFRecord file. “csv”: The source file is a CSV file. “jsonl”: The source file is a JSONL file.
dataset
Type: STRING
Provider name: dataset
Description: The resource name of the Dataset used to train this Model.
gcs_source
Type: STRUCT
Provider name: gcsSource
Description: The Google Cloud Storage uri of the unmanaged Dataset used to train this Model.
logging_sampling_strategy
Type: STRUCT
Provider name: loggingSamplingStrategy
Description: Strategy to sample data from Training Dataset. If not set, we process the whole dataset.
random_sample_config
Type: STRUCT
Provider name: randomSampleConfig
Description: Random sample config. Will support more sampling strategies later.
sample_rate
Type: DOUBLE
Provider name: sampleRate
Description: Sample rate, in the range (0, 1].
target_field
Type: STRING
Provider name: targetField
Description: The target field name that the model is to predict. This field will be excluded when running Predict and/or Explain on the training data.
training_prediction_skew_detection_config
Type: STRUCT
Provider name: trainingPredictionSkewDetectionConfig
Description: The config for skew between training data and prediction data.
default_skew_threshold
Type: STRUCT
Provider name: defaultSkewThreshold
Description: Skew anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
value
Type: DOUBLE
Provider name: value
Description: Specify a threshold value that can trigger the alert. If this threshold config is for feature distribution distance: 1. For categorical features, the distribution distance is calculated by the L-infinity norm. 2. For numerical features, the distribution distance is calculated by the Jensen–Shannon divergence. Each feature must have a non-zero threshold if it needs to be monitored; otherwise, no alert is triggered for that feature.
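Putting the objective-config fields documented above together, a single per-DeployedModel entry could look roughly like the sketch below. Field names are the provider names from this section; the deployed model ID, BigQuery URI, target field, and threshold values are placeholders for illustration only.

```python
# Hypothetical entry for modelDeploymentMonitoringObjectiveConfigs.
# All identifiers, URIs, and values below are placeholders.
objective_config_entry = {
    "deployedModelId": "1234567890",
    "objectiveConfig": {
        "trainingDataset": {
            "bigquerySource": {"inputUri": "bq://my-project.my_dataset.training_table"},
            "targetField": "label",
        },
        "trainingPredictionSkewDetectionConfig": {
            "defaultSkewThreshold": {"value": 0.3},
        },
        "predictionDriftDetectionConfig": {
            "defaultDriftThreshold": {"value": 0.3},
        },
    },
}
```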
model_deployment_monitoring_schedule_config
Type: STRUCT
Provider name: modelDeploymentMonitoringScheduleConfig
Description: Required. Schedule config for running the monitoring job.
monitor_interval
Type: STRING
Provider name: monitorInterval
Description: Required. The model monitoring job scheduling interval. It will be rounded up to the next full hour. This defines how often the monitoring jobs are triggered.
monitor_window
Type: STRING
Provider name: monitorWindow
Description: The time window of the prediction data included in each prediction dataset. This window specifies how far back historical model results are collected for each run. If not set, ModelDeploymentMonitoringScheduleConfig.monitor_interval is used. For example, if the current cutoff time is 2022-01-08 14:30:00 and monitor_window is set to 3600, then data from 2022-01-08 13:30:00 to 2022-01-08 14:30:00 is retrieved and aggregated to calculate the monitoring statistics.
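The worked example in the description can be reproduced with a short plain-Python sketch (the cutoff time and 3600-second window are the values from the example above):

```python
from datetime import datetime, timedelta

cutoff = datetime(2022, 1, 8, 14, 30, 0)   # cutoff time from the example
monitor_window = timedelta(seconds=3600)   # monitor_window of 3600 seconds

window_start = cutoff - monitor_window
# Data from 2022-01-08 13:30:00 to 2022-01-08 14:30:00 is aggregated for this run.
print(window_start, "to", cutoff)
```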
model_monitoring_alert_config
Type: STRUCT
Provider name: modelMonitoringAlertConfig
Description: Alert config for model monitoring.
email_alert_config
Type: STRUCT
Provider name: emailAlertConfig
Description: Email alert config.
user_emails
Type: UNORDERED_LIST_STRING
Provider name: userEmails
Description: The email addresses to send the alert to.
enable_logging
Type: BOOLEAN
Provider name: enableLogging
Description: Dump the anomalies to Cloud Logging. The anomalies are written as a JSON payload encoded from the ModelMonitoringStatsAnomalies proto. This can be further synced to Pub/Sub or any other service supported by Cloud Logging.
notification_channels
Type: UNORDERED_LIST_STRING
Provider name: notificationChannels
Description: Resource names of the NotificationChannels to send the alert to. Must be of the format projects//notificationChannels/
name
Type: STRING
Provider name: name
Description: Output only. Resource name of a ModelDeploymentMonitoringJob.
next_schedule_time
Type: TIMESTAMP
Provider name: nextScheduleTime
Description: Output only. Timestamp when this monitoring pipeline will be scheduled to run for the next round.
organization_id
Type: STRING
parent
Type: STRING
predict_instance_schema_uri
Type: STRING
Provider name: predictInstanceSchemaUri
Description: URI of a YAML schema file describing the format of a single instance that is given to this Endpoint’s predictions (and explanations). If not set, the predict schema is generated from collected predict requests.
project_id
Type: STRING
project_number
Type: STRING
resource_name
Type: STRING
satisfies_pzi
Type: BOOLEAN
Provider name: satisfiesPzi
Description: Output only. Reserved for future use.
satisfies_pzs
Type: BOOLEAN
Provider name: satisfiesPzs
Description: Output only. Reserved for future use.
schedule_state
Type: STRING
Provider name: scheduleState
Description: Output only. Schedule state when the monitoring job is in Running state.
Possible values:
MONITORING_SCHEDULE_STATE_UNSPECIFIED - Unspecified state.
PENDING - The pipeline is picked up and waiting to run.
OFFLINE - The pipeline is offline and will be scheduled for the next run.
RUNNING - The pipeline is running.
state
Type: STRING
Provider name: state
Description: Output only. The detailed state of the monitoring job. While the job is still being created, the state is ‘PENDING’. Once the job is successfully created, the state is ‘RUNNING’. If the job is paused, the state becomes ‘PAUSED’; when it is resumed, the state returns to ‘RUNNING’.
Possible values:
JOB_STATE_UNSPECIFIED - The job state is unspecified.
JOB_STATE_QUEUED - The job has just been created or resumed, and processing has not yet begun.
JOB_STATE_PENDING - The service is preparing to run the job.
JOB_STATE_RUNNING - The job is in progress.
JOB_STATE_SUCCEEDED - The job completed successfully.
JOB_STATE_FAILED - The job failed.
JOB_STATE_CANCELLING - The job is being cancelled. From this state the job may only go to JOB_STATE_SUCCEEDED, JOB_STATE_FAILED, or JOB_STATE_CANCELLED.
JOB_STATE_CANCELLED - The job has been cancelled.
JOB_STATE_PAUSED - The job has been stopped, and can be resumed.
JOB_STATE_EXPIRED - The job has expired.
JOB_STATE_UPDATING - The job is being updated. Only jobs in the RUNNING state can be updated. After updating, the job goes back to the RUNNING state.
JOB_STATE_PARTIALLY_SUCCEEDED - The job partially succeeded; some results may be missing due to errors.
stats_anomalies_base_directory
Type: STRUCT
Provider name: statsAnomaliesBaseDirectory
Description: Stats anomalies base folder path.
output_uri_prefix
Type: STRING
Provider name: outputUriPrefix
Description: Required. Google Cloud Storage URI to the output directory. If the URI doesn’t end with ‘/’, a ‘/’ is automatically appended. The directory is created if it doesn’t exist.
tags
Type: UNORDERED_LIST_STRING
update_time
Type: TIMESTAMP
Provider name: updateTime
Description: Output only. Timestamp when this ModelDeploymentMonitoringJob was updated most recently.