Vertex AI Data Labeling Job

A Vertex AI Data Labeling Job in Google Cloud is a managed service that helps create high-quality labeled datasets for machine learning. It coordinates human labelers or automated labeling tools to annotate data such as images, text, or video according to defined instructions. The labeled output is stored in a dataset for training and evaluation of AI models.
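As a sketch of how the fields documented below fit together, the snippet constructs a plausible JSON body for the Vertex AI `dataLabelingJobs.create` REST method. Field names are the camelCase REST forms of the fields listed in this page; every project, dataset, bucket, and schema path is a placeholder, not a real resource.

```python
import json

# Hypothetical create-request body for a Vertex AI DataLabelingJob.
# All resource paths and values are illustrative placeholders.
data_labeling_job = {
    # Required user-defined display name (up to 128 UTF-8 characters).
    "displayName": "image-classification-labeling",
    # Required. Currently only a single Dataset is supported.
    "datasets": [
        "projects/my-project/locations/us-central1/datasets/1234567890"
    ],
    # Required. Number of labelers per DataItem.
    "labelerCount": 3,
    # Required. PDF instructions shared with the labelers.
    "instructionUri": "gs://my-bucket/instructions/labeling-guide.pdf",
    # Required. YAML schema describing this type of labeling job; the
    # exact file name here is an assumption for illustration.
    "inputsSchemaUri": (
        "gs://google-cloud-aiplatform/schema/datalabelingjob/inputs/"
        "image_classification_1.0.0.yaml"
    ),
    # Optional user-defined metadata labels for organizing jobs.
    "labels": {"team": "ml-platform", "env": "test"},
}

print(json.dumps(data_labeling_job, indent=2))
```

The body would be POSTed to the regional endpoint under `projects/{project}/locations/{location}/dataLabelingJobs`; output-only fields such as `state`, `labelingProgress`, and `currentSpend` are set by the service and never appear in the request.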

gcp.aiplatform_data_labeling_job

Fields

| ID | Type | Data Type | Description |
| --- | --- | --- | --- |
| `_key` | core | string | |
| `active_learning_config` | core | json | Parameters that configure the active learning pipeline. Active learning labels the data incrementally over several iterations; in each iteration it selects a batch of data based on the sampling strategy. |
| `ancestors` | core | array<string> | |
| `create_time` | core | timestamp | Output only. Timestamp when this DataLabelingJob was created. |
| `current_spend` | core | json | Output only. Estimated cost (in US dollars) that the DataLabelingJob has incurred to date. |
| `datadog_display_name` | core | string | |
| `datasets` | core | array<string> | Required. Dataset resource names. Currently only labeling from a single Dataset is supported. Format: `projects/{project}/locations/{location}/datasets/{dataset}` |
| `encryption_spec` | core | json | Customer-managed encryption key spec for a DataLabelingJob. If set, this DataLabelingJob is secured by this key. Note: annotations created in the DataLabelingJob are associated with the EncryptionSpec of the Dataset they are exported to. |
| `error` | core | json | Output only. DataLabelingJob errors. Populated only when the job's state is `JOB_STATE_FAILED` or `JOB_STATE_CANCELLED`. |
| `gcp_display_name` | core | string | Required. The user-defined display name of the DataLabelingJob. The name can be up to 128 characters long and can consist of any UTF-8 characters. |
| `inputs_schema_uri` | core | string | Required. Points to a YAML file stored on Google Cloud Storage describing the configuration for a specific type of DataLabelingJob. The usable schema files are found in the https://storage.googleapis.com/google-cloud-aiplatform bucket, in the /schema/datalabelingjob/inputs/ folder. |
| `instruction_uri` | core | string | Required. The Google Cloud Storage location of the instruction PDF. This PDF is shared with labelers and provides a detailed description of how to label DataItems in Datasets. |
| `labeler_count` | core | int64 | Required. Number of labelers to work on each DataItem. |
| `labeling_progress` | core | int64 | Output only. Current labeling job progress percentage, scaled to the interval [0, 100], indicating the percentage of DataItems that have been labeled. |
| `labels` | core | array<string> | The labels with user-defined metadata to organize your DataLabelingJobs. Label keys and values can be no longer than 64 characters (Unicode code points) and can contain only lowercase letters, numeric characters, underscores, and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels. System-reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable. The following system label exists for each DataLabelingJob: "aiplatform.googleapis.com/schema" (output only; its value is the inputs_schema's title). |
| `name` | core | string | Output only. Resource name of the DataLabelingJob. |
| `organization_id` | core | string | |
| `parent` | core | string | |
| `project_id` | core | string | |
| `project_number` | core | string | |
| `region_id` | core | string | |
| `resource_name` | core | string | |
| `specialist_pools` | core | array<string> | The resource names of the SpecialistPools associated with this job. |
| `state` | core | string | Output only. The detailed state of the job. |
| `tags` | core | hstore_csv | |
| `update_time` | core | timestamp | Output only. Timestamp when this DataLabelingJob was most recently updated. |
| `zone_id` | core | string | |
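The `active_learning_config` field above is a JSON object. A plausible shape, modeled on the camelCase REST form of the Vertex AI `ActiveLearningConfig` message, is sketched below; the specific percentages and the strategy value are illustrative assumptions, not recommendations.

```python
import json

# Illustrative active_learning_config payload: cap labeling at 80% of the
# DataItems, sample 10% of the data in each iteration's batch, and pick
# items by model uncertainty. Values are assumptions for illustration.
active_learning_config = {
    "maxDataItemPercentage": 80,
    "sampleConfig": {
        "initialBatchSamplePercentage": 10,
        "followingBatchSamplePercentage": 10,
        "sampleStrategy": "UNCERTAINTY",
    },
}

print(json.dumps(active_learning_config, indent=2))
```

Per the field description, the service then labels data in batches: each iteration selects a new batch according to `sampleConfig` until the overall cap is reached.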