
Compatibility

  • Supported GitLab versions:

    • GitLab.com (SaaS)
    • GitLab >= 14.1 (self-hosted)
    • GitLab >= 13.7.0 (self-hosted) by enabling the datadog_ci_integration feature flag
  • Partial pipelines: View partially retried and downstream pipeline executions

  • Manual steps: View manually triggered pipelines

  • Queue time: View the amount of time pipeline jobs wait in the queue before processing

  • Logs correlation: Correlate pipeline spans to logs and enable job log collection

  • Infrastructure metric correlation: Correlate pipelines to infrastructure host metrics for self-hosted GitLab runners

  • Custom pre-defined tags: Apply custom tags to all generated pipeline, stage, and job spans

  • Custom tags and metrics at runtime: Configure custom tags and metrics at runtime

  • Parameters: Set custom env or service parameters

  • Pipeline failure reasons: Identify pipeline failure reasons from error messages

Configure the Datadog integration

Configure the integration on a project or group by going to Settings > Integrations > Datadog for each project or group you want to instrument.

You can also activate the integration at the GitLab instance level, by going to Admin > Settings > Integrations > Datadog.

For self-hosted GitLab versions earlier than 14.1 (>= 13.7.0), enable the datadog_ci_integration feature flag to activate the integration. Run one of the following commands, which use GitLab’s Rails Runner, depending on your installation type:

Omnibus installations

sudo gitlab-rails runner "Feature.enable(:datadog_ci_integration)"

From source installations

sudo -u git -H bundle exec rails runner \
  -e production \
  "Feature.enable(:datadog_ci_integration)"

Kubernetes installations

kubectl exec -it <task-runner-pod-name> -- \
  /srv/gitlab/bin/rails runner "Feature.enable(:datadog_ci_integration)"
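
To confirm the flag is active, you can query it with the same Rails runner pattern. The following is a minimal sketch for an Omnibus installation, using GitLab's standard Feature.enabled? check; adapt the invocation for source or Kubernetes installations as shown above:

sudo gitlab-rails runner "puts Feature.enabled?(:datadog_ci_integration)"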

Then, configure the integration on a project by going to Settings > Integrations > Datadog for each project you want to instrument.

Note: Due to a bug in early versions of GitLab, the Datadog integration cannot be enabled at the group or instance level on GitLab versions < 14.1, even if the option is available in GitLab's UI.

Fill in the integration configuration settings:

  • Active: Enables the integration.
  • Datadog site: Specifies which Datadog site to send data to. Default: datadoghq.com
  • API URL (optional): Overrides the API URL used to send data directly; only used in advanced scenarios. Default: (empty, no override)
  • API key: Specifies which API key to use when sending data. You can generate one in the APIs tab of the Integrations section in Datadog.
  • Service (optional): Specifies the service name attached to each span generated by the integration. Use this to differentiate between GitLab instances. Default: gitlab-ci
  • Env (optional): Specifies the environment (env tag) attached to each span generated by the integration. Use this to differentiate between groups of GitLab instances (for example, staging or production). Default: none
  • Tags (optional): Specifies any custom tags to attach to each span generated by the integration. Provide one tag per line in the format key:value. Default: (empty, no additional tags). Note: Available only on GitLab.com and GitLab >= 14.8 self-hosted.
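
For example, the Tags field could contain the following values, one key:value pair per line (the values here are hypothetical placeholders):

team:backend
gitlab_instance:eu-west-1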

You can test the integration with the Test settings button (only available when configuring the integration on a project). After the test is successful, click Save changes to finish the integration setup.
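
If you want to verify the API key itself before saving, one option is Datadog's key validation endpoint. A minimal sketch, assuming the default datadoghq.com site (replace the domain with your Datadog site's API host and <API_KEY> with your key):

curl -s -X GET "https://api.datadoghq.com/api/v1/validate" \
  -H "DD-API-KEY: <API_KEY>"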

Integrate through webhooks

As an alternative to using the native Datadog integration, you can use webhooks to send pipeline data to Datadog.

Note: The native Datadog integration is the recommended approach and the option that is actively under development.

Go to Settings > Webhooks in your repository (or GitLab instance settings), and add a new webhook:

  • URL: https://webhook-intake.<DATADOG_SITE>/api/v2/webhook/?dd-api-key=<API_KEY>, where <DATADOG_SITE> is your Datadog site (for example, datadoghq.com) and <API_KEY> is your Datadog API key.
  • Secret Token: leave blank
  • Trigger: Select Job events and Pipeline events.

To set custom env or service parameters, add more query parameters to the webhook URL: &env=<YOUR_ENV>&service=<YOUR_SERVICE_NAME>
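
As a sketch, a fully assembled webhook URL with hypothetical env and service values might look like the following (datadoghq.com stands in for your Datadog site, and <API_KEY> for your API key):

https://webhook-intake.datadoghq.com/api/v2/webhook/?dd-api-key=<API_KEY>&env=staging&service=gitlab-ci-main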

Set custom tags

To set custom tags on all the pipeline and job spans generated by the integration, add a URL-encoded tags query parameter to the webhook URL, with key:value pairs separated by commas. If a key:value pair contains a comma, surround it with quotes. For example, to add key1:value1,"key2: value with , comma",key3:value3, append the following string to the webhook URL:

&tags=key1%3Avalue1%2C%22key2%3A+value+with+%2C+comma%22%2Ckey3%3Avalue3
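
If you prefer not to hand-encode the value, a small shell sketch using Python's standard urllib module (a hypothetical helper, not part of the integration) produces the same encoded string:

python3 -c 'import sys, urllib.parse; print(urllib.parse.quote_plus(sys.argv[1]))' \
  'key1:value1,"key2: value with , comma",key3:value3'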

Integrate with Datadog Teams

To display and filter the teams associated with your pipelines, add team:<your-team> as a custom tag. The custom tag name must match your Datadog Teams team handle exactly.
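
For example, with the webhook approach and a hypothetical team handle of platform-ci, the extra query parameter would be the following; with the native integration, add the equivalent team:platform-ci line to the Tags field instead:

&tags=team%3Aplatform-ci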

Visualize pipeline data in Datadog

After the integration is successfully configured, the Pipelines and Pipeline Executions pages populate with data after the pipelines finish.

Note: The Pipelines page shows data for only the default branch of each repository.

Partial and downstream pipelines

In the Pipeline Executions page, you can use the filters below in the search bar:

  • Downstream Pipeline: possible values are true and false
  • Manually Triggered: possible values are true and false
  • Partial Pipeline: possible values are retry, paused, and resumed
The Pipeline executions page with Partial Pipeline:retry entered in the search query

These filters can also be applied through the facet panel on the left hand side of the page.

The facet panel with Partial Pipeline facet expanded and the value Retry selected, the Partial Retry facet expanded and the value true selected

Correlate infrastructure metrics to jobs

If you are using self-hosted GitLab runners, you can correlate jobs with the infrastructure that is running them. For this feature to work, the GitLab runner must have a tag of the form host:<hostname>. Tags can be added while registering a new runner. For existing runners, add tags by updating the runner's config.toml, or through the UI by going to Settings > CI/CD > Runners and editing the appropriate runner.
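
For example, the host tag can be passed directly when registering a new runner. This is a sketch only: the GitLab URL, registration token, and executor are placeholders, and the exact flags depend on your gitlab-runner version.

gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.example.com/" \
  --registration-token "<REGISTRATION_TOKEN>" \
  --executor "shell" \
  --tag-list "host:$(hostname)"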

After these steps, CI Visibility adds the hostname to each job. To see the metrics, click on a job span in the trace view. In the drawer, a new tab named Infrastructure appears which contains the host metrics.

View error messages for pipeline failures

Error messages are supported for GitLab versions 15.2.0 and above.

For failed GitLab pipeline executions, each error under the Errors tab within a specific pipeline execution displays a message associated with the error type from GitLab.


See the table below for the message and domain associated with each error type. Any unlisted error type results in the error message Job failed and the error domain unknown.

Error Type | Error Message | Error Domain
unknown_failure | Failed due to unknown reason | unknown
config_error | Failed due to error on CI/CD configuration file | user
external_validation_failure | Failed due to external pipeline validation | unknown
user_not_verified | The pipeline failed due to the user not being verified | user
activity_limit_exceeded | The pipeline activity limit was exceeded | provider
size_limit_exceeded | The pipeline size limit was exceeded | provider
job_activity_limit_exceeded | The pipeline job activity limit was exceeded | provider
deployments_limit_exceeded | The pipeline deployments limit was exceeded | provider
project_deleted | The project associated with this pipeline was deleted | provider
api_failure | API Failure | provider
stuck_or_timeout_failure | Pipeline is stuck or timed out | unknown
runner_system_failure | Failed due to runner system failure | provider
missing_dependency_failure | Failed due to missing dependency | unknown
runner_unsupported | Failed due to unsupported runner | provider
stale_schedule | Failed due to stale schedule | provider
job_execution_timeout | Failed due to job timeout | unknown
archived_failure | Archived failure | provider
unmet_prerequisites | Failed due to unmet prerequisite | unknown
scheduler_failure | Failed due to schedule failure | provider
data_integrity_failure | Failed due to data integrity | provider
forward_deployment_failure | Deployment failure | unknown
user_blocked | Blocked by user | user
ci_quota_exceeded | CI quota exceeded | provider
pipeline_loop_detected | Pipeline loop detected | user
builds_disabled | Build disabled | user
deployment_rejected | Deployment rejected | user
protected_environment_failure | Environment failure | provider
secrets_provider_not_found | Secret provider not found | user
reached_max_descendant_pipelines_depth | Reached max descendant pipelines | user
ip_restriction_failure | IP restriction failure | provider

Enable job log collection

The following GitLab versions support collecting job logs:

To enable collection of job logs:

  1. Enable the datadog_integration_logs_collection feature flag in your self-hosted GitLab instance or GitLab.com account (for self-hosted instances, see the example command after these steps). This allows you to see the Enable job logs collection checkbox on the Pipeline Setup page.
  2. Click Enable job logs collection and click Save changes.
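
For self-hosted GitLab, the flag can be enabled with the same Rails runner pattern used earlier for datadog_ci_integration. A sketch for an Omnibus installation; adapt the invocation for source or Kubernetes installations:

sudo gitlab-rails runner "Feature.enable(:datadog_integration_logs_collection)"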

Job logs are collected in Log Management and are automatically correlated with the GitLab pipeline in CI Visibility. Log files larger than one GiB are truncated.

Note: Logs are billed separately from CI Visibility.

For more information about processing job logs collected from the GitLab integration, see the Processors documentation.

Further reading