---
title: Search and Manage CI Pipelines
description: Learn how to search for and manage your CI pipelines.
breadcrumbs: Docs > Continuous Integration Visibility > Search and Manage CI Pipelines
---

# Search and Manage CI Pipelines

{% callout %}
# Important note for users on the following Datadog site: app.ddog-gov.com

{% alert level="danger" %}
This product is not supported for your selected [Datadog site](https://docs.datadoghq.com/getting_started/site).
{% /alert %}

{% /callout %}

## Overview{% #overview %}

The [Pipelines page](https://app.datadoghq.com/ci/pipelines) is useful for developers who want to keep an eye on the build pipeline for their service.

{% image
   source="https://docs.dd-static.net/images/continuous_integration/pipelines.1812d3ec828d127c467d13f23fb44891.png?auto=format"
   alt="" /%}

This page answers the following questions:

- Is the pipeline for your service performant and reliable, especially on the default branch?
- If not, what's the root cause?

You can access high-level aggregates and trends, including:

- An overview of the health of the whole build system, with aggregated stats for pipeline runs and branches.
- A way to quickly spot and fix immediate, urgent issues such as broken pipelines deploying to production.
- How each pipeline has performed over time, with results and trends.
- A breakdown of where time is spent in each build stage, over time, so you can focus your improvement efforts where they make the biggest difference.

## Search for pipelines{% #search-for-pipelines %}

To see your pipelines, navigate to [**Software Delivery** > **CI Visibility** > **CI Pipeline List**](https://app.datadoghq.com/ci/pipelines).

The [Pipelines page](https://app.datadoghq.com/ci/pipelines) shows aggregate stats for the default branch of each pipeline over the selected time frame, as well as the status of the latest pipeline execution. Use this page to see all your pipelines and get a quick view of their health. Only pipelines with Git information associated with the default branch (usually named `main` or `prod`), as well as pipelines without any Git information, are displayed on this page.

The metrics shown include build frequency, failure rate, median duration, and change in median duration on both an absolute and relative basis. This information reveals which pipelines are high-usage and potentially high-resource consumers, or are experiencing regressions. The last build result, duration, and last runtime show you the effect of the latest commit.

You can filter the page by pipeline name to see the pipelines you're most concerned with. Click a pipeline that is slow or failing to dig into details that show which commit might have introduced the performance regression or build error. If you are using [Datadog Teams](https://docs.datadoghq.com/account_management/teams/), you can filter for specific pipelines associated with your team using [custom tags](https://docs.datadoghq.com/continuous_integration/pipelines/custom_tags_and_measures/?tab=linux) that match team handles.
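For example, if your pipelines are tagged with a custom `team` tag that matches your team handle, a filter query might look like the following. This is an illustrative sketch: the `team` tag and its `backend` value are assumptions, and the exact facet names available depend on your setup.

```text
@ci.provider.name:gitlab @team:backend
```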

## Pipeline details and executions{% #pipeline-details-and-executions %}

Click a specific pipeline to see the *Pipeline Details* page, which provides views of the data for the pipeline you selected over a specified time frame.

{% image
   source="https://docs.dd-static.net/images/ci/pipeline_branch_overview_updated.cbe9d43c712e0e2361f21ec1bdb7ba36.png?auto=format"
   alt="Pipeline Details page for a single pipeline" /%}

Get insights on the selected pipeline such as total and failed executions over time, build duration percentiles, error rates, and a breakdown of total time spent by stage. There are also summary tables for stages and jobs so you can quickly sort them by duration, percentage of overall execution time, or failure rate.

The pipeline execution list shows all the times that pipeline (or its stages or jobs) ran during the selected time frame, for the selected branch. Use the facets on the left side to filter the list to exactly the pipelines, stages, or jobs you want to see.
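As a sketch, a query that narrows the execution list to failed job-level executions on the default branch might look like the following. The facet names follow CI Visibility's default conventions, but the pipeline name is a placeholder; adjust both to your environment.

```text
ci_level:job @ci.status:error @git.branch:main @ci.pipeline.name:"my-pipeline"
```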

### Highlight critical path{% #highlight-critical-path %}

To highlight the critical path on the trace, select the `Critical path` checkbox on the pipeline execution page.

The critical path highlights the spans you need to speed up to reduce the overall execution time of your pipeline. If a CI job is on the critical path, it is part of the longest path through the trace in terms of execution time. Reducing the pipeline's overall duration requires speeding up jobs on the critical path; optimizing jobs off the critical path does not shorten the pipeline.

Use [this guide](https://docs.datadoghq.com/continuous_integration/guides/identify_highest_impact_jobs_with_critical_path) to identify the CI jobs on the critical path and determine which jobs to prioritize to reduce the overall duration of your CI pipelines.

### Explore connections to services, resources, and network events{% #explore-connections-to-services-resources-and-network-events %}

Click one of the executions to open the pipeline execution view and see the flame graph or span list for the pipeline and its stages. The *Executions (n)* list on the left side gives you quick access to the data for each retry of the pipeline for the same commit.

Click the CI provider link (`gitlab-ci gitlab.pipeline > documentation` in the following image) to investigate the Resource, Service, or Analytics page for the pipeline, stage, or job specifically. You can also find complete tags information and links to network monitoring events.

{% image
   source="https://docs.dd-static.net/images/ci/ci-pipeline-execution.ec78a1868aff1284fdabcc43ef55bb6a.png?auto=format"
   alt="Pipeline execution view with trace info and flamegraph display" /%}

### Explore connections to logs{% #explore-connections-to-logs %}

If job log collection is supported and enabled for the CI provider, related log events can be found in the *Logs* tab of the pipeline execution view.

Job log collection is supported for the following providers:

- [AWS CodePipeline](https://docs.datadoghq.com/continuous_integration/pipelines/awscodepipeline/#collect-job-logs)
- [Azure](https://docs.datadoghq.com/continuous_integration/pipelines/azure/#collect-job-logs)
- [CircleCI](https://docs.datadoghq.com/continuous_integration/pipelines/circleci/#enable-log-collection)
- [GitHub Actions](https://docs.datadoghq.com/continuous_integration/pipelines/github/#enable-log-collection)
- [GitLab](https://docs.datadoghq.com/continuous_integration/pipelines/gitlab/#enable-job-log-collection)
- [Jenkins](https://docs.datadoghq.com/continuous_integration/pipelines/jenkins#enable-job-log-collection)

### CI jobs failure analysis based on relevant logs{% #ci-jobs-failure-analysis-based-on-relevant-logs %}

CI Visibility uses a large language model (LLM) to generate enhanced error messages and categorize them with a domain and subdomain, based on the relevant logs collected from every failed CI job.

Use [CI jobs failure analysis](https://docs.datadoghq.com/continuous_integration/guides/use_ci_jobs_failure_analysis/) to identify the most common root causes of failure for your CI jobs.
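Once failure analysis has categorized errors, you can filter failed jobs by their assigned domain. As an illustrative sketch (the `@error.domain` facet and the `network` value are assumptions; check the failure analysis guide for the facets available in your account):

```text
ci_level:job @ci.status:error @error.domain:network
```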

## Further reading{% #further-reading %}

- [Search and filter pipeline executions](https://docs.datadoghq.com/continuous_integration/explorer)
- [Identify CI Jobs on the Critical Path to reduce the Pipeline Duration](https://docs.datadoghq.com/continuous_integration/guides/identify_highest_impact_jobs_with_critical_path/)
- [Use CI jobs failure analysis to identify root causes in failed jobs](https://docs.datadoghq.com/continuous_integration/guides/use_ci_jobs_failure_analysis/)
