Datadog Software Composition Analysis (SCA) scans your repositories for open-source libraries and detects known security vulnerabilities before you ship to production.
Supported languages: C#, Go, Java, JavaScript, PHP, Python, Ruby
To get started:
The sections below cover the different ways to configure SCA for your repositories.
You can run SCA scans in two ways:
Analyze code directly in Datadog
For GitHub repositories, you can run Datadog SCA scans directly on Datadog infrastructure. To get started, navigate to Code Security settings.
Analyze code in your CI Pipelines
Run SCA by following instructions for your chosen CI provider below. Datadog SCA offers native support for:
You must scan your default branch at least once before results appear in Datadog Code Security.
Datadog SCA scans libraries in the following languages and requires a lockfile to report them:
| Language | Package Manager | Lockfile |
|---|---|---|
| C# | .NET | `packages.lock.json` |
| Go | mod | `go.mod` |
| JVM | Gradle | `gradle.lockfile` |
| JVM | Maven | `pom.xml` |
| Node.js | npm | `package-lock.json` |
| Node.js | pnpm | `pnpm-lock.yaml` |
| Node.js | yarn | `yarn.lock` |
| PHP | composer | `composer.lock` |
| Python | pip | `requirements.txt`, `Pipfile.lock` |
| Python | poetry | `poetry.lock` |
| Ruby | bundler | `Gemfile.lock` |
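Before enabling SCA, you can quickly check which of the supported lockfiles exist in a repository. A minimal sketch, run from the repository root (filenames taken from the table above):

```shell
# List which supported lockfiles SCA could pick up in the current directory
for f in packages.lock.json go.mod gradle.lockfile pom.xml \
         package-lock.json pnpm-lock.yaml yarn.lock composer.lock \
         requirements.txt Pipfile.lock poetry.lock Gemfile.lock; do
  if [ -f "$f" ]; then echo "found: $f"; fi
done
```

If nothing is printed, the repository has no lockfile SCA can report on, and scans will not surface library results for it.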
Datadog SCA supports all source code management providers, with native support for GitHub, GitLab, and Azure DevOps.
If GitHub is your source code management provider, you must configure a GitHub App using the GitHub integration tile and set up the source code integration to see inline code snippets and enable pull request comments.
When installing a GitHub App, the following permissions are required to enable certain features:
- Content: Read, which allows you to see code snippets displayed in Datadog
- Pull Request: Read & Write, which allows Datadog to add feedback for violations directly in your pull requests using pull request comments
- Checks: Read & Write, which allows you to create checks on SAST violations to block pull requests

If Azure DevOps is your source code management provider, you must request access to the closed Preview using the form above before you can begin installation. After being granted access, follow the instructions below to complete the setup process.
Note: Azure DevOps Server is not supported.
If you are an admin in your Azure portal, you can configure Entra apps to connect your tenant to Datadog.
To enable all Code Security features in Azure DevOps, you’ll need to use a Datadog API key to configure service hooks for your projects.
First, set your environment variables (note: the Datadog UI will fill these values out for you):
```shell
export AZURE_DEVOPS_TOKEN="..." # Client Secret Value
export DD_API_KEY="..."         # Datadog API Key
```
Then, replace the placeholders in the script below with your Datadog Site and Azure DevOps organization name to configure the necessary service hooks on your organization’s projects:
```shell
curl https://raw.githubusercontent.com/DataDog/azdevops-sci-hooks/refs/heads/main/setup-hooks.py > setup-hooks.py && chmod a+x ./setup-hooks.py
./setup-hooks.py --dd-site="<dd-site>" --az-devops-org="<org-name>"
```
If GitLab is your source code management provider, before you can begin installation, you must request access to the closed Preview using the form above. After being granted access, follow these instructions to complete the setup process.
If you are using another source code management provider, configure SCA to run in your CI pipelines using the `datadog-ci` CLI tool and upload the results to Datadog.
Uploading results to Datadog requires authentication. Configure the following environment variables:
| Name | Description | Required | Default |
|---|---|---|---|
| `DD_API_KEY` | Your Datadog API key. This key is created by your Datadog organization and should be stored as a secret. | Yes | |
| `DD_APP_KEY` | Your Datadog application key. This key, created by your Datadog organization, should include the `code_analysis_read` scope and be stored as a secret. | Yes | |
| `DD_SITE` | The Datadog site to send information to. | No | `datadoghq.com` |
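For example, in a local shell these variables can be set as follows. The values here are placeholders; real keys should come from your CI provider's secret store:

```shell
# Placeholder values -- store real keys as secrets in your CI provider
export DD_API_KEY="your-api-key"
export DD_APP_KEY="your-app-key"   # should include the code_analysis_read scope
export DD_SITE="datadoghq.com"     # this is also the default when unset
```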
There are two ways to run SCA scans from within your CI Pipelines:
You can run SCA scans automatically as part of your CI/CD workflows using built-in integrations for popular CI providers.
GitHub Actions
SCA can run as a job in your GitHub Actions workflows. The action provided below invokes Datadog’s recommended SBOM tool, Datadog SBOM Generator, on your codebase and uploads the results into Datadog.
Add the following code snippet in `.github/workflows/datadog-sca.yml`. Make sure to replace the `dd_site` attribute with the Datadog site you are using.
datadog-sca.yml
```yaml
on: [push]

name: Datadog Software Composition Analysis

jobs:
  software-composition-analysis:
    runs-on: ubuntu-latest
    name: Datadog SBOM Generation and Upload
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Check imported libraries are secure and compliant
        id: datadog-software-composition-analysis
        uses: DataDog/datadog-sca-github-action@main
        with:
          dd_api_key: ${{ secrets.DD_API_KEY }}
          dd_app_key: ${{ secrets.DD_APP_KEY }}
          dd_site: "datadoghq.com"
```
Related GitHub Actions
Datadog Static Code Analysis (SAST) analyzes your first-party code. Static Code Analysis can be set up using the `datadog-static-analyzer-github-action` GitHub action.
Azure DevOps Pipelines
To add a new pipeline in Azure DevOps, go to Pipelines > New Pipeline, select your repository, and then create/select a pipeline.
Add the following content to your Azure DevOps pipeline YAML file:
datadog-sca.yml
```yaml
trigger:
  branches:
    include:
      # Optionally specify a specific branch to trigger on when merging
      - "*"

pr:
  branches:
    include:
      - "*"

variables:
  - group: "Datadog"

jobs:
  - job: DatadogSoftwareCompositionAnalysis
    displayName: "Datadog Software Composition Analysis"
    steps:
      - script: |
          npm install -g @datadog/datadog-ci
          export DATADOG_OSV_SCANNER_URL="https://github.com/DataDog/datadog-sbom-generator/releases/latest/download/datadog-sbom-generator_linux_amd64.zip"
          mkdir -p /tmp/datadog-sbom-generator
          curl -L -o /tmp/datadog-sbom-generator/datadog-sbom-generator.zip $DATADOG_OSV_SCANNER_URL
          unzip /tmp/datadog-sbom-generator/datadog-sbom-generator.zip -d /tmp/datadog-sbom-generator
          chmod 755 /tmp/datadog-sbom-generator/datadog-sbom-generator
          /tmp/datadog-sbom-generator/datadog-sbom-generator scan --output=/tmp/sbom.json .
          datadog-ci sbom upload /tmp/sbom.json
        env:
          DD_APP_KEY: $(DD_APP_KEY)
          DD_API_KEY: $(DD_API_KEY)
          DD_SITE: datadoghq.com
```
For all other providers, use the customizable script in the section below to run SCA scans and upload results to Datadog.
If you use a different CI provider or want more control, you can run SCA scans using a customizable script. This approach lets you manually install and run the scanner, then upload results to Datadog from any environment.
Prerequisites:
```shell
# Set the Datadog site to send information to
export DD_SITE="..." # For example, datadoghq.com

# Install dependencies
npm install -g @datadog/datadog-ci

# Download the latest Datadog SBOM Generator:
# https://github.com/DataDog/datadog-sbom-generator/releases
DATADOG_SBOM_GENERATOR_URL=https://github.com/DataDog/datadog-sbom-generator/releases/latest/download/datadog-sbom-generator_linux_amd64.zip

# Install Datadog SBOM Generator
mkdir /datadog-sbom-generator
curl -L -o /datadog-sbom-generator/datadog-sbom-generator.zip $DATADOG_SBOM_GENERATOR_URL
unzip /datadog-sbom-generator/datadog-sbom-generator.zip -d /datadog-sbom-generator
chmod 755 /datadog-sbom-generator/datadog-sbom-generator

# Run Datadog SBOM Generator to scan your dependencies
/datadog-sbom-generator/datadog-sbom-generator scan --output=/tmp/sbom.json /path/to/repository

# Upload results to Datadog
datadog-ci sbom upload /tmp/sbom.json
```
Datadog recommends using the Datadog SBOM generator, but it is also possible to ingest a third-party SBOM.
You can upload SBOMs generated by other tools if they meet these requirements:

- Components are of type `library`
- Components include a `purl` attribute

Third-party SBOM files are uploaded to Datadog using the `datadog-ci` command.
You can use the following command to upload your third-party SBOM. Ensure the environment variables `DD_API_KEY`, `DD_APP_KEY`, and `DD_SITE` are set to your API key, application key, and Datadog site, respectively.

```shell
datadog-ci sbom upload /path/to/third-party-sbom.json
```
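Before uploading, a rough sanity check can catch files missing the fields above. This is only a grep-level sketch assuming a CycloneDX-style JSON SBOM, not a full validator, and the file path is a placeholder:

```shell
# Quick pre-upload check: both "library" component types and
# "purl" attributes should appear somewhere in the file
sbom="/path/to/third-party-sbom.json"
if grep -q '"library"' "$sbom" && grep -q '"purl"' "$sbom"; then
  echo "SBOM has library components with purl attributes"
else
  echo "SBOM is missing required fields" >&2
fi
```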
Datadog associates static code and library scan results with relevant services by using the following mechanisms:
Schema version `v3` and later of the Software Catalog allows you to map the code location for your service. The `codeLocations` section specifies the location of the repository containing the code and its associated paths. The `paths` attribute is a list of globs that should match paths in the repository.
entity.datadog.yaml
```yaml
apiVersion: v3
kind: service
metadata:
  name: my-service
datadog:
  codeLocations:
    - repositoryURL: https://github.com/myorganization/myrepo.git
      paths:
        - path/to/service/code/**
```
entity.datadog.yaml
```yaml
apiVersion: v3
kind: service
metadata:
  name: my-service
datadog:
  codeLocations:
    - repositoryURL: https://github.com/myorganization/myrepo.git
      paths:
        - "**/*"
```
Datadog detects file usage in additional products such as Error Tracking and associates files with the runtime service. For example, if a service called `foo` has a log entry or a stack trace containing a file with the path `/modules/foo/bar.py`, the file `/modules/foo/bar.py` is associated with the service `foo`.
Datadog detects service names in paths and repository names, and associates the file with the service if a match is found.
For a repository match, if there is a service called `myservice` and the repository URL is `https://github.com/myorganization/myservice.git`, then `myservice` is associated with all files in the repository.

If no repository match is found, Datadog attempts to find a match in the path of the file. If there is a service named `myservice` and the path is `/path/to/myservice/foo.py`, the file is associated with `myservice` because the service name is part of the path. If two services are present in the path, the service name closest to the filename is selected.

These mechanisms are tried in order; once one succeeds, no further mapping attempts are made.
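As an illustration of the path-based rule (the path and service names here are hypothetical), two service names can match the same path, and the one nearer the filename wins:

```shell
# Hypothetical example: services "ecommerce" and "myservice"
# both appear as segments of the same file path
path="/domains/ecommerce/apps/myservice/foo.py"
for service in ecommerce myservice; do
  case "$path" in
    */"$service"/*) echo "candidate: $service" ;;
  esac
done
# Both are candidates; Datadog selects myservice, the segment closest to foo.py
```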
Datadog automatically associates the team attached to a service when a violation or vulnerability is detected. For example, if the file `domains/ecommerce/apps/myservice/foo.py` is associated with `myservice`, the team `myservice` is associated with any violation detected in this file.
If no services or teams are found, Datadog uses the `CODEOWNERS` file in your repository. The `CODEOWNERS` file determines which team owns a file in your Git provider.
Note: You must accurately map your Git provider teams to your Datadog teams for this feature to function properly.
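A hypothetical `CODEOWNERS` fragment is shown below; the paths and team names are illustrative only, not taken from this document:

```
# Each line maps a path pattern to the Git provider team that owns it
/modules/payments/   @myorganization/payments-team
*.py                 @myorganization/python-guild
```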
More about SCA:
Other Code Security scanning for your repositories: