Storage Monitoring for Amazon S3, Google Cloud Storage, and Azure Blob Storage provides deep, prefix-level analytics to help you understand exactly how your storage is being used. With Storage Monitoring you can:
Pinpoint where spend is coming from in your buckets: Break down storage costs by prefix so you know which workloads, teams, or environments drive growth.
Identify cold data: Spot buckets with rarely accessed prefixes, and move cold data to lower-cost tiers.
Tune retention and lifecycle rules with data: Read/write and age metrics show when objects were last used, so you can shift unused prefixes to Glacier, Intelligent-Tiering, or other low-cost classes.
Monitor data freshness: Age metrics show how recently each prefix was updated, so you can confirm that backups and other time-sensitive data are landing in prefixes when they should.
You can access Storage Monitoring in Datadog by navigating to Infrastructure > Storage Monitoring.
This guide explains how to configure Storage Monitoring in Datadog for your Amazon S3 buckets, Google Cloud Storage buckets, and Azure storage accounts. Select your cloud storage service to access setup instructions.
Setup for Amazon S3
The fastest way to configure Storage Monitoring is through the Enable Buckets page, where you can enable S3 inventory and configure monitoring for multiple buckets at once.
As an alternative, you can set up S3 inventory manually or with Terraform and enable Storage Monitoring using your existing setup. For details, see Existing S3 Inventory.
Enable Amazon S3 Integration and Resource collection for all the AWS accounts you want to monitor.
Enable S3 Inventory to get prefix level monitoring.
- Source bucket: The S3 bucket you want to monitor with Storage Monitoring
- Destination bucket: Used to store inventory reports (one per AWS region; can be reused across accounts)
Add the following permissions to your Datadog IAM policy so Datadog can enable S3 inventory on your source buckets and read the generated reports from the destination buckets:
s3:PutInventoryConfiguration
s3:GetObject (scoped to the destination bucket(s))
s3:ListBucket (scoped to the destination bucket(s))
Example IAM policy:
{"Version":"2012-10-17","Statement":[{"Sid":"AllowDatadogToEnableInventory","Effect":"Allow","Action":["s3:PutInventoryConfiguration"// If you want Datadog to enable S3 inventory through Datadog UI
],"Resource":"arn:aws:s3:::storage-monitoring-source-bucket"// source bucket(s)
},{// destination bucket
"Sid":"AllowDatadogToReadInventoryFiles","Effect":"Allow","Action":["s3:GetObject","s3:ListBucket"],"Resource":["arn:aws:s3:::storage-monitoring-inventory-destination","arn:aws:s3:::storage-monitoring-inventory-destination/*"// destination bucket(s)/prefix
]}]}
On the Enable it for me tab, select the regions or accounts you want to enable and assign a destination bucket per region or per account to store S3 Inventory reports. You can either use an existing bucket or create one in AWS.
The destination buckets must allow the source buckets to write inventory data. See Creating a destination bucket policy in the AWS documentation for details.
Complete the inventory configuration. The first inventory report may take up to 24 hours to generate.
Enable S3 Access Logs for prefix-level request and latency metrics: These additional steps add prefix-level access metrics, including request counts, server-side latency, and cold data identification for cost optimization:
Set up the Datadog Lambda Forwarder (if not already configured):
Return to Infrastructure > Storage Monitoring to see any new buckets. AWS generates the first inventory report within 24 hours of setup; data from your buckets becomes visible after this period.
You can also set up Storage Monitoring using the provided CloudFormation templates. This process involves two steps:
Step 1: Configure inventory generation
This template configures your existing S3 bucket to generate inventory reports, which Datadog uses to generate detailed metrics about your bucket prefixes.
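For intuition, the prefix metrics Datadog derives are aggregations over the rows of these inventory reports. The sketch below uses made-up CSV rows (real inventory files carry more columns, in the order you select) to show how per-prefix size rollups fall out of an inventory file:

```shell
# Sketch with hypothetical data: sum object sizes by top-level prefix
# from a simplified inventory CSV (bucket, key, size).
cat > /tmp/inventory-sample.csv <<'EOF'
"my-bucket","logs/2024/app.log","1048576"
"my-bucket","logs/2024/db.log","524288"
"my-bucket","backups/full.tar","2097152"
EOF

awk -F',' '{
  key = $2; size = $3
  gsub(/"/, "", key); gsub(/"/, "", size)   # strip CSV quoting
  split(key, parts, "/")                    # parts[1] = top-level prefix
  sizes[parts[1]] += size
}
END { for (p in sizes) printf "%s %d\n", p, sizes[p] }' /tmp/inventory-sample.csv
# Output (order may vary): logs 1572864, backups 2097152
```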
In AWS CloudFormation, click Create stack in the top right corner and select With existing resources (import resources).
In the Specify template step, select Upload a template file.
Click Choose file and select the source-bucket-inventory-cfn.yaml file, then click Next.
Enter the bucket name you want AWS to start generating inventories for, and click Next.
Fill in the required parameters:
DestinationBucketName: The bucket for storing inventory files. Note: Use only one destination bucket for all inventory files generated in an AWS account.
SourceBucketName: The bucket you want to monitor and start generating inventory files for
Optional parameters:
SourceBucketPrefix: (Optional) Limit monitoring to a specific path in the source bucket
DestinationBucketPrefix: Specific path within the destination bucket. Ensure this path doesn’t include trailing slashes (/)
Click Next.
Wait for AWS to locate your source bucket, and click Import resources in the bottom right corner.
Note: This CloudFormation template can be rolled back, but rolling back doesn't delete the created resources. This ensures the existing bucket doesn't get deleted. You can manually delete the inventory configurations from the Management tab in the bucket view.
Note: Review Amazon S3 pricing for costs related to inventory generation.
Step 2: Configure required permissions
This template creates two IAM policies:
A policy to allow Datadog to read inventory files from the destination bucket
A policy to allow your source bucket to write inventory files to the destination bucket
In AWS CloudFormation, click Create stack in the top right corner and select With new resources (standard).
In the Specify template step, select Upload a template file.
Click Choose file and select the cloud-inventory-policies-cfn.yaml file, then click Next.
Fill in the required parameters:
DatadogIntegrationRole: Your Datadog AWS integration role name
DestinationBucketName: The name of the bucket that receives your inventory files. Note: Use only one destination bucket for all inventory files generated in an AWS account.
SourceBucketName: The name of the bucket you want to start generating inventory files for
Optional parameters:
SourceBucketPrefix: This parameter limits the inventory generation to a specific prefix in the source bucket
DestinationBucketPrefix: If you want to reuse an existing bucket as the destination, this parameter allows the inventory files to be shipped to a specific prefix in that bucket. Ensure that any prefixes do not include trailing slashes (/)
On the Review and create step, verify the parameters have been entered correctly, and click Submit.
Finish setting up S3 buckets for Storage Monitoring
After completing the CloudFormation setup, enable buckets for Storage Monitoring from the Datadog UI:
The destination bucket can be your source bucket, but for security and logical separation, many organizations use a separate bucket.
The optional_fields section is required for Datadog prefix metrics and cost optimization insights like duplicate objects.
Finish setting up S3 buckets for Storage Monitoring
After the inventory configuration is set up and your inventory files begin appearing in the destination bucket, enable buckets for Storage Monitoring from the Datadog UI:
In Step 2: Enable S3 Inventory to get prefix level monitoring, select I enabled it myself
Choose the destination buckets that contain the inventory files for the source buckets you want to monitor and click Confirm
Use modules for complex setups
If you need to manage multiple buckets, complex inventory policies, encryption, or cross-account setups, you can use the terraform-aws-s3-bucket module.
Troubleshooting
S3 Inventory files are delivered daily, and may take up to 24 hours to appear after setup.
Ensure IAM permissions allow S3 to write inventory files to your destination bucket.
If cross-account access is needed, confirm that the inventory destination prefix (datadog-inventory/ in the example) is correct and accessible to Datadog.
To manually set up the required Amazon S3 Inventory and related configuration, follow these steps:
Step 1: Create a destination bucket
Create an S3 bucket to store your inventory files. This bucket acts as the central location for inventory reports. Note: Use only one destination bucket for all inventory files generated in an AWS account.
Create a prefix within the destination bucket (optional).
Step 2: Configure the bucket and integration role policies
Ensure the Datadog AWS integration role has s3:GetObject and s3:ListBucket permissions on the destination bucket. These permissions allow Datadog to read the generated inventory files.
Follow the steps in the Amazon S3 user guide to add a bucket policy to your destination bucket allowing write access (s3:PutObject) from your source buckets.
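The destination bucket policy AWS requires has a standard shape; a sketch of applying it with the AWS CLI, where the bucket names and account ID are placeholders you must replace:

```shell
# Sketch: allow S3 Inventory to write reports from a source bucket into the
# destination bucket. "my-source-bucket", "my-destination-bucket", and the
# account ID 123456789012 are placeholders.
cat > /tmp/inventory-dest-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "InventoryDestinationBucketPolicy",
      "Effect": "Allow",
      "Principal": { "Service": "s3.amazonaws.com" },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-destination-bucket/*",
      "Condition": {
        "ArnLike": { "aws:SourceArn": "arn:aws:s3:::my-source-bucket" },
        "StringEquals": {
          "aws:SourceAccount": "123456789012",
          "s3:x-amz-acl": "bucket-owner-full-control"
        }
      }
    }
  ]
}
EOF

aws s3api put-bucket-policy --bucket my-destination-bucket \
  --policy file:///tmp/inventory-dest-policy.json
```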
Object versions: Datadog recommends selecting Current Versions Only
Destination: Select the common destination bucket for inventory files in your AWS account. For example, if the bucket is named destination-bucket, enter s3://destination-bucket. Note: If you want to use a prefix on the destination bucket, append it to this path.
Frequency: Datadog recommends choosing Daily. This setting determines how often your prefix-level metrics are updated in Datadog
Output format: CSV
Status: Enabled
Server-side encryption: Don’t specify an encryption key
Select all the available Additional metadata fields
Note: Review Amazon S3 pricing for costs related to inventory generation.
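The console settings above can also be applied with the AWS CLI; a sketch, where the bucket names and the "datadog" configuration ID are placeholders:

```shell
# Sketch: enable a daily CSV inventory matching the settings above.
# Bucket names and the "datadog" inventory configuration ID are placeholders.
aws s3api put-bucket-inventory-configuration \
  --bucket my-source-bucket \
  --id datadog \
  --inventory-configuration '{
    "Id": "datadog",
    "IsEnabled": true,
    "IncludedObjectVersions": "Current",
    "Schedule": { "Frequency": "Daily" },
    "Destination": {
      "S3BucketDestination": {
        "Bucket": "arn:aws:s3:::my-destination-bucket",
        "Format": "CSV"
      }
    },
    "OptionalFields": ["Size", "LastModifiedDate", "StorageClass", "ETag",
      "IsMultipartUploaded", "ReplicationStatus", "EncryptionStatus",
      "IntelligentTieringAccessTier"]
  }'
```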
Post-setup steps
After the inventory configuration is set up and your inventory files begin appearing in the destination bucket, enable buckets for Storage Monitoring from the Datadog UI:
In Step 2: Enable S3 Inventory to get prefix level monitoring, select I enabled it myself
Choose the destination buckets that contain the inventory files for the source buckets you want to monitor and click Confirm
If you have already configured S3 Inventory for the buckets you want to monitor, enable buckets for Storage Monitoring from the Datadog UI:
Navigate to Storage Monitoring > Enable Buckets
In Step 2: Enable S3 Inventory to get prefix level monitoring, select I enabled it myself
Choose the destination buckets that contain the inventory files for the source buckets you want to monitor and click Confirm
Validation
To verify your setup:
Wait for the first inventory report to generate (up to 24 hours for daily inventories).
Navigate to Infrastructure > Storage Monitoring and confirm the buckets you configured appear in the explorer list when "Monitored buckets" is selected.
Troubleshooting
If you don’t see data for buckets you set up for Storage Monitoring:
Check the destination bucket for inventory files.
Confirm the Datadog integration can access the files: ensure s3:GetObject and s3:ListBucket permissions for the destination buckets are set on the Datadog AWS Integration Role.
If you’re still encountering issues, contact Datadog with your bucket details, AWS account ID, and Datadog org name.
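Two quick AWS CLI checks can narrow down where the pipeline is stuck (bucket names are placeholders):

```shell
# Is the inventory configuration actually present on the source bucket?
aws s3api list-bucket-inventory-configurations --bucket my-source-bucket

# Are inventory files landing in the destination bucket?
aws s3 ls s3://my-destination-bucket/ --recursive | head -20
```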
Setup for Google Cloud Storage
The process involves the following steps:
Step 1: Install the Google Cloud integration and enable resource collection
To collect Google Cloud Storage metrics from your Google Cloud project, install the Google Cloud integration in Datadog. Enable Resource Collection for the project containing the buckets you want to monitor. Resource Collection allows Datadog to associate your buckets’ labels with the metrics collected through storage monitoring.
Note: While you can disable specific metric namespaces, keep the Cloud Storage namespace (gcp.storage) enabled.
After enabling the Storage Insights API, a project-level service agent is created automatically with the following format: service-PROJECT_NUMBER@gcp-sa-storageinsights.iam.gserviceaccount.com
The service agent requires these IAM roles:
roles/storage.insightsCollectorService on the source bucket (includes storage.buckets.getObjectInsights and storage.buckets.get permissions)
roles/storage.objectCreator on the destination bucket (includes the storage.objects.create permission)
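Granting these two roles to the service agent can be sketched with gcloud, where PROJECT_NUMBER, SOURCE_BUCKET, and DESTINATION_BUCKET are placeholders:

```shell
# Placeholders: PROJECT_NUMBER, SOURCE_BUCKET, DESTINATION_BUCKET.
SA="serviceAccount:service-PROJECT_NUMBER@gcp-sa-storageinsights.iam.gserviceaccount.com"

# Allow the Storage Insights service agent to read insights on the source bucket.
gcloud storage buckets add-iam-policy-binding gs://SOURCE_BUCKET \
  --member="$SA" --role="roles/storage.insightsCollectorService"

# Allow it to write inventory reports to the destination bucket.
gcloud storage buckets add-iam-policy-binding gs://DESTINATION_BUCKET \
  --member="$SA" --role="roles/storage.objectCreator"
```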
Step 4: Create an inventory report configuration
You can create an inventory report configuration in multiple ways. The quickest methods use the Google Cloud CLI or Terraform templates. Regardless of the method, ensure the configuration:
Includes these metadata fields: "bucket", "name", "project", "size", "updated", "storageClass"
Generates CSV reports with '\n' as the delimiter and ',' as the separator
Uses this destination path format: <BUCKET>/{{date}}, where <BUCKET> is the name of the monitored bucket
Copy the following Terraform template, substitute the necessary arguments, and apply it in the Google Cloud project that contains your bucket.
Terraform configuration for inventory reports
locals {
  source_bucket      = "" # The name of the bucket you want to monitor
  destination_bucket = "" # The bucket where inventory reports are written
  frequency          = "" # Possible values: Daily, Weekly (report generation frequency)
  location           = "" # The location of your source and destination buckets
}

data "google_project" "project" {
}

resource "google_storage_insights_report_config" "config" {
  display_name = "datadog-storage-monitoring"
  location     = local.location

  frequency_options {
    frequency = local.frequency
    start_date {
      day   = "" # Fill in the day
      month = "" # Fill in the month
      year  = "" # Fill in the year
    }
    end_date {
      day   = "" # Fill in the day
      month = "" # Fill in the month
      year  = "" # Fill in the year
    }
  }

  csv_options {
    record_separator = "\n"
    delimiter        = ","
    header_required  = false
  }

  object_metadata_report_options {
    metadata_fields = ["bucket", "name", "project", "size", "updated", "storageClass"]
    storage_filters {
      bucket = local.source_bucket
    }
    storage_destination_options {
      bucket           = google_storage_bucket.report_bucket.name
      destination_path = "${local.source_bucket}/{{date}}"
    }
  }

  depends_on = [google_storage_bucket_iam_member.admin]
}

resource "google_storage_bucket" "report_bucket" {
  name                        = local.destination_bucket
  location                    = local.location
  force_destroy               = true
  uniform_bucket_level_access = true
}

resource "google_storage_bucket_iam_member" "admin" {
  bucket = google_storage_bucket.report_bucket.name
  role   = "roles/storage.admin"
  member = "serviceAccount:service-${data.google_project.project.number}@gcp-sa-storageinsights.iam.gserviceaccount.com"
}
You can allow Datadog to handle the inventory report configuration by providing the proper permissions to your service account:
Navigate to IAM & Admin > Service Accounts
Find your Datadog service account and add the roles/storageinsights.admin role
Navigate to the source bucket you want to monitor and grant these roles:
roles/storage.insightsCollectorService
roles/storage.objectViewer
Navigate to the destination bucket and grant these roles:
roles/storage.objectCreator
roles/storage.insightsCollectorService
Alternatively, you can create a custom role specifically for Datadog with these required permissions:
After granting the necessary permissions, Datadog can create the inventory report configuration with your setup details.
Step 5: Add the Storage Object Viewer role to your Datadog service account
Grant Datadog permission to access and extract the generated inventory reports from Google Cloud. Grant this permission on the destination bucket where the inventory reports are stored.
Select the destination bucket for your inventory reports
In the bucket details page, click the Permissions tab
Under Permissions, click Grant Access to add a new principal
Principal: Enter the Datadog Service Account email
Before running the script, set your shell environment to Bash and replace the various placeholder inputs with the correct values:
<CLIENT_ID>: The client ID of an App Registration already set up using the Datadog Azure integration
<SUBSCRIPTION_ID>: The subscription ID of the Azure subscription containing the storage accounts
<COMMA_SEPARATED_STORAGE_ACCOUNT_NAMES>: A comma-separated list of the storage accounts you want to monitor (for example, storageaccount1,storageaccount2)
For each storage account you want to monitor, follow all of the steps below:
Create a blob inventory policy
In the Azure portal, navigate to your Storage Account.
Go to Data management > Blob inventory.
Click Add.
Configure the policy:
Name: datadog-storage-monitoring
Destination container:
Click Create new, and enter the name datadog-storage-monitoring.
Object type to inventory: Blob
Schedule: Daily
Blob types: Select Block blobs, Append blobs, and Page blobs.
Subtypes: Select Include blob versions
Schema fields: Select All, or ensure that at least the following are selected:
Name
Access tier
Last modified
Content length
Server encrypted
Current version status
Version ID
Exclude prefix: datadog-storage-monitoring
Click Add.
Add the role assignment
In the Azure portal, navigate to your Storage Account.
Go to Data storage > Containers.
Click on the datadog-storage-monitoring container.
Click on Access control (IAM) in the left-hand menu.
Click Add > Add role assignment.
Fill out the role assignment:
Role: Select Storage Blob Data Reader. Click Next.
Assign access to: User, group, or service principal.
Members: Click + Select members and search for your App Registration by its name and select it.
Note: This should be an App Registration set up in the Datadog Azure integration. Note its Client ID for later.
Click Review + assign.
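The same role assignment can be scripted with the Azure CLI; a sketch, where the subscription, resource group, and storage account values in angle brackets are placeholders, and <CLIENT_ID> is the client ID of the App Registration described above:

```shell
# Sketch: grant Storage Blob Data Reader on the inventory container.
# All angle-bracket values are placeholders.
az role assignment create \
  --assignee "<CLIENT_ID>" \
  --role "Storage Blob Data Reader" \
  --scope "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.Storage/storageAccounts/<STORAGE_ACCOUNT>/blobServices/default/containers/datadog-storage-monitoring"
```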
Post-Installation
After you finish the steps above, fill out the post-setup form.
Further reading
Additional helpful documentation, links, and articles: