---
isPrivate: true
title: Packs
description: Learn more about Observability Pipelines Packs
breadcrumbs: Docs > Observability Pipelines > Packs
---

# Packs

{% callout %}
# Important note for users on the following Datadog site: app.ddog-gov.com

{% alert level="danger" %}
This product is not supported for your selected [Datadog site](https://docs.datadoghq.com/getting_started/site).
{% /alert %}

{% /callout %}

## Overview{% #overview %}

{% image
   source="https://datadog-docs.imgix.net/images/observability_pipelines/packs/packs.b87a52886c5259886323ee2a47dcfc00.png?auto=format"
   alt="The packs section of Observability Pipelines" /%}

When setting up a pipeline to send logs from a specific source to Observability Pipelines, you often need to decide how to process and manage those logs.

Questions such as the following might come up:

- Which logs from this source are important?
- Which logs can safely be dropped?
- Should repetitive logs be sampled?
- Which fields should be parsed or formatted for the destination?

Making these decisions typically requires coordination across multiple teams and detailed knowledge of each log source.

Observability Pipelines Packs provide predefined configurations to help you make these decisions quickly and consistently. Packs apply Datadog-recommended best practices for specific log sources such as Akamai, AWS CloudTrail, Cloudflare, Fastly, Palo Alto Firewall, and Zscaler.

### What Packs do{% #what-packs-do %}

Each Pack includes a source-specific configuration that defines:

- **Fields that can safely be removed** to reduce payload size
- **Logs that can be dropped**, such as duplicate events or health checks
- **Logs that should be retained or parsed**, such as errors or security detections
- **Formatting and normalization rules** to align logs across different destinations and environments

By using Packs, you can apply consistent parsing, filtering, and routing logic for each log source without creating configurations manually.

### Why use Packs{% #why-use-packs %}

Packs help teams:

- **Reduce ingestion volume and costs** by filtering or sampling repetitive, low-value events
- **Maintain consistency** in parsing and field mapping across environments and destinations
- **Accelerate setup** by applying ready-to-use configurations for common sources

## Packs{% #packs %}

These packs are available:

- [Akamai CDN](https://docs.datadoghq.com/observability_pipelines/packs/akamai_cdn/)
- [Amazon VPC Flow Logs](https://docs.datadoghq.com/observability_pipelines/packs/amazon_vpc_flow_logs/)
- [AWS Application Load Balancer Logs](https://docs.datadoghq.com/observability_pipelines/packs/aws_alb/)
- [AWS CloudFront](https://docs.datadoghq.com/observability_pipelines/packs/amazon_cloudfront/)
- [AWS CloudTrail](https://docs.datadoghq.com/observability_pipelines/packs/aws_cloudtrail/)
- [AWS Elastic Load Balancer Logs](https://docs.datadoghq.com/observability_pipelines/packs/aws_elb/)
- [AWS Network Load Balancer Logs](https://docs.datadoghq.com/observability_pipelines/packs/aws_nlb/)
- [AWS WAF](https://docs.datadoghq.com/observability_pipelines/packs/aws_waf/)
- [Check Point](https://docs.datadoghq.com/observability_pipelines/packs/checkpoint/)
- [Cisco ASA](https://docs.datadoghq.com/observability_pipelines/packs/cisco_asa/)
- [Cisco Meraki](https://docs.datadoghq.com/observability_pipelines/packs/cisco_meraki/)
- [Cloudflare](https://docs.datadoghq.com/observability_pipelines/packs/cloudflare/)
- [CrowdStrike FDR](https://docs.datadoghq.com/observability_pipelines/packs/crowdstrike/)
- [F5](https://docs.datadoghq.com/observability_pipelines/packs/f5/)
- [Fastly](https://docs.datadoghq.com/observability_pipelines/packs/fastly/)
- [Fortinet Firewall](https://docs.datadoghq.com/observability_pipelines/packs/fortinet_firewall/)
- [HAProxy Ingress](https://docs.datadoghq.com/observability_pipelines/packs/haproxy_ingress/)
- [Infoblox](https://docs.datadoghq.com/observability_pipelines/packs/infoblox/)
- [Istio Proxy](https://docs.datadoghq.com/observability_pipelines/packs/istio_proxy/)
- [Juniper SRX Firewall Traffic Logs](https://docs.datadoghq.com/observability_pipelines/packs/juniper_srx_traffic/)
- [Netskope](https://docs.datadoghq.com/observability_pipelines/packs/netskope/)
- [NGINX](https://docs.datadoghq.com/observability_pipelines/packs/nginx/)
- [Okta](https://docs.datadoghq.com/observability_pipelines/packs/okta/)
- [Palo Alto Firewall](https://docs.datadoghq.com/observability_pipelines/packs/palo_alto_firewall/)
- [SentinelOne Cloud Funnel EDR](https://docs.datadoghq.com/observability_pipelines/packs/sentinel_one/)
- [Windows XML](https://docs.datadoghq.com/observability_pipelines/packs/windows_xml/)
- [Zscaler ZIA DNS](https://docs.datadoghq.com/observability_pipelines/packs/zscaler_zia_dns/)
- [Zscaler ZIA Firewall](https://docs.datadoghq.com/observability_pipelines/packs/zscaler_zia_firewall/)
- [Zscaler ZIA Tunnel](https://docs.datadoghq.com/observability_pipelines/packs/zscaler_zia_tunnel/)
- [Zscaler ZIA Web Logs](https://docs.datadoghq.com/observability_pipelines/packs/zscaler_zia_web_logs/)
- [Zscaler ZPA](https://docs.datadoghq.com/observability_pipelines/packs/zscaler_zpa/)

## Setup{% #setup %}

To set up packs:

1. Navigate to the [Pipelines](https://app.datadoghq.com/observability-pipelines) page.
1. Click **Packs**.
1. Click the pack you want to set up.
1. You can either create a new pipeline from the pack or add the pack to an existing pipeline.
   - If you clicked **Add to New Pipeline**, in the new pipeline that was created:
     - Click the processor group that was added to see the individual processors the pack added, and edit them as needed. See [Processors](https://docs.datadoghq.com/observability_pipelines/processors/) for more information.
     - See [Set Up Pipelines](https://docs.datadoghq.com/observability_pipelines/set_up_pipelines/) for information on setting up the rest of the pipeline.
   - If you clicked **Add to Existing Pipeline**:
     1. Select the pipeline you want to add the pack to.
     1. Click **Add to Existing Pipeline**. The pack is added to the last processor group in your pipeline.
     1. Click the group to review the individual processors and edit them as needed. See [Processors](https://docs.datadoghq.com/observability_pipelines/processors/) for more information.

## Further Reading{% #further-reading %}

- [Rehydrate archived logs in any SIEM or logging vendor with Observability Pipelines](https://www.datadoghq.com/blog/rehydrate-archived-logs-with-observability-pipelines)
