Name of the environment variable or secret that holds a custom header value (used with custom auth strategies).
encoding [required]
enum
Encoding format for log events.
Allowed enum values: json
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
password_key
string
Name of the environment variable or secret that holds the password (used when auth_strategy is basic).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
token_key
string
Name of the environment variable or secret that holds the bearer token (used when auth_strategy is bearer).
type [required]
enum
The destination type. The value should always be http_client.
Allowed enum values: http_client
default: http_client
uri_key
string
Name of the environment variable or secret that holds the HTTP endpoint URI.
username_key
string
Name of the environment variable or secret that holds the username (used when auth_strategy is basic).
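Assembling the fields above, a minimal `http_client` destination could look like the following sketch. The surrounding pipeline payload shape and the secret names (`HTTP_URI`, `HTTP_USER`, `HTTP_PASS`) are illustrative assumptions, not part of this reference:

```python
# Hypothetical http_client destination built from the schema above.
# Secret names are placeholders for your own environment variables or secrets.
http_client_destination = {
    "id": "http-forwarder",
    "type": "http_client",               # must be the literal string "http_client"
    "inputs": ["parse-json-processor"],  # IDs of upstream components
    "encoding": "json",                  # only allowed value for this destination
    "uri_key": "HTTP_URI",               # env var / secret holding the endpoint URI
    "username_key": "HTTP_USER",         # used when auth_strategy is basic
    "password_key": "HTTP_PASS",
}

assert http_client_destination["type"] == "http_client"
assert all(isinstance(i, str) for i in http_client_destination["inputs"])
```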
Option 2
object
The amazon_opensearch destination writes logs to Amazon OpenSearch.
Supported pipeline types: logs
auth [required]
object
Authentication settings for the Amazon OpenSearch destination.
The strategy field determines whether basic or AWS-based authentication is used.
assume_role
string
The ARN of the role to assume (used with aws strategy).
aws_region
string
The AWS region to use (used with aws strategy).
external_id
string
External ID for the assumed role (used with aws strategy).
session_name
string
Session name for the assumed role (used with aws strategy).
strategy [required]
enum
The authentication strategy to use.
Allowed enum values: basic,aws
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
bulk_index
string
The index to write logs to.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The destination type. The value should always be amazon_opensearch.
Allowed enum values: amazon_opensearch
default: amazon_opensearch
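A sketch of an `amazon_opensearch` destination combining AWS-based authentication with a disk buffer (buffer Option 1). The role ARN, external ID, and buffer size are placeholder assumptions:

```python
# Hypothetical amazon_opensearch destination with aws auth and a disk buffer.
opensearch_destination = {
    "id": "opensearch-out",
    "type": "amazon_opensearch",
    "inputs": ["filter-errors"],
    "bulk_index": "app-logs",
    "auth": {
        "strategy": "aws",  # "basic" or "aws"
        "aws_region": "us-east-1",
        "assume_role": "arn:aws:iam::123456789012:role/pipeline-writer",
        "external_id": "pipeline-external-id",
        "session_name": "op-pipeline",
    },
    "buffer": {
        "type": "disk",
        "max_size": 1073741824,  # required for a disk buffer (1 GiB here)
        "when_full": "block",    # or "drop_newest"
    },
}

assert opensearch_destination["auth"]["strategy"] in ("basic", "aws")
```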
Option 3
object
The amazon_s3 destination sends your logs in Datadog-rehydratable format to an Amazon S3 bucket for archiving.
Supported pipeline types: logs
auth
object
AWS authentication credentials used for accessing AWS services such as S3.
If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).
assume_role
string
The Amazon Resource Name (ARN) of the role to assume.
external_id
string
A unique identifier for cross-account role assumption.
session_name
string
A session identifier used for logging and tracing the assumed role session.
bucket [required]
string
S3 bucket name.
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
id [required]
string
Unique identifier for the destination component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The destination type. Always amazon_s3.
Allowed enum values: amazon_s3
default: amazon_s3
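A sketch of an `amazon_s3` archive destination using a byte-sized memory buffer (buffer Option 2). The bucket name and role ARN are placeholders; omitting `auth` entirely would fall back to the system's default credentials:

```python
# Hypothetical amazon_s3 archive destination.
s3_destination = {
    "id": "s3-archive",
    "type": "amazon_s3",
    "inputs": ["sample-logs"],
    "bucket": "my-log-archive",  # required
    "auth": {                    # optional; defaults to system credentials if omitted
        "assume_role": "arn:aws:iam::123456789012:role/archiver",
        "external_id": "cross-account-id",
        "session_name": "s3-archive-session",
    },
    "buffer": {                  # memory buffer sized by bytes (Option 2)
        "type": "memory",
        "max_size": 268435456,   # 256 MiB
        "when_full": "drop_newest",
    },
}

assert s3_destination["buffer"]["type"] == "memory"
```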
Option 4
object
The amazon_security_lake destination sends your logs to Amazon Security Lake.
Supported pipeline types: logs
auth
object
AWS authentication credentials used for accessing AWS services such as S3.
If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).
assume_role
string
The Amazon Resource Name (ARN) of the role to assume.
external_id
string
A unique identifier for cross-account role assumption.
session_name
string
A session identifier used for logging and tracing the assumed role session.
bucket [required]
string
Name of the Amazon S3 bucket in Security Lake (3-63 characters).
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
custom_source_name [required]
string
Custom source name for the logs in Security Lake.
id [required]
string
Unique identifier for the destination component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
region [required]
string
AWS region of the S3 bucket.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The destination type. Always amazon_security_lake.
Allowed enum values: amazon_security_lake
default: amazon_security_lake
Option 5
object
The azure_storage destination forwards logs to an Azure Blob Storage container.
Supported pipeline types: logs
blob_prefix
string
Optional prefix for blobs written to the container.
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
connection_string_key
string
Name of the environment variable or secret that holds the Azure Storage connection string.
container_name [required]
string
The name of the Azure Blob Storage container to store logs in.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The destination type. The value should always be azure_storage.
Allowed enum values: azure_storage
default: azure_storage
Option 6
object
The cloud_prem destination sends logs to Datadog CloudPrem.
Supported pipeline types: logs
endpoint_url_key
string
Name of the environment variable or secret that holds the CloudPrem endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The destination type. The value should always be cloud_prem.
Allowed enum values: cloud_prem
default: cloud_prem
Option 7
object
The crowdstrike_next_gen_siem destination forwards logs to CrowdStrike Next Gen SIEM.
Supported pipeline types: logs
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
compression
object
Compression configuration for log events.
algorithm [required]
enum
Compression algorithm for log events.
Allowed enum values: gzip,zlib
level
int64
Compression level.
encoding [required]
enum
Encoding format for log events.
Allowed enum values: json,raw_message
endpoint_url_key
string
Name of the environment variable or secret that holds the CrowdStrike endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
token_key
string
Name of the environment variable or secret that holds the CrowdStrike API token.
type [required]
enum
The destination type. The value should always be crowdstrike_next_gen_siem.
Allowed enum values: crowdstrike_next_gen_siem
default: crowdstrike_next_gen_siem
Option 8
object
The datadog_logs destination forwards logs to Datadog Log Management.
Supported pipeline types: logs
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
routes
[object]
A list of routing rules that forward matching logs to Datadog using dedicated API keys.
api_key_key
string
Name of the environment variable or secret that stores the Datadog API key used by this route.
include
string
A Datadog search query that determines which logs are forwarded using this route.
route_id
string
Unique identifier for this route within the destination.
site
string
Datadog site where matching logs are sent (for example, us1).
type [required]
enum
The destination type. The value should always be datadog_logs.
Allowed enum values: datadog_logs
default: datadog_logs
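The `routes` list above lets one `datadog_logs` destination fan logs out under different API keys. A sketch with two hypothetical routes (the queries, secret names, and sites are illustrative):

```python
# Hypothetical datadog_logs destination splitting traffic across two routes.
datadog_destination = {
    "id": "dd-logs",
    "type": "datadog_logs",
    "inputs": ["enrich-tags"],
    "routes": [
        {
            "route_id": "team-a",
            "include": "service:checkout",    # Datadog search query
            "api_key_key": "TEAM_A_API_KEY",  # env var / secret with the API key
            "site": "us1",
        },
        {
            "route_id": "team-b",
            "include": "service:payments",
            "api_key_key": "TEAM_B_API_KEY",
            "site": "eu1",
        },
    ],
}

# Route IDs should be unique within the destination.
route_ids = [r["route_id"] for r in datadog_destination["routes"]]
assert len(route_ids) == len(set(route_ids))
```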
Option 9
object
The elasticsearch destination writes logs to an Elasticsearch cluster.
Supported pipeline types: logs
api_version
enum
The Elasticsearch API version to use. Set to auto to auto-detect.
Allowed enum values: auto,v6,v7,v8
auth
object
Authentication settings for the Elasticsearch destination.
When strategy is basic, use username_key and password_key to reference credentials stored in environment variables or secrets.
password_key
string
Name of the environment variable or secret that holds the Elasticsearch password (used when strategy is basic).
strategy [required]
enum
The authentication strategy to use.
Allowed enum values: basic,aws
username_key
string
Name of the environment variable or secret that holds the Elasticsearch username (used when strategy is basic).
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
bulk_index
string
The index to write logs to in Elasticsearch.
data_stream
object
Configuration options for writing to Elasticsearch Data Streams instead of a fixed index.
dataset
string
The data stream dataset for your logs. This groups logs by their source or application.
dtype
string
The data stream type for your logs. This determines how logs are categorized within the data stream.
namespace
string
The data stream namespace for your logs. This separates logs into different environments or domains.
endpoint_url_key
string
Name of the environment variable or secret that holds the Elasticsearch endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The destination type. The value should always be elasticsearch.
Allowed enum values: elasticsearch
default: elasticsearch
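A sketch of an `elasticsearch` destination writing to a data stream rather than a fixed `bulk_index`. With Elasticsearch's usual naming scheme, the three `data_stream` fields resolve to a stream named `<type>-<dataset>-<namespace>`; the endpoint and credential secret names here are assumptions:

```python
# Hypothetical elasticsearch destination targeting a data stream.
es_destination = {
    "id": "es-out",
    "type": "elasticsearch",
    "inputs": ["normalize"],
    "api_version": "auto",              # auto-detect the cluster's API version
    "endpoint_url_key": "ES_ENDPOINT",  # env var / secret with the cluster URL
    "auth": {
        "strategy": "basic",
        "username_key": "ES_USER",
        "password_key": "ES_PASS",
    },
    "data_stream": {
        "dtype": "logs",
        "dataset": "myapp",
        "namespace": "production",
    },
}

ds = es_destination["data_stream"]
stream_name = f"{ds['dtype']}-{ds['dataset']}-{ds['namespace']}"
assert stream_name == "logs-myapp-production"
```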
Option 10
object
The google_chronicle destination sends logs to Google Chronicle.
Supported pipeline types: logs
auth
object
Google Cloud credentials used to authenticate with Google Chronicle.
credentials_file [required]
string
Path to the Google Cloud service account key file.
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
customer_id [required]
string
The Google Chronicle customer ID.
encoding
enum
The encoding format for the logs sent to Chronicle.
Allowed enum values: json,raw_message
endpoint_url_key
string
Name of the environment variable or secret that holds the Google Chronicle endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
log_type
string
The log type metadata associated with the Chronicle destination.
type [required]
enum
The destination type. The value should always be google_chronicle.
Allowed enum values: google_chronicle
default: google_chronicle
Option 11
object
The google_cloud_storage destination stores logs in a Google Cloud Storage (GCS) bucket.
It requires a bucket name, Google Cloud authentication, and metadata fields.
Supported pipeline types: logs
acl
enum
Access control list setting for objects written to the bucket.
Allowed enum values: private,project-private,public-read,authenticated-read,bucket-owner-read,bucket-owner-full-control
auth
object
Google Cloud credentials used to authenticate with Google Cloud Storage.
credentials_file [required]
string
Path to the Google Cloud service account key file.
bucket [required]
string
Name of the GCS bucket.
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
id [required]
string
Unique identifier for the destination component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
key_prefix
string
Optional prefix for object keys within the GCS bucket.
metadata
[object]
Custom metadata to attach to each object uploaded to the GCS bucket.
name [required]
string
The metadata key.
value [required]
string
The metadata value.
storage_class [required]
enum
Storage class used for objects stored in GCS.
Allowed enum values: STANDARD,NEARLINE,COLDLINE,ARCHIVE
type [required]
enum
The destination type. Always google_cloud_storage.
Allowed enum values: google_cloud_storage
default: google_cloud_storage
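A sketch of a `google_cloud_storage` destination showing the required `storage_class` plus custom object metadata. The bucket name and key file path are placeholders:

```python
# Hypothetical google_cloud_storage destination.
gcs_destination = {
    "id": "gcs-archive",
    "type": "google_cloud_storage",
    "inputs": ["sample-logs"],
    "bucket": "my-log-bucket",
    "storage_class": "NEARLINE",  # required
    "acl": "project-private",
    "key_prefix": "logs/",        # optional object key prefix
    "auth": {"credentials_file": "/etc/op/gcp-sa.json"},
    "metadata": [                 # attached to every uploaded object
        {"name": "team", "value": "platform"},
        {"name": "env", "value": "prod"},
    ],
}

assert gcs_destination["storage_class"] in (
    "STANDARD", "NEARLINE", "COLDLINE", "ARCHIVE"
)
```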
Option 12
object
The google_pubsub destination publishes logs to a Google Cloud Pub/Sub topic.
Supported pipeline types: logs
auth
object
Google Cloud credentials used to authenticate with Google Cloud Pub/Sub.
credentials_file [required]
string
Path to the Google Cloud service account key file.
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
encoding [required]
enum
Encoding format for log events.
Allowed enum values: json,raw_message
endpoint_url_key
string
Name of the environment variable or secret that holds the Google Cloud Pub/Sub endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
project [required]
string
The Google Cloud project ID that owns the Pub/Sub topic.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
topic [required]
string
The Pub/Sub topic name to publish logs to.
type [required]
enum
The destination type. The value should always be google_pubsub.
Allowed enum values: google_pubsub
default: google_pubsub
Option 13
object
The kafka destination sends logs to Apache Kafka topics.
Supported pipeline types: logs
bootstrap_servers_key
string
Name of the environment variable or secret that holds the Kafka bootstrap servers list.
compression
enum
Compression codec for Kafka messages.
Allowed enum values: none,gzip,snappy,lz4,zstd
encoding [required]
enum
Encoding format for log events.
Allowed enum values: json,raw_message
headers_key
string
The field name to use for Kafka message headers.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
key_field
string
The field name to use as the Kafka message key.
librdkafka_options
[object]
Optional list of advanced Kafka producer configuration options, defined as key-value pairs.
name [required]
string
The name of the librdkafka configuration option to set.
value [required]
string
The value assigned to the specified librdkafka configuration option.
message_timeout_ms
int64
Maximum time in milliseconds to wait for message delivery confirmation.
rate_limit_duration_secs
int64
Duration in seconds for the rate limit window.
rate_limit_num
int64
Maximum number of messages allowed per rate limit duration.
sasl
object
SASL authentication settings for connecting to the Kafka cluster.
mechanism
enum
SASL mechanism used for Kafka authentication.
Allowed enum values: PLAIN,SCRAM-SHA-256,SCRAM-SHA-512
password_key
string
Name of the environment variable or secret that holds the SASL password.
username_key
string
Name of the environment variable or secret that holds the SASL username.
socket_timeout_ms
int64
Socket timeout in milliseconds for network requests.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
topic [required]
string
The Kafka topic name to publish logs to.
type [required]
enum
The destination type. The value should always be kafka.
Allowed enum values: kafka
default: kafka
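A sketch of a `kafka` destination with SASL authentication and a pass-through `librdkafka_options` entry. The broker secret name, credentials, and option value are assumptions (`queue.buffering.max.messages` is a standard librdkafka producer property):

```python
# Hypothetical kafka destination with SASL auth and advanced producer options.
kafka_destination = {
    "id": "kafka-out",
    "type": "kafka",
    "inputs": ["enrich"],
    "topic": "app-logs",                       # required
    "encoding": "json",                        # required: json or raw_message
    "bootstrap_servers_key": "KAFKA_BROKERS",  # env var / secret with the broker list
    "key_field": "hostname",                   # log field used as the message key
    "compression": "zstd",
    "sasl": {
        "mechanism": "SCRAM-SHA-512",
        "username_key": "KAFKA_USER",
        "password_key": "KAFKA_PASS",
    },
    "librdkafka_options": [
        # Name/value pairs passed straight through to the librdkafka producer.
        {"name": "queue.buffering.max.messages", "value": "100000"},
    ],
}

assert kafka_destination["sasl"]["mechanism"] in (
    "PLAIN", "SCRAM-SHA-256", "SCRAM-SHA-512"
)
```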
Option 14
object
The microsoft_sentinel destination forwards logs to Microsoft Sentinel.
Supported pipeline types: logs
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
client_id [required]
string
Azure AD client ID used for authentication.
client_secret_key
string
Name of the environment variable or secret that holds the Azure AD client secret.
dce_uri_key
string
Name of the environment variable or secret that holds the Data Collection Endpoint (DCE) URI.
dcr_immutable_id [required]
string
The immutable ID of the Data Collection Rule (DCR).
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
table [required]
string
The name of the Log Analytics table where logs are sent.
tenant_id [required]
string
Azure AD tenant ID.
type [required]
enum
The destination type. The value should always be microsoft_sentinel.
Allowed enum values: microsoft_sentinel
default: microsoft_sentinel
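A minimal sketch of a microsoft_sentinel destination object as it might appear in a pipeline's destinations list. The component ID, input ID, secret names, and Azure identifiers are illustrative placeholders, not real values:

```json
{
  "id": "sentinel-destination",
  "type": "microsoft_sentinel",
  "inputs": ["filter-processor"],
  "client_id": "00000000-0000-0000-0000-000000000000",
  "client_secret_key": "AZURE_CLIENT_SECRET",
  "tenant_id": "11111111-1111-1111-1111-111111111111",
  "dce_uri_key": "SENTINEL_DCE_URI",
  "dcr_immutable_id": "dcr-example-immutable-id",
  "table": "Custom-PipelineLogs",
  "buffer": { "type": "disk", "max_size": 1073741824, "when_full": "block" }
}
```

client_secret_key and dce_uri_key name environment variables or secrets; the actual secret values are never embedded in the configuration.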
Option 15
object
The new_relic destination sends logs to the New Relic platform.
Supported pipeline types: logs
account_id_key
string
Name of the environment variable or secret that holds the New Relic account ID.
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
license_key_key
string
Name of the environment variable or secret that holds the New Relic license key.
region [required]
enum
The New Relic region.
Allowed enum values: us,eu
type [required]
enum
The destination type. The value should always be new_relic.
Allowed enum values: new_relic
default: new_relic
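A sketch of a new_relic destination, assuming the account ID and license key are stored in environment variables with the names shown (the names are illustrative):

```json
{
  "id": "new-relic-destination",
  "type": "new_relic",
  "inputs": ["filter-processor"],
  "region": "us",
  "account_id_key": "NEW_RELIC_ACCOUNT_ID",
  "license_key_key": "NEW_RELIC_LICENSE_KEY"
}
```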
Option 16
object
The opensearch destination writes logs to an OpenSearch cluster.
Supported pipeline types: logs
auth
object
Authentication settings for the OpenSearch destination.
When strategy is basic, use username_key and password_key to reference credentials stored in environment variables or secrets.
password_key
string
Name of the environment variable or secret that holds the OpenSearch password (used when strategy is basic).
strategy [required]
enum
The authentication strategy to use.
Allowed enum values: basic,aws
username_key
string
Name of the environment variable or secret that holds the OpenSearch username (used when strategy is basic).
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
bulk_index
string
The index to write logs to.
data_stream
object
Configuration options for writing to OpenSearch Data Streams instead of a fixed index.
dataset
string
The data stream dataset for your logs. This groups logs by their source or application.
dtype
string
The data stream type for your logs. This determines how logs are categorized within the data stream.
namespace
string
The data stream namespace for your logs. This separates logs into different environments or domains.
endpoint_url_key
string
Name of the environment variable or secret that holds the OpenSearch endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The destination type. The value should always be opensearch.
Allowed enum values: opensearch
default: opensearch
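A sketch of an opensearch destination using basic authentication and data streams. Note that data_stream is an alternative to bulk_index: configure one or the other. All IDs and secret names below are illustrative:

```json
{
  "id": "opensearch-destination",
  "type": "opensearch",
  "inputs": ["filter-processor"],
  "endpoint_url_key": "OPENSEARCH_ENDPOINT_URL",
  "auth": {
    "strategy": "basic",
    "username_key": "OPENSEARCH_USERNAME",
    "password_key": "OPENSEARCH_PASSWORD"
  },
  "data_stream": {
    "dtype": "logs",
    "dataset": "nginx",
    "namespace": "production"
  }
}
```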
Option 17
object
The rsyslog destination forwards logs to an external rsyslog server over TCP or UDP using the syslog protocol.
Supported pipeline types: logs
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
endpoint_url_key
string
Name of the environment variable or secret that holds the syslog server endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
keepalive
int64
Optional socket keepalive duration in milliseconds.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The destination type. The value should always be rsyslog.
Allowed enum values: rsyslog
default: rsyslog
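A sketch of an rsyslog destination with TLS enabled. The endpoint secret name, certificate paths, and keepalive value are illustrative:

```json
{
  "id": "rsyslog-destination",
  "type": "rsyslog",
  "inputs": ["filter-processor"],
  "endpoint_url_key": "RSYSLOG_ENDPOINT_URL",
  "keepalive": 60000,
  "tls": {
    "ca_file": "/etc/certs/ca.crt",
    "crt_file": "/etc/certs/client.crt",
    "key_file": "/etc/certs/client.key"
  }
}
```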
Option 18
object
The sentinel_one destination sends logs to SentinelOne.
Supported pipeline types: logs
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
region [required]
enum
The SentinelOne region to send logs to.
Allowed enum values: us,eu,ca,data_set_us
token_key
string
Name of the environment variable or secret that holds the SentinelOne API token.
type [required]
enum
The destination type. The value should always be sentinel_one.
Allowed enum values: sentinel_one
default: sentinel_one
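A sketch of a sentinel_one destination; the token secret name is an illustrative placeholder:

```json
{
  "id": "sentinelone-destination",
  "type": "sentinel_one",
  "inputs": ["filter-processor"],
  "region": "us",
  "token_key": "SENTINELONE_API_TOKEN"
}
```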
Option 19
object
The socket destination sends logs over TCP or UDP to a remote server.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the socket address (host:port).
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
encoding [required]
enum
Encoding format for log events.
Allowed enum values: json,raw_message
framing [required]
<oneOf>
Framing method configuration.
Option 1
object
Each log event is delimited by a newline character.
method [required]
enum
The framing method. The value should always be newline_delimited.
Allowed enum values: newline_delimited
Option 2
object
Event data is not delimited at all.
method [required]
enum
The framing method. The value should always be bytes.
Allowed enum values: bytes
Option 3
object
Each log event is separated using the specified delimiter character.
delimiter [required]
string
A single ASCII character used as a delimiter.
method [required]
enum
The framing method. The value should always be character_delimited.
Allowed enum values: character_delimited
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
mode [required]
enum
Protocol used to send logs.
Allowed enum values: tcp,udp
tls
object
TLS configuration. Relevant only when mode is tcp.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The destination type. The value should always be socket.
Allowed enum values: socket
default: socket
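A sketch of a socket destination sending newline-delimited JSON over TCP. The address secret name is illustrative; tls would only be added when mode is tcp, and the character_delimited framing method would additionally require a delimiter:

```json
{
  "id": "socket-destination",
  "type": "socket",
  "inputs": ["filter-processor"],
  "mode": "tcp",
  "address_key": "SOCKET_ADDRESS",
  "encoding": "json",
  "framing": { "method": "newline_delimited" }
}
```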
Option 20
object
The splunk_hec destination forwards logs to Splunk using the HTTP Event Collector (HEC).
Supported pipeline types: logs
auto_extract_timestamp
boolean
If true, Splunk tries to extract timestamps from incoming log events.
If false, Splunk assigns the time the event was received.
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
encoding
enum
Encoding format for log events.
Allowed enum values: json,raw_message
endpoint_url_key
string
Name of the environment variable or secret that holds the Splunk HEC endpoint URL.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
index
string
Optional name of the Splunk index where logs are written.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
sourcetype
string
The Splunk sourcetype to assign to log events.
token_key
string
Name of the environment variable or secret that holds the Splunk HEC token.
type [required]
enum
The destination type. The value should always be splunk_hec.
Allowed enum values: splunk_hec
default: splunk_hec
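A sketch of a splunk_hec destination. The endpoint and token secret names, index, and sourcetype are illustrative:

```json
{
  "id": "splunk-destination",
  "type": "splunk_hec",
  "inputs": ["filter-processor"],
  "endpoint_url_key": "SPLUNK_HEC_ENDPOINT_URL",
  "token_key": "SPLUNK_HEC_TOKEN",
  "encoding": "json",
  "index": "main",
  "sourcetype": "pipeline_logs",
  "auto_extract_timestamp": true
}
```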
Option 21
object
The sumo_logic destination forwards logs to Sumo Logic.
Supported pipeline types: logs
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
encoding
enum
The output encoding format.
Allowed enum values: json,raw_message,logfmt
endpoint_url_key
string
Name of the environment variable or secret that holds the Sumo Logic HTTP endpoint URL.
header_custom_fields
[object]
A list of custom headers to include in the request to Sumo Logic.
name [required]
string
The header field name.
value [required]
string
The header field value.
header_host_name
string
Optional override for the host name header.
header_source_category
string
Optional override for the source category header.
header_source_name
string
Optional override for the source name header.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The destination type. The value should always be sumo_logic.
Allowed enum values: sumo_logic
default: sumo_logic
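A sketch of a sumo_logic destination with header overrides and one custom header. All names and values are illustrative:

```json
{
  "id": "sumo-destination",
  "type": "sumo_logic",
  "inputs": ["filter-processor"],
  "endpoint_url_key": "SUMO_LOGIC_ENDPOINT_URL",
  "encoding": "json",
  "header_host_name": "pipeline-worker-1",
  "header_source_category": "prod/pipelines",
  "header_custom_fields": [
    { "name": "X-Team", "value": "platform" }
  ]
}
```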
Option 22
object
The syslog_ng destination forwards logs to an external syslog-ng server over TCP or UDP using the syslog protocol.
Supported pipeline types: logs
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
endpoint_url_key
string
Name of the environment variable or secret that holds the syslog-ng server endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
keepalive
int64
Optional socket keepalive duration in milliseconds.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The destination type. The value should always be syslog_ng.
Allowed enum values: syslog_ng
default: syslog_ng
Option 23
object
The datadog_metrics destination forwards metrics to Datadog.
Supported pipeline types: metrics
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The destination type. The value should always be datadog_metrics.
Allowed enum values: datadog_metrics
default: datadog_metrics
pipeline_type
enum
The type of data being ingested. Defaults to logs if not specified.
Allowed enum values: logs,metrics
default: logs
processor_groups
[object]
A list of processor groups that transform or enrich log data.
display_name
string
The display name for a component.
enabled [required]
boolean
Whether this processor group is enabled.
id [required]
string
The unique identifier for the processor group.
include [required]
string
Conditional expression for when this processor group should execute.
inputs [required]
[string]
A list of IDs for components whose output is used as the input for this processor group.
processors [required]
[ <oneOf>]
Processors applied sequentially within this group. Events flow through each processor in order.
Option 1
object
The filter processor allows conditional processing of logs/metrics based on a Datadog search query. Logs/metrics that match the include query are passed through; others are discarded.
Supported pipeline types: logs, metrics
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs/metrics should pass through the filter. Logs/metrics that match this query continue to downstream components; others are dropped.
type [required]
enum
The processor type. The value should always be filter.
Allowed enum values: filter
default: filter
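A sketch of a filter processor as it might appear inside a processor group's processors list. The ID and search query are illustrative; only events matching the include query continue downstream:

```json
{
  "id": "filter-errors",
  "type": "filter",
  "enabled": true,
  "include": "status:error service:payments"
}
```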
Option 2
object
The add_env_vars processor adds environment variable values to log events.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used to reference this processor in the pipeline.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be add_env_vars.
Allowed enum values: add_env_vars
default: add_env_vars
variables [required]
[object]
A list of environment variable mappings to apply to log fields.
field [required]
string
The target field in the log event.
name [required]
string
The name of the environment variable to read.
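A sketch of an add_env_vars processor that copies the value of a (hypothetical) DEPLOY_ENV environment variable into an environment field on each matching log:

```json
{
  "id": "add-env-vars",
  "type": "add_env_vars",
  "enabled": true,
  "include": "*",
  "variables": [
    { "field": "environment", "name": "DEPLOY_ENV" }
  ]
}
```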
Option 3
object
The add_fields processor adds static key-value fields to logs.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[object]
A list of static fields (key-value pairs) that is added to each log event processed by this component.
name [required]
string
The field name.
value [required]
string
The field value.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be add_fields.
Allowed enum values: add_fields
default: add_fields
Option 4
object
The add_hostname processor adds the hostname to log events.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be add_hostname.
Allowed enum values: add_hostname
default: add_hostname
Option 5
object
The custom_processor processor transforms events using Vector Remap Language (VRL) scripts with advanced filtering capabilities.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets. This field should always be set to * for the custom_processor processor.
default: *
remaps [required]
[object]
Array of VRL remap rules.
drop_on_error [required]
boolean
Whether to drop events that caused errors during processing.
enabled
boolean
Whether this remap rule is enabled.
include [required]
string
A Datadog search query used to filter events for this specific remap rule.
name [required]
string
A descriptive name for this remap rule.
source [required]
string
The VRL script source code that defines the processing logic.
type [required]
enum
The processor type. The value should always be custom_processor.
Allowed enum values: custom_processor
default: custom_processor
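A sketch of a custom_processor with a single VRL remap rule. The rule name, filter query, and VRL script are illustrative; the script assumes a string message field and redacts email-like substrings from it:

```json
{
  "id": "redact-emails",
  "type": "custom_processor",
  "enabled": true,
  "include": "*",
  "remaps": [
    {
      "name": "Redact email addresses",
      "enabled": true,
      "include": "service:web",
      "drop_on_error": false,
      "source": ".message = replace(string!(.message), r'[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+', \"[REDACTED]\")"
    }
  ]
}
```

Note that include on the processor itself should remain * for custom_processor; per-rule filtering happens in each remap's own include query.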
Option 6
object
The datadog_tags processor includes or excludes specific Datadog tags in your logs.
Supported pipeline types: logs
action [required]
enum
The action to take on tags with matching keys.
Allowed enum values: include,exclude
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
keys [required]
[string]
A list of tag keys.
mode [required]
enum
The processing mode.
Allowed enum values: filter
type [required]
enum
The processor type. The value should always be datadog_tags.
Allowed enum values: datadog_tags
default: datadog_tags
Option 7
object
The dedupe processor removes duplicate fields in log events.
Supported pipeline types: logs
cache
object
Configuration for the cache used to detect duplicates.
num_events [required]
int64
The number of events to cache for duplicate detection.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[string]
A list of log field paths to check for duplicates.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
mode [required]
enum
The deduplication mode to apply to the fields.
Allowed enum values: match,ignore
type [required]
enum
The processor type. The value should always be dedupe.
Allowed enum values: dedupe
default: dedupe
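A sketch of a dedupe processor that checks the listed field paths for duplicates, caching recent events for comparison. The field paths and cache size are illustrative:

```json
{
  "id": "dedupe-processor",
  "type": "dedupe",
  "enabled": true,
  "include": "*",
  "mode": "match",
  "fields": ["message", "service"],
  "cache": { "num_events": 5000 }
}
```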
Option 8
object
The enrichment_table processor enriches logs using a static CSV file, GeoIP database, or reference table. Exactly one of file, geoip, or reference_table must be configured.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
file
object
Defines a static enrichment table loaded from a CSV file.
encoding [required]
object
File encoding format.
delimiter [required]
string
The delimiter character used between values in the file.
includes_headers [required]
boolean
Whether the file includes a header row.
type [required]
enum
Specifies the encoding format (e.g., CSV) used for enrichment tables.
Allowed enum values: csv
key [required]
[object]
Key fields used to look up enrichment values.
column [required]
string
The name of the enrichment table column to match against.
comparison [required]
enum
Defines how to compare key fields for enrichment table lookups.
Allowed enum values: equals
field [required]
string
The path of the log field whose value is used for the lookup.
path [required]
string
Path to the CSV file.
schema [required]
[object]
Schema defining column names and their types.
column [required]
string
The name of the column as it appears in the file.
type [required]
enum
Declares allowed data types for enrichment table columns.
Allowed enum values: string,boolean,integer,float,date,timestamp
geoip
object
Uses a GeoIP database to enrich logs based on an IP field.
key_field [required]
string
Path to the IP field in the log.
locale [required]
string
Locale used to resolve geographical names.
path [required]
string
Path to the GeoIP database file.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
reference_table
object
Uses a Datadog reference table to enrich logs.
app_key_key
string
Name of the environment variable or secret that holds the Datadog application key used to access the reference table.
columns
[string]
List of column names to include from the reference table. If not provided, all columns are included.
key_field [required]
string
Path to the field in the log event to match against the reference table.
table_id [required]
string
The unique identifier of the reference table.
target [required]
string
Path where enrichment results should be stored in the log.
type [required]
enum
The processor type. The value should always be enrichment_table.
Allowed enum values: enrichment_table
default: enrichment_table
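A sketch of an enrichment_table processor using the file variant (exactly one of file, geoip, or reference_table is configured). It looks up the log's host field against a hypothetical CSV keyed by hostname and writes matched columns under the enriched target path; all paths and column names are illustrative:

```json
{
  "id": "enrich-from-csv",
  "type": "enrichment_table",
  "enabled": true,
  "include": "*",
  "target": "enriched",
  "file": {
    "path": "/etc/enrichment/hosts.csv",
    "encoding": { "type": "csv", "delimiter": ",", "includes_headers": true },
    "key": [
      { "column": "hostname", "comparison": "equals", "field": "host" }
    ],
    "schema": [
      { "column": "hostname", "type": "string" },
      { "column": "datacenter", "type": "string" }
    ]
  }
}
```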
Option 9
object
The generate_datadog_metrics processor creates custom metrics from logs and sends them to Datadog.
Metrics can be counters, gauges, or distributions and optionally grouped by log fields.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used to reference this component in other parts of the pipeline.
include
string
A Datadog search query used to determine which logs this processor targets.
metrics
[object]
Configuration for generating individual metrics.
group_by
[string]
Optional fields used to group the metric series.
include [required]
string
Datadog filter query to match logs for metric generation.
metric_type [required]
enum
Type of metric to create.
Allowed enum values: count,gauge,distribution
name [required]
string
Name of the custom metric to be created.
value [required]
<oneOf>
Specifies how the value of the generated metric is computed.
Option 1
object
Strategy that increments a generated metric by one for each matching event.
strategy [required]
enum
Increments the metric by 1 for each matching event.
Allowed enum values: increment_by_one
Option 2
object
Strategy that increments a generated metric based on the value of a log field.
field [required]
string
Name of the log field containing the numeric value to increment the metric by.
strategy [required]
enum
Uses a numeric field in the log event as the metric increment.
Allowed enum values: increment_by_field
type [required]
enum
The processor type. The value should always be generate_datadog_metrics.
Allowed enum values: generate_datadog_metrics
default: generate_datadog_metrics
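A sketch of a generate_datadog_metrics processor that counts error logs per service using the increment_by_one strategy. The metric name and filter query are illustrative:

```json
{
  "id": "logs-to-metrics",
  "type": "generate_datadog_metrics",
  "enabled": true,
  "include": "*",
  "metrics": [
    {
      "name": "pipeline.error_logs",
      "include": "status:error",
      "metric_type": "count",
      "group_by": ["service"],
      "value": { "strategy": "increment_by_one" }
    }
  ]
}
```

For a metric derived from a numeric log field, the value object would instead use the increment_by_field strategy with a field naming the source attribute.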
Option 10
object
The ocsf_mapper processor transforms logs into the OCSF schema using a predefined mapping configuration.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used to reference this component in other parts of the pipeline.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
mappings [required]
[object]
A list of mapping rules to convert events to the OCSF format.
include [required]
string
A Datadog search query used to select the logs that this mapping should apply to.
mapping [required]
<oneOf>
Defines a single mapping rule for transforming logs into the OCSF schema.
Custom OCSF mapping configuration for transforming logs.
mapping [required]
[object]
A list of field mapping rules for transforming log fields to OCSF schema fields.
default
The default value to use if the source field is missing or empty.
dest [required]
string
The destination OCSF field path.
lookup
object
Lookup table configuration for mapping source values to destination values.
source
The source field path from the log event.
sources
Multiple source field paths for combined mapping.
value
A static value to use for the destination field.
metadata [required]
object
Metadata for the custom OCSF mapping.
class [required]
string
The OCSF event class name.
profiles
[string]
A list of OCSF profiles to apply.
version [required]
string
The OCSF schema version.
version [required]
int64
The version of the custom mapping configuration.
type [required]
enum
The processor type. The value should always be ocsf_mapper.
Allowed enum values: ocsf_mapper
default: ocsf_mapper
Option 11
object
The parse_grok processor extracts structured fields from unstructured log messages using Grok patterns.
Supported pipeline types: logs
disable_library_rules
boolean
If set to true, disables the default Grok rules provided by Datadog.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
A unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
rules [required]
[object]
The list of Grok parsing rules. If multiple matching rules are provided, they are evaluated in order. The first successful match is applied.
match_rules [required]
[object]
A list of Grok parsing rules that define how to extract fields from the source field.
Each rule must contain a name and a valid Grok pattern.
name [required]
string
The name of the rule.
rule [required]
string
The definition of the Grok rule.
source [required]
string
The name of the field in the log event to apply the Grok rules to.
support_rules
[object]
A list of Grok helper rules that can be referenced by the parsing rules.
name [required]
string
The name of the Grok helper rule.
rule [required]
string
The definition of the Grok helper rule.
type [required]
enum
The processor type. The value should always be parse_grok.
Allowed enum values: parse_grok
default: parse_grok
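As a sketch, a parse_grok processor that parses a web access log from the message field could be configured as follows. The rule name and Grok pattern are illustrative assumptions:

```json
{
  "type": "parse_grok",
  "id": "parse-grok-1",
  "enabled": true,
  "include": "source:nginx",
  "disable_library_rules": false,
  "rules": [
    {
      "source": "message",
      "match_rules": [
        {
          "name": "access_log",
          "rule": "%{ip:client_ip} %{word:method} %{notSpace:path} %{number:status}"
        }
      ],
      "support_rules": []
    }
  ]
}
```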
Option 12
object
The parse_json processor extracts JSON from a specified field and flattens it into the event. This is useful when logs contain embedded JSON as a string.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
field [required]
string
The name of the log field that contains a JSON string.
id [required]
string
A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be parse_json.
Allowed enum values: parse_json
default: parse_json
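A minimal parse_json processor that flattens an embedded JSON string from the message field might look like this (id and field name are illustrative):

```json
{
  "type": "parse_json",
  "id": "parse-json-1",
  "enabled": true,
  "include": "*",
  "field": "message"
}
```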
Option 13
object
The parse_xml processor parses XML from a specified field and extracts it into the event.
Supported pipeline types: logs
always_use_text_key
boolean
Whether to always use a text key for element content.
attr_prefix
string
The prefix to use for XML attributes in the parsed output.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
field [required]
string
The name of the log field that contains an XML string.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
include_attr
boolean
Whether to include XML attributes in the parsed output.
parse_bool
boolean
Whether to parse boolean values from strings.
parse_null
boolean
Whether to parse null values.
parse_number
boolean
Whether to parse numeric values from strings.
text_key
string
The key name to use for text content within XML elements. Must be at least 1 character if specified.
type [required]
enum
The processor type. The value should always be parse_xml.
Allowed enum values: parse_xml
default: parse_xml
Option 14
object
The quota processor measures logging traffic for logs that match a specified filter. When the configured daily quota is reached, the processor can drop events or trigger an alert.
Supported pipeline types: logs
display_name
string
The display name for a component.
drop_events
boolean
If set to true, logs that match the quota filter and are sent after the quota is exceeded are dropped. Logs that do not match the filter continue through the pipeline. Note: You can set either drop_events or overflow_action, but not both.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
ignore_when_missing_partitions
boolean
If true, the processor skips quota checks when partition fields are missing from the logs.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
limit [required]
object
The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.
enforce [required]
enum
Unit for quota enforcement: bytes for data size, or events for event count.
Allowed enum values: bytes,events
limit [required]
int64
The limit for quota enforcement.
name [required]
string
Name of the quota.
overflow_action
enum
The action to take when the quota or bucket limit is exceeded. Options:
drop: Drop the event.
no_action: Let the event pass through.
overflow_routing: Route to an overflow destination.
Allowed enum values: drop,no_action,overflow_routing
overrides
[object]
A list of alternate quota rules that apply to specific sets of events, identified by matching field values. Each override can define a custom limit.
fields [required]
[object]
A list of field matchers used to apply a specific override. If an event matches all listed key-value pairs, the corresponding override limit is enforced.
name [required]
string
The field name.
value [required]
string
The field value.
limit [required]
object
The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.
enforce [required]
enum
Unit for quota enforcement: bytes for data size, or events for event count.
Allowed enum values: bytes,events
limit [required]
int64
The limit for quota enforcement.
partition_fields
[string]
A list of fields used to segment log traffic for quota enforcement. Quotas are tracked independently by unique combinations of these field values.
too_many_buckets_action
enum
The action to take when the quota or bucket limit is exceeded. Options:
drop: Drop the event.
no_action: Let the event pass through.
overflow_routing: Route to an overflow destination.
Allowed enum values: drop,no_action,overflow_routing
type [required]
enum
The processor type. The value should always be quota.
Allowed enum values: quota
default: quota
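Combining the fields above, a quota processor with a per-service partition and one override might be sketched as follows. The limits and field values are illustrative; note that only overflow_action is set here, since drop_events and overflow_action are mutually exclusive:

```json
{
  "type": "quota",
  "id": "quota-1",
  "name": "daily-ingest-quota",
  "enabled": true,
  "include": "*",
  "overflow_action": "drop",
  "partition_fields": ["service"],
  "limit": { "enforce": "bytes", "limit": 10737418240 },
  "overrides": [
    {
      "fields": [{ "name": "service", "value": "checkout" }],
      "limit": { "enforce": "bytes", "limit": 21474836480 }
    }
  ]
}
```

Each unique value of service is tracked as its own quota partition, and the checkout service gets a doubled limit through the override.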
Option 15
object
The reduce processor aggregates and merges logs based on matching keys and merge strategies.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
group_by [required]
[string]
A list of fields used to group log events for merging.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
merge_strategies [required]
[object]
List of merge strategies defining how values from grouped events should be combined.
path [required]
string
The field path in the log event.
strategy [required]
enum
The merge strategy to apply.
Allowed enum values: discard,retain,sum,max,min,array,concat,concat_newline,concat_raw,shortest_array,longest_array,flat_unique
type [required]
enum
The processor type. The value should always be reduce.
Allowed enum values: reduce
default: reduce
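As an illustrative sketch, a reduce processor that merges events per host and service, concatenating messages and summing a numeric field, could look like this (field paths are assumptions):

```json
{
  "type": "reduce",
  "id": "reduce-1",
  "enabled": true,
  "include": "*",
  "group_by": ["host", "service"],
  "merge_strategies": [
    { "path": "message", "strategy": "concat_newline" },
    { "path": "bytes_sent", "strategy": "sum" }
  ]
}
```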
Option 16
object
The remove_fields processor deletes specified fields from logs.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[string]
A list of field names to be removed from each log event.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be remove_fields.
Allowed enum values: remove_fields
default: remove_fields
Option 17
object
The rename_fields processor changes field names.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[object]
A list of rename rules specifying which fields to rename in the event, what to rename them to, and whether to preserve the original fields.
destination [required]
string
The field name to assign the renamed value to.
preserve_source [required]
boolean
Indicates whether the original source field should be kept (true) or removed (false) after renaming.
source [required]
string
The original field name in the log event that should be renamed.
id [required]
string
A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be rename_fields.
Allowed enum values: rename_fields
default: rename_fields
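A minimal rename_fields processor following the schema above might look like this sketch (the field paths are illustrative):

```json
{
  "type": "rename_fields",
  "id": "rename-fields-1",
  "enabled": true,
  "include": "*",
  "fields": [
    {
      "source": "http.status",
      "destination": "status_code",
      "preserve_source": false
    }
  ]
}
```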
Option 18
object
The sample processor allows probabilistic sampling of logs at a fixed rate.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
group_by
[string]
Optional list of fields to group events by. Each group is sampled independently.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
percentage [required]
double
The percentage of logs to sample.
type [required]
enum
The processor type. The value should always be sample.
Allowed enum values: sample
default: sample
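For example, a sample processor that keeps roughly 10% of informational logs, sampled independently per service, could be sketched as (query and percentage are illustrative):

```json
{
  "type": "sample",
  "id": "sample-1",
  "enabled": true,
  "include": "status:info",
  "percentage": 10.0,
  "group_by": ["service"]
}
```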
Option 19
object
The sensitive_data_scanner processor detects and optionally redacts sensitive data in log events.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
rules [required]
[object]
A list of rules for identifying and acting on sensitive data patterns.
keyword_options
object
Configuration for keywords used to reinforce sensitive data pattern detection.
keywords [required]
[string]
A list of keywords to match near the sensitive pattern.
proximity [required]
int64
Maximum number of tokens between a keyword and a sensitive value match.
name [required]
string
A name identifying the rule.
on_match [required]
<oneOf>
Defines what action to take when sensitive data is matched.
Option 1
object
Configuration for completely redacting matched sensitive data.
action [required]
enum
Action type that completely replaces the matched sensitive data with a fixed replacement string to remove all visibility.
Allowed enum values: redact
options [required]
object
Configuration for fully redacting sensitive data.
replace [required]
string
The replacement string that substitutes the matched sensitive data.
Option 2
object
Configuration for hashing matched sensitive values.
action [required]
enum
Action type that replaces the matched sensitive data with a hashed representation, preserving structure while securing content.
Allowed enum values: hash
options
object
Options for the hash action.
Option 3
object
Configuration for partially redacting matched sensitive data.
action [required]
enum
Action type that redacts part of the sensitive data while preserving a configurable number of characters, typically used for masking purposes (e.g., show last 4 digits of a credit card).
Allowed enum values: partial_redact
options [required]
object
Controls how partial redaction is applied, including character count and direction.
characters [required]
int64
The number of characters to redact.
direction [required]
enum
Indicates whether to redact characters from the first or last part of the matched value.
Allowed enum values: first,last
pattern [required]
<oneOf>
Pattern detection configuration for identifying sensitive data using either a custom regex or a library reference.
Option 1
object
Defines a custom regex-based pattern for identifying sensitive data in logs.
options [required]
object
Options for defining a custom regex pattern.
description
string
Human-readable description providing context about a sensitive data scanner rule.
rule [required]
string
A regular expression used to detect sensitive values. Must be a valid regex.
type [required]
enum
Indicates a custom regular expression is used for matching.
Allowed enum values: custom
Option 2
object
Specifies a pattern from Datadog’s sensitive data detection library to match known sensitive data types.
options [required]
object
Options for selecting a predefined library pattern and enabling keyword support.
description
string
Human-readable description providing context about a sensitive data scanner rule.
id [required]
string
Identifier for a predefined pattern from the sensitive data scanner pattern library.
use_recommended_keywords
boolean
Whether to augment the pattern with recommended keywords (optional).
type [required]
enum
Indicates that a predefined library pattern is used.
Allowed enum values: library
scope [required]
<oneOf>
Determines which parts of the log the pattern-matching rule should be applied to.
Option 1
object
Includes only specific fields for sensitive data scanning.
options [required]
object
Fields to which the scope rule applies.
fields [required]
[string]
The list of field paths that the scope rule applies to.
target [required]
enum
Applies the rule only to included fields.
Allowed enum values: include
Option 2
object
Excludes specific fields from sensitive data scanning.
options [required]
object
Fields to which the scope rule applies.
fields [required]
[string]
The list of field paths that the scope rule applies to.
target [required]
enum
Excludes specific fields from processing.
Allowed enum values: exclude
Option 3
object
Applies scanning across all available fields.
target [required]
enum
Applies the rule to all fields.
Allowed enum values: all
tags [required]
[string]
Tags assigned to this rule for filtering and classification.
type [required]
enum
The processor type. The value should always be sensitive_data_scanner.
Allowed enum values: sensitive_data_scanner
default: sensitive_data_scanner
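Tying the rule, pattern, scope, and action variants together, a sensitive_data_scanner processor with one custom-regex rule that partially redacts matches in the message field might be sketched as follows. The rule name, regex, tags, and keywords are illustrative assumptions:

```json
{
  "type": "sensitive_data_scanner",
  "id": "sds-1",
  "enabled": true,
  "include": "*",
  "rules": [
    {
      "name": "mask-card-numbers",
      "tags": ["sensitive:card"],
      "pattern": {
        "type": "custom",
        "options": {
          "description": "Card-like digit sequences",
          "rule": "\\b\\d{13,16}\\b"
        }
      },
      "scope": {
        "target": "include",
        "options": { "fields": ["message"] }
      },
      "on_match": {
        "action": "partial_redact",
        "options": { "characters": 12, "direction": "first" }
      },
      "keyword_options": { "keywords": ["card", "cc"], "proximity": 10 }
    }
  ]
}
```

This rule scans only the message field and redacts the first 12 characters of each match, leaving the trailing digits visible.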
Option 20
object
The split_array processor splits array fields into separate events based on configured rules.
Supported pipeline types: logs
arrays [required]
[object]
A list of array split configurations.
field [required]
string
The path to the array field to split.
include [required]
string
A Datadog search query used to determine which logs this array split operation targets.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets. For split_array, this should typically be *.
type [required]
enum
The processor type. The value should always be split_array.
Allowed enum values: split_array
default: split_array
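As a sketch, a split_array processor that fans out each element of a (hypothetical) events array field into its own log event could look like this:

```json
{
  "type": "split_array",
  "id": "split-array-1",
  "enabled": true,
  "include": "*",
  "arrays": [
    { "field": "events", "include": "*" }
  ]
}
```

The top-level include is set to *, as recommended for this processor type, with per-array targeting handled inside the arrays list.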
Option 21
object
The throttle processor limits the number of events that pass through over a given time window.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
group_by
[string]
Optional list of fields used to group events. The threshold is tracked independently for each group.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
threshold [required]
int64
The number of events allowed in the configured time window. Events sent after the threshold has been reached are dropped.
type [required]
enum
The processor type. The value should always be throttle.
Allowed enum values: throttle
default: throttle
window [required]
double
The time window in seconds over which the threshold applies.
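For example, a throttle processor allowing at most 1000 events per 60-second window, tracked separately per service, might be sketched as (values are illustrative):

```json
{
  "type": "throttle",
  "id": "throttle-1",
  "enabled": true,
  "include": "*",
  "threshold": 1000,
  "window": 60.0,
  "group_by": ["service"]
}
```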
Option 22
object
The metric_tags processor filters metrics based on their tags using Datadog tag key patterns.
Supported pipeline types: metrics
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query that determines which metrics the processor targets.
rules [required]
[object]
A list of rules for filtering metric tags.
action [required]
enum
The action to take on tags with matching keys.
Allowed enum values: include,exclude
include [required]
string
A Datadog search query used to determine which metrics this rule targets.
keys [required]
[string]
A list of tag keys to include or exclude.
mode [required]
enum
The processing mode for tag filtering.
Allowed enum values: filter
type [required]
enum
The processor type. The value should always be metric_tags.
Allowed enum values: metric_tags
default: metric_tags
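A metric_tags processor that strips high-cardinality tag keys from all metrics could be sketched as follows (the tag keys are illustrative assumptions):

```json
{
  "type": "metric_tags",
  "id": "metric-tags-1",
  "enabled": true,
  "include": "*",
  "rules": [
    {
      "include": "*",
      "mode": "filter",
      "action": "exclude",
      "keys": ["pod_name", "container_id"]
    }
  ]
}
```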
processors
[object]
DEPRECATED: A list of processor groups that transform or enrich log data.
Deprecated: Use the processor_groups field instead.
display_name
string
The display name for a component.
enabled [required]
boolean
Whether this processor group is enabled.
id [required]
string
The unique identifier for the processor group.
include [required]
string
Conditional expression for when this processor group should execute.
inputs [required]
[string]
A list of IDs for components whose output is used as the input for this processor group.
processors [required]
[ <oneOf>]
Processors applied sequentially within this group. Events flow through each processor in order.
Option 1
object
The filter processor allows conditional processing of logs/metrics based on a Datadog search query. Logs/metrics that match the include query are passed through; others are discarded.
Supported pipeline types: logs, metrics
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs/metrics should pass through the filter. Logs/metrics that match this query continue to downstream components; others are dropped.
type [required]
enum
The processor type. The value should always be filter.
Allowed enum values: filter
default: filter
Option 2
object
The add_env_vars processor adds environment variable values to log events.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used to reference this processor in the pipeline.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be add_env_vars.
Allowed enum values: add_env_vars
default: add_env_vars
variables [required]
[object]
A list of environment variable mappings to apply to log fields.
field [required]
string
The target field in the log event.
name [required]
string
The name of the environment variable to read.
Option 3
object
The add_fields processor adds static key-value fields to logs.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[object]
A list of static fields (key-value pairs) added to each log event processed by this component.
name [required]
string
The field name.
value [required]
string
The field value.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be add_fields.
Allowed enum values: add_fields
default: add_fields
Option 4
object
The add_hostname processor adds the hostname to log events.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be add_hostname.
Allowed enum values: add_hostname
default: add_hostname
Option 5
object
The custom_processor processor transforms events using Vector Remap Language (VRL) scripts with advanced filtering capabilities.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets. This field should always be set to * for the custom_processor processor.
default: *
remaps [required]
[object]
Array of VRL remap rules.
drop_on_error [required]
boolean
Whether to drop events that caused errors during processing.
enabled
boolean
Whether this remap rule is enabled.
include [required]
string
A Datadog search query used to filter events for this specific remap rule.
name [required]
string
A descriptive name for this remap rule.
source [required]
string
The VRL script source code that defines the processing logic.
type [required]
enum
The processor type. The value should always be custom_processor.
Allowed enum values: custom_processor
default: custom_processor
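Following the schema above, a custom_processor with a single VRL remap rule might be sketched as follows. The rule name, filter query, and VRL source are illustrative assumptions:

```json
{
  "type": "custom_processor",
  "id": "custom-vrl-1",
  "enabled": true,
  "include": "*",
  "remaps": [
    {
      "name": "normalize-status",
      "enabled": true,
      "include": "service:web",
      "drop_on_error": false,
      "source": ".status = downcase(string!(.status))"
    }
  ]
}
```

The top-level include stays *, per the field description, while each remap rule carries its own filter query.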
Option 6
object
The datadog_tags processor includes or excludes specific Datadog tags in your logs.
Supported pipeline types: logs
action [required]
enum
The action to take on tags with matching keys.
Allowed enum values: include,exclude
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
keys [required]
[string]
A list of tag keys.
mode [required]
enum
The processing mode.
Allowed enum values: filter
type [required]
enum
The processor type. The value should always be datadog_tags.
Allowed enum values: datadog_tags
default: datadog_tags
Option 7
object
The dedupe processor removes duplicate fields in log events.
Supported pipeline types: logs
cache
object
Configuration for the cache used to detect duplicates.
num_events [required]
int64
The number of events to cache for duplicate detection.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[string]
A list of log field paths to check for duplicates.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
mode [required]
enum
The deduplication mode to apply to the fields.
Allowed enum values: match,ignore
type [required]
enum
The processor type. The value should always be dedupe.
Allowed enum values: dedupe
default: dedupe
Option 8
object
The enrichment_table processor enriches logs using a static CSV file, GeoIP database, or reference table. Exactly one of file, geoip, or reference_table must be configured.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
file
object
Defines a static enrichment table loaded from a CSV file.
encoding [required]
object
File encoding format.
delimiter [required]
string
The delimiter character used to separate values in the CSV file.
includes_headers [required]
boolean
Whether the first row of the CSV file contains column headers.
type [required]
enum
Specifies the encoding format (e.g., CSV) used for enrichment tables.
Allowed enum values: csv
key [required]
[object]
Key fields used to look up enrichment values.
column [required]
string
The name of the column in the enrichment table to match against.
comparison [required]
enum
Defines how to compare key fields for enrichment table lookups.
Allowed enum values: equals
field [required]
string
The path to the field in the log event used for the lookup.
path [required]
string
Path to the CSV file.
schema [required]
[object]
Schema defining column names and their types.
column [required]
string
The name of the column in the enrichment table.
type [required]
enum
Declares allowed data types for enrichment table columns.
Allowed enum values: string,boolean,integer,float,date,timestamp
geoip
object
Uses a GeoIP database to enrich logs based on an IP field.
key_field [required]
string
Path to the IP field in the log.
locale [required]
string
Locale used to resolve geographical names.
path [required]
string
Path to the GeoIP database file.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
reference_table
object
Uses a Datadog reference table to enrich logs.
app_key_key
string
Name of the environment variable or secret that holds the Datadog application key used to access the reference table.
columns
[string]
List of column names to include from the reference table. If not provided, all columns are included.
key_field [required]
string
Path to the field in the log event to match against the reference table.
table_id [required]
string
The unique identifier of the reference table.
target [required]
string
Path where enrichment results should be stored in the log.
type [required]
enum
The processor type. The value should always be enrichment_table.
Allowed enum values: enrichment_table
default: enrichment_table
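Using the file variant, an enrichment_table processor that joins logs against a CSV of service owners might be sketched as follows. The file path, column names, and key field are illustrative assumptions (exactly one of file, geoip, or reference_table may be set):

```json
{
  "type": "enrichment_table",
  "id": "enrichment-1",
  "enabled": true,
  "include": "*",
  "target": "enriched",
  "file": {
    "path": "/etc/pipeline/owners.csv",
    "encoding": { "type": "csv", "delimiter": ",", "includes_headers": true },
    "key": [
      { "column": "service", "comparison": "equals", "field": "service" }
    ],
    "schema": [
      { "column": "service", "type": "string" },
      { "column": "owner", "type": "string" }
    ]
  }
}
```

Matching rows are written under the enriched path in each log event.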
Option 9
object
The generate_datadog_metrics processor creates custom metrics from logs and sends them to Datadog.
Metrics can be counters, gauges, or distributions and optionally grouped by log fields.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used to reference this component in other parts of the pipeline.
include
string
A Datadog search query used to determine which logs this processor targets.
metrics
[object]
Configuration for generating individual metrics.
group_by
[string]
Optional fields used to group the metric series.
include [required]
string
Datadog filter query to match logs for metric generation.
metric_type [required]
enum
Type of metric to create.
Allowed enum values: count,gauge,distribution
name [required]
string
Name of the custom metric to be created.
value [required]
<oneOf>
Specifies how the value of the generated metric is computed.
Option 1
object
Strategy that increments a generated metric by one for each matching event.
strategy [required]
enum
Increments the metric by 1 for each matching event.
Allowed enum values: increment_by_one
Option 2
object
Strategy that increments a generated metric based on the value of a log field.
field [required]
string
Name of the log field containing the numeric value to increment the metric by.
strategy [required]
enum
Uses a numeric field in the log event as the metric increment.
Allowed enum values: increment_by_field
type [required]
enum
The processor type. Always generate_datadog_metrics.
Allowed enum values: generate_datadog_metrics
default: generate_datadog_metrics
Option 10
object
The ocsf_mapper processor transforms logs into the OCSF schema using a predefined mapping configuration.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used to reference this component in other parts of the pipeline.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
mappings [required]
[object]
A list of mapping rules to convert events to the OCSF format.
include [required]
string
A Datadog search query used to select the logs that this mapping should apply to.
mapping [required]
<oneOf>
Defines a single mapping rule for transforming logs into the OCSF schema.
Custom OCSF mapping configuration for transforming logs.
mapping [required]
[object]
A list of field mapping rules for transforming log fields to OCSF schema fields.
default
The default value to use if the source field is missing or empty.
dest [required]
string
The destination OCSF field path.
lookup
object
Lookup table configuration for mapping source values to destination values.
source
The source field path from the log event.
sources
Multiple source field paths for combined mapping.
value
A static value to use for the destination field.
metadata [required]
object
Metadata for the custom OCSF mapping.
class [required]
string
The OCSF event class name.
profiles
[string]
A list of OCSF profiles to apply.
version [required]
string
The OCSF schema version.
version [required]
int64
The version of the custom mapping configuration.
type [required]
enum
The processor type. The value should always be ocsf_mapper.
Allowed enum values: ocsf_mapper
default: ocsf_mapper
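As a sketch of the nesting above (a mappings entry containing a custom mapping, which in turn holds its own field-mapping list and metadata), here is a hypothetical OCSF mapping rule as a Python dict; the query, field paths, class name, and versions are invented examples:

```python
# Hypothetical ocsf_mapper mappings entry (queries, paths, and versions invented).
mapping_entry = {
    "include": "source:custom-auth",  # logs this mapping applies to
    "mapping": {
        "version": 1,  # version of this custom mapping configuration
        "metadata": {
            "class": "Authentication",  # OCSF event class (assumed example)
            "version": "1.1.0",         # OCSF schema version (assumed example)
            "profiles": [],
        },
        "mapping": [
            # Copy a log field to an OCSF field path.
            {"source": "usr.name", "dest": "actor.user.name"},
            # Use a static value for a destination field.
            {"value": "Logon", "dest": "activity_name"},
        ],
    },
}
```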
Option 11
object
The parse_grok processor extracts structured fields from unstructured log messages using Grok patterns.
Supported pipeline types: logs
disable_library_rules
boolean
If set to true, disables the default Grok rules provided by Datadog.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
A unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
rules [required]
[object]
The list of Grok parsing rules. If multiple matching rules are provided, they are evaluated in order. The first successful match is applied.
match_rules [required]
[object]
A list of Grok parsing rules that define how to extract fields from the source field.
Each rule must contain a name and a valid Grok pattern.
name [required]
string
The name of the rule.
rule [required]
string
The definition of the Grok rule.
source [required]
string
The name of the field in the log event to apply the Grok rules to.
support_rules
[object]
A list of Grok helper rules that can be referenced by the parsing rules.
name [required]
string
The name of the Grok helper rule.
rule [required]
string
The definition of the Grok helper rule.
type [required]
enum
The processor type. The value should always be parse_grok.
Allowed enum values: parse_grok
default: parse_grok
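The rules structure above (parsing rules per source field, each with named match rules and optional helper rules) can be sketched as a Python dict; the search query and Grok patterns are illustrative, not taken from the schema:

```python
# Hypothetical parse_grok processor config (query and patterns are invented).
parse_grok = {
    "type": "parse_grok",
    "id": "grok-1",
    "enabled": True,
    "include": "source:nginx",
    "disable_library_rules": False,  # keep Datadog's default Grok rules
    "rules": [
        {
            "source": "message",  # field the Grok rules apply to
            "match_rules": [
                # Rules are evaluated in order; the first successful match wins.
                {
                    "name": "access_log",
                    "rule": "%{ipv4:network.client.ip} %{word:http.method}",
                },
            ],
            "support_rules": [],  # helper rules referenced by match rules
        }
    ],
}
```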
Option 12
object
The parse_json processor extracts JSON from a specified field and flattens it into the event. This is useful when logs contain embedded JSON as a string.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
field [required]
string
The name of the log field that contains a JSON string.
id [required]
string
A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be parse_json.
Allowed enum values: parse_json
default: parse_json
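To illustrate what "extracts JSON from a specified field and flattens it into the event" means, here is a small Python sketch of the transformation; this is an illustration of the behavior, not the pipeline's implementation, and the field names are invented:

```python
import json

def parse_json_field(event: dict, field: str) -> dict:
    """Sketch of parse_json: decode the named field and merge its keys into the event."""
    payload = json.loads(event.pop(field))  # remove the raw string field
    return {**event, **payload}             # flatten decoded keys into the event

event = {"message": '{"user": "alice", "status": 200}', "service": "api"}
result = parse_json_field(event, "message")
# result == {"service": "api", "user": "alice", "status": 200}
```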
Option 13
object
The parse_xml processor parses XML from a specified field and extracts it into the event.
Supported pipeline types: logs
always_use_text_key
boolean
Whether to always use a text key for element content.
attr_prefix
string
The prefix to use for XML attributes in the parsed output.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
field [required]
string
The name of the log field that contains an XML string.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
include_attr
boolean
Whether to include XML attributes in the parsed output.
parse_bool
boolean
Whether to parse boolean values from strings.
parse_null
boolean
Whether to parse null values.
parse_number
boolean
Whether to parse numeric values from strings.
text_key
string
The key name to use for text content within XML elements. Must be at least 1 character if specified.
type [required]
enum
The processor type. The value should always be parse_xml.
Allowed enum values: parse_xml
default: parse_xml
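A hypothetical parse_xml configuration showing how the formatting options above fit together; the field name, prefix, and text key are illustrative choices, not defaults from the schema:

```python
# Hypothetical parse_xml processor config (field name and keys are invented).
parse_xml = {
    "type": "parse_xml",
    "id": "xml-1",
    "enabled": True,
    "include": "*",
    "field": "message",      # log field containing the XML string
    "include_attr": True,    # keep XML attributes in the parsed output
    "attr_prefix": "@",      # attributes appear under "@name" keys
    "text_key": "#text",     # element text content lands under this key
    "parse_number": True,    # numeric strings become numbers
    "parse_bool": True,      # "true"/"false" strings become booleans
}
```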
Option 14
object
The quota processor measures logging traffic for logs that match a specified filter. When the configured daily quota is met, the processor can drop events or trigger an alert.
Supported pipeline types: logs
display_name
string
The display name for a component.
drop_events
boolean
If set to true, logs that match the quota filter and are sent after the quota is exceeded are dropped. Logs that do not match the filter continue through the pipeline. Note: You can set either drop_events or overflow_action, but not both.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
ignore_when_missing_partitions
boolean
If true, the processor skips quota checks when partition fields are missing from the logs.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
limit [required]
object
The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.
enforce [required]
enum
The unit used to enforce the quota: bytes for data size or events for event count.
Allowed enum values: bytes,events
limit [required]
int64
The limit for quota enforcement.
name [required]
string
Name of the quota.
overflow_action
enum
The action to take when the quota or bucket limit is exceeded. Options:
drop: Drop the event.
no_action: Let the event pass through.
overflow_routing: Route to an overflow destination.
Allowed enum values: drop,no_action,overflow_routing
overrides
[object]
A list of alternate quota rules that apply to specific sets of events, identified by matching field values. Each override can define a custom limit.
fields [required]
[object]
A list of field matchers used to apply a specific override. If an event matches all listed key-value pairs, the corresponding override limit is enforced.
name [required]
string
The field name.
value [required]
string
The field value.
limit [required]
object
The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.
enforce [required]
enum
The unit used to enforce the quota: bytes for data size or events for event count.
Allowed enum values: bytes,events
limit [required]
int64
The limit for quota enforcement.
partition_fields
[string]
A list of fields used to segment log traffic for quota enforcement. Quotas are tracked independently by unique combinations of these field values.
too_many_buckets_action
enum
The action to take when the number of unique partition buckets exceeds the limit. Options:
drop: Drop the event.
no_action: Let the event pass through.
overflow_routing: Route to an overflow destination.
Allowed enum values: drop,no_action,overflow_routing
type [required]
enum
The processor type. The value should always be quota.
Allowed enum values: quota
default: quota
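Tying the quota fields together, here is a hypothetical configuration enforcing a 10 GiB daily quota tracked per service, with a tighter override for one noisy service. All names and limits are invented; note that drop_events and overflow_action are mutually exclusive, so only overflow_action is set:

```python
# Hypothetical quota processor config (names and limits are invented).
GiB = 2**30

quota = {
    "type": "quota",
    "id": "quota-1",
    "enabled": True,
    "include": "*",
    "name": "daily-ingest-quota",
    "limit": {"enforce": "bytes", "limit": 10 * GiB},
    "partition_fields": ["service"],         # track one quota per service value
    "ignore_when_missing_partitions": True,  # skip checks when service is absent
    "overflow_action": "drop",               # mutually exclusive with drop_events
    "overrides": [
        {
            # Tighter limit applied when all listed field matchers match.
            "fields": [{"name": "service", "value": "debug-worker"}],
            "limit": {"enforce": "bytes", "limit": 1 * GiB},
        }
    ],
}
```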
Option 15
object
The reduce processor aggregates and merges logs based on matching keys and merge strategies.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
group_by [required]
[string]
A list of fields used to group log events for merging.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
merge_strategies [required]
[object]
List of merge strategies defining how values from grouped events should be combined.
path [required]
string
The field path in the log event.
strategy [required]
enum
The merge strategy to apply.
Allowed enum values: discard,retain,sum,max,min,array,concat,concat_newline,concat_raw,shortest_array,longest_array,flat_unique
type [required]
enum
The processor type. The value should always be reduce.
Allowed enum values: reduce
default: reduce
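A hypothetical reduce configuration that merges events sharing a transaction ID, summing one field and concatenating another; the field names are invented:

```python
# Hypothetical reduce processor config (field names are invented).
reduce_proc = {
    "type": "reduce",
    "id": "reduce-1",
    "enabled": True,
    "include": "*",
    "group_by": ["transaction_id"],  # events sharing this value are merged
    "merge_strategies": [
        {"path": "duration_ms", "strategy": "sum"},         # numeric total
        {"path": "message", "strategy": "concat_newline"},  # joined text
        {"path": "status", "strategy": "retain"},           # keep a single value
    ],
}
```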
Option 16
object
The remove_fields processor deletes specified fields from logs.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[string]
A list of field names to be removed from each log event.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be remove_fields.
Allowed enum values: remove_fields
default: remove_fields
Option 17
object
The rename_fields processor changes field names.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[object]
A list of rename rules specifying which fields to rename in the event, what to rename them to, and whether to preserve the original fields.
destination [required]
string
The field name to assign the renamed value to.
preserve_source [required]
boolean
Indicates whether the original field received from the source should be kept (true) or removed (false) after renaming.
source [required]
string
The original field name in the log event that should be renamed.
id [required]
string
A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be rename_fields.
Allowed enum values: rename_fields
default: rename_fields
Option 18
object
The sample processor allows probabilistic sampling of logs at a fixed rate.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
group_by
[string]
Optional list of fields to group events by. Each group is sampled independently.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
percentage [required]
double
The percentage of logs to sample.
type [required]
enum
The processor type. The value should always be sample.
Allowed enum values: sample
default: sample
Option 19
object
The sensitive_data_scanner processor detects and optionally redacts sensitive data in log events.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
rules [required]
[object]
A list of rules for identifying and acting on sensitive data patterns.
keyword_options
object
Configuration for keywords used to reinforce sensitive data pattern detection.
keywords [required]
[string]
A list of keywords to match near the sensitive pattern.
proximity [required]
int64
Maximum number of tokens between a keyword and a sensitive value match.
name [required]
string
A name identifying the rule.
on_match [required]
<oneOf>
Defines what action to take when sensitive data is matched.
Option 1
object
Configuration for completely redacting matched sensitive data.
action [required]
enum
Action type that completely replaces the matched sensitive data with a fixed replacement string to remove all visibility.
Allowed enum values: redact
options [required]
object
Configuration for fully redacting sensitive data.
replace [required]
string
The string that replaces the matched sensitive data.
Option 2
object
Configuration for hashing matched sensitive values.
action [required]
enum
Action type that replaces the matched sensitive data with a hashed representation, preserving structure while securing content.
Allowed enum values: hash
options
object
Optional configuration for the hash action.
Option 3
object
Configuration for partially redacting matched sensitive data.
action [required]
enum
Action type that redacts part of the sensitive data while preserving a configurable number of characters, typically used for masking purposes (e.g., show last 4 digits of a credit card).
Allowed enum values: partial_redact
options [required]
object
Controls how partial redaction is applied, including character count and direction.
characters [required]
int64
The number of characters to redact from the matched value.
direction [required]
enum
Indicates whether to redact characters from the first or last part of the matched value.
Allowed enum values: first,last
pattern [required]
<oneOf>
Pattern detection configuration for identifying sensitive data using either a custom regex or a library reference.
Option 1
object
Defines a custom regex-based pattern for identifying sensitive data in logs.
options [required]
object
Options for defining a custom regex pattern.
description
string
Human-readable description providing context about a sensitive data scanner rule.
rule [required]
string
A regular expression used to detect sensitive values. Must be a valid regex.
type [required]
enum
Indicates a custom regular expression is used for matching.
Allowed enum values: custom
Option 2
object
Specifies a pattern from Datadog’s sensitive data detection library to match known sensitive data types.
options [required]
object
Options for selecting a predefined library pattern and enabling keyword support.
description
string
Human-readable description providing context about a sensitive data scanner rule.
id [required]
string
Identifier for a predefined pattern from the sensitive data scanner pattern library.
use_recommended_keywords
boolean
Whether to augment the pattern with recommended keywords (optional).
type [required]
enum
Indicates that a predefined library pattern is used.
Allowed enum values: library
scope [required]
<oneOf>
Determines which parts of the log the pattern-matching rule should be applied to.
Option 1
object
Includes only specific fields for sensitive data scanning.
options [required]
object
Fields to which the scope rule applies.
fields [required]
[string]
The list of fields to which the scope rule applies.
target [required]
enum
Applies the rule only to included fields.
Allowed enum values: include
Option 2
object
Excludes specific fields from sensitive data scanning.
options [required]
object
Fields to which the scope rule applies.
fields [required]
[string]
The list of fields to which the scope rule applies.
target [required]
enum
Excludes specific fields from processing.
Allowed enum values: exclude
Option 3
object
Applies scanning across all available fields.
target [required]
enum
Applies the rule to all fields.
Allowed enum values: all
tags [required]
[string]
Tags assigned to this rule for filtering and classification.
type [required]
enum
The processor type. The value should always be sensitive_data_scanner.
Allowed enum values: sensitive_data_scanner
default: sensitive_data_scanner
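Putting the rule pieces above together (pattern, keyword options, scope, and on_match action), here is one hypothetical rule that fully redacts a custom token format found in the message field; the regex, keywords, tag, and replacement string are all invented:

```python
# Hypothetical sensitive_data_scanner rule (regex, keywords, names invented).
rule = {
    "name": "redact-internal-api-keys",
    "tags": ["sensitive_data:api_key"],
    "pattern": {
        "type": "custom",  # custom regex rather than a library pattern
        "options": {"rule": r"key-[A-Za-z0-9]{32}"},
    },
    "keyword_options": {
        # Only match when one of these words appears within 10 tokens.
        "keywords": ["api_key", "token"],
        "proximity": 10,
    },
    "scope": {
        "target": "include",  # scan only the listed fields
        "options": {"fields": ["message"]},
    },
    "on_match": {
        "action": "redact",  # fully replace the matched value
        "options": {"replace": "[REDACTED]"},
    },
}
```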
Option 20
object
The split_array processor splits array fields into separate events based on configured rules.
Supported pipeline types: logs
arrays [required]
[object]
A list of array split configurations.
field [required]
string
The path to the array field to split.
include [required]
string
A Datadog search query used to determine which logs this array split operation targets.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets. For split_array, this should typically be *.
type [required]
enum
The processor type. The value should always be split_array.
Allowed enum values: split_array
default: split_array
Option 21
object
The throttle processor limits the number of events that pass through over a given time window.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
group_by
[string]
Optional list of fields used to group events. The threshold is applied to each group independently.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
threshold [required]
int64
The number of events allowed in a given time window. Events sent after the threshold has been reached are dropped.
type [required]
enum
The processor type. The value should always be throttle.
Allowed enum values: throttle
default: throttle
window [required]
double
The time window in seconds over which the threshold applies.
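Since threshold counts events per window, the two fields together imply a steady-state rate cap. A hypothetical configuration, with the implied rate computed alongside it (query, values, and group field are invented):

```python
# Hypothetical throttle processor config (query and values are invented).
throttle = {
    "type": "throttle",
    "id": "throttle-1",
    "enabled": True,
    "include": "*",
    "threshold": 1000,        # events allowed per window, per group
    "window": 60.0,           # window length in seconds
    "group_by": ["service"],  # each service is throttled independently
}

# The steady-state cap this implies, in events per second per group:
events_per_second = throttle["threshold"] / throttle["window"]
```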
Option 22
object
The metric_tags processor filters metrics based on their tags using Datadog tag key patterns.
Supported pipeline types: metrics
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query that determines which metrics the processor targets.
rules [required]
[object]
A list of rules for filtering metric tags.
action [required]
enum
The action to take on tags with matching keys.
Allowed enum values: include,exclude
include [required]
string
A Datadog search query used to determine which metrics this rule targets.
keys [required]
[string]
A list of tag keys to include or exclude.
mode [required]
enum
The processing mode for tag filtering.
Allowed enum values: filter
type [required]
enum
The processor type. The value should always be metric_tags.
Allowed enum values: metric_tags
default: metric_tags
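To illustrate what an include/exclude rule does to a metric's tag set, here is a small Python sketch of the filtering semantics described above; it is an illustration of the behavior, not the pipeline's implementation, and the tag values are invented:

```python
def filter_tags(tags: list[str], action: str, keys: list[str]) -> list[str]:
    """Sketch: keep or drop 'key:value' tags by key, per the rule's action."""
    if action == "include":
        keep = lambda key: key in keys      # keep only listed keys
    else:  # "exclude"
        keep = lambda key: key not in keys  # drop listed keys
    return [tag for tag in tags if keep(tag.split(":", 1)[0])]

tags = ["env:prod", "pod_name:web-123", "service:api"]
# An exclude rule on a high-cardinality key strips it:
filtered = filter_tags(tags, "exclude", ["pod_name"])
# filtered == ["env:prod", "service:api"]
```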
sources [required]
[ <oneOf>]
A list of configured data sources for the pipeline.
Option 1
object
The datadog_agent source collects logs and metrics from the Datadog Agent.
Supported pipeline types: logs, metrics
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be datadog_agent.
Allowed enum values: datadog_agent
default: datadog_agent
Option 2
object
The amazon_data_firehose source ingests logs from AWS Data Firehose.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the Firehose delivery stream address.
auth
object
AWS authentication credentials used for accessing AWS services such as S3.
If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).
assume_role
string
The Amazon Resource Name (ARN) of the role to assume.
external_id
string
A unique identifier for cross-account role assumption.
session_name
string
A session identifier used for logging and tracing the assumed role session.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be amazon_data_firehose.
Allowed enum values: amazon_data_firehose
default: amazon_data_firehose
Option 3
object
The amazon_s3 source ingests logs from an Amazon S3 bucket.
It supports AWS authentication and TLS encryption.
Supported pipeline types: logs
auth
object
AWS authentication credentials used for accessing AWS services such as S3.
If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).
assume_role
string
The Amazon Resource Name (ARN) of the role to assume.
external_id
string
A unique identifier for cross-account role assumption.
session_name
string
A session identifier used for logging and tracing the assumed role session.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
region [required]
string
AWS region where the S3 bucket resides.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be amazon_s3.
Allowed enum values: amazon_s3
default: amazon_s3
url_key
string
Name of the environment variable or secret that holds the S3 bucket URL.
Option 4
object
The fluent_bit source ingests logs from Fluent Bit.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the Fluent Bit receiver.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be fluent_bit.
Allowed enum values: fluent_bit
default: fluent_bit
Option 5
object
The fluentd source ingests logs from a Fluentd-compatible service.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the Fluent receiver.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be fluentd.
Allowed enum values: fluentd
default: fluentd
Option 6
object
The google_pubsub source ingests logs from a Google Cloud Pub/Sub subscription.
Supported pipeline types: logs
auth
object
Google Cloud credentials used to authenticate with Google Cloud services.
credentials_file [required]
string
Path to the Google Cloud service account key file.
decoding [required]
enum
The decoding format used to interpret incoming logs.
Allowed enum values: bytes,gelf,json,syslog
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
project [required]
string
The Google Cloud project ID that owns the Pub/Sub subscription.
subscription [required]
string
The Pub/Sub subscription name from which messages are consumed.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be google_pubsub.
Allowed enum values: google_pubsub
default: google_pubsub
Option 7
object
The http_client source scrapes logs from HTTP endpoints at regular intervals.
Supported pipeline types: logs
auth_strategy
enum
Optional authentication strategy for HTTP requests.
Allowed enum values: none,basic,bearer,custom
custom_key
string
Name of the environment variable or secret that holds a custom header value (used with custom auth strategies).
decoding [required]
enum
The decoding format used to interpret incoming logs.
Allowed enum values: bytes,gelf,json,syslog
endpoint_url_key
string
Name of the environment variable or secret that holds the HTTP endpoint URL to scrape.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
password_key
string
Name of the environment variable or secret that holds the password (used when auth_strategy is basic).
scrape_interval_secs
int64
The interval (in seconds) between HTTP scrape requests.
scrape_timeout_secs
int64
The timeout (in seconds) for each scrape request.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
token_key
string
Name of the environment variable or secret that holds the bearer token (used when auth_strategy is bearer).
type [required]
enum
The source type. The value should always be http_client.
Allowed enum values: http_client
default: http_client
username_key
string
Name of the environment variable or secret that holds the username (used when auth_strategy is basic).
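The *_key fields above hold the name of an environment variable or secret, not the value itself. A hypothetical http_client source using bearer authentication makes that convention visible; all identifiers below are invented:

```python
# Hypothetical http_client source config. The *_key values are the NAMES of
# environment variables/secrets; the actual URL and token are resolved at runtime.
http_client_source = {
    "type": "http_client",
    "id": "scraper-1",
    "decoding": "json",
    "auth_strategy": "bearer",
    "token_key": "SCRAPE_BEARER_TOKEN",     # env var holding the bearer token
    "endpoint_url_key": "SCRAPE_ENDPOINT",  # env var holding the URL to scrape
    "scrape_interval_secs": 60,  # one request per minute
    "scrape_timeout_secs": 10,   # per-request timeout
}
```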
Option 8
object
The http_server source collects logs over HTTP POST from external services.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the HTTP server.
auth_strategy [required]
enum
HTTP authentication method.
Allowed enum values: none,plain
decoding [required]
enum
The decoding format used to interpret incoming logs.
Allowed enum values: bytes,gelf,json,syslog
id [required]
string
Unique ID for the HTTP server source.
password_key
string
Name of the environment variable or secret that holds the password (used when auth_strategy is plain).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be http_server.
Allowed enum values: http_server
default: http_server
username_key
string
Name of the environment variable or secret that holds the username (used when auth_strategy is plain).
Option 9
object
The kafka source ingests data from Apache Kafka topics.
Supported pipeline types: logs
bootstrap_servers_key
string
Name of the environment variable or secret that holds the Kafka bootstrap servers list.
group_id [required]
string
Consumer group ID used by the Kafka client.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
librdkafka_options
[object]
Optional list of advanced Kafka client configuration options, defined as key-value pairs.
name [required]
string
The name of the librdkafka configuration option to set.
value [required]
string
The value assigned to the specified librdkafka configuration option.
sasl
object
Specifies the SASL mechanism for authenticating with a Kafka cluster.
mechanism
enum
SASL mechanism used for Kafka authentication.
Allowed enum values: PLAIN,SCRAM-SHA-256,SCRAM-SHA-512
password_key
string
Name of the environment variable or secret that holds the SASL password.
username_key
string
Name of the environment variable or secret that holds the SASL username.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
topics [required]
[string]
A list of Kafka topic names to subscribe to. The source ingests messages from each topic specified.
type [required]
enum
The source type. The value should always be kafka.
Allowed enum values: kafka
default: kafka
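As a sketch of how the kafka source fields above fit together, a component object might look like the following. The broker env var name, topic names, and ids are hypothetical; note that the *_key fields name environment variables or secrets, not the values themselves.

```typescript
// Illustrative kafka source component for a logs pipeline.
// Field names follow the schema above; all values are hypothetical.
const kafkaSource = {
  id: "kafka-source-1",
  type: "kafka" as const,
  group_id: "consumer-group-0",
  topics: ["app-logs", "audit-logs"],
  // Name of the env var that holds the bootstrap servers list,
  // e.g. KAFKA_BOOTSTRAP_SERVERS=broker-1:9092,broker-2:9092
  bootstrap_servers_key: "KAFKA_BOOTSTRAP_SERVERS",
  sasl: {
    mechanism: "SCRAM-SHA-512" as const,
    username_key: "KAFKA_SASL_USERNAME",
    password_key: "KAFKA_SASL_PASSWORD",
  },
  // Advanced librdkafka options are name/value string pairs.
  librdkafka_options: [{ name: "fetch.message.max.bytes", value: "1048576" }],
};
```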
Option 10
object
The logstash source ingests logs from a Logstash forwarder.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the Logstash receiver.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be logstash.
Allowed enum values: logstash
default: logstash
Option 11
object
The rsyslog source listens for logs over TCP or UDP from an rsyslog server using the syslog protocol.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the syslog receiver.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
mode [required]
enum
Protocol used by the rsyslog source to receive messages.
Allowed enum values: tcp,udp
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be rsyslog.
Allowed enum values: rsyslog
default: rsyslog
Option 12
object
The socket source ingests logs over TCP or UDP.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the socket.
framing [required]
<oneOf>
Framing method configuration for the socket source.
Option 1
object
Byte frames which are delimited by a newline character.
method [required]
enum
Byte frames which are delimited by a newline character.
Allowed enum values: newline_delimited
Option 2
object
Byte frames are passed through as-is according to the underlying I/O boundaries (for example, split between messages or stream segments).
method [required]
enum
Byte frames are passed through as-is according to the underlying I/O boundaries (for example, split between messages or stream segments).
Allowed enum values: bytes
Option 3
object
Byte frames which are delimited by a chosen character.
delimiter [required]
string
A single ASCII character used to delimit events.
method [required]
enum
Byte frames which are delimited by a chosen character.
Allowed enum values: character_delimited
Option 4
object
Byte frames according to the octet counting format as per RFC6587.
method [required]
enum
Byte frames according to the octet counting format as per RFC6587.
Allowed enum values: octet_counting
Option 5
object
Byte frames which are chunked GELF messages.
method [required]
enum
Byte frames which are chunked GELF messages.
Allowed enum values: chunked_gelf
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
mode [required]
enum
Protocol used to receive logs.
Allowed enum values: tcp,udp
tls
object
TLS configuration. Relevant only when mode is tcp.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be socket.
Allowed enum values: socket
default: socket
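Because framing is a oneOf, exactly one of the five method objects is supplied per socket source. A hedged sketch with hypothetical values:

```typescript
// Illustrative socket source: a TCP listener using character-delimited framing.
// Only the field names come from the schema; the values are hypothetical.
const socketSource = {
  id: "socket-source-1",
  type: "socket" as const,
  mode: "tcp" as const,
  // Env var name that holds the listen address, e.g. SOCKET_ADDR=0.0.0.0:9000
  address_key: "SOCKET_ADDR",
  // framing is a oneOf: supply exactly one method object.
  // The delimiter must be a single ASCII character.
  framing: { method: "character_delimited" as const, delimiter: "|" },
};

// Newline-delimited framing needs no extra fields:
const newlineFraming = { method: "newline_delimited" as const };
```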
Option 13
object
The splunk_hec source implements the Splunk HTTP Event Collector (HEC) API.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the HEC API.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. Always splunk_hec.
Allowed enum values: splunk_hec
default: splunk_hec
Option 14
object
The splunk_tcp source receives logs from a Splunk Universal Forwarder over TCP.
TLS is supported for secure transmission.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the Splunk TCP receiver.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. Always splunk_tcp.
Allowed enum values: splunk_tcp
default: splunk_tcp
Option 15
object
The sumo_logic source receives logs from Sumo Logic collectors.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the Sumo Logic receiver.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
type [required]
enum
The source type. The value should always be sumo_logic.
Allowed enum values: sumo_logic
default: sumo_logic
Option 16
object
The syslog_ng source listens for logs over TCP or UDP from a syslog-ng server using the syslog protocol.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the syslog-ng receiver.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
mode [required]
enum
Protocol used by the syslog-ng source to receive messages.
Allowed enum values: tcp,udp
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be syslog_ng.
Allowed enum values: syslog_ng
default: syslog_ng
Option 17
object
The opentelemetry source receives telemetry data using the OpenTelemetry Protocol (OTLP) over gRPC and HTTP.
Supported pipeline types: logs
grpc_address_key
string
Environment variable name containing the gRPC server address for receiving OTLP data. Must be a valid environment variable name (alphanumeric characters and underscores only).
http_address_key
string
Environment variable name containing the HTTP server address for receiving OTLP data. Must be a valid environment variable name (alphanumeric characters and underscores only).
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be opentelemetry.
Allowed enum values: opentelemetry
default: opentelemetry
use_legacy_search_syntax
boolean
Set to true to continue using the legacy search syntax while migrating filter queries. After migrating all queries to the new syntax, set to false.
The legacy syntax is deprecated and will eventually be removed.
Requires Observability Pipelines Worker 2.11 or later.
See Upgrade Your Filter Queries to the New Search Syntax for more information.
name [required]
string
Name of the pipeline.
id [required]
string
Unique identifier for the pipeline.
type [required]
string
The resource type identifier. For pipeline resources, this should always be set to pipelines.
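Throughout this schema, components are wired together by id: each component has a unique id, and a downstream component's inputs array lists the ids of the upstream components that feed it. A minimal sketch of that wiring (hypothetical ids; the surrounding resource envelope is omitted):

```typescript
// Sketch of component wiring: a destination consumes a source's output by
// listing the source's id in its inputs array. Ids are hypothetical.
const source = { id: "my-source", type: "kafka" };
const destination = {
  id: "my-destination",
  type: "datadog_logs",
  inputs: [source.id], // output of "my-source" is the input of this component
};
```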
DD_SITE="<DD_SITE>" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" cargo run
(where <DD_SITE> is one of: datadoghq.com, us3.datadoghq.com, us5.datadoghq.com, datadoghq.eu, ap1.datadoghq.com, ap2.datadoghq.com, ddog-gov.com)
/**
 * List pipelines returns "OK" response
 */
import { client, v2 } from "@datadog/datadog-api-client";

const configuration = client.createConfiguration();
const apiInstance = new v2.ObservabilityPipelinesApi(configuration);

apiInstance
  .listPipelines()
  .then((data: v2.ListPipelinesResponse) => {
    console.log("API called successfully. Returned data: " + JSON.stringify(data));
  })
  .catch((error: any) => console.error(error));
Name of the environment variable or secret that holds a custom header value (used with custom auth strategies).
encoding [required]
enum
Encoding format for log events.
Allowed enum values: json
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
password_key
string
Name of the environment variable or secret that holds the password (used when auth_strategy is basic).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
token_key
string
Name of the environment variable or secret that holds the bearer token (used when auth_strategy is bearer).
type [required]
enum
The destination type. The value should always be http_client.
Allowed enum values: http_client
default: http_client
uri_key
string
Name of the environment variable or secret that holds the HTTP endpoint URI.
username_key
string
Name of the environment variable or secret that holds the username (used when auth_strategy is basic).
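Putting the http_client destination fields together, a bearer-token setup might look like the sketch below. The ids, env var names, and file paths are hypothetical; the auth_strategy selector field the descriptions refer to is defined elsewhere in the schema and is omitted here.

```typescript
// Illustrative http_client destination using a bearer token.
// The *_key fields name env vars or secrets that hold the actual values.
const httpClientDestination = {
  id: "http-dest-1",
  type: "http_client" as const,
  inputs: ["my-source"],            // hypothetical upstream component id
  encoding: "json" as const,
  uri_key: "HTTP_ENDPOINT_URI",     // env var holding the endpoint URI
  token_key: "HTTP_BEARER_TOKEN",   // used when auth_strategy is bearer
  tls: {
    ca_file: "/etc/ssl/certs/ca.pem",
    crt_file: "/etc/ssl/certs/client.crt",   // required within tls
    key_file: "/etc/ssl/private/client.key", // for mutual TLS
  },
};
```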
Option 2
object
The amazon_opensearch destination writes logs to Amazon OpenSearch.
Supported pipeline types: logs
auth [required]
object
Authentication settings for the Amazon OpenSearch destination.
The strategy field determines whether basic or AWS-based authentication is used.
assume_role
string
The ARN of the role to assume (used with aws strategy).
aws_region
string
AWS region.
external_id
string
External ID for the assumed role (used with aws strategy).
session_name
string
Session name for the assumed role (used with aws strategy).
strategy [required]
enum
The authentication strategy to use.
Allowed enum values: basic,aws
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
bulk_index
string
The index to write logs to.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The destination type. The value should always be amazon_opensearch.
Allowed enum values: amazon_opensearch
default: amazon_opensearch
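The amazon_opensearch fields above can be sketched as follows, using the aws auth strategy and a disk buffer. The ARN, region, index name, and ids are hypothetical.

```typescript
// Illustrative amazon_opensearch destination with AWS role assumption and a
// disk buffer. All values are hypothetical placeholders.
const openSearchDestination = {
  id: "opensearch-dest-1",
  type: "amazon_opensearch" as const,
  inputs: ["my-source"],
  bulk_index: "logs-write",
  auth: {
    strategy: "aws" as const, // the other allowed value is "basic"
    aws_region: "us-east-1",
    assume_role: "arn:aws:iam::123456789012:role/opensearch-writer",
    external_id: "my-external-id",
    session_name: "op-worker",
  },
  // buffer is a oneOf: here, a disk buffer capped at 512 MiB that blocks
  // (stops accepting new events) when full.
  buffer: {
    type: "disk" as const,
    max_size: 512 * 1024 * 1024,
    when_full: "block" as const,
  },
};
```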
Option 3
object
The amazon_s3 destination sends your logs in Datadog-rehydratable format to an Amazon S3 bucket for archiving.
Supported pipeline types: logs
auth
object
AWS authentication credentials used for accessing AWS services such as S3.
If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).
assume_role
string
The Amazon Resource Name (ARN) of the role to assume.
external_id
string
A unique identifier for cross-account role assumption.
session_name
string
A session identifier used for logging and tracing the assumed role session.
bucket [required]
string
S3 bucket name.
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
id [required]
string
Unique identifier for the destination component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The destination type. Always amazon_s3.
Allowed enum values: amazon_s3
default: amazon_s3
Option 4
object
The amazon_security_lake destination sends your logs to Amazon Security Lake.
Supported pipeline types: logs
auth
object
AWS authentication credentials used for accessing AWS services such as S3.
If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).
assume_role
string
The Amazon Resource Name (ARN) of the role to assume.
external_id
string
A unique identifier for cross-account role assumption.
session_name
string
A session identifier used for logging and tracing the assumed role session.
bucket [required]
string
Name of the Amazon S3 bucket in Security Lake (3-63 characters).
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
custom_source_name [required]
string
Custom source name for the logs in Security Lake.
id [required]
string
Unique identifier for the destination component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
region [required]
string
AWS region of the S3 bucket.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The destination type. Always amazon_security_lake.
Allowed enum values: amazon_security_lake
default: amazon_security_lake
Option 5
object
The azure_storage destination forwards logs to an Azure Blob Storage container.
Supported pipeline types: logs
blob_prefix
string
Optional prefix for blobs written to the container.
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
connection_string_key
string
Name of the environment variable or secret that holds the Azure Storage connection string.
container_name [required]
string
The name of the Azure Blob Storage container to store logs in.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The destination type. The value should always be azure_storage.
Allowed enum values: azure_storage
default: azure_storage
Option 6
object
The cloud_prem destination sends logs to Datadog CloudPrem.
Supported pipeline types: logs
endpoint_url_key
string
Name of the environment variable or secret that holds the CloudPrem endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The destination type. The value should always be cloud_prem.
Allowed enum values: cloud_prem
default: cloud_prem
Option 7
object
The crowdstrike_next_gen_siem destination forwards logs to CrowdStrike Next Gen SIEM.
Supported pipeline types: logs
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
compression
object
Compression configuration for log events.
algorithm [required]
enum
Compression algorithm for log events.
Allowed enum values: gzip,zlib
level
int64
Compression level.
encoding [required]
enum
Encoding format for log events.
Allowed enum values: json,raw_message
endpoint_url_key
string
Name of the environment variable or secret that holds the CrowdStrike endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
token_key
string
Name of the environment variable or secret that holds the CrowdStrike API token.
type [required]
enum
The destination type. The value should always be crowdstrike_next_gen_siem.
Allowed enum values: crowdstrike_next_gen_siem
default: crowdstrike_next_gen_siem
Option 8
object
The datadog_logs destination forwards logs to Datadog Log Management.
Supported pipeline types: logs
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
routes
[object]
A list of routing rules that forward matching logs to Datadog using dedicated API keys.
api_key_key
string
Name of the environment variable or secret that stores the Datadog API key used by this route.
include
string
A Datadog search query that determines which logs are forwarded using this route.
route_id
string
Unique identifier for this route within the destination.
site
string
Datadog site where matching logs are sent (for example, us1).
type [required]
enum
The destination type. The value should always be datadog_logs.
Allowed enum values: datadog_logs
default: datadog_logs
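The routes list above lets one datadog_logs destination forward different subsets of logs with dedicated API keys. A sketch with hypothetical queries, route ids, and env var names:

```typescript
// Illustrative datadog_logs destination with two routes. Each route matches
// logs via a Datadog search query and names the env var holding its API key.
// All values are hypothetical.
const datadogLogsDestination = {
  id: "dd-logs-dest-1",
  type: "datadog_logs" as const,
  inputs: ["my-source"],
  routes: [
    {
      route_id: "route-prod",
      include: "env:prod",            // Datadog search query
      api_key_key: "DD_API_KEY_PROD", // env var name, not the key itself
      site: "us1",
    },
    {
      route_id: "route-staging",
      include: "env:staging",
      api_key_key: "DD_API_KEY_STAGING",
      site: "us1",
    },
  ],
};
```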
Option 9
object
The elasticsearch destination writes logs to an Elasticsearch cluster.
Supported pipeline types: logs
api_version
enum
The Elasticsearch API version to use. Set to auto to auto-detect.
Allowed enum values: auto,v6,v7,v8
auth
object
Authentication settings for the Elasticsearch destination.
When strategy is basic, use username_key and password_key to reference credentials stored in environment variables or secrets.
password_key
string
Name of the environment variable or secret that holds the Elasticsearch password (used when strategy is basic).
strategy [required]
enum
The authentication strategy to use.
Allowed enum values: basic,aws
username_key
string
Name of the environment variable or secret that holds the Elasticsearch username (used when strategy is basic).
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
bulk_index
string
The index to write logs to in Elasticsearch.
data_stream
object
Configuration options for writing to Elasticsearch Data Streams instead of a fixed index.
dataset
string
The data stream dataset for your logs. This groups logs by their source or application.
dtype
string
The data stream type for your logs. This determines how logs are categorized within the data stream.
namespace
string
The data stream namespace for your logs. This separates logs into different environments or domains.
endpoint_url_key
string
Name of the environment variable or secret that holds the Elasticsearch endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The destination type. The value should always be elasticsearch.
Allowed enum values: elasticsearch
default: elasticsearch
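Assembled from the fields above, a minimal elasticsearch destination entry might look like the following Python dict. The component IDs, environment-variable name, and index name are hypothetical placeholders:

```python
# Hypothetical elasticsearch destination config, assembled from the fields
# documented above. IDs, the secret name, and the index are placeholders.
elasticsearch_destination = {
    "id": "es-dest-1",                      # required: unique component ID
    "type": "elasticsearch",                # required: always "elasticsearch"
    "inputs": ["parse-logs-group"],         # required: upstream component IDs
    "endpoint_url_key": "ES_ENDPOINT_URL",  # env var / secret holding the URL
    "bulk_index": "app-logs",               # fixed index to write to
    # Alternatively, write to a data stream instead of a fixed index:
    # "data_stream": {"dtype": "logs", "dataset": "myapp", "namespace": "prod"},
}

required = {"id", "type", "inputs"}
```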
Option 10
object
The google_chronicle destination sends logs to Google Chronicle.
Supported pipeline types: logs
auth
object
Google Cloud credentials used to authenticate with Google Chronicle.
credentials_file [required]
string
Path to the Google Cloud service account key file.
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
customer_id [required]
string
The Google Chronicle customer ID.
encoding
enum
The encoding format for the logs sent to Chronicle.
Allowed enum values: json,raw_message
endpoint_url_key
string
Name of the environment variable or secret that holds the Google Chronicle endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
log_type
string
The log type metadata associated with the Chronicle destination.
type [required]
enum
The destination type. The value should always be google_chronicle.
Allowed enum values: google_chronicle
default: google_chronicle
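Combining the required and optional fields above, a google_chronicle destination could be sketched as below. The customer ID, credentials path, and log type are hypothetical placeholders:

```python
# Hypothetical google_chronicle destination, per the fields above.
# The customer ID, credentials path, and log_type are placeholders.
chronicle_destination = {
    "id": "chronicle-dest",
    "type": "google_chronicle",
    "inputs": ["filter-security-logs"],
    "customer_id": "my-chronicle-customer-id",          # required
    "auth": {"credentials_file": "/secrets/gcp-sa.json"},
    "encoding": "raw_message",                          # or "json"
    "log_type": "EXAMPLE_LOG_TYPE",                     # Chronicle log type metadata
}
```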
Option 11
object
The google_cloud_storage destination stores logs in a Google Cloud Storage (GCS) bucket.
It requires a bucket name, Google Cloud authentication, and metadata fields.
Supported pipeline types: logs
acl
enum
Access control list setting for objects written to the bucket.
Allowed enum values: private,project-private,public-read,authenticated-read,bucket-owner-read,bucket-owner-full-control
auth
object
Google Cloud credentials used to authenticate with Google Cloud Storage.
credentials_file [required]
string
Path to the Google Cloud service account key file.
bucket [required]
string
Name of the GCS bucket.
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
id [required]
string
Unique identifier for the destination component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
key_prefix
string
Optional prefix for object keys within the GCS bucket.
metadata
[object]
Custom metadata to attach to each object uploaded to the GCS bucket.
name [required]
string
The metadata key.
value [required]
string
The metadata value.
storage_class [required]
enum
Storage class used for objects stored in GCS.
Allowed enum values: STANDARD,NEARLINE,COLDLINE,ARCHIVE
type [required]
enum
The destination type. Always google_cloud_storage.
Allowed enum values: google_cloud_storage
default: google_cloud_storage
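A google_cloud_storage destination built from the fields above might look like the following. Bucket name, prefix, credentials path, and metadata values are placeholders; the enum values are those listed:

```python
# Hypothetical google_cloud_storage destination. The bucket name, prefix,
# credentials path, and metadata values are placeholders.
gcs_destination = {
    "id": "gcs-archive",
    "type": "google_cloud_storage",
    "inputs": ["sample-logs"],
    "bucket": "my-log-archive",                      # required
    "storage_class": "NEARLINE",                     # required enum
    "acl": "project-private",
    "key_prefix": "pipelines/prod/",                 # optional object key prefix
    "auth": {"credentials_file": "/secrets/gcp-sa.json"},
    # Custom object metadata, as name/value pairs:
    "metadata": [{"name": "team", "value": "platform"}],
}
```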
Option 12
object
The google_pubsub destination publishes logs to a Google Cloud Pub/Sub topic.
Supported pipeline types: logs
auth
object
Google Cloud credentials used to authenticate with Google Cloud Pub/Sub.
credentials_file [required]
string
Path to the Google Cloud service account key file.
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
encoding [required]
enum
Encoding format for log events.
Allowed enum values: json,raw_message
endpoint_url_key
string
Name of the environment variable or secret that holds the Google Cloud Pub/Sub endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
project [required]
string
The Google Cloud project ID that owns the Pub/Sub topic.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
topic [required]
string
The Pub/Sub topic name to publish logs to.
type [required]
enum
The destination type. The value should always be google_pubsub.
Allowed enum values: google_pubsub
default: google_pubsub
Option 13
object
The kafka destination sends logs to Apache Kafka topics.
Supported pipeline types: logs
bootstrap_servers_key
string
Name of the environment variable or secret that holds the Kafka bootstrap servers list.
compression
enum
Compression codec for Kafka messages.
Allowed enum values: none,gzip,snappy,lz4,zstd
encoding [required]
enum
Encoding format for log events.
Allowed enum values: json,raw_message
headers_key
string
The field name to use for Kafka message headers.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
key_field
string
The field name to use as the Kafka message key.
librdkafka_options
[object]
Optional list of advanced Kafka producer configuration options, defined as key-value pairs.
name [required]
string
The name of the librdkafka configuration option to set.
value [required]
string
The value assigned to the specified librdkafka configuration option.
message_timeout_ms
int64
Maximum time in milliseconds to wait for message delivery confirmation.
rate_limit_duration_secs
int64
Duration in seconds for the rate limit window.
rate_limit_num
int64
Maximum number of messages allowed per rate limit duration.
sasl
object
Specifies the SASL mechanism for authenticating with a Kafka cluster.
mechanism
enum
SASL mechanism used for Kafka authentication.
Allowed enum values: PLAIN,SCRAM-SHA-256,SCRAM-SHA-512
password_key
string
Name of the environment variable or secret that holds the SASL password.
username_key
string
Name of the environment variable or secret that holds the SASL username.
socket_timeout_ms
int64
Socket timeout in milliseconds for network requests.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
topic [required]
string
The Kafka topic name to publish logs to.
type [required]
enum
The destination type. The value should always be kafka.
Allowed enum values: kafka
default: kafka
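The kafka destination has the most moving parts of the destinations listed here: SASL authentication, TLS, and raw librdkafka producer options. A hedged sketch, with secret names, topic, and tuning values as placeholders:

```python
# Hypothetical kafka destination showing SASL auth and librdkafka
# pass-through options. Secret names, topic, and values are placeholders.
kafka_destination = {
    "id": "kafka-out",
    "type": "kafka",
    "inputs": ["enrich-logs"],
    "topic": "pipeline-logs",                       # required
    "encoding": "json",                             # required: json or raw_message
    "bootstrap_servers_key": "KAFKA_BOOTSTRAP",     # env var with broker list
    "compression": "zstd",
    "key_field": "service",                         # field used as message key
    "sasl": {
        "mechanism": "SCRAM-SHA-512",
        "username_key": "KAFKA_USER",
        "password_key": "KAFKA_PASS",
    },
    # Advanced producer tuning, passed through as name/value string pairs:
    "librdkafka_options": [{"name": "queue.buffering.max.ms", "value": "50"}],
}
```

Note that librdkafka option values are strings even when they represent numbers, since each pair is a name/value string object.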
Option 14
object
The microsoft_sentinel destination forwards logs to Microsoft Sentinel.
Supported pipeline types: logs
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
client_id [required]
string
Azure AD client ID used for authentication.
client_secret_key
string
Name of the environment variable or secret that holds the Azure AD client secret.
dce_uri_key
string
Name of the environment variable or secret that holds the Data Collection Endpoint (DCE) URI.
dcr_immutable_id [required]
string
The immutable ID of the Data Collection Rule (DCR).
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
table [required]
string
The name of the Log Analytics table where logs are sent.
tenant_id [required]
string
Azure AD tenant ID.
type [required]
enum
The destination type. The value should always be microsoft_sentinel.
Allowed enum values: microsoft_sentinel
default: microsoft_sentinel
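Putting the required Azure fields together, a microsoft_sentinel destination might be configured as follows. The GUIDs, table name, and secret names are hypothetical placeholders:

```python
# Hypothetical microsoft_sentinel destination. The GUIDs, DCR ID, table
# name, and secret names are placeholders for a real Azure setup.
sentinel_destination = {
    "id": "sentinel-out",
    "type": "microsoft_sentinel",
    "inputs": ["redact-pii"],
    "client_id": "00000000-0000-0000-0000-000000000000",   # Azure AD app (required)
    "tenant_id": "11111111-1111-1111-1111-111111111111",   # required
    "dcr_immutable_id": "dcr-abc123",                      # required DCR immutable ID
    "table": "Custom_AppLogs_CL",                          # required Log Analytics table
    "client_secret_key": "AZURE_CLIENT_SECRET",            # env var / secret name
    "dce_uri_key": "SENTINEL_DCE_URI",                     # env var / secret name
}
```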
Option 15
object
The new_relic destination sends logs to the New Relic platform.
Supported pipeline types: logs
account_id_key
string
Name of the environment variable or secret that holds the New Relic account ID.
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
license_key_key
string
Name of the environment variable or secret that holds the New Relic license key.
region [required]
enum
The New Relic region.
Allowed enum values: us,eu
type [required]
enum
The destination type. The value should always be new_relic.
Allowed enum values: new_relic
default: new_relic
Option 16
object
The opensearch destination writes logs to an OpenSearch cluster.
Supported pipeline types: logs
auth
object
Authentication settings for the OpenSearch destination.
When strategy is basic, use username_key and password_key to reference credentials stored in environment variables or secrets.
password_key
string
Name of the environment variable or secret that holds the OpenSearch password (used when strategy is basic).
strategy [required]
enum
The authentication strategy to use.
Allowed enum values: basic,aws
username_key
string
Name of the environment variable or secret that holds the OpenSearch username (used when strategy is basic).
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
bulk_index
string
The index to write logs to.
data_stream
object
Configuration options for writing to OpenSearch Data Streams instead of a fixed index.
dataset
string
The data stream dataset for your logs. This groups logs by their source or application.
dtype
string
The data stream type for your logs. This determines how logs are categorized within the data stream.
namespace
string
The data stream namespace for your logs. This separates logs into different environments or domains.
endpoint_url_key
string
Name of the environment variable or secret that holds the OpenSearch endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The destination type. The value should always be opensearch.
Allowed enum values: opensearch
default: opensearch
Option 17
object
The rsyslog destination forwards logs to an external rsyslog server over TCP or UDP using the syslog protocol.
Supported pipeline types: logs
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
endpoint_url_key
string
Name of the environment variable or secret that holds the syslog server endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
keepalive
int64
Optional socket keepalive duration in milliseconds.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The destination type. The value should always be rsyslog.
Allowed enum values: rsyslog
default: rsyslog
Option 18
object
The sentinel_one destination sends logs to SentinelOne.
Supported pipeline types: logs
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
region [required]
enum
The SentinelOne region to send logs to.
Allowed enum values: us,eu,ca,data_set_us
token_key
string
Name of the environment variable or secret that holds the SentinelOne API token.
type [required]
enum
The destination type. The value should always be sentinel_one.
Allowed enum values: sentinel_one
default: sentinel_one
Option 19
object
The socket destination sends logs over TCP or UDP to a remote server.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the socket address (host:port).
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
encoding [required]
enum
Encoding format for log events.
Allowed enum values: json,raw_message
framing [required]
<oneOf>
Framing method configuration.
Option 1
object
Each log event is delimited by a newline character.
method [required]
enum
The framing method. The value should always be newline_delimited.
Allowed enum values: newline_delimited
Option 2
object
Event data is not delimited at all.
method [required]
enum
The framing method. The value should always be bytes.
Allowed enum values: bytes
Option 3
object
Each log event is separated using the specified delimiter character.
delimiter [required]
string
A single ASCII character used as a delimiter.
method [required]
enum
The framing method. The value should always be character_delimited.
Allowed enum values: character_delimited
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
mode [required]
enum
Protocol used to send logs.
Allowed enum values: tcp,udp
tls
object
TLS configuration. Relevant only when mode is tcp.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The destination type. The value should always be socket.
Allowed enum values: socket
default: socket
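The socket destination's framing field is itself a oneOf, distinguished by method. A sketch of a TCP socket destination using character-delimited framing; the secret name, certificates, and delimiter choice are placeholders:

```python
# Hypothetical socket destination over TCP with character-delimited framing.
# The address secret name, cert paths, and delimiter are placeholders.
socket_destination = {
    "id": "socket-out",
    "type": "socket",
    "inputs": ["route-logs"],
    "mode": "tcp",                         # required: "tcp" or "udp"
    "encoding": "raw_message",             # required: "json" or "raw_message"
    "address_key": "SOCKET_ADDRESS",       # env var / secret holding host:port
    # framing is a oneOf; variants are distinguished by "method". The
    # character_delimited variant requires a single ASCII delimiter:
    "framing": {"method": "character_delimited", "delimiter": "\x1e"},
    # TLS is relevant only when mode is "tcp":
    "tls": {"crt_file": "/certs/client.crt", "key_file": "/certs/client.key"},
}
```

The other two framing variants need only a method: {"method": "newline_delimited"} or {"method": "bytes"}.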
Option 20
object
The splunk_hec destination forwards logs to Splunk using the HTTP Event Collector (HEC).
Supported pipeline types: logs
auto_extract_timestamp
boolean
If true, Splunk tries to extract timestamps from incoming log events.
If false, Splunk assigns the time the event was received.
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
encoding
enum
Encoding format for log events.
Allowed enum values: json,raw_message
endpoint_url_key
string
Name of the environment variable or secret that holds the Splunk HEC endpoint URL.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
index
string
Optional name of the Splunk index where logs are written.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
sourcetype
string
The Splunk sourcetype to assign to log events.
token_key
string
Name of the environment variable or secret that holds the Splunk HEC token.
type [required]
enum
The destination type. Always splunk_hec.
Allowed enum values: splunk_hec
default: splunk_hec
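From the fields above, a splunk_hec destination could be sketched as follows. The index, sourcetype, and secret names are hypothetical:

```python
# Hypothetical splunk_hec destination. The index, sourcetype, and secret
# names are placeholders.
splunk_destination = {
    "id": "splunk-out",
    "type": "splunk_hec",
    "inputs": ["filter-prod"],
    "endpoint_url_key": "SPLUNK_HEC_URL",   # env var / secret with HEC endpoint
    "token_key": "SPLUNK_HEC_TOKEN",        # env var / secret with HEC token
    "index": "app_logs",                    # optional Splunk index
    "sourcetype": "pipeline:json",          # sourcetype assigned to events
    "auto_extract_timestamp": True,         # let Splunk parse event timestamps
    "encoding": "json",
}
```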
Option 21
object
The sumo_logic destination forwards logs to Sumo Logic.
Supported pipeline types: logs
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
encoding
enum
The output encoding format.
Allowed enum values: json,raw_message,logfmt
endpoint_url_key
string
Name of the environment variable or secret that holds the Sumo Logic HTTP endpoint URL.
header_custom_fields
[object]
A list of custom headers to include in the request to Sumo Logic.
name [required]
string
The header field name.
value [required]
string
The header field value.
header_host_name
string
Optional override for the host name header.
header_source_category
string
Optional override for the source category header.
header_source_name
string
Optional override for the source name header.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The destination type. The value should always be sumo_logic.
Allowed enum values: sumo_logic
default: sumo_logic
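The sumo_logic destination adds per-request header overrides on top of the usual fields. A sketch with placeholder names and values:

```python
# Hypothetical sumo_logic destination showing the header override fields.
# The secret name, category, and header values are placeholders.
sumo_destination = {
    "id": "sumo-out",
    "type": "sumo_logic",
    "inputs": ["sample-logs"],
    "endpoint_url_key": "SUMO_HTTP_URL",     # env var / secret with endpoint
    "encoding": "logfmt",                    # json, raw_message, or logfmt
    "header_source_category": "prod/pipelines",  # source category override
    # Arbitrary extra request headers, as name/value pairs:
    "header_custom_fields": [{"name": "X-Team", "value": "platform"}],
}
```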
Option 22
object
The syslog_ng destination forwards logs to an external syslog-ng server over TCP or UDP using the syslog protocol.
Supported pipeline types: logs
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
endpoint_url_key
string
Name of the environment variable or secret that holds the syslog-ng server endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
keepalive
int64
Optional socket keepalive duration in milliseconds.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The destination type. The value should always be syslog_ng.
Allowed enum values: syslog_ng
default: syslog_ng
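Putting the syslog_ng destination fields together, a configuration might look like the sketch below. The component id, input id, secret name, and file paths are hypothetical.

```python
# Hedged sketch of a syslog_ng destination. The id, inputs, env var name,
# and certificate paths are hypothetical placeholders.
syslog_ng_destination = {
    "id": "syslog-ng-dest",               # hypothetical component id
    "type": "syslog_ng",
    "inputs": ["parse-logs-processor"],   # hypothetical upstream component id
    "endpoint_url_key": "SYSLOG_NG_URL",  # env var/secret holding the endpoint URL
    "keepalive": 60_000,                  # optional keepalive, in milliseconds
    "tls": {
        "crt_file": "/etc/certs/client.crt",  # required within tls
        "ca_file": "/etc/certs/ca.crt",
        "key_file": "/etc/certs/client.key",
    },
}
```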
Option 23
object
The datadog_metrics destination forwards metrics to Datadog.
Supported pipeline types: metrics
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The destination type. The value should always be datadog_metrics.
Allowed enum values: datadog_metrics
default: datadog_metrics
pipeline_type
enum
The type of data being ingested. Defaults to logs if not specified.
Allowed enum values: logs,metrics
default: logs
processor_groups
[object]
A list of processor groups that transform or enrich log data.
display_name
string
The display name for a component.
enabled [required]
boolean
Whether this processor group is enabled.
id [required]
string
The unique identifier for the processor group.
include [required]
string
Conditional expression for when this processor group should execute.
inputs [required]
[string]
A list of IDs for components whose output is used as the input for this processor group.
processors [required]
[ <oneOf>]
Processors applied sequentially within this group. Events flow through each processor in order.
Option 1
object
The filter processor allows conditional processing of logs/metrics based on a Datadog search query. Logs/metrics that match the include query are passed through; others are discarded.
Supported pipeline types: logs, metrics
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs/metrics should pass through the filter. Logs/metrics that match this query continue to downstream components; others are dropped.
type [required]
enum
The processor type. The value should always be filter.
Allowed enum values: filter
default: filter
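A filter processor configuration built from the fields above might look like this sketch; the id and search query are assumed examples.

```python
# Hedged sketch of a filter processor. The id and query are hypothetical.
filter_processor = {
    "id": "drop-debug-logs",   # hypothetical component id
    "type": "filter",
    "enabled": True,
    # Only events matching this query pass through; the rest are dropped.
    "include": "status:error OR status:warn",  # assumed Datadog search query
}
```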
Option 2
object
The add_env_vars processor adds environment variable values to log events.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used to reference this processor in the pipeline.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be add_env_vars.
Allowed enum values: add_env_vars
default: add_env_vars
variables [required]
[object]
A list of environment variable mappings to apply to log fields.
field [required]
string
The target field in the log event.
name [required]
string
The name of the environment variable to read.
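An add_env_vars processor maps environment variables into log fields via the `variables` list; the sketch below uses hypothetical names.

```python
# Hedged sketch of an add_env_vars processor. The id and the
# field/variable names are hypothetical.
add_env_vars_processor = {
    "id": "tag-environment",   # hypothetical component id
    "type": "add_env_vars",
    "enabled": True,
    "include": "*",            # target all logs
    "variables": [
        # Read DD_ENV from the environment and write it to the "env" field.
        {"field": "env", "name": "DD_ENV"},
    ],
}
```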
Option 3
object
The add_fields processor adds static key-value fields to logs.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[object]
A list of static fields (key-value pairs) that is added to each log event processed by this component.
name [required]
string
The field name.
value [required]
string
The field value.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be add_fields.
Allowed enum values: add_fields
default: add_fields
Option 4
object
The add_hostname processor adds the hostname to log events.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be add_hostname.
Allowed enum values: add_hostname
default: add_hostname
Option 5
object
The custom_processor processor transforms events using Vector Remap Language (VRL) scripts with advanced filtering capabilities.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets. This field should always be set to * for the custom_processor processor.
default: *
remaps [required]
[object]
Array of VRL remap rules.
drop_on_error [required]
boolean
Whether to drop events that caused errors during processing.
enabled
boolean
Whether this remap rule is enabled.
include [required]
string
A Datadog search query used to filter events for this specific remap rule.
name [required]
string
A descriptive name for this remap rule.
source [required]
string
The VRL script source code that defines the processing logic.
type [required]
enum
The processor type. The value should always be custom_processor.
Allowed enum values: custom_processor
default: custom_processor
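The custom_processor fields above combine into a configuration like the sketch below. The remap rule, its query, and the VRL script are assumed examples; note the top-level `include` is `*` as the schema requires.

```python
# Hedged sketch of a custom_processor with one VRL remap rule.
# The id, per-rule query, and VRL script are hypothetical.
custom_processor = {
    "id": "normalize-severity",   # hypothetical component id
    "type": "custom_processor",
    "enabled": True,
    "include": "*",               # must always be * for custom_processor
    "remaps": [
        {
            "name": "uppercase severity",    # descriptive rule name
            "include": "service:web",        # per-rule filter (assumed query)
            "enabled": True,
            "drop_on_error": False,          # keep events that fail the script
            # Assumed VRL script: uppercase the severity field.
            "source": '.severity = upcase!(string!(.severity))',
        }
    ],
}
```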
Option 6
object
The datadog_tags processor includes or excludes specific Datadog tags in your logs.
Supported pipeline types: logs
action [required]
enum
The action to take on tags with matching keys.
Allowed enum values: include,exclude
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
keys [required]
[string]
A list of tag keys.
mode [required]
enum
The processing mode.
Allowed enum values: filter
type [required]
enum
The processor type. The value should always be datadog_tags.
Allowed enum values: datadog_tags
default: datadog_tags
Option 7
object
The dedupe processor removes duplicate fields in log events.
Supported pipeline types: logs
cache
object
Configuration for the cache used to detect duplicates.
num_events [required]
int64
The number of events to cache for duplicate detection.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[string]
A list of log field paths to check for duplicates.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
mode [required]
enum
The deduplication mode to apply to the fields.
Allowed enum values: match,ignore
type [required]
enum
The processor type. The value should always be dedupe.
Allowed enum values: dedupe
default: dedupe
Option 8
object
The enrichment_table processor enriches logs using a static CSV file, GeoIP database, or reference table. Exactly one of file, geoip, or reference_table must be configured.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
file
object
Defines a static enrichment table loaded from a CSV file.
encoding [required]
object
File encoding format.
delimiter [required]
string
The delimiter used by the encoding.
includes_headers [required]
boolean
Whether the encoded file includes a header row.
type [required]
enum
Specifies the encoding format (e.g., CSV) used for enrichment tables.
Allowed enum values: csv
key [required]
[object]
Key fields used to look up enrichment values.
column [required]
string
The enrichment table column to match against.
comparison [required]
enum
Defines how to compare key fields for enrichment table lookups.
Allowed enum values: equals
field [required]
string
The log event field whose value is compared against the column.
path [required]
string
Path to the CSV file.
schema [required]
[object]
Schema defining column names and their types.
column [required]
string
The name of the column.
type [required]
enum
Declares allowed data types for enrichment table columns.
Allowed enum values: string,boolean,integer,float,date,timestamp
geoip
object
Uses a GeoIP database to enrich logs based on an IP field.
key_field [required]
string
Path to the IP field in the log.
locale [required]
string
Locale used to resolve geographical names.
path [required]
string
Path to the GeoIP database file.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
reference_table
object
Uses a Datadog reference table to enrich logs.
app_key_key
string
Name of the environment variable or secret that holds the Datadog application key used to access the reference table.
columns
[string]
List of column names to include from the reference table. If not provided, all columns are included.
key_field [required]
string
Path to the field in the log event to match against the reference table.
table_id [required]
string
The unique identifier of the reference table.
target [required]
string
Path where enrichment results should be stored in the log.
type [required]
enum
The processor type. The value should always be enrichment_table.
Allowed enum values: enrichment_table
default: enrichment_table
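As a worked example of the file-based variant (remember that exactly one of `file`, `geoip`, or `reference_table` may be set), the fields above might combine as follows. The CSV path, column names, and log field paths are hypothetical.

```python
# Hedged sketch of an enrichment_table processor using a static CSV file.
# The id, CSV path, columns, and log field paths are hypothetical.
enrichment_processor = {
    "id": "enrich-from-csv",       # hypothetical component id
    "type": "enrichment_table",
    "enabled": True,
    "include": "*",
    "target": "user_info",         # where enrichment results are stored
    "file": {                      # exactly one of file/geoip/reference_table
        "path": "/etc/tables/users.csv",   # hypothetical CSV path
        "encoding": {
            "type": "csv",
            "delimiter": ",",
            "includes_headers": True,
        },
        "key": [
            # Match the log field usr.id against the user_id column.
            {"column": "user_id", "comparison": "equals", "field": "usr.id"}
        ],
        "schema": [
            {"column": "user_id", "type": "string"},
            {"column": "plan", "type": "string"},
        ],
    },
}
```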
Option 9
object
The generate_datadog_metrics processor creates custom metrics from logs and sends them to Datadog.
Metrics can be counters, gauges, or distributions, and can optionally be grouped by log fields.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used to reference this component in other parts of the pipeline.
include
string
A Datadog search query used to determine which logs this processor targets.
metrics
[object]
Configuration for generating individual metrics.
group_by
[string]
Optional fields used to group the metric series.
include [required]
string
Datadog filter query to match logs for metric generation.
metric_type [required]
enum
Type of metric to create.
Allowed enum values: count,gauge,distribution
name [required]
string
Name of the custom metric to be created.
value [required]
<oneOf>
Specifies how the value of the generated metric is computed.
Option 1
object
Strategy that increments a generated metric by one for each matching event.
strategy [required]
enum
Increments the metric by 1 for each matching event.
Allowed enum values: increment_by_one
Option 2
object
Strategy that increments a generated metric based on the value of a log field.
field [required]
string
Name of the log field containing the numeric value to increment the metric by.
strategy [required]
enum
Uses a numeric field in the log event as the metric increment.
Allowed enum values: increment_by_field
type [required]
enum
The processor type. Always generate_datadog_metrics.
Allowed enum values: generate_datadog_metrics
default: generate_datadog_metrics
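The two value strategies above can be illustrated side by side. Metric names, queries, and the numeric log field are assumed examples.

```python
# Hedged sketch of a generate_datadog_metrics processor showing both
# value strategies. The metric names, queries, and the duration_ms
# field are hypothetical.
metrics_processor = {
    "id": "logs-to-metrics",   # hypothetical component id
    "type": "generate_datadog_metrics",
    "enabled": True,
    "include": "*",
    "metrics": [
        {
            "name": "app.errors.count",     # hypothetical metric name
            "include": "status:error",      # per-metric filter query
            "metric_type": "count",
            "group_by": ["service"],        # optional series grouping
            # Option 1: add 1 for every matching event.
            "value": {"strategy": "increment_by_one"},
        },
        {
            "name": "app.request.duration",
            "include": "*",
            "metric_type": "distribution",
            # Option 2: use a numeric log field as the increment.
            "value": {
                "strategy": "increment_by_field",
                "field": "duration_ms",     # assumed numeric log field
            },
        },
    ],
}
```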
Option 10
object
The ocsf_mapper processor transforms logs into the OCSF schema using a predefined mapping configuration.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used to reference this component in other parts of the pipeline.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
mappings [required]
[object]
A list of mapping rules to convert events to the OCSF format.
include [required]
string
A Datadog search query used to select the logs that this mapping should apply to.
mapping [required]
<oneOf>
Defines a single mapping rule for transforming logs into the OCSF schema.
Custom OCSF mapping configuration for transforming logs.
mapping [required]
[object]
A list of field mapping rules for transforming log fields to OCSF schema fields.
default
The default value to use if the source field is missing or empty.
dest [required]
string
The destination OCSF field path.
lookup
object
Lookup table configuration for mapping source values to destination values.
source
The source field path from the log event.
sources
Multiple source field paths for combined mapping.
value
A static value to use for the destination field.
metadata [required]
object
Metadata for the custom OCSF mapping.
class [required]
string
The OCSF event class name.
profiles
[string]
A list of OCSF profiles to apply.
version [required]
string
The OCSF schema version.
version [required]
int64
The version of the custom mapping configuration.
type [required]
enum
The processor type. The value should always be ocsf_mapper.
Allowed enum values: ocsf_mapper
default: ocsf_mapper
Option 11
object
The parse_grok processor extracts structured fields from unstructured log messages using Grok patterns.
Supported pipeline types: logs
disable_library_rules
boolean
If set to true, disables the default Grok rules provided by Datadog.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
A unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
rules [required]
[object]
The list of Grok parsing rules. If multiple matching rules are provided, they are evaluated in order. The first successful match is applied.
match_rules [required]
[object]
A list of Grok parsing rules that define how to extract fields from the source field.
Each rule must contain a name and a valid Grok pattern.
name [required]
string
The name of the rule.
rule [required]
string
The definition of the Grok rule.
source [required]
string
The name of the field in the log event to apply the Grok rules to.
support_rules
[object]
A list of Grok helper rules that can be referenced by the parsing rules.
name [required]
string
The name of the Grok helper rule.
rule [required]
string
The definition of the Grok helper rule.
type [required]
enum
The processor type. The value should always be parse_grok.
Allowed enum values: parse_grok
default: parse_grok
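A parse_grok configuration assembled from the fields above might look like the sketch below; the query, source field, and Grok pattern are assumed examples.

```python
# Hedged sketch of a parse_grok processor. The id, query, and the
# Grok pattern are hypothetical.
parse_grok_processor = {
    "id": "parse-access-logs",     # hypothetical component id
    "type": "parse_grok",
    "enabled": True,
    "include": "source:nginx",     # assumed Datadog search query
    "disable_library_rules": False,  # keep Datadog's default Grok rules
    "rules": [
        {
            "source": "message",   # log field the Grok rules apply to
            "match_rules": [
                {
                    "name": "access_line",
                    # Assumed pattern: extract client IP, method, and path.
                    "rule": "%{ip:client} %{word:method} %{notSpace:path}",
                }
            ],
            "support_rules": [],   # optional helper rules referenced above
        }
    ],
}
```

Rules are evaluated in order and the first successful match is applied, so more specific patterns should come first.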
Option 12
object
The parse_json processor extracts JSON from a specified field and flattens it into the event. This is useful when logs contain embedded JSON as a string.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
field [required]
string
The name of the log field that contains a JSON string.
id [required]
string
A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be parse_json.
Allowed enum values: parse_json
default: parse_json
Option 13
object
The parse_xml processor parses XML from a specified field and extracts it into the event.
Supported pipeline types: logs
always_use_text_key
boolean
Whether to always use a text key for element content.
attr_prefix
string
The prefix to use for XML attributes in the parsed output.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
field [required]
string
The name of the log field that contains an XML string.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
include_attr
boolean
Whether to include XML attributes in the parsed output.
parse_bool
boolean
Whether to parse boolean values from strings.
parse_null
boolean
Whether to parse null values.
parse_number
boolean
Whether to parse numeric values from strings.
text_key
string
The key name to use for text content within XML elements. Must be at least 1 character if specified.
type [required]
enum
The processor type. The value should always be parse_xml.
Allowed enum values: parse_xml
default: parse_xml
Option 14
object
The quota processor measures logging traffic for logs that match a specified filter. When the configured daily quota is met, the processor can drop or alert.
Supported pipeline types: logs
display_name
string
The display name for a component.
drop_events
boolean
If set to true, logs that match the quota filter and are sent after the quota is exceeded are dropped. Logs that do not match the filter continue through the pipeline. Note: You can set either drop_events or overflow_action, but not both.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
ignore_when_missing_partitions
boolean
If true, the processor skips quota checks when partition fields are missing from the logs.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
limit [required]
object
The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.
enforce [required]
enum
Unit for quota enforcement in bytes for data size or events for count.
Allowed enum values: bytes,events
limit [required]
int64
The limit for quota enforcement.
name [required]
string
Name of the quota.
overflow_action
enum
The action to take when the quota or bucket limit is exceeded. Options:
drop: Drop the event.
no_action: Let the event pass through.
overflow_routing: Route to an overflow destination.
Allowed enum values: drop,no_action,overflow_routing
overrides
[object]
A list of alternate quota rules that apply to specific sets of events, identified by matching field values. Each override can define a custom limit.
fields [required]
[object]
A list of field matchers used to apply a specific override. If an event matches all listed key-value pairs, the corresponding override limit is enforced.
name [required]
string
The field name.
value [required]
string
The field value.
limit [required]
object
The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.
enforce [required]
enum
Unit for quota enforcement in bytes for data size or events for count.
Allowed enum values: bytes,events
limit [required]
int64
The limit for quota enforcement.
partition_fields
[string]
A list of fields used to segment log traffic for quota enforcement. Quotas are tracked independently by unique combinations of these field values.
too_many_buckets_action
enum
The action to take when the quota or bucket limit is exceeded. Options:
drop: Drop the event.
no_action: Let the event pass through.
overflow_routing: Route to an overflow destination.
Allowed enum values: drop,no_action,overflow_routing
type [required]
enum
The processor type. The value should always be quota.
Allowed enum values: quota
default: quota
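Since `drop_events` and `overflow_action` are mutually exclusive, the sketch below uses `drop_events` with a per-service partition and one override. The quota name, partition field, and byte limits are assumed examples.

```python
# Hedged sketch of a quota processor. The id, quota name, partition
# field, and byte limits are hypothetical.
quota_processor = {
    "id": "daily-ingest-quota",    # hypothetical component id
    "type": "quota",
    "enabled": True,
    "include": "*",
    "name": "per-service-quota",
    "drop_events": True,           # mutually exclusive with overflow_action
    "partition_fields": ["service"],   # track quota per unique service value
    "limit": {"enforce": "bytes", "limit": 10_737_418_240},  # e.g. 10 GiB
    "overrides": [
        {
            # A higher limit for events where service == "checkout".
            "fields": [{"name": "service", "value": "checkout"}],
            "limit": {"enforce": "bytes", "limit": 53_687_091_200},
        }
    ],
}
```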
Option 15
object
The reduce processor aggregates and merges logs based on matching keys and merge strategies.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
group_by [required]
[string]
A list of fields used to group log events for merging.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
merge_strategies [required]
[object]
List of merge strategies defining how values from grouped events should be combined.
path [required]
string
The field path in the log event.
strategy [required]
enum
The merge strategy to apply.
Allowed enum values: discard,retain,sum,max,min,array,concat,concat_newline,concat_raw,shortest_array,longest_array,flat_unique
type [required]
enum
The processor type. The value should always be reduce.
Allowed enum values: reduce
default: reduce
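The reduce fields above might combine as follows; the grouping fields and merge paths are assumed examples.

```python
# Hedged sketch of a reduce processor. The id, group_by fields, and
# merge paths are hypothetical.
reduce_processor = {
    "id": "merge-multiline",       # hypothetical component id
    "type": "reduce",
    "enabled": True,
    "include": "*",
    # Events sharing the same host and service values are merged together.
    "group_by": ["host", "service"],
    "merge_strategies": [
        # Join message lines with newlines; sum the byte counts.
        {"path": "message", "strategy": "concat_newline"},
        {"path": "bytes_sent", "strategy": "sum"},
    ],
}
```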
Option 16
object
The remove_fields processor deletes specified fields from logs.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[string]
A list of field names to be removed from each log event.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be remove_fields.
Allowed enum values: remove_fields
default: remove_fields
Option 17
object
The rename_fields processor changes field names.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[object]
A list of rename rules specifying which fields to rename in the event, what to rename them to, and whether to preserve the original fields.
destination [required]
string
The field name to assign the renamed value to.
preserve_source [required]
boolean
Indicates whether the original field received from the source should be kept (true) or removed (false) after renaming.
source [required]
string
The original field name in the log event that should be renamed.
id [required]
string
A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be rename_fields.
Allowed enum values: rename_fields
default: rename_fields
Option 18
object
The sample processor allows probabilistic sampling of logs at a fixed rate.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
group_by
[string]
Optional list of fields to group events by. Each group is sampled independently.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
percentage [required]
double
The percentage of logs to sample.
type [required]
enum
The processor type. The value should always be sample.
Allowed enum values: sample
default: sample
Option 19
object
The sensitive_data_scanner processor detects and optionally redacts sensitive data in log events.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
rules [required]
[object]
A list of rules for identifying and acting on sensitive data patterns.
keyword_options
object
Configuration for keywords used to reinforce sensitive data pattern detection.
keywords [required]
[string]
A list of keywords to match near the sensitive pattern.
proximity [required]
int64
Maximum number of tokens between a keyword and a sensitive value match.
name [required]
string
A name identifying the rule.
on_match [required]
<oneOf>
Defines what action to take when sensitive data is matched.
Option 1
object
Configuration for completely redacting matched sensitive data.
action [required]
enum
Action type that completely replaces the matched sensitive data with a fixed replacement string to remove all visibility.
Allowed enum values: redact
options [required]
object
Configuration for fully redacting sensitive data.
replace [required]
string
The replacement string used in place of the matched sensitive data.
Option 2
object
Configuration for hashing matched sensitive values.
action [required]
enum
Action type that replaces the matched sensitive data with a hashed representation, preserving structure while securing content.
Allowed enum values: hash
options
object
Options for the hash action.
Option 3
object
Configuration for partially redacting matched sensitive data.
action [required]
enum
Action type that redacts part of the sensitive data while preserving a configurable number of characters, typically used for masking purposes (e.g., show last 4 digits of a credit card).
Allowed enum values: partial_redact
options [required]
object
Controls how partial redaction is applied, including character count and direction.
characters [required]
int64
The number of characters to redact.
direction [required]
enum
Indicates whether to redact characters from the first or last part of the matched value.
Allowed enum values: first,last
pattern [required]
<oneOf>
Pattern detection configuration for identifying sensitive data using either a custom regex or a library reference.
Option 1
object
Defines a custom regex-based pattern for identifying sensitive data in logs.
options [required]
object
Options for defining a custom regex pattern.
description
string
Human-readable description providing context about a sensitive data scanner rule.
rule [required]
string
A regular expression used to detect sensitive values. Must be a valid regex.
type [required]
enum
Indicates a custom regular expression is used for matching.
Allowed enum values: custom
Option 2
object
Specifies a pattern from Datadog’s sensitive data detection library to match known sensitive data types.
options [required]
object
Options for selecting a predefined library pattern and enabling keyword support.
description
string
Human-readable description providing context about a sensitive data scanner rule.
id [required]
string
Identifier for a predefined pattern from the sensitive data scanner pattern library.
use_recommended_keywords
boolean
Whether to augment the pattern with recommended keywords (optional).
type [required]
enum
Indicates that a predefined library pattern is used.
Allowed enum values: library
scope [required]
<oneOf>
Determines which parts of the log the pattern-matching rule should be applied to.
Option 1
object
Includes only specific fields for sensitive data scanning.
options [required]
object
Fields to which the scope rule applies.
fields [required]
[string]
The list of fields the scope rule applies to.
target [required]
enum
Applies the rule only to included fields.
Allowed enum values: include
Option 2
object
Excludes specific fields from sensitive data scanning.
options [required]
object
Fields to which the scope rule applies.
fields [required]
[string]
The list of fields the scope rule applies to.
target [required]
enum
Excludes specific fields from processing.
Allowed enum values: exclude
Option 3
object
Applies scanning across all available fields.
target [required]
enum
Applies the rule to all fields.
Allowed enum values: all
tags [required]
[string]
Tags assigned to this rule for filtering and classification.
type [required]
enum
The processor type. The value should always be sensitive_data_scanner.
Allowed enum values: sensitive_data_scanner
default: sensitive_data_scanner
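Tying the rule, pattern, scope, and on_match pieces together, a single rule might be sketched as below. The rule name, tag, regex, scoped field, and redaction settings are all assumed examples.

```python
# Hedged sketch of a sensitive_data_scanner processor with one rule
# using a custom regex pattern, an include scope, and partial redaction.
# The id, tag, regex, and field names are hypothetical.
sds_processor = {
    "id": "redact-cards",          # hypothetical component id
    "type": "sensitive_data_scanner",
    "enabled": True,
    "include": "*",
    "rules": [
        {
            "name": "mask credit cards",
            "tags": ["sensitive_data:credit_card"],   # assumed tag format
            "pattern": {                              # Option 1: custom regex
                "type": "custom",
                "options": {"rule": r"\b(?:\d[ -]?){13,16}\b"},
            },
            "scope": {                                # scan only listed fields
                "target": "include",
                "options": {"fields": ["message"]},
            },
            "on_match": {                             # Option 3: partial redact
                "action": "partial_redact",
                # Redact 4 characters from the last part of the match,
                # per the direction semantics described above.
                "options": {"characters": 4, "direction": "last"},
            },
        }
    ],
}
```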
Option 20
object
The split_array processor splits array fields into separate events based on configured rules.
Supported pipeline types: logs
arrays [required]
[object]
A list of array split configurations.
field [required]
string
The path to the array field to split.
include [required]
string
A Datadog search query used to determine which logs this array split operation targets.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets. For split_array, this should typically be *.
type [required]
enum
The processor type. The value should always be split_array.
Allowed enum values: split_array
default: split_array
Option 21
object
The throttle processor limits the number of events that pass through over a given time window.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
group_by
[string]
Optional list of fields used to group events. The threshold is applied to each group independently.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
threshold [required]
int64
The maximum number of events allowed in the given time window. Events sent after the threshold is reached are dropped.
type [required]
enum
The processor type. The value should always be throttle.
Allowed enum values: throttle
default: throttle
window [required]
double
The time window in seconds over which the threshold applies.
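A throttle configuration built from these fields might look like the sketch below; the grouping field and rate values are assumed examples.

```python
# Hedged sketch of a throttle processor. The id, group_by field, and
# rate values are hypothetical.
throttle_processor = {
    "id": "rate-limit-noisy-hosts",   # hypothetical component id
    "type": "throttle",
    "enabled": True,
    "include": "*",
    "group_by": ["host"],   # each host's events are throttled independently
    "threshold": 1000,      # max events allowed per window, per group
    "window": 60.0,         # window length in seconds
}
```

With these assumed values, each host may emit up to 1000 events per 60-second window; further events in that window are dropped.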
Option 22
object
The metric_tags processor filters metrics based on their tags using Datadog tag key patterns.
Supported pipeline types: metrics
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query that determines which metrics the processor targets.
rules [required]
[object]
A list of rules for filtering metric tags.
action [required]
enum
The action to take on tags with matching keys.
Allowed enum values: include,exclude
include [required]
string
A Datadog search query used to determine which metrics this rule targets.
keys [required]
[string]
A list of tag keys to include or exclude.
mode [required]
enum
The processing mode for tag filtering.
Allowed enum values: filter
type [required]
enum
The processor type. The value should always be metric_tags.
Allowed enum values: metric_tags
default: metric_tags
processors
[object]
DEPRECATED: A list of processor groups that transform or enrich log data.
Deprecated: This field is deprecated; use the processor_groups field instead.
display_name
string
The display name for a component.
enabled [required]
boolean
Whether this processor group is enabled.
id [required]
string
The unique identifier for the processor group.
include [required]
string
Conditional expression for when this processor group should execute.
inputs [required]
[string]
A list of IDs for components whose output is used as the input for this processor group.
processors [required]
[ <oneOf>]
Processors applied sequentially within this group. Events flow through each processor in order.
Option 1
object
The filter processor allows conditional processing of logs/metrics based on a Datadog search query. Logs/metrics that match the include query are passed through; others are discarded.
Supported pipeline types: logs, metrics
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs/metrics should pass through the filter. Logs/metrics that match this query continue to downstream components; others are dropped.
type [required]
enum
The processor type. The value should always be filter.
Allowed enum values: filter
default: filter
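A minimal filter processor sketch, assuming a hypothetical `status:error` query: only matching events continue downstream, everything else is discarded.

```python
# Illustrative filter processor: pass only error-status events.
filter_processor = {
    "id": "filter-errors",
    "type": "filter",
    "enabled": True,
    "include": "status:error",   # events matching this query pass through
}
```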
Option 2
object
The add_env_vars processor adds environment variable values to log events.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used to reference this processor in the pipeline.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be add_env_vars.
Allowed enum values: add_env_vars
default: add_env_vars
variables [required]
[object]
A list of environment variable mappings to apply to log fields.
field [required]
string
The target field in the log event.
name [required]
string
The name of the environment variable to read.
Option 3
object
The add_fields processor adds static key-value fields to logs.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[object]
A list of static fields (key-value pairs) that is added to each log event processed by this component.
name [required]
string
The field name.
value [required]
string
The field value.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be add_fields.
Allowed enum values: add_fields
default: add_fields
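The add_fields option above can be sketched as follows; the field names and values are hypothetical.

```python
# Illustrative add_fields processor: attach static key-value pairs to each log.
add_fields_processor = {
    "id": "add-fields-1",
    "type": "add_fields",
    "enabled": True,
    "include": "*",
    "fields": [
        {"name": "team", "value": "platform"},
        {"name": "env", "value": "prod"},
    ],
}
```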
Option 4
object
The add_hostname processor adds the hostname to log events.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be add_hostname.
Allowed enum values: add_hostname
default: add_hostname
Option 5
object
The custom_processor processor transforms events using Vector Remap Language (VRL) scripts with advanced filtering capabilities.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets. This field should always be set to * for the custom_processor processor.
default: *
remaps [required]
[object]
Array of VRL remap rules.
drop_on_error [required]
boolean
Whether to drop events that caused errors during processing.
enabled
boolean
Whether this remap rule is enabled.
include [required]
string
A Datadog search query used to filter events for this specific remap rule.
name [required]
string
A descriptive name for this remap rule.
source [required]
string
The VRL script source code that defines the processing logic.
type [required]
enum
The processor type. The value should always be custom_processor.
Allowed enum values: custom_processor
default: custom_processor
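A custom_processor sketch with one remap rule. Note that the top-level `include` is `*`, as the schema requires; the VRL snippet and rule name are illustrative, not taken from the source.

```python
# Illustrative custom_processor with a single VRL remap rule.
custom_processor = {
    "id": "vrl-1",
    "type": "custom_processor",
    "enabled": True,
    "include": "*",                        # must always be * for this processor
    "remaps": [
        {
            "name": "uppercase service",   # descriptive rule name
            "enabled": True,
            "include": "service:*",        # filter events for this rule only
            "drop_on_error": False,        # keep events that fail the script
            "source": ".service = upcase(string!(.service))",  # VRL script (example)
        }
    ],
}
```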
Option 6
object
The datadog_tags processor includes or excludes specific Datadog tags in your logs.
Supported pipeline types: logs
action [required]
enum
The action to take on tags with matching keys.
Allowed enum values: include,exclude
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
keys [required]
[string]
A list of tag keys.
mode [required]
enum
The processing mode.
Allowed enum values: filter
type [required]
enum
The processor type. The value should always be datadog_tags.
Allowed enum values: datadog_tags
default: datadog_tags
Option 7
object
The dedupe processor removes duplicate fields in log events.
Supported pipeline types: logs
cache
object
Configuration for the cache used to detect duplicates.
num_events [required]
int64
The number of events to cache for duplicate detection.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[string]
A list of log field paths to check for duplicates.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
mode [required]
enum
The deduplication mode to apply to the fields.
Allowed enum values: match,ignore
type [required]
enum
The processor type. The value should always be dedupe.
Allowed enum values: dedupe
default: dedupe
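A dedupe processor sketch based on the fields above; the field paths and cache size are placeholders.

```python
# Illustrative dedupe processor: drop events whose message and host repeat.
dedupe_processor = {
    "id": "dedupe-1",
    "type": "dedupe",
    "enabled": True,
    "include": "*",
    "mode": "match",                   # compare only the listed fields
    "fields": ["message", "host"],     # field paths checked for duplicates
    "cache": {"num_events": 5000},     # optional: size of the detection cache
}
```

With `mode: ignore` instead, duplicates would be detected on every field except the listed ones.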
Option 8
object
The enrichment_table processor enriches logs using a static CSV file, GeoIP database, or reference table. Exactly one of file, geoip, or reference_table must be configured.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
file
object
Defines a static enrichment table loaded from a CSV file.
encoding [required]
object
File encoding format.
delimiter [required]
string
The delimiter used in the encoded file.
includes_headers [required]
boolean
Whether the encoded file includes a header row.
type [required]
enum
Specifies the encoding format (e.g., CSV) used for enrichment tables.
Allowed enum values: csv
key [required]
[object]
Key fields used to look up enrichment values.
column [required]
string
The name of the column in the enrichment table.
comparison [required]
enum
Defines how to compare key fields for enrichment table lookups.
Allowed enum values: equals
field [required]
string
The name of the field in the log event to compare against the column.
path [required]
string
Path to the CSV file.
schema [required]
[object]
Schema defining column names and their types.
column [required]
string
The name of the column in the CSV file.
type [required]
enum
Declares allowed data types for enrichment table columns.
Allowed enum values: string,boolean,integer,float,date,timestamp
geoip
object
Uses a GeoIP database to enrich logs based on an IP field.
key_field [required]
string
Path to the IP field in the log.
locale [required]
string
Locale used to resolve geographical names.
path [required]
string
Path to the GeoIP database file.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
reference_table
object
Uses a Datadog reference table to enrich logs.
app_key_key
string
Name of the environment variable or secret that holds the Datadog application key used to access the reference table.
columns
[string]
List of column names to include from the reference table. If not provided, all columns are included.
key_field [required]
string
Path to the field in the log event to match against the reference table.
table_id [required]
string
The unique identifier of the reference table.
target [required]
string
Path where enrichment results should be stored in the log.
type [required]
enum
The processor type. The value should always be enrichment_table.
Allowed enum values: enrichment_table
default: enrichment_table
Option 9
object
The generate_datadog_metrics processor creates custom metrics from logs and sends them to Datadog.
Metrics can be counters, gauges, or distributions and optionally grouped by log fields.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used to reference this component in other parts of the pipeline.
include
string
A Datadog search query used to determine which logs this processor targets.
metrics
[object]
Configuration for generating individual metrics.
group_by
[string]
Optional fields used to group the metric series.
include [required]
string
Datadog filter query to match logs for metric generation.
metric_type [required]
enum
Type of metric to create.
Allowed enum values: count,gauge,distribution
name [required]
string
Name of the custom metric to be created.
value [required]
<oneOf>
Specifies how the value of the generated metric is computed.
Option 1
object
Strategy that increments a generated metric by one for each matching event.
strategy [required]
enum
Increments the metric by 1 for each matching event.
Allowed enum values: increment_by_one
Option 2
object
Strategy that increments a generated metric based on the value of a log field.
field [required]
string
Name of the log field containing the numeric value to increment the metric by.
strategy [required]
enum
Uses a numeric field in the log event as the metric increment.
Allowed enum values: increment_by_field
type [required]
enum
The processor type. The value should always be generate_datadog_metrics.
Allowed enum values: generate_datadog_metrics
default: generate_datadog_metrics
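A generate_datadog_metrics sketch showing both value strategies from the oneOf above. Metric names, queries, and the numeric field are hypothetical.

```python
# Illustrative generate_datadog_metrics processor with two metric definitions.
metrics_processor = {
    "id": "gen-metrics-1",
    "type": "generate_datadog_metrics",
    "enabled": True,
    "include": "*",
    "metrics": [
        {   # count: +1 for every matching log, grouped by service
            "name": "logs.error.count",
            "metric_type": "count",
            "include": "status:error",
            "group_by": ["service"],
            "value": {"strategy": "increment_by_one"},
        },
        {   # distribution: take the value from a numeric log field
            "name": "request.duration",
            "metric_type": "distribution",
            "include": "@duration:*",
            "value": {"strategy": "increment_by_field", "field": "duration"},
        },
    ],
}
```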
Option 10
object
The ocsf_mapper processor transforms logs into the OCSF schema using a predefined mapping configuration.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used to reference this component in other parts of the pipeline.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
mappings [required]
[object]
A list of mapping rules to convert events to the OCSF format.
include [required]
string
A Datadog search query used to select the logs that this mapping should apply to.
mapping [required]
<oneOf>
Defines a single mapping rule for transforming logs into the OCSF schema.
Custom OCSF mapping configuration for transforming logs.
mapping [required]
[object]
A list of field mapping rules for transforming log fields to OCSF schema fields.
default
The default value to use if the source field is missing or empty.
dest [required]
string
The destination OCSF field path.
lookup
object
Lookup table configuration for mapping source values to destination values.
source
The source field path from the log event.
sources
Multiple source field paths for combined mapping.
value
A static value to use for the destination field.
metadata [required]
object
Metadata for the custom OCSF mapping.
class [required]
string
The OCSF event class name.
profiles
[string]
A list of OCSF profiles to apply.
version [required]
string
The OCSF schema version.
version [required]
int64
The version of the custom mapping configuration.
type [required]
enum
The processor type. The value should always be ocsf_mapper.
Allowed enum values: ocsf_mapper
default: ocsf_mapper
Option 11
object
The parse_grok processor extracts structured fields from unstructured log messages using Grok patterns.
Supported pipeline types: logs
disable_library_rules
boolean
If set to true, disables the default Grok rules provided by Datadog.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
A unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
rules [required]
[object]
The list of Grok parsing rules. If multiple rules are provided, they are evaluated in order and the first successful match is applied.
match_rules [required]
[object]
A list of Grok parsing rules that define how to extract fields from the source field.
Each rule must contain a name and a valid Grok pattern.
name [required]
string
The name of the rule.
rule [required]
string
The definition of the Grok rule.
source [required]
string
The name of the field in the log event to apply the Grok rules to.
support_rules
[object]
A list of Grok helper rules that can be referenced by the parsing rules.
name [required]
string
The name of the Grok helper rule.
rule [required]
string
The definition of the Grok helper rule.
type [required]
enum
The processor type. The value should always be parse_grok.
Allowed enum values: parse_grok
default: parse_grok
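A parse_grok sketch with one match rule on the `message` field. The Grok pattern shown is an illustrative example of the matcher syntax, not a rule from the source.

```python
# Illustrative parse_grok processor: extract fields from an access-log line.
parse_grok_processor = {
    "id": "grok-1",
    "type": "parse_grok",
    "enabled": True,
    "include": "source:nginx",
    "disable_library_rules": False,   # keep Datadog's default Grok rules
    "rules": [
        {
            "source": "message",      # log field the rules apply to
            "match_rules": [
                {
                    "name": "access",
                    # example pattern: client IP, HTTP method, request path
                    "rule": "%{ip:client} %{word:method} %{notSpace:path}",
                }
            ],
            "support_rules": [],      # optional helper rules
        }
    ],
}
```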
Option 12
object
The parse_json processor extracts JSON from a specified field and flattens it into the event. This is useful when logs contain embedded JSON as a string.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
field [required]
string
The name of the log field that contains a JSON string.
id [required]
string
A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be parse_json.
Allowed enum values: parse_json
default: parse_json
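A minimal parse_json sketch: the processor reads a JSON string from one field and flattens it into the event. The field name is a common convention, used here as a placeholder.

```python
# Illustrative parse_json processor: flatten embedded JSON in "message".
parse_json_processor = {
    "id": "parse-json-1",
    "type": "parse_json",
    "enabled": True,
    "include": "*",
    "field": "message",   # log field containing the JSON string
}
```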
Option 13
object
The parse_xml processor parses XML from a specified field and extracts it into the event.
Supported pipeline types: logs
always_use_text_key
boolean
Whether to always use a text key for element content.
attr_prefix
string
The prefix to use for XML attributes in the parsed output.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
field [required]
string
The name of the log field that contains an XML string.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
include_attr
boolean
Whether to include XML attributes in the parsed output.
parse_bool
boolean
Whether to parse boolean values from strings.
parse_null
boolean
Whether to parse null values.
parse_number
boolean
Whether to parse numeric values from strings.
text_key
string
The key name to use for text content within XML elements. Must be at least 1 character if specified.
type [required]
enum
The processor type. The value should always be parse_xml.
Allowed enum values: parse_xml
default: parse_xml
Option 14
object
The quota processor measures logging traffic for logs that match a specified filter. When the configured daily quota is met, the processor can drop events or trigger an alert.
Supported pipeline types: logs
display_name
string
The display name for a component.
drop_events
boolean
If set to true, logs that match the quota filter and are sent after the quota is exceeded are dropped. Logs that do not match the filter continue through the pipeline. Note: You can set either drop_events or overflow_action, but not both.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
ignore_when_missing_partitions
boolean
If true, the processor skips quota checks when partition fields are missing from the logs.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
limit [required]
object
The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.
enforce [required]
enum
Unit for quota enforcement in bytes for data size or events for count.
Allowed enum values: bytes,events
limit [required]
int64
The limit for quota enforcement.
name [required]
string
Name of the quota.
overflow_action
enum
The action to take when the quota or bucket limit is exceeded. Options:
drop: Drop the event.
no_action: Let the event pass through.
overflow_routing: Route to an overflow destination.
Allowed enum values: drop,no_action,overflow_routing
overrides
[object]
A list of alternate quota rules that apply to specific sets of events, identified by matching field values. Each override can define a custom limit.
fields [required]
[object]
A list of field matchers used to apply a specific override. If an event matches all listed key-value pairs, the corresponding override limit is enforced.
name [required]
string
The field name.
value [required]
string
The field value.
limit [required]
object
The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.
enforce [required]
enum
Unit for quota enforcement in bytes for data size or events for count.
Allowed enum values: bytes,events
limit [required]
int64
The limit for quota enforcement.
partition_fields
[string]
A list of fields used to segment log traffic for quota enforcement. Quotas are tracked independently by unique combinations of these field values.
too_many_buckets_action
enum
The action to take when the quota or bucket limit is exceeded. Options:
drop: Drop the event.
no_action: Let the event pass through.
overflow_routing: Route to an overflow destination.
Allowed enum values: drop,no_action,overflow_routing
type [required]
enum
The processor type. The value should always be quota.
Allowed enum values: quota
default: quota
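A quota sketch combining the fields above: a per-service byte quota with one override. It sets `drop_events` and therefore omits `overflow_action`, since the schema allows only one of the two. All names and limits are placeholders.

```python
# Illustrative quota processor: 10 GB/day per service, larger quota for one service.
quota_processor = {
    "id": "quota-1",
    "type": "quota",
    "enabled": True,
    "include": "*",
    "name": "daily-ingest",
    "limit": {"enforce": "bytes", "limit": 10_000_000_000},
    "drop_events": True,                  # drop matching logs once over quota
    "partition_fields": ["service"],      # track the quota per service
    "overrides": [
        {   # a specific service gets a higher limit
            "fields": [{"name": "service", "value": "checkout"}],
            "limit": {"enforce": "bytes", "limit": 50_000_000_000},
        }
    ],
}
```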
Option 15
object
The reduce processor aggregates and merges logs based on matching keys and merge strategies.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
group_by [required]
[string]
A list of fields used to group log events for merging.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
merge_strategies [required]
[object]
List of merge strategies defining how values from grouped events should be combined.
path [required]
string
The field path in the log event.
strategy [required]
enum
The merge strategy to apply.
Allowed enum values: discard,retain,sum,max,min,array,concat,concat_newline,concat_raw,shortest_array,longest_array,flat_unique
type [required]
enum
The processor type. The value should always be reduce.
Allowed enum values: reduce
default: reduce
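A reduce sketch: events sharing a `request_id` are merged, with a strategy per field path. Field paths and strategies are illustrative.

```python
# Illustrative reduce processor: merge events belonging to the same request.
reduce_processor = {
    "id": "reduce-1",
    "type": "reduce",
    "enabled": True,
    "include": "*",
    "group_by": ["request_id"],           # merge events with the same request_id
    "merge_strategies": [
        {"path": "message", "strategy": "concat_newline"},  # join messages
        {"path": "duration", "strategy": "sum"},            # add durations
    ],
}
```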
Option 16
object
The remove_fields processor deletes specified fields from logs.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[string]
A list of field names to be removed from each log event.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be remove_fields.
Allowed enum values: remove_fields
default: remove_fields
Option 17
object
The rename_fields processor changes field names.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[object]
A list of rename rules specifying which fields to rename in the event, what to rename them to, and whether to preserve the original fields.
destination [required]
string
The field name to assign the renamed value to.
preserve_source [required]
boolean
Indicates whether the original field received from the source should be kept (true) or removed (false) after renaming.
source [required]
string
The original field name in the log event that should be renamed.
id [required]
string
A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be rename_fields.
Allowed enum values: rename_fields
default: rename_fields
Option 18
object
The sample processor allows probabilistic sampling of logs at a fixed rate.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
group_by
[string]
Optional list of fields to group events by. Each group is sampled independently.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
percentage [required]
double
The percentage of logs to sample.
type [required]
enum
The processor type. The value should always be sample.
Allowed enum values: sample
default: sample
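A sample processor sketch based on the fields above; the percentage and grouping field are placeholders.

```python
# Illustrative sample processor: keep ~10% of info-level logs per service.
sample_processor = {
    "id": "sample-1",
    "type": "sample",
    "enabled": True,
    "include": "status:info",
    "percentage": 10.0,         # keep roughly 1 in 10 matching logs
    "group_by": ["service"],    # optional: sample each service independently
}
```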
Option 19
object
The sensitive_data_scanner processor detects and optionally redacts sensitive data in log events.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
rules [required]
[object]
A list of rules for identifying and acting on sensitive data patterns.
keyword_options
object
Configuration for keywords used to reinforce sensitive data pattern detection.
keywords [required]
[string]
A list of keywords to match near the sensitive pattern.
proximity [required]
int64
Maximum number of tokens between a keyword and a sensitive value match.
name [required]
string
A name identifying the rule.
on_match [required]
<oneOf>
Defines what action to take when sensitive data is matched.
Option 1
object
Configuration for completely redacting matched sensitive data.
action [required]
enum
Action type that completely replaces the matched sensitive data with a fixed replacement string to remove all visibility.
Allowed enum values: redact
options [required]
object
Configuration for fully redacting sensitive data.
replace [required]
string
The replacement string used in place of the matched sensitive data.
Option 2
object
Configuration for hashing matched sensitive values.
action [required]
enum
Action type that replaces the matched sensitive data with a hashed representation, preserving structure while securing content.
Allowed enum values: hash
options
object
Options for the hash action.
Option 3
object
Configuration for partially redacting matched sensitive data.
action [required]
enum
Action type that redacts part of the sensitive data while preserving a configurable number of characters, typically used for masking purposes (e.g., show last 4 digits of a credit card).
Allowed enum values: partial_redact
options [required]
object
Controls how partial redaction is applied, including character count and direction.
characters [required]
int64
The number of characters to redact.
direction [required]
enum
Indicates whether to redact characters from the first or last part of the matched value.
Allowed enum values: first,last
pattern [required]
<oneOf>
Pattern detection configuration for identifying sensitive data using either a custom regex or a library reference.
Option 1
object
Defines a custom regex-based pattern for identifying sensitive data in logs.
options [required]
object
Options for defining a custom regex pattern.
description
string
Human-readable description providing context about a sensitive data scanner rule.
rule [required]
string
A regular expression used to detect sensitive values. Must be a valid regex.
type [required]
enum
Indicates a custom regular expression is used for matching.
Allowed enum values: custom
Option 2
object
Specifies a pattern from Datadog’s sensitive data detection library to match known sensitive data types.
options [required]
object
Options for selecting a predefined library pattern and enabling keyword support.
description
string
Human-readable description providing context about a sensitive data scanner rule.
id [required]
string
Identifier for a predefined pattern from the sensitive data scanner pattern library.
use_recommended_keywords
boolean
Whether to augment the pattern with recommended keywords (optional).
type [required]
enum
Indicates that a predefined library pattern is used.
Allowed enum values: library
scope [required]
<oneOf>
Determines which parts of the log the pattern-matching rule should be applied to.
Option 1
object
Includes only specific fields for sensitive data scanning.
options [required]
object
Fields to which the scope rule applies.
fields [required]
[string]
The list of field paths the scope applies to.
target [required]
enum
Applies the rule only to included fields.
Allowed enum values: include
Option 2
object
Excludes specific fields from sensitive data scanning.
options [required]
object
Fields to which the scope rule applies.
fields [required]
[string]
The list of field paths the scope applies to.
target [required]
enum
Excludes specific fields from processing.
Allowed enum values: exclude
Option 3
object
Applies scanning across all available fields.
target [required]
enum
Applies the rule to all fields.
Allowed enum values: all
tags [required]
[string]
Tags assigned to this rule for filtering and classification.
type [required]
enum
The processor type. The value should always be sensitive_data_scanner.
Allowed enum values: sensitive_data_scanner
default: sensitive_data_scanner
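A sensitive_data_scanner sketch with one rule combining the three oneOf choices above: a library pattern, an all-fields scope, and a partial_redact action. The library pattern `id` is a hypothetical identifier, not a documented one.

```python
# Illustrative sensitive_data_scanner processor: mask all but the last 4 characters.
sds_processor = {
    "id": "sds-1",
    "type": "sensitive_data_scanner",
    "enabled": True,
    "include": "*",
    "rules": [
        {
            "name": "mask credit cards",
            "tags": ["sensitive:cc"],          # tags for filtering/classification
            "pattern": {                       # pattern oneOf: library reference
                "type": "library",
                "options": {
                    "id": "credit_card",       # placeholder library pattern id
                    "use_recommended_keywords": True,
                },
            },
            "scope": {"target": "all"},        # scope oneOf: scan every field
            "on_match": {                      # action oneOf: partial redaction
                "action": "partial_redact",
                "options": {"characters": 4, "direction": "last"},
            },
        }
    ],
}
```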
Option 20
object
The split_array processor splits array fields into separate events based on configured rules.
Supported pipeline types: logs
arrays [required]
[object]
A list of array split configurations.
field [required]
string
The path to the array field to split.
include [required]
string
A Datadog search query used to determine which logs this array split operation targets.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets. For split_array, this should typically be *.
type [required]
enum
The processor type. The value should always be split_array.
Allowed enum values: split_array
default: split_array
Option 21
object
The throttle processor limits the number of events that pass through over a given time window.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
group_by
[string]
Optional list of fields used to group events before the threshold has been reached.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
threshold [required]
int64
The number of events allowed in a given time window. Events sent after the threshold has been reached are dropped.
type [required]
enum
The processor type. The value should always be throttle.
Allowed enum values: throttle
default: throttle
window [required]
double
The time window in seconds over which the threshold applies.
Option 22
object
The metric_tags processor filters metrics based on their tags using Datadog tag key patterns.
Supported pipeline types: metrics
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query that determines which metrics the processor targets.
rules [required]
[object]
A list of rules for filtering metric tags.
action [required]
enum
The action to take on tags with matching keys.
Allowed enum values: include,exclude
include [required]
string
A Datadog search query used to determine which metrics this rule targets.
keys [required]
[string]
A list of tag keys to include or exclude.
mode [required]
enum
The processing mode for tag filtering.
Allowed enum values: filter
type [required]
enum
The processor type. The value should always be metric_tags.
Allowed enum values: metric_tags
default: metric_tags
sources [required]
[ <oneOf>]
A list of configured data sources for the pipeline.
Option 1
object
The datadog_agent source collects logs/metrics from the Datadog Agent.
Supported pipeline types: logs, metrics
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be datadog_agent.
Allowed enum values: datadog_agent
default: datadog_agent
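A datadog_agent source sketch with the optional TLS block from the fields above; all file paths are placeholders.

```python
# Illustrative datadog_agent source with mutual-TLS configuration.
datadog_agent_source = {
    "id": "agent-source",
    "type": "datadog_agent",
    "tls": {                                   # optional TLS block
        "crt_file": "/etc/certs/client.crt",   # required within tls
        "ca_file": "/etc/certs/ca.crt",        # validates the server certificate
        "key_file": "/etc/certs/client.key",   # private key for mutual TLS
    },
}
```

Downstream components would reference this source by listing `"agent-source"` in their `inputs`.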
Option 2
object
The amazon_data_firehose source ingests logs from AWS Data Firehose.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the Firehose delivery stream address.
auth
object
AWS authentication credentials used for accessing AWS services such as S3.
If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).
assume_role
string
The Amazon Resource Name (ARN) of the role to assume.
external_id
string
A unique identifier for cross-account role assumption.
session_name
string
A session identifier used for logging and tracing the assumed role session.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be amazon_data_firehose.
Allowed enum values: amazon_data_firehose
default: amazon_data_firehose
Option 3
object
The amazon_s3 source ingests logs from an Amazon S3 bucket.
It supports AWS authentication and TLS encryption.
Supported pipeline types: logs
auth
object
AWS authentication credentials used for accessing AWS services such as S3.
If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).
assume_role
string
The Amazon Resource Name (ARN) of the role to assume.
external_id
string
A unique identifier for cross-account role assumption.
session_name
string
A session identifier used for logging and tracing the assumed role session.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
region [required]
string
AWS region where the S3 bucket resides.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. Always amazon_s3.
Allowed enum values: amazon_s3
default: amazon_s3
url_key
string
Name of the environment variable or secret that holds the S3 bucket URL.
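As an illustration, a minimal amazon_s3 source using an assumed role might look like the following. The ID, region, role ARN, and environment variable name are hypothetical.

```json
{
  "id": "s3-logs-source",
  "type": "amazon_s3",
  "region": "us-east-1",
  "url_key": "S3_BUCKET_URL",
  "auth": {
    "assume_role": "arn:aws:iam::123456789012:role/pipeline-reader",
    "external_id": "my-external-id",
    "session_name": "op-worker-session"
  }
}
```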
Option 4
object
The fluent_bit source ingests logs from Fluent Bit.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the Fluent Bit receiver.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be fluent_bit.
Allowed enum values: fluent_bit
default: fluent_bit
Option 5
object
The fluentd source ingests logs from a Fluentd-compatible service.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the Fluent receiver.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be fluentd.
Allowed enum values: fluentd
default: fluentd
Option 6
object
The google_pubsub source ingests logs from a Google Cloud Pub/Sub subscription.
Supported pipeline types: logs
auth
object
Google Cloud credentials used to authenticate with Google Cloud Storage.
credentials_file [required]
string
Path to the Google Cloud service account key file.
decoding [required]
enum
The decoding format used to interpret incoming logs.
Allowed enum values: bytes,gelf,json,syslog
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
project [required]
string
The Google Cloud project ID that owns the Pub/Sub subscription.
subscription [required]
string
The Pub/Sub subscription name from which messages are consumed.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be google_pubsub.
Allowed enum values: google_pubsub
default: google_pubsub
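For example, a google_pubsub source consuming JSON-encoded messages might be configured as follows. The project, subscription, and key file path are hypothetical values.

```json
{
  "id": "pubsub-source",
  "type": "google_pubsub",
  "project": "my-gcp-project",
  "subscription": "logs-subscription",
  "decoding": "json",
  "auth": {
    "credentials_file": "/var/secrets/gcp/key.json"
  }
}
```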
Option 7
object
The http_client source scrapes logs from HTTP endpoints at regular intervals.
Supported pipeline types: logs
auth_strategy
enum
Optional authentication strategy for HTTP requests.
Allowed enum values: none,basic,bearer,custom
custom_key
string
Name of the environment variable or secret that holds a custom header value (used with custom auth strategies).
decoding [required]
enum
The decoding format used to interpret incoming logs.
Allowed enum values: bytes,gelf,json,syslog
endpoint_url_key
string
Name of the environment variable or secret that holds the HTTP endpoint URL to scrape.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
password_key
string
Name of the environment variable or secret that holds the password (used when auth_strategy is basic).
scrape_interval_secs
int64
The interval (in seconds) between HTTP scrape requests.
scrape_timeout_secs
int64
The timeout (in seconds) for each scrape request.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
token_key
string
Name of the environment variable or secret that holds the bearer token (used when auth_strategy is bearer).
type [required]
enum
The source type. The value should always be http_client.
Allowed enum values: http_client
default: http_client
username_key
string
Name of the environment variable or secret that holds the username (used when auth_strategy is basic).
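Putting these fields together, an http_client source scraping a JSON endpoint with basic authentication might look like this sketch. The component ID and environment variable names are hypothetical.

```json
{
  "id": "http-scraper",
  "type": "http_client",
  "endpoint_url_key": "SCRAPE_ENDPOINT_URL",
  "decoding": "json",
  "auth_strategy": "basic",
  "username_key": "SCRAPE_USERNAME",
  "password_key": "SCRAPE_PASSWORD",
  "scrape_interval_secs": 60,
  "scrape_timeout_secs": 10
}
```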
Option 8
object
The http_server source collects logs over HTTP POST from external services.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the HTTP server.
decoding [required]
enum
The decoding format used to interpret incoming logs.
Allowed enum values: bytes,gelf,json,syslog
id [required]
string
Unique ID for the HTTP server source.
password_key
string
Name of the environment variable or secret that holds the password (used when auth_strategy is plain).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be http_server.
Allowed enum values: http_server
default: http_server
username_key
string
Name of the environment variable or secret that holds the username (used when auth_strategy is plain).
Option 9
object
The kafka source ingests data from Apache Kafka topics.
Supported pipeline types: logs
bootstrap_servers_key
string
Name of the environment variable or secret that holds the Kafka bootstrap servers list.
group_id [required]
string
Consumer group ID used by the Kafka client.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
librdkafka_options
[object]
Optional list of advanced Kafka client configuration options, defined as key-value pairs.
name [required]
string
The name of the librdkafka configuration option to set.
value [required]
string
The value assigned to the specified librdkafka configuration option.
sasl
object
Specifies the SASL mechanism for authenticating with a Kafka cluster.
mechanism
enum
SASL mechanism used for Kafka authentication.
Allowed enum values: PLAIN,SCRAM-SHA-256,SCRAM-SHA-512
password_key
string
Name of the environment variable or secret that holds the SASL password.
username_key
string
Name of the environment variable or secret that holds the SASL username.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
topics [required]
[string]
A list of Kafka topic names to subscribe to. The source ingests messages from each topic specified.
type [required]
enum
The source type. The value should always be kafka.
Allowed enum values: kafka
default: kafka
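A kafka source combining SASL authentication with an advanced librdkafka option could be sketched as follows. The group ID, topics, environment variable names, and option value are hypothetical; fetch.message.max.bytes is one of the standard librdkafka configuration properties.

```json
{
  "id": "kafka-source",
  "type": "kafka",
  "group_id": "op-consumer-group",
  "topics": ["app-logs", "audit-logs"],
  "bootstrap_servers_key": "KAFKA_BOOTSTRAP_SERVERS",
  "sasl": {
    "mechanism": "SCRAM-SHA-512",
    "username_key": "KAFKA_USERNAME",
    "password_key": "KAFKA_PASSWORD"
  },
  "librdkafka_options": [
    { "name": "fetch.message.max.bytes", "value": "1048576" }
  ]
}
```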
Option 10
object
The logstash source ingests logs from a Logstash forwarder.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the Logstash receiver.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be logstash.
Allowed enum values: logstash
default: logstash
Option 11
object
The rsyslog source listens for logs over TCP or UDP from an rsyslog server using the syslog protocol.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the syslog receiver.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
mode [required]
enum
Protocol used by the syslog source to receive messages.
Allowed enum values: tcp,udp
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be rsyslog.
Allowed enum values: rsyslog
default: rsyslog
Option 12
object
The socket source ingests logs over TCP or UDP.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the socket.
framing [required]
<oneOf>
Framing method configuration for the socket source.
Option 1
object
Byte frames which are delimited by a newline character.
method [required]
enum
Byte frames which are delimited by a newline character.
Allowed enum values: newline_delimited
Option 2
object
Byte frames are passed through as-is according to the underlying I/O boundaries (for example, split between messages or stream segments).
method [required]
enum
Byte frames are passed through as-is according to the underlying I/O boundaries (for example, split between messages or stream segments).
Allowed enum values: bytes
Option 3
object
Byte frames which are delimited by a chosen character.
delimiter [required]
string
A single ASCII character used to delimit events.
method [required]
enum
Byte frames which are delimited by a chosen character.
Allowed enum values: character_delimited
Option 4
object
Byte frames according to the octet-counting format defined in RFC 6587.
method [required]
enum
Byte frames according to the octet-counting format defined in RFC 6587.
Allowed enum values: octet_counting
Option 5
object
Byte frames which are chunked GELF messages.
method [required]
enum
Byte frames which are chunked GELF messages.
Allowed enum values: chunked_gelf
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
mode [required]
enum
Protocol used to receive logs.
Allowed enum values: tcp,udp
tls
object
TLS configuration. Relevant only when mode is tcp.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be socket.
Allowed enum values: socket
default: socket
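As an example, a TCP socket source using character-delimited framing might look like the following sketch. The component ID, environment variable name, and delimiter are hypothetical.

```json
{
  "id": "socket-source",
  "type": "socket",
  "mode": "tcp",
  "address_key": "SOCKET_LISTEN_ADDRESS",
  "framing": {
    "method": "character_delimited",
    "delimiter": "|"
  }
}
```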
Option 13
object
The splunk_hec source implements the Splunk HTTP Event Collector (HEC) API.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the HEC API.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. Always splunk_hec.
Allowed enum values: splunk_hec
default: splunk_hec
Option 14
object
The splunk_tcp source receives logs from a Splunk Universal Forwarder over TCP.
TLS is supported for secure transmission.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the Splunk TCP receiver.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. Always splunk_tcp.
Allowed enum values: splunk_tcp
default: splunk_tcp
Option 15
object
The sumo_logic source receives logs from Sumo Logic collectors.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the Sumo Logic receiver.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
type [required]
enum
The source type. The value should always be sumo_logic.
Allowed enum values: sumo_logic
default: sumo_logic
Option 16
object
The syslog_ng source listens for logs over TCP or UDP from a syslog-ng server using the syslog protocol.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the syslog-ng receiver.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
mode [required]
enum
Protocol used by the syslog source to receive messages.
Allowed enum values: tcp,udp
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be syslog_ng.
Allowed enum values: syslog_ng
default: syslog_ng
Option 17
object
The opentelemetry source receives telemetry data using the OpenTelemetry Protocol (OTLP) over gRPC and HTTP.
Supported pipeline types: logs
grpc_address_key
string
Environment variable name containing the gRPC server address for receiving OTLP data. Must be a valid environment variable name (alphanumeric characters and underscores only).
http_address_key
string
Environment variable name containing the HTTP server address for receiving OTLP data. Must be a valid environment variable name (alphanumeric characters and underscores only).
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be opentelemetry.
Allowed enum values: opentelemetry
default: opentelemetry
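For illustration, an opentelemetry source listening on both gRPC and HTTP might be configured as follows. The component ID and environment variable names are hypothetical.

```json
{
  "id": "otlp-source",
  "type": "opentelemetry",
  "grpc_address_key": "OTLP_GRPC_ADDRESS",
  "http_address_key": "OTLP_HTTP_ADDRESS"
}
```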
use_legacy_search_syntax
boolean
Set to true to continue using the legacy search syntax while migrating filter queries. After migrating all queries to the new syntax, set to false.
The legacy syntax is deprecated and will eventually be removed.
Requires Observability Pipelines Worker 2.11 or later.
See Upgrade Your Filter Queries to the New Search Syntax for more information.
name [required]
string
Name of the pipeline.
type [required]
string
The resource type identifier. For pipeline resources, this should always be set to pipelines.
{
  "data": {
    "attributes": {
      "config": {
        "destinations": [
          {
            "id": "datadog-logs-destination",
            "inputs": ["my-processor-group"],
            "type": "datadog_logs"
          }
        ],
        "processor_groups": [
          {
            "enabled": true,
            "id": "my-processor-group",
            "include": "service:my-service",
            "inputs": ["datadog-agent-source"],
            "processors": [
              {
                "enabled": true,
                "id": "dedupe-processor",
                "include": "service:my-service",
                "type": "dedupe",
                "fields": ["message"],
                "mode": "match",
                "cache": { "num_events": 5000 }
              }
            ]
          }
        ],
        "sources": [
          { "id": "datadog-agent-source", "type": "datadog_agent" }
        ]
      },
      "name": "Pipeline with Dedupe Cache"
    },
    "type": "pipelines"
  }
}
{
  "data": {
    "attributes": {
      "config": {
        "destinations": [
          {
            "id": "datadog-logs-destination",
            "inputs": ["my-processor-group"],
            "type": "datadog_logs"
          }
        ],
        "processor_groups": [
          {
            "enabled": true,
            "id": "my-processor-group",
            "include": "service:my-service",
            "inputs": ["datadog-agent-source"],
            "processors": [
              {
                "enabled": true,
                "id": "dedupe-processor",
                "include": "service:my-service",
                "type": "dedupe",
                "fields": ["message"],
                "mode": "match"
              }
            ]
          }
        ],
        "sources": [
          { "id": "datadog-agent-source", "type": "datadog_agent" }
        ]
      },
      "name": "Pipeline with Dedupe No Cache"
    },
    "type": "pipelines"
  }
}
Name of the environment variable or secret that holds a custom header value (used with custom auth strategies).
encoding [required]
enum
Encoding format for log events.
Allowed enum values: json
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
password_key
string
Name of the environment variable or secret that holds the password (used when auth_strategy is basic).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
token_key
string
Name of the environment variable or secret that holds the bearer token (used when auth_strategy is bearer).
type [required]
enum
The destination type. The value should always be http_client.
Allowed enum values: http_client
default: http_client
uri_key
string
Name of the environment variable or secret that holds the HTTP endpoint URI.
username_key
string
Name of the environment variable or secret that holds the username (used when auth_strategy is basic).
Option 2
object
The amazon_opensearch destination writes logs to Amazon OpenSearch.
Supported pipeline types: logs
auth [required]
object
Authentication settings for the Amazon OpenSearch destination.
The strategy field determines whether basic or AWS-based authentication is used.
assume_role
string
The ARN of the role to assume (used with aws strategy).
aws_region
string
AWS region (used with aws strategy).
external_id
string
External ID for the assumed role (used with aws strategy).
session_name
string
Session name for the assumed role (used with aws strategy).
strategy [required]
enum
The authentication strategy to use.
Allowed enum values: basic,aws
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of buffer to configure: a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full: either block and stop accepting new events, or drop new events.
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of buffer to configure: a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full: either block and stop accepting new events, or drop new events.
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum number of events the memory buffer can hold.
type
enum
The type of buffer to configure: a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full: either block and stop accepting new events, or drop new events.
Allowed enum values: block,drop_newest
default: block
bulk_index
string
The index to write logs to.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The destination type. The value should always be amazon_opensearch.
Allowed enum values: amazon_opensearch
default: amazon_opensearch
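Combining the fields above, an amazon_opensearch destination using AWS authentication and a disk buffer might look like this sketch. The component ID, index, role ARN, region, and buffer size are hypothetical.

```json
{
  "id": "opensearch-destination",
  "type": "amazon_opensearch",
  "inputs": ["my-processor-group"],
  "bulk_index": "logs-index",
  "auth": {
    "strategy": "aws",
    "aws_region": "us-east-1",
    "assume_role": "arn:aws:iam::123456789012:role/opensearch-writer",
    "external_id": "my-external-id",
    "session_name": "op-worker-session"
  },
  "buffer": {
    "type": "disk",
    "max_size": 1073741824,
    "when_full": "block"
  }
}
```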
Option 3
object
The amazon_s3 destination sends your logs in Datadog-rehydratable format to an Amazon S3 bucket for archiving.
Supported pipeline types: logs
auth
object
AWS authentication credentials used for accessing AWS services such as S3.
If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).
assume_role
string
The Amazon Resource Name (ARN) of the role to assume.
external_id
string
A unique identifier for cross-account role assumption.
session_name
string
A session identifier used for logging and tracing the assumed role session.
bucket [required]
string
S3 bucket name.
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of buffer to configure: a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full: either block and stop accepting new events, or drop new events.
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of buffer to configure: a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full: either block and stop accepting new events, or drop new events.
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum number of events the memory buffer can hold.
type
enum
The type of buffer to configure: a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full: either block and stop accepting new events, or drop new events.
Allowed enum values: block,drop_newest
default: block
id [required]
string
Unique identifier for the destination component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The destination type. Always amazon_s3.
Allowed enum values: amazon_s3
default: amazon_s3
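As an illustration, an amazon_s3 archive destination with a memory buffer capped by event count might be configured as follows. The component ID, bucket name, role ARN, and buffer limit are hypothetical.

```json
{
  "id": "s3-archive-destination",
  "type": "amazon_s3",
  "inputs": ["my-processor-group"],
  "bucket": "my-log-archive",
  "auth": {
    "assume_role": "arn:aws:iam::123456789012:role/archive-writer"
  },
  "buffer": {
    "type": "memory",
    "max_events": 10000,
    "when_full": "drop_newest"
  }
}
```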
Option 4
object
The amazon_security_lake destination sends your logs to Amazon Security Lake.
Supported pipeline types: logs
auth
object
AWS authentication credentials used for accessing AWS services such as S3.
If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).
assume_role
string
The Amazon Resource Name (ARN) of the role to assume.
external_id
string
A unique identifier for cross-account role assumption.
session_name
string
A session identifier used for logging and tracing the assumed role session.
bucket [required]
string
Name of the Amazon S3 bucket in Security Lake (3-63 characters).
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of buffer to configure: a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full: either block and stop accepting new events, or drop new events.
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
custom_source_name [required]
string
Custom source name for the logs in Security Lake.
id [required]
string
Unique identifier for the destination component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
region [required]
string
AWS region of the S3 bucket.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The destination type. Always amazon_security_lake.
Allowed enum values: amazon_security_lake
default: amazon_security_lake
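Putting the required fields together, an amazon_security_lake destination might be configured as follows (the IDs, bucket name, and role ARN are illustrative; the buffer block shows the disk-buffer variant of the oneOf):

```json
{
  "id": "security-lake-destination",
  "type": "amazon_security_lake",
  "inputs": ["parse-ocsf-processor"],
  "bucket": "aws-security-data-lake-example",
  "region": "us-east-1",
  "custom_source_name": "my-custom-source",
  "auth": {
    "assume_role": "arn:aws:iam::123456789012:role/security-lake-writer"
  },
  "buffer": {
    "type": "disk",
    "max_size": 1073741824,
    "when_full": "block"
  }
}
```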
Option 5
object
The azure_storage destination forwards logs to an Azure Blob Storage container.
Supported pipeline types: logs
blob_prefix
string
Optional prefix for blobs written to the container.
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
connection_string_key
string
Name of the environment variable or secret that holds the Azure Storage connection string.
container_name [required]
string
The name of the Azure Blob Storage container to store logs in.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The destination type. The value should always be azure_storage.
Allowed enum values: azure_storage
default: azure_storage
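An azure_storage destination sketch using the fields above (the container name, prefix, and secret name are illustrative):

```json
{
  "id": "azure-storage-destination",
  "type": "azure_storage",
  "inputs": ["my-processor"],
  "container_name": "logs-container",
  "blob_prefix": "pipeline-logs/",
  "connection_string_key": "AZURE_STORAGE_CONNECTION_STRING"
}
```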
Option 6
object
The cloud_prem destination sends logs to Datadog CloudPrem.
Supported pipeline types: logs
endpoint_url_key
string
Name of the environment variable or secret that holds the CloudPrem endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The destination type. The value should always be cloud_prem.
Allowed enum values: cloud_prem
default: cloud_prem
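The cloud_prem destination needs only an ID, inputs, and (optionally) the secret holding the endpoint URL, for example (names illustrative):

```json
{
  "id": "cloud-prem-destination",
  "type": "cloud_prem",
  "inputs": ["my-processor"],
  "endpoint_url_key": "CLOUDPREM_ENDPOINT_URL"
}
```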
Option 7
object
The crowdstrike_next_gen_siem destination forwards logs to CrowdStrike Next Gen SIEM.
Supported pipeline types: logs
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
compression
object
Compression configuration for log events.
algorithm [required]
enum
Compression algorithm for log events.
Allowed enum values: gzip,zlib
level
int64
Compression level.
encoding [required]
enum
Encoding format for log events.
Allowed enum values: json,raw_message
endpoint_url_key
string
Name of the environment variable or secret that holds the CrowdStrike endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
token_key
string
Name of the environment variable or secret that holds the CrowdStrike API token.
type [required]
enum
The destination type. The value should always be crowdstrike_next_gen_siem.
Allowed enum values: crowdstrike_next_gen_siem
default: crowdstrike_next_gen_siem
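A crowdstrike_next_gen_siem destination sketch combining the required encoding with optional compression and secret references (component and secret names are illustrative):

```json
{
  "id": "crowdstrike-destination",
  "type": "crowdstrike_next_gen_siem",
  "inputs": ["my-processor"],
  "encoding": "json",
  "compression": {
    "algorithm": "gzip",
    "level": 6
  },
  "endpoint_url_key": "CROWDSTRIKE_ENDPOINT_URL",
  "token_key": "CROWDSTRIKE_API_TOKEN"
}
```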
Option 8
object
The datadog_logs destination forwards logs to Datadog Log Management.
Supported pipeline types: logs
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
routes
[object]
A list of routing rules that forward matching logs to Datadog using dedicated API keys.
api_key_key
string
Name of the environment variable or secret that stores the Datadog API key used by this route.
include
string
A Datadog search query that determines which logs are forwarded using this route.
route_id
string
Unique identifier for this route within the destination.
site
string
Datadog site where matching logs are sent (for example, us1).
type [required]
enum
The destination type. The value should always be datadog_logs.
Allowed enum values: datadog_logs
default: datadog_logs
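The routes array lets one datadog_logs destination split traffic across API keys. A sketch (the route ID, query, and secret name are illustrative):

```json
{
  "id": "datadog-logs-destination",
  "type": "datadog_logs",
  "inputs": ["my-processor"],
  "routes": [
    {
      "route_id": "security-logs-route",
      "include": "service:auth",
      "api_key_key": "DD_SECURITY_API_KEY",
      "site": "us1"
    }
  ]
}
```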
Option 9
object
The elasticsearch destination writes logs to an Elasticsearch cluster.
Supported pipeline types: logs
api_version
enum
The Elasticsearch API version to use. Set to auto to auto-detect.
Allowed enum values: auto,v6,v7,v8
auth
object
Authentication settings for the Elasticsearch destination.
When strategy is basic, use username_key and password_key to reference credentials stored in environment variables or secrets.
password_key
string
Name of the environment variable or secret that holds the Elasticsearch password (used when strategy is basic).
strategy [required]
enum
The authentication strategy to use.
Allowed enum values: basic,aws
username_key
string
Name of the environment variable or secret that holds the Elasticsearch username (used when strategy is basic).
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
bulk_index
string
The index to write logs to in Elasticsearch.
data_stream
object
Configuration options for writing to Elasticsearch Data Streams instead of a fixed index.
dataset
string
The data stream dataset for your logs. This groups logs by their source or application.
dtype
string
The data stream type for your logs. This determines how logs are categorized within the data stream.
namespace
string
The data stream namespace for your logs. This separates logs into different environments or domains.
endpoint_url_key
string
Name of the environment variable or secret that holds the Elasticsearch endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The destination type. The value should always be elasticsearch.
Allowed enum values: elasticsearch
default: elasticsearch
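An elasticsearch destination sketch using basic authentication (index name and secret names are illustrative):

```json
{
  "id": "elasticsearch-destination",
  "type": "elasticsearch",
  "inputs": ["my-processor"],
  "api_version": "auto",
  "auth": {
    "strategy": "basic",
    "username_key": "ES_USERNAME",
    "password_key": "ES_PASSWORD"
  },
  "bulk_index": "pipeline-logs",
  "endpoint_url_key": "ES_ENDPOINT_URL"
}
```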
Option 10
object
The google_chronicle destination sends logs to Google Chronicle.
Supported pipeline types: logs
auth
object
Google Cloud credentials used to authenticate with Google Chronicle.
credentials_file [required]
string
Path to the Google Cloud service account key file.
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
customer_id [required]
string
The Google Chronicle customer ID.
encoding
enum
The encoding format for the logs sent to Chronicle.
Allowed enum values: json,raw_message
endpoint_url_key
string
Name of the environment variable or secret that holds the Google Chronicle endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
log_type
string
The log type metadata associated with the Chronicle destination.
type [required]
enum
The destination type. The value should always be google_chronicle.
Allowed enum values: google_chronicle
default: google_chronicle
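A google_chronicle destination sketch (the customer ID, key file path, and log type are illustrative placeholders):

```json
{
  "id": "chronicle-destination",
  "type": "google_chronicle",
  "inputs": ["my-processor"],
  "auth": {
    "credentials_file": "/var/secrets/gcp-service-account.json"
  },
  "customer_id": "00000000-0000-0000-0000-000000000000",
  "encoding": "json",
  "log_type": "MY_LOG_TYPE"
}
```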
Option 11
object
The google_cloud_storage destination stores logs in a Google Cloud Storage (GCS) bucket.
It requires a bucket name, Google Cloud authentication, and metadata fields.
Supported pipeline types: logs
acl
enum
Access control list setting for objects written to the bucket.
Allowed enum values: private,project-private,public-read,authenticated-read,bucket-owner-read,bucket-owner-full-control
auth
object
Google Cloud credentials used to authenticate with Google Cloud Storage.
credentials_file [required]
string
Path to the Google Cloud service account key file.
bucket [required]
string
Name of the GCS bucket.
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
id [required]
string
Unique identifier for the destination component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
key_prefix
string
Optional prefix for object keys within the GCS bucket.
metadata
[object]
Custom metadata to attach to each object uploaded to the GCS bucket.
name [required]
string
The metadata key.
value [required]
string
The metadata value.
storage_class [required]
enum
Storage class used for objects stored in GCS.
Allowed enum values: STANDARD,NEARLINE,COLDLINE,ARCHIVE
type [required]
enum
The destination type. Always google_cloud_storage.
Allowed enum values: google_cloud_storage
default: google_cloud_storage
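A google_cloud_storage destination sketch showing the required bucket and storage class alongside the optional ACL, prefix, and metadata fields (all values illustrative):

```json
{
  "id": "gcs-destination",
  "type": "google_cloud_storage",
  "inputs": ["my-processor"],
  "auth": {
    "credentials_file": "/var/secrets/gcp-service-account.json"
  },
  "bucket": "example-log-archive",
  "storage_class": "NEARLINE",
  "acl": "project-private",
  "key_prefix": "pipeline-logs/",
  "metadata": [
    { "name": "team", "value": "platform" }
  ]
}
```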
Option 12
object
The google_pubsub destination publishes logs to a Google Cloud Pub/Sub topic.
Supported pipeline types: logs
auth
object
Google Cloud credentials used to authenticate with Google Cloud Pub/Sub.
credentials_file [required]
string
Path to the Google Cloud service account key file.
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
encoding [required]
enum
Encoding format for log events.
Allowed enum values: json,raw_message
endpoint_url_key
string
Name of the environment variable or secret that holds the Google Cloud Pub/Sub endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
project [required]
string
The Google Cloud project ID that owns the Pub/Sub topic.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
topic [required]
string
The Pub/Sub topic name to publish logs to.
type [required]
enum
The destination type. The value should always be google_pubsub.
Allowed enum values: google_pubsub
default: google_pubsub
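A google_pubsub destination sketch with the required project, topic, and encoding (project and topic names are illustrative):

```json
{
  "id": "pubsub-destination",
  "type": "google_pubsub",
  "inputs": ["my-processor"],
  "auth": {
    "credentials_file": "/var/secrets/gcp-service-account.json"
  },
  "project": "example-project",
  "topic": "pipeline-logs",
  "encoding": "json"
}
```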
Option 13
object
The kafka destination sends logs to Apache Kafka topics.
Supported pipeline types: logs
bootstrap_servers_key
string
Name of the environment variable or secret that holds the Kafka bootstrap servers list.
compression
enum
Compression codec for Kafka messages.
Allowed enum values: none,gzip,snappy,lz4,zstd
encoding [required]
enum
Encoding format for log events.
Allowed enum values: json,raw_message
headers_key
string
The field name to use for Kafka message headers.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
key_field
string
The field name to use as the Kafka message key.
librdkafka_options
[object]
Optional list of advanced Kafka producer configuration options, defined as key-value pairs.
name [required]
string
The name of the librdkafka configuration option to set.
value [required]
string
The value assigned to the specified librdkafka configuration option.
message_timeout_ms
int64
Maximum time in milliseconds to wait for message delivery confirmation.
rate_limit_duration_secs
int64
Duration in seconds for the rate limit window.
rate_limit_num
int64
Maximum number of messages allowed per rate limit duration.
sasl
object
Specifies the SASL mechanism for authenticating with a Kafka cluster.
mechanism
enum
SASL mechanism used for Kafka authentication.
Allowed enum values: PLAIN,SCRAM-SHA-256,SCRAM-SHA-512
password_key
string
Name of the environment variable or secret that holds the SASL password.
username_key
string
Name of the environment variable or secret that holds the SASL username.
socket_timeout_ms
int64
Socket timeout in milliseconds for network requests.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
topic [required]
string
The Kafka topic name to publish logs to.
type [required]
enum
The destination type. The value should always be kafka.
Allowed enum values: kafka
default: kafka
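A kafka destination sketch combining the required topic and encoding with SASL authentication and a pass-through librdkafka option (topic, secret names, and the librdkafka setting shown are illustrative):

```json
{
  "id": "kafka-destination",
  "type": "kafka",
  "inputs": ["my-processor"],
  "topic": "pipeline-logs",
  "encoding": "json",
  "compression": "zstd",
  "bootstrap_servers_key": "KAFKA_BOOTSTRAP_SERVERS",
  "sasl": {
    "mechanism": "SCRAM-SHA-512",
    "username_key": "KAFKA_USERNAME",
    "password_key": "KAFKA_PASSWORD"
  },
  "librdkafka_options": [
    { "name": "request.required.acks", "value": "all" }
  ]
}
```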
Option 14
object
The microsoft_sentinel destination forwards logs to Microsoft Sentinel.
Supported pipeline types: logs
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
client_id [required]
string
Azure AD client ID used for authentication.
client_secret_key
string
Name of the environment variable or secret that holds the Azure AD client secret.
dce_uri_key
string
Name of the environment variable or secret that holds the Data Collection Endpoint (DCE) URI.
dcr_immutable_id [required]
string
The immutable ID of the Data Collection Rule (DCR).
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
table [required]
string
The name of the Log Analytics table where logs are sent.
tenant_id [required]
string
Azure AD tenant ID.
type [required]
enum
The destination type. The value should always be microsoft_sentinel.
Allowed enum values: microsoft_sentinel
default: microsoft_sentinel
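A microsoft_sentinel destination sketch (tenant ID, client ID, DCR immutable ID, table name, and secret names are illustrative placeholders):

```json
{
  "id": "sentinel-destination",
  "type": "microsoft_sentinel",
  "inputs": ["my-processor"],
  "tenant_id": "00000000-0000-0000-0000-000000000000",
  "client_id": "11111111-1111-1111-1111-111111111111",
  "client_secret_key": "AZURE_CLIENT_SECRET",
  "dce_uri_key": "SENTINEL_DCE_URI",
  "dcr_immutable_id": "dcr-00000000000000000000000000000000",
  "table": "Custom-PipelineLogs"
}
```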
Option 15
object
The new_relic destination sends logs to the New Relic platform.
Supported pipeline types: logs
account_id_key
string
Name of the environment variable or secret that holds the New Relic account ID.
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
license_key_key
string
Name of the environment variable or secret that holds the New Relic license key.
region [required]
enum
The New Relic region.
Allowed enum values: us,eu
type [required]
enum
The destination type. The value should always be new_relic.
Allowed enum values: new_relic
default: new_relic
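A new_relic destination sketch with the required region and the secret references for account ID and license key (secret names are illustrative):

```json
{
  "id": "new-relic-destination",
  "type": "new_relic",
  "inputs": ["my-processor"],
  "region": "us",
  "account_id_key": "NEW_RELIC_ACCOUNT_ID",
  "license_key_key": "NEW_RELIC_LICENSE_KEY"
}
```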
Option 16
object
The opensearch destination writes logs to an OpenSearch cluster.
Supported pipeline types: logs
auth
object
Authentication settings for the OpenSearch destination.
When strategy is basic, use username_key and password_key to reference credentials stored in environment variables or secrets.
password_key
string
Name of the environment variable or secret that holds the OpenSearch password (used when strategy is basic).
strategy [required]
enum
The authentication strategy to use.
Allowed enum values: basic,aws
username_key
string
Name of the environment variable or secret that holds the OpenSearch username (used when strategy is basic).
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
bulk_index
string
The index to write logs to.
data_stream
object
Configuration options for writing to OpenSearch Data Streams instead of a fixed index.
dataset
string
The data stream dataset for your logs. This groups logs by their source or application.
dtype
string
The data stream type for your logs. This determines how logs are categorized within the data stream.
namespace
string
The data stream namespace for your logs. This separates logs into different environments or domains.
endpoint_url_key
string
Name of the environment variable or secret that holds the OpenSearch endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The destination type. The value should always be opensearch.
Allowed enum values: opensearch
default: opensearch
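An opensearch destination sketch that writes to a data stream using AWS authentication instead of a fixed bulk_index (the dataset, namespace, and secret name are illustrative):

```json
{
  "id": "opensearch-destination",
  "type": "opensearch",
  "inputs": ["my-processor"],
  "auth": {
    "strategy": "aws"
  },
  "endpoint_url_key": "OPENSEARCH_ENDPOINT_URL",
  "data_stream": {
    "dtype": "logs",
    "dataset": "myapp",
    "namespace": "production"
  }
}
```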
Option 17
object
The rsyslog destination forwards logs to an external rsyslog server over TCP or UDP using the syslog protocol.
Supported pipeline types: logs
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
endpoint_url_key
string
Name of the environment variable or secret that holds the syslog server endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
keepalive
int64
Optional socket keepalive duration in milliseconds.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The destination type. The value should always be rsyslog.
Allowed enum values: rsyslog
default: rsyslog
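Putting the fields above together, a minimal rsyslog destination might look like the following sketch (the component IDs, environment variable name, and buffer sizing are illustrative placeholders, not required values):

```json
{
  "id": "rsyslog-dest",
  "type": "rsyslog",
  "inputs": ["filter-processor"],
  "endpoint_url_key": "RSYSLOG_ENDPOINT_URL",
  "keepalive": 60000,
  "buffer": { "type": "disk", "max_size": 1073741824, "when_full": "block" }
}
```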
Option 18
object
The sentinel_one destination sends logs to SentinelOne.
Supported pipeline types: logs
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
region [required]
enum
The SentinelOne region to send logs to.
Allowed enum values: us,eu,ca,data_set_us
token_key
string
Name of the environment variable or secret that holds the SentinelOne API token.
type [required]
enum
The destination type. The value should always be sentinel_one.
Allowed enum values: sentinel_one
default: sentinel_one
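A minimal sentinel_one destination built from the fields above might look like this (the component IDs and secret name are placeholders):

```json
{
  "id": "sentinelone-dest",
  "type": "sentinel_one",
  "inputs": ["parse-grok-processor"],
  "region": "us",
  "token_key": "S1_API_TOKEN"
}
```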
Option 19
object
The socket destination sends logs over TCP or UDP to a remote server.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the socket address (host:port).
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
encoding [required]
enum
Encoding format for log events.
Allowed enum values: json,raw_message
framing [required]
<oneOf>
Framing method configuration.
Option 1
object
Each log event is delimited by a newline character.
method [required]
enum
The framing method. The value should always be newline_delimited.
Allowed enum values: newline_delimited
Option 2
object
Event data is not delimited at all.
method [required]
enum
The framing method. The value should always be bytes.
Allowed enum values: bytes
Option 3
object
Each log event is separated using the specified delimiter character.
delimiter [required]
string
A single ASCII character used as a delimiter.
method [required]
enum
The framing method. The value should always be character_delimited.
Allowed enum values: character_delimited
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
mode [required]
enum
Protocol used to send logs.
Allowed enum values: tcp,udp
tls
object
TLS configuration. Relevant only when mode is tcp.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The destination type. The value should always be socket.
Allowed enum values: socket
default: socket
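Combining the fields above, a TCP socket destination with newline-delimited JSON framing might be sketched as follows (IDs and the env var name are placeholders):

```json
{
  "id": "socket-dest",
  "type": "socket",
  "inputs": ["filter-1"],
  "mode": "tcp",
  "encoding": "json",
  "framing": { "method": "newline_delimited" },
  "address_key": "SOCKET_ADDRESS"
}
```

With mode set to udp, the tls object would be omitted, since TLS is relevant only for TCP.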
Option 20
object
The splunk_hec destination forwards logs to Splunk using the HTTP Event Collector (HEC).
Supported pipeline types: logs
auto_extract_timestamp
boolean
If true, Splunk tries to extract timestamps from incoming log events.
If false, Splunk assigns the time the event was received.
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
encoding
enum
Encoding format for log events.
Allowed enum values: json,raw_message
endpoint_url_key
string
Name of the environment variable or secret that holds the Splunk HEC endpoint URL.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
index
string
Optional name of the Splunk index where logs are written.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
sourcetype
string
The Splunk sourcetype to assign to log events.
token_key
string
Name of the environment variable or secret that holds the Splunk HEC token.
type [required]
enum
The destination type. The value should always be splunk_hec.
Allowed enum values: splunk_hec
default: splunk_hec
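Assembled from the fields above, a splunk_hec destination might look like this sketch (the IDs, secret names, index, and sourcetype are illustrative):

```json
{
  "id": "splunk-dest",
  "type": "splunk_hec",
  "inputs": ["quota-processor"],
  "endpoint_url_key": "SPLUNK_HEC_URL",
  "token_key": "SPLUNK_HEC_TOKEN",
  "encoding": "json",
  "index": "main",
  "sourcetype": "_json",
  "auto_extract_timestamp": true
}
```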
Option 21
object
The sumo_logic destination forwards logs to Sumo Logic.
Supported pipeline types: logs
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
encoding
enum
The output encoding format.
Allowed enum values: json,raw_message,logfmt
endpoint_url_key
string
Name of the environment variable or secret that holds the Sumo Logic HTTP endpoint URL.
header_custom_fields
[object]
A list of custom headers to include in the request to Sumo Logic.
name [required]
string
The header field name.
value [required]
string
The header field value.
header_host_name
string
Optional override for the host name header.
header_source_category
string
Optional override for the source category header.
header_source_name
string
Optional override for the source name header.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The destination type. The value should always be sumo_logic.
Allowed enum values: sumo_logic
default: sumo_logic
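A sumo_logic destination using the header overrides above might be sketched like this (the env var name, source category, and custom header are placeholders):

```json
{
  "id": "sumo-dest",
  "type": "sumo_logic",
  "inputs": ["add-fields-processor"],
  "encoding": "json",
  "endpoint_url_key": "SUMO_HTTP_SOURCE_URL",
  "header_source_category": "prod/web",
  "header_custom_fields": [
    { "name": "X-Team", "value": "platform" }
  ]
}
```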
Option 22
object
The syslog_ng destination forwards logs to an external syslog-ng server over TCP or UDP using the syslog protocol.
Supported pipeline types: logs
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
endpoint_url_key
string
Name of the environment variable or secret that holds the syslog-ng server endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
keepalive
int64
Optional socket keepalive duration in milliseconds.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The destination type. The value should always be syslog_ng.
Allowed enum values: syslog_ng
default: syslog_ng
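A syslog_ng destination with mutual TLS enabled might look like the following sketch (certificate paths and the env var name are illustrative placeholders):

```json
{
  "id": "syslogng-dest",
  "type": "syslog_ng",
  "inputs": ["dedupe-processor"],
  "endpoint_url_key": "SYSLOG_NG_ENDPOINT_URL",
  "tls": {
    "crt_file": "/etc/certs/client.crt",
    "key_file": "/etc/certs/client.key",
    "ca_file": "/etc/certs/ca.crt"
  }
}
```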
Option 23
object
The datadog_metrics destination forwards metrics to Datadog.
Supported pipeline types: metrics
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The destination type. The value should always be datadog_metrics.
Allowed enum values: datadog_metrics
default: datadog_metrics
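The datadog_metrics destination has only the three required fields, so a complete instance is short (IDs are placeholders):

```json
{
  "id": "dd-metrics-dest",
  "type": "datadog_metrics",
  "inputs": ["generate-metrics-processor"]
}
```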
pipeline_type
enum
The type of data being ingested. Defaults to logs if not specified.
Allowed enum values: logs,metrics
default: logs
processor_groups
[object]
A list of processor groups that transform or enrich log data.
display_name
string
The display name for a component.
enabled [required]
boolean
Whether this processor group is enabled.
id [required]
string
The unique identifier for the processor group.
include [required]
string
Conditional expression for when this processor group should execute.
inputs [required]
[string]
A list of IDs for components whose output is used as the input for this processor group.
processors [required]
[ <oneOf>]
Processors applied sequentially within this group. Events flow through each processor in order.
Option 1
object
The filter processor allows conditional processing of logs/metrics based on a Datadog search query. Logs/metrics that match the include query are passed through; others are discarded.
Supported pipeline types: logs, metrics
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs/metrics should pass through the filter. Logs/metrics that match this query continue to downstream components; others are dropped.
type [required]
enum
The processor type. The value should always be filter.
Allowed enum values: filter
default: filter
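For example, a filter processor that keeps only error logs from a hypothetical web service and drops everything else might look like this (the query and IDs are illustrative):

```json
{
  "id": "filter-errors",
  "type": "filter",
  "enabled": true,
  "display_name": "Keep only web errors",
  "include": "status:error service:web"
}
```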
Option 2
object
The add_env_vars processor adds environment variable values to log events.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used to reference this processor in the pipeline.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be add_env_vars.
Allowed enum values: add_env_vars
default: add_env_vars
variables [required]
[object]
A list of environment variable mappings to apply to log fields.
field [required]
string
The target field in the log event.
name [required]
string
The name of the environment variable to read.
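An add_env_vars processor that copies a hypothetical AWS_REGION environment variable into each log might be sketched as:

```json
{
  "id": "add-env-vars-1",
  "type": "add_env_vars",
  "enabled": true,
  "include": "*",
  "variables": [
    { "field": "deployment.region", "name": "AWS_REGION" }
  ]
}
```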
Option 3
object
The add_fields processor adds static key-value fields to logs.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[object]
A list of static fields (key-value pairs) that is added to each log event processed by this component.
name [required]
string
The field name.
value [required]
string
The field value.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be add_fields.
Allowed enum values: add_fields
default: add_fields
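An add_fields processor attaching two static key-value pairs to every log might look like this (field names and values are illustrative):

```json
{
  "id": "add-fields-1",
  "type": "add_fields",
  "enabled": true,
  "include": "*",
  "fields": [
    { "name": "team", "value": "platform" },
    { "name": "env", "value": "prod" }
  ]
}
```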
Option 4
object
The add_hostname processor adds the hostname to log events.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be add_hostname.
Allowed enum values: add_hostname
default: add_hostname
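Since add_hostname takes no extra configuration beyond the common fields, a complete instance is minimal:

```json
{
  "id": "add-hostname-1",
  "type": "add_hostname",
  "enabled": true,
  "include": "*"
}
```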
Option 5
object
The custom_processor processor transforms events using Vector Remap Language (VRL) scripts with advanced filtering capabilities.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets. This field should always be set to * for the custom_processor processor.
default: *
remaps [required]
[object]
Array of VRL remap rules.
drop_on_error [required]
boolean
Whether to drop events that caused errors during processing.
enabled
boolean
Whether this remap rule is enabled.
include [required]
string
A Datadog search query used to filter events for this specific remap rule.
name [required]
string
A descriptive name for this remap rule.
source [required]
string
The VRL script source code that defines the processing logic.
type [required]
enum
The processor type. The value should always be custom_processor.
Allowed enum values: custom_processor
default: custom_processor
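A custom_processor with one remap rule might be sketched as below; the VRL source shown (redacting email addresses with VRL's replace function) and the filter query are illustrative assumptions, not part of the schema:

```json
{
  "id": "custom-vrl-1",
  "type": "custom_processor",
  "enabled": true,
  "include": "*",
  "remaps": [
    {
      "name": "redact emails",
      "enabled": true,
      "include": "service:checkout",
      "drop_on_error": false,
      "source": ".message = replace(string!(.message), r'[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+', \"[REDACTED]\")"
    }
  ]
}
```

Note that the top-level include should be * for this processor; per-event filtering happens in each remap rule's own include query.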
Option 6
object
The datadog_tags processor includes or excludes specific Datadog tags in your logs.
Supported pipeline types: logs
action [required]
enum
The action to take on tags with matching keys.
Allowed enum values: include,exclude
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
keys [required]
[string]
A list of tag keys.
mode [required]
enum
The processing mode.
Allowed enum values: filter
type [required]
enum
The processor type. The value should always be datadog_tags.
Allowed enum values: datadog_tags
default: datadog_tags
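A datadog_tags processor that strips two tag keys from matching logs might look like this (the tag keys are placeholders):

```json
{
  "id": "dd-tags-1",
  "type": "datadog_tags",
  "enabled": true,
  "include": "*",
  "mode": "filter",
  "action": "exclude",
  "keys": ["internal_debug", "pod_template_hash"]
}
```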
Option 7
object
The dedupe processor removes duplicate fields in log events.
Supported pipeline types: logs
cache
object
Configuration for the cache used to detect duplicates.
num_events [required]
int64
The number of events to cache for duplicate detection.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[string]
A list of log field paths to check for duplicates.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
mode [required]
enum
The deduplication mode to apply to the fields.
Allowed enum values: match,ignore
type [required]
enum
The processor type. The value should always be dedupe.
Allowed enum values: dedupe
default: dedupe
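A dedupe processor comparing two fields across a cache of recent events might be sketched as follows (field paths and cache size are illustrative):

```json
{
  "id": "dedupe-1",
  "type": "dedupe",
  "enabled": true,
  "include": "*",
  "mode": "match",
  "fields": ["message", "host"],
  "cache": { "num_events": 5000 }
}
```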
Option 8
object
The enrichment_table processor enriches logs using a static CSV file, GeoIP database, or reference table. Exactly one of file, geoip, or reference_table must be configured.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
file
object
Defines a static enrichment table loaded from a CSV file.
encoding [required]
object
File encoding format.
delimiter [required]
string
The delimiter used to separate values in the file.
includes_headers [required]
boolean
Whether the first line of the file contains column headers.
type [required]
enum
Specifies the encoding format (e.g., CSV) used for enrichment tables.
Allowed enum values: csv
key [required]
[object]
Key fields used to look up enrichment values.
column [required]
string
The name of the enrichment table column to match against.
comparison [required]
enum
Defines how to compare key fields for enrichment table lookups.
Allowed enum values: equals
field [required]
string
The name of the log field whose value is compared against the column.
path [required]
string
Path to the CSV file.
schema [required]
[object]
Schema defining column names and their types.
column [required]
string
The name of the column as it appears in the file.
type [required]
enum
Declares allowed data types for enrichment table columns.
Allowed enum values: string,boolean,integer,float,date,timestamp
geoip
object
Uses a GeoIP database to enrich logs based on an IP field.
key_field [required]
string
Path to the IP field in the log.
locale [required]
string
Locale used to resolve geographical names.
path [required]
string
Path to the GeoIP database file.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
reference_table
object
Uses a Datadog reference table to enrich logs.
app_key_key
string
Name of the environment variable or secret that holds the Datadog application key used to access the reference table.
columns
[string]
List of column names to include from the reference table. If not provided, all columns are included.
key_field [required]
string
Path to the field in the log event to match against the reference table.
table_id [required]
string
The unique identifier of the reference table.
target [required]
string
Path where enrichment results should be stored in the log.
type [required]
enum
The processor type. The value should always be enrichment_table.
Allowed enum values: enrichment_table
default: enrichment_table
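Because exactly one of file, geoip, or reference_table must be set, a CSV-file variant of the enrichment_table processor might be sketched as below (the file path, column names, and lookup key are illustrative assumptions):

```json
{
  "id": "enrich-1",
  "type": "enrichment_table",
  "enabled": true,
  "include": "*",
  "target": "customer",
  "file": {
    "path": "/etc/pipeline/customers.csv",
    "encoding": { "type": "csv", "delimiter": ",", "includes_headers": true },
    "key": [
      { "column": "account_id", "comparison": "equals", "field": "account_id" }
    ],
    "schema": [
      { "column": "account_id", "type": "string" },
      { "column": "tier", "type": "string" }
    ]
  }
}
```

On a match, the selected row's values are written under the target path (customer in this sketch).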
Option 9
object
The generate_datadog_metrics processor creates custom metrics from logs and sends them to Datadog.
Metrics can be counts, gauges, or distributions, and can optionally be grouped by log fields.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used to reference this component in other parts of the pipeline.
include
string
A Datadog search query used to determine which logs this processor targets.
metrics
[object]
Configuration for generating individual metrics.
group_by
[string]
Optional fields used to group the metric series.
include [required]
string
Datadog filter query to match logs for metric generation.
metric_type [required]
enum
Type of metric to create.
Allowed enum values: count,gauge,distribution
name [required]
string
Name of the custom metric to be created.
value [required]
<oneOf>
Specifies how the value of the generated metric is computed.
Option 1
object
Strategy that increments a generated metric by one for each matching event.
strategy [required]
enum
Increments the metric by 1 for each matching event.
Allowed enum values: increment_by_one
Option 2
object
Strategy that increments a generated metric based on the value of a log field.
field [required]
string
Name of the log field containing the numeric value to increment the metric by.
strategy [required]
enum
Uses a numeric field in the log event as the metric increment.
Allowed enum values: increment_by_field
type [required]
enum
The processor type. The value should always be generate_datadog_metrics.
Allowed enum values: generate_datadog_metrics
default: generate_datadog_metrics
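A generate_datadog_metrics processor counting error logs per region might be sketched as follows (the metric name, filter query, and group-by field are illustrative):

```json
{
  "id": "gen-metrics-1",
  "type": "generate_datadog_metrics",
  "enabled": true,
  "include": "*",
  "metrics": [
    {
      "name": "checkout.errors",
      "metric_type": "count",
      "include": "service:checkout status:error",
      "group_by": ["region"],
      "value": { "strategy": "increment_by_one" }
    }
  ]
}
```

To sum a numeric log field instead, the value object would use the increment_by_field strategy together with a field name.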
Option 10
object
The ocsf_mapper processor transforms logs into the OCSF schema using a predefined mapping configuration.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used to reference this component in other parts of the pipeline.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
mappings [required]
[object]
A list of mapping rules to convert events to the OCSF format.
include [required]
string
A Datadog search query used to select the logs that this mapping should apply to.
mapping [required]
<oneOf>
Defines a single mapping rule for transforming logs into the OCSF schema.
Option 1
object
Custom OCSF mapping configuration for transforming logs.
mapping [required]
[object]
A list of field mapping rules for transforming log fields to OCSF schema fields.
default
The default value to use if the source field is missing or empty.
dest [required]
string
The destination OCSF field path.
lookup
object
Lookup table configuration for mapping source values to destination values.
source
The source field path from the log event.
sources
Multiple source field paths for combined mapping.
value
A static value to use for the destination field.
metadata [required]
object
Metadata for the custom OCSF mapping.
class [required]
string
The OCSF event class name.
profiles
[string]
A list of OCSF profiles to apply.
version [required]
string
The OCSF schema version.
version [required]
int64
The version of the custom mapping configuration.
type [required]
enum
The processor type. The value should always be ocsf_mapper.
Allowed enum values: ocsf_mapper
default: ocsf_mapper
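An ocsf_mapper processor with a single custom mapping might be sketched as below; the source query, OCSF class name, schema version, and field paths are illustrative assumptions, not values prescribed by the schema:

```json
{
  "id": "ocsf-1",
  "type": "ocsf_mapper",
  "enabled": true,
  "include": "*",
  "mappings": [
    {
      "include": "source:okta",
      "mapping": {
        "version": 1,
        "metadata": { "class": "Authentication", "version": "1.1.0" },
        "mapping": [
          { "dest": "actor.user.name", "source": "usr.name" }
        ]
      }
    }
  ]
}
```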
Option 11
object
The parse_grok processor extracts structured fields from unstructured log messages using Grok patterns.
Supported pipeline types: logs
disable_library_rules
boolean
If set to true, disables the default Grok rules provided by Datadog.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
A unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
rules [required]
[object]
The list of Grok parsing rules. If multiple matching rules are provided, they are evaluated in order. The first successful match is applied.
match_rules [required]
[object]
A list of Grok parsing rules that define how to extract fields from the source field.
Each rule must contain a name and a valid Grok pattern.
name [required]
string
The name of the rule.
rule [required]
string
The definition of the Grok rule.
source [required]
string
The name of the field in the log event to apply the Grok rules to.
support_rules
[object]
A list of Grok helper rules that can be referenced by the parsing rules.
name [required]
string
The name of the Grok helper rule.
rule [required]
string
The definition of the Grok helper rule.
type [required]
enum
The processor type. The value should always be parse_grok.
Allowed enum values: parse_grok
default: parse_grok
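A parse_grok processor with one parsing rule might be sketched as follows; the rule uses Datadog-style Grok matchers (ip, word, notSpace), and the pattern itself is an illustrative assumption:

```json
{
  "id": "grok-1",
  "type": "parse_grok",
  "enabled": true,
  "include": "*",
  "disable_library_rules": false,
  "rules": [
    {
      "source": "message",
      "match_rules": [
        {
          "name": "access_log",
          "rule": "%{ip:client_ip} %{word:http_method} %{notSpace:path}"
        }
      ],
      "support_rules": []
    }
  ]
}
```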
Option 12
object
The parse_json processor extracts JSON from a specified field and flattens it into the event. This is useful when logs contain embedded JSON as a string.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
field [required]
string
The name of the log field that contains a JSON string.
id [required]
string
A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be parse_json.
Allowed enum values: parse_json
default: parse_json
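A complete parse_json processor needs only the common fields plus the source field name (placeholders shown):

```json
{
  "id": "parse-json-1",
  "type": "parse_json",
  "enabled": true,
  "include": "*",
  "field": "message"
}
```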
Option 13
object
The parse_xml processor parses XML from a specified field and extracts it into the event.
Supported pipeline types: logs
always_use_text_key
boolean
Whether to always use a text key for element content.
attr_prefix
string
The prefix to use for XML attributes in the parsed output.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
field [required]
string
The name of the log field that contains an XML string.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
include_attr
boolean
Whether to include XML attributes in the parsed output.
parse_bool
boolean
Whether to parse boolean values from strings.
parse_null
boolean
Whether to parse null values.
parse_number
boolean
Whether to parse numeric values from strings.
text_key
string
The key name to use for text content within XML elements. Must be at least 1 character if specified.
type [required]
enum
The processor type. The value should always be parse_xml.
Allowed enum values: parse_xml
default: parse_xml
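A parse_xml processor that keeps XML attributes with an @ prefix and coerces numbers might be sketched as follows (the field name and option values are illustrative):

```json
{
  "id": "parse-xml-1",
  "type": "parse_xml",
  "enabled": true,
  "include": "*",
  "field": "message",
  "include_attr": true,
  "attr_prefix": "@",
  "text_key": "text",
  "parse_number": true
}
```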
Option 14
object
The quota processor measures logging traffic for logs that match a specified filter. When the configured daily quota is met, the processor can drop matching logs or trigger an alert.
Supported pipeline types: logs
display_name
string
The display name for a component.
drop_events
boolean
If set to true, logs that match the quota filter and are sent after the quota is exceeded are dropped. Logs that do not match the filter continue through the pipeline. Note: You can set either drop_events or overflow_action, but not both.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
ignore_when_missing_partitions
boolean
If true, the processor skips quota checks when partition fields are missing from the logs.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
limit [required]
object
The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.
enforce [required]
enum
The unit used to enforce the quota: bytes for data size or events for event count.
Allowed enum values: bytes,events
limit [required]
int64
The limit for quota enforcement.
name [required]
string
Name of the quota.
overflow_action
enum
The action to take when the quota or bucket limit is exceeded. Options:
drop: Drop the event.
no_action: Let the event pass through.
overflow_routing: Route to an overflow destination.
Allowed enum values: drop,no_action,overflow_routing
overrides
[object]
A list of alternate quota rules that apply to specific sets of events, identified by matching field values. Each override can define a custom limit.
fields [required]
[object]
A list of field matchers used to apply a specific override. If an event matches all listed key-value pairs, the corresponding override limit is enforced.
name [required]
string
The field name.
value [required]
string
The field value.
limit [required]
object
The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.
enforce [required]
enum
The unit used to enforce the quota: bytes for data size or events for event count.
Allowed enum values: bytes,events
limit [required]
int64
The limit for quota enforcement.
partition_fields
[string]
A list of fields used to segment log traffic for quota enforcement. Quotas are tracked independently by unique combinations of these field values.
too_many_buckets_action
enum
The action to take when the quota or bucket limit is exceeded. Options:
drop: Drop the event.
no_action: Let the event pass through.
overflow_routing: Route to an overflow destination.
Allowed enum values: drop,no_action,overflow_routing
type [required]
enum
The processor type. The value should always be quota.
Allowed enum values: quota
default: quota
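Assembled from the fields above, a quota processor entry might look like the following sketch. All IDs, queries, and limit values are illustrative examples, not defaults.

```python
# Illustrative quota processor configuration; values are hypothetical.
quota_processor = {
    "type": "quota",
    "id": "quota-1",                        # referenced by downstream components
    "enabled": True,
    "include": "service:checkout",          # Datadog search query for targeted logs
    "name": "checkout-daily-quota",
    "limit": {"enforce": "bytes", "limit": 10_000_000_000},
    "partition_fields": ["service", "env"],  # quotas tracked per unique combination
    "overflow_action": "drop",               # drop, no_action, or overflow_routing
    "overrides": [
        {   # a higher limit for events where env matches prod
            "fields": [{"name": "env", "value": "prod"}],
            "limit": {"enforce": "bytes", "limit": 50_000_000_000},
        }
    ],
}
```

Note that drop_events and overflow_action are mutually exclusive; this sketch uses overflow_action only.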
Option 15
object
The reduce processor aggregates and merges logs based on matching keys and merge strategies.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
group_by [required]
[string]
A list of fields used to group log events for merging.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
merge_strategies [required]
[object]
List of merge strategies defining how values from grouped events should be combined.
path [required]
string
The field path in the log event.
strategy [required]
enum
The merge strategy to apply.
Allowed enum values: discard,retain,sum,max,min,array,concat,concat_newline,concat_raw,shortest_array,longest_array,flat_unique
type [required]
enum
The processor type. The value should always be reduce.
Allowed enum values: reduce
default: reduce
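Based on the schema above, a reduce processor entry could be sketched as follows; the field paths and grouping keys are illustrative.

```python
# Illustrative reduce processor: merge logs sharing the same host and
# service, summing a counter and collecting messages into an array.
reduce_processor = {
    "type": "reduce",
    "id": "reduce-1",
    "enabled": True,
    "include": "*",
    "group_by": ["host", "service"],
    "merge_strategies": [
        {"path": "retry_count", "strategy": "sum"},    # numeric values added up
        {"path": "message", "strategy": "array"},      # values collected in a list
    ],
}
```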
Option 16
object
The remove_fields processor deletes specified fields from logs.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[string]
A list of field names to be removed from each log event.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be remove_fields.
Allowed enum values: remove_fields
default: remove_fields
Option 17
object
The rename_fields processor changes field names.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[object]
A list of rename rules specifying which fields to rename in the event, what to rename them to, and whether to preserve the original fields.
destination [required]
string
The field name to assign the renamed value to.
preserve_source [required]
boolean
Indicates whether the original source field should be kept (true) or removed (false) after renaming.
source [required]
string
The original field name in the log event that should be renamed.
id [required]
string
A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be rename_fields.
Allowed enum values: rename_fields
default: rename_fields
Option 18
object
The sample processor allows probabilistic sampling of logs at a fixed rate.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
group_by
[string]
Optional list of fields to group events by. Each group is sampled independently.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
percentage [required]
double
The percentage of logs to sample.
type [required]
enum
The processor type. The value should always be sample.
Allowed enum values: sample
default: sample
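A sample processor entry built from the fields above might look like this sketch; the query and rate are illustrative.

```python
# Illustrative sample processor: keep roughly 10% of debug logs,
# sampled independently per service via the optional group_by.
sample_processor = {
    "type": "sample",
    "id": "sample-1",
    "enabled": True,
    "include": "status:debug",
    "percentage": 10.0,           # double: percentage of logs to sample
    "group_by": ["service"],      # each group is sampled independently
}
```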
Option 19
object
The sensitive_data_scanner processor detects and optionally redacts sensitive data in log events.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
rules [required]
[object]
A list of rules for identifying and acting on sensitive data patterns.
keyword_options
object
Configuration for keywords used to reinforce sensitive data pattern detection.
keywords [required]
[string]
A list of keywords to match near the sensitive pattern.
proximity [required]
int64
Maximum number of tokens between a keyword and a sensitive value match.
name [required]
string
A name identifying the rule.
on_match [required]
<oneOf>
Defines what action to take when sensitive data is matched.
Option 1
object
Configuration for completely redacting matched sensitive data.
action [required]
enum
Action type that completely replaces the matched sensitive data with a fixed replacement string to remove all visibility.
Allowed enum values: redact
options [required]
object
Configuration for fully redacting sensitive data.
replace [required]
string
The string that replaces the matched sensitive data.
Option 2
object
Configuration for hashing matched sensitive values.
action [required]
enum
Action type that replaces the matched sensitive data with a hashed representation, preserving structure while securing content.
Allowed enum values: hash
options
object
Options for the hash action.
Option 3
object
Configuration for partially redacting matched sensitive data.
action [required]
enum
Action type that redacts part of the sensitive data while preserving a configurable number of characters, typically used for masking purposes (e.g., show last 4 digits of a credit card).
Allowed enum values: partial_redact
options [required]
object
Controls how partial redaction is applied, including character count and direction.
characters [required]
int64
The number of characters to redact from the matched value.
direction [required]
enum
Indicates whether to redact characters from the first or last part of the matched value.
Allowed enum values: first,last
pattern [required]
<oneOf>
Pattern detection configuration for identifying sensitive data using either a custom regex or a library reference.
Option 1
object
Defines a custom regex-based pattern for identifying sensitive data in logs.
options [required]
object
Options for defining a custom regex pattern.
description
string
Human-readable description providing context about a sensitive data scanner rule.
rule [required]
string
A regular expression used to detect sensitive values. Must be a valid regex.
type [required]
enum
Indicates a custom regular expression is used for matching.
Allowed enum values: custom
Option 2
object
Specifies a pattern from Datadog’s sensitive data detection library to match known sensitive data types.
options [required]
object
Options for selecting a predefined library pattern and enabling keyword support.
description
string
Human-readable description providing context about a sensitive data scanner rule.
id [required]
string
Identifier for a predefined pattern from the sensitive data scanner pattern library.
use_recommended_keywords
boolean
Whether to augment the pattern with recommended keywords (optional).
type [required]
enum
Indicates that a predefined library pattern is used.
Allowed enum values: library
scope [required]
<oneOf>
Determines which parts of the log the pattern-matching rule should be applied to.
Option 1
object
Includes only specific fields for sensitive data scanning.
options [required]
object
Fields to which the scope rule applies.
fields [required]
[string]
The list of field paths that the scope rule applies to.
target [required]
enum
Applies the rule only to included fields.
Allowed enum values: include
Option 2
object
Excludes specific fields from sensitive data scanning.
options [required]
object
Fields to which the scope rule applies.
fields [required]
[string]
The list of field paths that the scope rule applies to.
target [required]
enum
Excludes specific fields from processing.
Allowed enum values: exclude
Option 3
object
Applies scanning across all available fields.
target [required]
enum
Applies the rule to all fields.
Allowed enum values: all
tags [required]
[string]
Tags assigned to this rule for filtering and classification.
type [required]
enum
The processor type. The value should always be sensitive_data_scanner.
Allowed enum values: sensitive_data_scanner
default: sensitive_data_scanner
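Putting the rule, pattern, scope, and on_match pieces together, a sensitive_data_scanner processor might be sketched as follows. The rule name, regex, and keyword list are hypothetical examples.

```python
# Illustrative sensitive_data_scanner processor with one custom-regex rule
# that partially redacts credit-card-like numbers (4 characters, from the end).
sds_processor = {
    "type": "sensitive_data_scanner",
    "id": "sds-1",
    "enabled": True,
    "include": "*",
    "rules": [
        {
            "name": "mask-card-numbers",
            "tags": ["sensitive:credit-card"],
            "pattern": {
                "type": "custom",
                "options": {"rule": r"\b(?:\d[ -]?){13,16}\b"},  # custom regex
            },
            "scope": {"target": "all"},          # scan every available field
            "on_match": {
                "action": "partial_redact",
                "options": {"characters": 4, "direction": "last"},
            },
            "keyword_options": {"keywords": ["card", "cc"], "proximity": 10},
        }
    ],
}
```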
Option 20
object
The split_array processor splits array fields into separate events based on configured rules.
Supported pipeline types: logs
arrays [required]
[object]
A list of array split configurations.
field [required]
string
The path to the array field to split.
include [required]
string
A Datadog search query used to determine which logs this array split operation targets.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets. For split_array, this should typically be *.
type [required]
enum
The processor type. The value should always be split_array.
Allowed enum values: split_array
default: split_array
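From the fields above, a split_array processor entry might look like this sketch; the array field name is illustrative.

```python
# Illustrative split_array processor: emit one event per element of the
# "records" array field.
split_array_processor = {
    "type": "split_array",
    "id": "split-1",
    "enabled": True,
    "include": "*",                       # typically "*" for split_array
    "arrays": [
        {"field": "records", "include": "*"}   # one split rule per array field
    ],
}
```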
Option 21
object
The throttle processor limits the number of events that pass through over a given time window.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
group_by
[string]
Optional list of fields used to group events. The threshold is enforced independently for each group.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
threshold [required]
int64
The number of events allowed in the configured time window. Events sent after the threshold is reached are dropped.
type [required]
enum
The processor type. The value should always be throttle.
Allowed enum values: throttle
default: throttle
window [required]
double
The time window in seconds over which the threshold applies.
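Combining threshold, window, and the optional group_by, a throttle processor might be sketched like this; the numbers are illustrative.

```python
# Illustrative throttle processor: allow at most 1000 events per 60-second
# window for each host; events beyond the threshold are dropped.
throttle_processor = {
    "type": "throttle",
    "id": "throttle-1",
    "enabled": True,
    "include": "*",
    "threshold": 1000,       # int64: events allowed per window
    "window": 60.0,          # double: window length in seconds
    "group_by": ["host"],    # optional: threshold applied per host
}
```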
Option 22
object
The metric_tags processor filters metrics based on their tags using Datadog tag key patterns.
Supported pipeline types: metrics
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query that determines which metrics the processor targets.
rules [required]
[object]
A list of rules for filtering metric tags.
action [required]
enum
The action to take on tags with matching keys.
Allowed enum values: include,exclude
include [required]
string
A Datadog search query used to determine which metrics this rule targets.
keys [required]
[string]
A list of tag keys to include or exclude.
mode [required]
enum
The processing mode for tag filtering.
Allowed enum values: filter
type [required]
enum
The processor type. The value should always be metric_tags.
Allowed enum values: metric_tags
default: metric_tags
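A metric_tags processor entry built from the fields above might look like this sketch; the tag keys are illustrative examples of high-cardinality tags.

```python
# Illustrative metric_tags processor: strip high-cardinality tag keys
# from all metrics (this processor applies to metrics pipelines).
metric_tags_processor = {
    "type": "metric_tags",
    "id": "metric-tags-1",
    "enabled": True,
    "include": "*",
    "rules": [
        {
            "include": "*",                        # metrics this rule targets
            "action": "exclude",                   # include or exclude matching keys
            "keys": ["container_id", "pod_name"],
            "mode": "filter",
        }
    ],
}
```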
processors
[object]
DEPRECATED: A list of processor groups that transform or enrich log data.
Deprecated: This field is deprecated, you should now use the processor_groups field.
display_name
string
The display name for a component.
enabled [required]
boolean
Whether this processor group is enabled.
id [required]
string
The unique identifier for the processor group.
include [required]
string
Conditional expression for when this processor group should execute.
inputs [required]
[string]
A list of IDs for components whose output is used as the input for this processor group.
processors [required]
[ <oneOf>]
Processors applied sequentially within this group. Events flow through each processor in order.
Option 1
object
The filter processor allows conditional processing of logs/metrics based on a Datadog search query. Logs/metrics that match the include query are passed through; others are discarded.
Supported pipeline types: logs, metrics
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs/metrics should pass through the filter. Logs/metrics that match this query continue to downstream components; others are dropped.
type [required]
enum
The processor type. The value should always be filter.
Allowed enum values: filter
default: filter
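The filter processor is the simplest entry in the list; based on the fields above, a sketch might look like this, with an illustrative query.

```python
# Illustrative filter processor: only error logs from the payments service
# continue downstream; everything else is dropped.
filter_processor = {
    "type": "filter",
    "id": "filter-1",
    "enabled": True,
    "include": "service:payments status:error",   # Datadog search query
}
```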
Option 2
object
The add_env_vars processor adds environment variable values to log events.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used to reference this processor in the pipeline.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be add_env_vars.
Allowed enum values: add_env_vars
default: add_env_vars
variables [required]
[object]
A list of environment variable mappings to apply to log fields.
field [required]
string
The target field in the log event.
name [required]
string
The name of the environment variable to read.
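An add_env_vars processor entry might be sketched as follows; the variable name and target field are illustrative.

```python
# Illustrative add_env_vars processor: copy the value of the DD_ENV
# environment variable into the "env" field of each log event.
add_env_vars_processor = {
    "type": "add_env_vars",
    "id": "add-env-1",
    "enabled": True,
    "include": "*",
    "variables": [
        {"field": "env", "name": "DD_ENV"}   # one mapping per variable
    ],
}
```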
Option 3
object
The add_fields processor adds static key-value fields to logs.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[object]
A list of static fields (key-value pairs) that are added to each log event processed by this component.
name [required]
string
The field name.
value [required]
string
The field value.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be add_fields.
Allowed enum values: add_fields
default: add_fields
Option 4
object
The add_hostname processor adds the hostname to log events.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be add_hostname.
Allowed enum values: add_hostname
default: add_hostname
Option 5
object
The custom_processor processor transforms events using Vector Remap Language (VRL) scripts with advanced filtering capabilities.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets. This field should always be set to * for the custom_processor processor.
default: *
remaps [required]
[object]
Array of VRL remap rules.
drop_on_error [required]
boolean
Whether to drop events that caused errors during processing.
enabled
boolean
Whether this remap rule is enabled.
include [required]
string
A Datadog search query used to filter events for this specific remap rule.
name [required]
string
A descriptive name for this remap rule.
source [required]
string
The VRL script source code that defines the processing logic.
type [required]
enum
The processor type. The value should always be custom_processor.
Allowed enum values: custom_processor
default: custom_processor
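A custom_processor entry with a single remap rule might be sketched as follows; the VRL script is a hypothetical example, not taken from this reference.

```python
# Illustrative custom_processor with one VRL remap rule that normalizes
# the status field. The VRL source is a hypothetical example.
custom_processor = {
    "type": "custom_processor",
    "id": "custom-1",
    "enabled": True,
    "include": "*",                     # must be "*" for custom_processor
    "remaps": [
        {
            "name": "normalize-status",
            "enabled": True,
            "include": "service:web",   # filters events for this rule only
            "drop_on_error": False,     # keep events even if the script errors
            "source": ".status = downcase(string!(.status))",
        }
    ],
}
```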
Option 6
object
The datadog_tags processor includes or excludes specific Datadog tags in your logs.
Supported pipeline types: logs
action [required]
enum
The action to take on tags with matching keys.
Allowed enum values: include,exclude
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
keys [required]
[string]
A list of tag keys.
mode [required]
enum
The processing mode.
Allowed enum values: filter
type [required]
enum
The processor type. The value should always be datadog_tags.
Allowed enum values: datadog_tags
default: datadog_tags
Option 7
object
The dedupe processor removes duplicate fields in log events.
Supported pipeline types: logs
cache
object
Configuration for the cache used to detect duplicates.
num_events [required]
int64
The number of events to cache for duplicate detection.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[string]
A list of log field paths to check for duplicates.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
mode [required]
enum
The deduplication mode to apply to the fields.
Allowed enum values: match,ignore
type [required]
enum
The processor type. The value should always be dedupe.
Allowed enum values: dedupe
default: dedupe
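Based on the fields above, a dedupe processor entry might look like this sketch; the field paths and cache size are illustrative.

```python
# Illustrative dedupe processor: treat events as duplicates when both
# "message" and "host" match, caching the last 5000 events.
dedupe_processor = {
    "type": "dedupe",
    "id": "dedupe-1",
    "enabled": True,
    "include": "*",
    "fields": ["message", "host"],
    "mode": "match",                  # compare the listed fields (vs. ignore)
    "cache": {"num_events": 5000},    # optional duplicate-detection cache
}
```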
Option 8
object
The enrichment_table processor enriches logs using a static CSV file, GeoIP database, or reference table. Exactly one of file, geoip, or reference_table must be configured.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
file
object
Defines a static enrichment table loaded from a CSV file.
encoding [required]
object
File encoding format.
delimiter [required]
string
The delimiter used to separate values in the CSV file.
includes_headers [required]
boolean
Whether the first row of the CSV file contains column headers.
type [required]
enum
Specifies the encoding format (e.g., CSV) used for enrichment tables.
Allowed enum values: csv
key [required]
[object]
Key fields used to look up enrichment values.
column [required]
string
The name of the enrichment table column to match against.
comparison [required]
enum
Defines how to compare key fields for enrichment table lookups.
Allowed enum values: equals
field [required]
string
The log event field whose value is compared to the column.
path [required]
string
Path to the CSV file.
schema [required]
[object]
Schema defining column names and their types.
column [required]
string
The name of the column as defined in the enrichment table schema.
type [required]
enum
Declares allowed data types for enrichment table columns.
Allowed enum values: string,boolean,integer,float,date,timestamp
geoip
object
Uses a GeoIP database to enrich logs based on an IP field.
key_field [required]
string
Path to the IP field in the log.
locale [required]
string
Locale used to resolve geographical names.
path [required]
string
Path to the GeoIP database file.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
reference_table
object
Uses a Datadog reference table to enrich logs.
app_key_key
string
Name of the environment variable or secret that holds the Datadog application key used to access the reference table.
columns
[string]
List of column names to include from the reference table. If not provided, all columns are included.
key_field [required]
string
Path to the field in the log event to match against the reference table.
table_id [required]
string
The unique identifier of the reference table.
target [required]
string
Path where enrichment results should be stored in the log.
type [required]
enum
The processor type. The value should always be enrichment_table.
Allowed enum values: enrichment_table
default: enrichment_table
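Using the CSV file variant (exactly one of file, geoip, or reference_table may be set), an enrichment_table processor might be sketched as follows. The file path and column names are hypothetical.

```python
# Illustrative enrichment_table processor using a static CSV file.
enrichment_processor = {
    "type": "enrichment_table",
    "id": "enrich-1",
    "enabled": True,
    "include": "*",
    "target": "customer",               # where enrichment results are stored
    "file": {
        "path": "/etc/pipeline/customers.csv",
        "encoding": {"type": "csv", "delimiter": ",", "includes_headers": True},
        "key": [
            # look up rows where the customer_id column equals the log's cust_id
            {"column": "customer_id", "comparison": "equals", "field": "cust_id"}
        ],
        "schema": [
            {"column": "customer_id", "type": "string"},
            {"column": "tier", "type": "string"},
        ],
    },
}
```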
Option 9
object
The generate_datadog_metrics processor creates custom metrics from logs and sends them to Datadog.
Metrics can be counters, gauges, or distributions and optionally grouped by log fields.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used to reference this component in other parts of the pipeline.
include
string
A Datadog search query used to determine which logs this processor targets.
metrics
[object]
Configuration for generating individual metrics.
group_by
[string]
Optional fields used to group the metric series.
include [required]
string
Datadog filter query to match logs for metric generation.
metric_type [required]
enum
Type of metric to create.
Allowed enum values: count,gauge,distribution
name [required]
string
Name of the custom metric to be created.
value [required]
<oneOf>
Specifies how the value of the generated metric is computed.
Option 1
object
Strategy that increments a generated metric by one for each matching event.
strategy [required]
enum
Increments the metric by 1 for each matching event.
Allowed enum values: increment_by_one
Option 2
object
Strategy that increments a generated metric based on the value of a log field.
field [required]
string
Name of the log field containing the numeric value to increment the metric by.
strategy [required]
enum
Uses a numeric field in the log event as the metric increment.
Allowed enum values: increment_by_field
type [required]
enum
The processor type. Always generate_datadog_metrics.
Allowed enum values: generate_datadog_metrics
default: generate_datadog_metrics
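Combining the two value strategies, a generate_datadog_metrics processor might be sketched as follows; metric names and the duration field are illustrative.

```python
# Illustrative generate_datadog_metrics processor: count error logs per
# service, and build a distribution from a numeric duration field.
metrics_processor = {
    "type": "generate_datadog_metrics",
    "id": "gen-metrics-1",
    "enabled": True,
    "include": "*",
    "metrics": [
        {
            "name": "logs.errors",
            "include": "status:error",
            "metric_type": "count",
            "group_by": ["service"],
            "value": {"strategy": "increment_by_one"},
        },
        {
            "name": "request.duration",
            "include": "*",
            "metric_type": "distribution",
            "value": {"strategy": "increment_by_field", "field": "duration_ms"},
        },
    ],
}
```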
Option 10
object
The ocsf_mapper processor transforms logs into the OCSF schema using a predefined mapping configuration.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used to reference this component in other parts of the pipeline.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
mappings [required]
[object]
A list of mapping rules to convert events to the OCSF format.
include [required]
string
A Datadog search query used to select the logs that this mapping should apply to.
mapping [required]
<oneOf>
Defines a single mapping rule for transforming logs into the OCSF schema.
Custom OCSF mapping configuration for transforming logs.
mapping [required]
[object]
A list of field mapping rules for transforming log fields to OCSF schema fields.
default
The default value to use if the source field is missing or empty.
dest [required]
string
The destination OCSF field path.
lookup
object
Lookup table configuration for mapping source values to destination values.
source
The source field path from the log event.
sources
Multiple source field paths for combined mapping.
value
A static value to use for the destination field.
metadata [required]
object
Metadata for the custom OCSF mapping.
class [required]
string
The OCSF event class name.
profiles
[string]
A list of OCSF profiles to apply.
version [required]
string
The OCSF schema version.
version [required]
int64
The version of the custom mapping configuration.
type [required]
enum
The processor type. The value should always be ocsf_mapper.
Allowed enum values: ocsf_mapper
default: ocsf_mapper
Option 11
object
The parse_grok processor extracts structured fields from unstructured log messages using Grok patterns.
Supported pipeline types: logs
disable_library_rules
boolean
If set to true, disables the default Grok rules provided by Datadog.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
A unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
rules [required]
[object]
The list of Grok parsing rules. If multiple matching rules are provided, they are evaluated in order. The first successful match is applied.
match_rules [required]
[object]
A list of Grok parsing rules that define how to extract fields from the source field.
Each rule must contain a name and a valid Grok pattern.
name [required]
string
The name of the rule.
rule [required]
string
The definition of the Grok rule.
source [required]
string
The name of the field in the log event to apply the Grok rules to.
support_rules
[object]
A list of Grok helper rules that can be referenced by the parsing rules.
name [required]
string
The name of the Grok helper rule.
rule [required]
string
The definition of the Grok helper rule.
type [required]
enum
The processor type. The value should always be parse_grok.
Allowed enum values: parse_grok
default: parse_grok
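A parse_grok processor with one parsing rule and one helper rule might be sketched as follows; the Grok patterns themselves are hypothetical examples.

```python
# Illustrative parse_grok processor applied to the "message" field.
grok_processor = {
    "type": "parse_grok",
    "id": "grok-1",
    "enabled": True,
    "include": "*",
    "disable_library_rules": False,    # keep Datadog's default Grok rules
    "rules": [
        {
            "source": "message",       # field the Grok rules are applied to
            "match_rules": [
                # rules are evaluated in order; first successful match wins
                {"name": "access_log",
                 "rule": "%{ip:client_ip} %{word:method} %{notSpace:path}"},
            ],
            "support_rules": [
                # helper rule referenced by the match rules above
                {"name": "ip", "rule": "%{ipv4}|%{ipv6}"},
            ],
        }
    ],
}
```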
Option 12
object
The parse_json processor extracts JSON from a specified field and flattens it into the event. This is useful when logs contain embedded JSON as a string.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
field [required]
string
The name of the log field that contains a JSON string.
id [required]
string
A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be parse_json.
Allowed enum values: parse_json
default: parse_json
Option 13
object
The parse_xml processor parses XML from a specified field and extracts it into the event.
Supported pipeline types: logs
always_use_text_key
boolean
Whether to always use a text key for element content.
attr_prefix
string
The prefix to use for XML attributes in the parsed output.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
field [required]
string
The name of the log field that contains an XML string.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
include_attr
boolean
Whether to include XML attributes in the parsed output.
parse_bool
boolean
Whether to parse boolean values from strings.
parse_null
boolean
Whether to parse null values.
parse_number
boolean
Whether to parse numeric values from strings.
text_key
string
The key name to use for text content within XML elements. Must be at least 1 character if specified.
type [required]
enum
The processor type. The value should always be parse_xml.
Allowed enum values: parse_xml
default: parse_xml
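Pulling the optional flags together, a parse_xml processor entry might look like this sketch; the field name and prefix are illustrative.

```python
# Illustrative parse_xml processor: parse the XML string in "payload",
# prefixing attributes with "@" and coercing numbers and booleans.
parse_xml_processor = {
    "type": "parse_xml",
    "id": "parse-xml-1",
    "enabled": True,
    "include": "*",
    "field": "payload",
    "include_attr": True,
    "attr_prefix": "@",
    "text_key": "text",             # key for element text content
    "parse_number": True,
    "parse_bool": True,
    "parse_null": False,
    "always_use_text_key": False,
}
```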
Option 14
object
The quota processor measures logging traffic for logs that match a specified filter. When the configured daily quota is met, the processor can drop events or trigger an alert.
Supported pipeline types: logs
display_name
string
The display name for a component.
drop_events
boolean
If set to true, logs that match the quota filter and are sent after the quota is exceeded are dropped. Logs that do not match the filter continue through the pipeline. Note: You can set either drop_events or overflow_action, but not both.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
ignore_when_missing_partitions
boolean
If true, the processor skips quota checks when partition fields are missing from the logs.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
limit [required]
object
The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.
enforce [required]
enum
The unit for quota enforcement: bytes for data size or events for event count.
Allowed enum values: bytes,events
limit [required]
int64
The limit for quota enforcement.
name [required]
string
Name of the quota.
overflow_action
enum
The action to take when the quota or bucket limit is exceeded. Options:
drop: Drop the event.
no_action: Let the event pass through.
overflow_routing: Route to an overflow destination.
Allowed enum values: drop,no_action,overflow_routing
overrides
[object]
A list of alternate quota rules that apply to specific sets of events, identified by matching field values. Each override can define a custom limit.
fields [required]
[object]
A list of field matchers used to apply a specific override. If an event matches all listed key-value pairs, the corresponding override limit is enforced.
name [required]
string
The field name.
value [required]
string
The field value.
limit [required]
object
The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.
enforce [required]
enum
The unit for quota enforcement: bytes for data size or events for event count.
Allowed enum values: bytes,events
limit [required]
int64
The limit for quota enforcement.
partition_fields
[string]
A list of fields used to segment log traffic for quota enforcement. Quotas are tracked independently by unique combinations of these field values.
too_many_buckets_action
enum
The action to take when the quota or bucket limit is exceeded. Options:
drop: Drop the event.
no_action: Let the event pass through.
overflow_routing: Route to an overflow destination.
Allowed enum values: drop,no_action,overflow_routing
type [required]
enum
The processor type. The value should always be quota.
Allowed enum values: quota
default: quota
Option 15
object
The reduce processor aggregates and merges logs based on matching keys and merge strategies.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
group_by [required]
[string]
A list of fields used to group log events for merging.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
merge_strategies [required]
[object]
List of merge strategies defining how values from grouped events should be combined.
path [required]
string
The field path in the log event.
strategy [required]
enum
The merge strategy to apply.
Allowed enum values: discard,retain,sum,max,min,array,concat,concat_newline,concat_raw,shortest_array,longest_array,flat_unique
type [required]
enum
The processor type. The value should always be reduce.
Allowed enum values: reduce
default: reduce
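Drawing only on the fields above, a hedged sketch of a reduce processor payload as a Python dict (group keys, paths, and the search query are illustrative):

```python
# Hypothetical reduce processor: merge grouped events, concatenating
# messages and summing a numeric field. Values are illustrative.
reduce_processor = {
    "type": "reduce",
    "id": "reduce-1",
    "enabled": True,
    "include": "source:nginx",           # Datadog search query (illustrative)
    "group_by": ["host", "trace_id"],    # events sharing these values are merged
    "merge_strategies": [
        {"path": "message", "strategy": "concat_newline"},
        {"path": "bytes_sent", "strategy": "sum"},
    ],
}
```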
Option 16
object
The remove_fields processor deletes specified fields from logs.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[string]
A list of field names to be removed from each log event.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be remove_fields.
Allowed enum values: remove_fields
default: remove_fields
Option 17
object
The rename_fields processor changes field names.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[object]
A list of rename rules specifying which fields to rename in the event, what to rename them to, and whether to preserve the original fields.
destination [required]
string
The field name to assign the renamed value to.
preserve_source [required]
boolean
Indicates whether the original field received from the source should be kept (true) or removed (false) after renaming.
source [required]
string
The original field name in the log event that should be renamed.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be rename_fields.
Allowed enum values: rename_fields
default: rename_fields
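A minimal sketch of a rename_fields payload as a Python dict, using only the fields listed above (the source and destination names are illustrative):

```python
# Hypothetical rename_fields processor: rename "msg" to "message" and
# drop the original field. Values are illustrative only.
rename_fields_processor = {
    "type": "rename_fields",
    "id": "rename-1",
    "enabled": True,
    "include": "*",
    "fields": [
        {
            "source": "msg",            # original field name
            "destination": "message",   # new field name
            "preserve_source": False,   # remove the original after renaming
        }
    ],
}
```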
Option 18
object
The sample processor allows probabilistic sampling of logs at a fixed rate.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
group_by
[string]
Optional list of fields to group events by. Each group is sampled independently.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
percentage [required]
double
The percentage of logs to sample.
type [required]
enum
The processor type. The value should always be sample.
Allowed enum values: sample
default: sample
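The percentage field is a double between 0 and 100; sampling at 10 keeps roughly one in ten matching events. A hedged sketch (query and grouping are illustrative):

```python
# Hypothetical sample processor: keep ~10% of debug logs, sampled
# independently per service. Values are illustrative.
sample_processor = {
    "type": "sample",
    "id": "sample-1",
    "enabled": True,
    "include": "status:debug",   # Datadog search query (illustrative)
    "percentage": 10.0,          # keep roughly 1 in 10 matching events
    "group_by": ["service"],     # each service is sampled independently
}
```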
Option 19
object
The sensitive_data_scanner processor detects and optionally redacts sensitive data in log events.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
rules [required]
[object]
A list of rules for identifying and acting on sensitive data patterns.
keyword_options
object
Configuration for keywords used to reinforce sensitive data pattern detection.
keywords [required]
[string]
A list of keywords to match near the sensitive pattern.
proximity [required]
int64
Maximum number of tokens between a keyword and a sensitive value match.
name [required]
string
A name identifying the rule.
on_match [required]
<oneOf>
Defines what action to take when sensitive data is matched.
Option 1
object
Configuration for completely redacting matched sensitive data.
action [required]
enum
Action type that completely replaces the matched sensitive data with a fixed replacement string to remove all visibility.
Allowed enum values: redact
options [required]
object
Configuration for fully redacting sensitive data.
replace [required]
string
The string used to replace the matched sensitive data.
Option 2
object
Configuration for hashing matched sensitive values.
action [required]
enum
Action type that replaces the matched sensitive data with a hashed representation, preserving structure while securing content.
Allowed enum values: hash
options
object
Configuration options for the hash action.
Option 3
object
Configuration for partially redacting matched sensitive data.
action [required]
enum
Action type that redacts part of the sensitive data while preserving a configurable number of characters, typically used for masking purposes (e.g., show last 4 digits of a credit card).
Allowed enum values: partial_redact
options [required]
object
Controls how partial redaction is applied, including character count and direction.
characters [required]
int64
The number of characters to redact.
direction [required]
enum
Indicates whether to redact characters from the first or last part of the matched value.
Allowed enum values: first,last
pattern [required]
<oneOf>
Pattern detection configuration for identifying sensitive data using either a custom regex or a library reference.
Option 1
object
Defines a custom regex-based pattern for identifying sensitive data in logs.
options [required]
object
Options for defining a custom regex pattern.
description
string
Human-readable description providing context about a sensitive data scanner rule.
rule [required]
string
A regular expression used to detect sensitive values. Must be a valid regex.
type [required]
enum
Indicates a custom regular expression is used for matching.
Allowed enum values: custom
Option 2
object
Specifies a pattern from Datadog’s sensitive data detection library to match known sensitive data types.
options [required]
object
Options for selecting a predefined library pattern and enabling keyword support.
description
string
Human-readable description providing context about a sensitive data scanner rule.
id [required]
string
Identifier for a predefined pattern from the sensitive data scanner pattern library.
use_recommended_keywords
boolean
Whether to augment the pattern with recommended keywords (optional).
type [required]
enum
Indicates that a predefined library pattern is used.
Allowed enum values: library
scope [required]
<oneOf>
Determines which parts of the log the pattern-matching rule should be applied to.
Option 1
object
Includes only specific fields for sensitive data scanning.
options [required]
object
Fields to which the scope rule applies.
fields [required]
[string]
A list of fields to which the scope rule applies.
target [required]
enum
Applies the rule only to included fields.
Allowed enum values: include
Option 2
object
Excludes specific fields from sensitive data scanning.
options [required]
object
Fields to which the scope rule applies.
fields [required]
[string]
A list of fields to which the scope rule applies.
target [required]
enum
Excludes specific fields from processing.
Allowed enum values: exclude
Option 3
object
Applies scanning across all available fields.
target [required]
enum
Applies the rule to all fields.
Allowed enum values: all
tags [required]
[string]
Tags assigned to this rule for filtering and classification.
type [required]
enum
The processor type. The value should always be sensitive_data_scanner.
Allowed enum values: sensitive_data_scanner
default: sensitive_data_scanner
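A sensitive_data_scanner rule combines a pattern, a scope, and an on_match action. The sketch below uses the library pattern variant and partial redaction; the library pattern id is a hypothetical placeholder, not a real identifier from Datadog's pattern library.

```python
# Hypothetical sensitive_data_scanner processor. The pattern id
# "credit_card" is a placeholder, not a documented library id.
sds_processor = {
    "type": "sensitive_data_scanner",
    "id": "sds-1",
    "enabled": True,
    "include": "*",
    "rules": [
        {
            "name": "mask-credit-cards",
            "tags": ["sensitive:cc"],   # tags for filtering/classification
            "pattern": {
                "type": "library",
                "options": {
                    "id": "credit_card",              # hypothetical library pattern id
                    "use_recommended_keywords": True,
                },
            },
            "scope": {"target": "all"},   # scan every available field
            "on_match": {
                "action": "partial_redact",
                # Redact 12 characters starting from the front, so only the
                # tail of a 16-digit number stays visible (assumed semantics).
                "options": {"characters": 12, "direction": "first"},
            },
        }
    ],
}
```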
Option 20
object
The split_array processor splits array fields into separate events based on configured rules.
Supported pipeline types: logs
arrays [required]
[object]
A list of array split configurations.
field [required]
string
The path to the array field to split.
include [required]
string
A Datadog search query used to determine which logs this array split operation targets.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets. For split_array, this should typically be *.
type [required]
enum
The processor type. The value should always be split_array.
Allowed enum values: split_array
default: split_array
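Per the note above, the processor-level include for split_array is typically `*`, while each array entry carries its own include query. A hedged sketch (the array field path is illustrative):

```python
# Hypothetical split_array processor: fan each element of the "records"
# array out into its own event. Values are illustrative.
split_array_processor = {
    "type": "split_array",
    "id": "split-1",
    "enabled": True,
    "include": "*",   # processor-level query; typically * for split_array
    "arrays": [
        {
            "field": "records",   # path to the array field to split
            "include": "*",       # per-array targeting query
        }
    ],
}
```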
Option 21
object
The throttle processor limits the number of events that pass through over a given time window.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
group_by
[string]
Optional list of fields used to group events. The threshold is applied independently to each group.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
threshold [required]
int64
The number of events allowed in a given time window. Events sent after the threshold has been reached are dropped.
type [required]
enum
The processor type. The value should always be throttle.
Allowed enum values: throttle
default: throttle
window [required]
double
The time window in seconds over which the threshold applies.
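Threshold and window together define the rate: 1000 events per 60-second window allows a sustained rate of about 16.7 events per second per group. A hedged sketch with illustrative values:

```python
# Hypothetical throttle processor: at most 1000 events per 60s window,
# tracked independently per service. Values are illustrative.
throttle_processor = {
    "type": "throttle",
    "id": "throttle-1",
    "enabled": True,
    "include": "*",
    "threshold": 1000,        # events allowed per window; excess is dropped
    "window": 60.0,           # window length in seconds
    "group_by": ["service"],  # each service gets its own budget
}
```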
Option 22
object
The metric_tags processor filters metrics based on their tags using Datadog tag key patterns.
Supported pipeline types: metrics
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query that determines which metrics the processor targets.
rules [required]
[object]
A list of rules for filtering metric tags.
action [required]
enum
The action to take on tags with matching keys.
Allowed enum values: include,exclude
include [required]
string
A Datadog search query used to determine which metrics this rule targets.
keys [required]
[string]
A list of tag keys to include or exclude.
mode [required]
enum
The processing mode for tag filtering.
Allowed enum values: filter
type [required]
enum
The processor type. The value should always be metric_tags.
Allowed enum values: metric_tags
default: metric_tags
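A metric_tags rule pairs an action (include or exclude) with a list of tag keys in filter mode. The sketch below drops two hypothetical high-cardinality tag keys:

```python
# Hypothetical metric_tags processor: strip high-cardinality tags from
# all metrics. Tag keys are illustrative.
metric_tags_processor = {
    "type": "metric_tags",
    "id": "metric-tags-1",
    "enabled": True,
    "include": "*",
    "rules": [
        {
            "include": "*",       # per-rule metric targeting query
            "mode": "filter",
            "action": "exclude",  # drop tags with these keys
            "keys": ["pod_name", "container_id"],
        }
    ],
}
```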
sources [required]
[ <oneOf>]
A list of configured data sources for the pipeline.
Option 1
object
The datadog_agent source collects logs/metrics from the Datadog Agent.
Supported pipeline types: logs, metrics
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be datadog_agent.
Allowed enum values: datadog_agent
default: datadog_agent
Option 2
object
The amazon_data_firehose source ingests logs from AWS Data Firehose.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the Firehose delivery stream address.
auth
object
AWS authentication credentials used for accessing AWS services such as S3.
If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).
assume_role
string
The Amazon Resource Name (ARN) of the role to assume.
external_id
string
A unique identifier for cross-account role assumption.
session_name
string
A session identifier used for logging and tracing the assumed role session.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be amazon_data_firehose.
Allowed enum values: amazon_data_firehose
default: amazon_data_firehose
Option 3
object
The amazon_s3 source ingests logs from an Amazon S3 bucket.
It supports AWS authentication and TLS encryption.
Supported pipeline types: logs
auth
object
AWS authentication credentials used for accessing AWS services such as S3.
If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).
assume_role
string
The Amazon Resource Name (ARN) of the role to assume.
external_id
string
A unique identifier for cross-account role assumption.
session_name
string
A session identifier used for logging and tracing the assumed role session.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
region [required]
string
AWS region where the S3 bucket resides.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be amazon_s3.
Allowed enum values: amazon_s3
default: amazon_s3
url_key
string
Name of the environment variable or secret that holds the S3 bucket URL.
Option 4
object
The fluent_bit source ingests logs from Fluent Bit.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the Fluent Bit receiver.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be fluent_bit.
Allowed enum values: fluent_bit
default: fluent_bit
Option 5
object
The fluentd source ingests logs from a Fluentd-compatible service.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the Fluent receiver.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be fluentd.
Allowed enum values: fluentd
default: fluentd
Option 6
object
The google_pubsub source ingests logs from a Google Cloud Pub/Sub subscription.
Supported pipeline types: logs
auth
object
Google Cloud credentials used to authenticate with Google Cloud Pub/Sub.
credentials_file [required]
string
Path to the Google Cloud service account key file.
decoding [required]
enum
The decoding format used to interpret incoming logs.
Allowed enum values: bytes,gelf,json,syslog
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
project [required]
string
The Google Cloud project ID that owns the Pub/Sub subscription.
subscription [required]
string
The Pub/Sub subscription name from which messages are consumed.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be google_pubsub.
Allowed enum values: google_pubsub
default: google_pubsub
Option 7
object
The http_client source scrapes logs from HTTP endpoints at regular intervals.
Supported pipeline types: logs
auth_strategy
enum
Optional authentication strategy for HTTP requests.
Allowed enum values: none,basic,bearer,custom
custom_key
string
Name of the environment variable or secret that holds a custom header value (used with custom auth strategies).
decoding [required]
enum
The decoding format used to interpret incoming logs.
Allowed enum values: bytes,gelf,json,syslog
endpoint_url_key
string
Name of the environment variable or secret that holds the HTTP endpoint URL to scrape.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
password_key
string
Name of the environment variable or secret that holds the password (used when auth_strategy is basic).
scrape_interval_secs
int64
The interval (in seconds) between HTTP scrape requests.
scrape_timeout_secs
int64
The timeout (in seconds) for each scrape request.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
token_key
string
Name of the environment variable or secret that holds the bearer token (used when auth_strategy is bearer).
type [required]
enum
The source type. The value should always be http_client.
Allowed enum values: http_client
default: http_client
username_key
string
Name of the environment variable or secret that holds the username (used when auth_strategy is basic).
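Note that the `*_key` fields hold the names of environment variables or secrets, never the secret values themselves. A hedged sketch of an http_client source using bearer auth (env var names are illustrative):

```python
# Hypothetical http_client source scraping a JSON endpoint every minute.
# The *_key values name environment variables, not the secrets themselves.
http_client_source = {
    "type": "http_client",
    "id": "http-client-source-1",
    "decoding": "json",
    "auth_strategy": "bearer",
    "token_key": "HTTP_SCRAPE_TOKEN",        # env var holding the bearer token
    "endpoint_url_key": "HTTP_SCRAPE_URL",   # env var holding the URL to scrape
    "scrape_interval_secs": 60,
    "scrape_timeout_secs": 10,
}
```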
Option 8
object
The http_server source collects logs over HTTP POST from external services.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the HTTP server.
decoding [required]
enum
The decoding format used to interpret incoming logs.
Allowed enum values: bytes,gelf,json,syslog
id [required]
string
Unique ID for the HTTP server source.
password_key
string
Name of the environment variable or secret that holds the password (used when auth_strategy is plain).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be http_server.
Allowed enum values: http_server
default: http_server
username_key
string
Name of the environment variable or secret that holds the username (used when auth_strategy is plain).
Option 9
object
The kafka source ingests data from Apache Kafka topics.
Supported pipeline types: logs
bootstrap_servers_key
string
Name of the environment variable or secret that holds the Kafka bootstrap servers list.
group_id [required]
string
Consumer group ID used by the Kafka client.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
librdkafka_options
[object]
Optional list of advanced Kafka client configuration options, defined as key-value pairs.
name [required]
string
The name of the librdkafka configuration option to set.
value [required]
string
The value assigned to the specified librdkafka configuration option.
sasl
object
Specifies the SASL mechanism for authenticating with a Kafka cluster.
mechanism
enum
SASL mechanism used for Kafka authentication.
Allowed enum values: PLAIN,SCRAM-SHA-256,SCRAM-SHA-512
password_key
string
Name of the environment variable or secret that holds the SASL password.
username_key
string
Name of the environment variable or secret that holds the SASL username.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
topics [required]
[string]
A list of Kafka topic names to subscribe to. The source ingests messages from each topic specified.
type [required]
enum
The source type. The value should always be kafka.
Allowed enum values: kafka
default: kafka
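A hedged sketch of a kafka source with SASL authentication and one advanced librdkafka option (topic names, env var names, and the option value are illustrative):

```python
# Hypothetical kafka source: consume two topics with SCRAM auth.
# The *_key values name environment variables, not credentials.
kafka_source = {
    "type": "kafka",
    "id": "kafka-source-1",
    "group_id": "pipelines-consumer",      # Kafka consumer group id
    "topics": ["app-logs", "audit-logs"],
    "bootstrap_servers_key": "KAFKA_BOOTSTRAP_SERVERS",
    "sasl": {
        "mechanism": "SCRAM-SHA-512",
        "username_key": "KAFKA_SASL_USER",
        "password_key": "KAFKA_SASL_PASS",
    },
    # Advanced client tuning passed straight to librdkafka as strings.
    "librdkafka_options": [
        {"name": "fetch.message.max.bytes", "value": "1048576"},
    ],
}
```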
Option 10
object
The logstash source ingests logs from a Logstash forwarder.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the Logstash receiver.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be logstash.
Allowed enum values: logstash
default: logstash
Option 11
object
The rsyslog source listens for logs over TCP or UDP from an rsyslog server using the syslog protocol.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the syslog receiver.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
mode [required]
enum
Protocol used by the syslog source to receive messages.
Allowed enum values: tcp,udp
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be rsyslog.
Allowed enum values: rsyslog
default: rsyslog
Option 12
object
The socket source ingests logs over TCP or UDP.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the socket.
framing [required]
<oneOf>
Framing method configuration for the socket source.
Option 1
object
Byte frames which are delimited by a newline character.
method [required]
enum
Byte frames which are delimited by a newline character.
Allowed enum values: newline_delimited
Option 2
object
Byte frames are passed through as-is according to the underlying I/O boundaries (for example, split between messages or stream segments).
method [required]
enum
Byte frames are passed through as-is according to the underlying I/O boundaries (for example, split between messages or stream segments).
Allowed enum values: bytes
Option 3
object
Byte frames which are delimited by a chosen character.
delimiter [required]
string
A single ASCII character used to delimit events.
method [required]
enum
Byte frames which are delimited by a chosen character.
Allowed enum values: character_delimited
Option 4
object
Byte frames according to the octet counting format as per RFC6587.
method [required]
enum
Byte frames according to the octet counting format as per RFC6587.
Allowed enum values: octet_counting
Option 5
object
Byte frames which are chunked GELF messages.
method [required]
enum
Byte frames which are chunked GELF messages.
Allowed enum values: chunked_gelf
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
mode [required]
enum
Protocol used to receive logs.
Allowed enum values: tcp,udp
tls
object
TLS configuration. Relevant only when mode is tcp.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be socket.
Allowed enum values: socket
default: socket
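The framing oneOf selects how the byte stream is cut into events; only the character_delimited variant takes an extra field (a single ASCII delimiter). A hedged TCP sketch:

```python
# Hypothetical socket source over TCP with pipe-delimited framing.
# The env var name and delimiter are illustrative.
socket_source = {
    "type": "socket",
    "id": "socket-source-1",
    "mode": "tcp",
    "address_key": "SOCKET_LISTEN_ADDR",   # env var holding the listen address
    "framing": {
        "method": "character_delimited",
        "delimiter": "|",                  # must be a single ASCII character
    },
}
```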
Option 13
object
The splunk_hec source implements the Splunk HTTP Event Collector (HEC) API.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the HEC API.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be splunk_hec.
Allowed enum values: splunk_hec
default: splunk_hec
Option 14
object
The splunk_tcp source receives logs from a Splunk Universal Forwarder over TCP.
TLS is supported for secure transmission.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the Splunk TCP receiver.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be splunk_tcp.
Allowed enum values: splunk_tcp
default: splunk_tcp
Option 15
object
The sumo_logic source receives logs from Sumo Logic collectors.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the Sumo Logic receiver.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
type [required]
enum
The source type. The value should always be sumo_logic.
Allowed enum values: sumo_logic
default: sumo_logic
Option 16
object
The syslog_ng source listens for logs over TCP or UDP from a syslog-ng server using the syslog protocol.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the syslog-ng receiver.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
mode [required]
enum
Protocol used by the syslog source to receive messages.
Allowed enum values: tcp,udp
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be syslog_ng.
Allowed enum values: syslog_ng
default: syslog_ng
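As a sketch, a minimal syslog_ng source entry using the fields above might look like this. The environment variable name is an illustrative assumption:

```json
{
  "id": "syslog-ng-source",
  "type": "syslog_ng",
  "mode": "tcp",
  "address_key": "SYSLOG_NG_LISTEN_ADDRESS"
}
```

`mode` must be one of the allowed enum values (`tcp` or `udp`); the optional `tls` object can be added for encrypted transport.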
Option 17
object
The opentelemetry source receives telemetry data using the OpenTelemetry Protocol (OTLP) over gRPC and HTTP.
Supported pipeline types: logs
grpc_address_key
string
Environment variable name containing the gRPC server address for receiving OTLP data. Must be a valid environment variable name (alphanumeric characters and underscores only).
http_address_key
string
Environment variable name containing the HTTP server address for receiving OTLP data. Must be a valid environment variable name (alphanumeric characters and underscores only).
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be opentelemetry.
Allowed enum values: opentelemetry
default: opentelemetry
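For reference, an opentelemetry source entry combining the fields above could be sketched as follows. The environment variable names are illustrative assumptions and must contain only alphanumeric characters and underscores:

```json
{
  "id": "otel-source",
  "type": "opentelemetry",
  "grpc_address_key": "OTLP_GRPC_ADDRESS",
  "http_address_key": "OTLP_HTTP_ADDRESS"
}
```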
use_legacy_search_syntax
boolean
Set to true to continue using the legacy search syntax while migrating filter queries. After migrating all queries to the new syntax, set to false.
The legacy syntax is deprecated and will eventually be removed.
Requires Observability Pipelines Worker 2.11 or later.
See Upgrade Your Filter Queries to the New Search Syntax for more information.
name [required]
string
Name of the pipeline.
id [required]
string
Unique identifier for the pipeline.
type [required]
string
The resource type identifier. For pipeline resources, this should always be set to pipelines.
// Create a new pipeline returns "OK" response

package main

import (
	"context"
	"encoding/json"
	"fmt"
	"os"

	"github.com/DataDog/datadog-api-client-go/v2/api/datadog"
	"github.com/DataDog/datadog-api-client-go/v2/api/datadogV2"
)

func main() {
	body := datadogV2.ObservabilityPipelineSpec{
		Data: datadogV2.ObservabilityPipelineSpecData{
			Attributes: datadogV2.ObservabilityPipelineDataAttributes{
				Config: datadogV2.ObservabilityPipelineConfig{
					Destinations: []datadogV2.ObservabilityPipelineConfigDestinationItem{
						{
							ObservabilityPipelineDatadogLogsDestination: &datadogV2.ObservabilityPipelineDatadogLogsDestination{
								Id:     "datadog-logs-destination",
								Inputs: []string{"my-processor-group"},
								Type:   datadogV2.OBSERVABILITYPIPELINEDATADOGLOGSDESTINATIONTYPE_DATADOG_LOGS,
							},
						},
					},
					ProcessorGroups: []datadogV2.ObservabilityPipelineConfigProcessorGroup{
						{
							Enabled: true,
							Id:      "my-processor-group",
							Include: "service:my-service",
							Inputs:  []string{"datadog-agent-source"},
							Processors: []datadogV2.ObservabilityPipelineConfigProcessorItem{
								{
									ObservabilityPipelineFilterProcessor: &datadogV2.ObservabilityPipelineFilterProcessor{
										Enabled: true,
										Id:      "filter-processor",
										Include: "status:error",
										Type:    datadogV2.OBSERVABILITYPIPELINEFILTERPROCESSORTYPE_FILTER,
									},
								},
							},
						},
					},
					Sources: []datadogV2.ObservabilityPipelineConfigSourceItem{
						{
							ObservabilityPipelineDatadogAgentSource: &datadogV2.ObservabilityPipelineDatadogAgentSource{
								Id:   "datadog-agent-source",
								Type: datadogV2.OBSERVABILITYPIPELINEDATADOGAGENTSOURCETYPE_DATADOG_AGENT,
							},
						},
					},
				},
				Name: "Main Observability Pipeline",
			},
			Type: "pipelines",
		},
	}

	ctx := datadog.NewDefaultContext(context.Background())
	configuration := datadog.NewConfiguration()
	apiClient := datadog.NewAPIClient(configuration)
	api := datadogV2.NewObservabilityPipelinesApi(apiClient)
	resp, r, err := api.CreatePipeline(ctx, body)

	if err != nil {
		fmt.Fprintf(os.Stderr, "Error when calling `ObservabilityPipelinesApi.CreatePipeline`: %v\n", err)
		fmt.Fprintf(os.Stderr, "Full HTTP response: %v\n", r)
	}

	responseContent, _ := json.MarshalIndent(resp, "", " ")
	fmt.Fprintf(os.Stdout, "Response from `ObservabilityPipelinesApi.CreatePipeline`:\n%s\n", responseContent)
}
// Create a pipeline with dedupe processor with cache returns "OK" response

package main

import (
	"context"
	"encoding/json"
	"fmt"
	"os"

	"github.com/DataDog/datadog-api-client-go/v2/api/datadog"
	"github.com/DataDog/datadog-api-client-go/v2/api/datadogV2"
)

func main() {
	body := datadogV2.ObservabilityPipelineSpec{
		Data: datadogV2.ObservabilityPipelineSpecData{
			Attributes: datadogV2.ObservabilityPipelineDataAttributes{
				Config: datadogV2.ObservabilityPipelineConfig{
					Destinations: []datadogV2.ObservabilityPipelineConfigDestinationItem{
						{
							ObservabilityPipelineDatadogLogsDestination: &datadogV2.ObservabilityPipelineDatadogLogsDestination{
								Id:     "datadog-logs-destination",
								Inputs: []string{"my-processor-group"},
								Type:   datadogV2.OBSERVABILITYPIPELINEDATADOGLOGSDESTINATIONTYPE_DATADOG_LOGS,
							},
						},
					},
					ProcessorGroups: []datadogV2.ObservabilityPipelineConfigProcessorGroup{
						{
							Enabled: true,
							Id:      "my-processor-group",
							Include: "service:my-service",
							Inputs:  []string{"datadog-agent-source"},
							Processors: []datadogV2.ObservabilityPipelineConfigProcessorItem{
								{
									ObservabilityPipelineDedupeProcessor: &datadogV2.ObservabilityPipelineDedupeProcessor{
										Enabled: true,
										Id:      "dedupe-processor",
										Include: "service:my-service",
										Type:    datadogV2.OBSERVABILITYPIPELINEDEDUPEPROCESSORTYPE_DEDUPE,
										Fields:  []string{"message"},
										Mode:    datadogV2.OBSERVABILITYPIPELINEDEDUPEPROCESSORMODE_MATCH,
										Cache: &datadogV2.ObservabilityPipelineDedupeProcessorCache{
											NumEvents: 5000,
										},
									},
								},
							},
						},
					},
					Sources: []datadogV2.ObservabilityPipelineConfigSourceItem{
						{
							ObservabilityPipelineDatadogAgentSource: &datadogV2.ObservabilityPipelineDatadogAgentSource{
								Id:   "datadog-agent-source",
								Type: datadogV2.OBSERVABILITYPIPELINEDATADOGAGENTSOURCETYPE_DATADOG_AGENT,
							},
						},
					},
				},
				Name: "Pipeline with Dedupe Cache",
			},
			Type: "pipelines",
		},
	}

	ctx := datadog.NewDefaultContext(context.Background())
	configuration := datadog.NewConfiguration()
	apiClient := datadog.NewAPIClient(configuration)
	api := datadogV2.NewObservabilityPipelinesApi(apiClient)
	resp, r, err := api.CreatePipeline(ctx, body)

	if err != nil {
		fmt.Fprintf(os.Stderr, "Error when calling `ObservabilityPipelinesApi.CreatePipeline`: %v\n", err)
		fmt.Fprintf(os.Stderr, "Full HTTP response: %v\n", r)
	}

	responseContent, _ := json.MarshalIndent(resp, "", " ")
	fmt.Fprintf(os.Stdout, "Response from `ObservabilityPipelinesApi.CreatePipeline`:\n%s\n", responseContent)
}
// Create a pipeline with dedupe processor without cache returns "OK" response

package main

import (
	"context"
	"encoding/json"
	"fmt"
	"os"

	"github.com/DataDog/datadog-api-client-go/v2/api/datadog"
	"github.com/DataDog/datadog-api-client-go/v2/api/datadogV2"
)

func main() {
	body := datadogV2.ObservabilityPipelineSpec{
		Data: datadogV2.ObservabilityPipelineSpecData{
			Attributes: datadogV2.ObservabilityPipelineDataAttributes{
				Config: datadogV2.ObservabilityPipelineConfig{
					Destinations: []datadogV2.ObservabilityPipelineConfigDestinationItem{
						{
							ObservabilityPipelineDatadogLogsDestination: &datadogV2.ObservabilityPipelineDatadogLogsDestination{
								Id:     "datadog-logs-destination",
								Inputs: []string{"my-processor-group"},
								Type:   datadogV2.OBSERVABILITYPIPELINEDATADOGLOGSDESTINATIONTYPE_DATADOG_LOGS,
							},
						},
					},
					ProcessorGroups: []datadogV2.ObservabilityPipelineConfigProcessorGroup{
						{
							Enabled: true,
							Id:      "my-processor-group",
							Include: "service:my-service",
							Inputs:  []string{"datadog-agent-source"},
							Processors: []datadogV2.ObservabilityPipelineConfigProcessorItem{
								{
									ObservabilityPipelineDedupeProcessor: &datadogV2.ObservabilityPipelineDedupeProcessor{
										Enabled: true,
										Id:      "dedupe-processor",
										Include: "service:my-service",
										Type:    datadogV2.OBSERVABILITYPIPELINEDEDUPEPROCESSORTYPE_DEDUPE,
										Fields:  []string{"message"},
										Mode:    datadogV2.OBSERVABILITYPIPELINEDEDUPEPROCESSORMODE_MATCH,
									},
								},
							},
						},
					},
					Sources: []datadogV2.ObservabilityPipelineConfigSourceItem{
						{
							ObservabilityPipelineDatadogAgentSource: &datadogV2.ObservabilityPipelineDatadogAgentSource{
								Id:   "datadog-agent-source",
								Type: datadogV2.OBSERVABILITYPIPELINEDATADOGAGENTSOURCETYPE_DATADOG_AGENT,
							},
						},
					},
				},
				Name: "Pipeline with Dedupe No Cache",
			},
			Type: "pipelines",
		},
	}

	ctx := datadog.NewDefaultContext(context.Background())
	configuration := datadog.NewConfiguration()
	apiClient := datadog.NewAPIClient(configuration)
	api := datadogV2.NewObservabilityPipelinesApi(apiClient)
	resp, r, err := api.CreatePipeline(ctx, body)

	if err != nil {
		fmt.Fprintf(os.Stderr, "Error when calling `ObservabilityPipelinesApi.CreatePipeline`: %v\n", err)
		fmt.Fprintf(os.Stderr, "Full HTTP response: %v\n", r)
	}

	responseContent, _ := json.MarshalIndent(resp, "", " ")
	fmt.Fprintf(os.Stdout, "Response from `ObservabilityPipelinesApi.CreatePipeline`:\n%s\n", responseContent)
}
// Create a new pipeline returns "OK" response

import com.datadog.api.client.ApiClient;
import com.datadog.api.client.ApiException;
import com.datadog.api.client.v2.api.ObservabilityPipelinesApi;
import com.datadog.api.client.v2.model.ObservabilityPipeline;
import com.datadog.api.client.v2.model.ObservabilityPipelineConfig;
import com.datadog.api.client.v2.model.ObservabilityPipelineConfigDestinationItem;
import com.datadog.api.client.v2.model.ObservabilityPipelineConfigProcessorGroup;
import com.datadog.api.client.v2.model.ObservabilityPipelineConfigProcessorItem;
import com.datadog.api.client.v2.model.ObservabilityPipelineConfigSourceItem;
import com.datadog.api.client.v2.model.ObservabilityPipelineDataAttributes;
import com.datadog.api.client.v2.model.ObservabilityPipelineDatadogAgentSource;
import com.datadog.api.client.v2.model.ObservabilityPipelineDatadogAgentSourceType;
import com.datadog.api.client.v2.model.ObservabilityPipelineDatadogLogsDestination;
import com.datadog.api.client.v2.model.ObservabilityPipelineDatadogLogsDestinationType;
import com.datadog.api.client.v2.model.ObservabilityPipelineFilterProcessor;
import com.datadog.api.client.v2.model.ObservabilityPipelineFilterProcessorType;
import com.datadog.api.client.v2.model.ObservabilityPipelineSpec;
import com.datadog.api.client.v2.model.ObservabilityPipelineSpecData;
import java.util.Collections;

public class Example {
  public static void main(String[] args) {
    ApiClient defaultClient = ApiClient.getDefaultApiClient();
    ObservabilityPipelinesApi apiInstance = new ObservabilityPipelinesApi(defaultClient);

    ObservabilityPipelineSpec body =
        new ObservabilityPipelineSpec()
            .data(
                new ObservabilityPipelineSpecData()
                    .attributes(
                        new ObservabilityPipelineDataAttributes()
                            .config(
                                new ObservabilityPipelineConfig()
                                    .destinations(Collections.singletonList(
                                        new ObservabilityPipelineConfigDestinationItem(
                                            new ObservabilityPipelineDatadogLogsDestination()
                                                .id("datadog-logs-destination")
                                                .inputs(Collections.singletonList("my-processor-group"))
                                                .type(ObservabilityPipelineDatadogLogsDestinationType.DATADOG_LOGS))))
                                    .processorGroups(Collections.singletonList(
                                        new ObservabilityPipelineConfigProcessorGroup()
                                            .enabled(true)
                                            .id("my-processor-group")
                                            .include("service:my-service")
                                            .inputs(Collections.singletonList("datadog-agent-source"))
                                            .processors(Collections.singletonList(
                                                new ObservabilityPipelineConfigProcessorItem(
                                                    new ObservabilityPipelineFilterProcessor()
                                                        .enabled(true)
                                                        .id("filter-processor")
                                                        .include("status:error")
                                                        .type(ObservabilityPipelineFilterProcessorType.FILTER))))))
                                    .sources(Collections.singletonList(
                                        new ObservabilityPipelineConfigSourceItem(
                                            new ObservabilityPipelineDatadogAgentSource()
                                                .id("datadog-agent-source")
                                                .type(ObservabilityPipelineDatadogAgentSourceType.DATADOG_AGENT)))))
                            .name("Main Observability Pipeline"))
                    .type("pipelines"));

    try {
      ObservabilityPipeline result = apiInstance.createPipeline(body);
      System.out.println(result);
    } catch (ApiException e) {
      System.err.println("Exception when calling ObservabilityPipelinesApi#createPipeline");
      System.err.println("Status code: " + e.getCode());
      System.err.println("Reason: " + e.getResponseBody());
      System.err.println("Response headers: " + e.getResponseHeaders());
      e.printStackTrace();
    }
  }
}
// Create a pipeline with dedupe processor with cache returns "OK" response

import com.datadog.api.client.ApiClient;
import com.datadog.api.client.ApiException;
import com.datadog.api.client.v2.api.ObservabilityPipelinesApi;
import com.datadog.api.client.v2.model.ObservabilityPipeline;
import com.datadog.api.client.v2.model.ObservabilityPipelineConfig;
import com.datadog.api.client.v2.model.ObservabilityPipelineConfigDestinationItem;
import com.datadog.api.client.v2.model.ObservabilityPipelineConfigProcessorGroup;
import com.datadog.api.client.v2.model.ObservabilityPipelineConfigProcessorItem;
import com.datadog.api.client.v2.model.ObservabilityPipelineConfigSourceItem;
import com.datadog.api.client.v2.model.ObservabilityPipelineDataAttributes;
import com.datadog.api.client.v2.model.ObservabilityPipelineDatadogAgentSource;
import com.datadog.api.client.v2.model.ObservabilityPipelineDatadogAgentSourceType;
import com.datadog.api.client.v2.model.ObservabilityPipelineDatadogLogsDestination;
import com.datadog.api.client.v2.model.ObservabilityPipelineDatadogLogsDestinationType;
import com.datadog.api.client.v2.model.ObservabilityPipelineDedupeProcessor;
import com.datadog.api.client.v2.model.ObservabilityPipelineDedupeProcessorCache;
import com.datadog.api.client.v2.model.ObservabilityPipelineDedupeProcessorMode;
import com.datadog.api.client.v2.model.ObservabilityPipelineDedupeProcessorType;
import com.datadog.api.client.v2.model.ObservabilityPipelineSpec;
import com.datadog.api.client.v2.model.ObservabilityPipelineSpecData;
import java.util.Collections;

public class Example {
  public static void main(String[] args) {
    ApiClient defaultClient = ApiClient.getDefaultApiClient();
    ObservabilityPipelinesApi apiInstance = new ObservabilityPipelinesApi(defaultClient);

    ObservabilityPipelineSpec body =
        new ObservabilityPipelineSpec()
            .data(
                new ObservabilityPipelineSpecData()
                    .attributes(
                        new ObservabilityPipelineDataAttributes()
                            .config(
                                new ObservabilityPipelineConfig()
                                    .destinations(Collections.singletonList(
                                        new ObservabilityPipelineConfigDestinationItem(
                                            new ObservabilityPipelineDatadogLogsDestination()
                                                .id("datadog-logs-destination")
                                                .inputs(Collections.singletonList("my-processor-group"))
                                                .type(ObservabilityPipelineDatadogLogsDestinationType.DATADOG_LOGS))))
                                    .processorGroups(Collections.singletonList(
                                        new ObservabilityPipelineConfigProcessorGroup()
                                            .enabled(true)
                                            .id("my-processor-group")
                                            .include("service:my-service")
                                            .inputs(Collections.singletonList("datadog-agent-source"))
                                            .processors(Collections.singletonList(
                                                new ObservabilityPipelineConfigProcessorItem(
                                                    new ObservabilityPipelineDedupeProcessor()
                                                        .enabled(true)
                                                        .id("dedupe-processor")
                                                        .include("service:my-service")
                                                        .type(ObservabilityPipelineDedupeProcessorType.DEDUPE)
                                                        .fields(Collections.singletonList("message"))
                                                        .mode(ObservabilityPipelineDedupeProcessorMode.MATCH)
                                                        .cache(
                                                            new ObservabilityPipelineDedupeProcessorCache()
                                                                .numEvents(5000L)))))))
                                    .sources(Collections.singletonList(
                                        new ObservabilityPipelineConfigSourceItem(
                                            new ObservabilityPipelineDatadogAgentSource()
                                                .id("datadog-agent-source")
                                                .type(ObservabilityPipelineDatadogAgentSourceType.DATADOG_AGENT)))))
                            .name("Pipeline with Dedupe Cache"))
                    .type("pipelines"));

    try {
      ObservabilityPipeline result = apiInstance.createPipeline(body);
      System.out.println(result);
    } catch (ApiException e) {
      System.err.println("Exception when calling ObservabilityPipelinesApi#createPipeline");
      System.err.println("Status code: " + e.getCode());
      System.err.println("Reason: " + e.getResponseBody());
      System.err.println("Response headers: " + e.getResponseHeaders());
      e.printStackTrace();
    }
  }
}
// Create a pipeline with dedupe processor without cache returns "OK" response

import com.datadog.api.client.ApiClient;
import com.datadog.api.client.ApiException;
import com.datadog.api.client.v2.api.ObservabilityPipelinesApi;
import com.datadog.api.client.v2.model.ObservabilityPipeline;
import com.datadog.api.client.v2.model.ObservabilityPipelineConfig;
import com.datadog.api.client.v2.model.ObservabilityPipelineConfigDestinationItem;
import com.datadog.api.client.v2.model.ObservabilityPipelineConfigProcessorGroup;
import com.datadog.api.client.v2.model.ObservabilityPipelineConfigProcessorItem;
import com.datadog.api.client.v2.model.ObservabilityPipelineConfigSourceItem;
import com.datadog.api.client.v2.model.ObservabilityPipelineDataAttributes;
import com.datadog.api.client.v2.model.ObservabilityPipelineDatadogAgentSource;
import com.datadog.api.client.v2.model.ObservabilityPipelineDatadogAgentSourceType;
import com.datadog.api.client.v2.model.ObservabilityPipelineDatadogLogsDestination;
import com.datadog.api.client.v2.model.ObservabilityPipelineDatadogLogsDestinationType;
import com.datadog.api.client.v2.model.ObservabilityPipelineDedupeProcessor;
import com.datadog.api.client.v2.model.ObservabilityPipelineDedupeProcessorMode;
import com.datadog.api.client.v2.model.ObservabilityPipelineDedupeProcessorType;
import com.datadog.api.client.v2.model.ObservabilityPipelineSpec;
import com.datadog.api.client.v2.model.ObservabilityPipelineSpecData;
import java.util.Collections;

public class Example {
  public static void main(String[] args) {
    ApiClient defaultClient = ApiClient.getDefaultApiClient();
    ObservabilityPipelinesApi apiInstance = new ObservabilityPipelinesApi(defaultClient);

    ObservabilityPipelineSpec body =
        new ObservabilityPipelineSpec()
            .data(
                new ObservabilityPipelineSpecData()
                    .attributes(
                        new ObservabilityPipelineDataAttributes()
                            .config(
                                new ObservabilityPipelineConfig()
                                    .destinations(Collections.singletonList(
                                        new ObservabilityPipelineConfigDestinationItem(
                                            new ObservabilityPipelineDatadogLogsDestination()
                                                .id("datadog-logs-destination")
                                                .inputs(Collections.singletonList("my-processor-group"))
                                                .type(ObservabilityPipelineDatadogLogsDestinationType.DATADOG_LOGS))))
                                    .processorGroups(Collections.singletonList(
                                        new ObservabilityPipelineConfigProcessorGroup()
                                            .enabled(true)
                                            .id("my-processor-group")
                                            .include("service:my-service")
                                            .inputs(Collections.singletonList("datadog-agent-source"))
                                            .processors(Collections.singletonList(
                                                new ObservabilityPipelineConfigProcessorItem(
                                                    new ObservabilityPipelineDedupeProcessor()
                                                        .enabled(true)
                                                        .id("dedupe-processor")
                                                        .include("service:my-service")
                                                        .type(ObservabilityPipelineDedupeProcessorType.DEDUPE)
                                                        .fields(Collections.singletonList("message"))
                                                        .mode(ObservabilityPipelineDedupeProcessorMode.MATCH))))))
                                    .sources(Collections.singletonList(
                                        new ObservabilityPipelineConfigSourceItem(
                                            new ObservabilityPipelineDatadogAgentSource()
                                                .id("datadog-agent-source")
                                                .type(ObservabilityPipelineDatadogAgentSourceType.DATADOG_AGENT)))))
                            .name("Pipeline with Dedupe No Cache"))
                    .type("pipelines"));

    try {
      ObservabilityPipeline result = apiInstance.createPipeline(body);
      System.out.println(result);
    } catch (ApiException e) {
      System.err.println("Exception when calling ObservabilityPipelinesApi#createPipeline");
      System.err.println("Status code: " + e.getCode());
      System.err.println("Reason: " + e.getResponseBody());
      System.err.println("Response headers: " + e.getResponseHeaders());
      e.printStackTrace();
    }
  }
}
"""
Create a new pipeline returns "OK" response
"""fromdatadog_api_clientimportApiClient,Configurationfromdatadog_api_client.v2.api.observability_pipelines_apiimportObservabilityPipelinesApifromdatadog_api_client.v2.model.observability_pipeline_configimportObservabilityPipelineConfigfromdatadog_api_client.v2.model.observability_pipeline_config_processor_groupimport(ObservabilityPipelineConfigProcessorGroup,)fromdatadog_api_client.v2.model.observability_pipeline_data_attributesimportObservabilityPipelineDataAttributesfromdatadog_api_client.v2.model.observability_pipeline_datadog_agent_sourceimport(ObservabilityPipelineDatadogAgentSource,)fromdatadog_api_client.v2.model.observability_pipeline_datadog_agent_source_typeimport(ObservabilityPipelineDatadogAgentSourceType,)fromdatadog_api_client.v2.model.observability_pipeline_datadog_logs_destinationimport(ObservabilityPipelineDatadogLogsDestination,)fromdatadog_api_client.v2.model.observability_pipeline_datadog_logs_destination_typeimport(ObservabilityPipelineDatadogLogsDestinationType,)fromdatadog_api_client.v2.model.observability_pipeline_filter_processorimportObservabilityPipelineFilterProcessorfromdatadog_api_client.v2.model.observability_pipeline_filter_processor_typeimport(ObservabilityPipelineFilterProcessorType,)fromdatadog_api_client.v2.model.observability_pipeline_specimportObservabilityPipelineSpecfromdatadog_api_client.v2.model.observability_pipeline_spec_dataimportObservabilityPipelineSpecDatabody=ObservabilityPipelineSpec(data=ObservabilityPipelineSpecData(attributes=ObservabilityPipelineDataAttributes(config=ObservabilityPipelineConfig(destinations=[ObservabilityPipelineDatadogLogsDestination(id="datadog-logs-destination",inputs=["my-processor-group",],type=ObservabilityPipelineDatadogLogsDestinationType.DATADOG_LOGS,),],processor_groups=[ObservabilityPipelineConfigProcessorGroup(enabled=True,id="my-processor-group",include="service:my-service",inputs=["datadog-agent-source",],processors=[ObservabilityPipelineFilterProcessor(enabled=True,id="filter-proc
essor",include="status:error",type=ObservabilityPipelineFilterProcessorType.FILTER,),],),],sources=[ObservabilityPipelineDatadogAgentSource(id="datadog-agent-source",type=ObservabilityPipelineDatadogAgentSourceType.DATADOG_AGENT,),],),name="Main Observability Pipeline",),type="pipelines",),)configuration=Configuration()withApiClient(configuration)asapi_client:api_instance=ObservabilityPipelinesApi(api_client)response=api_instance.create_pipeline(body=body)print(response)
"""
Create a pipeline with dedupe processor with cache returns "OK" response
"""fromdatadog_api_clientimportApiClient,Configurationfromdatadog_api_client.v2.api.observability_pipelines_apiimportObservabilityPipelinesApifromdatadog_api_client.v2.model.observability_pipeline_configimportObservabilityPipelineConfigfromdatadog_api_client.v2.model.observability_pipeline_config_processor_groupimport(ObservabilityPipelineConfigProcessorGroup,)fromdatadog_api_client.v2.model.observability_pipeline_data_attributesimportObservabilityPipelineDataAttributesfromdatadog_api_client.v2.model.observability_pipeline_datadog_agent_sourceimport(ObservabilityPipelineDatadogAgentSource,)fromdatadog_api_client.v2.model.observability_pipeline_datadog_agent_source_typeimport(ObservabilityPipelineDatadogAgentSourceType,)fromdatadog_api_client.v2.model.observability_pipeline_datadog_logs_destinationimport(ObservabilityPipelineDatadogLogsDestination,)fromdatadog_api_client.v2.model.observability_pipeline_datadog_logs_destination_typeimport(ObservabilityPipelineDatadogLogsDestinationType,)fromdatadog_api_client.v2.model.observability_pipeline_dedupe_processorimportObservabilityPipelineDedupeProcessorfromdatadog_api_client.v2.model.observability_pipeline_dedupe_processor_cacheimport(ObservabilityPipelineDedupeProcessorCache,)fromdatadog_api_client.v2.model.observability_pipeline_dedupe_processor_modeimport(ObservabilityPipelineDedupeProcessorMode,)fromdatadog_api_client.v2.model.observability_pipeline_dedupe_processor_typeimport(ObservabilityPipelineDedupeProcessorType,)fromdatadog_api_client.v2.model.observability_pipeline_specimportObservabilityPipelineSpecfromdatadog_api_client.v2.model.observability_pipeline_spec_dataimportObservabilityPipelineSpecDatabody=ObservabilityPipelineSpec(data=ObservabilityPipelineSpecData(attributes=ObservabilityPipelineDataAttributes(config=ObservabilityPipelineConfig(destinations=[ObservabilityPipelineDatadogLogsDestination(id="datadog-logs-destination",inputs=["my-processor-group",],type=ObservabilityPipelineDatadogLogsDestinationType.D
ATADOG_LOGS,),],processor_groups=[ObservabilityPipelineConfigProcessorGroup(enabled=True,id="my-processor-group",include="service:my-service",inputs=["datadog-agent-source",],processors=[ObservabilityPipelineDedupeProcessor(enabled=True,id="dedupe-processor",include="service:my-service",type=ObservabilityPipelineDedupeProcessorType.DEDUPE,fields=["message",],mode=ObservabilityPipelineDedupeProcessorMode.MATCH,cache=ObservabilityPipelineDedupeProcessorCache(num_events=5000,),),],),],sources=[ObservabilityPipelineDatadogAgentSource(id="datadog-agent-source",type=ObservabilityPipelineDatadogAgentSourceType.DATADOG_AGENT,),],),name="Pipeline with Dedupe Cache",),type="pipelines",),)configuration=Configuration()withApiClient(configuration)asapi_client:api_instance=ObservabilityPipelinesApi(api_client)response=api_instance.create_pipeline(body=body)print(response)
"""
Create a pipeline with dedupe processor without cache returns "OK" response
"""fromdatadog_api_clientimportApiClient,Configurationfromdatadog_api_client.v2.api.observability_pipelines_apiimportObservabilityPipelinesApifromdatadog_api_client.v2.model.observability_pipeline_configimportObservabilityPipelineConfigfromdatadog_api_client.v2.model.observability_pipeline_config_processor_groupimport(ObservabilityPipelineConfigProcessorGroup,)fromdatadog_api_client.v2.model.observability_pipeline_data_attributesimportObservabilityPipelineDataAttributesfromdatadog_api_client.v2.model.observability_pipeline_datadog_agent_sourceimport(ObservabilityPipelineDatadogAgentSource,)fromdatadog_api_client.v2.model.observability_pipeline_datadog_agent_source_typeimport(ObservabilityPipelineDatadogAgentSourceType,)fromdatadog_api_client.v2.model.observability_pipeline_datadog_logs_destinationimport(ObservabilityPipelineDatadogLogsDestination,)fromdatadog_api_client.v2.model.observability_pipeline_datadog_logs_destination_typeimport(ObservabilityPipelineDatadogLogsDestinationType,)fromdatadog_api_client.v2.model.observability_pipeline_dedupe_processorimportObservabilityPipelineDedupeProcessorfromdatadog_api_client.v2.model.observability_pipeline_dedupe_processor_modeimport(ObservabilityPipelineDedupeProcessorMode,)fromdatadog_api_client.v2.model.observability_pipeline_dedupe_processor_typeimport(ObservabilityPipelineDedupeProcessorType,)fromdatadog_api_client.v2.model.observability_pipeline_specimportObservabilityPipelineSpecfromdatadog_api_client.v2.model.observability_pipeline_spec_dataimportObservabilityPipelineSpecDatabody=ObservabilityPipelineSpec(data=ObservabilityPipelineSpecData(attributes=ObservabilityPipelineDataAttributes(config=ObservabilityPipelineConfig(destinations=[ObservabilityPipelineDatadogLogsDestination(id="datadog-logs-destination",inputs=["my-processor-group",],type=ObservabilityPipelineDatadogLogsDestinationType.DATADOG_LOGS,),],processor_groups=[ObservabilityPipelineConfigProcessorGroup(enabled=True,id="my-processor-group",include="servi
ce:my-service",inputs=["datadog-agent-source",],processors=[ObservabilityPipelineDedupeProcessor(enabled=True,id="dedupe-processor",include="service:my-service",type=ObservabilityPipelineDedupeProcessorType.DEDUPE,fields=["message",],mode=ObservabilityPipelineDedupeProcessorMode.MATCH,),],),],sources=[ObservabilityPipelineDatadogAgentSource(id="datadog-agent-source",type=ObservabilityPipelineDatadogAgentSourceType.DATADOG_AGENT,),],),name="Pipeline with Dedupe No Cache",),type="pipelines",),)configuration=Configuration()withApiClient(configuration)asapi_client:api_instance=ObservabilityPipelinesApi(api_client)response=api_instance.create_pipeline(body=body)print(response)
# Create a new pipeline returns "OK" response

require "datadog_api_client"

api_instance = DatadogAPIClient::V2::ObservabilityPipelinesAPI.new

body = DatadogAPIClient::V2::ObservabilityPipelineSpec.new({
  data: DatadogAPIClient::V2::ObservabilityPipelineSpecData.new({
    attributes: DatadogAPIClient::V2::ObservabilityPipelineDataAttributes.new({
      config: DatadogAPIClient::V2::ObservabilityPipelineConfig.new({
        destinations: [
          DatadogAPIClient::V2::ObservabilityPipelineDatadogLogsDestination.new({
            id: "datadog-logs-destination",
            inputs: ["my-processor-group"],
            type: DatadogAPIClient::V2::ObservabilityPipelineDatadogLogsDestinationType::DATADOG_LOGS,
          }),
        ],
        processor_groups: [
          DatadogAPIClient::V2::ObservabilityPipelineConfigProcessorGroup.new({
            enabled: true,
            id: "my-processor-group",
            include: "service:my-service",
            inputs: ["datadog-agent-source"],
            processors: [
              DatadogAPIClient::V2::ObservabilityPipelineFilterProcessor.new({
                enabled: true,
                id: "filter-processor",
                include: "status:error",
                type: DatadogAPIClient::V2::ObservabilityPipelineFilterProcessorType::FILTER,
              }),
            ],
          }),
        ],
        sources: [
          DatadogAPIClient::V2::ObservabilityPipelineDatadogAgentSource.new({
            id: "datadog-agent-source",
            type: DatadogAPIClient::V2::ObservabilityPipelineDatadogAgentSourceType::DATADOG_AGENT,
          }),
        ],
      }),
      name: "Main Observability Pipeline",
    }),
    type: "pipelines",
  }),
})

p api_instance.create_pipeline(body)
# Create a pipeline with dedupe processor with cache returns "OK" response

require "datadog_api_client"

api_instance = DatadogAPIClient::V2::ObservabilityPipelinesAPI.new

body = DatadogAPIClient::V2::ObservabilityPipelineSpec.new({
  data: DatadogAPIClient::V2::ObservabilityPipelineSpecData.new({
    attributes: DatadogAPIClient::V2::ObservabilityPipelineDataAttributes.new({
      config: DatadogAPIClient::V2::ObservabilityPipelineConfig.new({
        destinations: [
          DatadogAPIClient::V2::ObservabilityPipelineDatadogLogsDestination.new({
            id: "datadog-logs-destination",
            inputs: ["my-processor-group"],
            type: DatadogAPIClient::V2::ObservabilityPipelineDatadogLogsDestinationType::DATADOG_LOGS,
          }),
        ],
        processor_groups: [
          DatadogAPIClient::V2::ObservabilityPipelineConfigProcessorGroup.new({
            enabled: true,
            id: "my-processor-group",
            include: "service:my-service",
            inputs: ["datadog-agent-source"],
            processors: [
              DatadogAPIClient::V2::ObservabilityPipelineDedupeProcessor.new({
                enabled: true,
                id: "dedupe-processor",
                include: "service:my-service",
                type: DatadogAPIClient::V2::ObservabilityPipelineDedupeProcessorType::DEDUPE,
                fields: ["message"],
                mode: DatadogAPIClient::V2::ObservabilityPipelineDedupeProcessorMode::MATCH,
                cache: DatadogAPIClient::V2::ObservabilityPipelineDedupeProcessorCache.new({
                  num_events: 5000,
                }),
              }),
            ],
          }),
        ],
        sources: [
          DatadogAPIClient::V2::ObservabilityPipelineDatadogAgentSource.new({
            id: "datadog-agent-source",
            type: DatadogAPIClient::V2::ObservabilityPipelineDatadogAgentSourceType::DATADOG_AGENT,
          }),
        ],
      }),
      name: "Pipeline with Dedupe Cache",
    }),
    type: "pipelines",
  }),
})

p api_instance.create_pipeline(body)
# Create a pipeline with dedupe processor without cache returns "OK" response

require "datadog_api_client"

api_instance = DatadogAPIClient::V2::ObservabilityPipelinesAPI.new

body = DatadogAPIClient::V2::ObservabilityPipelineSpec.new({
  data: DatadogAPIClient::V2::ObservabilityPipelineSpecData.new({
    attributes: DatadogAPIClient::V2::ObservabilityPipelineDataAttributes.new({
      config: DatadogAPIClient::V2::ObservabilityPipelineConfig.new({
        destinations: [
          DatadogAPIClient::V2::ObservabilityPipelineDatadogLogsDestination.new({
            id: "datadog-logs-destination",
            inputs: ["my-processor-group"],
            type: DatadogAPIClient::V2::ObservabilityPipelineDatadogLogsDestinationType::DATADOG_LOGS,
          }),
        ],
        processor_groups: [
          DatadogAPIClient::V2::ObservabilityPipelineConfigProcessorGroup.new({
            enabled: true,
            id: "my-processor-group",
            include: "service:my-service",
            inputs: ["datadog-agent-source"],
            processors: [
              DatadogAPIClient::V2::ObservabilityPipelineDedupeProcessor.new({
                enabled: true,
                id: "dedupe-processor",
                include: "service:my-service",
                type: DatadogAPIClient::V2::ObservabilityPipelineDedupeProcessorType::DEDUPE,
                fields: ["message"],
                mode: DatadogAPIClient::V2::ObservabilityPipelineDedupeProcessorMode::MATCH,
              }),
            ],
          }),
        ],
        sources: [
          DatadogAPIClient::V2::ObservabilityPipelineDatadogAgentSource.new({
            id: "datadog-agent-source",
            type: DatadogAPIClient::V2::ObservabilityPipelineDatadogAgentSourceType::DATADOG_AGENT,
          }),
        ],
      }),
      name: "Pipeline with Dedupe No Cache",
    }),
    type: "pipelines",
  }),
})

p api_instance.create_pipeline(body)
// Create a new pipeline returns "OK" response
use datadog_api_client::datadog;
use datadog_api_client::datadogV2::api_observability_pipelines::ObservabilityPipelinesAPI;
use datadog_api_client::datadogV2::model::ObservabilityPipelineConfig;
use datadog_api_client::datadogV2::model::ObservabilityPipelineConfigDestinationItem;
use datadog_api_client::datadogV2::model::ObservabilityPipelineConfigProcessorGroup;
use datadog_api_client::datadogV2::model::ObservabilityPipelineConfigProcessorItem;
use datadog_api_client::datadogV2::model::ObservabilityPipelineConfigSourceItem;
use datadog_api_client::datadogV2::model::ObservabilityPipelineDataAttributes;
use datadog_api_client::datadogV2::model::ObservabilityPipelineDatadogAgentSource;
use datadog_api_client::datadogV2::model::ObservabilityPipelineDatadogAgentSourceType;
use datadog_api_client::datadogV2::model::ObservabilityPipelineDatadogLogsDestination;
use datadog_api_client::datadogV2::model::ObservabilityPipelineDatadogLogsDestinationType;
use datadog_api_client::datadogV2::model::ObservabilityPipelineFilterProcessor;
use datadog_api_client::datadogV2::model::ObservabilityPipelineFilterProcessorType;
use datadog_api_client::datadogV2::model::ObservabilityPipelineSpec;
use datadog_api_client::datadogV2::model::ObservabilityPipelineSpecData;

#[tokio::main]
async fn main() {
    let body = ObservabilityPipelineSpec::new(
        ObservabilityPipelineSpecData::new(
            ObservabilityPipelineDataAttributes::new(
                ObservabilityPipelineConfig::new(
                    vec![
                        ObservabilityPipelineConfigDestinationItem::ObservabilityPipelineDatadogLogsDestination(
                            Box::new(ObservabilityPipelineDatadogLogsDestination::new(
                                "datadog-logs-destination".to_string(),
                                vec!["my-processor-group".to_string()],
                                ObservabilityPipelineDatadogLogsDestinationType::DATADOG_LOGS,
                            )),
                        ),
                    ],
                    vec![
                        ObservabilityPipelineConfigSourceItem::ObservabilityPipelineDatadogAgentSource(
                            Box::new(ObservabilityPipelineDatadogAgentSource::new(
                                "datadog-agent-source".to_string(),
                                ObservabilityPipelineDatadogAgentSourceType::DATADOG_AGENT,
                            )),
                        ),
                    ],
                )
                .processor_groups(vec![ObservabilityPipelineConfigProcessorGroup::new(
                    true,
                    "my-processor-group".to_string(),
                    "service:my-service".to_string(),
                    vec!["datadog-agent-source".to_string()],
                    vec![
                        ObservabilityPipelineConfigProcessorItem::ObservabilityPipelineFilterProcessor(
                            Box::new(ObservabilityPipelineFilterProcessor::new(
                                true,
                                "filter-processor".to_string(),
                                "status:error".to_string(),
                                ObservabilityPipelineFilterProcessorType::FILTER,
                            )),
                        ),
                    ],
                )]),
                "Main Observability Pipeline".to_string(),
            ),
            "pipelines".to_string(),
        ),
    );
    let configuration = datadog::Configuration::new();
    let api = ObservabilityPipelinesAPI::with_config(configuration);
    let resp = api.create_pipeline(body).await;
    if let Ok(value) = resp {
        println!("{:#?}", value);
    } else {
        println!("{:#?}", resp.unwrap_err());
    }
}
// Create a pipeline with dedupe processor with cache returns "OK" response
use datadog_api_client::datadog;
use datadog_api_client::datadogV2::api_observability_pipelines::ObservabilityPipelinesAPI;
use datadog_api_client::datadogV2::model::ObservabilityPipelineConfig;
use datadog_api_client::datadogV2::model::ObservabilityPipelineConfigDestinationItem;
use datadog_api_client::datadogV2::model::ObservabilityPipelineConfigProcessorGroup;
use datadog_api_client::datadogV2::model::ObservabilityPipelineConfigProcessorItem;
use datadog_api_client::datadogV2::model::ObservabilityPipelineConfigSourceItem;
use datadog_api_client::datadogV2::model::ObservabilityPipelineDataAttributes;
use datadog_api_client::datadogV2::model::ObservabilityPipelineDatadogAgentSource;
use datadog_api_client::datadogV2::model::ObservabilityPipelineDatadogAgentSourceType;
use datadog_api_client::datadogV2::model::ObservabilityPipelineDatadogLogsDestination;
use datadog_api_client::datadogV2::model::ObservabilityPipelineDatadogLogsDestinationType;
use datadog_api_client::datadogV2::model::ObservabilityPipelineDedupeProcessor;
use datadog_api_client::datadogV2::model::ObservabilityPipelineDedupeProcessorCache;
use datadog_api_client::datadogV2::model::ObservabilityPipelineDedupeProcessorMode;
use datadog_api_client::datadogV2::model::ObservabilityPipelineDedupeProcessorType;
use datadog_api_client::datadogV2::model::ObservabilityPipelineSpec;
use datadog_api_client::datadogV2::model::ObservabilityPipelineSpecData;

#[tokio::main]
async fn main() {
    let body = ObservabilityPipelineSpec::new(
        ObservabilityPipelineSpecData::new(
            ObservabilityPipelineDataAttributes::new(
                ObservabilityPipelineConfig::new(
                    vec![
                        ObservabilityPipelineConfigDestinationItem::ObservabilityPipelineDatadogLogsDestination(
                            Box::new(ObservabilityPipelineDatadogLogsDestination::new(
                                "datadog-logs-destination".to_string(),
                                vec!["my-processor-group".to_string()],
                                ObservabilityPipelineDatadogLogsDestinationType::DATADOG_LOGS,
                            )),
                        ),
                    ],
                    vec![
                        ObservabilityPipelineConfigSourceItem::ObservabilityPipelineDatadogAgentSource(
                            Box::new(ObservabilityPipelineDatadogAgentSource::new(
                                "datadog-agent-source".to_string(),
                                ObservabilityPipelineDatadogAgentSourceType::DATADOG_AGENT,
                            )),
                        ),
                    ],
                )
                .processor_groups(vec![ObservabilityPipelineConfigProcessorGroup::new(
                    true,
                    "my-processor-group".to_string(),
                    "service:my-service".to_string(),
                    vec!["datadog-agent-source".to_string()],
                    vec![
                        ObservabilityPipelineConfigProcessorItem::ObservabilityPipelineDedupeProcessor(
                            Box::new(
                                ObservabilityPipelineDedupeProcessor::new(
                                    true,
                                    vec!["message".to_string()],
                                    "dedupe-processor".to_string(),
                                    "service:my-service".to_string(),
                                    ObservabilityPipelineDedupeProcessorMode::MATCH,
                                    ObservabilityPipelineDedupeProcessorType::DEDUPE,
                                )
                                .cache(ObservabilityPipelineDedupeProcessorCache::new(5000)),
                            ),
                        ),
                    ],
                )]),
                "Pipeline with Dedupe Cache".to_string(),
            ),
            "pipelines".to_string(),
        ),
    );
    let configuration = datadog::Configuration::new();
    let api = ObservabilityPipelinesAPI::with_config(configuration);
    let resp = api.create_pipeline(body).await;
    if let Ok(value) = resp {
        println!("{:#?}", value);
    } else {
        println!("{:#?}", resp.unwrap_err());
    }
}
// Create a pipeline with dedupe processor without cache returns "OK" response
use datadog_api_client::datadog;
use datadog_api_client::datadogV2::api_observability_pipelines::ObservabilityPipelinesAPI;
use datadog_api_client::datadogV2::model::ObservabilityPipelineConfig;
use datadog_api_client::datadogV2::model::ObservabilityPipelineConfigDestinationItem;
use datadog_api_client::datadogV2::model::ObservabilityPipelineConfigProcessorGroup;
use datadog_api_client::datadogV2::model::ObservabilityPipelineConfigProcessorItem;
use datadog_api_client::datadogV2::model::ObservabilityPipelineConfigSourceItem;
use datadog_api_client::datadogV2::model::ObservabilityPipelineDataAttributes;
use datadog_api_client::datadogV2::model::ObservabilityPipelineDatadogAgentSource;
use datadog_api_client::datadogV2::model::ObservabilityPipelineDatadogAgentSourceType;
use datadog_api_client::datadogV2::model::ObservabilityPipelineDatadogLogsDestination;
use datadog_api_client::datadogV2::model::ObservabilityPipelineDatadogLogsDestinationType;
use datadog_api_client::datadogV2::model::ObservabilityPipelineDedupeProcessor;
use datadog_api_client::datadogV2::model::ObservabilityPipelineDedupeProcessorMode;
use datadog_api_client::datadogV2::model::ObservabilityPipelineDedupeProcessorType;
use datadog_api_client::datadogV2::model::ObservabilityPipelineSpec;
use datadog_api_client::datadogV2::model::ObservabilityPipelineSpecData;

#[tokio::main]
async fn main() {
    let body = ObservabilityPipelineSpec::new(
        ObservabilityPipelineSpecData::new(
            ObservabilityPipelineDataAttributes::new(
                ObservabilityPipelineConfig::new(
                    vec![
                        ObservabilityPipelineConfigDestinationItem::ObservabilityPipelineDatadogLogsDestination(
                            Box::new(ObservabilityPipelineDatadogLogsDestination::new(
                                "datadog-logs-destination".to_string(),
                                vec!["my-processor-group".to_string()],
                                ObservabilityPipelineDatadogLogsDestinationType::DATADOG_LOGS,
                            )),
                        ),
                    ],
                    vec![
                        ObservabilityPipelineConfigSourceItem::ObservabilityPipelineDatadogAgentSource(
                            Box::new(ObservabilityPipelineDatadogAgentSource::new(
                                "datadog-agent-source".to_string(),
                                ObservabilityPipelineDatadogAgentSourceType::DATADOG_AGENT,
                            )),
                        ),
                    ],
                )
                .processor_groups(vec![ObservabilityPipelineConfigProcessorGroup::new(
                    true,
                    "my-processor-group".to_string(),
                    "service:my-service".to_string(),
                    vec!["datadog-agent-source".to_string()],
                    vec![
                        ObservabilityPipelineConfigProcessorItem::ObservabilityPipelineDedupeProcessor(
                            Box::new(ObservabilityPipelineDedupeProcessor::new(
                                true,
                                vec!["message".to_string()],
                                "dedupe-processor".to_string(),
                                "service:my-service".to_string(),
                                ObservabilityPipelineDedupeProcessorMode::MATCH,
                                ObservabilityPipelineDedupeProcessorType::DEDUPE,
                            )),
                        ),
                    ],
                )]),
                "Pipeline with Dedupe No Cache".to_string(),
            ),
            "pipelines".to_string(),
        ),
    );
    let configuration = datadog::Configuration::new();
    let api = ObservabilityPipelinesAPI::with_config(configuration);
    let resp = api.create_pipeline(body).await;
    if let Ok(value) = resp {
        println!("{:#?}", value);
    } else {
        println!("{:#?}", resp.unwrap_err());
    }
}
# DD_SITE may be any Datadog site, for example: datadoghq.com, us3.datadoghq.com, us5.datadoghq.com, datadoghq.eu, ap1.datadoghq.com, ap2.datadoghq.com, ddog-gov.com
DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" cargo run
/**
* Create a new pipeline returns "OK" response
 */
import { client, v2 } from "@datadog/datadog-api-client";

const configuration = client.createConfiguration();
const apiInstance = new v2.ObservabilityPipelinesApi(configuration);

const params: v2.ObservabilityPipelinesApiCreatePipelineRequest = {
  body: {
    data: {
      attributes: {
        config: {
          destinations: [
            {
              id: "datadog-logs-destination",
              inputs: ["my-processor-group"],
              type: "datadog_logs",
            },
          ],
          processorGroups: [
            {
              enabled: true,
              id: "my-processor-group",
              include: "service:my-service",
              inputs: ["datadog-agent-source"],
              processors: [
                {
                  enabled: true,
                  id: "filter-processor",
                  include: "status:error",
                  type: "filter",
                },
              ],
            },
          ],
          sources: [
            {
              id: "datadog-agent-source",
              type: "datadog_agent",
            },
          ],
        },
        name: "Main Observability Pipeline",
      },
      type: "pipelines",
    },
  },
};

apiInstance
  .createPipeline(params)
  .then((data: v2.ObservabilityPipeline) => {
    console.log("API called successfully. Returned data: " + JSON.stringify(data));
  })
  .catch((error: any) => console.error(error));
/**
* Create a pipeline with dedupe processor with cache returns "OK" response
 */
import { client, v2 } from "@datadog/datadog-api-client";

const configuration = client.createConfiguration();
const apiInstance = new v2.ObservabilityPipelinesApi(configuration);

const params: v2.ObservabilityPipelinesApiCreatePipelineRequest = {
  body: {
    data: {
      attributes: {
        config: {
          destinations: [
            {
              id: "datadog-logs-destination",
              inputs: ["my-processor-group"],
              type: "datadog_logs",
            },
          ],
          processorGroups: [
            {
              enabled: true,
              id: "my-processor-group",
              include: "service:my-service",
              inputs: ["datadog-agent-source"],
              processors: [
                {
                  enabled: true,
                  id: "dedupe-processor",
                  include: "service:my-service",
                  type: "dedupe",
                  fields: ["message"],
                  mode: "match",
                  cache: {
                    numEvents: 5000,
                  },
                },
              ],
            },
          ],
          sources: [
            {
              id: "datadog-agent-source",
              type: "datadog_agent",
            },
          ],
        },
        name: "Pipeline with Dedupe Cache",
      },
      type: "pipelines",
    },
  },
};

apiInstance
  .createPipeline(params)
  .then((data: v2.ObservabilityPipeline) => {
    console.log("API called successfully. Returned data: " + JSON.stringify(data));
  })
  .catch((error: any) => console.error(error));
/**
* Create a pipeline with dedupe processor without cache returns "OK" response
 */
import { client, v2 } from "@datadog/datadog-api-client";

const configuration = client.createConfiguration();
const apiInstance = new v2.ObservabilityPipelinesApi(configuration);

const params: v2.ObservabilityPipelinesApiCreatePipelineRequest = {
  body: {
    data: {
      attributes: {
        config: {
          destinations: [
            {
              id: "datadog-logs-destination",
              inputs: ["my-processor-group"],
              type: "datadog_logs",
            },
          ],
          processorGroups: [
            {
              enabled: true,
              id: "my-processor-group",
              include: "service:my-service",
              inputs: ["datadog-agent-source"],
              processors: [
                {
                  enabled: true,
                  id: "dedupe-processor",
                  include: "service:my-service",
                  type: "dedupe",
                  fields: ["message"],
                  mode: "match",
                },
              ],
            },
          ],
          sources: [
            {
              id: "datadog-agent-source",
              type: "datadog_agent",
            },
          ],
        },
        name: "Pipeline with Dedupe No Cache",
      },
      type: "pipelines",
    },
  },
};

apiInstance
  .createPipeline(params)
  .then((data: v2.ObservabilityPipeline) => {
    console.log("API called successfully. Returned data: " + JSON.stringify(data));
  })
  .catch((error: any) => console.error(error));
Name of the environment variable or secret that holds a custom header value (used with custom auth strategies).
encoding [required]
enum
Encoding format for log events.
Allowed enum values: json
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
password_key
string
Name of the environment variable or secret that holds the password (used when auth_strategy is basic).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
token_key
string
Name of the environment variable or secret that holds the bearer token (used when auth_strategy is bearer).
type [required]
enum
The destination type. The value should always be http_client.
Allowed enum values: http_client
default: http_client
uri_key
string
Name of the environment variable or secret that holds the HTTP endpoint URI.
username_key
string
Name of the environment variable or secret that holds the username (used when auth_strategy is basic).
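Putting the fields above together, an `http_client` destination object might look like the following illustrative JSON fragment. The `*_KEY` values (for example `HTTP_ENDPOINT_URI`) are hypothetical environment-variable or secret names chosen for this sketch, and the basic-auth fields assume the destination's auth strategy is set to basic; this is not an authoritative payload.

```json
{
  "id": "http-client-destination",
  "type": "http_client",
  "inputs": ["my-processor-group"],
  "encoding": "json",
  "uri_key": "HTTP_ENDPOINT_URI",
  "username_key": "HTTP_USERNAME",
  "password_key": "HTTP_PASSWORD",
  "tls": {
    "ca_file": "/etc/certs/ca.crt",
    "crt_file": "/etc/certs/client.crt",
    "key_file": "/etc/certs/client.key"
  }
}
```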
Option 2
object
The amazon_opensearch destination writes logs to Amazon OpenSearch.
Supported pipeline types: logs
auth [required]
object
Authentication settings for the Amazon OpenSearch destination.
The strategy field determines whether basic or AWS-based authentication is used.
assume_role
string
The ARN of the role to assume (used with aws strategy).
aws_region
string
The AWS region.
external_id
string
External ID for the assumed role (used with aws strategy).
session_name
string
Session name for the assumed role (used with aws strategy).
strategy [required]
enum
The authentication strategy to use.
Allowed enum values: basic,aws
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
bulk_index
string
The index to write logs to.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The destination type. The value should always be amazon_opensearch.
Allowed enum values: amazon_opensearch
default: amazon_opensearch
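As a sketch of how these fields combine, the following illustrative JSON fragment configures an `amazon_opensearch` destination with AWS-based authentication and a disk buffer. The ARN, external ID, region, index name, and buffer size are placeholder values, not API defaults.

```json
{
  "id": "opensearch-destination",
  "type": "amazon_opensearch",
  "inputs": ["my-processor-group"],
  "bulk_index": "logs-index",
  "auth": {
    "strategy": "aws",
    "aws_region": "us-east-1",
    "assume_role": "arn:aws:iam::123456789012:role/opensearch-writer",
    "external_id": "my-external-id",
    "session_name": "pipeline-session"
  },
  "buffer": {
    "type": "disk",
    "max_size": 1073741824,
    "when_full": "block"
  }
}
```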
Option 3
object
The amazon_s3 destination sends your logs in Datadog-rehydratable format to an Amazon S3 bucket for archiving.
Supported pipeline types: logs
auth
object
AWS authentication credentials used for accessing AWS services such as S3.
If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).
assume_role
string
The Amazon Resource Name (ARN) of the role to assume.
external_id
string
A unique identifier for cross-account role assumption.
session_name
string
A session identifier used for logging and tracing the assumed role session.
bucket [required]
string
S3 bucket name.
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
id [required]
string
Unique identifier for the destination component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The destination type. Always amazon_s3.
Allowed enum values: amazon_s3
default: amazon_s3
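An `amazon_s3` destination using these fields might look like the illustrative fragment below. The bucket name, role ARN, and buffer settings are placeholders; per the `auth` description above, omitting `auth` entirely falls back to the system's default AWS credentials.

```json
{
  "id": "s3-archive-destination",
  "type": "amazon_s3",
  "inputs": ["my-processor-group"],
  "bucket": "my-log-archive",
  "auth": {
    "assume_role": "arn:aws:iam::123456789012:role/s3-archiver"
  },
  "buffer": {
    "type": "memory",
    "max_events": 10000,
    "when_full": "drop_newest"
  }
}
```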
Option 4
object
The amazon_security_lake destination sends your logs to Amazon Security Lake.
Supported pipeline types: logs
auth
object
AWS authentication credentials used for accessing AWS services such as S3.
If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).
assume_role
string
The Amazon Resource Name (ARN) of the role to assume.
external_id
string
A unique identifier for cross-account role assumption.
session_name
string
A session identifier used for logging and tracing the assumed role session.
bucket [required]
string
Name of the Amazon S3 bucket in Security Lake (3-63 characters).
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
custom_source_name [required]
string
Custom source name for the logs in Security Lake.
id [required]
string
Unique identifier for the destination component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
region [required]
string
AWS region of the S3 bucket.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The destination type. Always amazon_security_lake.
Allowed enum values: amazon_security_lake
default: amazon_security_lake
Option 5
object
The azure_storage destination forwards logs to an Azure Blob Storage container.
Supported pipeline types: logs
blob_prefix
string
Optional prefix for blobs written to the container.
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
connection_string_key
string
Name of the environment variable or secret that holds the Azure Storage connection string.
container_name [required]
string
The name of the Azure Blob Storage container to store logs in.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The destination type. The value should always be azure_storage.
Allowed enum values: azure_storage
default: azure_storage
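For comparison, an `azure_storage` destination built from the fields above could look like this illustrative fragment. The container name, blob prefix, and the `AZURE_STORAGE_CONNECTION_STRING` secret name are placeholders chosen for the sketch.

```json
{
  "id": "azure-storage-destination",
  "type": "azure_storage",
  "inputs": ["my-processor-group"],
  "container_name": "log-archive",
  "blob_prefix": "pipeline-logs/",
  "connection_string_key": "AZURE_STORAGE_CONNECTION_STRING"
}
```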
Option 6
object
The cloud_prem destination sends logs to Datadog CloudPrem.
Supported pipeline types: logs
endpoint_url_key
string
Name of the environment variable or secret that holds the CloudPrem endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The destination type. The value should always be cloud_prem.
Allowed enum values: cloud_prem
default: cloud_prem
Option 7
object
The crowdstrike_next_gen_siem destination forwards logs to CrowdStrike Next Gen SIEM.
Supported pipeline types: logs
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
compression
object
Compression configuration for log events.
algorithm [required]
enum
Compression algorithm for log events.
Allowed enum values: gzip,zlib
level
int64
Compression level.
encoding [required]
enum
Encoding format for log events.
Allowed enum values: json,raw_message
endpoint_url_key
string
Name of the environment variable or secret that holds the CrowdStrike endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
token_key
string
Name of the environment variable or secret that holds the CrowdStrike API token.
type [required]
enum
The destination type. The value should always be crowdstrike_next_gen_siem.
Allowed enum values: crowdstrike_next_gen_siem
default: crowdstrike_next_gen_siem
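Combining the fields above, a `crowdstrike_next_gen_siem` destination with gzip compression might look like the illustrative fragment below. The `CROWDSTRIKE_HEC_URL` and `CROWDSTRIKE_API_TOKEN` secret names and the compression level are placeholder choices for this sketch.

```json
{
  "id": "crowdstrike-destination",
  "type": "crowdstrike_next_gen_siem",
  "inputs": ["my-processor-group"],
  "encoding": "raw_message",
  "compression": {
    "algorithm": "gzip",
    "level": 6
  },
  "endpoint_url_key": "CROWDSTRIKE_HEC_URL",
  "token_key": "CROWDSTRIKE_API_TOKEN"
}
```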
Option 8
object
The datadog_logs destination forwards logs to Datadog Log Management.
Supported pipeline types: logs
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
routes
[object]
A list of routing rules that forward matching logs to Datadog using dedicated API keys.
api_key_key
string
Name of the environment variable or secret that stores the Datadog API key used by this route.
include
string
A Datadog search query that determines which logs are forwarded using this route.
route_id
string
Unique identifier for this route within the destination.
site
string
Datadog site where matching logs are sent (for example, us1).
type [required]
enum
The destination type. The value should always be datadog_logs.
Allowed enum values: datadog_logs
default: datadog_logs
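The `routes` field is the distinctive part of this destination: each rule pairs a search query with its own API key and site. An illustrative sketch (the route ID, query, and `EU_ORG_API_KEY` secret name are placeholders):

```json
{
  "id": "datadog-logs-destination",
  "type": "datadog_logs",
  "inputs": ["my-processor-group"],
  "routes": [
    {
      "route_id": "errors-to-eu",
      "include": "status:error",
      "site": "datadoghq.eu",
      "api_key_key": "EU_ORG_API_KEY"
    }
  ]
}
```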
Option 9
object
The elasticsearch destination writes logs to an Elasticsearch cluster.
Supported pipeline types: logs
api_version
enum
The Elasticsearch API version to use. Set to auto to auto-detect.
Allowed enum values: auto,v6,v7,v8
auth
object
Authentication settings for the Elasticsearch destination.
When strategy is basic, use username_key and password_key to reference credentials stored in environment variables or secrets.
password_key
string
Name of the environment variable or secret that holds the Elasticsearch password (used when strategy is basic).
strategy [required]
enum
The authentication strategy to use.
Allowed enum values: basic,aws
username_key
string
Name of the environment variable or secret that holds the Elasticsearch username (used when strategy is basic).
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
bulk_index
string
The index to write logs to in Elasticsearch.
data_stream
object
Configuration options for writing to Elasticsearch Data Streams instead of a fixed index.
dataset
string
The data stream dataset for your logs. This groups logs by their source or application.
dtype
string
The data stream type for your logs. This determines how logs are categorized within the data stream.
namespace
string
The data stream namespace for your logs. This separates logs into different environments or domains.
endpoint_url_key
string
Name of the environment variable or secret that holds the Elasticsearch endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The destination type. The value should always be elasticsearch.
Allowed enum values: elasticsearch
default: elasticsearch
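Putting the fields above together, an elasticsearch destination entry might look like the following sketch. All IDs, index names, and secret names are hypothetical placeholders; the buffer field shows the disk-buffer variant (Option 1) of the buffer oneOf.

```python
# Hypothetical elasticsearch destination assembled from the fields above.
# IDs, index names, and secret names are placeholders, not real values.
elasticsearch_destination = {
    "id": "es-logs-out",                # unique component ID
    "type": "elasticsearch",            # always "elasticsearch"
    "inputs": ["parse-logs"],           # upstream component IDs
    "bulk_index": "app-logs",           # fixed index (omit when using data_stream)
    "endpoint_url_key": "ES_ENDPOINT",  # env var / secret holding the endpoint URL
    "buffer": {                         # buffer oneOf, Option 1: disk buffer
        "type": "disk",
        "max_size": 1_073_741_824,
        "when_full": "block",           # block or drop_newest
    },
}

# id, inputs, and type are the fields marked [required] for this destination.
for field in ("id", "inputs", "type"):
    assert field in elasticsearch_destination
assert elasticsearch_destination["buffer"]["when_full"] in ("block", "drop_newest")
```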
Option 10
object
The google_chronicle destination sends logs to Google Chronicle.
Supported pipeline types: logs
auth
object
Google Cloud credentials used to authenticate with Google Chronicle.
credentials_file [required]
string
Path to the Google Cloud service account key file.
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
customer_id [required]
string
The Google Chronicle customer ID.
encoding
enum
The encoding format for the logs sent to Chronicle.
Allowed enum values: json,raw_message
endpoint_url_key
string
Name of the environment variable or secret that holds the Google Chronicle endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
log_type
string
The log type metadata associated with the Chronicle destination.
type [required]
enum
The destination type. The value should always be google_chronicle.
Allowed enum values: google_chronicle
default: google_chronicle
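A google_chronicle destination sketch under the same schema; the customer ID, log type, and key-file path are hypothetical.

```python
# Hypothetical google_chronicle destination; all values are placeholders.
chronicle_destination = {
    "id": "chronicle-out",
    "type": "google_chronicle",
    "inputs": ["filter-security-logs"],
    "customer_id": "00000000-0000-0000-0000-000000000000",  # required
    "encoding": "json",                                     # json or raw_message
    "log_type": "WINDOWS_DNS",                              # hypothetical Chronicle log type
    "auth": {"credentials_file": "/etc/gcp/key.json"},      # service account key file
}

assert chronicle_destination["encoding"] in ("json", "raw_message")
```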
Option 11
object
The google_cloud_storage destination stores logs in a Google Cloud Storage (GCS) bucket.
It requires a bucket name, Google Cloud authentication, and metadata fields.
Supported pipeline types: logs
acl
enum
Access control list setting for objects written to the bucket.
Allowed enum values: private,project-private,public-read,authenticated-read,bucket-owner-read,bucket-owner-full-control
auth
object
Google Cloud credentials used to authenticate with Google Cloud Storage.
credentials_file [required]
string
Path to the Google Cloud service account key file.
bucket [required]
string
Name of the GCS bucket.
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
id [required]
string
Unique identifier for the destination component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
key_prefix
string
Optional prefix for object keys within the GCS bucket.
metadata
[object]
Custom metadata to attach to each object uploaded to the GCS bucket.
name [required]
string
The metadata key.
value [required]
string
The metadata value.
storage_class [required]
enum
Storage class used for objects stored in GCS.
Allowed enum values: STANDARD,NEARLINE,COLDLINE,ARCHIVE
type [required]
enum
The destination type. Always google_cloud_storage.
Allowed enum values: google_cloud_storage
default: google_cloud_storage
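For google_cloud_storage, the notable points are the required storage_class enum and the metadata list of name/value objects. A sketch with placeholder values:

```python
# Hypothetical google_cloud_storage destination; bucket and paths are placeholders.
gcs_destination = {
    "id": "gcs-archive",
    "type": "google_cloud_storage",
    "inputs": ["sample-logs"],
    "bucket": "example-log-archive",     # required
    "storage_class": "NEARLINE",         # required: STANDARD|NEARLINE|COLDLINE|ARCHIVE
    "acl": "project-private",
    "key_prefix": "pipeline/",           # optional object key prefix
    "auth": {"credentials_file": "/etc/gcp/key.json"},
    "metadata": [                        # each entry needs both name and value
        {"name": "team", "value": "platform"},
    ],
}

assert gcs_destination["storage_class"] in ("STANDARD", "NEARLINE", "COLDLINE", "ARCHIVE")
assert all({"name", "value"} <= m.keys() for m in gcs_destination["metadata"])
```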
Option 12
object
The google_pubsub destination publishes logs to a Google Cloud Pub/Sub topic.
Supported pipeline types: logs
auth
object
Google Cloud credentials used to authenticate with Google Cloud Pub/Sub.
credentials_file [required]
string
Path to the Google Cloud service account key file.
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
encoding [required]
enum
Encoding format for log events.
Allowed enum values: json,raw_message
endpoint_url_key
string
Name of the environment variable or secret that holds the Google Cloud Pub/Sub endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
project [required]
string
The Google Cloud project ID that owns the Pub/Sub topic.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
topic [required]
string
The Pub/Sub topic name to publish logs to.
type [required]
enum
The destination type. The value should always be google_pubsub.
Allowed enum values: google_pubsub
default: google_pubsub
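A google_pubsub destination sketch; project and topic names are hypothetical. Note that project, topic, and encoding are all required alongside the common id/inputs/type trio.

```python
# Hypothetical google_pubsub destination; all values are placeholders.
pubsub_destination = {
    "id": "pubsub-out",
    "type": "google_pubsub",
    "inputs": ["parse-logs"],
    "project": "example-project",   # required: Google Cloud project ID
    "topic": "pipeline-logs",       # required: Pub/Sub topic name
    "encoding": "json",             # required: json or raw_message
    "auth": {"credentials_file": "/etc/gcp/key.json"},
}

for field in ("project", "topic", "encoding", "id", "inputs", "type"):
    assert field in pubsub_destination
```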
Option 13
object
The kafka destination sends logs to Apache Kafka topics.
Supported pipeline types: logs
bootstrap_servers_key
string
Name of the environment variable or secret that holds the Kafka bootstrap servers list.
compression
enum
Compression codec for Kafka messages.
Allowed enum values: none,gzip,snappy,lz4,zstd
encoding [required]
enum
Encoding format for log events.
Allowed enum values: json,raw_message
headers_key
string
The field name to use for Kafka message headers.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
key_field
string
The field name to use as the Kafka message key.
librdkafka_options
[object]
Optional list of advanced Kafka producer configuration options, defined as key-value pairs.
name [required]
string
The name of the librdkafka configuration option to set.
value [required]
string
The value assigned to the specified librdkafka configuration option.
message_timeout_ms
int64
Maximum time in milliseconds to wait for message delivery confirmation.
rate_limit_duration_secs
int64
Duration in seconds for the rate limit window.
rate_limit_num
int64
Maximum number of messages allowed per rate limit duration.
sasl
object
Specifies the SASL mechanism for authenticating with a Kafka cluster.
mechanism
enum
SASL mechanism used for Kafka authentication.
Allowed enum values: PLAIN,SCRAM-SHA-256,SCRAM-SHA-512
password_key
string
Name of the environment variable or secret that holds the SASL password.
username_key
string
Name of the environment variable or secret that holds the SASL username.
socket_timeout_ms
int64
Socket timeout in milliseconds for network requests.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
topic [required]
string
The Kafka topic name to publish logs to.
type [required]
enum
The destination type. The value should always be kafka.
Allowed enum values: kafka
default: kafka
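A kafka destination sketch illustrating the `_key` convention (secret names, not literal credentials) and the librdkafka_options name/value list. Topic, secret names, and the sampled librdkafka option value are hypothetical.

```python
# Hypothetical kafka destination; topic and secret names are placeholders.
kafka_destination = {
    "id": "kafka-out",
    "type": "kafka",
    "inputs": ["parse-logs"],
    "topic": "observability-logs",             # required
    "encoding": "json",                        # required: json or raw_message
    "compression": "zstd",                     # none|gzip|snappy|lz4|zstd
    "bootstrap_servers_key": "KAFKA_BROKERS",  # secret holding the broker list
    "key_field": "service",                    # event field used as the message key
    "message_timeout_ms": 30_000,
    "sasl": {
        "mechanism": "SCRAM-SHA-512",
        "username_key": "KAFKA_USER",          # secret names, not values
        "password_key": "KAFKA_PASS",
    },
    "librdkafka_options": [                    # advanced producer options as name/value pairs
        {"name": "queue.buffering.max.messages", "value": "100000"},
    ],
}

assert kafka_destination["sasl"]["mechanism"] in ("PLAIN", "SCRAM-SHA-256", "SCRAM-SHA-512")
```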
Option 14
object
The microsoft_sentinel destination forwards logs to Microsoft Sentinel.
Supported pipeline types: logs
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
client_id [required]
string
Azure AD client ID used for authentication.
client_secret_key
string
Name of the environment variable or secret that holds the Azure AD client secret.
dce_uri_key
string
Name of the environment variable or secret that holds the Data Collection Endpoint (DCE) URI.
dcr_immutable_id [required]
string
The immutable ID of the Data Collection Rule (DCR).
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
table [required]
string
The name of the Log Analytics table where logs are sent.
tenant_id [required]
string
Azure AD tenant ID.
type [required]
enum
The destination type. The value should always be microsoft_sentinel.
Allowed enum values: microsoft_sentinel
default: microsoft_sentinel
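The microsoft_sentinel destination has an unusually long list of required fields. A sketch with placeholder Azure identifiers:

```python
# Hypothetical microsoft_sentinel destination; all IDs are placeholders.
sentinel_destination = {
    "id": "sentinel-out",
    "type": "microsoft_sentinel",
    "inputs": ["filter-security"],
    "client_id": "11111111-2222-3333-4444-555555555555",  # Azure AD app (placeholder)
    "tenant_id": "66666666-7777-8888-9999-000000000000",  # placeholder
    "dcr_immutable_id": "dcr-0000000000000000",           # placeholder DCR immutable ID
    "table": "Custom_AppLogs_CL",                         # Log Analytics table (placeholder)
    "client_secret_key": "AZURE_CLIENT_SECRET",           # secret name, not the secret
    "dce_uri_key": "AZURE_DCE_URI",                       # secret name holding the DCE URI
}

# Every [required] field from the schema above must be present.
for field in ("id", "inputs", "type", "client_id", "tenant_id", "dcr_immutable_id", "table"):
    assert field in sentinel_destination
```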
Option 15
object
The new_relic destination sends logs to the New Relic platform.
Supported pipeline types: logs
account_id_key
string
Name of the environment variable or secret that holds the New Relic account ID.
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
license_key_key
string
Name of the environment variable or secret that holds the New Relic license key.
region [required]
enum
The New Relic region.
Allowed enum values: us,eu
type [required]
enum
The destination type. The value should always be new_relic.
Allowed enum values: new_relic
default: new_relic
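A minimal new_relic destination sketch; the secret names are hypothetical.

```python
# Hypothetical new_relic destination; secret names are placeholders.
new_relic_destination = {
    "id": "nr-out",
    "type": "new_relic",
    "inputs": ["sample-logs"],
    "region": "eu",                     # required: us or eu
    "account_id_key": "NR_ACCOUNT_ID",  # secret names, not literal credentials
    "license_key_key": "NR_LICENSE_KEY",
}

assert new_relic_destination["region"] in ("us", "eu")
```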
Option 16
object
The opensearch destination writes logs to an OpenSearch cluster.
Supported pipeline types: logs
auth
object
Authentication settings for the OpenSearch destination.
When strategy is basic, use username_key and password_key to reference credentials stored in environment variables or secrets.
password_key
string
Name of the environment variable or secret that holds the OpenSearch password (used when strategy is basic).
strategy [required]
enum
The authentication strategy to use.
Allowed enum values: basic,aws
username_key
string
Name of the environment variable or secret that holds the OpenSearch username (used when strategy is basic).
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
bulk_index
string
The index to write logs to.
data_stream
object
Configuration options for writing to OpenSearch Data Streams instead of a fixed index.
dataset
string
The data stream dataset for your logs. This groups logs by their source or application.
dtype
string
The data stream type for your logs. This determines how logs are categorized within the data stream.
namespace
string
The data stream namespace for your logs. This separates logs into different environments or domains.
endpoint_url_key
string
Name of the environment variable or secret that holds the OpenSearch endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The destination type. The value should always be opensearch.
Allowed enum values: opensearch
default: opensearch
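An opensearch destination sketch showing data-stream routing instead of a fixed bulk_index; dataset and namespace values are hypothetical.

```python
# Hypothetical opensearch destination; endpoint secret and stream names are placeholders.
opensearch_destination = {
    "id": "os-out",
    "type": "opensearch",
    "inputs": ["parse-logs"],
    "endpoint_url_key": "OS_ENDPOINT",  # secret holding the cluster endpoint URL
    "auth": {
        "strategy": "aws",              # basic or aws; with basic you would
                                        # set username_key / password_key instead
    },
    # data_stream replaces a fixed bulk_index with type/dataset/namespace routing.
    "data_stream": {
        "dtype": "logs",
        "dataset": "webapp",
        "namespace": "production",
    },
}

assert opensearch_destination["auth"]["strategy"] in ("basic", "aws")
```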
Option 17
object
The rsyslog destination forwards logs to an external rsyslog server over TCP or UDP using the syslog protocol.
Supported pipeline types: logs
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
endpoint_url_key
string
Name of the environment variable or secret that holds the syslog server endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
keepalive
int64
Optional socket keepalive duration in milliseconds.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The destination type. The value should always be rsyslog.
Allowed enum values: rsyslog
default: rsyslog
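An rsyslog destination sketch with TLS enabled; note that crt_file is the only required field inside the tls object. File paths and the endpoint secret name are hypothetical.

```python
# Hypothetical rsyslog destination; endpoint secret and cert paths are placeholders.
rsyslog_destination = {
    "id": "rsyslog-out",
    "type": "rsyslog",
    "inputs": ["format-syslog"],
    "endpoint_url_key": "RSYSLOG_ENDPOINT",      # secret holding the server endpoint URL
    "keepalive": 60_000,                         # socket keepalive, in milliseconds
    "tls": {
        "crt_file": "/etc/pipeline/client.crt",  # required within tls
        "key_file": "/etc/pipeline/client.key",  # for mutual TLS
        "ca_file": "/etc/pipeline/ca.crt",       # validates the server certificate
    },
}

assert "crt_file" in rsyslog_destination["tls"]
```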
Option 18
object
The sentinel_one destination sends logs to SentinelOne.
Supported pipeline types: logs
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
region [required]
enum
The SentinelOne region to send logs to.
Allowed enum values: us,eu,ca,data_set_us
token_key
string
Name of the environment variable or secret that holds the SentinelOne API token.
type [required]
enum
The destination type. The value should always be sentinel_one.
Allowed enum values: sentinel_one
default: sentinel_one
Option 19
object
The socket destination sends logs over TCP or UDP to a remote server.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the socket address (host:port).
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
encoding [required]
enum
Encoding format for log events.
Allowed enum values: json,raw_message
framing [required]
<oneOf>
Framing method configuration.
Option 1
object
Each log event is delimited by a newline character.
method [required]
enum
The framing method. The value should always be newline_delimited.
Allowed enum values: newline_delimited
Option 2
object
Event data is not delimited at all.
method [required]
enum
The framing method. The value should always be bytes.
Allowed enum values: bytes
Option 3
object
Each log event is separated using the specified delimiter character.
delimiter [required]
string
A single ASCII character used as a delimiter.
method [required]
enum
The framing method. The value should always be character_delimited.
Allowed enum values: character_delimited
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
mode [required]
enum
Protocol used to send logs.
Allowed enum values: tcp,udp
tls
object
TLS configuration. Relevant only when mode is tcp.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The destination type. The value should always be socket.
Allowed enum values: socket
default: socket
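A socket destination sketch exercising the framing oneOf (Option 3, character-delimited) and TCP-only TLS; the address secret name, delimiter, and certificate path are hypothetical.

```python
# Hypothetical socket destination; secret names and paths are placeholders.
socket_destination = {
    "id": "socket-out",
    "type": "socket",
    "inputs": ["redact-pii"],
    "mode": "tcp",                        # tcp or udp
    "address_key": "SYSLOG_ADDR",         # secret holding host:port
    "encoding": "raw_message",            # json or raw_message
    # framing is a oneOf: newline_delimited, bytes, or character_delimited.
    "framing": {
        "method": "character_delimited",  # Option 3
        "delimiter": "|",                 # a single ASCII character
    },
    # tls is only relevant when mode is tcp.
    "tls": {"crt_file": "/etc/pipeline/client.crt"},
}

assert socket_destination["mode"] in ("tcp", "udp")
assert len(socket_destination["framing"]["delimiter"]) == 1
```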
Option 20
object
The splunk_hec destination forwards logs to Splunk using the HTTP Event Collector (HEC).
Supported pipeline types: logs
auto_extract_timestamp
boolean
If true, Splunk tries to extract timestamps from incoming log events.
If false, Splunk assigns the time the event was received.
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
encoding
enum
Encoding format for log events.
Allowed enum values: json,raw_message
endpoint_url_key
string
Name of the environment variable or secret that holds the Splunk HEC endpoint URL.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
index
string
Optional name of the Splunk index where logs are written.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
sourcetype
string
The Splunk sourcetype to assign to log events.
token_key
string
Name of the environment variable or secret that holds the Splunk HEC token.
type [required]
enum
The destination type. Always splunk_hec.
Allowed enum values: splunk_hec
default: splunk_hec
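A splunk_hec destination sketch; index, sourcetype, and secret names are hypothetical.

```python
# Hypothetical splunk_hec destination; secret names and index are placeholders.
splunk_destination = {
    "id": "splunk-hec-out",
    "type": "splunk_hec",
    "inputs": ["enrich-logs"],
    "endpoint_url_key": "SPLUNK_HEC_URL",  # secret holding the HEC endpoint URL
    "token_key": "SPLUNK_HEC_TOKEN",       # secret holding the HEC token
    "index": "main",                       # optional Splunk index
    "sourcetype": "_json",                 # optional sourcetype for log events
    "encoding": "json",
    # True: Splunk extracts timestamps from events; False: uses receipt time.
    "auto_extract_timestamp": True,
}

assert isinstance(splunk_destination["auto_extract_timestamp"], bool)
```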
Option 21
object
The sumo_logic destination forwards logs to Sumo Logic.
Supported pipeline types: logs
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
encoding
enum
The output encoding format.
Allowed enum values: json,raw_message,logfmt
endpoint_url_key
string
Name of the environment variable or secret that holds the Sumo Logic HTTP endpoint URL.
header_custom_fields
[object]
A list of custom headers to include in the request to Sumo Logic.
name [required]
string
The header field name.
value [required]
string
The header field value.
header_host_name
string
Optional override for the host name header.
header_source_category
string
Optional override for the source category header.
header_source_name
string
Optional override for the source name header.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The destination type. The value should always be sumo_logic.
Allowed enum values: sumo_logic
default: sumo_logic
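A sumo_logic destination sketch showing the header overrides and the logfmt encoding option; header values and the endpoint secret name are hypothetical.

```python
# Hypothetical sumo_logic destination; header values are placeholders.
sumo_destination = {
    "id": "sumo-out",
    "type": "sumo_logic",
    "inputs": ["parse-logs"],
    "endpoint_url_key": "SUMO_HTTP_URL",      # secret holding the HTTP source URL
    "encoding": "logfmt",                     # json, raw_message, or logfmt
    "header_host_name": "pipeline-worker-1",  # optional host name override
    "header_source_category": "prod/app",     # optional source category override
    "header_custom_fields": [                 # each entry needs both name and value
        {"name": "X-Team", "value": "platform"},
    ],
}

assert sumo_destination["encoding"] in ("json", "raw_message", "logfmt")
```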
Option 22
object
The syslog_ng destination forwards logs to an external syslog-ng server over TCP or UDP using the syslog protocol.
Supported pipeline types: logs
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
endpoint_url_key
string
Name of the environment variable or secret that holds the syslog-ng server endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
keepalive
int64
Optional socket keepalive duration in milliseconds.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The destination type. The value should always be syslog_ng.
Allowed enum values: syslog_ng
default: syslog_ng
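Putting the syslog_ng destination fields together, a configuration could look like the following sketch. The component IDs, secret name, and file paths are hypothetical; only the field names are taken from the schema above:

```python
# Hypothetical syslog_ng destination configuration.
# "SYSLOG_ENDPOINT_URL" names an environment variable/secret; it is not the URL itself.
syslog_destination = {
    "id": "syslog-out",                        # unique component ID (hypothetical)
    "type": "syslog_ng",
    "inputs": ["parse-logs"],                  # upstream component IDs (hypothetical)
    "endpoint_url_key": "SYSLOG_ENDPOINT_URL",
    "keepalive": 60_000,                       # socket keepalive in milliseconds
    "tls": {
        "ca_file": "/etc/certs/ca.crt",
        "crt_file": "/etc/certs/client.crt",   # crt_file is required when tls is set
        "key_file": "/etc/certs/client.key",
    },
}
```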
Option 23
object
The datadog_metrics destination forwards metrics to Datadog.
Supported pipeline types: metrics
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The destination type. The value should always be datadog_metrics.
Allowed enum values: datadog_metrics
default: datadog_metrics
pipeline_type
enum
The type of data being ingested. Defaults to logs if not specified.
Allowed enum values: logs,metrics
default: logs
processor_groups
[object]
A list of processor groups that transform or enrich log data.
display_name
string
The display name for a component.
enabled [required]
boolean
Whether this processor group is enabled.
id [required]
string
The unique identifier for the processor group.
include [required]
string
Conditional expression for when this processor group should execute.
inputs [required]
[string]
A list of IDs for components whose output is used as the input for this processor group.
processors [required]
[ <oneOf>]
Processors applied sequentially within this group. Events flow through each processor in order.
Option 1
object
The filter processor allows conditional processing of logs/metrics based on a Datadog search query. Logs/metrics that match the include query are passed through; others are discarded.
Supported pipeline types: logs, metrics
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs/metrics should pass through the filter. Logs/metrics that match this query continue to downstream components; others are dropped.
type [required]
enum
The processor type. The value should always be filter.
Allowed enum values: filter
default: filter
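As a concrete sketch, a filter processor that keeps only error-level events could be written like this (the ID and the search query are illustrative):

```python
# Illustrative filter processor: only events matching `include` pass through;
# everything else is discarded.
filter_processor = {
    "id": "drop-non-errors",      # hypothetical component ID
    "type": "filter",
    "enabled": True,
    "include": "status:error",    # Datadog search query; non-matching events are dropped
}

# All four fields above are marked [required] in the schema.
required = {"id", "type", "enabled", "include"}
assert required <= filter_processor.keys()
```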
Option 2
object
The add_env_vars processor adds environment variable values to log events.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used to reference this processor in the pipeline.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be add_env_vars.
Allowed enum values: add_env_vars
default: add_env_vars
variables [required]
[object]
A list of environment variable mappings to apply to log fields.
field [required]
string
The target field in the log event.
name [required]
string
The name of the environment variable to read.
Option 3
object
The add_fields processor adds static key-value fields to logs.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[object]
A list of static fields (key-value pairs) that are added to each log event processed by this component.
name [required]
string
The field name.
value [required]
string
The field value.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be add_fields.
Allowed enum values: add_fields
default: add_fields
Option 4
object
The add_hostname processor adds the hostname to log events.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be add_hostname.
Allowed enum values: add_hostname
default: add_hostname
Option 5
object
The custom_processor processor transforms events using Vector Remap Language (VRL) scripts with advanced filtering capabilities.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets. This field should always be set to * for the custom_processor processor.
default: *
remaps [required]
[object]
Array of VRL remap rules.
drop_on_error [required]
boolean
Whether to drop events that caused errors during processing.
enabled
boolean
Whether this remap rule is enabled.
include [required]
string
A Datadog search query used to filter events for this specific remap rule.
name [required]
string
A descriptive name for this remap rule.
source [required]
string
The VRL script source code that defines the processing logic.
type [required]
enum
The processor type. The value should always be custom_processor.
Allowed enum values: custom_processor
default: custom_processor
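A custom_processor sketch with a single VRL remap rule might look as follows. Field names come from the schema; the IDs, queries, and the VRL snippet are illustrative:

```python
# Illustrative custom_processor with one VRL remap rule.
custom_processor = {
    "id": "normalize-level",      # hypothetical ID
    "type": "custom_processor",
    "enabled": True,
    "include": "*",               # must always be "*" for this processor
    "remaps": [
        {
            "name": "uppercase level",
            "enabled": True,
            "include": "service:web",   # per-rule event filter
            "drop_on_error": False,     # keep events even if the script errors
            # Example VRL: coerce .level to a string and uppercase it.
            "source": '.level = upcase!(string!(.level))',
        }
    ],
}
```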
Option 6
object
The datadog_tags processor includes or excludes specific Datadog tags in your logs.
Supported pipeline types: logs
action [required]
enum
The action to take on tags with matching keys.
Allowed enum values: include,exclude
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
keys [required]
[string]
A list of tag keys.
mode [required]
enum
The processing mode.
Allowed enum values: filter
type [required]
enum
The processor type. The value should always be datadog_tags.
Allowed enum values: datadog_tags
default: datadog_tags
Option 7
object
The dedupe processor removes duplicate fields in log events.
Supported pipeline types: logs
cache
object
Configuration for the cache used to detect duplicates.
num_events [required]
int64
The number of events to cache for duplicate detection.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[string]
A list of log field paths to check for duplicates.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
mode [required]
enum
The deduplication mode to apply to the fields.
Allowed enum values: match,ignore
type [required]
enum
The processor type. The value should always be dedupe.
Allowed enum values: dedupe
default: dedupe
Option 8
object
The enrichment_table processor enriches logs using a static CSV file, GeoIP database, or reference table. Exactly one of file, geoip, or reference_table must be configured.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
file
object
Defines a static enrichment table loaded from a CSV file.
encoding [required]
object
File encoding format.
delimiter [required]
string
The delimiter character used to separate values in the file.
includes_headers [required]
boolean
Whether the first line of the file contains column headers.
type [required]
enum
Specifies the encoding format (e.g., CSV) used for enrichment tables.
Allowed enum values: csv
key [required]
[object]
Key fields used to look up enrichment values.
column [required]
string
The name of the column in the enrichment table used for the lookup.
comparison [required]
enum
Defines how to compare key fields for enrichment table lookups.
Allowed enum values: equals
field [required]
string
The name of the log field whose value is matched against the column.
path [required]
string
Path to the CSV file.
schema [required]
[object]
Schema defining column names and their types.
column [required]
string
The name of the column.
type [required]
enum
Declares allowed data types for enrichment table columns.
Allowed enum values: string,boolean,integer,float,date,timestamp
geoip
object
Uses a GeoIP database to enrich logs based on an IP field.
key_field [required]
string
Path to the IP field in the log.
locale [required]
string
Locale used to resolve geographical names.
path [required]
string
Path to the GeoIP database file.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
reference_table
object
Uses a Datadog reference table to enrich logs.
app_key_key
string
Name of the environment variable or secret that holds the Datadog application key used to access the reference table.
columns
[string]
List of column names to include from the reference table. If not provided, all columns are included.
key_field [required]
string
Path to the field in the log event to match against the reference table.
table_id [required]
string
The unique identifier of the reference table.
target [required]
string
Path where enrichment results should be stored in the log.
type [required]
enum
The processor type. The value should always be enrichment_table.
Allowed enum values: enrichment_table
default: enrichment_table
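Since exactly one of file, geoip, or reference_table must be set, an enrichment_table processor using the CSV file variant could be sketched like this (paths, IDs, and column names are hypothetical):

```python
# Illustrative enrichment_table processor using the static CSV file variant.
enrichment = {
    "id": "enrich-by-host",            # hypothetical ID
    "type": "enrichment_table",
    "enabled": True,
    "include": "*",
    "target": "attributes.host_info",  # where lookup results are written in the log
    "file": {
        "path": "/etc/tables/hosts.csv",
        "encoding": {"type": "csv", "delimiter": ",", "includes_headers": True},
        # Match the log's `hostname` field against the CSV's `host` column.
        "key": [{"field": "hostname", "column": "host", "comparison": "equals"}],
        "schema": [
            {"column": "host", "type": "string"},
            {"column": "datacenter", "type": "string"},
        ],
    },
}

# Exactly one of the three variants may be configured:
variants = [k for k in ("file", "geoip", "reference_table") if k in enrichment]
assert variants == ["file"]
```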
Option 9
object
The generate_datadog_metrics processor creates custom metrics from logs and sends them to Datadog.
Metrics can be counters, gauges, or distributions, and can optionally be grouped by log fields.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used to reference this component in other parts of the pipeline.
include
string
A Datadog search query used to determine which logs this processor targets.
metrics
[object]
Configuration for generating individual metrics.
group_by
[string]
Optional fields used to group the metric series.
include [required]
string
Datadog filter query to match logs for metric generation.
metric_type [required]
enum
Type of metric to create.
Allowed enum values: count,gauge,distribution
name [required]
string
Name of the custom metric to be created.
value [required]
<oneOf>
Specifies how the value of the generated metric is computed.
Option 1
object
Strategy that increments a generated metric by one for each matching event.
strategy [required]
enum
Increments the metric by 1 for each matching event.
Allowed enum values: increment_by_one
Option 2
object
Strategy that increments a generated metric based on the value of a log field.
field [required]
string
Name of the log field containing the numeric value to increment the metric by.
strategy [required]
enum
Uses a numeric field in the log event as the metric increment.
Allowed enum values: increment_by_field
type [required]
enum
The processor type. Always generate_datadog_metrics.
Allowed enum values: generate_datadog_metrics
default: generate_datadog_metrics
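Combining the fields above, a generate_datadog_metrics sketch that counts 5xx responses and tracks a duration distribution might look like this. It shows both value strategies; the metric names, queries, and field names are illustrative:

```python
# Illustrative generate_datadog_metrics processor using both value strategies.
metrics_processor = {
    "id": "logs-to-metrics",          # hypothetical ID
    "type": "generate_datadog_metrics",
    "enabled": True,
    "metrics": [
        {
            "name": "web.errors_5xx",
            "metric_type": "count",
            "include": "status_code:5*",
            "group_by": ["service"],                    # one series per service
            "value": {"strategy": "increment_by_one"},  # +1 per matching event
        },
        {
            "name": "web.request_duration",
            "metric_type": "distribution",
            "include": "service:web",
            # Use a numeric log field as the increment instead of a fixed 1.
            "value": {"strategy": "increment_by_field", "field": "duration_ms"},
        },
    ],
}
```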
Option 10
object
The ocsf_mapper processor transforms logs into the OCSF schema using a predefined mapping configuration.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used to reference this component in other parts of the pipeline.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
mappings [required]
[object]
A list of mapping rules to convert events to the OCSF format.
include [required]
string
A Datadog search query used to select the logs that this mapping should apply to.
mapping [required]
<oneOf>
Defines a single mapping rule for transforming logs into the OCSF schema.
Custom OCSF mapping configuration for transforming logs.
mapping [required]
[object]
A list of field mapping rules for transforming log fields to OCSF schema fields.
default
The default value to use if the source field is missing or empty.
dest [required]
string
The destination OCSF field path.
lookup
object
Lookup table configuration for mapping source values to destination values.
source
The source field path from the log event.
sources
Multiple source field paths for combined mapping.
value
A static value to use for the destination field.
metadata [required]
object
Metadata for the custom OCSF mapping.
class [required]
string
The OCSF event class name.
profiles
[string]
A list of OCSF profiles to apply.
version [required]
string
The OCSF schema version.
version [required]
int64
The version of the custom mapping configuration.
type [required]
enum
The processor type. The value should always be ocsf_mapper.
Allowed enum values: ocsf_mapper
default: ocsf_mapper
Option 11
object
The parse_grok processor extracts structured fields from unstructured log messages using Grok patterns.
Supported pipeline types: logs
disable_library_rules
boolean
If set to true, disables the default Grok rules provided by Datadog.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
A unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
rules [required]
[object]
The list of Grok parsing rules. If multiple matching rules are provided, they are evaluated in order. The first successful match is applied.
match_rules [required]
[object]
A list of Grok parsing rules that define how to extract fields from the source field.
Each rule must contain a name and a valid Grok pattern.
name [required]
string
The name of the rule.
rule [required]
string
The definition of the Grok rule.
source [required]
string
The name of the field in the log event to apply the Grok rules to.
support_rules
[object]
A list of Grok helper rules that can be referenced by the parsing rules.
name [required]
string
The name of the Grok helper rule.
rule [required]
string
The definition of the Grok helper rule.
type [required]
enum
The processor type. The value should always be parse_grok.
Allowed enum values: parse_grok
default: parse_grok
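A parse_grok sketch with one helper rule referenced by one match rule could look like this. The rule names and patterns are illustrative examples, not predefined Datadog rules:

```python
# Illustrative parse_grok processor: one support (helper) rule, one match rule.
parse_grok = {
    "id": "parse-access-log",        # hypothetical ID
    "type": "parse_grok",
    "enabled": True,
    "include": "source:nginx",
    "disable_library_rules": False,  # keep Datadog's default grok rules available
    "rules": [
        {
            "source": "message",     # log field the grok rules are applied to
            "support_rules": [
                # Helper rule that match rules can reference by name.
                {"name": "ip", "rule": r"%{ipv4:network.client.ip}"},
            ],
            "match_rules": [
                # When several match rules are listed, the first successful match wins.
                {"name": "access", "rule": r"%{ip} %{notSpace:http.method} %{notSpace:http.url}"},
            ],
        }
    ],
}
```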
Option 12
object
The parse_json processor extracts JSON from a specified field and flattens it into the event. This is useful when logs contain embedded JSON as a string.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
field [required]
string
The name of the log field that contains a JSON string.
id [required]
string
A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be parse_json.
Allowed enum values: parse_json
default: parse_json
Option 13
object
The parse_xml processor parses XML from a specified field and extracts it into the event.
Supported pipeline types: logs
always_use_text_key
boolean
Whether to always use a text key for element content.
attr_prefix
string
The prefix to use for XML attributes in the parsed output.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
field [required]
string
The name of the log field that contains an XML string.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
include_attr
boolean
Whether to include XML attributes in the parsed output.
parse_bool
boolean
Whether to parse boolean values from strings.
parse_null
boolean
Whether to parse null values.
parse_number
boolean
Whether to parse numeric values from strings.
text_key
string
The key name to use for text content within XML elements. Must be at least 1 character if specified.
type [required]
enum
The processor type. The value should always be parse_xml.
Allowed enum values: parse_xml
default: parse_xml
Option 14
object
The quota processor measures logging traffic for logs that match a specified filter. When the configured daily quota is met, the processor can drop events or trigger an alert.
Supported pipeline types: logs
display_name
string
The display name for a component.
drop_events
boolean
If set to true, logs that match the quota filter and are sent after the quota is exceeded are dropped. Logs that do not match the filter continue through the pipeline. Note: You can set either drop_events or overflow_action, but not both.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
ignore_when_missing_partitions
boolean
If true, the processor skips quota checks when partition fields are missing from the logs.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
limit [required]
object
The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.
enforce [required]
enum
The unit used to enforce the quota: bytes for data size or events for event count.
Allowed enum values: bytes,events
limit [required]
int64
The limit for quota enforcement.
name [required]
string
Name of the quota.
overflow_action
enum
The action to take when the quota or bucket limit is exceeded. Options:
drop: Drop the event.
no_action: Let the event pass through.
overflow_routing: Route to an overflow destination.
Allowed enum values: drop,no_action,overflow_routing
overrides
[object]
A list of alternate quota rules that apply to specific sets of events, identified by matching field values. Each override can define a custom limit.
fields [required]
[object]
A list of field matchers used to apply a specific override. If an event matches all listed key-value pairs, the corresponding override limit is enforced.
name [required]
string
The field name.
value [required]
string
The field value.
limit [required]
object
The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.
enforce [required]
enum
The unit used to enforce the quota: bytes for data size or events for event count.
Allowed enum values: bytes,events
limit [required]
int64
The limit for quota enforcement.
partition_fields
[string]
A list of fields used to segment log traffic for quota enforcement. Quotas are tracked independently by unique combinations of these field values.
too_many_buckets_action
enum
The action to take when the quota or bucket limit is exceeded. Options:
drop: Drop the event.
no_action: Let the event pass through.
overflow_routing: Route to an overflow destination.
Allowed enum values: drop,no_action,overflow_routing
type [required]
enum
The processor type. The value should always be quota.
Allowed enum values: quota
default: quota
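The quota fields above compose into a sketch like the following: a daily byte limit tracked per service, with a higher override for one service. Names and limits are illustrative:

```python
# Illustrative quota processor: per-service byte quota with one override.
quota = {
    "id": "daily-quota",             # hypothetical ID
    "type": "quota",
    "enabled": True,
    "include": "*",
    "name": "daily-ingest",
    "drop_events": True,             # mutually exclusive with overflow_action
    "partition_fields": ["service"], # quotas tracked per unique service value
    "limit": {"enforce": "bytes", "limit": 10_000_000_000},
    "overrides": [
        {
            # Events where service == "payments" get a larger limit.
            "fields": [{"name": "service", "value": "payments"}],
            "limit": {"enforce": "bytes", "limit": 50_000_000_000},
        }
    ],
}

# drop_events and overflow_action must not be set together.
assert not ("overflow_action" in quota and quota.get("drop_events"))
```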
Option 15
object
The reduce processor aggregates and merges logs based on matching keys and merge strategies.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
group_by [required]
[string]
A list of fields used to group log events for merging.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
merge_strategies [required]
[object]
List of merge strategies defining how values from grouped events should be combined.
path [required]
string
The field path in the log event.
strategy [required]
enum
The merge strategy to apply.
Allowed enum values: discard,retain,sum,max,min,array,concat,concat_newline,concat_raw,shortest_array,longest_array,flat_unique
type [required]
enum
The processor type. The value should always be reduce.
Allowed enum values: reduce
default: reduce
Option 16
object
The remove_fields processor deletes specified fields from logs.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[string]
A list of field names to be removed from each log event.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be remove_fields.
Allowed enum values: remove_fields
default: remove_fields
Option 17
object
The rename_fields processor changes field names.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[object]
A list of rename rules specifying which fields to rename in the event, what to rename them to, and whether to preserve the original fields.
destination [required]
string
The field name to assign the renamed value to.
preserve_source [required]
boolean
Indicates whether the original source field should be kept (true) or removed (false) after renaming.
source [required]
string
The original field name in the log event that should be renamed.
id [required]
string
A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be rename_fields.
Allowed enum values: rename_fields
default: rename_fields
Option 18
object
The sample processor allows probabilistic sampling of logs at a fixed rate.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
group_by
[string]
Optional list of fields to group events by. Each group is sampled independently.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
percentage [required]
double
The percentage of logs to sample.
type [required]
enum
The processor type. The value should always be sample.
Allowed enum values: sample
default: sample
Option 19
object
The sensitive_data_scanner processor detects and optionally redacts sensitive data in log events.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
rules [required]
[object]
A list of rules for identifying and acting on sensitive data patterns.
keyword_options
object
Configuration for keywords used to reinforce sensitive data pattern detection.
keywords [required]
[string]
A list of keywords to match near the sensitive pattern.
proximity [required]
int64
Maximum number of tokens between a keyword and a sensitive value match.
name [required]
string
A name identifying the rule.
on_match [required]
<oneOf>
Defines what action to take when sensitive data is matched.
Option 1
object
Configuration for completely redacting matched sensitive data.
action [required]
enum
Action type that completely replaces the matched sensitive data with a fixed replacement string to remove all visibility.
Allowed enum values: redact
options [required]
object
Configuration for fully redacting sensitive data.
replace [required]
string
The string used to replace the matched sensitive data.
Option 2
object
Configuration for hashing matched sensitive values.
action [required]
enum
Action type that replaces the matched sensitive data with a hashed representation, preserving structure while securing content.
Allowed enum values: hash
options
object
Options for the hash action.
Option 3
object
Configuration for partially redacting matched sensitive data.
action [required]
enum
Action type that redacts part of the sensitive data while preserving a configurable number of characters, typically used for masking purposes (e.g., show last 4 digits of a credit card).
Allowed enum values: partial_redact
options [required]
object
Controls how partial redaction is applied, including character count and direction.
characters [required]
int64
The number of characters to preserve when partially redacting the matched value.
direction [required]
enum
Indicates whether to redact characters from the first or last part of the matched value.
Allowed enum values: first,last
pattern [required]
<oneOf>
Pattern detection configuration for identifying sensitive data using either a custom regex or a library reference.
Option 1
object
Defines a custom regex-based pattern for identifying sensitive data in logs.
options [required]
object
Options for defining a custom regex pattern.
description
string
Human-readable description providing context about a sensitive data scanner rule.
rule [required]
string
A regular expression used to detect sensitive values. Must be a valid regex.
type [required]
enum
Indicates a custom regular expression is used for matching.
Allowed enum values: custom
Option 2
object
Specifies a pattern from Datadog’s sensitive data detection library to match known sensitive data types.
options [required]
object
Options for selecting a predefined library pattern and enabling keyword support.
description
string
Human-readable description providing context about a sensitive data scanner rule.
id [required]
string
Identifier for a predefined pattern from the sensitive data scanner pattern library.
use_recommended_keywords
boolean
Whether to augment the pattern with recommended keywords (optional).
type [required]
enum
Indicates that a predefined library pattern is used.
Allowed enum values: library
scope [required]
<oneOf>
Determines which parts of the log the pattern-matching rule should be applied to.
Option 1
object
Includes only specific fields for sensitive data scanning.
options [required]
object
Fields to which the scope rule applies.
fields [required]
[string]
A list of log field paths to which the scope rule applies.
target [required]
enum
Applies the rule only to included fields.
Allowed enum values: include
Option 2
object
Excludes specific fields from sensitive data scanning.
options [required]
object
Fields to which the scope rule applies.
fields [required]
[string]
A list of log field paths to which the scope rule applies.
target [required]
enum
Excludes specific fields from processing.
Allowed enum values: exclude
Option 3
object
Applies scanning across all available fields.
target [required]
enum
Applies the rule to all fields.
Allowed enum values: all
tags [required]
[string]
Tags assigned to this rule for filtering and classification.
type [required]
enum
The processor type. The value should always be sensitive_data_scanner.
Allowed enum values: sensitive_data_scanner
default: sensitive_data_scanner
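Tying the rule, pattern, scope, and action pieces together, a sensitive_data_scanner sketch that partially redacts a custom credit-card-like pattern might look like this. The regex and keywords are simplified examples, not Datadog library definitions:

```python
# Illustrative sensitive_data_scanner rule: custom regex pattern, partial
# redaction, scanning all fields. Regex and keywords are examples only.
sds_processor = {
    "id": "scrub-cards",             # hypothetical ID
    "type": "sensitive_data_scanner",
    "enabled": True,
    "include": "*",
    "rules": [
        {
            "name": "credit card numbers",
            "tags": ["sensitive:card"],
            "pattern": {
                "type": "custom",
                "options": {"rule": r"\b\d{13,16}\b"},  # simplistic example regex
            },
            "scope": {"target": "all"},                  # scan every field
            # Keywords near the match reinforce detection.
            "keyword_options": {"keywords": ["card", "cc"], "proximity": 10},
            "on_match": {
                "action": "partial_redact",
                # Keep the last 4 characters visible; redact the rest.
                "options": {"characters": 4, "direction": "last"},
            },
        }
    ],
}
```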
Option 20
object
The split_array processor splits array fields into separate events based on configured rules.
Supported pipeline types: logs
arrays [required]
[object]
A list of array split configurations.
field [required]
string
The path to the array field to split.
include [required]
string
A Datadog search query used to determine which logs this array split operation targets.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets. For split_array, this should typically be *.
type [required]
enum
The processor type. The value should always be split_array.
Allowed enum values: split_array
default: split_array
Option 21
object
The throttle processor limits the number of events that pass through over a given time window.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
group_by
[string]
Optional list of fields used to group events. The threshold is applied to each group independently.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
threshold [required]
int64
The number of events allowed in a given time window. Events sent after the threshold has been reached are dropped.
type [required]
enum
The processor type. The value should always be throttle.
Allowed enum values: throttle
default: throttle
window [required]
double
The time window in seconds over which the threshold applies.
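For example, the throttle fields combine into a sketch like the following, which allows at most 1000 events per 60-second window, tracked per service. Values and IDs are illustrative:

```python
# Illustrative throttle processor: at most 1000 events per 60s window,
# counted independently for each distinct `service` value.
throttle = {
    "id": "rate-limit",   # hypothetical ID
    "type": "throttle",
    "enabled": True,
    "include": "*",
    "threshold": 1000,    # events allowed per window
    "window": 60.0,       # window length in seconds
    "group_by": ["service"],
}
```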
Option 22
object
The metric_tags processor filters metrics based on their tags using Datadog tag key patterns.
Supported pipeline types: metrics
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query that determines which metrics the processor targets.
rules [required]
[object]
A list of rules for filtering metric tags.
action [required]
enum
The action to take on tags with matching keys.
Allowed enum values: include,exclude
include [required]
string
A Datadog search query used to determine which metrics this rule targets.
keys [required]
[string]
A list of tag keys to include or exclude.
mode [required]
enum
The processing mode for tag filtering.
Allowed enum values: filter
type [required]
enum
The processor type. The value should always be metric_tags.
Allowed enum values: metric_tags
default: metric_tags
processors
[object]
DEPRECATED: A list of processor groups that transform or enrich log data.
Deprecated: Use the processor_groups field instead.
display_name
string
The display name for a component.
enabled [required]
boolean
Whether this processor group is enabled.
id [required]
string
The unique identifier for the processor group.
include [required]
string
Conditional expression for when this processor group should execute.
inputs [required]
[string]
A list of IDs for components whose output is used as the input for this processor group.
processors [required]
[ <oneOf>]
Processors applied sequentially within this group. Events flow through each processor in order.
Option 1
object
The filter processor allows conditional processing of logs/metrics based on a Datadog search query. Logs/metrics that match the include query are passed through; others are discarded.
Supported pipeline types: logs, metrics
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs/metrics should pass through the filter. Logs/metrics that match this query continue to downstream components; others are dropped.
type [required]
enum
The processor type. The value should always be filter.
Allowed enum values: filter
default: filter
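An illustrative configuration sketch for the filter processor (the ID and query values below are hypothetical):

```json
{
  "type": "filter",
  "id": "filter-errors",
  "enabled": true,
  "include": "status:error service:payments"
}
```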
Option 2
object
The add_env_vars processor adds environment variable values to log events.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used to reference this processor in the pipeline.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be add_env_vars.
Allowed enum values: add_env_vars
default: add_env_vars
variables [required]
[object]
A list of environment variable mappings to apply to log fields.
field [required]
string
The target field in the log event.
name [required]
string
The name of the environment variable to read.
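A minimal sketch of an add_env_vars processor, assuming an AWS_REGION environment variable is available to the pipeline (field names are illustrative):

```json
{
  "type": "add_env_vars",
  "id": "add-env",
  "enabled": true,
  "include": "*",
  "variables": [
    { "field": "deployment.region", "name": "AWS_REGION" }
  ]
}
```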
Option 3
object
The add_fields processor adds static key-value fields to logs.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[object]
A list of static key-value fields added to each log event processed by this component.
name [required]
string
The field name.
value [required]
string
The field value.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be add_fields.
Allowed enum values: add_fields
default: add_fields
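An illustrative add_fields configuration (field names and values are hypothetical):

```json
{
  "type": "add_fields",
  "id": "add-static-fields",
  "enabled": true,
  "include": "*",
  "fields": [
    { "name": "team", "value": "platform" },
    { "name": "env", "value": "production" }
  ]
}
```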
Option 4
object
The add_hostname processor adds the hostname to log events.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be add_hostname.
Allowed enum values: add_hostname
default: add_hostname
Option 5
object
The custom_processor processor transforms events using Vector Remap Language (VRL) scripts with advanced filtering capabilities.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets. This field should always be set to * for the custom_processor processor.
default: *
remaps [required]
[object]
Array of VRL remap rules.
drop_on_error [required]
boolean
Whether to drop events that caused errors during processing.
enabled
boolean
Whether this remap rule is enabled.
include [required]
string
A Datadog search query used to filter events for this specific remap rule.
name [required]
string
A descriptive name for this remap rule.
source [required]
string
The VRL script source code that defines the processing logic.
type [required]
enum
The processor type. The value should always be custom_processor.
Allowed enum values: custom_processor
default: custom_processor
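A sketch of a custom_processor with a single VRL remap rule; the rule name, filter, and VRL script are illustrative:

```json
{
  "type": "custom_processor",
  "id": "normalize-severity",
  "enabled": true,
  "include": "*",
  "remaps": [
    {
      "name": "uppercase severity",
      "enabled": true,
      "include": "service:web",
      "drop_on_error": false,
      "source": ".severity = upcase!(.severity)"
    }
  ]
}
```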
Option 6
object
The datadog_tags processor includes or excludes specific Datadog tags in your logs.
Supported pipeline types: logs
action [required]
enum
The action to take on tags with matching keys.
Allowed enum values: include,exclude
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
keys [required]
[string]
A list of tag keys.
mode [required]
enum
The processing mode.
Allowed enum values: filter
type [required]
enum
The processor type. The value should always be datadog_tags.
Allowed enum values: datadog_tags
default: datadog_tags
Option 7
object
The dedupe processor removes duplicate fields in log events.
Supported pipeline types: logs
cache
object
Configuration for the cache used to detect duplicates.
num_events [required]
int64
The number of events to cache for duplicate detection.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[string]
A list of log field paths to check for duplicates.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
mode [required]
enum
The deduplication mode to apply to the fields.
Allowed enum values: match,ignore
type [required]
enum
The processor type. The value should always be dedupe.
Allowed enum values: dedupe
default: dedupe
Option 8
object
The enrichment_table processor enriches logs using a static CSV file, GeoIP database, or reference table. Exactly one of file, geoip, or reference_table must be configured.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
file
object
Defines a static enrichment table loaded from a CSV file.
encoding [required]
object
File encoding format.
delimiter [required]
string
The delimiter used to separate values in the CSV file.
includes_headers [required]
boolean
Whether the CSV file includes a header row.
type [required]
enum
Specifies the encoding format (e.g., CSV) used for enrichment tables.
Allowed enum values: csv
key [required]
[object]
Key fields used to look up enrichment values.
column [required]
string
The column in the enrichment table to match against.
comparison [required]
enum
Defines how to compare key fields for enrichment table lookups.
Allowed enum values: equals
field [required]
string
The field in the log event whose value is used for the lookup.
path [required]
string
Path to the CSV file.
schema [required]
[object]
Schema defining column names and their types.
column [required]
string
The name of the column as it appears in the CSV file.
type [required]
enum
Declares allowed data types for enrichment table columns.
Allowed enum values: string,boolean,integer,float,date,timestamp
geoip
object
Uses a GeoIP database to enrich logs based on an IP field.
key_field [required]
string
Path to the IP field in the log.
locale [required]
string
Locale used to resolve geographical names.
path [required]
string
Path to the GeoIP database file.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
reference_table
object
Uses a Datadog reference table to enrich logs.
app_key_key
string
Name of the environment variable or secret that holds the Datadog application key used to access the reference table.
columns
[string]
List of column names to include from the reference table. If not provided, all columns are included.
key_field [required]
string
Path to the field in the log event to match against the reference table.
table_id [required]
string
The unique identifier of the reference table.
target [required]
string
Path where enrichment results should be stored in the log.
type [required]
enum
The processor type. The value should always be enrichment_table.
Allowed enum values: enrichment_table
default: enrichment_table
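A sketch of an enrichment_table processor using the file option; the CSV path, column names, and key field are hypothetical:

```json
{
  "type": "enrichment_table",
  "id": "enrich-from-csv",
  "enabled": true,
  "include": "*",
  "target": "enriched",
  "file": {
    "path": "/etc/tables/services.csv",
    "encoding": { "type": "csv", "delimiter": ",", "includes_headers": true },
    "key": [
      { "column": "service_name", "field": "service", "comparison": "equals" }
    ],
    "schema": [
      { "column": "service_name", "type": "string" },
      { "column": "owner", "type": "string" }
    ]
  }
}
```

Exactly one of file, geoip, or reference_table may be configured per processor.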
Option 9
object
The generate_datadog_metrics processor creates custom metrics from logs and sends them to Datadog.
Metrics can be counters, gauges, or distributions and optionally grouped by log fields.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used to reference this component in other parts of the pipeline.
include
string
A Datadog search query used to determine which logs this processor targets.
metrics
[object]
Configuration for generating individual metrics.
group_by
[string]
Optional fields used to group the metric series.
include [required]
string
Datadog filter query to match logs for metric generation.
metric_type [required]
enum
Type of metric to create.
Allowed enum values: count,gauge,distribution
name [required]
string
Name of the custom metric to be created.
value [required]
<oneOf>
Specifies how the value of the generated metric is computed.
Option 1
object
Strategy that increments a generated metric by one for each matching event.
strategy [required]
enum
Increments the metric by 1 for each matching event.
Allowed enum values: increment_by_one
Option 2
object
Strategy that increments a generated metric based on the value of a log field.
field [required]
string
Name of the log field containing the numeric value to increment the metric by.
strategy [required]
enum
Uses a numeric field in the log event as the metric increment.
Allowed enum values: increment_by_field
type [required]
enum
The processor type. Always generate_datadog_metrics.
Allowed enum values: generate_datadog_metrics
default: generate_datadog_metrics
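An illustrative generate_datadog_metrics configuration that derives a distribution metric from a numeric log field (metric name, filter query, and field name are hypothetical):

```json
{
  "type": "generate_datadog_metrics",
  "id": "gen-latency-metric",
  "enabled": true,
  "include": "*",
  "metrics": [
    {
      "name": "app.request.duration",
      "include": "service:checkout",
      "metric_type": "distribution",
      "group_by": ["http.status_code"],
      "value": { "strategy": "increment_by_field", "field": "duration_ms" }
    }
  ]
}
```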
Option 10
object
The ocsf_mapper processor transforms logs into the OCSF schema using a predefined mapping configuration.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used to reference this component in other parts of the pipeline.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
mappings [required]
[object]
A list of mapping rules to convert events to the OCSF format.
include [required]
string
A Datadog search query used to select the logs that this mapping should apply to.
mapping [required]
<oneOf>
Defines a single mapping rule for transforming logs into the OCSF schema.
Custom OCSF mapping configuration for transforming logs.
mapping [required]
[object]
A list of field mapping rules for transforming log fields to OCSF schema fields.
default
The default value to use if the source field is missing or empty.
dest [required]
string
The destination OCSF field path.
lookup
object
Lookup table configuration for mapping source values to destination values.
source
The source field path from the log event.
sources
Multiple source field paths for combined mapping.
value
A static value to use for the destination field.
metadata [required]
object
Metadata for the custom OCSF mapping.
class [required]
string
The OCSF event class name.
profiles
[string]
A list of OCSF profiles to apply.
version [required]
string
The OCSF schema version.
version [required]
int64
The version of the custom mapping configuration.
type [required]
enum
The processor type. The value should always be ocsf_mapper.
Allowed enum values: ocsf_mapper
default: ocsf_mapper
Option 11
object
The parse_grok processor extracts structured fields from unstructured log messages using Grok patterns.
Supported pipeline types: logs
disable_library_rules
boolean
If set to true, disables the default Grok rules provided by Datadog.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
A unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
rules [required]
[object]
The list of Grok parsing rules. If multiple matching rules are provided, they are evaluated in order. The first successful match is applied.
match_rules [required]
[object]
A list of Grok parsing rules that define how to extract fields from the source field.
Each rule must contain a name and a valid Grok pattern.
name [required]
string
The name of the rule.
rule [required]
string
The definition of the Grok rule.
source [required]
string
The name of the field in the log event to apply the Grok rules to.
support_rules
[object]
A list of Grok helper rules that can be referenced by the parsing rules.
name [required]
string
The name of the Grok helper rule.
rule [required]
string
The definition of the Grok helper rule.
type [required]
enum
The processor type. The value should always be parse_grok.
Allowed enum values: parse_grok
default: parse_grok
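A sketch of a parse_grok configuration with one match rule backed by a helper rule; the rule names and Grok patterns are illustrative, not tested patterns:

```json
{
  "type": "parse_grok",
  "id": "parse-access-logs",
  "enabled": true,
  "include": "source:nginx",
  "disable_library_rules": false,
  "rules": [
    {
      "source": "message",
      "match_rules": [
        {
          "name": "access_line",
          "rule": "%{ipOrHost:network.client.ip} %{word:http.method} %{notSpace:http.url}"
        }
      ],
      "support_rules": [
        { "name": "ipOrHost", "rule": "(%{ip}|%{hostname})" }
      ]
    }
  ]
}
```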
Option 12
object
The parse_json processor extracts JSON from a specified field and flattens it into the event. This is useful when logs contain embedded JSON as a string.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
field [required]
string
The name of the log field that contains a JSON string.
id [required]
string
A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be parse_json.
Allowed enum values: parse_json
default: parse_json
Option 13
object
The parse_xml processor parses XML from a specified field and extracts it into the event.
Supported pipeline types: logs
always_use_text_key
boolean
Whether to always use a text key for element content.
attr_prefix
string
The prefix to use for XML attributes in the parsed output.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
field [required]
string
The name of the log field that contains an XML string.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
include_attr
boolean
Whether to include XML attributes in the parsed output.
parse_bool
boolean
Whether to parse boolean values from strings.
parse_null
boolean
Whether to parse null values.
parse_number
boolean
Whether to parse numeric values from strings.
text_key
string
The key name to use for text content within XML elements. Must be at least 1 character if specified.
type [required]
enum
The processor type. The value should always be parse_xml.
Allowed enum values: parse_xml
default: parse_xml
Option 14
object
The quota processor measures logging traffic for logs that match a specified filter. When the configured daily quota is met, the processor can drop logs or alert.
Supported pipeline types: logs
display_name
string
The display name for a component.
drop_events
boolean
If set to true, logs that match the quota filter and are sent after the quota is exceeded are dropped. Logs that do not match the filter continue through the pipeline. Note: You can set either drop_events or overflow_action, but not both.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
ignore_when_missing_partitions
boolean
If true, the processor skips quota checks when partition fields are missing from the logs.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
limit [required]
object
The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.
enforce [required]
enum
Unit for quota enforcement in bytes for data size or events for count.
Allowed enum values: bytes,events
limit [required]
int64
The limit for quota enforcement.
name [required]
string
Name of the quota.
overflow_action
enum
The action to take when the quota or bucket limit is exceeded. Options:
drop: Drop the event.
no_action: Let the event pass through.
overflow_routing: Route to an overflow destination.
Allowed enum values: drop,no_action,overflow_routing
overrides
[object]
A list of alternate quota rules that apply to specific sets of events, identified by matching field values. Each override can define a custom limit.
fields [required]
[object]
A list of field matchers used to apply a specific override. If an event matches all listed key-value pairs, the corresponding override limit is enforced.
name [required]
string
The field name.
value [required]
string
The field value.
limit [required]
object
The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.
enforce [required]
enum
Unit for quota enforcement in bytes for data size or events for count.
Allowed enum values: bytes,events
limit [required]
int64
The limit for quota enforcement.
partition_fields
[string]
A list of fields used to segment log traffic for quota enforcement. Quotas are tracked independently by unique combinations of these field values.
too_many_buckets_action
enum
The action to take when the quota or bucket limit is exceeded. Options:
drop: Drop the event.
no_action: Let the event pass through.
overflow_routing: Route to an overflow destination.
Allowed enum values: drop,no_action,overflow_routing
type [required]
enum
The processor type. The value should always be quota.
Allowed enum values: quota
default: quota
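An illustrative quota configuration that partitions by service and defines one override; the quota name, limits, and field values are hypothetical. Note that drop_events and overflow_action are mutually exclusive, so only overflow_action is set here:

```json
{
  "type": "quota",
  "id": "daily-quota",
  "enabled": true,
  "include": "*",
  "name": "per-service-quota",
  "overflow_action": "drop",
  "partition_fields": ["service"],
  "limit": { "enforce": "bytes", "limit": 10737418240 },
  "overrides": [
    {
      "fields": [ { "name": "service", "value": "checkout" } ],
      "limit": { "enforce": "events", "limit": 5000000 }
    }
  ]
}
```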
Option 15
object
The reduce processor aggregates and merges logs based on matching keys and merge strategies.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
group_by [required]
[string]
A list of fields used to group log events for merging.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
merge_strategies [required]
[object]
List of merge strategies defining how values from grouped events should be combined.
path [required]
string
The field path in the log event.
strategy [required]
enum
The merge strategy to apply.
Allowed enum values: discard,retain,sum,max,min,array,concat,concat_newline,concat_raw,shortest_array,longest_array,flat_unique
type [required]
enum
The processor type. The value should always be reduce.
Allowed enum values: reduce
default: reduce
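A sketch of a reduce processor that groups by a transaction ID and combines two fields with different strategies (the group key and paths are hypothetical):

```json
{
  "type": "reduce",
  "id": "merge-related-logs",
  "enabled": true,
  "include": "*",
  "group_by": ["transaction_id"],
  "merge_strategies": [
    { "path": "message", "strategy": "concat_newline" },
    { "path": "bytes_sent", "strategy": "sum" }
  ]
}
```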
Option 16
object
The remove_fields processor deletes specified fields from logs.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[string]
A list of field names to be removed from each log event.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be remove_fields.
Allowed enum values: remove_fields
default: remove_fields
Option 17
object
The rename_fields processor changes field names.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[object]
A list of rename rules specifying which fields to rename in the event, what to rename them to, and whether to preserve the original fields.
destination [required]
string
The field name to assign the renamed value to.
preserve_source [required]
boolean
Indicates whether the original field received from the source should be kept (true) or removed (false) after renaming.
source [required]
string
The original field name in the log event that should be renamed.
id [required]
string
A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be rename_fields.
Allowed enum values: rename_fields
default: rename_fields
Option 18
object
The sample processor allows probabilistic sampling of logs at a fixed rate.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
group_by
[string]
Optional list of fields to group events by. Each group is sampled independently.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
percentage [required]
double
The percentage of logs to sample.
type [required]
enum
The processor type. The value should always be sample.
Allowed enum values: sample
default: sample
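An illustrative sample configuration that keeps 10% of matching logs, sampled independently per service (query and values are hypothetical):

```json
{
  "type": "sample",
  "id": "sample-debug-logs",
  "enabled": true,
  "include": "status:debug",
  "percentage": 10.0,
  "group_by": ["service"]
}
```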
Option 19
object
The sensitive_data_scanner processor detects and optionally redacts sensitive data in log events.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
rules [required]
[object]
A list of rules for identifying and acting on sensitive data patterns.
keyword_options
object
Configuration for keywords used to reinforce sensitive data pattern detection.
keywords [required]
[string]
A list of keywords to match near the sensitive pattern.
proximity [required]
int64
Maximum number of tokens between a keyword and a sensitive value match.
name [required]
string
A name identifying the rule.
on_match [required]
<oneOf>
Defines what action to take when sensitive data is matched.
Option 1
object
Configuration for completely redacting matched sensitive data.
action [required]
enum
Action type that completely replaces the matched sensitive data with a fixed replacement string to remove all visibility.
Allowed enum values: redact
options [required]
object
Configuration for fully redacting sensitive data.
replace [required]
string
The replacement string used in place of the matched sensitive data.
Option 2
object
Configuration for hashing matched sensitive values.
action [required]
enum
Action type that replaces the matched sensitive data with a hashed representation, preserving structure while securing content.
Allowed enum values: hash
options
object
Options for the hash action.
Option 3
object
Configuration for partially redacting matched sensitive data.
action [required]
enum
Action type that redacts part of the sensitive data while preserving a configurable number of characters, typically used for masking purposes (e.g., show last 4 digits of a credit card).
Allowed enum values: partial_redact
options [required]
object
Controls how partial redaction is applied, including character count and direction.
characters [required]
int64
The number of characters to which the partial redaction applies.
direction [required]
enum
Indicates whether to redact characters from the first or last part of the matched value.
Allowed enum values: first,last
pattern [required]
<oneOf>
Pattern detection configuration for identifying sensitive data using either a custom regex or a library reference.
Option 1
object
Defines a custom regex-based pattern for identifying sensitive data in logs.
options [required]
object
Options for defining a custom regex pattern.
description
string
Human-readable description providing context about a sensitive data scanner rule.
rule [required]
string
A regular expression used to detect sensitive values. Must be a valid regex.
type [required]
enum
Indicates a custom regular expression is used for matching.
Allowed enum values: custom
Option 2
object
Specifies a pattern from Datadog’s sensitive data detection library to match known sensitive data types.
options [required]
object
Options for selecting a predefined library pattern and enabling keyword support.
description
string
Human-readable description providing context about a sensitive data scanner rule.
id [required]
string
Identifier for a predefined pattern from the sensitive data scanner pattern library.
use_recommended_keywords
boolean
Whether to augment the pattern with recommended keywords (optional).
type [required]
enum
Indicates that a predefined library pattern is used.
Allowed enum values: library
scope [required]
<oneOf>
Determines which parts of the log the pattern-matching rule should be applied to.
Option 1
object
Includes only specific fields for sensitive data scanning.
options [required]
object
Fields to which the scope rule applies.
fields [required]
[string]
The list of fields to which the scope rule applies.
target [required]
enum
Applies the rule only to included fields.
Allowed enum values: include
Option 2
object
Excludes specific fields from sensitive data scanning.
options [required]
object
Fields to which the scope rule applies.
fields [required]
[string]
The list of fields to which the scope rule applies.
target [required]
enum
Excludes specific fields from processing.
Allowed enum values: exclude
Option 3
object
Applies scanning across all available fields.
target [required]
enum
Applies the rule to all fields.
Allowed enum values: all
tags [required]
[string]
Tags assigned to this rule for filtering and classification.
type [required]
enum
The processor type. The value should always be sensitive_data_scanner.
Allowed enum values: sensitive_data_scanner
default: sensitive_data_scanner
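A sketch of a sensitive_data_scanner rule using a custom regex and partial redaction; the regex, field paths, and redaction semantics (which end of the match is redacted) are illustrative assumptions, not a validated pattern:

```json
{
  "type": "sensitive_data_scanner",
  "id": "mask-card-numbers",
  "enabled": true,
  "include": "*",
  "rules": [
    {
      "name": "credit card partial redact",
      "tags": ["sensitive_data:credit_card"],
      "pattern": {
        "type": "custom",
        "options": { "rule": "\\b(?:\\d[ -]?){13,16}\\b" }
      },
      "scope": {
        "target": "include",
        "options": { "fields": ["payment.card_number"] }
      },
      "on_match": {
        "action": "partial_redact",
        "options": { "characters": 4, "direction": "first" }
      }
    }
  ]
}
```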
Option 20
object
The split_array processor splits array fields into separate events based on configured rules.
Supported pipeline types: logs
arrays [required]
[object]
A list of array split configurations.
field [required]
string
The path to the array field to split.
include [required]
string
A Datadog search query used to determine which logs this array split operation targets.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets. For split_array, this should typically be *.
type [required]
enum
The processor type. The value should always be split_array.
Allowed enum values: split_array
default: split_array
Option 21
object
The throttle processor limits the number of events that pass through over a given time window.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
group_by
[string]
Optional list of fields used to group events before the threshold has been reached.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
threshold [required]
int64
The number of events allowed in a given time window. Events sent after the threshold has been reached are dropped.
type [required]
enum
The processor type. The value should always be throttle.
Allowed enum values: throttle
default: throttle
window [required]
double
The time window in seconds over which the threshold applies.
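An illustrative throttle configuration allowing at most 1000 events per 60-second window per host (query and values are hypothetical):

```json
{
  "type": "throttle",
  "id": "throttle-noisy-service",
  "enabled": true,
  "include": "service:batch-worker",
  "threshold": 1000,
  "window": 60.0,
  "group_by": ["host"]
}
```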
Option 22
object
The metric_tags processor filters metrics based on their tags using Datadog tag key patterns.
Supported pipeline types: metrics
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query that determines which metrics the processor targets.
rules [required]
[object]
A list of rules for filtering metric tags.
action [required]
enum
The action to take on tags with matching keys.
Allowed enum values: include,exclude
include [required]
string
A Datadog search query used to determine which metrics this rule targets.
keys [required]
[string]
A list of tag keys to include or exclude.
mode [required]
enum
The processing mode for tag filtering.
Allowed enum values: filter
type [required]
enum
The processor type. The value should always be metric_tags.
Allowed enum values: metric_tags
default: metric_tags
sources [required]
[ <oneOf>]
A list of configured data sources for the pipeline.
Option 1
object
The datadog_agent source collects logs/metrics from the Datadog Agent.
Supported pipeline types: logs, metrics
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be datadog_agent.
Allowed enum values: datadog_agent
default: datadog_agent
Option 2
object
The amazon_data_firehose source ingests logs from AWS Data Firehose.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the Firehose delivery stream address.
auth
object
AWS authentication credentials used for accessing AWS services such as S3.
If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).
assume_role
string
The Amazon Resource Name (ARN) of the role to assume.
external_id
string
A unique identifier for cross-account role assumption.
session_name
string
A session identifier used for logging and tracing the assumed role session.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be amazon_data_firehose.
Allowed enum values: amazon_data_firehose
default: amazon_data_firehose
Option 3
object
The amazon_s3 source ingests logs from an Amazon S3 bucket.
It supports AWS authentication and TLS encryption.
Supported pipeline types: logs
auth
object
AWS authentication credentials used for accessing AWS services such as S3.
If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).
assume_role
string
The Amazon Resource Name (ARN) of the role to assume.
external_id
string
A unique identifier for cross-account role assumption.
session_name
string
A session identifier used for logging and tracing the assumed role session.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
region [required]
string
AWS region where the S3 bucket resides.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. Always amazon_s3.
Allowed enum values: amazon_s3
default: amazon_s3
url_key
string
Name of the environment variable or secret that holds the S3 bucket URL.
Option 4
object
The fluent_bit source ingests logs from Fluent Bit.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the Fluent Bit receiver.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be fluent_bit.
Allowed enum values: fluent_bit
default: fluent_bit
Option 5
object
The fluentd source ingests logs from a Fluentd-compatible service.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the Fluent receiver.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be fluentd.
Allowed enum values: fluentd
default: fluentd
Option 6
object
The google_pubsub source ingests logs from a Google Cloud Pub/Sub subscription.
Supported pipeline types: logs
auth
object
Google Cloud credentials used to authenticate with Google Cloud Pub/Sub.
credentials_file [required]
string
Path to the Google Cloud service account key file.
decoding [required]
enum
The decoding format used to interpret incoming logs.
Allowed enum values: bytes,gelf,json,syslog
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
project [required]
string
The Google Cloud project ID that owns the Pub/Sub subscription.
subscription [required]
string
The Pub/Sub subscription name from which messages are consumed.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be google_pubsub.
Allowed enum values: google_pubsub
default: google_pubsub
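The required fields above can be sketched as a plain config entry; the project ID, subscription name, and key file path are hypothetical placeholders.

```python
# google_pubsub source sketch; project, subscription, and path are hypothetical.
pubsub_source = {
    "id": "pubsub-logs",
    "type": "google_pubsub",
    "project": "my-gcp-project",
    "subscription": "logs-subscription",
    "decoding": "json",  # one of: bytes, gelf, json, syslog
    "auth": {"credentials_file": "/var/secrets/gcp-sa.json"},
}
```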
Option 7
object
The http_client source scrapes logs from HTTP endpoints at regular intervals.
Supported pipeline types: logs
auth_strategy
enum
Optional authentication strategy for HTTP requests.
Allowed enum values: none,basic,bearer,custom
custom_key
string
Name of the environment variable or secret that holds a custom header value (used with custom auth strategies).
decoding [required]
enum
The decoding format used to interpret incoming logs.
Allowed enum values: bytes,gelf,json,syslog
endpoint_url_key
string
Name of the environment variable or secret that holds the HTTP endpoint URL to scrape.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
password_key
string
Name of the environment variable or secret that holds the password (used when auth_strategy is basic).
scrape_interval_secs
int64
The interval (in seconds) between HTTP scrape requests.
scrape_timeout_secs
int64
The timeout (in seconds) for each scrape request.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
token_key
string
Name of the environment variable or secret that holds the bearer token (used when auth_strategy is bearer).
type [required]
enum
The source type. The value should always be http_client.
Allowed enum values: http_client
default: http_client
username_key
string
Name of the environment variable or secret that holds the username (used when auth_strategy is basic).
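A point worth stressing for this source: every *_key field names an environment variable or secret; it never holds the URL or credential value itself. A basic-auth scrape configuration might be sketched as follows, with all IDs and variable names hypothetical.

```python
# http_client source sketch. The *_key fields name environment variables or
# secrets, not the values themselves. All names below are hypothetical.
http_client_source = {
    "id": "http-scraper",
    "type": "http_client",
    "decoding": "json",
    "endpoint_url_key": "SCRAPE_ENDPOINT_URL",
    "auth_strategy": "basic",
    "username_key": "SCRAPE_USERNAME",
    "password_key": "SCRAPE_PASSWORD",
    "scrape_interval_secs": 60,  # one request per minute
    "scrape_timeout_secs": 10,   # each request times out after 10 s
}
```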
Option 8
object
The http_server source collects logs over HTTP POST from external services.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the HTTP server.
decoding [required]
enum
The decoding format used to interpret incoming logs.
Allowed enum values: bytes,gelf,json,syslog
id [required]
string
Unique ID for the HTTP server source.
password_key
string
Name of the environment variable or secret that holds the password (used when auth_strategy is plain).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be http_server.
Allowed enum values: http_server
default: http_server
username_key
string
Name of the environment variable or secret that holds the username (used when auth_strategy is plain).
Option 9
object
The kafka source ingests data from Apache Kafka topics.
Supported pipeline types: logs
bootstrap_servers_key
string
Name of the environment variable or secret that holds the Kafka bootstrap servers list.
group_id [required]
string
Consumer group ID used by the Kafka client.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
librdkafka_options
[object]
Optional list of advanced Kafka client configuration options, defined as key-value pairs.
name [required]
string
The name of the librdkafka configuration option to set.
value [required]
string
The value assigned to the specified librdkafka configuration option.
sasl
object
Specifies the SASL mechanism for authenticating with a Kafka cluster.
mechanism
enum
SASL mechanism used for Kafka authentication.
Allowed enum values: PLAIN,SCRAM-SHA-256,SCRAM-SHA-512
password_key
string
Name of the environment variable or secret that holds the SASL password.
username_key
string
Name of the environment variable or secret that holds the SASL username.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
topics [required]
[string]
A list of Kafka topic names to subscribe to. The source ingests messages from each topic specified.
type [required]
enum
The source type. The value should always be kafka.
Allowed enum values: kafka
default: kafka
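The Kafka fields combine as in the sketch below; the group ID, topics, and env var name are hypothetical, and librdkafka_options passes advanced client tuning through as name/value string pairs.

```python
# kafka source sketch; group ID, topics, and env var name are hypothetical.
kafka_source = {
    "id": "kafka-logs",
    "type": "kafka",
    "group_id": "op-consumer-group",
    "topics": ["app-logs", "audit-logs"],          # one subscription per topic
    "bootstrap_servers_key": "KAFKA_BOOTSTRAP_SERVERS",
    "sasl": {"mechanism": "SCRAM-SHA-512"},
    "librdkafka_options": [
        # advanced client options are plain name/value string pairs
        {"name": "fetch.message.max.bytes", "value": "1048576"},
    ],
}
```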
Option 10
object
The logstash source ingests logs from a Logstash forwarder.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the Logstash receiver.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be logstash.
Allowed enum values: logstash
default: logstash
Option 11
object
The rsyslog source listens for logs over TCP or UDP from an rsyslog server using the syslog protocol.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the syslog receiver.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
mode [required]
enum
Protocol used by the syslog source to receive messages.
Allowed enum values: tcp,udp
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be rsyslog.
Allowed enum values: rsyslog
default: rsyslog
Option 12
object
The socket source ingests logs over TCP or UDP.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the socket.
framing [required]
<oneOf>
Framing method configuration for the socket source.
Option 1
object
Byte frames which are delimited by a newline character.
method [required]
enum
Byte frames which are delimited by a newline character.
Allowed enum values: newline_delimited
Option 2
object
Byte frames are passed through as-is according to the underlying I/O boundaries (for example, split between messages or stream segments).
method [required]
enum
Byte frames are passed through as-is according to the underlying I/O boundaries (for example, split between messages or stream segments).
Allowed enum values: bytes
Option 3
object
Byte frames which are delimited by a chosen character.
delimiter [required]
string
A single ASCII character used to delimit events.
method [required]
enum
Byte frames which are delimited by a chosen character.
Allowed enum values: character_delimited
Option 4
object
Byte frames according to the octet counting format as per RFC6587.
method [required]
enum
Byte frames according to the octet counting format as per RFC6587.
Allowed enum values: octet_counting
Option 5
object
Byte frames which are chunked GELF messages.
method [required]
enum
Byte frames which are chunked GELF messages.
Allowed enum values: chunked_gelf
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
mode [required]
enum
Protocol used to receive logs.
Allowed enum values: tcp,udp
tls
object
TLS configuration. Relevant only when mode is tcp.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be socket.
Allowed enum values: socket
default: socket
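Of the framing variants above, character_delimited is the one that also requires a delimiter (a single ASCII character). A TCP socket source using it might be sketched as below, with hypothetical IDs.

```python
# socket source sketch; IDs and env var name are hypothetical.
socket_source = {
    "id": "tcp-socket-logs",
    "type": "socket",
    "mode": "tcp",  # tls, if configured, is relevant only for tcp
    "address_key": "SOCKET_LISTEN_ADDRESS",
    # character_delimited framing requires a single ASCII delimiter character
    "framing": {"method": "character_delimited", "delimiter": "|"},
}
```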
Option 13
object
The splunk_hec source implements the Splunk HTTP Event Collector (HEC) API.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the HEC API.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. Always splunk_hec.
Allowed enum values: splunk_hec
default: splunk_hec
Option 14
object
The splunk_tcp source receives logs from a Splunk Universal Forwarder over TCP.
TLS is supported for secure transmission.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the Splunk TCP receiver.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. Always splunk_tcp.
Allowed enum values: splunk_tcp
default: splunk_tcp
Option 15
object
The sumo_logic source receives logs from Sumo Logic collectors.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the Sumo Logic receiver.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
type [required]
enum
The source type. The value should always be sumo_logic.
Allowed enum values: sumo_logic
default: sumo_logic
Option 16
object
The syslog_ng source listens for logs over TCP or UDP from a syslog-ng server using the syslog protocol.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the syslog-ng receiver.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
mode [required]
enum
Protocol used by the syslog source to receive messages.
Allowed enum values: tcp,udp
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be syslog_ng.
Allowed enum values: syslog_ng
default: syslog_ng
Option 17
object
The opentelemetry source receives telemetry data using the OpenTelemetry Protocol (OTLP) over gRPC and HTTP.
Supported pipeline types: logs
grpc_address_key
string
Environment variable name containing the gRPC server address for receiving OTLP data. Must be a valid environment variable name (alphanumeric characters and underscores only).
http_address_key
string
Environment variable name containing the HTTP server address for receiving OTLP data. Must be a valid environment variable name (alphanumeric characters and underscores only).
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be opentelemetry.
Allowed enum values: opentelemetry
default: opentelemetry
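Since both address fields must be valid environment variable names (alphanumerics and underscores only), a simple pre-flight check might look like the sketch below; the env var names themselves are hypothetical.

```python
import re

# The *_address_key fields must be valid environment variable names
# (alphanumeric characters and underscores only).
def is_valid_env_var_name(name: str) -> bool:
    return re.fullmatch(r"[A-Za-z0-9_]+", name) is not None

otlp_source = {
    "id": "otel-logs",
    "type": "opentelemetry",
    "grpc_address_key": "OTLP_GRPC_ADDRESS",  # hypothetical env var names
    "http_address_key": "OTLP_HTTP_ADDRESS",
}

assert is_valid_env_var_name(otlp_source["grpc_address_key"])
assert not is_valid_env_var_name("otlp-grpc-address")  # hyphens are rejected
```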
use_legacy_search_syntax
boolean
Set to true to continue using the legacy search syntax while migrating filter queries. After migrating all queries to the new syntax, set to false.
The legacy syntax is deprecated and will eventually be removed.
Requires Observability Pipelines Worker 2.11 or later.
See Upgrade Your Filter Queries to the New Search Syntax for more information.
name [required]
string
Name of the pipeline.
id [required]
string
Unique identifier for the pipeline.
type [required]
string
The resource type identifier. For pipeline resources, this should always be set to pipelines.
"""
Get a specific pipeline returns "OK" response
"""fromosimportenvironfromdatadog_api_clientimportApiClient,Configurationfromdatadog_api_client.v2.api.observability_pipelines_apiimportObservabilityPipelinesApi# there is a valid "pipeline" in the systemPIPELINE_DATA_ID=environ["PIPELINE_DATA_ID"]configuration=Configuration()withApiClient(configuration)asapi_client:api_instance=ObservabilityPipelinesApi(api_client)response=api_instance.get_pipeline(pipeline_id=PIPELINE_DATA_ID,)print(response)
# Get a specific pipeline returns "OK" responserequire"datadog_api_client"api_instance=DatadogAPIClient::V2::ObservabilityPipelinesAPI.new# there is a valid "pipeline" in the systemPIPELINE_DATA_ID=ENV["PIPELINE_DATA_ID"]papi_instance.get_pipeline(PIPELINE_DATA_ID)
// Get a specific pipeline returns "OK" responsepackagemainimport("context""encoding/json""fmt""os""github.com/DataDog/datadog-api-client-go/v2/api/datadog""github.com/DataDog/datadog-api-client-go/v2/api/datadogV2")funcmain(){// there is a valid "pipeline" in the systemPipelineDataID:=os.Getenv("PIPELINE_DATA_ID")ctx:=datadog.NewDefaultContext(context.Background())configuration:=datadog.NewConfiguration()apiClient:=datadog.NewAPIClient(configuration)api:=datadogV2.NewObservabilityPipelinesApi(apiClient)resp,r,err:=api.GetPipeline(ctx,PipelineDataID)iferr!=nil{fmt.Fprintf(os.Stderr,"Error when calling `ObservabilityPipelinesApi.GetPipeline`: %v\n",err)fmt.Fprintf(os.Stderr,"Full HTTP response: %v\n",r)}responseContent,_:=json.MarshalIndent(resp,""," ")fmt.Fprintf(os.Stdout,"Response from `ObservabilityPipelinesApi.GetPipeline`:\n%s\n",responseContent)}
// Get a specific pipeline returns "OK" responseimportcom.datadog.api.client.ApiClient;importcom.datadog.api.client.ApiException;importcom.datadog.api.client.v2.api.ObservabilityPipelinesApi;importcom.datadog.api.client.v2.model.ObservabilityPipeline;publicclassExample{publicstaticvoidmain(String[]args){ApiClientdefaultClient=ApiClient.getDefaultApiClient();ObservabilityPipelinesApiapiInstance=newObservabilityPipelinesApi(defaultClient);// there is a valid "pipeline" in the systemStringPIPELINE_DATA_ID=System.getenv("PIPELINE_DATA_ID");try{ObservabilityPipelineresult=apiInstance.getPipeline(PIPELINE_DATA_ID);System.out.println(result);}catch(ApiExceptione){System.err.println("Exception when calling ObservabilityPipelinesApi#getPipeline");System.err.println("Status code: "+e.getCode());System.err.println("Reason: "+e.getResponseBody());System.err.println("Response headers: "+e.getResponseHeaders());e.printStackTrace();}}}
// Get a specific pipeline returns "OK" response
usedatadog_api_client::datadog;usedatadog_api_client::datadogV2::api_observability_pipelines::ObservabilityPipelinesAPI;#[tokio::main]asyncfnmain(){// there is a valid "pipeline" in the system
letpipeline_data_id=std::env::var("PIPELINE_DATA_ID").unwrap();letconfiguration=datadog::Configuration::new();letapi=ObservabilityPipelinesAPI::with_config(configuration);letresp=api.get_pipeline(pipeline_data_id.clone()).await;ifletOk(value)=resp{println!("{:#?}",value);}else{println!("{:#?}",resp.unwrap_err());}}
DD_SITE="datadoghq.comus3.datadoghq.comus5.datadoghq.comdatadoghq.euap1.datadoghq.comap2.datadoghq.comddog-gov.com"DD_API_KEY="<API-KEY>"DD_APP_KEY="<APP-KEY>"cargo run
/**
* Get a specific pipeline returns "OK" response
*/import{client,v2}from"@datadog/datadog-api-client";constconfiguration=client.createConfiguration();constapiInstance=newv2.ObservabilityPipelinesApi(configuration);// there is a valid "pipeline" in the system
constPIPELINE_DATA_ID=process.env.PIPELINE_DATA_IDasstring;constparams: v2.ObservabilityPipelinesApiGetPipelineRequest={pipelineId: PIPELINE_DATA_ID,};apiInstance.getPipeline(params).then((data: v2.ObservabilityPipeline)=>{console.log("API called successfully. Returned data: "+JSON.stringify(data));}).catch((error: any)=>console.error(error));
Name of the environment variable or secret that holds a custom header value (used with custom auth strategies).
encoding [required]
enum
Encoding format for log events.
Allowed enum values: json
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
password_key
string
Name of the environment variable or secret that holds the password (used when auth_strategy is basic).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
token_key
string
Name of the environment variable or secret that holds the bearer token (used when auth_strategy is bearer).
type [required]
enum
The destination type. The value should always be http_client.
Allowed enum values: http_client
default: http_client
uri_key
string
Name of the environment variable or secret that holds the HTTP endpoint URI.
username_key
string
Name of the environment variable or secret that holds the username (used when auth_strategy is basic).
Option 2
object
The amazon_opensearch destination writes logs to Amazon OpenSearch.
Supported pipeline types: logs
auth [required]
object
Authentication settings for the Amazon OpenSearch destination.
The strategy field determines whether basic or AWS-based authentication is used.
assume_role
string
The ARN of the role to assume (used with aws strategy).
aws_region
string
AWS region.
external_id
string
External ID for the assumed role (used with aws strategy).
session_name
string
Session name for the assumed role (used with aws strategy).
strategy [required]
enum
The authentication strategy to use.
Allowed enum values: basic,aws
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
bulk_index
string
The index to write logs to.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The destination type. The value should always be amazon_opensearch.
Allowed enum values: amazon_opensearch
default: amazon_opensearch
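A complete amazon_opensearch destination entry using the aws auth strategy and a disk buffer might be sketched as below; the role ARN, region, index name, and component IDs are hypothetical.

```python
# amazon_opensearch destination sketch; ARN, region, and IDs are hypothetical.
opensearch_destination = {
    "id": "opensearch-out",
    "type": "amazon_opensearch",
    "inputs": ["kafka-logs"],  # IDs of upstream components feeding this one
    "auth": {
        "strategy": "aws",     # basic or aws
        "aws_region": "us-east-1",
        "assume_role": "arn:aws:iam::123456789012:role/op-writer",
        "external_id": "op-external-id",
    },
    "bulk_index": "logs-index",
    "buffer": {
        "type": "disk",
        "max_size": 1073741824,  # 1 GiB
        "when_full": "block",    # or drop_newest
    },
}
```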
Option 3
object
The amazon_s3 destination sends your logs in Datadog-rehydratable format to an Amazon S3 bucket for archiving.
Supported pipeline types: logs
auth
object
AWS authentication credentials used for accessing AWS services such as S3.
If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).
assume_role
string
The Amazon Resource Name (ARN) of the role to assume.
external_id
string
A unique identifier for cross-account role assumption.
session_name
string
A session identifier used for logging and tracing the assumed role session.
bucket [required]
string
S3 bucket name.
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
id [required]
string
Unique identifier for the destination component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The destination type. Always amazon_s3.
Allowed enum values: amazon_s3
default: amazon_s3
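As an illustrative sketch (not an official payload), an amazon_s3 archive destination with a disk buffer might be declared as follows; the component ID, input names, and bucket are hypothetical:

```python
# Hypothetical amazon_s3 destination; auth is omitted, so the system's
# default credentials (IAM role and environment variables) would be used.
s3_dest = {
    "id": "s3-archive",
    "type": "amazon_s3",
    "inputs": ["sample-logs"],
    "bucket": "acme-log-archive",                 # hypothetical bucket name
    "buffer": {                                   # disk buffer (Option 1 above)
        "type": "disk",
        "max_size": 1073741824,                   # 1 GiB, illustrative value
        "when_full": "block",
    },
}
```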
Option 4
object
The amazon_security_lake destination sends your logs to Amazon Security Lake.
Supported pipeline types: logs
auth
object
AWS authentication credentials used for accessing AWS services such as S3.
If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).
assume_role
string
The Amazon Resource Name (ARN) of the role to assume.
external_id
string
A unique identifier for cross-account role assumption.
session_name
string
A session identifier used for logging and tracing the assumed role session.
bucket [required]
string
Name of the Amazon S3 bucket in Security Lake (3-63 characters).
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
custom_source_name [required]
string
Custom source name for the logs in Security Lake.
id [required]
string
Unique identifier for the destination component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
region [required]
string
AWS region of the S3 bucket.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The destination type. Always amazon_security_lake.
Allowed enum values: amazon_security_lake
default: amazon_security_lake
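The required fields above can be combined into a sketch like this one; bucket, region, and source names are invented:

```python
# Hypothetical amazon_security_lake destination covering the required fields.
security_lake_dest = {
    "id": "seclake-out",
    "type": "amazon_security_lake",
    "inputs": ["ocsf-map"],                         # hypothetical upstream component
    "bucket": "aws-security-data-lake-example",     # must be 3-63 characters
    "region": "eu-west-1",                          # AWS region of the S3 bucket
    "custom_source_name": "acme-firewall",          # custom source in Security Lake
}
```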
Option 5
object
The azure_storage destination forwards logs to an Azure Blob Storage container.
Supported pipeline types: logs
blob_prefix
string
Optional prefix for blobs written to the container.
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
connection_string_key
string
Name of the environment variable or secret that holds the Azure Storage connection string.
container_name [required]
string
The name of the Azure Blob Storage container to store logs in.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The destination type. The value should always be azure_storage.
Allowed enum values: azure_storage
default: azure_storage
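A hypothetical azure_storage destination built from the documented fields might look like this; the container name and environment-variable name are assumptions:

```python
# Hypothetical azure_storage destination; connection_string_key names an
# environment variable or secret, not the connection string itself.
azure_dest = {
    "id": "azure-blob-out",
    "type": "azure_storage",
    "inputs": ["redact-pii"],
    "container_name": "pipeline-logs",                           # required
    "connection_string_key": "AZURE_STORAGE_CONNECTION_STRING",  # secret name
    "blob_prefix": "prod/",                                      # optional prefix
}
```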
Option 6
object
The cloud_prem destination sends logs to Datadog CloudPrem.
Supported pipeline types: logs
endpoint_url_key
string
Name of the environment variable or secret that holds the CloudPrem endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The destination type. The value should always be cloud_prem.
Allowed enum values: cloud_prem
default: cloud_prem
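Since cloud_prem only adds an endpoint reference on top of the common fields, its sketch is short; the endpoint variable name is hypothetical:

```python
# Hypothetical cloud_prem destination; endpoint_url_key is the NAME of the
# environment variable or secret that holds the CloudPrem endpoint URL.
cloud_prem_dest = {
    "id": "cloudprem-out",
    "type": "cloud_prem",
    "inputs": ["enrich-tags"],
    "endpoint_url_key": "CLOUDPREM_ENDPOINT_URL",
}
```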
Option 7
object
The crowdstrike_next_gen_siem destination forwards logs to CrowdStrike Next Gen SIEM.
Supported pipeline types: logs
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
compression
object
Compression configuration for log events.
algorithm [required]
enum
Compression algorithm for log events.
Allowed enum values: gzip,zlib
level
int64
Compression level.
encoding [required]
enum
Encoding format for log events.
Allowed enum values: json,raw_message
endpoint_url_key
string
Name of the environment variable or secret that holds the CrowdStrike endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
token_key
string
Name of the environment variable or secret that holds the CrowdStrike API token.
type [required]
enum
The destination type. The value should always be crowdstrike_next_gen_siem.
Allowed enum values: crowdstrike_next_gen_siem
default: crowdstrike_next_gen_siem
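Putting the required encoding together with optional compression and secret references gives a sketch like the following; endpoint and token variable names are invented:

```python
# Hypothetical crowdstrike_next_gen_siem destination.
crowdstrike_dest = {
    "id": "ngsiem-out",
    "type": "crowdstrike_next_gen_siem",
    "inputs": ["filter-security"],
    "encoding": "json",                              # required: json or raw_message
    "compression": {"algorithm": "gzip", "level": 6},
    "endpoint_url_key": "CROWDSTRIKE_ENDPOINT_URL",  # secret name (hypothetical)
    "token_key": "CROWDSTRIKE_API_TOKEN",            # secret name (hypothetical)
}
```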
Option 8
object
The datadog_logs destination forwards logs to Datadog Log Management.
Supported pipeline types: logs
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
routes
[object]
A list of routing rules that forward matching logs to Datadog using dedicated API keys.
api_key_key
string
Name of the environment variable or secret that stores the Datadog API key used by this route.
include
string
A Datadog search query that determines which logs are forwarded using this route.
route_id
string
Unique identifier for this route within the destination.
site
string
Datadog site where matching logs are sent (for example, us1).
type [required]
enum
The destination type. The value should always be datadog_logs.
Allowed enum values: datadog_logs
default: datadog_logs
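The routes list is the distinctive part of this destination; a sketch with one route (all names and the query are hypothetical) might read:

```python
# Hypothetical datadog_logs destination with a single route that forwards
# matching logs using a dedicated API key stored in an env var/secret.
datadog_dest = {
    "id": "dd-logs-out",
    "type": "datadog_logs",
    "inputs": ["scrub-creds"],
    "routes": [
        {
            "route_id": "prod-route",
            "include": "env:prod",             # Datadog search query
            "api_key_key": "DD_API_KEY_PROD",  # secret name, not the key itself
            "site": "us1",
        },
    ],
}
```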
Option 9
object
The elasticsearch destination writes logs to an Elasticsearch cluster.
Supported pipeline types: logs
api_version
enum
The Elasticsearch API version to use. Set to auto to auto-detect.
Allowed enum values: auto,v6,v7,v8
auth
object
Authentication settings for the Elasticsearch destination.
When strategy is basic, use username_key and password_key to reference credentials stored in environment variables or secrets.
password_key
string
Name of the environment variable or secret that holds the Elasticsearch password (used when strategy is basic).
strategy [required]
enum
The authentication strategy to use.
Allowed enum values: basic,aws
username_key
string
Name of the environment variable or secret that holds the Elasticsearch username (used when strategy is basic).
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
bulk_index
string
The index to write logs to in Elasticsearch.
data_stream
object
Configuration options for writing to Elasticsearch Data Streams instead of a fixed index.
dataset
string
The data stream dataset for your logs. This groups logs by their source or application.
dtype
string
The data stream type for your logs. This determines how logs are categorized within the data stream.
namespace
string
The data stream namespace for your logs. This separates logs into different environments or domains.
endpoint_url_key
string
Name of the environment variable or secret that holds the Elasticsearch endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The destination type. The value should always be elasticsearch.
Allowed enum values: elasticsearch
default: elasticsearch
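With basic authentication, the documented fields combine into a sketch like this; the secret names and index are assumptions:

```python
# Hypothetical elasticsearch destination using basic auth; *_key fields name
# environment variables or secrets rather than holding credentials directly.
es_dest = {
    "id": "es-out",
    "type": "elasticsearch",
    "inputs": ["parse-json"],
    "api_version": "auto",                    # auto-detect the API version
    "endpoint_url_key": "ES_ENDPOINT_URL",
    "auth": {
        "strategy": "basic",
        "username_key": "ES_USERNAME",
        "password_key": "ES_PASSWORD",
    },
    "bulk_index": "pipeline-logs",
}
```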
Option 10
object
The google_chronicle destination sends logs to Google Chronicle.
Supported pipeline types: logs
auth
object
Google Cloud credentials used to authenticate with Google Cloud services.
credentials_file [required]
string
Path to the Google Cloud service account key file.
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
customer_id [required]
string
The Google Chronicle customer ID.
encoding
enum
The encoding format for the logs sent to Chronicle.
Allowed enum values: json,raw_message
endpoint_url_key
string
Name of the environment variable or secret that holds the Google Chronicle endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
log_type
string
The log type metadata associated with the Chronicle destination.
type [required]
enum
The destination type. The value should always be google_chronicle.
Allowed enum values: google_chronicle
default: google_chronicle
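A hypothetical google_chronicle destination assembled from these fields could look as follows; the customer ID, key-file path, and log type are illustrative only:

```python
# Hypothetical google_chronicle destination.
chronicle_dest = {
    "id": "chronicle-out",
    "type": "google_chronicle",
    "inputs": ["normalize-logs"],
    "customer_id": "00000000-0000-0000-0000-000000000000",   # placeholder ID
    "auth": {"credentials_file": "/var/secrets/gcp-sa.json"},  # service account key
    "encoding": "json",                                        # json or raw_message
    "log_type": "NIX_SYSTEM",                                  # illustrative value
}
```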
Option 11
object
The google_cloud_storage destination stores logs in a Google Cloud Storage (GCS) bucket.
It requires a bucket name, Google Cloud authentication, and metadata fields.
Supported pipeline types: logs
acl
enum
Access control list setting for objects written to the bucket.
Allowed enum values: private,project-private,public-read,authenticated-read,bucket-owner-read,bucket-owner-full-control
auth
object
Google Cloud credentials used to authenticate with Google Cloud Storage.
credentials_file [required]
string
Path to the Google Cloud service account key file.
bucket [required]
string
Name of the GCS bucket.
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
id [required]
string
Unique identifier for the destination component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
key_prefix
string
Optional prefix for object keys within the GCS bucket.
metadata
[object]
Custom metadata to attach to each object uploaded to the GCS bucket.
name [required]
string
The metadata key.
value [required]
string
The metadata value.
storage_class [required]
enum
Storage class used for objects stored in GCS.
Allowed enum values: STANDARD,NEARLINE,COLDLINE,ARCHIVE
type [required]
enum
The destination type. Always google_cloud_storage.
Allowed enum values: google_cloud_storage
default: google_cloud_storage
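The required bucket, storage_class, and auth fields, plus optional ACL and metadata, combine into a sketch like this (bucket and metadata values invented):

```python
# Hypothetical google_cloud_storage destination.
gcs_dest = {
    "id": "gcs-archive",
    "type": "google_cloud_storage",
    "inputs": ["sample-logs"],
    "bucket": "acme-pipeline-archive",                          # hypothetical
    "storage_class": "NEARLINE",                                # required
    "acl": "project-private",                                   # optional ACL
    "auth": {"credentials_file": "/var/secrets/gcp-sa.json"},
    "key_prefix": "logs/",                                      # optional prefix
    "metadata": [{"name": "team", "value": "platform"}],        # custom metadata
}
```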
Option 12
object
The google_pubsub destination publishes logs to a Google Cloud Pub/Sub topic.
Supported pipeline types: logs
auth
object
Google Cloud credentials used to authenticate with Google Cloud services.
credentials_file [required]
string
Path to the Google Cloud service account key file.
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
encoding [required]
enum
Encoding format for log events.
Allowed enum values: json,raw_message
endpoint_url_key
string
Name of the environment variable or secret that holds the Google Cloud Pub/Sub endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
project [required]
string
The Google Cloud project ID that owns the Pub/Sub topic.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
topic [required]
string
The Pub/Sub topic name to publish logs to.
type [required]
enum
The destination type. The value should always be google_pubsub.
Allowed enum values: google_pubsub
default: google_pubsub
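For google_pubsub, the required project, topic, and encoding fields yield a sketch like the following; project and topic names are hypothetical:

```python
# Hypothetical google_pubsub destination.
pubsub_dest = {
    "id": "pubsub-out",
    "type": "google_pubsub",
    "inputs": ["enrich-tags"],
    "project": "acme-observability",      # GCP project that owns the topic
    "topic": "pipeline-logs",             # Pub/Sub topic to publish to
    "encoding": "json",                   # required: json or raw_message
    "auth": {"credentials_file": "/var/secrets/gcp-sa.json"},
}
```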
Option 13
object
The kafka destination sends logs to Apache Kafka topics.
Supported pipeline types: logs
bootstrap_servers_key
string
Name of the environment variable or secret that holds the Kafka bootstrap servers list.
compression
enum
Compression codec for Kafka messages.
Allowed enum values: none,gzip,snappy,lz4,zstd
encoding [required]
enum
Encoding format for log events.
Allowed enum values: json,raw_message
headers_key
string
The field name to use for Kafka message headers.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
key_field
string
The field name to use as the Kafka message key.
librdkafka_options
[object]
Optional list of advanced Kafka producer configuration options, defined as key-value pairs.
name [required]
string
The name of the librdkafka configuration option to set.
value [required]
string
The value assigned to the specified librdkafka configuration option.
message_timeout_ms
int64
Maximum time in milliseconds to wait for message delivery confirmation.
rate_limit_duration_secs
int64
Duration in seconds for the rate limit window.
rate_limit_num
int64
Maximum number of messages allowed per rate limit duration.
sasl
object
Specifies the SASL mechanism for authenticating with a Kafka cluster.
mechanism
enum
SASL mechanism used for Kafka authentication.
Allowed enum values: PLAIN,SCRAM-SHA-256,SCRAM-SHA-512
password_key
string
Name of the environment variable or secret that holds the SASL password.
username_key
string
Name of the environment variable or secret that holds the SASL username.
socket_timeout_ms
int64
Socket timeout in milliseconds for network requests.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
topic [required]
string
The Kafka topic name to publish logs to.
type [required]
enum
The destination type. The value should always be kafka.
Allowed enum values: kafka
default: kafka
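A fuller kafka destination sketch, including SASL and one advanced librdkafka option, might read as follows. All broker/secret names are invented; `request.required.acks` is one real librdkafka producer property, shown here only as an example of the name/value pair shape:

```python
# Hypothetical kafka destination with SASL authentication.
kafka_dest = {
    "id": "kafka-out",
    "type": "kafka",
    "inputs": ["parse-json"],
    "topic": "pipeline-logs",                            # required
    "encoding": "json",                                  # required
    "bootstrap_servers_key": "KAFKA_BOOTSTRAP_SERVERS",  # secret name
    "compression": "zstd",                               # message compression codec
    "sasl": {
        "mechanism": "SCRAM-SHA-512",
        "username_key": "KAFKA_SASL_USERNAME",
        "password_key": "KAFKA_SASL_PASSWORD",
    },
    "librdkafka_options": [
        {"name": "request.required.acks", "value": "-1"},  # wait for all replicas
    ],
}
```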
Option 14
object
The microsoft_sentinel destination forwards logs to Microsoft Sentinel.
Supported pipeline types: logs
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
client_id [required]
string
Azure AD client ID used for authentication.
client_secret_key
string
Name of the environment variable or secret that holds the Azure AD client secret.
dce_uri_key
string
Name of the environment variable or secret that holds the Data Collection Endpoint (DCE) URI.
dcr_immutable_id [required]
string
The immutable ID of the Data Collection Rule (DCR).
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
table [required]
string
The name of the Log Analytics table where logs are sent.
tenant_id [required]
string
Azure AD tenant ID.
type [required]
enum
The destination type. The value should always be microsoft_sentinel.
Allowed enum values: microsoft_sentinel
default: microsoft_sentinel
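The four required Sentinel fields plus the two secret references combine into a sketch like this; every GUID, table name, and variable name is a placeholder:

```python
# Hypothetical microsoft_sentinel destination.
sentinel_dest = {
    "id": "sentinel-out",
    "type": "microsoft_sentinel",
    "inputs": ["filter-security"],
    "client_id": "00000000-1111-2222-3333-444444444444",  # placeholder app ID
    "tenant_id": "55555555-6666-7777-8888-999999999999",  # placeholder tenant
    "dcr_immutable_id": "dcr-0123456789abcdef",           # placeholder DCR ID
    "table": "Custom-PipelineLogs_CL",                    # Log Analytics table
    "client_secret_key": "AZURE_CLIENT_SECRET",           # secret name
    "dce_uri_key": "SENTINEL_DCE_URI",                    # secret name
}
```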
Option 15
object
The new_relic destination sends logs to the New Relic platform.
Supported pipeline types: logs
account_id_key
string
Name of the environment variable or secret that holds the New Relic account ID.
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
license_key_key
string
Name of the environment variable or secret that holds the New Relic license key.
region [required]
enum
The New Relic region.
Allowed enum values: us,eu
type [required]
enum
The destination type. The value should always be new_relic.
Allowed enum values: new_relic
default: new_relic
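The new_relic destination needs little beyond the region and two secret references; a hypothetical sketch:

```python
# Hypothetical new_relic destination; *_key fields name env vars or secrets.
new_relic_dest = {
    "id": "newrelic-out",
    "type": "new_relic",
    "inputs": ["enrich-tags"],
    "region": "us",                              # required: us or eu
    "account_id_key": "NEW_RELIC_ACCOUNT_ID",
    "license_key_key": "NEW_RELIC_LICENSE_KEY",
}
```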
Option 16
object
The opensearch destination writes logs to an OpenSearch cluster.
Supported pipeline types: logs
auth
object
Authentication settings for the OpenSearch destination.
When strategy is basic, use username_key and password_key to reference credentials stored in environment variables or secrets.
password_key
string
Name of the environment variable or secret that holds the OpenSearch password (used when strategy is basic).
strategy [required]
enum
The authentication strategy to use.
Allowed enum values: basic,aws
username_key
string
Name of the environment variable or secret that holds the OpenSearch username (used when strategy is basic).
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
bulk_index
string
The index to write logs to.
data_stream
object
Configuration options for writing to OpenSearch Data Streams instead of a fixed index.
dataset
string
The data stream dataset for your logs. This groups logs by their source or application.
dtype
string
The data stream type for your logs. This determines how logs are categorized within the data stream.
namespace
string
The data stream namespace for your logs. This separates logs into different environments or domains.
endpoint_url_key
string
Name of the environment variable or secret that holds the OpenSearch endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The destination type. The value should always be opensearch.
Allowed enum values: opensearch
default: opensearch
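Putting the fields above together, an opensearch destination configuration might look like the following Python sketch. The field names come from the schema above; all values, component IDs, and secret names are illustrative placeholders, not an official payload:

```python
# Hypothetical opensearch destination assembled from the schema fields above.
# Secret names such as "OPENSEARCH_URL" are placeholders for env vars/secrets.
opensearch_destination = {
    "id": "opensearch-dest-1",
    "type": "opensearch",                  # always "opensearch" for this destination
    "inputs": ["parse-json-1"],            # upstream component IDs
    "endpoint_url_key": "OPENSEARCH_URL",  # env var/secret holding the endpoint URL
    "bulk_index": "app-logs",
    "auth": {
        "strategy": "basic",               # "basic" or "aws"
        "username_key": "OPENSEARCH_USER",
        "password_key": "OPENSEARCH_PASSWORD",
    },
}

# Sanity checks mirroring the [required] markers in the schema.
for field in ("id", "type", "inputs"):
    assert field in opensearch_destination
assert opensearch_destination["auth"]["strategy"] in ("basic", "aws")
```

With the aws strategy, the auth object would instead carry aws_region and optionally assume_role and external_id, and the basic credential keys would be omitted.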
Option 17
object
The rsyslog destination forwards logs to an external rsyslog server over TCP or UDP using the syslog protocol.
Supported pipeline types: logs
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
endpoint_url_key
string
Name of the environment variable or secret that holds the syslog server endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
keepalive
int64
Optional socket keepalive duration in milliseconds.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The destination type. The value should always be rsyslog.
Allowed enum values: rsyslog
default: rsyslog
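As a sketch of how the rsyslog fields compose, the snippet below pairs the destination with a disk buffer and mutual TLS. File paths, sizes, and the secret name are placeholders:

```python
# Hypothetical rsyslog destination with a disk buffer and mutual TLS.
rsyslog_destination = {
    "id": "rsyslog-dest-1",
    "type": "rsyslog",
    "inputs": ["filter-1"],
    "endpoint_url_key": "RSYSLOG_ENDPOINT",  # env var/secret with the server URL
    "keepalive": 60000,                      # socket keepalive in milliseconds
    "buffer": {
        "type": "disk",
        "max_size": 268435456,               # maximum buffer size
        "when_full": "block",                # or "drop_newest"
    },
    "tls": {
        "crt_file": "/etc/pipeline/client.crt",  # required whenever tls is set
        "key_file": "/etc/pipeline/client.key",
        "ca_file": "/etc/pipeline/ca.crt",
    },
}
assert rsyslog_destination["buffer"]["when_full"] in ("block", "drop_newest")
```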
Option 18
object
The sentinel_one destination sends logs to SentinelOne.
Supported pipeline types: logs
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
region [required]
enum
The SentinelOne region to send logs to.
Allowed enum values: us,eu,ca,data_set_us
token_key
string
Name of the environment variable or secret that holds the SentinelOne API token.
type [required]
enum
The destination type. The value should always be sentinel_one.
Allowed enum values: sentinel_one
default: sentinel_one
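A minimal sentinel_one destination built from the fields above might look like this; the token secret name is a placeholder:

```python
# Hypothetical sentinel_one destination; "S1_API_TOKEN" is a placeholder
# for the env var/secret that holds the real API token.
sentinel_one_destination = {
    "id": "s1-dest-1",
    "type": "sentinel_one",
    "inputs": ["filter-1"],
    "region": "us",               # one of: us, eu, ca, data_set_us
    "token_key": "S1_API_TOKEN",
}
assert sentinel_one_destination["region"] in ("us", "eu", "ca", "data_set_us")
```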
Option 19
object
The socket destination sends logs over TCP or UDP to a remote server.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the socket address (host:port).
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
encoding [required]
enum
Encoding format for log events.
Allowed enum values: json,raw_message
framing [required]
<oneOf>
Framing method configuration.
Option 1
object
Each log event is delimited by a newline character.
method [required]
enum
The framing method. The value should always be newline_delimited.
Allowed enum values: newline_delimited
Option 2
object
Event data is not delimited at all.
method [required]
enum
The framing method. The value should always be bytes.
Allowed enum values: bytes
Option 3
object
Each log event is separated using the specified delimiter character.
delimiter [required]
string
A single ASCII character used as a delimiter.
method [required]
enum
The framing method. The value should always be character_delimited.
Allowed enum values: character_delimited
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
mode [required]
enum
Protocol used to send logs.
Allowed enum values: tcp,udp
tls
object
TLS configuration. Relevant only when mode is tcp.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The destination type. The value should always be socket.
Allowed enum values: socket
default: socket
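The socket destination's framing <oneOf> is easiest to see side by side. The sketch below sends newline-delimited JSON over TCP, with a second framing object showing the character_delimited variant; the address secret name is a placeholder:

```python
# Hypothetical socket destination: newline-delimited JSON over TCP.
socket_destination = {
    "id": "socket-dest-1",
    "type": "socket",
    "inputs": ["parse-grok-1"],
    "address_key": "SOCKET_ADDRESS",   # env var/secret holding host:port
    "mode": "tcp",                     # "tcp" or "udp"; tls applies only to tcp
    "encoding": "json",                # "json" or "raw_message"
    "framing": {"method": "newline_delimited"},
}

# character_delimited framing adds a single-ASCII-character "delimiter" field:
alt_framing = {"method": "character_delimited", "delimiter": "|"}
assert len(alt_framing["delimiter"]) == 1
```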
Option 20
object
The splunk_hec destination forwards logs to Splunk using the HTTP Event Collector (HEC).
Supported pipeline types: logs
auto_extract_timestamp
boolean
If true, Splunk tries to extract timestamps from incoming log events.
If false, Splunk assigns the time the event was received.
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
encoding
enum
Encoding format for log events.
Allowed enum values: json,raw_message
endpoint_url_key
string
Name of the environment variable or secret that holds the Splunk HEC endpoint URL.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
index
string
Optional name of the Splunk index where logs are written.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
sourcetype
string
The Splunk sourcetype to assign to log events.
token_key
string
Name of the environment variable or secret that holds the Splunk HEC token.
type [required]
enum
The destination type. The value should always be splunk_hec.
Allowed enum values: splunk_hec
default: splunk_hec
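A splunk_hec destination using the optional index/sourcetype overrides might be sketched as follows; secret names and the index are placeholders:

```python
# Hypothetical splunk_hec destination; all values are illustrative.
splunk_hec_destination = {
    "id": "splunk-dest-1",
    "type": "splunk_hec",
    "inputs": ["add-fields-1"],
    "endpoint_url_key": "SPLUNK_HEC_URL",   # env var/secret with the HEC endpoint
    "token_key": "SPLUNK_HEC_TOKEN",        # env var/secret with the HEC token
    "index": "main",                        # optional target Splunk index
    "sourcetype": "_json",                  # optional sourcetype to assign
    "auto_extract_timestamp": True,         # let Splunk parse event timestamps
    "encoding": "json",
}
assert splunk_hec_destination["encoding"] in ("json", "raw_message")
```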
Option 21
object
The sumo_logic destination forwards logs to Sumo Logic.
Supported pipeline types: logs
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
encoding
enum
The output encoding format.
Allowed enum values: json,raw_message,logfmt
endpoint_url_key
string
Name of the environment variable or secret that holds the Sumo Logic HTTP endpoint URL.
header_custom_fields
[object]
A list of custom headers to include in the request to Sumo Logic.
name [required]
string
The header field name.
value [required]
string
The header field value.
header_host_name
string
Optional override for the host name header.
header_source_category
string
Optional override for the source category header.
header_source_name
string
Optional override for the source name header.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The destination type. The value should always be sumo_logic.
Allowed enum values: sumo_logic
default: sumo_logic
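The sumo_logic header overrides and custom header list compose as in this sketch; header names and values are placeholders:

```python
# Hypothetical sumo_logic destination with header overrides.
sumo_logic_destination = {
    "id": "sumo-dest-1",
    "type": "sumo_logic",
    "inputs": ["filter-1"],
    "endpoint_url_key": "SUMO_HTTP_URL",    # env var/secret with the HTTP endpoint
    "encoding": "logfmt",                   # json, raw_message, or logfmt
    "header_source_category": "prod/web",   # optional source category override
    "header_custom_fields": [
        {"name": "X-Team", "value": "platform"},  # each entry needs name + value
    ],
}
for header in sumo_logic_destination["header_custom_fields"]:
    assert "name" in header and "value" in header
```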
Option 22
object
The syslog_ng destination forwards logs to an external syslog-ng server over TCP or UDP using the syslog protocol.
Supported pipeline types: logs
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
endpoint_url_key
string
Name of the environment variable or secret that holds the syslog-ng server endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
keepalive
int64
Optional socket keepalive duration in milliseconds.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The destination type. The value should always be syslog_ng.
Allowed enum values: syslog_ng
default: syslog_ng
Option 23
object
The datadog_metrics destination forwards metrics to Datadog.
Supported pipeline types: metrics
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The destination type. The value should always be datadog_metrics.
Allowed enum values: datadog_metrics
default: datadog_metrics
pipeline_type
enum
The type of data being ingested. Defaults to logs if not specified.
Allowed enum values: logs,metrics
default: logs
processor_groups
[object]
A list of processor groups that transform or enrich log data.
display_name
string
The display name for a component.
enabled [required]
boolean
Whether this processor group is enabled.
id [required]
string
The unique identifier for the processor group.
include [required]
string
Conditional expression for when this processor group should execute.
inputs [required]
[string]
A list of IDs for components whose output is used as the input for this processor group.
processors [required]
[ <oneOf>]
Processors applied sequentially within this group. Events flow through each processor in order.
Option 1
object
The filter processor allows conditional processing of logs/metrics based on a Datadog search query. Logs/metrics that match the include query are passed through; others are discarded.
Supported pipeline types: logs, metrics
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs/metrics should pass through the filter. Logs/metrics that match this query continue to downstream components; others are dropped.
type [required]
enum
The processor type. The value should always be filter.
Allowed enum values: filter
default: filter
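A filter processor reduces to very few fields; in this sketch the search query is illustrative:

```python
# Hypothetical filter processor: only matching events continue downstream.
filter_processor = {
    "id": "filter-1",
    "type": "filter",
    "enabled": True,
    "include": "service:web-store AND status:error",  # Datadog search query
}
assert filter_processor["type"] == "filter"
```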
Option 2
object
The add_env_vars processor adds environment variable values to log events.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used to reference this processor in the pipeline.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be add_env_vars.
Allowed enum values: add_env_vars
default: add_env_vars
variables [required]
[object]
A list of environment variable mappings to apply to log fields.
field [required]
string
The target field in the log event.
name [required]
string
The name of the environment variable to read.
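An add_env_vars processor maps environment variables onto log fields; the variable name and target field below are placeholders:

```python
# Hypothetical add_env_vars processor: copy DEPLOY_REGION into field "region".
add_env_vars_processor = {
    "id": "add-env-1",
    "type": "add_env_vars",
    "enabled": True,
    "include": "*",
    "variables": [
        {"field": "region", "name": "DEPLOY_REGION"},  # field <- env var value
    ],
}
assert all("field" in v and "name" in v for v in add_env_vars_processor["variables"])
```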
Option 3
object
The add_fields processor adds static key-value fields to logs.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[object]
A list of static key-value fields added to each log event processed by this component.
name [required]
string
The field name.
value [required]
string
The field value.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be add_fields.
Allowed enum values: add_fields
default: add_fields
Option 4
object
The add_hostname processor adds the hostname to log events.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be add_hostname.
Allowed enum values: add_hostname
default: add_hostname
Option 5
object
The custom_processor processor transforms events using Vector Remap Language (VRL) scripts with advanced filtering capabilities.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets. This field should always be set to * for the custom_processor processor.
default: *
remaps [required]
[object]
Array of VRL remap rules.
drop_on_error [required]
boolean
Whether to drop events that caused errors during processing.
enabled
boolean
Whether this remap rule is enabled.
include [required]
string
A Datadog search query used to filter events for this specific remap rule.
name [required]
string
A descriptive name for this remap rule.
source [required]
string
The VRL script source code that defines the processing logic.
type [required]
enum
The processor type. The value should always be custom_processor.
Allowed enum values: custom_processor
default: custom_processor
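The custom_processor nests per-rule filtering inside the remaps array while the processor-level include stays "*". The VRL script below is illustrative:

```python
# Hypothetical custom_processor with one VRL remap rule.
custom_processor = {
    "id": "custom-1",
    "type": "custom_processor",
    "enabled": True,
    "include": "*",                # must be "*" for this processor type
    "remaps": [
        {
            "name": "normalize-level",
            "enabled": True,
            "include": "service:web-store",  # per-rule event filter
            "drop_on_error": False,          # keep events even if the script errors
            "source": '.level = downcase(string!(.level))',  # VRL script
        }
    ],
}
assert custom_processor["include"] == "*"
```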
Option 6
object
The datadog_tags processor includes or excludes specific Datadog tags in your logs.
Supported pipeline types: logs
action [required]
enum
The action to take on tags with matching keys.
Allowed enum values: include,exclude
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
keys [required]
[string]
A list of tag keys.
mode [required]
enum
The processing mode.
Allowed enum values: filter
type [required]
enum
The processor type. The value should always be datadog_tags.
Allowed enum values: datadog_tags
default: datadog_tags
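A datadog_tags processor keeping only a few tag keys might look like this sketch; the key list is illustrative:

```python
# Hypothetical datadog_tags processor: keep only the listed tag keys.
datadog_tags_processor = {
    "id": "tags-1",
    "type": "datadog_tags",
    "enabled": True,
    "include": "*",
    "mode": "filter",            # the only allowed processing mode
    "action": "include",         # keep matching keys; "exclude" would drop them
    "keys": ["env", "service", "version"],
}
assert datadog_tags_processor["action"] in ("include", "exclude")
```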
Option 7
object
The dedupe processor removes duplicate fields in log events.
Supported pipeline types: logs
cache
object
Configuration for the cache used to detect duplicates.
num_events [required]
int64
The number of events to cache for duplicate detection.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[string]
A list of log field paths to check for duplicates.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
mode [required]
enum
The deduplication mode to apply to the fields.
Allowed enum values: match,ignore
type [required]
enum
The processor type. The value should always be dedupe.
Allowed enum values: dedupe
default: dedupe
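A dedupe processor sketch, assuming the usual reading of the two modes (match compares only the listed fields; ignore compares everything except them); the field names are placeholders:

```python
# Hypothetical dedupe processor ignoring volatile fields when comparing events.
dedupe_processor = {
    "id": "dedupe-1",
    "type": "dedupe",
    "enabled": True,
    "include": "*",
    "mode": "ignore",                   # "match" or "ignore" (see lead-in)
    "fields": ["timestamp", "ingest_id"],
    "cache": {"num_events": 5000},      # how many recent events to remember
}
assert dedupe_processor["mode"] in ("match", "ignore")
```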
Option 8
object
The enrichment_table processor enriches logs using a static CSV file, GeoIP database, or reference table. Exactly one of file, geoip, or reference_table must be configured.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
file
object
Defines a static enrichment table loaded from a CSV file.
encoding [required]
object
File encoding format.
delimiter [required]
string
The delimiter used to separate values in the CSV file.
includes_headers [required]
boolean
Whether the CSV file includes a header row.
type [required]
enum
Specifies the encoding format (e.g., CSV) used for enrichment tables.
Allowed enum values: csv
key [required]
[object]
Key fields used to look up enrichment values.
column [required]
string
The name of the column in the enrichment table to match against.
comparison [required]
enum
Defines how to compare key fields for enrichment table lookups.
Allowed enum values: equals
field [required]
string
The path of the log field to compare with the column.
path [required]
string
Path to the CSV file.
schema [required]
[object]
Schema defining column names and their types.
column [required]
string
The name of the column in the enrichment table.
type [required]
enum
Declares allowed data types for enrichment table columns.
Allowed enum values: string,boolean,integer,float,date,timestamp
geoip
object
Uses a GeoIP database to enrich logs based on an IP field.
key_field [required]
string
Path to the IP field in the log.
locale [required]
string
Locale used to resolve geographical names.
path [required]
string
Path to the GeoIP database file.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
reference_table
object
Uses a Datadog reference table to enrich logs.
app_key_key
string
Name of the environment variable or secret that holds the Datadog application key used to access the reference table.
columns
[string]
List of column names to include from the reference table. If not provided, all columns are included.
key_field [required]
string
Path to the field in the log event to match against the reference table.
table_id [required]
string
The unique identifier of the reference table.
target [required]
string
Path where enrichment results should be stored in the log.
type [required]
enum
The processor type. The value should always be enrichment_table.
Allowed enum values: enrichment_table
default: enrichment_table
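The "exactly one of file, geoip, or reference_table" constraint can be sketched with the CSV variant; the file path, columns, and field paths are placeholders:

```python
# Hypothetical enrichment_table processor using a CSV file.
enrichment_processor = {
    "id": "enrich-1",
    "type": "enrichment_table",
    "enabled": True,
    "include": "*",
    "target": "user.details",           # where enrichment results are written
    "file": {
        "path": "/etc/pipeline/users.csv",
        "encoding": {"type": "csv", "delimiter": ",", "includes_headers": True},
        "key": [
            {"column": "user_id", "comparison": "equals", "field": "usr.id"},
        ],
        "schema": [
            {"column": "user_id", "type": "string"},
            {"column": "plan", "type": "string"},
        ],
    },
}

# Exactly one enrichment source must be configured.
sources = [k for k in ("file", "geoip", "reference_table") if k in enrichment_processor]
assert len(sources) == 1
```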
Option 9
object
The generate_datadog_metrics processor creates custom metrics from logs and sends them to Datadog.
Metrics can be counters, gauges, or distributions, and can optionally be grouped by log fields.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used to reference this component in other parts of the pipeline.
include
string
A Datadog search query used to determine which logs this processor targets.
metrics
[object]
Configuration for generating individual metrics.
group_by
[string]
Optional fields used to group the metric series.
include [required]
string
Datadog filter query to match logs for metric generation.
metric_type [required]
enum
Type of metric to create.
Allowed enum values: count,gauge,distribution
name [required]
string
Name of the custom metric to be created.
value [required]
<oneOf>
Specifies how the value of the generated metric is computed.
Option 1
object
Strategy that increments a generated metric by one for each matching event.
strategy [required]
enum
Increments the metric by 1 for each matching event.
Allowed enum values: increment_by_one
Option 2
object
Strategy that increments a generated metric based on the value of a log field.
field [required]
string
Name of the log field containing the numeric value to increment the metric by.
strategy [required]
enum
Uses a numeric field in the log event as the metric increment.
Allowed enum values: increment_by_field
type [required]
enum
The processor type. The value should always be generate_datadog_metrics.
Allowed enum values: generate_datadog_metrics
default: generate_datadog_metrics
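A generate_datadog_metrics processor that counts error logs per service, using the increment_by_one value strategy; the metric name and queries are placeholders:

```python
# Hypothetical generate_datadog_metrics processor: one count metric.
generate_metrics_processor = {
    "id": "gen-metrics-1",
    "type": "generate_datadog_metrics",
    "enabled": True,
    "metrics": [
        {
            "name": "web.errors.count",     # custom metric name (placeholder)
            "metric_type": "count",         # count, gauge, or distribution
            "include": "status:error",      # logs that feed this metric
            "group_by": ["service"],        # optional series grouping
            "value": {"strategy": "increment_by_one"},
        }
    ],
}
assert generate_metrics_processor["metrics"][0]["metric_type"] in (
    "count", "gauge", "distribution")
```

The increment_by_field strategy would instead carry a "field" entry naming the numeric log field to add to the metric.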
Option 10
object
The ocsf_mapper processor transforms logs into the OCSF schema using a predefined mapping configuration.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used to reference this component in other parts of the pipeline.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
mappings [required]
[object]
A list of mapping rules to convert events to the OCSF format.
include [required]
string
A Datadog search query used to select the logs that this mapping should apply to.
mapping [required]
<oneOf>
Defines a single mapping rule for transforming logs into the OCSF schema.
Custom OCSF mapping configuration for transforming logs.
mapping [required]
[object]
A list of field mapping rules for transforming log fields to OCSF schema fields.
default
The default value to use if the source field is missing or empty.
dest [required]
string
The destination OCSF field path.
lookup
object
Lookup table configuration for mapping source values to destination values.
source
The source field path from the log event.
sources
Multiple source field paths for combined mapping.
value
A static value to use for the destination field.
metadata [required]
object
Metadata for the custom OCSF mapping.
class [required]
string
The OCSF event class name.
profiles
[string]
A list of OCSF profiles to apply.
version [required]
string
The OCSF schema version.
version [required]
int64
The version of the custom mapping configuration.
type [required]
enum
The processor type. The value should always be ocsf_mapper.
Allowed enum values: ocsf_mapper
default: ocsf_mapper
Option 11
object
The parse_grok processor extracts structured fields from unstructured log messages using Grok patterns.
Supported pipeline types: logs
disable_library_rules
boolean
If set to true, disables the default Grok rules provided by Datadog.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
A unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
rules [required]
[object]
The list of Grok parsing rules. If multiple matching rules are provided, they are evaluated in order. The first successful match is applied.
match_rules [required]
[object]
A list of Grok parsing rules that define how to extract fields from the source field.
Each rule must contain a name and a valid Grok pattern.
name [required]
string
The name of the rule.
rule [required]
string
The definition of the Grok rule.
source [required]
string
The name of the field in the log event to apply the Grok rules to.
support_rules
[object]
A list of Grok helper rules that can be referenced by the parsing rules.
name [required]
string
The name of the Grok helper rule.
rule [required]
string
The definition of the Grok helper rule.
type [required]
enum
The processor type. The value should always be parse_grok.
Allowed enum values: parse_grok
default: parse_grok
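The rules structure (match rules plus optional helper rules, applied to a source field) composes as in this sketch; the Grok patterns themselves are illustrative:

```python
# Hypothetical parse_grok processor with one match rule and one helper rule.
parse_grok_processor = {
    "id": "grok-1",
    "type": "parse_grok",
    "enabled": True,
    "include": "source:nginx",
    "disable_library_rules": False,   # keep Datadog's default Grok rules
    "rules": [
        {
            "source": "message",      # field the Grok rules are applied to
            "match_rules": [
                {"name": "access",
                 "rule": "%{ipOrHost:client} %{word:method} %{notSpace:path}"},
            ],
            "support_rules": [
                # Helper referenced by the match rule above.
                {"name": "ipOrHost", "rule": "%{ipv4}|%{hostname}"},
            ],
        }
    ],
}
assert parse_grok_processor["rules"][0]["source"] == "message"
```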
Option 12
object
The parse_json processor extracts JSON from a specified field and flattens it into the event. This is useful when logs contain embedded JSON as a string.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
field [required]
string
The name of the log field that contains a JSON string.
id [required]
string
A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be parse_json.
Allowed enum values: parse_json
default: parse_json
Option 13
object
The parse_xml processor parses XML from a specified field and extracts it into the event.
Supported pipeline types: logs
always_use_text_key
boolean
Whether to always use a text key for element content.
attr_prefix
string
The prefix to use for XML attributes in the parsed output.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
field [required]
string
The name of the log field that contains an XML string.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
include_attr
boolean
Whether to include XML attributes in the parsed output.
parse_bool
boolean
Whether to parse boolean values from strings.
parse_null
boolean
Whether to parse null values.
parse_number
boolean
Whether to parse numeric values from strings.
text_key
string
The key name to use for text content within XML elements. Must be at least 1 character if specified.
type [required]
enum
The processor type. The value should always be parse_xml.
Allowed enum values: parse_xml
default: parse_xml
Option 14
object
The quota processor measures logging traffic for logs that match a specified filter. When the configured daily quota is met, the processor can drop or alert.
Supported pipeline types: logs
display_name
string
The display name for a component.
drop_events
boolean
If set to true, logs that match the quota filter and are sent after the quota is exceeded are dropped. Logs that do not match the filter continue through the pipeline. Note: You can set either drop_events or overflow_action, but not both.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
ignore_when_missing_partitions
boolean
If true, the processor skips quota checks when partition fields are missing from the logs.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
limit [required]
object
The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.
enforce [required]
enum
The unit for quota enforcement: bytes for data size, or events for event count.
Allowed enum values: bytes,events
limit [required]
int64
The limit for quota enforcement.
name [required]
string
Name of the quota.
overflow_action
enum
The action to take when the quota or bucket limit is exceeded. Options:
drop: Drop the event.
no_action: Let the event pass through.
overflow_routing: Route to an overflow destination.
Allowed enum values: drop,no_action,overflow_routing
overrides
[object]
A list of alternate quota rules that apply to specific sets of events, identified by matching field values. Each override can define a custom limit.
fields [required]
[object]
A list of field matchers used to apply a specific override. If an event matches all listed key-value pairs, the corresponding override limit is enforced.
name [required]
string
The field name.
value [required]
string
The field value.
limit [required]
object
The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.
enforce [required]
enum
The unit for quota enforcement: bytes for data size, or events for event count.
Allowed enum values: bytes,events
limit [required]
int64
The limit for quota enforcement.
partition_fields
[string]
A list of fields used to segment log traffic for quota enforcement. Quotas are tracked independently by unique combinations of these field values.
too_many_buckets_action
enum
The action to take when the quota or bucket limit is exceeded. Options:
drop: Drop the event.
no_action: Let the event pass through.
overflow_routing: Route to an overflow destination.
Allowed enum values: drop,no_action,overflow_routing
type [required]
enum
The processor type. The value should always be quota.
Allowed enum values: quota
default: quota
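A sketch of a quota processor configuration with a byte-based limit, an overflow action, and a per-service override. Field values other than the documented enums are hypothetical; note that drop_events and overflow_action are mutually exclusive, so only overflow_action is set here.

```python
# Illustrative quota processor configuration.
quota_processor = {
    "id": "quota-1",
    "type": "quota",                # must always be "quota"
    "enabled": True,
    "include": "service:web",       # Datadog search query (example)
    "name": "daily-web-quota",
    "limit": {"enforce": "bytes", "limit": 10_000_000_000},
    "overflow_action": "drop",      # drop, no_action, or overflow_routing
    "partition_fields": ["service"],
    "overrides": [
        {
            "fields": [{"name": "service", "value": "checkout"}],
            "limit": {"enforce": "events", "limit": 500_000},
        }
    ],
}

# drop_events and overflow_action cannot both be set.
assert "drop_events" not in quota_processor
```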
Option 15
object
The reduce processor aggregates and merges logs based on matching keys and merge strategies.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
group_by [required]
[string]
A list of fields used to group log events for merging.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
merge_strategies [required]
[object]
List of merge strategies defining how values from grouped events should be combined.
path [required]
string
The field path in the log event.
strategy [required]
enum
The merge strategy to apply.
Allowed enum values: discard,retain,sum,max,min,array,concat,concat_newline,concat_raw,shortest_array,longest_array,flat_unique
type [required]
enum
The processor type. The value should always be reduce.
Allowed enum values: reduce
default: reduce
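A sketch of a reduce processor configuration: events are grouped by two fields and selected paths are combined with per-field merge strategies. The group keys and field paths are illustrative.

```python
# Illustrative reduce processor configuration.
reduce_processor = {
    "id": "reduce-1",
    "type": "reduce",              # must always be "reduce"
    "enabled": True,
    "include": "*",
    "group_by": ["host", "service"],
    "merge_strategies": [
        {"path": "message", "strategy": "concat_newline"},  # join messages
        {"path": "bytes_sent", "strategy": "sum"},          # add values
        {"path": "status", "strategy": "retain"},           # keep one value
    ],
}

# Every strategy must come from the allowed enum.
ALLOWED_STRATEGIES = {
    "discard", "retain", "sum", "max", "min", "array", "concat",
    "concat_newline", "concat_raw", "shortest_array", "longest_array",
    "flat_unique",
}
assert all(m["strategy"] in ALLOWED_STRATEGIES
           for m in reduce_processor["merge_strategies"])
```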
Option 16
object
The remove_fields processor deletes specified fields from logs.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[string]
A list of field names to be removed from each log event.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be remove_fields.
Allowed enum values: remove_fields
default: remove_fields
Option 17
object
The rename_fields processor changes field names.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[object]
A list of rename rules specifying which fields to rename in the event, what to rename them to, and whether to preserve the original fields.
destination [required]
string
The field name to assign the renamed value to.
preserve_source [required]
boolean
Indicates whether the original field received from the source should be kept (true) or removed (false) after renaming.
source [required]
string
The original field name in the log event that should be renamed.
id [required]
string
A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be rename_fields.
Allowed enum values: rename_fields
default: rename_fields
Option 18
object
The sample processor allows probabilistic sampling of logs at a fixed rate.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
group_by
[string]
Optional list of fields to group events by. Each group is sampled independently.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
percentage [required]
double
The percentage of logs to sample.
type [required]
enum
The processor type. The value should always be sample.
Allowed enum values: sample
default: sample
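A sketch of a sample processor configuration that keeps roughly 10% of matching logs, sampled independently per service. The query and group field are illustrative.

```python
# Illustrative sample processor configuration.
sample_processor = {
    "id": "sample-1",
    "type": "sample",          # must always be "sample"
    "enabled": True,
    "include": "status:info",  # Datadog search query (example)
    "percentage": 10.0,        # percentage of logs to sample
    "group_by": ["service"],   # optional: each group sampled independently
}
```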
Option 19
object
The sensitive_data_scanner processor detects and optionally redacts sensitive data in log events.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
rules [required]
[object]
A list of rules for identifying and acting on sensitive data patterns.
keyword_options
object
Configuration for keywords used to reinforce sensitive data pattern detection.
keywords [required]
[string]
A list of keywords to match near the sensitive pattern.
proximity [required]
int64
Maximum number of tokens between a keyword and a sensitive value match.
name [required]
string
A name identifying the rule.
on_match [required]
<oneOf>
Defines what action to take when sensitive data is matched.
Option 1
object
Configuration for completely redacting matched sensitive data.
action [required]
enum
Action type that completely replaces the matched sensitive data with a fixed replacement string to remove all visibility.
Allowed enum values: redact
options [required]
object
Configuration for fully redacting sensitive data.
replace [required]
string
The string used to replace matched sensitive data.
Option 2
object
Configuration for hashing matched sensitive values.
action [required]
enum
Action type that replaces the matched sensitive data with a hashed representation, preserving structure while securing content.
Allowed enum values: hash
options
object
Options for the hash action.
Option 3
object
Configuration for partially redacting matched sensitive data.
action [required]
enum
Action type that redacts part of the sensitive data while preserving a configurable number of characters, typically used for masking purposes (e.g., show last 4 digits of a credit card).
Allowed enum values: partial_redact
options [required]
object
Controls how partial redaction is applied, including character count and direction.
characters [required]
int64
The number of characters to redact.
direction [required]
enum
Indicates whether to redact characters from the first or last part of the matched value.
Allowed enum values: first,last
pattern [required]
<oneOf>
Pattern detection configuration for identifying sensitive data using either a custom regex or a library reference.
Option 1
object
Defines a custom regex-based pattern for identifying sensitive data in logs.
options [required]
object
Options for defining a custom regex pattern.
description
string
A human-readable description providing context about the sensitive data scanner rule.
rule [required]
string
A regular expression used to detect sensitive values. Must be a valid regex.
type [required]
enum
Indicates a custom regular expression is used for matching.
Allowed enum values: custom
Option 2
object
Specifies a pattern from Datadog’s sensitive data detection library to match known sensitive data types.
options [required]
object
Options for selecting a predefined library pattern and enabling keyword support.
description
string
A human-readable description providing context about the sensitive data scanner rule.
id [required]
string
Identifier for a predefined pattern from the sensitive data scanner pattern library.
use_recommended_keywords
boolean
Whether to augment the pattern with recommended keywords (optional).
type [required]
enum
Indicates that a predefined library pattern is used.
Allowed enum values: library
scope [required]
<oneOf>
Determines which parts of the log the pattern-matching rule should be applied to.
Option 1
object
Includes only specific fields for sensitive data scanning.
options [required]
object
Fields to which the scope rule applies.
fields [required]
[string]
The list of field paths to which the scope rule applies.
target [required]
enum
Applies the rule only to included fields.
Allowed enum values: include
Option 2
object
Excludes specific fields from sensitive data scanning.
options [required]
object
Fields to which the scope rule applies.
fields [required]
[string]
The list of field paths to which the scope rule applies.
target [required]
enum
Excludes specific fields from processing.
Allowed enum values: exclude
Option 3
object
Applies scanning across all available fields.
target [required]
enum
Applies the rule to all fields.
Allowed enum values: all
tags [required]
[string]
Tags assigned to this rule for filtering and classification.
type [required]
enum
The processor type. The value should always be sensitive_data_scanner.
Allowed enum values: sensitive_data_scanner
default: sensitive_data_scanner
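A sketch of a sensitive_data_scanner rule using a custom regex pattern, keyword reinforcement, an all-fields scope, and a partial_redact action. The regex, keywords, and tags are hypothetical examples, not library patterns.

```python
# Illustrative sensitive_data_scanner processor configuration.
sds_processor = {
    "id": "sds-1",
    "type": "sensitive_data_scanner",  # must always be this value
    "enabled": True,
    "include": "*",
    "rules": [
        {
            "name": "mask-card-numbers",
            "tags": ["sensitive:card"],
            "keyword_options": {
                "keywords": ["card", "cc"],  # match near the pattern
                "proximity": 10,             # max tokens between keyword and match
            },
            "pattern": {
                "type": "custom",                      # custom regex pattern
                "options": {"rule": r"\b\d{13,16}\b"}  # hypothetical regex
            },
            "scope": {"target": "all"},  # scan all fields
            "on_match": {
                "action": "partial_redact",
                # redact 4 characters from the end of each match
                "options": {"characters": 4, "direction": "last"},
            },
        }
    ],
}
```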
Option 20
object
The split_array processor splits array fields into separate events based on configured rules.
Supported pipeline types: logs
arrays [required]
[object]
A list of array split configurations.
field [required]
string
The path to the array field to split.
include [required]
string
A Datadog search query used to determine which logs this array split operation targets.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets. For split_array, this should typically be *.
type [required]
enum
The processor type. The value should always be split_array.
Allowed enum values: split_array
default: split_array
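A sketch of a split_array processor configuration that emits one event per element of an array field. The field path is illustrative; the top-level include is "*", as the docs recommend for this processor.

```python
# Illustrative split_array processor configuration.
split_array_processor = {
    "id": "split-1",
    "type": "split_array",  # must always be "split_array"
    "enabled": True,
    "include": "*",         # typically "*" for split_array
    "arrays": [
        # One event is produced per element of the "events" array (example path).
        {"field": "events", "include": "*"},
    ],
}
```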
Option 21
object
The throttle processor limits the number of events that pass through over a given time window.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
group_by
[string]
Optional list of fields used to group events. The threshold is applied to each group independently.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
threshold [required]
int64
The number of events allowed in a given time window. Events sent after the threshold has been reached are dropped.
type [required]
enum
The processor type. The value should always be throttle.
Allowed enum values: throttle
default: throttle
window [required]
double
The time window in seconds over which the threshold applies.
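A sketch of a throttle processor configuration: at most 1000 events per 60-second window, tracked independently per service. The threshold, window, and group field are illustrative.

```python
# Illustrative throttle processor configuration.
throttle_processor = {
    "id": "throttle-1",
    "type": "throttle",       # must always be "throttle"
    "enabled": True,
    "include": "*",
    "threshold": 1000,        # events allowed per window; excess is dropped
    "window": 60.0,           # window length in seconds
    "group_by": ["service"],  # optional: throttle each group independently
}
```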
Option 22
object
The metric_tags processor filters metrics based on their tags using Datadog tag key patterns.
Supported pipeline types: metrics
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query that determines which metrics the processor targets.
rules [required]
[object]
A list of rules for filtering metric tags.
action [required]
enum
The action to take on tags with matching keys.
Allowed enum values: include,exclude
include [required]
string
A Datadog search query used to determine which metrics this rule targets.
keys [required]
[string]
A list of tag keys to include or exclude.
mode [required]
enum
The processing mode for tag filtering.
Allowed enum values: filter
type [required]
enum
The processor type. The value should always be metric_tags.
Allowed enum values: metric_tags
default: metric_tags
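A sketch of a metric_tags processor configuration that keeps only two tag keys on matching metrics. The queries and keys are illustrative; mode has a single allowed value, filter.

```python
# Illustrative metric_tags processor configuration (metrics pipelines).
metric_tags_processor = {
    "id": "metric-tags-1",
    "type": "metric_tags",  # must always be "metric_tags"
    "enabled": True,
    "include": "*",
    "rules": [
        {
            "include": "*",              # metrics this rule targets
            "action": "include",         # keep matching keys ("exclude" drops them)
            "mode": "filter",            # only allowed value
            "keys": ["env", "service"],  # tag keys to act on
        }
    ],
}
```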
processors
[object]
DEPRECATED: A list of processor groups that transform or enrich log data.
Deprecated: This field is deprecated, you should now use the processor_groups field.
display_name
string
The display name for a component.
enabled [required]
boolean
Whether this processor group is enabled.
id [required]
string
The unique identifier for the processor group.
include [required]
string
Conditional expression for when this processor group should execute.
inputs [required]
[string]
A list of IDs for components whose output is used as the input for this processor group.
processors [required]
[ <oneOf>]
Processors applied sequentially within this group. Events flow through each processor in order.
Option 1
object
The filter processor allows conditional processing of logs/metrics based on a Datadog search query. Logs/metrics that match the include query are passed through; others are discarded.
Supported pipeline types: logs, metrics
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs/metrics should pass through the filter. Logs/metrics that match this query continue to downstream components; others are dropped.
type [required]
enum
The processor type. The value should always be filter.
Allowed enum values: filter
default: filter
Option 2
object
The add_env_vars processor adds environment variable values to log events.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used to reference this processor in the pipeline.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be add_env_vars.
Allowed enum values: add_env_vars
default: add_env_vars
variables [required]
[object]
A list of environment variable mappings to apply to log fields.
field [required]
string
The target field in the log event.
name [required]
string
The name of the environment variable to read.
Option 3
object
The add_fields processor adds static key-value fields to logs.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[object]
A list of static fields (key-value pairs) that are added to each log event processed by this component.
name [required]
string
The field name.
value [required]
string
The field value.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be add_fields.
Allowed enum values: add_fields
default: add_fields
Option 4
object
The add_hostname processor adds the hostname to log events.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be add_hostname.
Allowed enum values: add_hostname
default: add_hostname
Option 5
object
The custom_processor processor transforms events using Vector Remap Language (VRL) scripts with advanced filtering capabilities.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets. This field should always be set to * for the custom_processor processor.
default: *
remaps [required]
[object]
Array of VRL remap rules.
drop_on_error [required]
boolean
Whether to drop events that caused errors during processing.
enabled
boolean
Whether this remap rule is enabled.
include [required]
string
A Datadog search query used to filter events for this specific remap rule.
name [required]
string
A descriptive name for this remap rule.
source [required]
string
The VRL script source code that defines the processing logic.
type [required]
enum
The processor type. The value should always be custom_processor.
Allowed enum values: custom_processor
default: custom_processor
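A sketch of a custom_processor configuration with one VRL remap rule. The rule name, filter, and VRL source are minimal hypothetical examples; the top-level include is "*" as the docs require for this processor.

```python
# Illustrative custom_processor configuration.
custom_processor = {
    "id": "custom-1",
    "type": "custom_processor",  # must always be "custom_processor"
    "enabled": True,
    "include": "*",              # must be "*" for custom_processor
    "remaps": [
        {
            "name": "add-team-tag",
            "enabled": True,
            "include": "service:web",        # events this remap applies to
            "drop_on_error": False,          # keep events that fail processing
            "source": '.team = "platform"',  # VRL script (example)
        }
    ],
}
```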
Option 6
object
The datadog_tags processor includes or excludes specific Datadog tags in your logs.
Supported pipeline types: logs
action [required]
enum
The action to take on tags with matching keys.
Allowed enum values: include,exclude
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
keys [required]
[string]
A list of tag keys.
mode [required]
enum
The processing mode.
Allowed enum values: filter
type [required]
enum
The processor type. The value should always be datadog_tags.
Allowed enum values: datadog_tags
default: datadog_tags
Option 7
object
The dedupe processor removes duplicate fields in log events.
Supported pipeline types: logs
cache
object
Configuration for the cache used to detect duplicates.
num_events [required]
int64
The number of events to cache for duplicate detection.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[string]
A list of log field paths to check for duplicates.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
mode [required]
enum
The deduplication mode to apply to the fields.
Allowed enum values: match,ignore
type [required]
enum
The processor type. The value should always be dedupe.
Allowed enum values: dedupe
default: dedupe
Option 8
object
The enrichment_table processor enriches logs using a static CSV file, GeoIP database, or reference table. Exactly one of file, geoip, or reference_table must be configured.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
file
object
Defines a static enrichment table loaded from a CSV file.
encoding [required]
object
File encoding format.
delimiter [required]
string
The delimiter used to separate values in the CSV file.
includes_headers [required]
boolean
Whether the CSV file includes a header row.
type [required]
enum
Specifies the encoding format (e.g., CSV) used for enrichment tables.
Allowed enum values: csv
key [required]
[object]
Key fields used to look up enrichment values.
column [required]
string
The name of the column in the enrichment table used as the lookup key.
comparison [required]
enum
Defines how to compare key fields for enrichment table lookups.
Allowed enum values: equals
field [required]
string
The name of the log event field to match against the key column.
path [required]
string
Path to the CSV file.
schema [required]
[object]
Schema defining column names and their types.
column [required]
string
The name of the column as defined in the schema.
type [required]
enum
Declares allowed data types for enrichment table columns.
Allowed enum values: string,boolean,integer,float,date,timestamp
geoip
object
Uses a GeoIP database to enrich logs based on an IP field.
key_field [required]
string
Path to the IP field in the log.
locale [required]
string
Locale used to resolve geographical names.
path [required]
string
Path to the GeoIP database file.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
reference_table
object
Uses a Datadog reference table to enrich logs.
app_key_key
string
Name of the environment variable or secret that holds the Datadog application key used to access the reference table.
columns
[string]
List of column names to include from the reference table. If not provided, all columns are included.
key_field [required]
string
Path to the field in the log event to match against the reference table.
table_id [required]
string
The unique identifier of the reference table.
target [required]
string
Path where enrichment results should be stored in the log.
type [required]
enum
The processor type. The value should always be enrichment_table.
Allowed enum values: enrichment_table
default: enrichment_table
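A sketch of an enrichment_table processor using the CSV file variant. The file path, columns, and target are hypothetical; exactly one of file, geoip, or reference_table may be configured, which the final assertion checks.

```python
# Illustrative enrichment_table processor configuration (CSV file variant).
enrichment_processor = {
    "id": "enrich-1",
    "type": "enrichment_table",  # must always be "enrichment_table"
    "enabled": True,
    "include": "*",
    "target": "owner_info",      # where enrichment results are stored (example)
    "file": {
        "path": "/etc/tables/owners.csv",  # hypothetical CSV path
        "encoding": {"type": "csv", "delimiter": ",", "includes_headers": True},
        "key": [
            # Match the log's "service" field against the "service_name" column.
            {"column": "service_name", "comparison": "equals", "field": "service"},
        ],
        "schema": [
            {"column": "service_name", "type": "string"},
            {"column": "owner", "type": "string"},
        ],
    },
}

# Exactly one of file, geoip, or reference_table must be set.
variants = ("file", "geoip", "reference_table")
assert sum(k in enrichment_processor for k in variants) == 1
```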
Option 9
object
The generate_datadog_metrics processor creates custom metrics from logs and sends them to Datadog.
Metrics can be counters, gauges, or distributions and optionally grouped by log fields.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used to reference this component in other parts of the pipeline.
include
string
A Datadog search query used to determine which logs this processor targets.
metrics
[object]
Configuration for generating individual metrics.
group_by
[string]
Optional fields used to group the metric series.
include [required]
string
Datadog filter query to match logs for metric generation.
metric_type [required]
enum
Type of metric to create.
Allowed enum values: count,gauge,distribution
name [required]
string
Name of the custom metric to be created.
value [required]
<oneOf>
Specifies how the value of the generated metric is computed.
Option 1
object
Strategy that increments a generated metric by one for each matching event.
strategy [required]
enum
Increments the metric by 1 for each matching event.
Allowed enum values: increment_by_one
Option 2
object
Strategy that increments a generated metric based on the value of a log field.
field [required]
string
Name of the log field containing the numeric value to increment the metric by.
strategy [required]
enum
Uses a numeric field in the log event as the metric increment.
Allowed enum values: increment_by_field
type [required]
enum
The processor type. The value should always be generate_datadog_metrics.
Allowed enum values: generate_datadog_metrics
default: generate_datadog_metrics
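A sketch of a generate_datadog_metrics processor with both value strategies: a count metric incremented once per matching log, and a distribution fed from a numeric log field. Metric and field names are hypothetical.

```python
# Illustrative generate_datadog_metrics processor configuration.
gen_metrics_processor = {
    "id": "gen-metrics-1",
    "type": "generate_datadog_metrics",  # must always be this value
    "enabled": True,
    "include": "*",
    "metrics": [
        {
            "name": "logs.errors.count",     # hypothetical metric name
            "include": "status:error",
            "metric_type": "count",
            "group_by": ["service"],
            "value": {"strategy": "increment_by_one"},  # +1 per matching event
        },
        {
            "name": "logs.bytes.dist",       # hypothetical metric name
            "include": "*",
            "metric_type": "distribution",
            # Use a numeric log field as the increment.
            "value": {"strategy": "increment_by_field", "field": "bytes_sent"},
        },
    ],
}
```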
Option 10
object
The ocsf_mapper processor transforms logs into the OCSF schema using a predefined mapping configuration.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used to reference this component in other parts of the pipeline.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
mappings [required]
[object]
A list of mapping rules to convert events to the OCSF format.
include [required]
string
A Datadog search query used to select the logs that this mapping should apply to.
mapping [required]
<oneOf>
Defines a single mapping rule for transforming logs into the OCSF schema.
Custom OCSF mapping configuration for transforming logs.
mapping [required]
[object]
A list of field mapping rules for transforming log fields to OCSF schema fields.
default
The default value to use if the source field is missing or empty.
dest [required]
string
The destination OCSF field path.
lookup
object
Lookup table configuration for mapping source values to destination values.
source
The source field path from the log event.
sources
Multiple source field paths for combined mapping.
value
A static value to use for the destination field.
metadata [required]
object
Metadata for the custom OCSF mapping.
class [required]
string
The OCSF event class name.
profiles
[string]
A list of OCSF profiles to apply.
version [required]
string
The OCSF schema version.
version [required]
int64
The version of the custom mapping configuration.
type [required]
enum
The processor type. The value should always be ocsf_mapper.
Allowed enum values: ocsf_mapper
default: ocsf_mapper
Option 11
object
The parse_grok processor extracts structured fields from unstructured log messages using Grok patterns.
Supported pipeline types: logs
disable_library_rules
boolean
If set to true, disables the default Grok rules provided by Datadog.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
A unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
rules [required]
[object]
The list of Grok parsing rules. If multiple matching rules are provided, they are evaluated in order. The first successful match is applied.
match_rules [required]
[object]
A list of Grok parsing rules that define how to extract fields from the source field.
Each rule must contain a name and a valid Grok pattern.
name [required]
string
The name of the rule.
rule [required]
string
The definition of the Grok rule.
source [required]
string
The name of the field in the log event to apply the Grok rules to.
support_rules
[object]
A list of Grok helper rules that can be referenced by the parsing rules.
name [required]
string
The name of the Grok helper rule.
rule [required]
string
The definition of the Grok helper rule.
type [required]
enum
The processor type. The value should always be parse_grok.
Allowed enum values: parse_grok
default: parse_grok
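A sketch of a parse_grok processor configuration with one parsing rule and a helper rule it references. The Grok patterns and rule names are hypothetical examples.

```python
# Illustrative parse_grok processor configuration.
parse_grok_processor = {
    "id": "grok-1",
    "type": "parse_grok",   # must always be "parse_grok"
    "enabled": True,
    "include": "source:app",
    "disable_library_rules": False,  # keep Datadog's default Grok rules
    "rules": [
        {
            "source": "message",  # field the Grok rules apply to
            "match_rules": [
                # Hypothetical access-log pattern referencing the helper below.
                {"name": "access",
                 "rule": "%{ip:client} %{word:method} %{my_path:path}"},
            ],
            "support_rules": [
                # Helper rule referenced by the match rule above.
                {"name": "my_path", "rule": "%{notSpace}"},
            ],
        }
    ],
}
```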
Option 12
object
The parse_json processor extracts JSON from a specified field and flattens it into the event. This is useful when logs contain embedded JSON as a string.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
field [required]
string
The name of the log field that contains a JSON string.
id [required]
string
A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be parse_json.
Allowed enum values: parse_json
default: parse_json
Option 13
object
The parse_xml processor parses XML from a specified field and extracts it into the event.
Supported pipeline types: logs
always_use_text_key
boolean
Whether to always use a text key for element content.
attr_prefix
string
The prefix to use for XML attributes in the parsed output.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
field [required]
string
The name of the log field that contains an XML string.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
include_attr
boolean
Whether to include XML attributes in the parsed output.
parse_bool
boolean
Whether to parse boolean values from strings.
parse_null
boolean
Whether to parse null values.
parse_number
boolean
Whether to parse numeric values from strings.
text_key
string
The key name to use for text content within XML elements. Must be at least 1 character if specified.
type [required]
enum
The processor type. The value should always be parse_xml.
Allowed enum values: parse_xml
default: parse_xml
Option 14
object
The quota processor measures logging traffic for logs that match a specified filter. When the configured daily quota is met, the processor can drop or alert.
Supported pipeline types: logs
display_name
string
The display name for a component.
drop_events
boolean
If set to true, logs that match the quota filter and are sent after the quota is exceeded are dropped. Logs that do not match the filter continue through the pipeline. Note: You can set either drop_events or overflow_action, but not both.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
ignore_when_missing_partitions
boolean
If true, the processor skips quota checks when partition fields are missing from the logs.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
limit [required]
object
The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.
enforce [required]
enum
Unit for quota enforcement: bytes for data size or events for event count.
Allowed enum values: bytes,events
limit [required]
int64
The limit for quota enforcement.
name [required]
string
Name of the quota.
overflow_action
enum
The action to take when the quota or bucket limit is exceeded. Options:
drop: Drop the event.
no_action: Let the event pass through.
overflow_routing: Route to an overflow destination.
Allowed enum values: drop,no_action,overflow_routing
overrides
[object]
A list of alternate quota rules that apply to specific sets of events, identified by matching field values. Each override can define a custom limit.
fields [required]
[object]
A list of field matchers used to apply a specific override. If an event matches all listed key-value pairs, the corresponding override limit is enforced.
name [required]
string
The field name.
value [required]
string
The field value.
limit [required]
object
The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.
enforce [required]
enum
Unit for quota enforcement: bytes for data size or events for event count.
Allowed enum values: bytes,events
limit [required]
int64
The limit for quota enforcement.
partition_fields
[string]
A list of fields used to segment log traffic for quota enforcement. Quotas are tracked independently by unique combinations of these field values.
too_many_buckets_action
enum
The action to take when the quota or bucket limit is exceeded. Options:
drop: Drop the event.
no_action: Let the event pass through.
overflow_routing: Route to an overflow destination.
Allowed enum values: drop,no_action,overflow_routing
type [required]
enum
The processor type. The value should always be quota.
Allowed enum values: quota
default: quota
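A sketch of a quota processor object using the fields above, including a per-service partition and one override; all names, queries, and limits are hypothetical. Note that drop_events is used here, so overflow_action is omitted (the two are mutually exclusive):

```json
{
  "type": "quota",
  "id": "daily-quota",
  "enabled": true,
  "include": "env:prod",
  "name": "prod-daily-quota",
  "drop_events": true,
  "limit": { "enforce": "bytes", "limit": 10000000000 },
  "partition_fields": ["service"],
  "overrides": [
    {
      "fields": [{ "name": "service", "value": "billing" }],
      "limit": { "enforce": "bytes", "limit": 50000000000 }
    }
  ]
}
```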
Option 15
object
The reduce processor aggregates and merges logs based on matching keys and merge strategies.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
group_by [required]
[string]
A list of fields used to group log events for merging.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
merge_strategies [required]
[object]
List of merge strategies defining how values from grouped events should be combined.
path [required]
string
The field path in the log event.
strategy [required]
enum
The merge strategy to apply.
Allowed enum values: discard,retain,sum,max,min,array,concat,concat_newline,concat_raw,shortest_array,longest_array,flat_unique
type [required]
enum
The processor type. The value should always be reduce.
Allowed enum values: reduce
default: reduce
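A sketch of a reduce processor object built from the fields above; the grouping fields, paths, and query are hypothetical. Events matching the query are grouped by host and service, then merged per strategy:

```json
{
  "type": "reduce",
  "id": "reduce-processor",
  "enabled": true,
  "include": "source:checkout",
  "group_by": ["host", "service"],
  "merge_strategies": [
    { "path": "message", "strategy": "concat_newline" },
    { "path": "error_count", "strategy": "sum" }
  ]
}
```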
Option 16
object
The remove_fields processor deletes specified fields from logs.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[string]
A list of field names to be removed from each log event.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be remove_fields.
Allowed enum values: remove_fields
default: remove_fields
Option 17
object
The rename_fields processor changes field names.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[object]
A list of rename rules specifying which fields to rename in the event, what to rename them to, and whether to preserve the original fields.
destination [required]
string
The field name to assign the renamed value to.
preserve_source [required]
boolean
Indicates whether the original field received from the source should be kept (true) or removed (false) after renaming.
source [required]
string
The original field name in the log event that should be renamed.
id [required]
string
A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be rename_fields.
Allowed enum values: rename_fields
default: rename_fields
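A sketch of a rename_fields processor object using the fields above; the field names are hypothetical. Setting preserve_source to false removes the original field after renaming:

```json
{
  "type": "rename_fields",
  "id": "rename-fields-processor",
  "enabled": true,
  "include": "*",
  "fields": [
    { "source": "msg", "destination": "message", "preserve_source": false }
  ]
}
```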
Option 18
object
The sample processor allows probabilistic sampling of logs at a fixed rate.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
group_by
[string]
Optional list of fields to group events by. Each group is sampled independently.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
percentage [required]
double
The percentage of logs to sample.
type [required]
enum
The processor type. The value should always be sample.
Allowed enum values: sample
default: sample
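A sketch of a sample processor object using the fields above; the query and grouping field are hypothetical. With group_by set, each service is sampled independently at the configured rate:

```json
{
  "type": "sample",
  "id": "sample-processor",
  "enabled": true,
  "include": "status:info",
  "percentage": 10.0,
  "group_by": ["service"]
}
```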
Option 19
object
The sensitive_data_scanner processor detects and optionally redacts sensitive data in log events.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
rules [required]
[object]
A list of rules for identifying and acting on sensitive data patterns.
keyword_options
object
Configuration for keywords used to reinforce sensitive data pattern detection.
keywords [required]
[string]
A list of keywords to match near the sensitive pattern.
proximity [required]
int64
Maximum number of tokens between a keyword and a sensitive value match.
name [required]
string
A name identifying the rule.
on_match [required]
<oneOf>
Defines what action to take when sensitive data is matched.
Option 1
object
Configuration for completely redacting matched sensitive data.
action [required]
enum
Action type that completely replaces the matched sensitive data with a fixed replacement string to remove all visibility.
Allowed enum values: redact
options [required]
object
Configuration for fully redacting sensitive data.
replace [required]
string
The string that replaces each matched sensitive value.
Option 2
object
Configuration for hashing matched sensitive values.
action [required]
enum
Action type that replaces the matched sensitive data with a hashed representation, preserving structure while securing content.
Allowed enum values: hash
options
object
Options for the hash action.
Option 3
object
Configuration for partially redacting matched sensitive data.
action [required]
enum
Action type that redacts part of the sensitive data while preserving a configurable number of characters, typically used for masking purposes (e.g., show last 4 digits of a credit card).
Allowed enum values: partial_redact
options [required]
object
Controls how partial redaction is applied, including character count and direction.
characters [required]
int64
The number of characters to preserve when partially redacting the matched value.
direction [required]
enum
Indicates whether to redact characters from the first or last part of the matched value.
Allowed enum values: first,last
pattern [required]
<oneOf>
Pattern detection configuration for identifying sensitive data using either a custom regex or a library reference.
Option 1
object
Defines a custom regex-based pattern for identifying sensitive data in logs.
options [required]
object
Options for defining a custom regex pattern.
description
string
Human-readable description providing context about a sensitive data scanner rule.
rule [required]
string
A regular expression used to detect sensitive values. Must be a valid regex.
type [required]
enum
Indicates a custom regular expression is used for matching.
Allowed enum values: custom
Option 2
object
Specifies a pattern from Datadog’s sensitive data detection library to match known sensitive data types.
options [required]
object
Options for selecting a predefined library pattern and enabling keyword support.
description
string
Human-readable description providing context about a sensitive data scanner rule.
id [required]
string
Identifier for a predefined pattern from the sensitive data scanner pattern library.
use_recommended_keywords
boolean
Whether to augment the pattern with recommended keywords (optional).
type [required]
enum
Indicates that a predefined library pattern is used.
Allowed enum values: library
scope [required]
<oneOf>
Determines which parts of the log the pattern-matching rule should be applied to.
Option 1
object
Includes only specific fields for sensitive data scanning.
options [required]
object
Fields to which the scope rule applies.
fields [required]
[string]
The list of fields to include in sensitive data scanning.
target [required]
enum
Applies the rule only to included fields.
Allowed enum values: include
Option 2
object
Excludes specific fields from sensitive data scanning.
options [required]
object
Fields to which the scope rule applies.
fields [required]
[string]
The list of fields to exclude from sensitive data scanning.
target [required]
enum
Excludes specific fields from processing.
Allowed enum values: exclude
Option 3
object
Applies scanning across all available fields.
target [required]
enum
Applies the rule to all fields.
Allowed enum values: all
tags [required]
[string]
Tags assigned to this rule for filtering and classification.
type [required]
enum
The processor type. The value should always be sensitive_data_scanner.
Allowed enum values: sensitive_data_scanner
default: sensitive_data_scanner
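A sketch of a sensitive_data_scanner processor object combining the rule, pattern, scope, and on_match structures above. The rule name, tags, and library pattern ID are hypothetical; the partial_redact action keeps the last 4 characters of each match:

```json
{
  "type": "sensitive_data_scanner",
  "id": "sds-processor",
  "enabled": true,
  "include": "*",
  "rules": [
    {
      "name": "mask-credit-cards",
      "tags": ["sensitive_data:credit_card"],
      "pattern": {
        "type": "library",
        "options": { "id": "credit_card", "use_recommended_keywords": true }
      },
      "scope": { "target": "all" },
      "on_match": {
        "action": "partial_redact",
        "options": { "characters": 4, "direction": "last" }
      }
    }
  ]
}
```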
Option 20
object
The split_array processor splits array fields into separate events based on configured rules.
Supported pipeline types: logs
arrays [required]
[object]
A list of array split configurations.
field [required]
string
The path to the array field to split.
include [required]
string
A Datadog search query used to determine which logs this array split operation targets.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets. For split_array, this should typically be *.
type [required]
enum
The processor type. The value should always be split_array.
Allowed enum values: split_array
default: split_array
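A sketch of a split_array processor object using the fields above; the array field path is hypothetical. The top-level include is *, as the description recommends, while each arrays entry carries its own query:

```json
{
  "type": "split_array",
  "id": "split-array-processor",
  "enabled": true,
  "include": "*",
  "arrays": [
    { "field": "records", "include": "*" }
  ]
}
```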
Option 21
object
The throttle processor limits the number of events that pass through over a given time window.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
group_by
[string]
Optional list of fields used to group events. The threshold is applied to each group independently.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
threshold [required]
int64
The number of events allowed in a given time window. Events sent after the threshold has been reached are dropped.
type [required]
enum
The processor type. The value should always be throttle.
Allowed enum values: throttle
default: throttle
window [required]
double
The time window in seconds over which the threshold applies.
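A sketch of a throttle processor object using the fields above; the threshold, window, and grouping field are hypothetical. This configuration allows up to 1000 events per service in each 60-second window:

```json
{
  "type": "throttle",
  "id": "throttle-processor",
  "enabled": true,
  "include": "*",
  "threshold": 1000,
  "window": 60.0,
  "group_by": ["service"]
}
```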
Option 22
object
The metric_tags processor filters metrics based on their tags using Datadog tag key patterns.
Supported pipeline types: metrics
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query that determines which metrics the processor targets.
rules [required]
[object]
A list of rules for filtering metric tags.
action [required]
enum
The action to take on tags with matching keys.
Allowed enum values: include,exclude
include [required]
string
A Datadog search query used to determine which metrics this rule targets.
keys [required]
[string]
A list of tag keys to include or exclude.
mode [required]
enum
The processing mode for tag filtering.
Allowed enum values: filter
type [required]
enum
The processor type. The value should always be metric_tags.
Allowed enum values: metric_tags
default: metric_tags
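A sketch of a metric_tags processor object using the fields above; the tag keys are hypothetical. This rule strips high-cardinality tags from all matching metrics:

```json
{
  "type": "metric_tags",
  "id": "metric-tags-processor",
  "enabled": true,
  "include": "*",
  "rules": [
    {
      "include": "*",
      "mode": "filter",
      "action": "exclude",
      "keys": ["pod_name", "container_id"]
    }
  ]
}
```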
sources [required]
[ <oneOf>]
A list of configured data sources for the pipeline.
Option 1
object
The datadog_agent source collects logs/metrics from the Datadog Agent.
Supported pipeline types: logs, metrics
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be datadog_agent.
Allowed enum values: datadog_agent
default: datadog_agent
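A sketch of a datadog_agent source object using the fields above, with optional TLS enabled; the component ID and certificate paths are hypothetical:

```json
{
  "type": "datadog_agent",
  "id": "datadog-agent-source",
  "tls": {
    "crt_file": "/etc/pipeline/certs/server.crt",
    "key_file": "/etc/pipeline/certs/server.key",
    "ca_file": "/etc/pipeline/certs/ca.crt"
  }
}
```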
Option 2
object
The amazon_data_firehose source ingests logs from AWS Data Firehose.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the Firehose delivery stream address.
auth
object
AWS authentication credentials used for accessing AWS services such as S3.
If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).
assume_role
string
The Amazon Resource Name (ARN) of the role to assume.
external_id
string
A unique identifier for cross-account role assumption.
session_name
string
A session identifier used for logging and tracing the assumed role session.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be amazon_data_firehose.
Allowed enum values: amazon_data_firehose
default: amazon_data_firehose
Option 3
object
The amazon_s3 source ingests logs from an Amazon S3 bucket.
It supports AWS authentication and TLS encryption.
Supported pipeline types: logs
auth
object
AWS authentication credentials used for accessing AWS services such as S3.
If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).
assume_role
string
The Amazon Resource Name (ARN) of the role to assume.
external_id
string
A unique identifier for cross-account role assumption.
session_name
string
A session identifier used for logging and tracing the assumed role session.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
region [required]
string
AWS region where the S3 bucket resides.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be amazon_s3.
Allowed enum values: amazon_s3
default: amazon_s3
url_key
string
Name of the environment variable or secret that holds the S3 bucket URL.
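A sketch of an amazon_s3 source object using the fields above, with AWS role assumption; the role ARN, external ID, and session name are hypothetical. Omitting auth entirely falls back to the system's default credentials:

```json
{
  "type": "amazon_s3",
  "id": "s3-source",
  "region": "us-east-1",
  "auth": {
    "assume_role": "arn:aws:iam::123456789012:role/pipeline-reader",
    "external_id": "my-external-id",
    "session_name": "pipeline-session"
  }
}
```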
Option 4
object
The fluent_bit source ingests logs from Fluent Bit.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the Fluent Bit receiver.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be fluent_bit.
Allowed enum values: fluent_bit
default: fluent_bit
Option 5
object
The fluentd source ingests logs from a Fluentd-compatible service.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the Fluent receiver.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be fluentd.
Allowed enum values: fluentd
default: fluentd
Option 6
object
The google_pubsub source ingests logs from a Google Cloud Pub/Sub subscription.
Supported pipeline types: logs
auth
object
Google Cloud credentials used to authenticate with Google Cloud services.
credentials_file [required]
string
Path to the Google Cloud service account key file.
decoding [required]
enum
The decoding format used to interpret incoming logs.
Allowed enum values: bytes,gelf,json,syslog
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
project [required]
string
The Google Cloud project ID that owns the Pub/Sub subscription.
subscription [required]
string
The Pub/Sub subscription name from which messages are consumed.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be google_pubsub.
Allowed enum values: google_pubsub
default: google_pubsub
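A sketch of a google_pubsub source object using the fields above; the project ID, subscription name, and key file path are hypothetical:

```json
{
  "type": "google_pubsub",
  "id": "pubsub-source",
  "project": "my-gcp-project",
  "subscription": "log-subscription",
  "decoding": "json",
  "auth": { "credentials_file": "/etc/pipeline/gcp/key.json" }
}
```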
Option 7
object
The http_client source scrapes logs from HTTP endpoints at regular intervals.
Supported pipeline types: logs
auth_strategy
enum
Optional authentication strategy for HTTP requests.
Allowed enum values: none,basic,bearer,custom
custom_key
string
Name of the environment variable or secret that holds a custom header value (used with custom auth strategies).
decoding [required]
enum
The decoding format used to interpret incoming logs.
Allowed enum values: bytes,gelf,json,syslog
endpoint_url_key
string
Name of the environment variable or secret that holds the HTTP endpoint URL to scrape.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
password_key
string
Name of the environment variable or secret that holds the password (used when auth_strategy is basic).
scrape_interval_secs
int64
The interval (in seconds) between HTTP scrape requests.
scrape_timeout_secs
int64
The timeout (in seconds) for each scrape request.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
token_key
string
Name of the environment variable or secret that holds the bearer token (used when auth_strategy is bearer).
type [required]
enum
The source type. The value should always be http_client.
Allowed enum values: http_client
default: http_client
username_key
string
Name of the environment variable or secret that holds the username (used when auth_strategy is basic).
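A sketch of an http_client source object using the fields above, with bearer authentication; the secret name, endpoint key, and intervals are hypothetical. Note that the *_key fields name environment variables or secrets rather than holding values directly:

```json
{
  "type": "http_client",
  "id": "http-client-source",
  "decoding": "json",
  "auth_strategy": "bearer",
  "token_key": "HTTP_SOURCE_TOKEN",
  "endpoint_url_key": "HTTP_SOURCE_URL",
  "scrape_interval_secs": 30,
  "scrape_timeout_secs": 10
}
```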
Option 8
object
The http_server source collects logs over HTTP POST from external services.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the HTTP server.
decoding [required]
enum
The decoding format used to interpret incoming logs.
Allowed enum values: bytes,gelf,json,syslog
id [required]
string
Unique ID for the HTTP server source.
password_key
string
Name of the environment variable or secret that holds the password (used when auth_strategy is plain).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be http_server.
Allowed enum values: http_server
default: http_server
username_key
string
Name of the environment variable or secret that holds the username (used when auth_strategy is plain).
Option 9
object
The kafka source ingests data from Apache Kafka topics.
Supported pipeline types: logs
bootstrap_servers_key
string
Name of the environment variable or secret that holds the Kafka bootstrap servers list.
group_id [required]
string
Consumer group ID used by the Kafka client.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
librdkafka_options
[object]
Optional list of advanced Kafka client configuration options, defined as key-value pairs.
name [required]
string
The name of the librdkafka configuration option to set.
value [required]
string
The value assigned to the specified librdkafka configuration option.
sasl
object
Specifies the SASL mechanism for authenticating with a Kafka cluster.
mechanism
enum
SASL mechanism used for Kafka authentication.
Allowed enum values: PLAIN,SCRAM-SHA-256,SCRAM-SHA-512
password_key
string
Name of the environment variable or secret that holds the SASL password.
username_key
string
Name of the environment variable or secret that holds the SASL username.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
topics [required]
[string]
A list of Kafka topic names to subscribe to. The source ingests messages from each topic specified.
type [required]
enum
The source type. The value should always be kafka.
Allowed enum values: kafka
default: kafka
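A sketch of a kafka source object using the fields above, with SASL authentication and one advanced librdkafka option; the group ID, topic, and secret names are hypothetical:

```json
{
  "type": "kafka",
  "id": "kafka-source",
  "group_id": "pipeline-consumers",
  "topics": ["app-logs"],
  "sasl": {
    "mechanism": "SCRAM-SHA-256",
    "username_key": "KAFKA_USER",
    "password_key": "KAFKA_PASSWORD"
  },
  "librdkafka_options": [
    { "name": "fetch.message.max.bytes", "value": "1048576" }
  ]
}
```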
Option 10
object
The logstash source ingests logs from a Logstash forwarder.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the Logstash receiver.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be logstash.
Allowed enum values: logstash
default: logstash
Option 11
object
The rsyslog source listens for logs over TCP or UDP from an rsyslog server using the syslog protocol.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the syslog receiver.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
mode [required]
enum
Protocol used by the syslog source to receive messages.
Allowed enum values: tcp,udp
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be rsyslog.
Allowed enum values: rsyslog
default: rsyslog
Option 12
object
The socket source ingests logs over TCP or UDP.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the socket.
framing [required]
<oneOf>
Framing method configuration for the socket source.
Option 1
object
Byte frames which are delimited by a newline character.
method [required]
enum
Byte frames which are delimited by a newline character.
Allowed enum values: newline_delimited
Option 2
object
Byte frames are passed through as-is according to the underlying I/O boundaries (for example, split between messages or stream segments).
method [required]
enum
Byte frames are passed through as-is according to the underlying I/O boundaries (for example, split between messages or stream segments).
Allowed enum values: bytes
Option 3
object
Byte frames which are delimited by a chosen character.
delimiter [required]
string
A single ASCII character used to delimit events.
method [required]
enum
Byte frames which are delimited by a chosen character.
Allowed enum values: character_delimited
Option 4
object
Byte frames according to the octet counting format as per RFC6587.
method [required]
enum
Byte frames according to the octet counting format as per RFC6587.
Allowed enum values: octet_counting
Option 5
object
Byte frames which are chunked GELF messages.
method [required]
enum
Byte frames which are chunked GELF messages.
Allowed enum values: chunked_gelf
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
mode [required]
enum
Protocol used to receive logs.
Allowed enum values: tcp,udp
tls
object
TLS configuration. Relevant only when mode is tcp.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be socket.
Allowed enum values: socket
default: socket
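Putting the fields above together, a minimal socket source entry might look like the following sketch. The id and mode values are illustrative placeholders, and only one of the framing options (newline_delimited) is shown:

```json
{
  "id": "socket-source",
  "type": "socket",
  "mode": "tcp",
  "framing": {
    "method": "newline_delimited"
  }
}
```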
Option 13
object
The splunk_hec source implements the Splunk HTTP Event Collector (HEC) API.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the HEC API.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. Always splunk_hec.
Allowed enum values: splunk_hec
default: splunk_hec
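As an illustrative sketch, a splunk_hec source combining the fields above could be configured as follows; the id and environment variable name are placeholders:

```json
{
  "id": "splunk-hec-source",
  "type": "splunk_hec",
  "address_key": "SPLUNK_HEC_ADDRESS"
}
```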
Option 14
object
The splunk_tcp source receives logs from a Splunk Universal Forwarder over TCP.
TLS is supported for secure transmission.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the Splunk TCP receiver.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. Always splunk_tcp.
Allowed enum values: splunk_tcp
default: splunk_tcp
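Because the splunk_tcp source supports TLS for secure transmission, a sketch of the component with an optional tls block might look like this; the id, environment variable name, and file paths are placeholders:

```json
{
  "id": "splunk-tcp-source",
  "type": "splunk_tcp",
  "address_key": "SPLUNK_TCP_ADDRESS",
  "tls": {
    "crt_file": "/etc/certs/server.crt",
    "key_file": "/etc/certs/server.key",
    "ca_file": "/etc/certs/ca.crt"
  }
}
```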
Option 15
object
The sumo_logic source receives logs from Sumo Logic collectors.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the Sumo Logic receiver.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
type [required]
enum
The source type. The value should always be sumo_logic.
Allowed enum values: sumo_logic
default: sumo_logic
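A minimal sumo_logic source entry could look like the following sketch; the id and environment variable name are placeholders:

```json
{
  "id": "sumo-logic-source",
  "type": "sumo_logic",
  "address_key": "SUMO_LOGIC_ADDRESS"
}
```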
Option 16
object
The syslog_ng source listens for logs over TCP or UDP from a syslog-ng server using the syslog protocol.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the syslog-ng receiver.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
mode [required]
enum
Protocol used by the syslog source to receive messages.
Allowed enum values: tcp,udp
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be syslog_ng.
Allowed enum values: syslog_ng
default: syslog_ng
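For example, a syslog_ng source receiving messages over UDP might be sketched as follows; the id, mode choice, and environment variable name are placeholders:

```json
{
  "id": "syslog-ng-source",
  "type": "syslog_ng",
  "mode": "udp",
  "address_key": "SYSLOG_NG_ADDRESS"
}
```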
Option 17
object
The opentelemetry source receives telemetry data using the OpenTelemetry Protocol (OTLP) over gRPC and HTTP.
Supported pipeline types: logs
grpc_address_key
string
Environment variable name containing the gRPC server address for receiving OTLP data. Must be a valid environment variable name (alphanumeric characters and underscores only).
http_address_key
string
Environment variable name containing the HTTP server address for receiving OTLP data. Must be a valid environment variable name (alphanumeric characters and underscores only).
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be opentelemetry.
Allowed enum values: opentelemetry
default: opentelemetry
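An opentelemetry source listening for OTLP data over both gRPC and HTTP could be sketched as follows; the id and environment variable names are placeholders (note that both keys must name valid environment variables, using only alphanumeric characters and underscores):

```json
{
  "id": "otel-source",
  "type": "opentelemetry",
  "grpc_address_key": "OTLP_GRPC_ADDRESS",
  "http_address_key": "OTLP_HTTP_ADDRESS"
}
```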
use_legacy_search_syntax
boolean
Set to true to continue using the legacy search syntax while migrating filter queries. After migrating all queries to the new syntax, set to false.
The legacy syntax is deprecated and will eventually be removed.
Requires Observability Pipelines Worker 2.11 or later.
See Upgrade Your Filter Queries to the New Search Syntax for more information.
name [required]
string
Name of the pipeline.
id [required]
string
Unique identifier for the pipeline.
type [required]
string
The resource type identifier. For pipeline resources, this should always be set to pipelines.
Name of the environment variable or secret that holds a custom header value (used with custom auth strategies).
encoding [required]
enum
Encoding format for log events.
Allowed enum values: json
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
password_key
string
Name of the environment variable or secret that holds the password (used when auth_strategy is basic).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
token_key
string
Name of the environment variable or secret that holds the bearer token (used when auth_strategy is bearer).
type [required]
enum
The destination type. The value should always be http_client.
Allowed enum values: http_client
default: http_client
uri_key
string
Name of the environment variable or secret that holds the HTTP endpoint URI.
username_key
string
Name of the environment variable or secret that holds the username (used when auth_strategy is basic).
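As a hypothetical sketch of an http_client destination using the basic authentication fields described above, the component might look like this; the id, inputs, and environment variable names are placeholders, and username_key/password_key apply only when the destination's auth_strategy is basic:

```json
{
  "id": "http-destination",
  "type": "http_client",
  "inputs": ["socket-source"],
  "encoding": "json",
  "uri_key": "HTTP_ENDPOINT_URI",
  "username_key": "HTTP_USERNAME",
  "password_key": "HTTP_PASSWORD"
}
```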
Option 2
object
The amazon_opensearch destination writes logs to Amazon OpenSearch.
Supported pipeline types: logs
auth [required]
object
Authentication settings for the Amazon OpenSearch destination.
The strategy field determines whether basic or AWS-based authentication is used.
assume_role
string
The ARN of the role to assume (used with aws strategy).
aws_region
string
The AWS region.
external_id
string
External ID for the assumed role (used with aws strategy).
session_name
string
Session name for the assumed role (used with aws strategy).
strategy [required]
enum
The authentication strategy to use.
Allowed enum values: basic,aws
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of buffer to configure: a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of buffer to configure: a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum number of events for the memory buffer.
type
enum
The type of buffer to configure: a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
bulk_index
string
The index to write logs to.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The destination type. The value should always be amazon_opensearch.
Allowed enum values: amazon_opensearch
default: amazon_opensearch
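Combining the fields above, an amazon_opensearch destination using AWS-based authentication and a disk buffer might be sketched as follows; the id, inputs, index name, role ARN, and buffer size are placeholders:

```json
{
  "id": "opensearch-destination",
  "type": "amazon_opensearch",
  "inputs": ["socket-source"],
  "bulk_index": "pipeline-logs",
  "auth": {
    "strategy": "aws",
    "aws_region": "us-east-1",
    "assume_role": "arn:aws:iam::123456789012:role/OpenSearchWriter"
  },
  "buffer": {
    "type": "disk",
    "max_size": 1073741824,
    "when_full": "block"
  }
}
```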
Option 3
object
The amazon_s3 destination sends your logs in Datadog-rehydratable format to an Amazon S3 bucket for archiving.
Supported pipeline types: logs
auth
object
AWS authentication credentials used for accessing AWS services such as S3.
If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).
assume_role
string
The Amazon Resource Name (ARN) of the role to assume.
external_id
string
A unique identifier for cross-account role assumption.
session_name
string
A session identifier used for logging and tracing the assumed role session.
bucket [required]
string
S3 bucket name.
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of buffer to configure: a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of buffer to configure: a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum number of events for the memory buffer.
type
enum
The type of buffer to configure: a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
id [required]
string
Unique identifier for the destination component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The destination type. Always amazon_s3.
Allowed enum values: amazon_s3
default: amazon_s3
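An amazon_s3 destination archiving logs with an assumed IAM role might be sketched as follows; the id, inputs, bucket name, role ARN, and identifiers are placeholders, and auth can be omitted entirely to fall back to the system's default credentials:

```json
{
  "id": "s3-archive",
  "type": "amazon_s3",
  "inputs": ["socket-source"],
  "bucket": "example-log-archive",
  "auth": {
    "assume_role": "arn:aws:iam::123456789012:role/S3Writer",
    "external_id": "example-external-id",
    "session_name": "pipeline-session"
  }
}
```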
Option 4
object
The amazon_security_lake destination sends your logs to Amazon Security Lake.
Supported pipeline types: logs
auth
object
AWS authentication credentials used for accessing AWS services such as S3.
If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).
assume_role
string
The Amazon Resource Name (ARN) of the role to assume.
external_id
string
A unique identifier for cross-account role assumption.
session_name
string
A session identifier used for logging and tracing the assumed role session.
bucket [required]
string
Name of the Amazon S3 bucket in Security Lake (3-63 characters).
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of buffer to configure: a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of buffer to configure: a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum number of events for the memory buffer.
type
enum
The type of buffer to configure: a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
custom_source_name [required]
string
Custom source name for the logs in Security Lake.
id [required]
string
Unique identifier for the destination component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
region [required]
string
AWS region of the S3 bucket.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The destination type. Always amazon_security_lake.
Allowed enum values: amazon_security_lake
default: amazon_security_lake
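Putting the required fields together, an amazon_security_lake destination might be sketched as follows; the id, inputs, bucket name, region, and source name are placeholders:

```json
{
  "id": "security-lake-destination",
  "type": "amazon_security_lake",
  "inputs": ["socket-source"],
  "bucket": "aws-security-data-lake-example",
  "region": "us-east-1",
  "custom_source_name": "example-custom-source"
}
```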
Option 5
object
The azure_storage destination forwards logs to an Azure Blob Storage container.
Supported pipeline types: logs
blob_prefix
string
Optional prefix for blobs written to the container.
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of buffer to configure: a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of buffer to configure: a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum number of events for the memory buffer.
type
enum
The type of buffer to configure: a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
connection_string_key
string
Name of the environment variable or secret that holds the Azure Storage connection string.
container_name [required]
string
The name of the Azure Blob Storage container to store logs in.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The destination type. The value should always be azure_storage.
Allowed enum values: azure_storage
default: azure_storage
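An azure_storage destination with an optional blob prefix might be sketched as follows; the id, inputs, container name, environment variable name, and prefix are placeholders:

```json
{
  "id": "azure-storage-destination",
  "type": "azure_storage",
  "inputs": ["socket-source"],
  "container_name": "pipeline-logs",
  "connection_string_key": "AZURE_STORAGE_CONNECTION_STRING",
  "blob_prefix": "logs/"
}
```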
Option 6
object
The cloud_prem destination sends logs to Datadog CloudPrem.
Supported pipeline types: logs
endpoint_url_key
string
Name of the environment variable or secret that holds the CloudPrem endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The destination type. The value should always be cloud_prem.
Allowed enum values: cloud_prem
default: cloud_prem
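A minimal cloud_prem destination entry could look like the following sketch; the id, inputs, and environment variable name are placeholders:

```json
{
  "id": "cloudprem-destination",
  "type": "cloud_prem",
  "inputs": ["socket-source"],
  "endpoint_url_key": "CLOUDPREM_ENDPOINT_URL"
}
```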
Option 7
object
The crowdstrike_next_gen_siem destination forwards logs to CrowdStrike Next Gen SIEM.
Supported pipeline types: logs
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of buffer to configure: a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of buffer to configure: a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum number of events for the memory buffer.
type
enum
The type of buffer to configure: a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
compression
object
Compression configuration for log events.
algorithm [required]
enum
Compression algorithm for log events.
Allowed enum values: gzip,zlib
level
int64
Compression level.
encoding [required]
enum
Encoding format for log events.
Allowed enum values: json,raw_message
endpoint_url_key
string
Name of the environment variable or secret that holds the CrowdStrike endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
token_key
string
Name of the environment variable or secret that holds the CrowdStrike API token.
type [required]
enum
The destination type. The value should always be crowdstrike_next_gen_siem.
Allowed enum values: crowdstrike_next_gen_siem
default: crowdstrike_next_gen_siem
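A crowdstrike_next_gen_siem destination with gzip compression might be sketched as follows; the id, inputs, compression level, and environment variable names are placeholders:

```json
{
  "id": "crowdstrike-destination",
  "type": "crowdstrike_next_gen_siem",
  "inputs": ["socket-source"],
  "encoding": "json",
  "compression": {
    "algorithm": "gzip",
    "level": 6
  },
  "endpoint_url_key": "CROWDSTRIKE_ENDPOINT_URL",
  "token_key": "CROWDSTRIKE_API_TOKEN"
}
```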
Option 8
object
The datadog_logs destination forwards logs to Datadog Log Management.
Supported pipeline types: logs
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of buffer to configure: a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of buffer to configure: a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum number of events for the memory buffer.
type
enum
The type of buffer to configure: a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
routes
[object]
A list of routing rules that forward matching logs to Datadog using dedicated API keys.
api_key_key
string
Name of the environment variable or secret that stores the Datadog API key used by this route.
include
string
A Datadog search query that determines which logs are forwarded using this route.
route_id
string
Unique identifier for this route within the destination.
site
string
Datadog site where matching logs are sent (for example, us1).
type [required]
enum
The destination type. The value should always be datadog_logs.
Allowed enum values: datadog_logs
default: datadog_logs
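To illustrate the routes array, a datadog_logs destination forwarding a subset of logs with a dedicated API key might be sketched as follows; the id, inputs, route identifier, query, and environment variable name are placeholders:

```json
{
  "id": "datadog-logs-destination",
  "type": "datadog_logs",
  "inputs": ["socket-source"],
  "routes": [
    {
      "route_id": "security-route",
      "include": "service:auth",
      "api_key_key": "DD_SECURITY_TEAM_API_KEY",
      "site": "us1"
    }
  ]
}
```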
Option 9
object
The elasticsearch destination writes logs to an Elasticsearch cluster.
Supported pipeline types: logs
api_version
enum
The Elasticsearch API version to use. Set to auto to auto-detect.
Allowed enum values: auto,v6,v7,v8
auth
object
Authentication settings for the Elasticsearch destination.
When strategy is basic, use username_key and password_key to reference credentials stored in environment variables or secrets.
password_key
string
Name of the environment variable or secret that holds the Elasticsearch password (used when strategy is basic).
strategy [required]
enum
The authentication strategy to use.
Allowed enum values: basic,aws
username_key
string
Name of the environment variable or secret that holds the Elasticsearch username (used when strategy is basic).
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of buffer to configure: a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of buffer to configure: a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum number of events for the memory buffer.
type
enum
The type of buffer to configure: a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
bulk_index
string
The index to write logs to in Elasticsearch.
data_stream
object
Configuration options for writing to Elasticsearch Data Streams instead of a fixed index.
dataset
string
The data stream dataset for your logs. This groups logs by their source or application.
dtype
string
The data stream type for your logs. This determines how logs are categorized within the data stream.
namespace
string
The data stream namespace for your logs. This separates logs into different environments or domains.
endpoint_url_key
string
Name of the environment variable or secret that holds the Elasticsearch endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The destination type. The value should always be elasticsearch.
Allowed enum values: elasticsearch
default: elasticsearch
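An elasticsearch destination using basic authentication and writing to a data stream might be sketched as follows; the id, inputs, environment variable names, and data stream values are placeholders:

```json
{
  "id": "elasticsearch-destination",
  "type": "elasticsearch",
  "inputs": ["socket-source"],
  "api_version": "auto",
  "endpoint_url_key": "ELASTICSEARCH_ENDPOINT_URL",
  "auth": {
    "strategy": "basic",
    "username_key": "ELASTICSEARCH_USERNAME",
    "password_key": "ELASTICSEARCH_PASSWORD"
  },
  "data_stream": {
    "dtype": "logs",
    "dataset": "myapp",
    "namespace": "production"
  }
}
```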
Option 10
object
The google_chronicle destination sends logs to Google Chronicle.
Supported pipeline types: logs
auth
object
Google Cloud credentials used to authenticate with Google Cloud services.
credentials_file [required]
string
Path to the Google Cloud service account key file.
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of buffer to configure: a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of buffer to configure: a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum number of events for the memory buffer.
type
enum
The type of buffer to configure: a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
customer_id [required]
string
The Google Chronicle customer ID.
encoding
enum
The encoding format for the logs sent to Chronicle.
Allowed enum values: json,raw_message
endpoint_url_key
string
Name of the environment variable or secret that holds the Google Chronicle endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
log_type
string
The log type metadata associated with the Chronicle destination.
type [required]
enum
The destination type. The value should always be google_chronicle.
Allowed enum values: google_chronicle
default: google_chronicle
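A google_chronicle destination might be sketched as follows; the id, inputs, customer ID, log type, and credentials file path are placeholders:

```json
{
  "id": "chronicle-destination",
  "type": "google_chronicle",
  "inputs": ["socket-source"],
  "customer_id": "example-customer-id",
  "log_type": "EXAMPLE_LOG_TYPE",
  "encoding": "json",
  "auth": {
    "credentials_file": "/var/secrets/gcp-service-account.json"
  }
}
```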
Option 11
object
The google_cloud_storage destination stores logs in a Google Cloud Storage (GCS) bucket.
It requires a bucket name, Google Cloud authentication, and metadata fields.
Supported pipeline types: logs
acl
enum
Access control list setting for objects written to the bucket.
Allowed enum values: private,project-private,public-read,authenticated-read,bucket-owner-read,bucket-owner-full-control
auth
object
Google Cloud credentials used to authenticate with Google Cloud Storage.
credentials_file [required]
string
Path to the Google Cloud service account key file.
bucket [required]
string
Name of the GCS bucket.
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of buffer to configure: a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of buffer to configure: a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum number of events for the memory buffer.
type
enum
The type of buffer to configure: a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
id [required]
string
Unique identifier for the destination component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
key_prefix
string
Optional prefix for object keys within the GCS bucket.
metadata
[object]
Custom metadata to attach to each object uploaded to the GCS bucket.
name [required]
string
The metadata key.
value [required]
string
The metadata value.
storage_class [required]
enum
Storage class used for objects stored in GCS.
Allowed enum values: STANDARD,NEARLINE,COLDLINE,ARCHIVE
type [required]
enum
The destination type. Always google_cloud_storage.
Allowed enum values: google_cloud_storage
default: google_cloud_storage
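Combining the bucket, access control, metadata, and authentication fields above, a google_cloud_storage destination might be sketched as follows; the id, inputs, bucket name, prefix, metadata entries, and credentials file path are placeholders:

```json
{
  "id": "gcs-destination",
  "type": "google_cloud_storage",
  "inputs": ["socket-source"],
  "bucket": "example-log-bucket",
  "storage_class": "NEARLINE",
  "acl": "project-private",
  "key_prefix": "logs/",
  "metadata": [
    { "name": "team", "value": "platform" }
  ],
  "auth": {
    "credentials_file": "/var/secrets/gcp-service-account.json"
  }
}
```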
Option 12
object
The google_pubsub destination publishes logs to a Google Cloud Pub/Sub topic.
Supported pipeline types: logs
auth
object
Google Cloud credentials used to authenticate with Google Cloud services.
credentials_file [required]
string
Path to the Google Cloud service account key file.
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of buffer to configure: a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of buffer to configure: a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum number of events for the memory buffer.
type
enum
The type of buffer to configure: a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events).
Allowed enum values: block,drop_newest
default: block
encoding [required]
enum
Encoding format for log events.
Allowed enum values: json,raw_message
endpoint_url_key
string
Name of the environment variable or secret that holds the Google Cloud Pub/Sub endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
project [required]
string
The Google Cloud project ID that owns the Pub/Sub topic.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
topic [required]
string
The Pub/Sub topic name to publish logs to.
type [required]
enum
The destination type. The value should always be google_pubsub.
Allowed enum values: google_pubsub
default: google_pubsub
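The google_pubsub fields above can be sketched as a minimal configuration object. This is an illustrative Python dict, not an official payload; all IDs, the project, topic, and credential path are hypothetical placeholders.

```python
# Hypothetical google_pubsub destination built from the schema above.
# Every value (component IDs, project, topic, file path) is a placeholder.
pubsub_destination = {
    "id": "pubsub-out",
    "type": "google_pubsub",
    "inputs": ["filter-logs"],      # upstream component IDs
    "encoding": "json",             # json or raw_message
    "project": "my-gcp-project",
    "topic": "pipeline-logs",
    "auth": {"credentials_file": "/etc/gcp/service-account.json"},
}

# Sanity-check the [required] fields named in the schema.
required = {"id", "type", "inputs", "encoding", "project", "topic"}
assert required <= pubsub_destination.keys()
```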
Option 13
object
The kafka destination sends logs to Apache Kafka topics.
Supported pipeline types: logs
bootstrap_servers_key
string
Name of the environment variable or secret that holds the Kafka bootstrap servers list.
compression
enum
Compression codec for Kafka messages.
Allowed enum values: none,gzip,snappy,lz4,zstd
encoding [required]
enum
Encoding format for log events.
Allowed enum values: json,raw_message
headers_key
string
The field name to use for Kafka message headers.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
key_field
string
The field name to use as the Kafka message key.
librdkafka_options
[object]
Optional list of advanced Kafka producer configuration options, defined as key-value pairs.
name [required]
string
The name of the librdkafka configuration option to set.
value [required]
string
The value assigned to the specified librdkafka configuration option.
message_timeout_ms
int64
Maximum time in milliseconds to wait for message delivery confirmation.
rate_limit_duration_secs
int64
Duration in seconds for the rate limit window.
rate_limit_num
int64
Maximum number of messages allowed per rate limit duration.
sasl
object
Specifies the SASL mechanism for authenticating with a Kafka cluster.
mechanism
enum
SASL mechanism used for Kafka authentication.
Allowed enum values: PLAIN,SCRAM-SHA-256,SCRAM-SHA-512
password_key
string
Name of the environment variable or secret that holds the SASL password.
username_key
string
Name of the environment variable or secret that holds the SASL username.
socket_timeout_ms
int64
Socket timeout in milliseconds for network requests.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
topic [required]
string
The Kafka topic name to publish logs to.
type [required]
enum
The destination type. The value should always be kafka.
Allowed enum values: kafka
default: kafka
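Putting the kafka fields together, a minimal destination object might look like the following sketch. The component IDs, topic, and the environment-variable names referenced by the `*_key` fields are hypothetical placeholders.

```python
# Hypothetical kafka destination config; all names are illustrative.
kafka_destination = {
    "id": "kafka-out",
    "type": "kafka",
    "inputs": ["parse-logs"],
    "encoding": "json",
    "topic": "observability-logs",
    # *_key fields name env vars / secrets, not literal values:
    "bootstrap_servers_key": "KAFKA_BOOTSTRAP_SERVERS",
    "compression": "zstd",
    "sasl": {
        "mechanism": "SCRAM-SHA-512",
        "username_key": "KAFKA_SASL_USER",
        "password_key": "KAFKA_SASL_PASSWORD",
    },
}
```

Note the pattern used throughout this reference: secret-bearing fields end in `_key` and hold the *name* of an environment variable or secret, never the credential itself.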
Option 14
object
The microsoft_sentinel destination forwards logs to Microsoft Sentinel.
Supported pipeline types: logs
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
client_id [required]
string
Azure AD client ID used for authentication.
client_secret_key
string
Name of the environment variable or secret that holds the Azure AD client secret.
dce_uri_key
string
Name of the environment variable or secret that holds the Data Collection Endpoint (DCE) URI.
dcr_immutable_id [required]
string
The immutable ID of the Data Collection Rule (DCR).
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
table [required]
string
The name of the Log Analytics table where logs are sent.
tenant_id [required]
string
Azure AD tenant ID.
type [required]
enum
The destination type. The value should always be microsoft_sentinel.
Allowed enum values: microsoft_sentinel
default: microsoft_sentinel
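A minimal microsoft_sentinel destination, assembled from the fields above. The Azure IDs, DCR immutable ID, table name, and secret names are hypothetical placeholders.

```python
# Hypothetical microsoft_sentinel destination config (all values illustrative).
sentinel_destination = {
    "id": "sentinel-out",
    "type": "microsoft_sentinel",
    "inputs": ["redact-pii"],
    "client_id": "00000000-0000-0000-0000-000000000000",   # Azure AD app ID
    "tenant_id": "11111111-1111-1111-1111-111111111111",
    "client_secret_key": "SENTINEL_CLIENT_SECRET",  # env var / secret name
    "dce_uri_key": "SENTINEL_DCE_URI",              # env var / secret name
    "dcr_immutable_id": "dcr-abc123",
    "table": "Custom_Pipeline_CL",
}
```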
Option 15
object
The new_relic destination sends logs to the New Relic platform.
Supported pipeline types: logs
account_id_key
string
Name of the environment variable or secret that holds the New Relic account ID.
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
license_key_key
string
Name of the environment variable or secret that holds the New Relic license key.
region [required]
enum
The New Relic region.
Allowed enum values: us,eu
type [required]
enum
The destination type. The value should always be new_relic.
Allowed enum values: new_relic
default: new_relic
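The new_relic fields can be sketched as follows; the component IDs and secret names are hypothetical placeholders.

```python
# Hypothetical new_relic destination config (all values illustrative).
new_relic_destination = {
    "id": "newrelic-out",
    "type": "new_relic",
    "inputs": ["sample-logs"],
    "region": "us",                              # us or eu
    "license_key_key": "NEW_RELIC_LICENSE_KEY",  # env var / secret name
    "account_id_key": "NEW_RELIC_ACCOUNT_ID",    # env var / secret name
}
```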
Option 16
object
The opensearch destination writes logs to an OpenSearch cluster.
Supported pipeline types: logs
auth
object
Authentication settings for the OpenSearch destination.
When strategy is basic, use username_key and password_key to reference credentials stored in environment variables or secrets.
password_key
string
Name of the environment variable or secret that holds the OpenSearch password (used when strategy is basic).
strategy [required]
enum
The authentication strategy to use.
Allowed enum values: basic,aws
username_key
string
Name of the environment variable or secret that holds the OpenSearch username (used when strategy is basic).
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
bulk_index
string
The index to write logs to.
data_stream
object
Configuration options for writing to OpenSearch Data Streams instead of a fixed index.
dataset
string
The data stream dataset for your logs. This groups logs by their source or application.
dtype
string
The data stream type for your logs. This determines how logs are categorized within the data stream.
namespace
string
The data stream namespace for your logs. This separates logs into different environments or domains.
endpoint_url_key
string
Name of the environment variable or secret that holds the OpenSearch endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The destination type. The value should always be opensearch.
Allowed enum values: opensearch
default: opensearch
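A minimal opensearch destination using basic auth and data streams might look like this sketch; the endpoint secret name, credentials, and data stream parts are hypothetical placeholders.

```python
# Hypothetical opensearch destination config (all values illustrative).
opensearch_destination = {
    "id": "opensearch-out",
    "type": "opensearch",
    "inputs": ["parse-logs"],
    "endpoint_url_key": "OPENSEARCH_URL",   # env var / secret name
    "auth": {
        "strategy": "basic",
        "username_key": "OPENSEARCH_USER",
        "password_key": "OPENSEARCH_PASSWORD",
    },
    # Write to a data stream instead of a fixed bulk_index:
    "data_stream": {"dtype": "logs", "dataset": "nginx", "namespace": "prod"},
}
```

Per the schema, `bulk_index` and `data_stream` are alternatives: set a fixed index with `bulk_index`, or let `data_stream` route logs by type/dataset/namespace.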
Option 17
object
The rsyslog destination forwards logs to an external rsyslog server over TCP or UDP using the syslog protocol.
Supported pipeline types: logs
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
endpoint_url_key
string
Name of the environment variable or secret that holds the syslog server endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
keepalive
int64
Optional socket keepalive duration in milliseconds.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The destination type. The value should always be rsyslog.
Allowed enum values: rsyslog
default: rsyslog
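The rsyslog fields above can be combined into a sketch like the following; the endpoint secret name and TLS file paths are hypothetical placeholders.

```python
# Hypothetical rsyslog destination config (all values illustrative).
rsyslog_destination = {
    "id": "rsyslog-out",
    "type": "rsyslog",
    "inputs": ["parse-logs"],
    "endpoint_url_key": "RSYSLOG_ENDPOINT",  # env var / secret name
    "keepalive": 60000,                      # socket keepalive in ms
    "tls": {
        "crt_file": "/etc/pipeline/tls/client.crt",   # required within tls
        "key_file": "/etc/pipeline/tls/client.key",
        "ca_file": "/etc/pipeline/tls/ca.crt",
    },
}
```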
Option 18
object
The sentinel_one destination sends logs to SentinelOne.
Supported pipeline types: logs
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
region [required]
enum
The SentinelOne region to send logs to.
Allowed enum values: us,eu,ca,data_set_us
token_key
string
Name of the environment variable or secret that holds the SentinelOne API token.
type [required]
enum
The destination type. The value should always be sentinel_one.
Allowed enum values: sentinel_one
default: sentinel_one
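A minimal sentinel_one destination per the fields above; the region choice and token secret name are hypothetical placeholders.

```python
# Hypothetical sentinel_one destination config (all values illustrative).
sentinel_one_destination = {
    "id": "s1-out",
    "type": "sentinel_one",
    "inputs": ["filter-security-logs"],
    "region": "us",                       # us, eu, ca, or data_set_us
    "token_key": "SENTINEL_ONE_TOKEN",    # env var / secret name
}
```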
Option 19
object
The socket destination sends logs over TCP or UDP to a remote server.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the socket address (host:port).
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
encoding [required]
enum
Encoding format for log events.
Allowed enum values: json,raw_message
framing [required]
<oneOf>
Framing method configuration.
Option 1
object
Each log event is delimited by a newline character.
method [required]
enum
The framing method. The value should always be newline_delimited.
Allowed enum values: newline_delimited
Option 2
object
Event data is not delimited at all.
method [required]
enum
The framing method. The value should always be bytes.
Allowed enum values: bytes
Option 3
object
Each log event is separated using the specified delimiter character.
delimiter [required]
string
A single ASCII character used as a delimiter.
method [required]
enum
The framing method. The value should always be character_delimited.
Allowed enum values: character_delimited
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
mode [required]
enum
Protocol used to send logs.
Allowed enum values: tcp,udp
tls
object
TLS configuration. Relevant only when mode is tcp.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The destination type. The value should always be socket.
Allowed enum values: socket
default: socket
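The socket destination brings together mode, encoding, and the framing oneOf. A sketch of a TCP destination with newline-delimited framing follows; the address secret name is a hypothetical placeholder.

```python
# Hypothetical socket destination config (all values illustrative).
socket_destination = {
    "id": "socket-out",
    "type": "socket",
    "inputs": ["parse-syslog"],
    "mode": "tcp",                       # tcp or udp; tls only applies to tcp
    "encoding": "json",
    "address_key": "SOCKET_ADDRESS",     # env var holding host:port
    # Framing Option 1: each event ends with a newline.
    "framing": {"method": "newline_delimited"},
}

# Framing Option 3 would instead carry a single ASCII delimiter:
#   {"method": "character_delimited", "delimiter": "\x1e"}
```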
Option 20
object
The splunk_hec destination forwards logs to Splunk using the HTTP Event Collector (HEC).
Supported pipeline types: logs
auto_extract_timestamp
boolean
If true, Splunk tries to extract timestamps from incoming log events.
If false, Splunk assigns the time the event was received.
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
encoding
enum
Encoding format for log events.
Allowed enum values: json,raw_message
endpoint_url_key
string
Name of the environment variable or secret that holds the Splunk HEC endpoint URL.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
index
string
Optional name of the Splunk index where logs are written.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
sourcetype
string
The Splunk sourcetype to assign to log events.
token_key
string
Name of the environment variable or secret that holds the Splunk HEC token.
type [required]
enum
The destination type. Always splunk_hec.
Allowed enum values: splunk_hec
default: splunk_hec
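A splunk_hec destination is also a convenient place to illustrate the buffer oneOf that recurs throughout this reference. All IDs, secret names, and the buffer size below are hypothetical placeholders.

```python
# Hypothetical splunk_hec destination config (all values illustrative).
splunk_destination = {
    "id": "splunk-out",
    "type": "splunk_hec",
    "inputs": ["enrich-logs"],
    "encoding": "json",
    "endpoint_url_key": "SPLUNK_HEC_URL",    # env var / secret name
    "token_key": "SPLUNK_HEC_TOKEN",         # env var / secret name
    "index": "pipeline_logs",
    "sourcetype": "_json",
    "auto_extract_timestamp": True,
    # Buffer Option 1: disk buffer capped by byte size; block when full.
    "buffer": {"type": "disk", "max_size": 268435456, "when_full": "block"},
}
```

Buffer Option 3 would instead cap by queue length: `{"type": "memory", "max_events": 10000, "when_full": "drop_newest"}`.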
Option 21
object
The sumo_logic destination forwards logs to Sumo Logic.
Supported pipeline types: logs
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
encoding
enum
The output encoding format.
Allowed enum values: json,raw_message,logfmt
endpoint_url_key
string
Name of the environment variable or secret that holds the Sumo Logic HTTP endpoint URL.
header_custom_fields
[object]
A list of custom headers to include in the request to Sumo Logic.
name [required]
string
The header field name.
value [required]
string
The header field value.
header_host_name
string
Optional override for the host name header.
header_source_category
string
Optional override for the source category header.
header_source_name
string
Optional override for the source name header.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The destination type. The value should always be sumo_logic.
Allowed enum values: sumo_logic
default: sumo_logic
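The sumo_logic header overrides can be sketched as follows; the endpoint secret name and header values are hypothetical placeholders.

```python
# Hypothetical sumo_logic destination config (all values illustrative).
sumo_destination = {
    "id": "sumo-out",
    "type": "sumo_logic",
    "inputs": ["parse-logs"],
    "encoding": "logfmt",                     # json, raw_message, or logfmt
    "endpoint_url_key": "SUMO_HTTP_URL",      # env var / secret name
    "header_source_category": "prod/web",
    "header_source_name": "pipeline",
    "header_custom_fields": [
        {"name": "X-Team", "value": "platform"},
    ],
}
```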
Option 22
object
The syslog_ng destination forwards logs to an external syslog-ng server over TCP or UDP using the syslog protocol.
Supported pipeline types: logs
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
endpoint_url_key
string
Name of the environment variable or secret that holds the syslog-ng server endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
keepalive
int64
Optional socket keepalive duration in milliseconds.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The destination type. The value should always be syslog_ng.
Allowed enum values: syslog_ng
default: syslog_ng
Option 23
object
The datadog_metrics destination forwards metrics to Datadog.
Supported pipeline types: metrics
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The destination type. The value should always be datadog_metrics.
Allowed enum values: datadog_metrics
default: datadog_metrics
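The datadog_metrics destination is the simplest in this list, carrying only the common component fields; the IDs below are hypothetical placeholders.

```python
# Hypothetical datadog_metrics destination config (IDs illustrative).
# Note: this destination requires pipeline_type to be "metrics".
metrics_destination = {
    "id": "dd-metrics-out",
    "type": "datadog_metrics",
    "inputs": ["aggregate-metrics"],
}
```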
pipeline_type
enum
The type of data being ingested. Defaults to logs if not specified.
Allowed enum values: logs,metrics
default: logs
processor_groups
[object]
A list of processor groups that transform or enrich log data.
display_name
string
The display name for a component.
enabled [required]
boolean
Whether this processor group is enabled.
id [required]
string
The unique identifier for the processor group.
include [required]
string
Conditional expression for when this processor group should execute.
inputs [required]
[string]
A list of IDs for components whose output is used as the input for this processor group.
processors [required]
[ <oneOf>]
Processors applied sequentially within this group. Events flow through each processor in order.
Option 1
object
The filter processor allows conditional processing of logs/metrics based on a Datadog search query. Logs/metrics that match the include query are passed through; others are discarded.
Supported pipeline types: logs, metrics
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs/metrics should pass through the filter. Logs/metrics that match this query continue to downstream components; others are dropped.
type [required]
enum
The processor type. The value should always be filter.
Allowed enum values: filter
default: filter
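A processor group wrapping a single filter processor, as a sketch of how the group-level fields and the processor oneOf fit together; the IDs and search queries are hypothetical placeholders.

```python
# Hypothetical processor group containing one filter processor.
processor_group = {
    "id": "group-errors",
    "enabled": True,
    "include": "service:web",          # condition for the group to execute
    "inputs": ["datadog-agent-source"],
    "processors": [
        {   # Option 1: filter — matching events pass; others are dropped.
            "id": "filter-errors",
            "type": "filter",
            "enabled": True,
            "include": "status:error",
        }
    ],
}
```

Processors inside a group run sequentially, so each processor's `include` query narrows what the next one sees.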
Option 2
object
The add_env_vars processor adds environment variable values to log events.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used to reference this processor in the pipeline.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be add_env_vars.
Allowed enum values: add_env_vars
default: add_env_vars
variables [required]
[object]
A list of environment variable mappings to apply to log fields.
field [required]
string
The target field in the log event.
name [required]
string
The name of the environment variable to read.
Option 3
object
The add_fields processor adds static key-value fields to logs.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[object]
A list of static fields (key-value pairs) added to each log event processed by this component.
name [required]
string
The field name.
value [required]
string
The field value.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be add_fields.
Allowed enum values: add_fields
default: add_fields
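An add_fields processor per the schema above; the field names and values are hypothetical placeholders.

```python
# Hypothetical add_fields processor config (all values illustrative).
add_fields_processor = {
    "id": "tag-env",
    "type": "add_fields",
    "enabled": True,
    "include": "*",                    # target all logs
    "fields": [
        {"name": "env", "value": "production"},
        {"name": "team", "value": "platform"},
    ],
}
```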
Option 4
object
The add_hostname processor adds the hostname to log events.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be add_hostname.
Allowed enum values: add_hostname
default: add_hostname
Option 5
object
The custom_processor processor transforms events using Vector Remap Language (VRL) scripts with advanced filtering capabilities.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets. This field should always be set to * for the custom_processor processor.
default: *
remaps [required]
[object]
Array of VRL remap rules.
drop_on_error [required]
boolean
Whether to drop events that caused errors during processing.
enabled
boolean
Whether this remap rule is enabled.
include [required]
string
A Datadog search query used to filter events for this specific remap rule.
name [required]
string
A descriptive name for this remap rule.
source [required]
string
The VRL script source code that defines the processing logic.
type [required]
enum
The processor type. The value should always be custom_processor.
Allowed enum values: custom_processor
default: custom_processor
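The remaps array above can be sketched as follows. The VRL script, rule name, and query are illustrative assumptions only:

```python
# Hypothetical custom_processor config; the VRL script and names are illustrative.
custom_processor = {
    "type": "custom_processor",
    "id": "vrl-1",
    "enabled": True,
    "include": "*",  # must always be "*" for custom_processor
    "remaps": [
        {
            "name": "normalize status",   # descriptive rule name
            "include": "source:nginx",    # filters events for this specific rule
            "enabled": True,
            "drop_on_error": False,       # keep events that fail the script
            "source": '.status = downcase(string!(.status))',  # VRL script body
        }
    ],
}
```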
Option 6
object
The datadog_tags processor includes or excludes specific Datadog tags in your logs.
Supported pipeline types: logs
action [required]
enum
The action to take on tags with matching keys.
Allowed enum values: include,exclude
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
keys [required]
[string]
A list of tag keys.
mode [required]
enum
The processing mode.
Allowed enum values: filter
type [required]
enum
The processor type. The value should always be datadog_tags.
Allowed enum values: datadog_tags
default: datadog_tags
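A minimal datadog_tags configuration, assuming placeholder IDs and tag keys, could look like:

```python
# Hypothetical datadog_tags processor config keeping only selected tag keys.
datadog_tags_processor = {
    "type": "datadog_tags",
    "id": "tags-1",
    "enabled": True,
    "include": "*",
    "action": "include",   # keep tags with matching keys; "exclude" drops them
    "mode": "filter",      # the only allowed value
    "keys": ["env", "service", "version"],
}
```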
Option 7
object
The dedupe processor removes duplicate fields in log events.
Supported pipeline types: logs
cache
object
Configuration for the cache used to detect duplicates.
num_events [required]
int64
The number of events to cache for duplicate detection.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[string]
A list of log field paths to check for duplicates.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
mode [required]
enum
The deduplication mode to apply to the fields.
Allowed enum values: match,ignore
type [required]
enum
The processor type. The value should always be dedupe.
Allowed enum values: dedupe
default: dedupe
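A dedupe configuration built from the fields above might look like this sketch; the field paths and cache size are placeholders:

```python
# Hypothetical dedupe processor config; field paths are placeholders.
dedupe_processor = {
    "type": "dedupe",
    "id": "dedupe-1",
    "enabled": True,
    "include": "*",
    "mode": "match",                 # dedupe on the listed fields ("ignore" inverts)
    "fields": ["message", "host"],   # log field paths checked for duplicates
    "cache": {"num_events": 5000},   # optional: events cached for duplicate detection
}
```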
Option 8
object
The enrichment_table processor enriches logs using a static CSV file, GeoIP database, or reference table. Exactly one of file, geoip, or reference_table must be configured.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
file
object
Defines a static enrichment table loaded from a CSV file.
encoding [required]
object
File encoding format.
delimiter [required]
string
The delimiter character used to separate CSV fields.
includes_headers [required]
boolean
Whether the CSV file includes a header row.
type [required]
enum
Specifies the encoding format (e.g., CSV) used for enrichment tables.
Allowed enum values: csv
key [required]
[object]
Key fields used to look up enrichment values.
column [required]
string
The name of the column in the enrichment table.
comparison [required]
enum
Defines how to compare key fields for enrichment table lookups.
Allowed enum values: equals
field [required]
string
The log field whose value is matched against the table column.
path [required]
string
Path to the CSV file.
schema [required]
[object]
Schema defining column names and their types.
column [required]
string
The name of the column as defined in the CSV schema.
type [required]
enum
Declares allowed data types for enrichment table columns.
Allowed enum values: string,boolean,integer,float,date,timestamp
geoip
object
Uses a GeoIP database to enrich logs based on an IP field.
key_field [required]
string
Path to the IP field in the log.
locale [required]
string
Locale used to resolve geographical names.
path [required]
string
Path to the GeoIP database file.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
reference_table
object
Uses a Datadog reference table to enrich logs.
app_key_key
string
Name of the environment variable or secret that holds the Datadog application key used to access the reference table.
columns
[string]
List of column names to include from the reference table. If not provided, all columns are included.
key_field [required]
string
Path to the field in the log event to match against the reference table.
table_id [required]
string
The unique identifier of the reference table.
target [required]
string
Path where enrichment results should be stored in the log.
type [required]
enum
The processor type. The value should always be enrichment_table.
Allowed enum values: enrichment_table
default: enrichment_table
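Since exactly one of file, geoip, or reference_table must be configured, a sketch using the CSV file variant could look like the following. The path, column names, and queries are placeholder assumptions:

```python
# Hypothetical enrichment_table config using the CSV file variant; exactly one of
# file, geoip, or reference_table may be set. Paths and names are placeholders.
enrichment_processor = {
    "type": "enrichment_table",
    "id": "enrich-1",
    "enabled": True,
    "include": "*",
    "target": "attributes.owner",    # where enrichment results are written
    "file": {
        "path": "/etc/tables/owners.csv",
        "encoding": {"type": "csv", "delimiter": ",", "includes_headers": True},
        "key": [
            {"column": "service", "comparison": "equals", "field": "service"},
        ],
        "schema": [
            {"column": "service", "type": "string"},
            {"column": "owner", "type": "string"},
        ],
    },
}
```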
Option 9
object
The generate_datadog_metrics processor creates custom metrics from logs and sends them to Datadog.
Metrics can be counters, gauges, or distributions and optionally grouped by log fields.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used to reference this component in other parts of the pipeline.
include
string
A Datadog search query used to determine which logs this processor targets.
metrics
[object]
Configuration for generating individual metrics.
group_by
[string]
Optional fields used to group the metric series.
include [required]
string
Datadog filter query to match logs for metric generation.
metric_type [required]
enum
Type of metric to create.
Allowed enum values: count,gauge,distribution
name [required]
string
Name of the custom metric to be created.
value [required]
<oneOf>
Specifies how the value of the generated metric is computed.
Option 1
object
Strategy that increments a generated metric by one for each matching event.
strategy [required]
enum
Increments the metric by 1 for each matching event.
Allowed enum values: increment_by_one
Option 2
object
Strategy that increments a generated metric based on the value of a log field.
field [required]
string
Name of the log field containing the numeric value to increment the metric by.
strategy [required]
enum
Uses a numeric field in the log event as the metric increment.
Allowed enum values: increment_by_field
type [required]
enum
The processor type. Always generate_datadog_metrics.
Allowed enum values: generate_datadog_metrics
default: generate_datadog_metrics
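The two value strategies (increment_by_one and increment_by_field) can be contrasted in one sketch. Metric names, queries, and the numeric field are illustrative placeholders:

```python
# Hypothetical generate_datadog_metrics config; names and queries are placeholders.
metrics_processor = {
    "type": "generate_datadog_metrics",
    "id": "gen-metrics-1",
    "enabled": True,
    "include": "*",
    "metrics": [
        {
            "name": "logs.request_duration",
            "metric_type": "distribution",
            "include": "service:api",            # logs matched for this metric
            "group_by": ["resource", "status"],  # optional grouping fields
            # Option 2 strategy: use a numeric log field as the increment
            "value": {"strategy": "increment_by_field", "field": "duration_ms"},
        },
        {
            "name": "logs.error_count",
            "metric_type": "count",
            "include": "status:error",
            # Option 1 strategy: +1 per matching event
            "value": {"strategy": "increment_by_one"},
        },
    ],
}
```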
Option 10
object
The ocsf_mapper processor transforms logs into the OCSF schema using a predefined mapping configuration.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used to reference this component in other parts of the pipeline.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
mappings [required]
[object]
A list of mapping rules to convert events to the OCSF format.
include [required]
string
A Datadog search query used to select the logs that this mapping should apply to.
mapping [required]
<oneOf>
Defines a single mapping rule for transforming logs into the OCSF schema.
Custom OCSF mapping configuration for transforming logs.
mapping [required]
[object]
A list of field mapping rules for transforming log fields to OCSF schema fields.
default
The default value to use if the source field is missing or empty.
dest [required]
string
The destination OCSF field path.
lookup
object
Lookup table configuration for mapping source values to destination values.
source
The source field path from the log event.
sources
Multiple source field paths for combined mapping.
value
A static value to use for the destination field.
metadata [required]
object
Metadata for the custom OCSF mapping.
class [required]
string
The OCSF event class name.
profiles
[string]
A list of OCSF profiles to apply.
version [required]
string
The OCSF schema version.
version [required]
int64
The version of the custom mapping configuration.
type [required]
enum
The processor type. The value should always be ocsf_mapper.
Allowed enum values: ocsf_mapper
default: ocsf_mapper
Option 11
object
The parse_grok processor extracts structured fields from unstructured log messages using Grok patterns.
Supported pipeline types: logs
disable_library_rules
boolean
If set to true, disables the default Grok rules provided by Datadog.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
A unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
rules [required]
[object]
The list of Grok parsing rules. If multiple matching rules are provided, they are evaluated in order. The first successful match is applied.
match_rules [required]
[object]
A list of Grok parsing rules that define how to extract fields from the source field.
Each rule must contain a name and a valid Grok pattern.
name [required]
string
The name of the rule.
rule [required]
string
The definition of the Grok rule.
source [required]
string
The name of the field in the log event to apply the Grok rules to.
support_rules
[object]
A list of Grok helper rules that can be referenced by the parsing rules.
name [required]
string
The name of the Grok helper rule.
rule [required]
string
The definition of the Grok helper rule.
type [required]
enum
The processor type. The value should always be parse_grok.
Allowed enum values: parse_grok
default: parse_grok
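A parse_grok configuration following the rules/match_rules/support_rules structure above might be sketched like this; the Grok pattern and rule names are illustrative only:

```python
# Hypothetical parse_grok config; the pattern and rule names are illustrative.
grok_processor = {
    "type": "parse_grok",
    "id": "grok-1",
    "enabled": True,
    "include": "source:nginx",
    "disable_library_rules": False,  # keep Datadog's default Grok rules
    "rules": [
        {
            "source": "message",     # log field the Grok rules apply to
            "match_rules": [
                {
                    "name": "access_log",
                    "rule": "%{ip:client_ip} %{word:method} %{notSpace:path}",
                }
            ],
            "support_rules": [],     # optional helper rules referenced by match rules
        }
    ],
}
```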
Option 12
object
The parse_json processor extracts JSON from a specified field and flattens it into the event. This is useful when logs contain embedded JSON as a string.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
field [required]
string
The name of the log field that contains a JSON string.
id [required]
string
A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be parse_json.
Allowed enum values: parse_json
default: parse_json
Option 13
object
The parse_xml processor parses XML from a specified field and extracts it into the event.
Supported pipeline types: logs
always_use_text_key
boolean
Whether to always use a text key for element content.
attr_prefix
string
The prefix to use for XML attributes in the parsed output.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
field [required]
string
The name of the log field that contains an XML string.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
include_attr
boolean
Whether to include XML attributes in the parsed output.
parse_bool
boolean
Whether to parse boolean values from strings.
parse_null
boolean
Whether to parse null values.
parse_number
boolean
Whether to parse numeric values from strings.
text_key
string
The key name to use for text content within XML elements. Must be at least 1 character if specified.
type [required]
enum
The processor type. The value should always be parse_xml.
Allowed enum values: parse_xml
default: parse_xml
Option 14
object
The quota processor measures logging traffic for logs that match a specified filter. When the configured daily quota is met, the processor can drop or alert.
Supported pipeline types: logs
display_name
string
The display name for a component.
drop_events
boolean
If set to true, logs that match the quota filter and are sent after the quota is exceeded are dropped. Logs that do not match the filter continue through the pipeline. Note: You can set either drop_events or overflow_action, but not both.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
ignore_when_missing_partitions
boolean
If true, the processor skips quota checks when partition fields are missing from the logs.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
limit [required]
object
The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.
enforce [required]
enum
The unit for quota enforcement: bytes for data size or events for event count.
Allowed enum values: bytes,events
limit [required]
int64
The limit for quota enforcement.
name [required]
string
Name of the quota.
overflow_action
enum
The action to take when the quota or bucket limit is exceeded. Options:
drop: Drop the event.
no_action: Let the event pass through.
overflow_routing: Route to an overflow destination.
Allowed enum values: drop,no_action,overflow_routing
overrides
[object]
A list of alternate quota rules that apply to specific sets of events, identified by matching field values. Each override can define a custom limit.
fields [required]
[object]
A list of field matchers used to apply a specific override. If an event matches all listed key-value pairs, the corresponding override limit is enforced.
name [required]
string
The field name.
value [required]
string
The field value.
limit [required]
object
The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.
enforce [required]
enum
The unit for quota enforcement: bytes for data size or events for event count.
Allowed enum values: bytes,events
limit [required]
int64
The limit for quota enforcement.
partition_fields
[string]
A list of fields used to segment log traffic for quota enforcement. Quotas are tracked independently by unique combinations of these field values.
too_many_buckets_action
enum
The action to take when the quota or bucket limit is exceeded. Options:
drop: Drop the event.
no_action: Let the event pass through.
overflow_routing: Route to an overflow destination.
Allowed enum values: drop,no_action,overflow_routing
type [required]
enum
The processor type. The value should always be quota.
Allowed enum values: quota
default: quota
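A quota configuration with a per-service partition and one override can be sketched as below. Note it sets overflow_action rather than drop_events, since the two are mutually exclusive; all names and limits are placeholders:

```python
# Hypothetical quota processor config with a per-service partition and one override.
quota_processor = {
    "type": "quota",
    "id": "quota-1",
    "enabled": True,
    "include": "*",
    "name": "daily-ingest-quota",
    "limit": {"enforce": "bytes", "limit": 10_000_000_000},   # 10 GB daily quota
    "partition_fields": ["service"],         # quotas tracked per unique service
    "ignore_when_missing_partitions": True,  # skip checks when "service" is absent
    "overflow_action": "drop",               # drop, no_action, or overflow_routing
    "overrides": [
        {
            "fields": [{"name": "service", "value": "payments"}],
            "limit": {"enforce": "bytes", "limit": 50_000_000_000},  # larger quota
        }
    ],
}
```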
Option 15
object
The reduce processor aggregates and merges logs based on matching keys and merge strategies.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
group_by [required]
[string]
A list of fields used to group log events for merging.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
merge_strategies [required]
[object]
List of merge strategies defining how values from grouped events should be combined.
path [required]
string
The field path in the log event.
strategy [required]
enum
The merge strategy to apply.
Allowed enum values: discard,retain,sum,max,min,array,concat,concat_newline,concat_raw,shortest_array,longest_array,flat_unique
type [required]
enum
The processor type. The value should always be reduce.
Allowed enum values: reduce
default: reduce
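The group_by and merge_strategies fields combine as in this sketch; the grouping key and field paths are placeholder assumptions:

```python
# Hypothetical reduce processor config merging grouped events.
reduce_processor = {
    "type": "reduce",
    "id": "reduce-1",
    "enabled": True,
    "include": "*",
    "group_by": ["request_id"],   # events with the same request_id are merged
    "merge_strategies": [
        {"path": "message", "strategy": "concat_newline"},  # join messages
        {"path": "duration_ms", "strategy": "sum"},         # total the durations
        {"path": "status", "strategy": "retain"},           # retain a single value
    ],
}
```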
Option 16
object
The remove_fields processor deletes specified fields from logs.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[string]
A list of field names to be removed from each log event.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be remove_fields.
Allowed enum values: remove_fields
default: remove_fields
Option 17
object
The rename_fields processor changes field names.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[object]
A list of rename rules specifying which fields to rename in the event, what to rename them to, and whether to preserve the original fields.
destination [required]
string
The field name to assign the renamed value to.
preserve_source [required]
boolean
Indicates whether the original source field should be kept (true) or removed (false) after renaming.
source [required]
string
The original field name in the log event that should be renamed.
id [required]
string
A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be rename_fields.
Allowed enum values: rename_fields
default: rename_fields
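A rename_fields configuration, with placeholder field names, might look like:

```python
# Hypothetical rename_fields processor config; field names are placeholders.
rename_processor = {
    "type": "rename_fields",
    "id": "rename-1",
    "enabled": True,
    "include": "*",
    "fields": [
        {
            "source": "msg",           # original field name in the log event
            "destination": "message",  # field name to assign the value to
            "preserve_source": False,  # drop the original field after renaming
        }
    ],
}
```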
Option 18
object
The sample processor allows probabilistic sampling of logs at a fixed rate.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
group_by
[string]
Optional list of fields to group events by. Each group is sampled independently.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
percentage [required]
double
The percentage of logs to sample.
type [required]
enum
The processor type. The value should always be sample.
Allowed enum values: sample
default: sample
Option 19
object
The sensitive_data_scanner processor detects and optionally redacts sensitive data in log events.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
rules [required]
[object]
A list of rules for identifying and acting on sensitive data patterns.
keyword_options
object
Configuration for keywords used to reinforce sensitive data pattern detection.
keywords [required]
[string]
A list of keywords to match near the sensitive pattern.
proximity [required]
int64
Maximum number of tokens between a keyword and a sensitive value match.
name [required]
string
A name identifying the rule.
on_match [required]
<oneOf>
Defines what action to take when sensitive data is matched.
Option 1
object
Configuration for completely redacting matched sensitive data.
action [required]
enum
Action type that completely replaces the matched sensitive data with a fixed replacement string to remove all visibility.
Allowed enum values: redact
options [required]
object
Configuration for fully redacting sensitive data.
replace [required]
string
The string that replaces the matched sensitive data.
Option 2
object
Configuration for hashing matched sensitive values.
action [required]
enum
Action type that replaces the matched sensitive data with a hashed representation, preserving structure while securing content.
Allowed enum values: hash
options
object
Additional options for the hash action.
Option 3
object
Configuration for partially redacting matched sensitive data.
action [required]
enum
Action type that redacts part of the sensitive data while preserving a configurable number of characters, typically used for masking purposes (e.g., show last 4 digits of a credit card).
Allowed enum values: partial_redact
options [required]
object
Controls how partial redaction is applied, including character count and direction.
characters [required]
int64
The number of characters to redact.
direction [required]
enum
Indicates whether to redact characters from the first or last part of the matched value.
Allowed enum values: first,last
pattern [required]
<oneOf>
Pattern detection configuration for identifying sensitive data using either a custom regex or a library reference.
Option 1
object
Defines a custom regex-based pattern for identifying sensitive data in logs.
options [required]
object
Options for defining a custom regex pattern.
description
string
Human-readable description providing context about a sensitive data scanner rule.
rule [required]
string
A regular expression used to detect sensitive values. Must be a valid regex.
type [required]
enum
Indicates a custom regular expression is used for matching.
Allowed enum values: custom
Option 2
object
Specifies a pattern from Datadog’s sensitive data detection library to match known sensitive data types.
options [required]
object
Options for selecting a predefined library pattern and enabling keyword support.
description
string
Human-readable description providing context about a sensitive data scanner rule.
id [required]
string
Identifier for a predefined pattern from the sensitive data scanner pattern library.
use_recommended_keywords
boolean
Whether to augment the pattern with recommended keywords (optional).
type [required]
enum
Indicates that a predefined library pattern is used.
Allowed enum values: library
scope [required]
<oneOf>
Determines which parts of the log the pattern-matching rule should be applied to.
Option 1
object
Includes only specific fields for sensitive data scanning.
options [required]
object
Fields to which the scope rule applies.
fields [required]
[string]
The list of log field paths to which the scope rule applies.
target [required]
enum
Applies the rule only to included fields.
Allowed enum values: include
Option 2
object
Excludes specific fields from sensitive data scanning.
options [required]
object
Fields to which the scope rule applies.
fields [required]
[string]
The list of log field paths to which the scope rule applies.
target [required]
enum
Excludes specific fields from processing.
Allowed enum values: exclude
Option 3
object
Applies scanning across all available fields.
target [required]
enum
Applies the rule to all fields.
Allowed enum values: all
tags [required]
[string]
Tags assigned to this rule for filtering and classification.
type [required]
enum
The processor type. The value should always be sensitive_data_scanner.
Allowed enum values: sensitive_data_scanner
default: sensitive_data_scanner
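The three oneOf choices (pattern, scope, on_match) compose as in this sketch of a single custom-regex rule that partially redacts matches. The regex, field names, and character count are illustrative assumptions, not recommended values:

```python
# Hypothetical sensitive_data_scanner config with one custom-regex rule.
sds_processor = {
    "type": "sensitive_data_scanner",
    "id": "sds-1",
    "enabled": True,
    "include": "*",
    "rules": [
        {
            "name": "mask-card-numbers",
            "tags": ["pii"],
            "pattern": {        # custom regex variant (vs. a library pattern)
                "type": "custom",
                "options": {"rule": r"\b\d{13,16}\b"},
            },
            "scope": {          # scan only the listed fields
                "target": "include",
                "options": {"fields": ["message"]},
            },
            "on_match": {       # redact 12 characters from the start of the match
                "action": "partial_redact",
                "options": {"characters": 12, "direction": "first"},
            },
        }
    ],
}
```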
Option 20
object
The split_array processor splits array fields into separate events based on configured rules.
Supported pipeline types: logs
arrays [required]
[object]
A list of array split configurations.
field [required]
string
The path to the array field to split.
include [required]
string
A Datadog search query used to determine which logs this array split operation targets.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets. For split_array, this should typically be *.
type [required]
enum
The processor type. The value should always be split_array.
Allowed enum values: split_array
default: split_array
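A split_array configuration, with placeholder field paths and queries, could be sketched as:

```python
# Hypothetical split_array processor config; field paths are placeholders.
split_processor = {
    "type": "split_array",
    "id": "split-1",
    "enabled": True,
    "include": "*",   # typically "*" for split_array
    "arrays": [
        {
            "field": "records",       # array field split into separate events
            "include": "source:aws",  # logs targeted by this split rule
        }
    ],
}
```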
Option 21
object
The throttle processor limits the number of events that pass through over a given time window.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
group_by
[string]
Optional list of fields used to group events. The threshold is applied to each group independently.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
threshold [required]
int64
The number of events allowed in a given time window. Events sent after the threshold has been reached are dropped.
type [required]
enum
The processor type. The value should always be throttle.
Allowed enum values: throttle
default: throttle
window [required]
double
The time window in seconds over which the threshold applies.
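Putting threshold, window, and the optional group_by together, a throttle sketch (with placeholder values) might read:

```python
# Hypothetical throttle processor config: 100 events per 60 s, tracked per service.
throttle_processor = {
    "type": "throttle",
    "id": "throttle-1",
    "enabled": True,
    "include": "*",
    "threshold": 100,          # events allowed per window; the rest are dropped
    "window": 60.0,            # window length in seconds
    "group_by": ["service"],   # optional: throttle each service independently
}
```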
Option 22
object
The metric_tags processor filters metrics based on their tags using Datadog tag key patterns.
Supported pipeline types: metrics
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query that determines which metrics the processor targets.
rules [required]
[object]
A list of rules for filtering metric tags.
action [required]
enum
The action to take on tags with matching keys.
Allowed enum values: include,exclude
include [required]
string
A Datadog search query used to determine which metrics this rule targets.
keys [required]
[string]
A list of tag keys to include or exclude.
mode [required]
enum
The processing mode for tag filtering.
Allowed enum values: filter
type [required]
enum
The processor type. The value should always be metric_tags.
Allowed enum values: metric_tags
default: metric_tags
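A metric_tags configuration (metrics pipelines only) following the rules structure above can be sketched as below; the query and tag keys are placeholder assumptions:

```python
# Hypothetical metric_tags processor config (metrics pipelines only).
metric_tags_processor = {
    "type": "metric_tags",
    "id": "metric-tags-1",
    "enabled": True,
    "include": "*",
    "rules": [
        {
            "action": "exclude",                # drop tags with matching keys
            "mode": "filter",                   # the only allowed value
            "include": "metric:system.cpu.*",   # metrics this rule targets
            "keys": ["pod_name", "container_id"],
        }
    ],
}
```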
processors
[object]
DEPRECATED: A list of processor groups that transform or enrich log data.
Deprecated: Use the processor_groups field instead.
display_name
string
The display name for a component.
enabled [required]
boolean
Whether this processor group is enabled.
id [required]
string
The unique identifier for the processor group.
include [required]
string
Conditional expression for when this processor group should execute.
inputs [required]
[string]
A list of IDs for components whose output is used as the input for this processor group.
processors [required]
[ <oneOf>]
Processors applied sequentially within this group. Events flow through each processor in order.
Option 1
object
The filter processor allows conditional processing of logs/metrics based on a Datadog search query. Logs/metrics that match the include query are passed through; others are discarded.
Supported pipeline types: logs, metrics
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs/metrics should pass through the filter. Logs/metrics that match this query continue to downstream components; others are dropped.
type [required]
enum
The processor type. The value should always be filter.
Allowed enum values: filter
default: filter
Option 2
object
The add_env_vars processor adds environment variable values to log events.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used to reference this processor in the pipeline.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be add_env_vars.
Allowed enum values: add_env_vars
default: add_env_vars
variables [required]
[object]
A list of environment variable mappings to apply to log fields.
field [required]
string
The target field in the log event.
name [required]
string
The name of the environment variable to read.
Option 3
object
The add_fields processor adds static key-value fields to logs.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[object]
A list of static fields (key-value pairs) that are added to each log event processed by this component.
name [required]
string
The field name.
value [required]
string
The field value.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be add_fields.
Allowed enum values: add_fields
default: add_fields
Option 4
object
The add_hostname processor adds the hostname to log events.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be add_hostname.
Allowed enum values: add_hostname
default: add_hostname
Option 5
object
The custom_processor processor transforms events using Vector Remap Language (VRL) scripts with advanced filtering capabilities.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets. This field should always be set to * for the custom_processor processor.
default: *
remaps [required]
[object]
Array of VRL remap rules.
drop_on_error [required]
boolean
Whether to drop events that caused errors during processing.
enabled
boolean
Whether this remap rule is enabled.
include [required]
string
A Datadog search query used to filter events for this specific remap rule.
name [required]
string
A descriptive name for this remap rule.
source [required]
string
The VRL script source code that defines the processing logic.
type [required]
enum
The processor type. The value should always be custom_processor.
Allowed enum values: custom_processor
default: custom_processor
Option 6
object
The datadog_tags processor includes or excludes specific Datadog tags in your logs.
Supported pipeline types: logs
action [required]
enum
The action to take on tags with matching keys.
Allowed enum values: include,exclude
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
keys [required]
[string]
A list of tag keys.
mode [required]
enum
The processing mode.
Allowed enum values: filter
type [required]
enum
The processor type. The value should always be datadog_tags.
Allowed enum values: datadog_tags
default: datadog_tags
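For example, a datadog_tags processor that strips two tag keys from all matching logs could be sketched as follows (values are illustrative):

```json
{
  "type": "datadog_tags",
  "id": "datadog-tags-1",
  "enabled": true,
  "include": "*",
  "action": "exclude",
  "mode": "filter",
  "keys": ["env", "team"]
}
```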
Option 7
object
The dedupe processor removes duplicate log events based on the values of the specified fields.
Supported pipeline types: logs
cache
object
Configuration for the cache used to detect duplicates.
num_events [required]
int64
The number of events to cache for duplicate detection.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[string]
A list of log field paths to check for duplicates.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
mode [required]
enum
The deduplication mode to apply to the fields.
Allowed enum values: match,ignore
type [required]
enum
The processor type. The value should always be dedupe.
Allowed enum values: dedupe
default: dedupe
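Combining the fields above, a dedupe processor that drops events whose message and host match a recently seen event might look like this sketch (values are illustrative):

```json
{
  "type": "dedupe",
  "id": "dedupe-1",
  "enabled": true,
  "include": "*",
  "mode": "match",
  "fields": ["message", "host"],
  "cache": { "num_events": 5000 }
}
```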
Option 8
object
The enrichment_table processor enriches logs using a static CSV file, GeoIP database, or reference table. Exactly one of file, geoip, or reference_table must be configured.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
file
object
Defines a static enrichment table loaded from a CSV file.
encoding [required]
object
File encoding format.
delimiter [required]
string
The delimiter used to separate fields in the CSV file.
includes_headers [required]
boolean
Whether the CSV file includes a header row.
type [required]
enum
Specifies the encoding format (e.g., CSV) used for enrichment tables.
Allowed enum values: csv
key [required]
[object]
Key fields used to look up enrichment values.
column [required]
string
The column in the enrichment table used for the lookup.
comparison [required]
enum
Defines how to compare key fields for enrichment table lookups.
Allowed enum values: equals
field [required]
string
The path of the log field whose value is matched against the column.
path [required]
string
Path to the CSV file.
schema [required]
[object]
Schema defining column names and their types.
column [required]
string
The name of the column as it appears in the CSV file.
type [required]
enum
Declares allowed data types for enrichment table columns.
Allowed enum values: string,boolean,integer,float,date,timestamp
geoip
object
Uses a GeoIP database to enrich logs based on an IP field.
key_field [required]
string
Path to the IP field in the log.
locale [required]
string
Locale used to resolve geographical names.
path [required]
string
Path to the GeoIP database file.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
reference_table
object
Uses a Datadog reference table to enrich logs.
app_key_key
string
Name of the environment variable or secret that holds the Datadog application key used to access the reference table.
columns
[string]
List of column names to include from the reference table. If not provided, all columns are included.
key_field [required]
string
Path to the field in the log event to match against the reference table.
table_id [required]
string
The unique identifier of the reference table.
target [required]
string
Path where enrichment results should be stored in the log.
type [required]
enum
The processor type. The value should always be enrichment_table.
Allowed enum values: enrichment_table
default: enrichment_table
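As a sketch, a file-based enrichment_table processor that joins a CSV of service owners onto each log might be configured as follows (the path, column names, and queries are illustrative):

```json
{
  "type": "enrichment_table",
  "id": "enrichment-table-1",
  "enabled": true,
  "include": "*",
  "target": "enriched",
  "file": {
    "path": "/etc/enrichment/services.csv",
    "encoding": {
      "type": "csv",
      "delimiter": ",",
      "includes_headers": true
    },
    "key": [
      {
        "column": "service_name",
        "comparison": "equals",
        "field": "service"
      }
    ],
    "schema": [
      { "column": "service_name", "type": "string" },
      { "column": "owner", "type": "string" }
    ]
  }
}
```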
Option 9
object
The generate_datadog_metrics processor creates custom metrics from logs and sends them to Datadog.
Metrics can be counters, gauges, or distributions and optionally grouped by log fields.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used to reference this component in other parts of the pipeline.
include
string
A Datadog search query used to determine which logs this processor targets.
metrics
[object]
Configuration for generating individual metrics.
group_by
[string]
Optional fields used to group the metric series.
include [required]
string
Datadog filter query to match logs for metric generation.
metric_type [required]
enum
Type of metric to create.
Allowed enum values: count,gauge,distribution
name [required]
string
Name of the custom metric to be created.
value [required]
<oneOf>
Specifies how the value of the generated metric is computed.
Option 1
object
Strategy that increments a generated metric by one for each matching event.
strategy [required]
enum
Increments the metric by 1 for each matching event.
Allowed enum values: increment_by_one
Option 2
object
Strategy that increments a generated metric based on the value of a log field.
field [required]
string
Name of the log field containing the numeric value to increment the metric by.
strategy [required]
enum
Uses a numeric field in the log event as the metric increment.
Allowed enum values: increment_by_field
type [required]
enum
The processor type. The value should always be generate_datadog_metrics.
Allowed enum values: generate_datadog_metrics
default: generate_datadog_metrics
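Putting these fields together, a generate_datadog_metrics processor that counts error logs per service could be sketched like this (the metric name and queries are illustrative):

```json
{
  "type": "generate_datadog_metrics",
  "id": "generate-metrics-1",
  "enabled": true,
  "include": "*",
  "metrics": [
    {
      "name": "app.checkout.errors",
      "metric_type": "count",
      "include": "status:error",
      "group_by": ["service"],
      "value": { "strategy": "increment_by_one" }
    }
  ]
}
```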
Option 10
object
The ocsf_mapper processor transforms logs into the OCSF schema using a predefined mapping configuration.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used to reference this component in other parts of the pipeline.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
mappings [required]
[object]
A list of mapping rules to convert events to the OCSF format.
include [required]
string
A Datadog search query used to select the logs that this mapping should apply to.
mapping [required]
<oneOf>
Defines a single mapping rule for transforming logs into the OCSF schema.
Custom OCSF mapping configuration for transforming logs.
mapping [required]
[object]
A list of field mapping rules for transforming log fields to OCSF schema fields.
default
The default value to use if the source field is missing or empty.
dest [required]
string
The destination OCSF field path.
lookup
object
Lookup table configuration for mapping source values to destination values.
source
The source field path from the log event.
sources
Multiple source field paths for combined mapping.
value
A static value to use for the destination field.
metadata [required]
object
Metadata for the custom OCSF mapping.
class [required]
string
The OCSF event class name.
profiles
[string]
A list of OCSF profiles to apply.
version [required]
string
The OCSF schema version.
version [required]
int64
The version of the custom mapping configuration.
type [required]
enum
The processor type. The value should always be ocsf_mapper.
Allowed enum values: ocsf_mapper
default: ocsf_mapper
Option 11
object
The parse_grok processor extracts structured fields from unstructured log messages using Grok patterns.
Supported pipeline types: logs
disable_library_rules
boolean
If set to true, disables the default Grok rules provided by Datadog.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
A unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
rules [required]
[object]
The list of Grok parsing rules. If multiple matching rules are provided, they are evaluated in order. The first successful match is applied.
match_rules [required]
[object]
A list of Grok parsing rules that define how to extract fields from the source field.
Each rule must contain a name and a valid Grok pattern.
name [required]
string
The name of the rule.
rule [required]
string
The definition of the Grok rule.
source [required]
string
The name of the field in the log event to apply the Grok rules to.
support_rules
[object]
A list of Grok helper rules that can be referenced by the parsing rules.
name [required]
string
The name of the Grok helper rule.
rule [required]
string
The definition of the Grok helper rule.
type [required]
enum
The processor type. The value should always be parse_grok.
Allowed enum values: parse_grok
default: parse_grok
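A minimal parse_grok processor built from the fields above might look like the following sketch (the rule name and Grok pattern are illustrative):

```json
{
  "type": "parse_grok",
  "id": "parse-grok-1",
  "enabled": true,
  "include": "*",
  "disable_library_rules": false,
  "rules": [
    {
      "source": "message",
      "match_rules": [
        {
          "name": "access_log",
          "rule": "%{ip:client_ip} %{word:http_method} %{notSpace:path}"
        }
      ],
      "support_rules": []
    }
  ]
}
```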
Option 12
object
The parse_json processor extracts JSON from a specified field and flattens it into the event. This is useful when logs contain embedded JSON as a string.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
field [required]
string
The name of the log field that contains a JSON string.
id [required]
string
A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be parse_json.
Allowed enum values: parse_json
default: parse_json
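For example, a parse_json processor that expands an embedded JSON string in the message field could be sketched as (values are illustrative):

```json
{
  "type": "parse_json",
  "id": "parse-json-1",
  "enabled": true,
  "include": "*",
  "field": "message"
}
```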
Option 13
object
The parse_xml processor parses XML from a specified field and extracts it into the event.
Supported pipeline types: logs
always_use_text_key
boolean
Whether to always use a text key for element content.
attr_prefix
string
The prefix to use for XML attributes in the parsed output.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
field [required]
string
The name of the log field that contains an XML string.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
include_attr
boolean
Whether to include XML attributes in the parsed output.
parse_bool
boolean
Whether to parse boolean values from strings.
parse_null
boolean
Whether to parse null values.
parse_number
boolean
Whether to parse numeric values from strings.
text_key
string
The key name to use for text content within XML elements. Must be at least 1 character if specified.
type [required]
enum
The processor type. The value should always be parse_xml.
Allowed enum values: parse_xml
default: parse_xml
Option 14
object
The quota processor measures logging traffic for logs that match a specified filter. When the configured daily quota is met, the processor can drop or alert.
Supported pipeline types: logs
display_name
string
The display name for a component.
drop_events
boolean
If set to true, logs that match the quota filter and are sent after the quota is exceeded are dropped. Logs that do not match the filter continue through the pipeline. Note: You can set either drop_events or overflow_action, but not both.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
ignore_when_missing_partitions
boolean
If true, the processor skips quota checks when partition fields are missing from the logs.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
limit [required]
object
The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.
enforce [required]
enum
Unit for quota enforcement in bytes for data size or events for count.
Allowed enum values: bytes,events
limit [required]
int64
The limit for quota enforcement.
name [required]
string
Name of the quota.
overflow_action
enum
The action to take when the quota or bucket limit is exceeded. Options:
drop: Drop the event.
no_action: Let the event pass through.
overflow_routing: Route to an overflow destination.
Allowed enum values: drop,no_action,overflow_routing
overrides
[object]
A list of alternate quota rules that apply to specific sets of events, identified by matching field values. Each override can define a custom limit.
fields [required]
[object]
A list of field matchers used to apply a specific override. If an event matches all listed key-value pairs, the corresponding override limit is enforced.
name [required]
string
The field name.
value [required]
string
The field value.
limit [required]
object
The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.
enforce [required]
enum
Unit for quota enforcement in bytes for data size or events for count.
Allowed enum values: bytes,events
limit [required]
int64
The limit for quota enforcement.
partition_fields
[string]
A list of fields used to segment log traffic for quota enforcement. Quotas are tracked independently by unique combinations of these field values.
too_many_buckets_action
enum
The action to take when the quota or bucket limit is exceeded. Options:
drop: Drop the event.
no_action: Let the event pass through.
overflow_routing: Route to an overflow destination.
Allowed enum values: drop,no_action,overflow_routing
type [required]
enum
The processor type. The value should always be quota.
Allowed enum values: quota
default: quota
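Combining these fields, a quota processor with a byte-based daily limit, per-service partitions, and one override could be sketched as follows (names, queries, and limits are illustrative; note that drop_events and overflow_action are mutually exclusive, so only drop_events is set here):

```json
{
  "type": "quota",
  "id": "quota-1",
  "enabled": true,
  "include": "service:web",
  "name": "web-daily-quota",
  "drop_events": true,
  "limit": { "enforce": "bytes", "limit": 10000000000 },
  "partition_fields": ["service"],
  "overrides": [
    {
      "fields": [{ "name": "service", "value": "checkout" }],
      "limit": { "enforce": "events", "limit": 500000 }
    }
  ]
}
```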
Option 15
object
The reduce processor aggregates and merges logs based on matching keys and merge strategies.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
group_by [required]
[string]
A list of fields used to group log events for merging.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
merge_strategies [required]
[object]
List of merge strategies defining how values from grouped events should be combined.
path [required]
string
The field path in the log event.
strategy [required]
enum
The merge strategy to apply.
Allowed enum values: discard,retain,sum,max,min,array,concat,concat_newline,concat_raw,shortest_array,longest_array,flat_unique
type [required]
enum
The processor type. The value should always be reduce.
Allowed enum values: reduce
default: reduce
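As a sketch, a reduce processor that merges all events sharing a request_id, concatenating messages and summing durations, might be configured like this (field paths are illustrative):

```json
{
  "type": "reduce",
  "id": "reduce-1",
  "enabled": true,
  "include": "*",
  "group_by": ["request_id"],
  "merge_strategies": [
    { "path": "message", "strategy": "concat_newline" },
    { "path": "duration_ms", "strategy": "sum" }
  ]
}
```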
Option 16
object
The remove_fields processor deletes specified fields from logs.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[string]
A list of field names to be removed from each log event.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be remove_fields.
Allowed enum values: remove_fields
default: remove_fields
Option 17
object
The rename_fields processor changes field names.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[object]
A list of rename rules specifying which fields to rename in the event, what to rename them to, and whether to preserve the original fields.
destination [required]
string
The field name to assign the renamed value to.
preserve_source [required]
boolean
Indicates whether the original field received from the source should be kept (true) or removed (false) after renaming.
source [required]
string
The original field name in the log event that should be renamed.
id [required]
string
A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be rename_fields.
Allowed enum values: rename_fields
default: rename_fields
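For example, a rename_fields processor that renames one field and drops the original could be sketched as (field paths are illustrative):

```json
{
  "type": "rename_fields",
  "id": "rename-fields-1",
  "enabled": true,
  "include": "*",
  "fields": [
    {
      "source": "http.status",
      "destination": "http.status_code",
      "preserve_source": false
    }
  ]
}
```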
Option 18
object
The sample processor allows probabilistic sampling of logs at a fixed rate.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
group_by
[string]
Optional list of fields to group events by. Each group is sampled independently.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
percentage [required]
double
The percentage of logs to sample.
type [required]
enum
The processor type. The value should always be sample.
Allowed enum values: sample
default: sample
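Putting the fields above together, a sample processor that keeps 10% of info-level logs, sampled independently per service, could be sketched as (values are illustrative):

```json
{
  "type": "sample",
  "id": "sample-1",
  "enabled": true,
  "include": "status:info",
  "percentage": 10.0,
  "group_by": ["service"]
}
```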
Option 19
object
The sensitive_data_scanner processor detects and optionally redacts sensitive data in log events.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
rules [required]
[object]
A list of rules for identifying and acting on sensitive data patterns.
keyword_options
object
Configuration for keywords used to reinforce sensitive data pattern detection.
keywords [required]
[string]
A list of keywords to match near the sensitive pattern.
proximity [required]
int64
Maximum number of tokens between a keyword and a sensitive value match.
name [required]
string
A name identifying the rule.
on_match [required]
<oneOf>
Defines what action to take when sensitive data is matched.
Option 1
object
Configuration for completely redacting matched sensitive data.
action [required]
enum
Action type that completely replaces the matched sensitive data with a fixed replacement string to remove all visibility.
Allowed enum values: redact
options [required]
object
Configuration for fully redacting sensitive data.
replace [required]
string
The string that replaces the matched sensitive data.
Option 2
object
Configuration for hashing matched sensitive values.
action [required]
enum
Action type that replaces the matched sensitive data with a hashed representation, preserving structure while securing content.
Allowed enum values: hash
options
object
Options for the hash action.
Option 3
object
Configuration for partially redacting matched sensitive data.
action [required]
enum
Action type that redacts part of the sensitive data while preserving a configurable number of characters, typically used for masking purposes (e.g., show last 4 digits of a credit card).
Allowed enum values: partial_redact
options [required]
object
Controls how partial redaction is applied, including character count and direction.
characters [required]
int64
The number of characters to redact.
direction [required]
enum
Indicates whether to redact characters from the first or last part of the matched value.
Allowed enum values: first,last
pattern [required]
<oneOf>
Pattern detection configuration for identifying sensitive data using either a custom regex or a library reference.
Option 1
object
Defines a custom regex-based pattern for identifying sensitive data in logs.
options [required]
object
Options for defining a custom regex pattern.
description
string
Human-readable description providing context about a sensitive data scanner rule.
rule [required]
string
A regular expression used to detect sensitive values. Must be a valid regex.
type [required]
enum
Indicates a custom regular expression is used for matching.
Allowed enum values: custom
Option 2
object
Specifies a pattern from Datadog’s sensitive data detection library to match known sensitive data types.
options [required]
object
Options for selecting a predefined library pattern and enabling keyword support.
description
string
Human-readable description providing context about a sensitive data scanner rule.
id [required]
string
Identifier for a predefined pattern from the sensitive data scanner pattern library.
use_recommended_keywords
boolean
Whether to augment the pattern with recommended keywords (optional).
type [required]
enum
Indicates that a predefined library pattern is used.
Allowed enum values: library
scope [required]
<oneOf>
Determines which parts of the log the pattern-matching rule should be applied to.
Option 1
object
Includes only specific fields for sensitive data scanning.
options [required]
object
Fields to which the scope rule applies.
fields [required]
[string]
The list of fields to include in scanning.
target [required]
enum
Applies the rule only to included fields.
Allowed enum values: include
Option 2
object
Excludes specific fields from sensitive data scanning.
options [required]
object
Fields to which the scope rule applies.
fields [required]
[string]
The list of fields to exclude from scanning.
target [required]
enum
Excludes specific fields from processing.
Allowed enum values: exclude
Option 3
object
Applies scanning across all available fields.
target [required]
enum
Applies the rule to all fields.
Allowed enum values: all
tags [required]
[string]
Tags assigned to this rule for filtering and classification.
type [required]
enum
The processor type. The value should always be sensitive_data_scanner.
Allowed enum values: sensitive_data_scanner
default: sensitive_data_scanner
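Combining the rule, pattern, scope, and on_match variants above, a sensitive_data_scanner processor that partially redacts card-like numbers in the message field could be sketched as follows (the regex, keywords, and counts are illustrative, not a production-grade card detector):

```json
{
  "type": "sensitive_data_scanner",
  "id": "sds-1",
  "enabled": true,
  "include": "*",
  "rules": [
    {
      "name": "mask-credit-cards",
      "tags": ["sensitive:card"],
      "pattern": {
        "type": "custom",
        "options": { "rule": "\\b\\d{13,16}\\b" }
      },
      "scope": {
        "target": "include",
        "options": { "fields": ["message"] }
      },
      "on_match": {
        "action": "partial_redact",
        "options": { "characters": 12, "direction": "first" }
      },
      "keyword_options": {
        "keywords": ["card", "cc"],
        "proximity": 10
      }
    }
  ]
}
```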
Option 20
object
The split_array processor splits array fields into separate events based on configured rules.
Supported pipeline types: logs
arrays [required]
[object]
A list of array split configurations.
field [required]
string
The path to the array field to split.
include [required]
string
A Datadog search query used to determine which logs this array split operation targets.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets. For split_array, this should typically be *.
type [required]
enum
The processor type. The value should always be split_array.
Allowed enum values: split_array
default: split_array
Option 21
object
The throttle processor limits the number of events that pass through over a given time window.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
group_by
[string]
Optional list of fields used to group events. The threshold is applied to each group independently.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
threshold [required]
int64
The number of events allowed in a given time window. Events sent after the threshold has been reached are dropped.
type [required]
enum
The processor type. The value should always be throttle.
Allowed enum values: throttle
default: throttle
window [required]
double
The time window in seconds over which the threshold applies.
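As a sketch, a throttle processor that allows at most 1000 events per service every 60 seconds could be configured like this (values are illustrative):

```json
{
  "type": "throttle",
  "id": "throttle-1",
  "enabled": true,
  "include": "*",
  "threshold": 1000,
  "window": 60.0,
  "group_by": ["service"]
}
```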
Option 22
object
The metric_tags processor filters metrics based on their tags using Datadog tag key patterns.
Supported pipeline types: metrics
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query that determines which metrics the processor targets.
rules [required]
[object]
A list of rules for filtering metric tags.
action [required]
enum
The action to take on tags with matching keys.
Allowed enum values: include,exclude
include [required]
string
A Datadog search query used to determine which metrics this rule targets.
keys [required]
[string]
A list of tag keys to include or exclude.
mode [required]
enum
The processing mode for tag filtering.
Allowed enum values: filter
type [required]
enum
The processor type. The value should always be metric_tags.
Allowed enum values: metric_tags
default: metric_tags
sources [required]
[ <oneOf>]
A list of configured data sources for the pipeline.
Option 1
object
The datadog_agent source collects logs and metrics from the Datadog Agent.
Supported pipeline types: logs, metrics
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be datadog_agent.
Allowed enum values: datadog_agent
default: datadog_agent
Option 2
object
The amazon_data_firehose source ingests logs from AWS Data Firehose.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the Firehose delivery stream address.
auth
object
AWS authentication credentials used for accessing AWS services such as S3.
If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).
assume_role
string
The Amazon Resource Name (ARN) of the role to assume.
external_id
string
A unique identifier for cross-account role assumption.
session_name
string
A session identifier used for logging and tracing the assumed role session.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be amazon_data_firehose.
Allowed enum values: amazon_data_firehose
default: amazon_data_firehose
Option 3
object
The amazon_s3 source ingests logs from an Amazon S3 bucket.
It supports AWS authentication and TLS encryption.
Supported pipeline types: logs
auth
object
AWS authentication credentials used for accessing AWS services such as S3.
If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).
assume_role
string
The Amazon Resource Name (ARN) of the role to assume.
external_id
string
A unique identifier for cross-account role assumption.
session_name
string
A session identifier used for logging and tracing the assumed role session.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
region [required]
string
AWS region where the S3 bucket resides.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be amazon_s3.
Allowed enum values: amazon_s3
default: amazon_s3
url_key
string
Name of the environment variable or secret that holds the S3 bucket URL.
Option 4
object
The fluent_bit source ingests logs from Fluent Bit.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the Fluent Bit receiver.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be fluent_bit.
Allowed enum values: fluent_bit
default: fluent_bit
Option 5
object
The fluentd source ingests logs from a Fluentd-compatible service.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the Fluent receiver.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be fluentd.
Allowed enum values: fluentd
default: fluentd
Option 6
object
The google_pubsub source ingests logs from a Google Cloud Pub/Sub subscription.
Supported pipeline types: logs
auth
object
Google Cloud credentials used to authenticate with Google Cloud Storage.
credentials_file [required]
string
Path to the Google Cloud service account key file.
decoding [required]
enum
The decoding format used to interpret incoming logs.
Allowed enum values: bytes,gelf,json,syslog
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
project [required]
string
The Google Cloud project ID that owns the Pub/Sub subscription.
subscription [required]
string
The Pub/Sub subscription name from which messages are consumed.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be google_pubsub.
Allowed enum values: google_pubsub
default: google_pubsub
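Putting the fields above together, a google_pubsub source entry in a pipeline configuration body might look like the following sketch. The project, subscription, and file path values are illustrative, not taken from this reference:

```json
{
  "id": "pubsub-source",
  "type": "google_pubsub",
  "project": "my-gcp-project",
  "subscription": "logs-subscription",
  "decoding": "json",
  "auth": {
    "credentials_file": "/var/secrets/gcp-service-account.json"
  }
}
```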
Option 7
object
The http_client source scrapes logs from HTTP endpoints at regular intervals.
Supported pipeline types: logs
auth_strategy
enum
Optional authentication strategy for HTTP requests.
Allowed enum values: none,basic,bearer,custom
custom_key
string
Name of the environment variable or secret that holds a custom header value (used with custom auth strategies).
decoding [required]
enum
The decoding format used to interpret incoming logs.
Allowed enum values: bytes,gelf,json,syslog
endpoint_url_key
string
Name of the environment variable or secret that holds the HTTP endpoint URL to scrape.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
password_key
string
Name of the environment variable or secret that holds the password (used when auth_strategy is basic).
scrape_interval_secs
int64
The interval (in seconds) between HTTP scrape requests.
scrape_timeout_secs
int64
The timeout (in seconds) for each scrape request.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
token_key
string
Name of the environment variable or secret that holds the bearer token (used when auth_strategy is bearer).
type [required]
enum
The source type. The value should always be http_client.
Allowed enum values: http_client
default: http_client
username_key
string
Name of the environment variable or secret that holds the username (used when auth_strategy is basic).
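As a sketch of how these fields combine, an http_client source using bearer authentication might be configured as follows. The environment variable names and interval values are hypothetical; the `*_key` fields name variables or secrets rather than holding the values themselves:

```json
{
  "id": "http-client-source",
  "type": "http_client",
  "decoding": "json",
  "endpoint_url_key": "LOGS_ENDPOINT_URL",
  "auth_strategy": "bearer",
  "token_key": "LOGS_BEARER_TOKEN",
  "scrape_interval_secs": 60,
  "scrape_timeout_secs": 10
}
```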
Option 8
object
The http_server source collects logs over HTTP POST from external services.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the HTTP server.
custom_key
string
Name of the environment variable or secret that holds a custom header value (used with custom auth strategies).
decoding [required]
enum
The decoding format used to interpret incoming logs.
Allowed enum values: bytes,gelf,json,syslog
id [required]
string
Unique ID for the HTTP server source.
password_key
string
Name of the environment variable or secret that holds the password (used when auth_strategy is plain).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be http_server.
Allowed enum values: http_server
default: http_server
username_key
string
Name of the environment variable or secret that holds the username (used when auth_strategy is plain).
Option 9
object
The kafka source ingests data from Apache Kafka topics.
Supported pipeline types: logs
bootstrap_servers_key
string
Name of the environment variable or secret that holds the Kafka bootstrap servers list.
group_id [required]
string
Consumer group ID used by the Kafka client.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
librdkafka_options
[object]
Optional list of advanced Kafka client configuration options, defined as key-value pairs.
name [required]
string
The name of the librdkafka configuration option to set.
value [required]
string
The value assigned to the specified librdkafka configuration option.
sasl
object
Specifies the SASL mechanism for authenticating with a Kafka cluster.
mechanism
enum
SASL mechanism used for Kafka authentication.
Allowed enum values: PLAIN,SCRAM-SHA-256,SCRAM-SHA-512
password_key
string
Name of the environment variable or secret that holds the SASL password.
username_key
string
Name of the environment variable or secret that holds the SASL username.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
topics [required]
[string]
A list of Kafka topic names to subscribe to. The source ingests messages from each topic specified.
type [required]
enum
The source type. The value should always be kafka.
Allowed enum values: kafka
default: kafka
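Combining the fields above, a kafka source entry might look like the following sketch. The IDs, topic names, secret names, and file paths are illustrative; `fetch.message.max.bytes` is shown only as an example of a librdkafka option passed through as a name/value pair:

```json
{
  "id": "kafka-source",
  "type": "kafka",
  "group_id": "consumer-group-0",
  "topics": ["app-logs", "audit-logs"],
  "bootstrap_servers_key": "KAFKA_BOOTSTRAP_SERVERS",
  "sasl": {
    "mechanism": "SCRAM-SHA-512",
    "username_key": "KAFKA_USERNAME",
    "password_key": "KAFKA_PASSWORD"
  },
  "librdkafka_options": [
    { "name": "fetch.message.max.bytes", "value": "1048576" }
  ],
  "tls": {
    "ca_file": "/etc/certs/ca.crt",
    "crt_file": "/etc/certs/client.crt",
    "key_file": "/etc/certs/client.key"
  }
}
```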
Option 10
object
The logstash source ingests logs from a Logstash forwarder.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the Logstash receiver.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be logstash.
Allowed enum values: logstash
default: logstash
Option 11
object
The rsyslog source listens for logs over TCP or UDP from an rsyslog server using the syslog protocol.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the syslog receiver.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
mode [required]
enum
Protocol used by the syslog source to receive messages.
Allowed enum values: tcp,udp
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be rsyslog.
Allowed enum values: rsyslog
default: rsyslog
Option 12
object
The socket source ingests logs over TCP or UDP.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the socket.
framing [required]
<oneOf>
Framing method configuration for the socket source.
Option 1
object
Byte frames which are delimited by a newline character.
method [required]
enum
Byte frames which are delimited by a newline character.
Allowed enum values: newline_delimited
Option 2
object
Byte frames are passed through as-is according to the underlying I/O boundaries (for example, split between messages or stream segments).
method [required]
enum
Byte frames are passed through as-is according to the underlying I/O boundaries (for example, split between messages or stream segments).
Allowed enum values: bytes
Option 3
object
Byte frames which are delimited by a chosen character.
delimiter [required]
string
A single ASCII character used to delimit events.
method [required]
enum
Byte frames which are delimited by a chosen character.
Allowed enum values: character_delimited
Option 4
object
Byte frames according to the octet counting format as per RFC6587.
method [required]
enum
Byte frames according to the octet counting format as per RFC6587.
Allowed enum values: octet_counting
Option 5
object
Byte frames which are chunked GELF messages.
method [required]
enum
Byte frames which are chunked GELF messages.
Allowed enum values: chunked_gelf
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
mode [required]
enum
Protocol used to receive logs.
Allowed enum values: tcp,udp
tls
object
TLS configuration. Relevant only when mode is tcp.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be socket.
Allowed enum values: socket
default: socket
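As an illustration of the framing options above, a socket source receiving pipe-delimited events over TCP with TLS might be configured as follows. The addresses, delimiter, and certificate paths are hypothetical:

```json
{
  "id": "socket-source",
  "type": "socket",
  "mode": "tcp",
  "address_key": "SOCKET_LISTEN_ADDRESS",
  "framing": {
    "method": "character_delimited",
    "delimiter": "|"
  },
  "tls": {
    "crt_file": "/etc/certs/server.crt",
    "key_file": "/etc/certs/server.key"
  }
}
```

A `framing` object with `"method": "newline_delimited"`, `"bytes"`, `"octet_counting"`, or `"chunked_gelf"` would omit the `delimiter` field, which applies only to character-delimited framing.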
Option 13
object
The splunk_hec source implements the Splunk HTTP Event Collector (HEC) API.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the HEC API.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be splunk_hec.
Allowed enum values: splunk_hec
default: splunk_hec
Option 14
object
The splunk_tcp source receives logs from a Splunk Universal Forwarder over TCP.
TLS is supported for secure transmission.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the Splunk TCP receiver.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be splunk_tcp.
Allowed enum values: splunk_tcp
default: splunk_tcp
Option 15
object
The sumo_logic source receives logs from Sumo Logic collectors.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the Sumo Logic receiver.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
type [required]
enum
The source type. The value should always be sumo_logic.
Allowed enum values: sumo_logic
default: sumo_logic
Option 16
object
The syslog_ng source listens for logs over TCP or UDP from a syslog-ng server using the syslog protocol.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the syslog-ng receiver.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
mode [required]
enum
Protocol used by the syslog source to receive messages.
Allowed enum values: tcp,udp
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be syslog_ng.
Allowed enum values: syslog_ng
default: syslog_ng
Option 17
object
The opentelemetry source receives telemetry data using the OpenTelemetry Protocol (OTLP) over gRPC and HTTP.
Supported pipeline types: logs
grpc_address_key
string
Environment variable name containing the gRPC server address for receiving OTLP data. Must be a valid environment variable name (alphanumeric characters and underscores only).
http_address_key
string
Environment variable name containing the HTTP server address for receiving OTLP data. Must be a valid environment variable name (alphanumeric characters and underscores only).
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be opentelemetry.
Allowed enum values: opentelemetry
default: opentelemetry
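A minimal opentelemetry source entry combining these fields might look like the following sketch. The environment variable names are illustrative and must contain only alphanumeric characters and underscores:

```json
{
  "id": "otlp-source",
  "type": "opentelemetry",
  "grpc_address_key": "OTLP_GRPC_ADDRESS",
  "http_address_key": "OTLP_HTTP_ADDRESS"
}
```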
use_legacy_search_syntax
boolean
Set to true to continue using the legacy search syntax while migrating filter queries. After migrating all queries to the new syntax, set to false.
The legacy syntax is deprecated and will eventually be removed.
Requires Observability Pipelines Worker 2.11 or later.
See Upgrade Your Filter Queries to the New Search Syntax for more information.
name [required]
string
Name of the pipeline.
id [required]
string
Unique identifier for the pipeline.
type [required]
string
The resource type identifier. For pipeline resources, this should always be set to pipelines.
// Update a pipeline returns "OK" response

package main

import (
	"context"
	"encoding/json"
	"fmt"
	"os"

	"github.com/DataDog/datadog-api-client-go/v2/api/datadog"
	"github.com/DataDog/datadog-api-client-go/v2/api/datadogV2"
)

func main() {
	// there is a valid "pipeline" in the system
	PipelineDataID := os.Getenv("PIPELINE_DATA_ID")

	body := datadogV2.ObservabilityPipeline{
		Data: datadogV2.ObservabilityPipelineData{
			Attributes: datadogV2.ObservabilityPipelineDataAttributes{
				Config: datadogV2.ObservabilityPipelineConfig{
					Destinations: []datadogV2.ObservabilityPipelineConfigDestinationItem{
						{
							ObservabilityPipelineDatadogLogsDestination: &datadogV2.ObservabilityPipelineDatadogLogsDestination{
								Id: "updated-datadog-logs-destination-id",
								Inputs: []string{
									"my-processor-group",
								},
								Type: datadogV2.OBSERVABILITYPIPELINEDATADOGLOGSDESTINATIONTYPE_DATADOG_LOGS,
							},
						},
					},
					ProcessorGroups: []datadogV2.ObservabilityPipelineConfigProcessorGroup{
						{
							Enabled: true,
							Id:      "my-processor-group",
							Include: "service:my-service",
							Inputs: []string{
								"datadog-agent-source",
							},
							Processors: []datadogV2.ObservabilityPipelineConfigProcessorItem{
								{
									ObservabilityPipelineFilterProcessor: &datadogV2.ObservabilityPipelineFilterProcessor{
										Enabled: true,
										Id:      "filter-processor",
										Include: "status:error",
										Type:    datadogV2.OBSERVABILITYPIPELINEFILTERPROCESSORTYPE_FILTER,
									},
								},
							},
						},
					},
					Sources: []datadogV2.ObservabilityPipelineConfigSourceItem{
						{
							ObservabilityPipelineDatadogAgentSource: &datadogV2.ObservabilityPipelineDatadogAgentSource{
								Id:   "datadog-agent-source",
								Type: datadogV2.OBSERVABILITYPIPELINEDATADOGAGENTSOURCETYPE_DATADOG_AGENT,
							},
						},
					},
				},
				Name: "Updated Pipeline Name",
			},
			Id:   PipelineDataID,
			Type: "pipelines",
		},
	}

	ctx := datadog.NewDefaultContext(context.Background())
	configuration := datadog.NewConfiguration()
	apiClient := datadog.NewAPIClient(configuration)
	api := datadogV2.NewObservabilityPipelinesApi(apiClient)
	resp, r, err := api.UpdatePipeline(ctx, PipelineDataID, body)

	if err != nil {
		fmt.Fprintf(os.Stderr, "Error when calling `ObservabilityPipelinesApi.UpdatePipeline`: %v\n", err)
		fmt.Fprintf(os.Stderr, "Full HTTP response: %v\n", r)
	}

	responseContent, _ := json.MarshalIndent(resp, "", "  ")
	fmt.Fprintf(os.Stdout, "Response from `ObservabilityPipelinesApi.UpdatePipeline`:\n%s\n", responseContent)
}
// Update a pipeline returns "OK" response

import com.datadog.api.client.ApiClient;
import com.datadog.api.client.ApiException;
import com.datadog.api.client.v2.api.ObservabilityPipelinesApi;
import com.datadog.api.client.v2.model.ObservabilityPipeline;
import com.datadog.api.client.v2.model.ObservabilityPipelineConfig;
import com.datadog.api.client.v2.model.ObservabilityPipelineConfigDestinationItem;
import com.datadog.api.client.v2.model.ObservabilityPipelineConfigProcessorGroup;
import com.datadog.api.client.v2.model.ObservabilityPipelineConfigProcessorItem;
import com.datadog.api.client.v2.model.ObservabilityPipelineConfigSourceItem;
import com.datadog.api.client.v2.model.ObservabilityPipelineData;
import com.datadog.api.client.v2.model.ObservabilityPipelineDataAttributes;
import com.datadog.api.client.v2.model.ObservabilityPipelineDatadogAgentSource;
import com.datadog.api.client.v2.model.ObservabilityPipelineDatadogAgentSourceType;
import com.datadog.api.client.v2.model.ObservabilityPipelineDatadogLogsDestination;
import com.datadog.api.client.v2.model.ObservabilityPipelineDatadogLogsDestinationType;
import com.datadog.api.client.v2.model.ObservabilityPipelineFilterProcessor;
import com.datadog.api.client.v2.model.ObservabilityPipelineFilterProcessorType;
import java.util.Collections;

public class Example {
  public static void main(String[] args) {
    ApiClient defaultClient = ApiClient.getDefaultApiClient();
    ObservabilityPipelinesApi apiInstance = new ObservabilityPipelinesApi(defaultClient);

    // there is a valid "pipeline" in the system
    String PIPELINE_DATA_ID = System.getenv("PIPELINE_DATA_ID");

    ObservabilityPipeline body =
        new ObservabilityPipeline()
            .data(
                new ObservabilityPipelineData()
                    .attributes(
                        new ObservabilityPipelineDataAttributes()
                            .config(
                                new ObservabilityPipelineConfig()
                                    .destinations(
                                        Collections.singletonList(
                                            new ObservabilityPipelineConfigDestinationItem(
                                                new ObservabilityPipelineDatadogLogsDestination()
                                                    .id("updated-datadog-logs-destination-id")
                                                    .inputs(Collections.singletonList("my-processor-group"))
                                                    .type(ObservabilityPipelineDatadogLogsDestinationType.DATADOG_LOGS))))
                                    .processorGroups(
                                        Collections.singletonList(
                                            new ObservabilityPipelineConfigProcessorGroup()
                                                .enabled(true)
                                                .id("my-processor-group")
                                                .include("service:my-service")
                                                .inputs(Collections.singletonList("datadog-agent-source"))
                                                .processors(
                                                    Collections.singletonList(
                                                        new ObservabilityPipelineConfigProcessorItem(
                                                            new ObservabilityPipelineFilterProcessor()
                                                                .enabled(true)
                                                                .id("filter-processor")
                                                                .include("status:error")
                                                                .type(ObservabilityPipelineFilterProcessorType.FILTER))))))
                                    .sources(
                                        Collections.singletonList(
                                            new ObservabilityPipelineConfigSourceItem(
                                                new ObservabilityPipelineDatadogAgentSource()
                                                    .id("datadog-agent-source")
                                                    .type(ObservabilityPipelineDatadogAgentSourceType.DATADOG_AGENT)))))
                            .name("Updated Pipeline Name"))
                    .id(PIPELINE_DATA_ID)
                    .type("pipelines"));

    try {
      ObservabilityPipeline result = apiInstance.updatePipeline(PIPELINE_DATA_ID, body);
      System.out.println(result);
    } catch (ApiException e) {
      System.err.println("Exception when calling ObservabilityPipelinesApi#updatePipeline");
      System.err.println("Status code: " + e.getCode());
      System.err.println("Reason: " + e.getResponseBody());
      System.err.println("Response headers: " + e.getResponseHeaders());
      e.printStackTrace();
    }
  }
}
"""
Update a pipeline returns "OK" response
"""fromosimportenvironfromdatadog_api_clientimportApiClient,Configurationfromdatadog_api_client.v2.api.observability_pipelines_apiimportObservabilityPipelinesApifromdatadog_api_client.v2.model.observability_pipelineimportObservabilityPipelinefromdatadog_api_client.v2.model.observability_pipeline_configimportObservabilityPipelineConfigfromdatadog_api_client.v2.model.observability_pipeline_config_processor_groupimport(ObservabilityPipelineConfigProcessorGroup,)fromdatadog_api_client.v2.model.observability_pipeline_dataimportObservabilityPipelineDatafromdatadog_api_client.v2.model.observability_pipeline_data_attributesimportObservabilityPipelineDataAttributesfromdatadog_api_client.v2.model.observability_pipeline_datadog_agent_sourceimport(ObservabilityPipelineDatadogAgentSource,)fromdatadog_api_client.v2.model.observability_pipeline_datadog_agent_source_typeimport(ObservabilityPipelineDatadogAgentSourceType,)fromdatadog_api_client.v2.model.observability_pipeline_datadog_logs_destinationimport(ObservabilityPipelineDatadogLogsDestination,)fromdatadog_api_client.v2.model.observability_pipeline_datadog_logs_destination_typeimport(ObservabilityPipelineDatadogLogsDestinationType,)fromdatadog_api_client.v2.model.observability_pipeline_filter_processorimportObservabilityPipelineFilterProcessorfromdatadog_api_client.v2.model.observability_pipeline_filter_processor_typeimport(ObservabilityPipelineFilterProcessorType,)# there is a valid "pipeline" in the 
systemPIPELINE_DATA_ID=environ["PIPELINE_DATA_ID"]body=ObservabilityPipeline(data=ObservabilityPipelineData(attributes=ObservabilityPipelineDataAttributes(config=ObservabilityPipelineConfig(destinations=[ObservabilityPipelineDatadogLogsDestination(id="updated-datadog-logs-destination-id",inputs=["my-processor-group",],type=ObservabilityPipelineDatadogLogsDestinationType.DATADOG_LOGS,),],processor_groups=[ObservabilityPipelineConfigProcessorGroup(enabled=True,id="my-processor-group",include="service:my-service",inputs=["datadog-agent-source",],processors=[ObservabilityPipelineFilterProcessor(enabled=True,id="filter-processor",include="status:error",type=ObservabilityPipelineFilterProcessorType.FILTER,),],),],sources=[ObservabilityPipelineDatadogAgentSource(id="datadog-agent-source",type=ObservabilityPipelineDatadogAgentSourceType.DATADOG_AGENT,),],),name="Updated Pipeline Name",),id=PIPELINE_DATA_ID,type="pipelines",),)configuration=Configuration()withApiClient(configuration)asapi_client:api_instance=ObservabilityPipelinesApi(api_client)response=api_instance.update_pipeline(pipeline_id=PIPELINE_DATA_ID,body=body)print(response)
# Update a pipeline returns "OK" response

require "datadog_api_client"

api_instance = DatadogAPIClient::V2::ObservabilityPipelinesAPI.new

# there is a valid "pipeline" in the system
PIPELINE_DATA_ID = ENV["PIPELINE_DATA_ID"]

body = DatadogAPIClient::V2::ObservabilityPipeline.new({
  data: DatadogAPIClient::V2::ObservabilityPipelineData.new({
    attributes: DatadogAPIClient::V2::ObservabilityPipelineDataAttributes.new({
      config: DatadogAPIClient::V2::ObservabilityPipelineConfig.new({
        destinations: [
          DatadogAPIClient::V2::ObservabilityPipelineDatadogLogsDestination.new({
            id: "updated-datadog-logs-destination-id",
            inputs: ["my-processor-group"],
            type: DatadogAPIClient::V2::ObservabilityPipelineDatadogLogsDestinationType::DATADOG_LOGS,
          }),
        ],
        processor_groups: [
          DatadogAPIClient::V2::ObservabilityPipelineConfigProcessorGroup.new({
            enabled: true,
            id: "my-processor-group",
            include: "service:my-service",
            inputs: ["datadog-agent-source"],
            processors: [
              DatadogAPIClient::V2::ObservabilityPipelineFilterProcessor.new({
                enabled: true,
                id: "filter-processor",
                include: "status:error",
                type: DatadogAPIClient::V2::ObservabilityPipelineFilterProcessorType::FILTER,
              }),
            ],
          }),
        ],
        sources: [
          DatadogAPIClient::V2::ObservabilityPipelineDatadogAgentSource.new({
            id: "datadog-agent-source",
            type: DatadogAPIClient::V2::ObservabilityPipelineDatadogAgentSourceType::DATADOG_AGENT,
          }),
        ],
      }),
      name: "Updated Pipeline Name",
    }),
    id: PIPELINE_DATA_ID,
    type: "pipelines",
  }),
})

p api_instance.update_pipeline(PIPELINE_DATA_ID, body)
// Update a pipeline returns "OK" response

use datadog_api_client::datadog;
use datadog_api_client::datadogV2::api_observability_pipelines::ObservabilityPipelinesAPI;
use datadog_api_client::datadogV2::model::ObservabilityPipeline;
use datadog_api_client::datadogV2::model::ObservabilityPipelineConfig;
use datadog_api_client::datadogV2::model::ObservabilityPipelineConfigDestinationItem;
use datadog_api_client::datadogV2::model::ObservabilityPipelineConfigProcessorGroup;
use datadog_api_client::datadogV2::model::ObservabilityPipelineConfigProcessorItem;
use datadog_api_client::datadogV2::model::ObservabilityPipelineConfigSourceItem;
use datadog_api_client::datadogV2::model::ObservabilityPipelineData;
use datadog_api_client::datadogV2::model::ObservabilityPipelineDataAttributes;
use datadog_api_client::datadogV2::model::ObservabilityPipelineDatadogAgentSource;
use datadog_api_client::datadogV2::model::ObservabilityPipelineDatadogAgentSourceType;
use datadog_api_client::datadogV2::model::ObservabilityPipelineDatadogLogsDestination;
use datadog_api_client::datadogV2::model::ObservabilityPipelineDatadogLogsDestinationType;
use datadog_api_client::datadogV2::model::ObservabilityPipelineFilterProcessor;
use datadog_api_client::datadogV2::model::ObservabilityPipelineFilterProcessorType;

#[tokio::main]
async fn main() {
    // there is a valid "pipeline" in the system
    let pipeline_data_id = std::env::var("PIPELINE_DATA_ID").unwrap();
    let body = ObservabilityPipeline::new(ObservabilityPipelineData::new(
        ObservabilityPipelineDataAttributes::new(
            ObservabilityPipelineConfig::new(
                vec![
                    ObservabilityPipelineConfigDestinationItem::ObservabilityPipelineDatadogLogsDestination(
                        Box::new(ObservabilityPipelineDatadogLogsDestination::new(
                            "updated-datadog-logs-destination-id".to_string(),
                            vec!["my-processor-group".to_string()],
                            ObservabilityPipelineDatadogLogsDestinationType::DATADOG_LOGS,
                        )),
                    ),
                ],
                vec![
                    ObservabilityPipelineConfigSourceItem::ObservabilityPipelineDatadogAgentSource(
                        Box::new(ObservabilityPipelineDatadogAgentSource::new(
                            "datadog-agent-source".to_string(),
                            ObservabilityPipelineDatadogAgentSourceType::DATADOG_AGENT,
                        )),
                    ),
                ],
            )
            .processor_groups(vec![ObservabilityPipelineConfigProcessorGroup::new(
                true,
                "my-processor-group".to_string(),
                "service:my-service".to_string(),
                vec!["datadog-agent-source".to_string()],
                vec![
                    ObservabilityPipelineConfigProcessorItem::ObservabilityPipelineFilterProcessor(
                        Box::new(ObservabilityPipelineFilterProcessor::new(
                            true,
                            "filter-processor".to_string(),
                            "status:error".to_string(),
                            ObservabilityPipelineFilterProcessorType::FILTER,
                        )),
                    ),
                ],
            )]),
            "Updated Pipeline Name".to_string(),
        ),
        pipeline_data_id.clone(),
        "pipelines".to_string(),
    ));
    let configuration = datadog::Configuration::new();
    let api = ObservabilityPipelinesAPI::with_config(configuration);
    let resp = api.update_pipeline(pipeline_data_id.clone(), body).await;
    match resp {
        Ok(value) => println!("{:#?}", value),
        Err(err) => println!("{:#?}", err),
    }
}
DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" cargo run
/**
 * Update a pipeline returns "OK" response
 */
import { client, v2 } from "@datadog/datadog-api-client";

const configuration = client.createConfiguration();
const apiInstance = new v2.ObservabilityPipelinesApi(configuration);

// there is a valid "pipeline" in the system
const PIPELINE_DATA_ID = process.env.PIPELINE_DATA_ID as string;

const params: v2.ObservabilityPipelinesApiUpdatePipelineRequest = {
  body: {
    data: {
      attributes: {
        config: {
          destinations: [
            {
              id: "updated-datadog-logs-destination-id",
              inputs: ["my-processor-group"],
              type: "datadog_logs",
            },
          ],
          processorGroups: [
            {
              enabled: true,
              id: "my-processor-group",
              include: "service:my-service",
              inputs: ["datadog-agent-source"],
              processors: [
                {
                  enabled: true,
                  id: "filter-processor",
                  include: "status:error",
                  type: "filter",
                },
              ],
            },
          ],
          sources: [
            {
              id: "datadog-agent-source",
              type: "datadog_agent",
            },
          ],
        },
        name: "Updated Pipeline Name",
      },
      id: PIPELINE_DATA_ID,
      type: "pipelines",
    },
  },
  pipelineId: PIPELINE_DATA_ID,
};

apiInstance
  .updatePipeline(params)
  .then((data: v2.ObservabilityPipeline) => {
    console.log("API called successfully. Returned data: " + JSON.stringify(data));
  })
  .catch((error: any) => console.error(error));
"""
Delete a pipeline returns "OK" response
"""fromosimportenvironfromdatadog_api_clientimportApiClient,Configurationfromdatadog_api_client.v2.api.observability_pipelines_apiimportObservabilityPipelinesApi# there is a valid "pipeline" in the systemPIPELINE_DATA_ID=environ["PIPELINE_DATA_ID"]configuration=Configuration()withApiClient(configuration)asapi_client:api_instance=ObservabilityPipelinesApi(api_client)api_instance.delete_pipeline(pipeline_id=PIPELINE_DATA_ID,)
# Delete a pipeline returns "OK" response
require "datadog_api_client"

api_instance = DatadogAPIClient::V2::ObservabilityPipelinesAPI.new

# there is a valid "pipeline" in the system
PIPELINE_DATA_ID = ENV["PIPELINE_DATA_ID"]
api_instance.delete_pipeline(PIPELINE_DATA_ID)
// Delete a pipeline returns "OK" response
package main

import (
	"context"
	"fmt"
	"os"

	"github.com/DataDog/datadog-api-client-go/v2/api/datadog"
	"github.com/DataDog/datadog-api-client-go/v2/api/datadogV2"
)

func main() {
	// there is a valid "pipeline" in the system
	PipelineDataID := os.Getenv("PIPELINE_DATA_ID")

	ctx := datadog.NewDefaultContext(context.Background())
	configuration := datadog.NewConfiguration()
	apiClient := datadog.NewAPIClient(configuration)
	api := datadogV2.NewObservabilityPipelinesApi(apiClient)
	r, err := api.DeletePipeline(ctx, PipelineDataID)

	if err != nil {
		fmt.Fprintf(os.Stderr, "Error when calling `ObservabilityPipelinesApi.DeletePipeline`: %v\n", err)
		fmt.Fprintf(os.Stderr, "Full HTTP response: %v\n", r)
	}
}
// Delete a pipeline returns "OK" response
import com.datadog.api.client.ApiClient;
import com.datadog.api.client.ApiException;
import com.datadog.api.client.v2.api.ObservabilityPipelinesApi;

public class Example {
  public static void main(String[] args) {
    ApiClient defaultClient = ApiClient.getDefaultApiClient();
    ObservabilityPipelinesApi apiInstance = new ObservabilityPipelinesApi(defaultClient);

    // there is a valid "pipeline" in the system
    String PIPELINE_DATA_ID = System.getenv("PIPELINE_DATA_ID");

    try {
      apiInstance.deletePipeline(PIPELINE_DATA_ID);
    } catch (ApiException e) {
      System.err.println("Exception when calling ObservabilityPipelinesApi#deletePipeline");
      System.err.println("Status code: " + e.getCode());
      System.err.println("Reason: " + e.getResponseBody());
      System.err.println("Response headers: " + e.getResponseHeaders());
      e.printStackTrace();
    }
  }
}
// Delete a pipeline returns "OK" response
use datadog_api_client::datadog;
use datadog_api_client::datadogV2::api_observability_pipelines::ObservabilityPipelinesAPI;

#[tokio::main]
async fn main() {
    // there is a valid "pipeline" in the system
    let pipeline_data_id = std::env::var("PIPELINE_DATA_ID").unwrap();
    let configuration = datadog::Configuration::new();
    let api = ObservabilityPipelinesAPI::with_config(configuration);
    let resp = api.delete_pipeline(pipeline_data_id.clone()).await;
    if let Ok(value) = resp {
        println!("{:#?}", value);
    } else {
        println!("{:#?}", resp.unwrap_err());
    }
}
DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" cargo run
/**
* Delete a pipeline returns "OK" response
 */
import { client, v2 } from "@datadog/datadog-api-client";

const configuration = client.createConfiguration();
const apiInstance = new v2.ObservabilityPipelinesApi(configuration);

// there is a valid "pipeline" in the system
const PIPELINE_DATA_ID = process.env.PIPELINE_DATA_ID as string;

const params: v2.ObservabilityPipelinesApiDeletePipelineRequest = {
  pipelineId: PIPELINE_DATA_ID,
};

apiInstance
  .deletePipeline(params)
  .then((data: any) => {
    console.log("API called successfully. Returned data: " + JSON.stringify(data));
  })
  .catch((error: any) => console.error(error));
Validates a pipeline configuration without creating or updating any resources.
Returns a list of validation errors, if any.
This endpoint requires the observability_pipelines_read permission.
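As a sketch, a validation request body takes the same shape as a create or update payload (the component IDs and pipeline name below are illustrative, not taken from this reference):

```python
# Illustrative pipeline payload of the kind submitted for validation.
# All IDs and the name are placeholders.
validation_body = {
    "data": {
        "type": "pipelines",
        "attributes": {
            "name": "My Pipeline",
            "config": {
                # Sources feed components that list them in "inputs"
                "sources": [{"id": "datadog-agent-source", "type": "datadog_agent"}],
                "destinations": [
                    {
                        "id": "datadog-logs-destination",
                        "type": "datadog_logs",
                        "inputs": ["datadog-agent-source"],
                    }
                ],
            },
        },
    }
}
```

A well-formed payload of this shape returns an empty error list; a payload with, for example, an `inputs` entry that names no existing component would come back with validation errors instead.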
Name of the environment variable or secret that holds a custom header value (used with custom auth strategies).
encoding [required]
enum
Encoding format for log events.
Allowed enum values: json
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
password_key
string
Name of the environment variable or secret that holds the password (used when auth_strategy is basic).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
token_key
string
Name of the environment variable or secret that holds the bearer token (used when auth_strategy is bearer).
type [required]
enum
The destination type. The value should always be http_client.
Allowed enum values: http_client
default: http_client
uri_key
string
Name of the environment variable or secret that holds the HTTP endpoint URI.
username_key
string
Name of the environment variable or secret that holds the username (used when auth_strategy is basic).
Option 2
object
The amazon_opensearch destination writes logs to Amazon OpenSearch.
Supported pipeline types: logs
auth [required]
object
Authentication settings for the Amazon OpenSearch destination.
The strategy field determines whether basic or AWS-based authentication is used.
assume_role
string
The ARN of the role to assume (used with aws strategy).
aws_region
string
The AWS region used for authentication.
external_id
string
External ID for the assumed role (used with aws strategy).
session_name
string
Session name for the assumed role (used with aws strategy).
strategy [required]
enum
The authentication strategy to use.
Allowed enum values: basic,aws
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
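The three buffer shapes accepted by this `oneOf` can be sketched as follows (the byte and event counts are arbitrary examples):

```python
# Disk buffer, bounded by size in bytes
disk_buffer = {"type": "disk", "max_size": 1073741824, "when_full": "block"}

# Memory buffer, bounded by size in bytes; drops new events when full
memory_bytes_buffer = {"type": "memory", "max_size": 268435456, "when_full": "drop_newest"}

# Memory buffer, bounded by queue length (event count)
memory_events_buffer = {"type": "memory", "max_events": 10000, "when_full": "block"}
```

Note that the disk and memory-by-size variants share the `max_size` field name and are distinguished by `type`, while the third variant uses `max_events` instead.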
bulk_index
string
The index to write logs to.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The destination type. The value should always be amazon_opensearch.
Allowed enum values: amazon_opensearch
default: amazon_opensearch
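Putting the fields above together, a minimal `amazon_opensearch` destination entry using the `aws` auth strategy might look like this (the IDs, ARN, and region are placeholders):

```python
# Illustrative amazon_opensearch destination; every value is a placeholder.
opensearch_destination = {
    "id": "opensearch-dest",                # unique component identifier
    "type": "amazon_opensearch",            # always amazon_opensearch
    "inputs": ["my-processor-group"],       # IDs of upstream components
    "bulk_index": "app-logs",               # optional: index to write to
    "auth": {
        "strategy": "aws",                  # "basic" or "aws"
        "aws_region": "us-east-1",
        "assume_role": "arn:aws:iam::123456789012:role/pipeline-role",
        "external_id": "my-external-id",
        "session_name": "pipeline-session",
    },
}
```

With `strategy: basic`, the `assume_role`/`external_id`/`session_name` fields would be dropped in favor of the basic-auth credential fields.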
Option 3
object
The amazon_s3 destination sends your logs in Datadog-rehydratable format to an Amazon S3 bucket for archiving.
Supported pipeline types: logs
auth
object
AWS authentication credentials used for accessing AWS services such as S3.
If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).
assume_role
string
The Amazon Resource Name (ARN) of the role to assume.
external_id
string
A unique identifier for cross-account role assumption.
session_name
string
A session identifier used for logging and tracing the assumed role session.
bucket [required]
string
S3 bucket name.
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
id [required]
string
Unique identifier for the destination component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The destination type. Always amazon_s3.
Allowed enum values: amazon_s3
default: amazon_s3
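A minimal `amazon_s3` destination built from the fields above could be sketched as (bucket and IDs are placeholders):

```python
# Illustrative amazon_s3 archive destination; values are placeholders.
s3_destination = {
    "id": "s3-archive",                # unique component identifier
    "type": "amazon_s3",               # always amazon_s3
    "inputs": ["my-processor-group"],  # IDs of upstream components
    "bucket": "my-log-archive",        # S3 bucket name
    # "auth" omitted: the system's default AWS credentials
    # (IAM role, environment variables) are used instead
}
```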
Option 4
object
The amazon_security_lake destination sends your logs to Amazon Security Lake.
Supported pipeline types: logs
auth
object
AWS authentication credentials used for accessing AWS services such as S3.
If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).
assume_role
string
The Amazon Resource Name (ARN) of the role to assume.
external_id
string
A unique identifier for cross-account role assumption.
session_name
string
A session identifier used for logging and tracing the assumed role session.
bucket [required]
string
Name of the Amazon S3 bucket in Security Lake (3-63 characters).
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
custom_source_name [required]
string
Custom source name for the logs in Security Lake.
id [required]
string
Unique identifier for the destination component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
region [required]
string
AWS region of the S3 bucket.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The destination type. Always amazon_security_lake.
Allowed enum values: amazon_security_lake
default: amazon_security_lake
Option 5
object
The azure_storage destination forwards logs to an Azure Blob Storage container.
Supported pipeline types: logs
blob_prefix
string
Optional prefix for blobs written to the container.
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
connection_string_key
string
Name of the environment variable or secret that holds the Azure Storage connection string.
container_name [required]
string
The name of the Azure Blob Storage container to store logs in.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The destination type. The value should always be azure_storage.
Allowed enum values: azure_storage
default: azure_storage
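Assembled from the fields above, an `azure_storage` destination might look like this; note that `connection_string_key` names the environment variable or secret, never the connection string itself (all values here are placeholders):

```python
# Illustrative azure_storage destination; values are placeholders.
azure_storage_destination = {
    "id": "azure-archive",
    "type": "azure_storage",
    "inputs": ["my-processor-group"],
    "container_name": "logs",                              # required container
    "blob_prefix": "pipeline/",                            # optional blob prefix
    "connection_string_key": "AZURE_STORAGE_CONNECTION",   # secret/env var NAME
}
```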
Option 6
object
The cloud_prem destination sends logs to Datadog CloudPrem.
Supported pipeline types: logs
endpoint_url_key
string
Name of the environment variable or secret that holds the CloudPrem endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The destination type. The value should always be cloud_prem.
Allowed enum values: cloud_prem
default: cloud_prem
Option 7
object
The crowdstrike_next_gen_siem destination forwards logs to CrowdStrike Next Gen SIEM.
Supported pipeline types: logs
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
compression
object
Compression configuration for log events.
algorithm [required]
enum
Compression algorithm for log events.
Allowed enum values: gzip,zlib
level
int64
Compression level.
encoding [required]
enum
Encoding format for log events.
Allowed enum values: json,raw_message
endpoint_url_key
string
Name of the environment variable or secret that holds the CrowdStrike endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
token_key
string
Name of the environment variable or secret that holds the CrowdStrike API token.
type [required]
enum
The destination type. The value should always be crowdstrike_next_gen_siem.
Allowed enum values: crowdstrike_next_gen_siem
default: crowdstrike_next_gen_siem
Option 8
object
The datadog_logs destination forwards logs to Datadog Log Management.
Supported pipeline types: logs
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
routes
[object]
A list of routing rules that forward matching logs to Datadog using dedicated API keys.
api_key_key
string
Name of the environment variable or secret that stores the Datadog API key used by this route.
include
string
A Datadog search query that determines which logs are forwarded using this route.
route_id
string
Unique identifier for this route within the destination.
site
string
Datadog site where matching logs are sent (for example, us1).
type [required]
enum
The destination type. The value should always be datadog_logs.
Allowed enum values: datadog_logs
default: datadog_logs
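The `routes` list above lets one `datadog_logs` destination fan matching logs out to different API keys and sites. A sketch, with placeholder route and secret names:

```python
# Illustrative datadog_logs destination with one routing rule.
# The route ID, query, secret name, and site are placeholders.
datadog_logs_destination = {
    "id": "datadog-logs-dest",
    "type": "datadog_logs",
    "inputs": ["my-processor-group"],
    "routes": [
        {
            "route_id": "errors-to-us1",        # unique within this destination
            "include": "status:error",          # Datadog search query to match
            "api_key_key": "DD_ROUTE_API_KEY",  # env var / secret NAME, not the key
            "site": "us1",                      # target Datadog site
        }
    ],
}
```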
Option 9
object
The elasticsearch destination writes logs to an Elasticsearch cluster.
Supported pipeline types: logs
api_version
enum
The Elasticsearch API version to use. Set to auto to auto-detect.
Allowed enum values: auto,v6,v7,v8
auth
object
Authentication settings for the Elasticsearch destination.
When strategy is basic, use username_key and password_key to reference credentials stored in environment variables or secrets.
password_key
string
Name of the environment variable or secret that holds the Elasticsearch password (used when strategy is basic).
strategy [required]
enum
The authentication strategy to use.
Allowed enum values: basic,aws
username_key
string
Name of the environment variable or secret that holds the Elasticsearch username (used when strategy is basic).
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
bulk_index
string
The index to write logs to in Elasticsearch.
data_stream
object
Configuration options for writing to Elasticsearch Data Streams instead of a fixed index.
dataset
string
The data stream dataset for your logs. This groups logs by their source or application.
dtype
string
The data stream type for your logs. This determines how logs are categorized within the data stream.
namespace
string
The data stream namespace for your logs. This separates logs into different environments or domains.
endpoint_url_key
string
Name of the environment variable or secret that holds the Elasticsearch endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The destination type. The value should always be elasticsearch.
Allowed enum values: elasticsearch
default: elasticsearch
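Combining the fields above, an `elasticsearch` destination writing to a data stream rather than a fixed index might be sketched as (endpoint, credential names, and data stream parts are placeholders):

```python
# Illustrative elasticsearch destination using basic auth and data streams.
elasticsearch_destination = {
    "id": "es-dest",
    "type": "elasticsearch",
    "inputs": ["my-processor-group"],
    "api_version": "auto",                  # auto-detect the ES API version
    "endpoint_url_key": "ES_ENDPOINT_URL",  # env var / secret holding the URL
    "auth": {
        "strategy": "basic",
        "username_key": "ES_USERNAME",      # secret/env var names, not values
        "password_key": "ES_PASSWORD",
    },
    # data_stream replaces bulk_index when writing to a data stream:
    "data_stream": {"dtype": "logs", "dataset": "myapp", "namespace": "prod"},
}
```

Set `bulk_index` instead of `data_stream` to write to a fixed index.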
Option 10
object
The google_chronicle destination sends logs to Google Chronicle.
Supported pipeline types: logs
auth
object
Google Cloud credentials used to authenticate with Google Cloud Storage.
credentials_file [required]
string
Path to the Google Cloud service account key file.
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
customer_id [required]
string
The Google Chronicle customer ID.
encoding
enum
The encoding format for the logs sent to Chronicle.
Allowed enum values: json,raw_message
endpoint_url_key
string
Name of the environment variable or secret that holds the Google Chronicle endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
log_type
string
The log type metadata associated with the Chronicle destination.
type [required]
enum
The destination type. The value should always be google_chronicle.
Allowed enum values: google_chronicle
default: google_chronicle
Option 11
object
The google_cloud_storage destination stores logs in a Google Cloud Storage (GCS) bucket.
It requires a bucket name, Google Cloud authentication, and metadata fields.
Supported pipeline types: logs
acl
enum
Access control list setting for objects written to the bucket.
Allowed enum values: private,project-private,public-read,authenticated-read,bucket-owner-read,bucket-owner-full-control
auth
object
Google Cloud credentials used to authenticate with Google Cloud Storage.
credentials_file [required]
string
Path to the Google Cloud service account key file.
bucket [required]
string
Name of the GCS bucket.
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
id [required]
string
Unique identifier for the destination component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
key_prefix
string
Optional prefix for object keys within the GCS bucket.
metadata
[object]
Custom metadata to attach to each object uploaded to the GCS bucket.
name [required]
string
The metadata key.
value [required]
string
The metadata value.
storage_class [required]
enum
Storage class used for objects stored in GCS.
Allowed enum values: STANDARD,NEARLINE,COLDLINE,ARCHIVE
type [required]
enum
The destination type. Always google_cloud_storage.
Allowed enum values: google_cloud_storage
default: google_cloud_storage
Option 12
object
The google_pubsub destination publishes logs to a Google Cloud Pub/Sub topic.
Supported pipeline types: logs
auth
object
Google Cloud credentials used to authenticate with Google Cloud Storage.
credentials_file [required]
string
Path to the Google Cloud service account key file.
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
encoding [required]
enum
Encoding format for log events.
Allowed enum values: json,raw_message
endpoint_url_key
string
Name of the environment variable or secret that holds the Google Cloud Pub/Sub endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
project [required]
string
The Google Cloud project ID that owns the Pub/Sub topic.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
topic [required]
string
The Pub/Sub topic name to publish logs to.
type [required]
enum
The destination type. The value should always be google_pubsub.
Allowed enum values: google_pubsub
default: google_pubsub
Option 13
object
The kafka destination sends logs to Apache Kafka topics.
Supported pipeline types: logs
bootstrap_servers_key
string
Name of the environment variable or secret that holds the Kafka bootstrap servers list.
compression
enum
Compression codec for Kafka messages.
Allowed enum values: none,gzip,snappy,lz4,zstd
encoding [required]
enum
Encoding format for log events.
Allowed enum values: json,raw_message
headers_key
string
The field name to use for Kafka message headers.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
key_field
string
The field name to use as the Kafka message key.
librdkafka_options
[object]
Optional list of advanced Kafka producer configuration options, defined as key-value pairs.
name [required]
string
The name of the librdkafka configuration option to set.
value [required]
string
The value assigned to the specified librdkafka configuration option.
message_timeout_ms
int64
Maximum time in milliseconds to wait for message delivery confirmation.
rate_limit_duration_secs
int64
Duration in seconds for the rate limit window.
rate_limit_num
int64
Maximum number of messages allowed per rate limit duration.
sasl
object
Specifies the SASL mechanism for authenticating with a Kafka cluster.
mechanism
enum
SASL mechanism used for Kafka authentication.
Allowed enum values: PLAIN,SCRAM-SHA-256,SCRAM-SHA-512
password_key
string
Name of the environment variable or secret that holds the SASL password.
username_key
string
Name of the environment variable or secret that holds the SASL username.
socket_timeout_ms
int64
Socket timeout in milliseconds for network requests.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
topic [required]
string
The Kafka topic name to publish logs to.
type [required]
enum
The destination type. The value should always be kafka.
Allowed enum values: kafka
default: kafka
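As a sketch of how the kafka fields combine, the following hypothetical destination uses SASL authentication and one advanced librdkafka option. The secret names (KAFKA_*) and topic are placeholders; `queue.buffering.max.messages` is a standard librdkafka producer setting shown here purely as an example value.

```python
# Hypothetical kafka destination; secret names and the topic are placeholders.
kafka_destination = {
    "id": "kafka-dest",
    "type": "kafka",                                      # must always be "kafka"
    "inputs": ["filter-logs"],
    "encoding": "json",
    "topic": "observability-logs",
    "bootstrap_servers_key": "KAFKA_BOOTSTRAP_SERVERS",   # env var / secret name
    "compression": "zstd",                                # none|gzip|snappy|lz4|zstd
    "sasl": {
        "mechanism": "SCRAM-SHA-512",
        "username_key": "KAFKA_SASL_USER",
        "password_key": "KAFKA_SASL_PASS",
    },
    # Advanced producer tuning passed through to librdkafka as name/value pairs.
    "librdkafka_options": [
        {"name": "queue.buffering.max.messages", "value": "100000"},
    ],
}
```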
Option 14
object
The microsoft_sentinel destination forwards logs to Microsoft Sentinel.
Supported pipeline types: logs
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
client_id [required]
string
Azure AD client ID used for authentication.
client_secret_key
string
Name of the environment variable or secret that holds the Azure AD client secret.
dce_uri_key
string
Name of the environment variable or secret that holds the Data Collection Endpoint (DCE) URI.
dcr_immutable_id [required]
string
The immutable ID of the Data Collection Rule (DCR).
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
table [required]
string
The name of the Log Analytics table where logs are sent.
tenant_id [required]
string
Azure AD tenant ID.
type [required]
enum
The destination type. The value should always be microsoft_sentinel.
Allowed enum values: microsoft_sentinel
default: microsoft_sentinel
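The buffer oneOf above is easiest to see in context. The following hypothetical microsoft_sentinel destination uses buffer Option 1 (a disk buffer that blocks when full); the Azure IDs, secret names, and table are placeholder values.

```python
# Hypothetical microsoft_sentinel destination; all IDs and names are placeholders.
sentinel_destination = {
    "id": "sentinel-dest",
    "type": "microsoft_sentinel",
    "inputs": ["redact-pii"],
    "client_id": "00000000-0000-0000-0000-000000000000",   # Azure AD app (client) ID
    "client_secret_key": "AZURE_CLIENT_SECRET",            # env var / secret name
    "dce_uri_key": "SENTINEL_DCE_URI",
    "dcr_immutable_id": "dcr-00000000000000000000000000",  # placeholder DCR ID
    "table": "Custom_Logs_CL",                             # Log Analytics table
    "tenant_id": "11111111-1111-1111-1111-111111111111",
    # Buffer oneOf, Option 1: disk buffer, 256 MiB, block when full.
    "buffer": {"type": "disk", "max_size": 268435456, "when_full": "block"},
}
```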
Option 15
object
The new_relic destination sends logs to the New Relic platform.
Supported pipeline types: logs
account_id_key
string
Name of the environment variable or secret that holds the New Relic account ID.
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
license_key_key
string
Name of the environment variable or secret that holds the New Relic license key.
region [required]
enum
The New Relic region.
Allowed enum values: us,eu
type [required]
enum
The destination type. The value should always be new_relic.
Allowed enum values: new_relic
default: new_relic
Option 16
object
The opensearch destination writes logs to an OpenSearch cluster.
Supported pipeline types: logs
auth
object
Authentication settings for the OpenSearch destination.
When strategy is basic, use username_key and password_key to reference credentials stored in environment variables or secrets.
password_key
string
Name of the environment variable or secret that holds the OpenSearch password (used when strategy is basic).
strategy [required]
enum
The authentication strategy to use.
Allowed enum values: basic,aws
username_key
string
Name of the environment variable or secret that holds the OpenSearch username (used when strategy is basic).
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
bulk_index
string
The index to write logs to.
data_stream
object
Configuration options for writing to OpenSearch Data Streams instead of a fixed index.
dataset
string
The data stream dataset for your logs. This groups logs by their source or application.
dtype
string
The data stream type for your logs. This determines how logs are categorized within the data stream.
namespace
string
The data stream namespace for your logs. This separates logs into different environments or domains.
endpoint_url_key
string
Name of the environment variable or secret that holds the OpenSearch endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The destination type. The value should always be opensearch.
Allowed enum values: opensearch
default: opensearch
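A sketch combining the opensearch fields above: basic auth referencing credentials by secret name, and data_stream routing instead of a fixed bulk_index. The secret names and data stream values are hypothetical.

```python
# Hypothetical opensearch destination; secret names and stream values are placeholders.
opensearch_destination = {
    "id": "opensearch-dest",
    "type": "opensearch",
    "inputs": ["enrich-logs"],
    "endpoint_url_key": "OPENSEARCH_URL",   # env var / secret holding the endpoint URL
    # Auth: the basic strategy reads credentials from the named secrets.
    "auth": {
        "strategy": "basic",
        "username_key": "OPENSEARCH_USER",
        "password_key": "OPENSEARCH_PASS",
    },
    # Write to a data stream (type/dataset/namespace) rather than a bulk_index.
    "data_stream": {"dtype": "logs", "dataset": "web", "namespace": "prod"},
}
```

Note that data_stream and bulk_index are alternatives: this sketch sets only data_stream.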
Option 17
object
The rsyslog destination forwards logs to an external rsyslog server over TCP or UDP using the syslog protocol.
Supported pipeline types: logs
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
endpoint_url_key
string
Name of the environment variable or secret that holds the syslog server endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
keepalive
int64
Optional socket keepalive duration in milliseconds.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The destination type. The value should always be rsyslog.
Allowed enum values: rsyslog
default: rsyslog
Option 18
object
The sentinel_one destination sends logs to SentinelOne.
Supported pipeline types: logs
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
region [required]
enum
The SentinelOne region to send logs to.
Allowed enum values: us,eu,ca,data_set_us
token_key
string
Name of the environment variable or secret that holds the SentinelOne API token.
type [required]
enum
The destination type. The value should always be sentinel_one.
Allowed enum values: sentinel_one
default: sentinel_one
Option 19
object
The socket destination sends logs over TCP or UDP to a remote server.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the socket address (host:port).
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
encoding [required]
enum
Encoding format for log events.
Allowed enum values: json,raw_message
framing [required]
<oneOf>
Framing method configuration.
Option 1
object
Each log event is delimited by a newline character.
method [required]
enum
The framing method. The value should always be newline_delimited.
Allowed enum values: newline_delimited
Option 2
object
Event data is not delimited at all.
method [required]
enum
The framing method. The value should always be bytes.
Allowed enum values: bytes
Option 3
object
Each log event is separated using the specified delimiter character.
delimiter [required]
string
A single ASCII character used as a delimiter.
method [required]
enum
The framing method. The value should always be character_delimited.
Allowed enum values: character_delimited
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
mode [required]
enum
Protocol used to send logs.
Allowed enum values: tcp,udp
tls
object
TLS configuration. Relevant only when mode is tcp.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The destination type. The value should always be socket.
Allowed enum values: socket
default: socket
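The framing oneOf above selects how events are separated on the wire. This hypothetical socket destination sends newline-delimited JSON over TCP; the component IDs and secret name are placeholders.

```python
# Hypothetical socket destination; IDs and the secret name are placeholders.
socket_destination = {
    "id": "socket-dest",
    "type": "socket",
    "inputs": ["sample-logs"],
    "mode": "tcp",                     # "tcp" or "udp"; tls applies only to tcp
    "encoding": "json",
    "address_key": "SOCKET_ADDRESS",   # env var / secret holding "host:port"
    # Framing oneOf, Option 1: each event ends with a newline character.
    "framing": {"method": "newline_delimited"},
}

# Option 3 would instead name a single ASCII delimiter character:
alt_framing = {"method": "character_delimited", "delimiter": "\x1e"}
```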
Option 20
object
The splunk_hec destination forwards logs to Splunk using the HTTP Event Collector (HEC).
Supported pipeline types: logs
auto_extract_timestamp
boolean
If true, Splunk tries to extract timestamps from incoming log events.
If false, Splunk assigns the time the event was received.
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
encoding
enum
Encoding format for log events.
Allowed enum values: json,raw_message
endpoint_url_key
string
Name of the environment variable or secret that holds the Splunk HEC endpoint URL.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
index
string
Optional name of the Splunk index where logs are written.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
sourcetype
string
The Splunk sourcetype to assign to log events.
token_key
string
Name of the environment variable or secret that holds the Splunk HEC token.
type [required]
enum
The destination type. The value should always be splunk_hec.
Allowed enum values: splunk_hec
default: splunk_hec
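A minimal splunk_hec destination, as a sketch: the secret names, index, and sourcetype below are hypothetical placeholder values.

```python
# Hypothetical splunk_hec destination; secret names, index, and sourcetype
# are placeholders.
splunk_destination = {
    "id": "splunk-dest",
    "type": "splunk_hec",
    "inputs": ["quota-logs"],
    "encoding": "json",
    "endpoint_url_key": "SPLUNK_HEC_URL",   # env var / secret with the HEC endpoint
    "token_key": "SPLUNK_HEC_TOKEN",        # env var / secret with the HEC token
    "index": "pipeline_logs",               # optional: target Splunk index
    "sourcetype": "_json",                  # sourcetype assigned to events
    "auto_extract_timestamp": True,         # let Splunk parse event timestamps
}
```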
Option 21
object
The sumo_logic destination forwards logs to Sumo Logic.
Supported pipeline types: logs
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
encoding
enum
The output encoding format.
Allowed enum values: json,raw_message,logfmt
endpoint_url_key
string
Name of the environment variable or secret that holds the Sumo Logic HTTP endpoint URL.
header_custom_fields
[object]
A list of custom headers to include in the request to Sumo Logic.
name [required]
string
The header field name.
value [required]
string
The header field value.
header_host_name
string
Optional override for the host name header.
header_source_category
string
Optional override for the source category header.
header_source_name
string
Optional override for the source name header.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The destination type. The value should always be sumo_logic.
Allowed enum values: sumo_logic
default: sumo_logic
Option 22
object
The syslog_ng destination forwards logs to an external syslog-ng server over TCP or UDP using the syslog protocol.
Supported pipeline types: logs
buffer
<oneOf>
Configuration for buffer settings on destination components.
Option 1
object
Options for configuring a disk buffer.
max_size [required]
int64
Maximum size of the disk buffer.
type
enum
The type of the buffer that will be configured, a disk buffer.
Allowed enum values: disk
default: disk
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 2
object
Options for configuring a memory buffer by byte size.
max_size [required]
int64
Maximum size of the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
Option 3
object
Options for configuring a memory buffer by queue length.
max_events [required]
int64
Maximum events for the memory buffer.
type
enum
The type of the buffer that will be configured, a memory buffer.
Allowed enum values: memory
default: memory
when_full
enum
Behavior when the buffer is full (block and stop accepting new events, or drop new events)
Allowed enum values: block,drop_newest
default: block
endpoint_url_key
string
Name of the environment variable or secret that holds the syslog-ng server endpoint URL.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
keepalive
int64
Optional socket keepalive duration in milliseconds.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The destination type. The value should always be syslog_ng.
Allowed enum values: syslog_ng
default: syslog_ng
Option 23
object
The datadog_metrics destination forwards metrics to Datadog.
Supported pipeline types: metrics
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The destination type. The value should always be datadog_metrics.
Allowed enum values: datadog_metrics
default: datadog_metrics
pipeline_type
enum
The type of data being ingested. Defaults to logs if not specified.
Allowed enum values: logs,metrics
default: logs
processor_groups
[object]
A list of processor groups that transform or enrich log data.
display_name
string
The display name for a component.
enabled [required]
boolean
Whether this processor group is enabled.
id [required]
string
The unique identifier for the processor group.
include [required]
string
Conditional expression for when this processor group should execute.
inputs [required]
[string]
A list of IDs for components whose output is used as the input for this processor group.
processors [required]
[ <oneOf>]
Processors applied sequentially within this group. Events flow through each processor in order.
Option 1
object
The filter processor allows conditional processing of logs/metrics based on a Datadog search query. Logs/metrics that match the include query are passed through; others are discarded.
Supported pipeline types: logs, metrics
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs/metrics should pass through the filter. Logs/metrics that match this query continue to downstream components; others are dropped.
type [required]
enum
The processor type. The value should always be filter.
Allowed enum values: filter
default: filter
Option 2
object
The add_env_vars processor adds environment variable values to log events.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used to reference this processor in the pipeline.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be add_env_vars.
Allowed enum values: add_env_vars
default: add_env_vars
variables [required]
[object]
A list of environment variable mappings to apply to log fields.
field [required]
string
The target field in the log event.
name [required]
string
The name of the environment variable to read.
Option 3
object
The add_fields processor adds static key-value fields to logs.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[object]
A list of static fields (key-value pairs) that is added to each log event processed by this component.
name [required]
string
The field name.
value [required]
string
The field value.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be add_fields.
Allowed enum values: add_fields
default: add_fields
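As a sketch of the add_fields processor above, the following hypothetical entry stamps two static key-value pairs onto matching logs; the include query and field values are placeholders.

```python
# Hypothetical add_fields processor; the query and field values are placeholders.
add_fields_processor = {
    "id": "tag-env",
    "type": "add_fields",               # must always be "add_fields"
    "enabled": True,
    "include": "service:checkout",      # Datadog search query selecting target logs
    # Static fields added to every log event this processor handles.
    "fields": [
        {"name": "env", "value": "production"},
        {"name": "team", "value": "payments"},
    ],
}
```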
Option 4
object
The add_hostname processor adds the hostname to log events.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be add_hostname.
Allowed enum values: add_hostname
default: add_hostname
Option 5
object
The custom_processor processor transforms events using Vector Remap Language (VRL) scripts with advanced filtering capabilities.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets. This field should always be set to * for the custom_processor processor.
default: *
remaps [required]
[object]
Array of VRL remap rules.
drop_on_error [required]
boolean
Whether to drop events that caused errors during processing.
enabled
boolean
Whether this remap rule is enabled.
include [required]
string
A Datadog search query used to filter events for this specific remap rule.
name [required]
string
A descriptive name for this remap rule.
source [required]
string
The VRL script source code that defines the processing logic.
type [required]
enum
The processor type. The value should always be custom_processor.
Allowed enum values: custom_processor
default: custom_processor
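The remaps array above carries the actual VRL logic. The sketch below shows one hypothetical remap rule; the filter query, field name `.status`, and the VRL source are illustrative only, not a prescribed script.

```python
# Hypothetical custom_processor; the remap query, log field, and VRL source
# are illustrative placeholders.
custom_processor = {
    "id": "normalize-status",
    "type": "custom_processor",
    "enabled": True,
    "include": "*",                    # must always be "*" for custom_processor
    "remaps": [
        {
            "name": "uppercase status",       # descriptive rule name
            "include": "service:web",         # query filtering events for this rule
            "enabled": True,
            "drop_on_error": False,           # keep events even if the script errors
            # VRL script applied to each matching event.
            "source": '.status = upcase!(string!(.status))',
        }
    ],
}
```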
Option 6
object
The datadog_tags processor includes or excludes specific Datadog tags in your logs.
Supported pipeline types: logs
action [required]
enum
The action to take on tags with matching keys.
Allowed enum values: include,exclude
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
keys [required]
[string]
A list of tag keys.
mode [required]
enum
The processing mode.
Allowed enum values: filter
type [required]
enum
The processor type. The value should always be datadog_tags.
Allowed enum values: datadog_tags
default: datadog_tags
Option 7
object
The dedupe processor removes duplicate fields in log events.
Supported pipeline types: logs
cache
object
Configuration for the cache used to detect duplicates.
num_events [required]
int64
The number of events to cache for duplicate detection.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[string]
A list of log field paths to check for duplicates.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
mode [required]
enum
The deduplication mode to apply to the fields.
Allowed enum values: match,ignore
type [required]
enum
The processor type. The value should always be dedupe.
Allowed enum values: dedupe
default: dedupe
Option 8
object
The enrichment_table processor enriches logs using a static CSV file, GeoIP database, or reference table. Exactly one of file, geoip, or reference_table must be configured.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
file
object
Defines a static enrichment table loaded from a CSV file.
encoding [required]
object
File encoding format.
delimiter [required]
string
The delimiter character that separates fields in the CSV file.
includes_headers [required]
boolean
Whether the first row of the CSV file contains column headers.
type [required]
enum
Specifies the encoding format (e.g., CSV) used for enrichment tables.
Allowed enum values: csv
key [required]
[object]
Key fields used to look up enrichment values.
column [required]
string
The enrichment table column to match against.
comparison [required]
enum
Defines how to compare key fields for enrichment table lookups.
Allowed enum values: equals
field [required]
string
The path of the log field whose value is compared against the column.
path [required]
string
Path to the CSV file.
schema [required]
[object]
Schema defining column names and their types.
column [required]
string
The name of the column in the enrichment table.
type [required]
enum
Declares allowed data types for enrichment table columns.
Allowed enum values: string,boolean,integer,float,date,timestamp
geoip
object
Uses a GeoIP database to enrich logs based on an IP field.
key_field [required]
string
Path to the IP field in the log.
locale [required]
string
Locale used to resolve geographical names.
path [required]
string
Path to the GeoIP database file.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
reference_table
object
Uses a Datadog reference table to enrich logs.
app_key_key
string
Name of the environment variable or secret that holds the Datadog application key used to access the reference table.
columns
[string]
List of column names to include from the reference table. If not provided, all columns are included.
key_field [required]
string
Path to the field in the log event to match against the reference table.
table_id [required]
string
The unique identifier of the reference table.
target [required]
string
Path where enrichment results should be stored in the log.
type [required]
enum
The processor type. The value should always be enrichment_table.
Allowed enum values: enrichment_table
default: enrichment_table
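As a concrete illustration of the file variant, an enrichment_table processor entry might look like the sketch below. The CSV path, column names, and field paths are assumptions for the example; remember that exactly one of file, geoip, or reference_table may be configured:

```json
{
  "type": "enrichment_table",
  "id": "enrichment-table-1",
  "enabled": true,
  "include": "*",
  "target": "geo",
  "file": {
    "path": "/etc/pipeline/regions.csv",
    "encoding": {
      "type": "csv",
      "delimiter": ",",
      "includes_headers": true
    },
    "key": [
      {
        "column": "ip",
        "comparison": "equals",
        "field": "network.client.ip"
      }
    ],
    "schema": [
      { "column": "ip", "type": "string" },
      { "column": "region", "type": "string" }
    ]
  }
}
```

The matched row's values are written under the target path (geo in this sketch).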
Option 9
object
The generate_datadog_metrics processor creates custom metrics from logs and sends them to Datadog.
Metrics can be counters, gauges, or distributions and optionally grouped by log fields.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used to reference this component in other parts of the pipeline.
include
string
A Datadog search query used to determine which logs this processor targets.
metrics
[object]
Configuration for generating individual metrics.
group_by
[string]
Optional fields used to group the metric series.
include [required]
string
Datadog filter query to match logs for metric generation.
metric_type [required]
enum
Type of metric to create.
Allowed enum values: count,gauge,distribution
name [required]
string
Name of the custom metric to be created.
value [required]
<oneOf>
Specifies how the value of the generated metric is computed.
Option 1
object
Strategy that increments a generated metric by one for each matching event.
strategy [required]
enum
Increments the metric by 1 for each matching event.
Allowed enum values: increment_by_one
Option 2
object
Strategy that increments a generated metric based on the value of a log field.
field [required]
string
Name of the log field containing the numeric value to increment the metric by.
strategy [required]
enum
Uses a numeric field in the log event as the metric increment.
Allowed enum values: increment_by_field
type [required]
enum
The processor type. The value should always be generate_datadog_metrics.
Allowed enum values: generate_datadog_metrics
default: generate_datadog_metrics
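A generate_datadog_metrics entry combining both value strategies might look like this sketch; the metric names, queries, and the duration_ms field are illustrative:

```json
{
  "type": "generate_datadog_metrics",
  "id": "generate-metrics-1",
  "enabled": true,
  "metrics": [
    {
      "name": "checkout.errors",
      "include": "status:error",
      "metric_type": "count",
      "group_by": ["service"],
      "value": { "strategy": "increment_by_one" }
    },
    {
      "name": "checkout.duration",
      "include": "service:checkout",
      "metric_type": "distribution",
      "value": { "strategy": "increment_by_field", "field": "duration_ms" }
    }
  ]
}
```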
Option 10
object
The ocsf_mapper processor transforms logs into the OCSF schema using a predefined mapping configuration.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used to reference this component in other parts of the pipeline.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
mappings [required]
[object]
A list of mapping rules to convert events to the OCSF format.
include [required]
string
A Datadog search query used to select the logs that this mapping should apply to.
mapping [required]
<oneOf>
Defines a single mapping rule for transforming logs into the OCSF schema.
Custom OCSF mapping configuration for transforming logs.
mapping [required]
[object]
A list of field mapping rules for transforming log fields to OCSF schema fields.
default
The default value to use if the source field is missing or empty.
dest [required]
string
The destination OCSF field path.
lookup
object
Lookup table configuration for mapping source values to destination values.
source
The source field path from the log event.
sources
Multiple source field paths for combined mapping.
value
A static value to use for the destination field.
metadata [required]
object
Metadata for the custom OCSF mapping.
class [required]
string
The OCSF event class name.
profiles
[string]
A list of OCSF profiles to apply.
version [required]
string
The OCSF schema version.
version [required]
int64
The version of the custom mapping configuration.
type [required]
enum
The processor type. The value should always be ocsf_mapper.
Allowed enum values: ocsf_mapper
default: ocsf_mapper
Option 11
object
The parse_grok processor extracts structured fields from unstructured log messages using Grok patterns.
Supported pipeline types: logs
disable_library_rules
boolean
If set to true, disables the default Grok rules provided by Datadog.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
A unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
rules [required]
[object]
The list of Grok parsing rules. If multiple matching rules are provided, they are evaluated in order. The first successful match is applied.
match_rules [required]
[object]
A list of Grok parsing rules that define how to extract fields from the source field.
Each rule must contain a name and a valid Grok pattern.
name [required]
string
The name of the rule.
rule [required]
string
The definition of the Grok rule.
source [required]
string
The name of the field in the log event to apply the Grok rules to.
support_rules
[object]
A list of Grok helper rules that can be referenced by the parsing rules.
name [required]
string
The name of the Grok helper rule.
rule [required]
string
The definition of the Grok helper rule.
type [required]
enum
The processor type. The value should always be parse_grok.
Allowed enum values: parse_grok
default: parse_grok
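Putting the rule structure above together, a parse_grok entry might look like the following sketch. The Grok pattern itself is illustrative, not a verified library rule:

```json
{
  "type": "parse_grok",
  "id": "parse-grok-1",
  "enabled": true,
  "include": "source:nginx",
  "disable_library_rules": false,
  "rules": [
    {
      "source": "message",
      "match_rules": [
        {
          "name": "access_log",
          "rule": "%{ipv4:network.client.ip} %{word:http.method} %{notSpace:http.url_path}"
        }
      ],
      "support_rules": []
    }
  ]
}
```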
Option 12
object
The parse_json processor extracts JSON from a specified field and flattens it into the event. This is useful when logs contain embedded JSON as a string.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
field [required]
string
The name of the log field that contains a JSON string.
id [required]
string
A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be parse_json.
Allowed enum values: parse_json
default: parse_json
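A minimal parse_json entry is short, since only the source field needs to be named; the id is illustrative:

```json
{
  "type": "parse_json",
  "id": "parse-json-1",
  "enabled": true,
  "include": "*",
  "field": "message"
}
```

If message contains an embedded JSON string, its keys are flattened into the event alongside the existing fields.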
Option 13
object
The parse_xml processor parses XML from a specified field and extracts it into the event.
Supported pipeline types: logs
always_use_text_key
boolean
Whether to always use a text key for element content.
attr_prefix
string
The prefix to use for XML attributes in the parsed output.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
field [required]
string
The name of the log field that contains an XML string.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
include_attr
boolean
Whether to include XML attributes in the parsed output.
parse_bool
boolean
Whether to parse boolean values from strings.
parse_null
boolean
Whether to parse null values.
parse_number
boolean
Whether to parse numeric values from strings.
text_key
string
The key name to use for text content within XML elements. Must be at least 1 character if specified.
type [required]
enum
The processor type. The value should always be parse_xml.
Allowed enum values: parse_xml
default: parse_xml
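A parse_xml entry using the optional parsing flags might look like this sketch (the prefix and key names are illustrative choices, not defaults):

```json
{
  "type": "parse_xml",
  "id": "parse-xml-1",
  "enabled": true,
  "include": "*",
  "field": "message",
  "include_attr": true,
  "attr_prefix": "@",
  "text_key": "text",
  "parse_number": true,
  "parse_bool": true
}
```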
Option 14
object
The quota processor measures logging traffic for logs that match a specified filter. When the configured daily quota is reached, the processor can drop events or trigger an alert.
Supported pipeline types: logs
display_name
string
The display name for a component.
drop_events
boolean
If set to true, logs that match the quota filter and are sent after the quota is exceeded are dropped. Logs that do not match the filter continue through the pipeline. Note: You can set either drop_events or overflow_action, but not both.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
ignore_when_missing_partitions
boolean
If true, the processor skips quota checks when partition fields are missing from the logs.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
limit [required]
object
The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.
enforce [required]
enum
The unit used for quota enforcement: bytes for data size or events for event count.
Allowed enum values: bytes,events
limit [required]
int64
The limit for quota enforcement.
name [required]
string
Name of the quota.
overflow_action
enum
The action to take when the quota is exceeded. Options:
drop: Drop the event.
no_action: Let the event pass through.
overflow_routing: Route to an overflow destination.
Allowed enum values: drop,no_action,overflow_routing
overrides
[object]
A list of alternate quota rules that apply to specific sets of events, identified by matching field values. Each override can define a custom limit.
fields [required]
[object]
A list of field matchers used to apply a specific override. If an event matches all listed key-value pairs, the corresponding override limit is enforced.
name [required]
string
The field name.
value [required]
string
The field value.
limit [required]
object
The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.
enforce [required]
enum
The unit used for quota enforcement: bytes for data size or events for event count.
Allowed enum values: bytes,events
limit [required]
int64
The limit for quota enforcement.
partition_fields
[string]
A list of fields used to segment log traffic for quota enforcement. Quotas are tracked independently by unique combinations of these field values.
too_many_buckets_action
enum
The action to take when the number of tracked quota buckets exceeds the limit. Options:
drop: Drop the event.
no_action: Let the event pass through.
overflow_routing: Route to an overflow destination.
Allowed enum values: drop,no_action,overflow_routing
type [required]
enum
The processor type. The value should always be quota.
Allowed enum values: quota
default: quota
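A quota entry with a per-environment override might look like the following sketch; the name, query, and byte limits are illustrative:

```json
{
  "type": "quota",
  "id": "quota-1",
  "enabled": true,
  "include": "service:checkout",
  "name": "checkout-daily-quota",
  "drop_events": true,
  "limit": { "enforce": "bytes", "limit": 10737418240 },
  "partition_fields": ["env"],
  "overrides": [
    {
      "fields": [{ "name": "env", "value": "prod" }],
      "limit": { "enforce": "bytes", "limit": 53687091200 }
    }
  ]
}
```

Because drop_events is set here, overflow_action is omitted; the two are mutually exclusive.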
Option 15
object
The reduce processor aggregates and merges logs based on matching keys and merge strategies.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
group_by [required]
[string]
A list of fields used to group log events for merging.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
merge_strategies [required]
[object]
List of merge strategies defining how values from grouped events should be combined.
path [required]
string
The field path in the log event.
strategy [required]
enum
The merge strategy to apply.
Allowed enum values: discard,retain,sum,max,min,array,concat,concat_newline,concat_raw,shortest_array,longest_array,flat_unique
type [required]
enum
The processor type. The value should always be reduce.
Allowed enum values: reduce
default: reduce
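A reduce entry combining several merge strategies might look like this sketch; the transaction_id grouping key and field paths are illustrative:

```json
{
  "type": "reduce",
  "id": "reduce-1",
  "enabled": true,
  "include": "*",
  "group_by": ["transaction_id"],
  "merge_strategies": [
    { "path": "message", "strategy": "concat_newline" },
    { "path": "duration_ms", "strategy": "sum" },
    { "path": "host", "strategy": "retain" }
  ]
}
```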
Option 16
object
The remove_fields processor deletes specified fields from logs.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[string]
A list of field names to be removed from each log event.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be remove_fields.
Allowed enum values: remove_fields
default: remove_fields
Option 17
object
The rename_fields processor changes field names.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[object]
A list of rename rules specifying which fields to rename in the event, what to rename them to, and whether to preserve the original fields.
destination [required]
string
The field name to assign the renamed value to.
preserve_source [required]
boolean
Indicates whether the original field received from the source should be kept (true) or removed (false) after renaming.
source [required]
string
The original field name in the log event that should be renamed.
id [required]
string
A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be rename_fields.
Allowed enum values: rename_fields
default: rename_fields
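A rename_fields entry might look like the following sketch; the field names are illustrative:

```json
{
  "type": "rename_fields",
  "id": "rename-fields-1",
  "enabled": true,
  "include": "*",
  "fields": [
    {
      "source": "msg",
      "destination": "message",
      "preserve_source": false
    }
  ]
}
```

With preserve_source set to false, the original msg field is removed after its value is copied to message.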
Option 18
object
The sample processor allows probabilistic sampling of logs at a fixed rate.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
group_by
[string]
Optional list of fields to group events by. Each group is sampled independently.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
percentage [required]
double
The percentage of logs to sample.
type [required]
enum
The processor type. The value should always be sample.
Allowed enum values: sample
default: sample
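A sample entry might look like this sketch; the query and rate are illustrative:

```json
{
  "type": "sample",
  "id": "sample-1",
  "enabled": true,
  "include": "status:info",
  "percentage": 10.0,
  "group_by": ["service"]
}
```

With group_by set, roughly 10% of matching logs are kept per service rather than across the whole stream.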
Option 19
object
The sensitive_data_scanner processor detects and optionally redacts sensitive data in log events.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
rules [required]
[object]
A list of rules for identifying and acting on sensitive data patterns.
keyword_options
object
Configuration for keywords used to reinforce sensitive data pattern detection.
keywords [required]
[string]
A list of keywords to match near the sensitive pattern.
proximity [required]
int64
Maximum number of tokens between a keyword and a sensitive value match.
name [required]
string
A name identifying the rule.
on_match [required]
<oneOf>
Defines what action to take when sensitive data is matched.
Option 1
object
Configuration for completely redacting matched sensitive data.
action [required]
enum
Action type that completely replaces the matched sensitive data with a fixed replacement string to remove all visibility.
Allowed enum values: redact
options [required]
object
Configuration for fully redacting sensitive data.
replace [required]
string
The string that replaces the matched sensitive data.
Option 2
object
Configuration for hashing matched sensitive values.
action [required]
enum
Action type that replaces the matched sensitive data with a hashed representation, preserving structure while securing content.
Allowed enum values: hash
options
object
Options for the hash action.
Option 3
object
Configuration for partially redacting matched sensitive data.
action [required]
enum
Action type that redacts part of the sensitive data while preserving a configurable number of characters, typically used for masking purposes (e.g., show last 4 digits of a credit card).
Allowed enum values: partial_redact
options [required]
object
Controls how partial redaction is applied, including character count and direction.
characters [required]
int64
The number of characters to redact, counted from the direction specified by direction.
direction [required]
enum
Indicates whether to redact characters from the first or last part of the matched value.
Allowed enum values: first,last
pattern [required]
<oneOf>
Pattern detection configuration for identifying sensitive data using either a custom regex or a library reference.
Option 1
object
Defines a custom regex-based pattern for identifying sensitive data in logs.
options [required]
object
Options for defining a custom regex pattern.
description
string
Human-readable description providing context about a sensitive data scanner rule.
rule [required]
string
A regular expression used to detect sensitive values. Must be a valid regex.
type [required]
enum
Indicates a custom regular expression is used for matching.
Allowed enum values: custom
Option 2
object
Specifies a pattern from Datadog’s sensitive data detection library to match known sensitive data types.
options [required]
object
Options for selecting a predefined library pattern and enabling keyword support.
description
string
Human-readable description providing context about a sensitive data scanner rule.
id [required]
string
Identifier for a predefined pattern from the sensitive data scanner pattern library.
use_recommended_keywords
boolean
Whether to augment the pattern with recommended keywords (optional).
type [required]
enum
Indicates that a predefined library pattern is used.
Allowed enum values: library
scope [required]
<oneOf>
Determines which parts of the log the pattern-matching rule should be applied to.
Option 1
object
Includes only specific fields for sensitive data scanning.
options [required]
object
Fields to which the scope rule applies.
fields [required]
[string]
The list of log field paths to which the scope rule applies.
target [required]
enum
Applies the rule only to included fields.
Allowed enum values: include
Option 2
object
Excludes specific fields from sensitive data scanning.
options [required]
object
Fields to which the scope rule applies.
fields [required]
[string]
The list of log field paths to which the scope rule applies.
target [required]
enum
Excludes specific fields from processing.
Allowed enum values: exclude
Option 3
object
Applies scanning across all available fields.
target [required]
enum
Applies the rule to all fields.
Allowed enum values: all
tags [required]
[string]
Tags assigned to this rule for filtering and classification.
type [required]
enum
The processor type. The value should always be sensitive_data_scanner.
Allowed enum values: sensitive_data_scanner
default: sensitive_data_scanner
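Tying the rule structure together, a sensitive_data_scanner entry using a custom pattern with partial redaction might look like the sketch below. The regex, keyword list, and character counts are illustrative and not a production-grade credit card matcher:

```json
{
  "type": "sensitive_data_scanner",
  "id": "sds-1",
  "enabled": true,
  "include": "*",
  "rules": [
    {
      "name": "mask-credit-cards",
      "tags": ["sensitive:cc"],
      "pattern": {
        "type": "custom",
        "options": { "rule": "\\b\\d{13,16}\\b" }
      },
      "scope": { "target": "all" },
      "on_match": {
        "action": "partial_redact",
        "options": { "characters": 12, "direction": "first" }
      },
      "keyword_options": { "keywords": ["card", "cc"], "proximity": 10 }
    }
  ]
}
```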
Option 20
object
The split_array processor splits array fields into separate events based on configured rules.
Supported pipeline types: logs
arrays [required]
[object]
A list of array split configurations.
field [required]
string
The path to the array field to split.
include [required]
string
A Datadog search query used to determine which logs this array split operation targets.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets. For split_array, this should typically be *.
type [required]
enum
The processor type. The value should always be split_array.
Allowed enum values: split_array
default: split_array
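A split_array entry might look like this sketch; the records field path is illustrative:

```json
{
  "type": "split_array",
  "id": "split-array-1",
  "enabled": true,
  "include": "*",
  "arrays": [
    { "field": "records", "include": "*" }
  ]
}
```

Each element of the records array becomes its own event downstream.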
Option 21
object
The throttle processor limits the number of events that pass through over a given time window.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
group_by
[string]
Optional list of fields used to group events. The threshold is applied to each group independently.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
threshold [required]
int64
The number of events allowed in a given time window. Events sent after the threshold has been reached are dropped.
type [required]
enum
The processor type. The value should always be throttle.
Allowed enum values: throttle
default: throttle
window [required]
double
The time window in seconds over which the threshold applies.
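A throttle entry might look like the following sketch; the threshold and window values are illustrative:

```json
{
  "type": "throttle",
  "id": "throttle-1",
  "enabled": true,
  "include": "*",
  "threshold": 1000,
  "window": 60.0,
  "group_by": ["service"]
}
```

With this configuration, at most 1000 events per 60-second window pass through for each service value; events beyond the threshold are dropped.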
Option 22
object
The metric_tags processor filters metrics based on their tags using Datadog tag key patterns.
Supported pipeline types: metrics
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query that determines which metrics the processor targets.
rules [required]
[object]
A list of rules for filtering metric tags.
action [required]
enum
The action to take on tags with matching keys.
Allowed enum values: include,exclude
include [required]
string
A Datadog search query used to determine which metrics this rule targets.
keys [required]
[string]
A list of tag keys to include or exclude.
mode [required]
enum
The processing mode for tag filtering.
Allowed enum values: filter
type [required]
enum
The processor type. The value should always be metric_tags.
Allowed enum values: metric_tags
default: metric_tags
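A metric_tags entry that strips high-cardinality tags might look like this sketch; the tag keys are illustrative:

```json
{
  "type": "metric_tags",
  "id": "metric-tags-1",
  "enabled": true,
  "include": "*",
  "rules": [
    {
      "action": "exclude",
      "include": "*",
      "keys": ["pod_name", "container_id"],
      "mode": "filter"
    }
  ]
}
```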
processors
[object]
A list of processor groups that transform or enrich log data.
Deprecated: Use the processor_groups field instead.
display_name
string
The display name for a component.
enabled [required]
boolean
Whether this processor group is enabled.
id [required]
string
The unique identifier for the processor group.
include [required]
string
Conditional expression for when this processor group should execute.
inputs [required]
[string]
A list of IDs for components whose output is used as the input for this processor group.
processors [required]
[ <oneOf>]
Processors applied sequentially within this group. Events flow through each processor in order.
Option 1
object
The filter processor allows conditional processing of logs/metrics based on a Datadog search query. Logs/metrics that match the include query are passed through; others are discarded.
Supported pipeline types: logs, metrics
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs/metrics should pass through the filter. Logs/metrics that match this query continue to downstream components; others are dropped.
type [required]
enum
The processor type. The value should always be filter.
Allowed enum values: filter
default: filter
Option 2
object
The add_env_vars processor adds environment variable values to log events.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used to reference this processor in the pipeline.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be add_env_vars.
Allowed enum values: add_env_vars
default: add_env_vars
variables [required]
[object]
A list of environment variable mappings to apply to log fields.
field [required]
string
The target field in the log event.
name [required]
string
The name of the environment variable to read.
Option 3
object
The add_fields processor adds static key-value fields to logs.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[object]
A list of static fields (key-value pairs) added to each log event processed by this component.
name [required]
string
The field name.
value [required]
string
The field value.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be add_fields.
Allowed enum values: add_fields
default: add_fields
Option 4
object
The add_hostname processor adds the hostname to log events.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be add_hostname.
Allowed enum values: add_hostname
default: add_hostname
Option 5
object
The custom_processor processor transforms events using Vector Remap Language (VRL) scripts with advanced filtering capabilities.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets. This field should always be set to * for the custom_processor processor.
default: *
remaps [required]
[object]
Array of VRL remap rules.
drop_on_error [required]
boolean
Whether to drop events that caused errors during processing.
enabled
boolean
Whether this remap rule is enabled.
include [required]
string
A Datadog search query used to filter events for this specific remap rule.
name [required]
string
A descriptive name for this remap rule.
source [required]
string
The VRL script source code that defines the processing logic.
type [required]
enum
The processor type. The value should always be custom_processor.
Allowed enum values: custom_processor
default: custom_processor
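A custom_processor entry might look like the sketch below. Note that the processor-level include is * as the description requires, while each remap rule carries its own filter; the VRL snippet is an illustrative one-liner:

```json
{
  "type": "custom_processor",
  "id": "custom-processor-1",
  "enabled": true,
  "include": "*",
  "remaps": [
    {
      "name": "normalize-status",
      "enabled": true,
      "include": "service:web",
      "drop_on_error": false,
      "source": ".status = downcase(string!(.status))"
    }
  ]
}
```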
Option 6
object
The datadog_tags processor includes or excludes specific Datadog tags in your logs.
Supported pipeline types: logs
action [required]
enum
The action to take on tags with matching keys.
Allowed enum values: include,exclude
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
keys [required]
[string]
A list of tag keys.
mode [required]
enum
The processing mode.
Allowed enum values: filter
type [required]
enum
The processor type. The value should always be datadog_tags.
Allowed enum values: datadog_tags
default: datadog_tags
Option 7
object
The dedupe processor removes duplicate fields in log events.
Supported pipeline types: logs
cache
object
Configuration for the cache used to detect duplicates.
num_events [required]
int64
The number of events to cache for duplicate detection.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[string]
A list of log field paths to check for duplicates.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
mode [required]
enum
The deduplication mode to apply to the fields.
Allowed enum values: match,ignore
type [required]
enum
The processor type. The value should always be dedupe.
Allowed enum values: dedupe
default: dedupe
Option 8
object
The enrichment_table processor enriches logs using a static CSV file, GeoIP database, or reference table. Exactly one of file, geoip, or reference_table must be configured.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
file
object
Defines a static enrichment table loaded from a CSV file.
encoding [required]
object
File encoding format.
delimiter [required]
string
The delimiter used to separate values in the CSV file.
includes_headers [required]
boolean
Whether the CSV file includes a header row.
type [required]
enum
Specifies the encoding format (e.g., CSV) used for enrichment tables.
Allowed enum values: csv
key [required]
[object]
Key fields used to look up enrichment values.
column [required]
string
The name of the column in the enrichment table used as the lookup key.
comparison [required]
enum
Defines how to compare key fields for enrichment table lookups.
Allowed enum values: equals
field [required]
string
The path of the log field whose value is compared against the column.
path [required]
string
Path to the CSV file.
schema [required]
[object]
Schema defining column names and their types.
column [required]
string
The name of the column in the enrichment table.
type [required]
enum
Declares allowed data types for enrichment table columns.
Allowed enum values: string,boolean,integer,float,date,timestamp
geoip
object
Uses a GeoIP database to enrich logs based on an IP field.
key_field [required]
string
Path to the IP field in the log.
locale [required]
string
Locale used to resolve geographical names.
path [required]
string
Path to the GeoIP database file.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
reference_table
object
Uses a Datadog reference table to enrich logs.
app_key_key
string
Name of the environment variable or secret that holds the Datadog application key used to access the reference table.
columns
[string]
List of column names to include from the reference table. If not provided, all columns are included.
key_field [required]
string
Path to the field in the log event to match against the reference table.
table_id [required]
string
The unique identifier of the reference table.
target [required]
string
Path where enrichment results should be stored in the log.
type [required]
enum
The processor type. The value should always be enrichment_table.
Allowed enum values: enrichment_table
default: enrichment_table
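For reference, a minimal sketch of a file-based enrichment_table processor configuration; the ID, query, file path, and column names are illustrative, not prescriptive:

```json
{
  "id": "enrich-1",
  "type": "enrichment_table",
  "enabled": true,
  "include": "service:web",
  "target": "enriched",
  "file": {
    "path": "/etc/tables/owners.csv",
    "encoding": { "type": "csv", "delimiter": ",", "includes_headers": true },
    "key": [ { "column": "host", "comparison": "equals", "field": "hostname" } ],
    "schema": [
      { "column": "host", "type": "string" },
      { "column": "owner", "type": "string" }
    ]
  }
}
```

Exactly one of file, geoip, or reference_table may be set; the other two are omitted here.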
Option 9
object
The generate_datadog_metrics processor creates custom metrics from logs and sends them to Datadog.
Metrics can be counters, gauges, or distributions and optionally grouped by log fields.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used to reference this component in other parts of the pipeline.
include
string
A Datadog search query used to determine which logs this processor targets.
metrics
[object]
Configuration for generating individual metrics.
group_by
[string]
Optional fields used to group the metric series.
include [required]
string
Datadog filter query to match logs for metric generation.
metric_type [required]
enum
Type of metric to create.
Allowed enum values: count,gauge,distribution
name [required]
string
Name of the custom metric to be created.
value [required]
<oneOf>
Specifies how the value of the generated metric is computed.
Option 1
object
Strategy that increments a generated metric by one for each matching event.
strategy [required]
enum
Increments the metric by 1 for each matching event.
Allowed enum values: increment_by_one
Option 2
object
Strategy that increments a generated metric based on the value of a log field.
field [required]
string
Name of the log field containing the numeric value to increment the metric by.
strategy [required]
enum
Uses a numeric field in the log event as the metric increment.
Allowed enum values: increment_by_field
type [required]
enum
The processor type. Always generate_datadog_metrics.
Allowed enum values: generate_datadog_metrics
default: generate_datadog_metrics
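Putting the fields above together, a hedged sketch of a generate_datadog_metrics processor that derives a distribution from a numeric log field (metric name, query, and field name are illustrative):

```json
{
  "id": "metrics-1",
  "type": "generate_datadog_metrics",
  "enabled": true,
  "include": "*",
  "metrics": [
    {
      "name": "custom.request_duration",
      "include": "service:api",
      "metric_type": "distribution",
      "group_by": ["env"],
      "value": { "strategy": "increment_by_field", "field": "duration_ms" }
    }
  ]
}
```

A count metric would instead use "value": { "strategy": "increment_by_one" } with no field.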
Option 10
object
The ocsf_mapper processor transforms logs into the OCSF schema using a predefined mapping configuration.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used to reference this component in other parts of the pipeline.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
mappings [required]
[object]
A list of mapping rules to convert events to the OCSF format.
include [required]
string
A Datadog search query used to select the logs that this mapping should apply to.
mapping [required]
<oneOf>
Defines a single mapping rule for transforming logs into the OCSF schema.
Option 1
object
Custom OCSF mapping configuration for transforming logs.
mapping [required]
[object]
A list of field mapping rules for transforming log fields to OCSF schema fields.
default
The default value to use if the source field is missing or empty.
dest [required]
string
The destination OCSF field path.
lookup
object
Lookup table configuration for mapping source values to destination values.
source
The source field path from the log event.
sources
Multiple source field paths for combined mapping.
value
A static value to use for the destination field.
metadata [required]
object
Metadata for the custom OCSF mapping.
class [required]
string
The OCSF event class name.
profiles
[string]
A list of OCSF profiles to apply.
version [required]
string
The OCSF schema version.
version [required]
int64
The version of the custom mapping configuration.
type [required]
enum
The processor type. The value should always be ocsf_mapper.
Allowed enum values: ocsf_mapper
default: ocsf_mapper
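As an illustrative sketch only, a custom-mapping ocsf_mapper processor might look like the following; the OCSF class name, field paths, and query are assumed examples, not values defined by this schema:

```json
{
  "id": "ocsf-1",
  "type": "ocsf_mapper",
  "enabled": true,
  "include": "*",
  "mappings": [
    {
      "include": "source:auth",
      "mapping": {
        "version": 1,
        "metadata": { "class": "Authentication", "version": "1.1.0", "profiles": [] },
        "mapping": [
          { "dest": "activity_id", "value": 1 },
          { "dest": "actor.user.name", "source": "usr.name" }
        ]
      }
    }
  ]
}
```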
Option 11
object
The parse_grok processor extracts structured fields from unstructured log messages using Grok patterns.
Supported pipeline types: logs
disable_library_rules
boolean
If set to true, disables the default Grok rules provided by Datadog.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
A unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
rules [required]
[object]
The list of Grok parsing rules. If multiple matching rules are provided, they are evaluated in order. The first successful match is applied.
match_rules [required]
[object]
A list of Grok parsing rules that define how to extract fields from the source field.
Each rule must contain a name and a valid Grok pattern.
name [required]
string
The name of the rule.
rule [required]
string
The definition of the Grok rule.
source [required]
string
The name of the field in the log event to apply the Grok rules to.
support_rules
[object]
A list of Grok helper rules that can be referenced by the parsing rules.
name [required]
string
The name of the Grok helper rule.
rule [required]
string
The definition of the Grok helper rule.
type [required]
enum
The processor type. The value should always be parse_grok.
Allowed enum values: parse_grok
default: parse_grok
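A minimal parse_grok configuration sketch, applying one rule to the message field (the rule name and Grok pattern are illustrative):

```json
{
  "id": "grok-1",
  "type": "parse_grok",
  "enabled": true,
  "include": "source:nginx",
  "disable_library_rules": false,
  "rules": [
    {
      "source": "message",
      "match_rules": [
        { "name": "access", "rule": "%{ip:client} %{word:method} %{notSpace:path}" }
      ],
      "support_rules": []
    }
  ]
}
```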
Option 12
object
The parse_json processor extracts JSON from a specified field and flattens it into the event. This is useful when logs contain embedded JSON as a string.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
field [required]
string
The name of the log field that contains a JSON string.
id [required]
string
A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be parse_json.
Allowed enum values: parse_json
default: parse_json
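Because parse_json has only one distinctive field, its configuration is brief; a sketch with an illustrative ID and query:

```json
{
  "id": "parse-json-1",
  "type": "parse_json",
  "enabled": true,
  "include": "*",
  "field": "message"
}
```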
Option 13
object
The parse_xml processor parses XML from a specified field and extracts it into the event.
Supported pipeline types: logs
always_use_text_key
boolean
Whether to always use a text key for element content.
attr_prefix
string
The prefix to use for XML attributes in the parsed output.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
field [required]
string
The name of the log field that contains an XML string.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
include_attr
boolean
Whether to include XML attributes in the parsed output.
parse_bool
boolean
Whether to parse boolean values from strings.
parse_null
boolean
Whether to parse null values.
parse_number
boolean
Whether to parse numeric values from strings.
text_key
string
The key name to use for text content within XML elements. Must be at least 1 character if specified.
type [required]
enum
The processor type. The value should always be parse_xml.
Allowed enum values: parse_xml
default: parse_xml
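A hedged sketch of a parse_xml processor with a few of the optional parsing toggles set (ID, query, and field name are illustrative):

```json
{
  "id": "parse-xml-1",
  "type": "parse_xml",
  "enabled": true,
  "include": "*",
  "field": "message",
  "include_attr": true,
  "attr_prefix": "@",
  "parse_number": true,
  "parse_bool": true,
  "text_key": "text"
}
```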
Option 14
object
The quota processor measures logging traffic for logs that match a specified filter. When the configured daily quota is met, the processor can drop or alert.
Supported pipeline types: logs
display_name
string
The display name for a component.
drop_events
boolean
If set to true, logs that match the quota filter and are sent after the quota is exceeded are dropped. Logs that do not match the filter continue through the pipeline. Note: You can set either drop_events or overflow_action, but not both.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
ignore_when_missing_partitions
boolean
If true, the processor skips quota checks when partition fields are missing from the logs.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
limit [required]
object
The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.
enforce [required]
enum
The unit used to enforce the quota: bytes for data size or events for event count.
Allowed enum values: bytes,events
limit [required]
int64
The limit for quota enforcement.
name [required]
string
Name of the quota.
overflow_action
enum
The action to take when the quota or bucket limit is exceeded. Options:
drop: Drop the event.
no_action: Let the event pass through.
overflow_routing: Route to an overflow destination.
Allowed enum values: drop,no_action,overflow_routing
overrides
[object]
A list of alternate quota rules that apply to specific sets of events, identified by matching field values. Each override can define a custom limit.
fields [required]
[object]
A list of field matchers used to apply a specific override. If an event matches all listed key-value pairs, the corresponding override limit is enforced.
name [required]
string
The field name.
value [required]
string
The field value.
limit [required]
object
The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.
enforce [required]
enum
The unit used to enforce the quota: bytes for data size or events for event count.
Allowed enum values: bytes,events
limit [required]
int64
The limit for quota enforcement.
partition_fields
[string]
A list of fields used to segment log traffic for quota enforcement. Quotas are tracked independently by unique combinations of these field values.
too_many_buckets_action
enum
The action to take when the number of tracked quota buckets exceeds the limit. Options:
drop: Drop the event.
no_action: Let the event pass through.
overflow_routing: Route to an overflow destination.
Allowed enum values: drop,no_action,overflow_routing
type [required]
enum
The processor type. The value should always be quota.
Allowed enum values: quota
default: quota
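The quota fields above combine as follows; this sketch enforces a per-service byte quota with one override (quota name, limits, and field values are illustrative). Note that overflow_action is used here, so drop_events is omitted, since the two are mutually exclusive:

```json
{
  "id": "quota-1",
  "type": "quota",
  "enabled": true,
  "include": "*",
  "name": "daily-ingest",
  "limit": { "enforce": "bytes", "limit": 10737418240 },
  "partition_fields": ["service"],
  "overflow_action": "drop",
  "overrides": [
    {
      "fields": [ { "name": "service", "value": "checkout" } ],
      "limit": { "enforce": "bytes", "limit": 21474836480 }
    }
  ]
}
```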
Option 15
object
The reduce processor aggregates and merges logs based on matching keys and merge strategies.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
group_by [required]
[string]
A list of fields used to group log events for merging.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
merge_strategies [required]
[object]
List of merge strategies defining how values from grouped events should be combined.
path [required]
string
The field path in the log event.
strategy [required]
enum
The merge strategy to apply.
Allowed enum values: discard,retain,sum,max,min,array,concat,concat_newline,concat_raw,shortest_array,longest_array,flat_unique
type [required]
enum
The processor type. The value should always be reduce.
Allowed enum values: reduce
default: reduce
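A minimal reduce configuration sketch that groups events by a trace identifier and combines two fields with different merge strategies (field names are illustrative):

```json
{
  "id": "reduce-1",
  "type": "reduce",
  "enabled": true,
  "include": "*",
  "group_by": ["trace_id"],
  "merge_strategies": [
    { "path": "message", "strategy": "concat_newline" },
    { "path": "duration_ms", "strategy": "sum" }
  ]
}
```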
Option 16
object
The remove_fields processor deletes specified fields from logs.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[string]
A list of field names to be removed from each log event.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be remove_fields.
Allowed enum values: remove_fields
default: remove_fields
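A remove_fields sketch, dropping two illustrative fields from every matching event:

```json
{
  "id": "rm-1",
  "type": "remove_fields",
  "enabled": true,
  "include": "*",
  "fields": ["password", "debug_info"]
}
```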
Option 17
object
The rename_fields processor changes field names.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
fields [required]
[object]
A list of rename rules specifying which fields to rename in the event, what to rename them to, and whether to preserve the original fields.
destination [required]
string
The field name to assign the renamed value to.
preserve_source [required]
boolean
Indicates whether the original source field should be kept (true) or removed (false) after renaming.
source [required]
string
The original field name in the log event that should be renamed.
id [required]
string
A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
type [required]
enum
The processor type. The value should always be rename_fields.
Allowed enum values: rename_fields
default: rename_fields
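A rename_fields sketch renaming one field and discarding the original (field names are illustrative):

```json
{
  "id": "rename-1",
  "type": "rename_fields",
  "enabled": true,
  "include": "*",
  "fields": [
    { "source": "http.status", "destination": "status_code", "preserve_source": false }
  ]
}
```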
Option 18
object
The sample processor allows probabilistic sampling of logs at a fixed rate.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
group_by
[string]
Optional list of fields to group events by. Each group is sampled independently.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
percentage [required]
double
The percentage of logs to sample.
type [required]
enum
The processor type. The value should always be sample.
Allowed enum values: sample
default: sample
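A sample processor sketch keeping roughly 10% of matching logs, sampled independently per service (query and group field are illustrative):

```json
{
  "id": "sample-1",
  "type": "sample",
  "enabled": true,
  "include": "status:info",
  "percentage": 10.0,
  "group_by": ["service"]
}
```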
Option 19
object
The sensitive_data_scanner processor detects and optionally redacts sensitive data in log events.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
rules [required]
[object]
A list of rules for identifying and acting on sensitive data patterns.
keyword_options
object
Configuration for keywords used to reinforce sensitive data pattern detection.
keywords [required]
[string]
A list of keywords to match near the sensitive pattern.
proximity [required]
int64
Maximum number of tokens between a keyword and a sensitive value match.
name [required]
string
A name identifying the rule.
on_match [required]
<oneOf>
Defines what action to take when sensitive data is matched.
Option 1
object
Configuration for completely redacting matched sensitive data.
action [required]
enum
Action type that completely replaces the matched sensitive data with a fixed replacement string to remove all visibility.
Allowed enum values: redact
options [required]
object
Configuration for fully redacting sensitive data.
replace [required]
string
The replacement string used in place of the matched sensitive data.
Option 2
object
Configuration for hashing matched sensitive values.
action [required]
enum
Action type that replaces the matched sensitive data with a hashed representation, preserving structure while securing content.
Allowed enum values: hash
options
object
Options for the hash action.
Option 3
object
Configuration for partially redacting matched sensitive data.
action [required]
enum
Action type that redacts part of the sensitive data while preserving a configurable number of characters, typically used for masking purposes (e.g., show last 4 digits of a credit card).
Allowed enum values: partial_redact
options [required]
object
Controls how partial redaction is applied, including character count and direction.
characters [required]
int64
The number of characters to redact.
direction [required]
enum
Indicates whether to redact characters from the first or last part of the matched value.
Allowed enum values: first,last
pattern [required]
<oneOf>
Pattern detection configuration for identifying sensitive data using either a custom regex or a library reference.
Option 1
object
Defines a custom regex-based pattern for identifying sensitive data in logs.
options [required]
object
Options for defining a custom regex pattern.
description
string
Human-readable description providing context about a sensitive data scanner rule.
rule [required]
string
A regular expression used to detect sensitive values. Must be a valid regex.
type [required]
enum
Indicates a custom regular expression is used for matching.
Allowed enum values: custom
Option 2
object
Specifies a pattern from Datadog’s sensitive data detection library to match known sensitive data types.
options [required]
object
Options for selecting a predefined library pattern and enabling keyword support.
description
string
Human-readable description providing context about a sensitive data scanner rule.
id [required]
string
Identifier for a predefined pattern from the sensitive data scanner pattern library.
use_recommended_keywords
boolean
Whether to augment the pattern with recommended keywords (optional).
type [required]
enum
Indicates that a predefined library pattern is used.
Allowed enum values: library
scope [required]
<oneOf>
Determines which parts of the log the pattern-matching rule should be applied to.
Option 1
object
Includes only specific fields for sensitive data scanning.
options [required]
object
Fields to which the scope rule applies.
fields [required]
[string]
The list of fields to include in sensitive data scanning.
target [required]
enum
Applies the rule only to included fields.
Allowed enum values: include
Option 2
object
Excludes specific fields from sensitive data scanning.
options [required]
object
Fields to which the scope rule applies.
fields [required]
[string]
The list of fields to exclude from sensitive data scanning.
target [required]
enum
Excludes specific fields from processing.
Allowed enum values: exclude
Option 3
object
Applies scanning across all available fields.
target [required]
enum
Applies the rule to all fields.
Allowed enum values: all
tags [required]
[string]
Tags assigned to this rule for filtering and classification.
type [required]
enum
The processor type. The value should always be sensitive_data_scanner.
Allowed enum values: sensitive_data_scanner
default: sensitive_data_scanner
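Tying the rule, pattern, scope, and action objects together, a hedged sketch of a sensitive_data_scanner processor using a custom regex and partial redaction; the rule name, regex, and tag are illustrative, and the regex shown is a naive card-number pattern, not a production-grade one:

```json
{
  "id": "sds-1",
  "type": "sensitive_data_scanner",
  "enabled": true,
  "include": "*",
  "rules": [
    {
      "name": "mask-card-numbers",
      "tags": ["pci"],
      "pattern": {
        "type": "custom",
        "options": { "rule": "\\b\\d{13,16}\\b", "description": "Card-like numbers" }
      },
      "scope": { "target": "include", "options": { "fields": ["message"] } },
      "on_match": {
        "action": "partial_redact",
        "options": { "characters": 4, "direction": "last" }
      }
    }
  ]
}
```

A library-based rule would instead set "pattern": { "type": "library", "options": { "id": "…" } } with a pattern identifier from the library.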
Option 20
object
The split_array processor splits array fields into separate events based on configured rules.
Supported pipeline types: logs
arrays [required]
[object]
A list of array split configurations.
field [required]
string
The path to the array field to split.
include [required]
string
A Datadog search query used to determine which logs this array split operation targets.
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets. For split_array, this should typically be *.
type [required]
enum
The processor type. The value should always be split_array.
Allowed enum values: split_array
default: split_array
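A split_array sketch that fans each element of one array field out into its own event; per the note above, the processor-level include is set to *, and the array field name is illustrative:

```json
{
  "id": "split-1",
  "type": "split_array",
  "enabled": true,
  "include": "*",
  "arrays": [
    { "field": "records", "include": "*" }
  ]
}
```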
Option 21
object
The throttle processor limits the number of events that pass through over a given time window.
Supported pipeline types: logs
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
group_by
[string]
Optional list of fields used to group events. The threshold is applied to each group independently.
id [required]
string
The unique identifier for this processor.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
threshold [required]
int64
The number of events allowed in the configured time window. Events sent after the threshold is reached are dropped.
type [required]
enum
The processor type. The value should always be throttle.
Allowed enum values: throttle
default: throttle
window [required]
double
The time window in seconds over which the threshold applies.
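A throttle sketch allowing up to 1,000 events per service every 60 seconds (limits and group field are illustrative):

```json
{
  "id": "throttle-1",
  "type": "throttle",
  "enabled": true,
  "include": "*",
  "threshold": 1000,
  "window": 60.0,
  "group_by": ["service"]
}
```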
Option 22
object
The metric_tags processor filters metrics based on their tags using Datadog tag key patterns.
Supported pipeline types: metrics
display_name
string
The display name for a component.
enabled [required]
boolean
Indicates whether the processor is enabled.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
include [required]
string
A Datadog search query that determines which metrics the processor targets.
rules [required]
[object]
A list of rules for filtering metric tags.
action [required]
enum
The action to take on tags with matching keys.
Allowed enum values: include,exclude
include [required]
string
A Datadog search query used to determine which metrics this rule targets.
keys [required]
[string]
A list of tag keys to include or exclude.
mode [required]
enum
The processing mode for tag filtering.
Allowed enum values: filter
type [required]
enum
The processor type. The value should always be metric_tags.
Allowed enum values: metric_tags
default: metric_tags
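A metric_tags sketch that strips two high-cardinality tag keys from all matching metrics (tag keys and queries are illustrative):

```json
{
  "id": "tags-1",
  "type": "metric_tags",
  "enabled": true,
  "include": "*",
  "rules": [
    {
      "mode": "filter",
      "action": "exclude",
      "include": "*",
      "keys": ["pod_name", "container_id"]
    }
  ]
}
```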
sources [required]
[ <oneOf>]
A list of configured data sources for the pipeline.
Option 1
object
The datadog_agent source collects logs/metrics from the Datadog Agent.
Supported pipeline types: logs, metrics
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be datadog_agent.
Allowed enum values: datadog_agent
default: datadog_agent
Option 2
object
The amazon_data_firehose source ingests logs from AWS Data Firehose.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the Firehose delivery stream address.
auth
object
AWS authentication credentials used for accessing AWS services such as S3.
If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).
assume_role
string
The Amazon Resource Name (ARN) of the role to assume.
external_id
string
A unique identifier for cross-account role assumption.
session_name
string
A session identifier used for logging and tracing the assumed role session.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be amazon_data_firehose.
Allowed enum values: amazon_data_firehose
default: amazon_data_firehose
Option 3
object
The amazon_s3 source ingests logs from an Amazon S3 bucket.
It supports AWS authentication and TLS encryption.
Supported pipeline types: logs
auth
object
AWS authentication credentials used for accessing AWS services such as S3.
If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).
assume_role
string
The Amazon Resource Name (ARN) of the role to assume.
external_id
string
A unique identifier for cross-account role assumption.
session_name
string
A session identifier used for logging and tracing the assumed role session.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
region [required]
string
AWS region where the S3 bucket resides.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. Always amazon_s3.
Allowed enum values: amazon_s3
default: amazon_s3
url_key
string
Name of the environment variable or secret that holds the S3 bucket URL.
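A minimal amazon_s3 source sketch using an assumed role; the role ARN, region, and env-var name are illustrative:

```json
{
  "id": "s3-in",
  "type": "amazon_s3",
  "region": "us-east-1",
  "url_key": "S3_BUCKET_URL",
  "auth": { "assume_role": "arn:aws:iam::123456789012:role/pipeline-reader" }
}
```

If auth is omitted, the system's default credentials are used, as noted above.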
Option 4
object
The fluent_bit source ingests logs from Fluent Bit.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the Fluent Bit receiver.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be fluent_bit.
Allowed enum values: fluent_bit
default: fluent_bit
Option 5
object
The fluentd source ingests logs from a Fluentd-compatible service.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the Fluent receiver.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be fluentd.
Allowed enum values: fluentd
default: fluentd
Option 6
object
The google_pubsub source ingests logs from a Google Cloud Pub/Sub subscription.
Supported pipeline types: logs
auth
object
Google Cloud credentials used to authenticate with Google Cloud services.
credentials_file [required]
string
Path to the Google Cloud service account key file.
decoding [required]
enum
The decoding format used to interpret incoming logs.
Allowed enum values: bytes,gelf,json,syslog
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
project [required]
string
The Google Cloud project ID that owns the Pub/Sub subscription.
subscription [required]
string
The Pub/Sub subscription name from which messages are consumed.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be google_pubsub.
Allowed enum values: google_pubsub
default: google_pubsub
Option 7
object
The http_client source scrapes logs from HTTP endpoints at regular intervals.
Supported pipeline types: logs
auth_strategy
enum
Optional authentication strategy for HTTP requests.
Allowed enum values: none,basic,bearer,custom
custom_key
string
Name of the environment variable or secret that holds a custom header value (used with custom auth strategies).
decoding [required]
enum
The decoding format used to interpret incoming logs.
Allowed enum values: bytes,gelf,json,syslog
endpoint_url_key
string
Name of the environment variable or secret that holds the HTTP endpoint URL to scrape.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
password_key
string
Name of the environment variable or secret that holds the password (used when auth_strategy is basic).
scrape_interval_secs
int64
The interval (in seconds) between HTTP scrape requests.
scrape_timeout_secs
int64
The timeout (in seconds) for each scrape request.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
token_key
string
Name of the environment variable or secret that holds the bearer token (used when auth_strategy is bearer).
type [required]
enum
The source type. The value should always be http_client.
Allowed enum values: http_client
default: http_client
username_key
string
Name of the environment variable or secret that holds the username (used when auth_strategy is basic).
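To illustrate how these fields fit together, here is a hedged sketch of an http_client source entry written as a plain object. The component ID, env-var names, and interval values are hypothetical and not taken from the reference above.

```typescript
// Sketch of an http_client source entry. Field names follow the reference
// above; all values (IDs, env-var names, intervals) are assumptions.
const httpClientSource = {
  id: "http-client-source",
  type: "http_client",
  decoding: "json", // one of: bytes, gelf, json, syslog
  auth_strategy: "bearer", // one of: none, basic, bearer, custom
  token_key: "HTTP_BEARER_TOKEN", // env var holding the bearer token
  endpoint_url_key: "HTTP_SCRAPE_URL", // env var holding the endpoint URL
  scrape_interval_secs: 60,
  scrape_timeout_secs: 10,
};
console.log(httpClientSource.id);
```

If auth_strategy were basic instead, username_key and password_key would be set in place of token_key.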
Option 8
object
The http_server source collects logs over HTTP POST from external services.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the HTTP server.
decoding [required]
enum
The decoding format used to interpret incoming logs.
Allowed enum values: bytes,gelf,json,syslog
id [required]
string
Unique ID for the HTTP server source.
password_key
string
Name of the environment variable or secret that holds the password (used when auth_strategy is plain).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be http_server.
Allowed enum values: http_server
default: http_server
username_key
string
Name of the environment variable or secret that holds the username (used when auth_strategy is plain).
Option 9
object
The kafka source ingests data from Apache Kafka topics.
Supported pipeline types: logs
bootstrap_servers_key
string
Name of the environment variable or secret that holds the Kafka bootstrap servers list.
group_id [required]
string
Consumer group ID used by the Kafka client.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
librdkafka_options
[object]
Optional list of advanced Kafka client configuration options, defined as key-value pairs.
name [required]
string
The name of the librdkafka configuration option to set.
value [required]
string
The value assigned to the specified librdkafka configuration option.
sasl
object
Specifies the SASL mechanism for authenticating with a Kafka cluster.
mechanism
enum
SASL mechanism used for Kafka authentication.
Allowed enum values: PLAIN,SCRAM-SHA-256,SCRAM-SHA-512
password_key
string
Name of the environment variable or secret that holds the SASL password.
username_key
string
Name of the environment variable or secret that holds the SASL username.
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
topics [required]
[string]
A list of Kafka topic names to subscribe to. The source ingests messages from each topic specified.
type [required]
enum
The source type. The value should always be kafka.
Allowed enum values: kafka
default: kafka
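As a hedged sketch, a kafka source entry combining the required fields with SASL authentication and a librdkafka option might look like the following. The component ID, group ID, topic names, env-var names, and the librdkafka option value are all hypothetical.

```typescript
// Sketch of a kafka source entry. Field names follow the reference above;
// IDs, topics, env-var names, and option values are assumptions.
const kafkaSource = {
  id: "kafka-source",
  type: "kafka",
  group_id: "pipeline-consumer-group", // consumer group ID for the Kafka client
  topics: ["app-logs", "audit-logs"], // topics the source subscribes to
  bootstrap_servers_key: "KAFKA_BOOTSTRAP_SERVERS",
  sasl: {
    mechanism: "SCRAM-SHA-512", // one of: PLAIN, SCRAM-SHA-256, SCRAM-SHA-512
    username_key: "KAFKA_SASL_USER",
    password_key: "KAFKA_SASL_PASSWORD",
  },
  // Advanced client options are expressed as name/value pairs.
  librdkafka_options: [{ name: "fetch.message.max.bytes", value: "1048576" }],
};
console.log(kafkaSource.topics.length);
```

Note that librdkafka option values are strings even when numeric, since each entry is a name/value string pair.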
Option 10
object
The logstash source ingests logs from a Logstash forwarder.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the Logstash receiver.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be logstash.
Allowed enum values: logstash
default: logstash
Option 11
object
The rsyslog source listens for logs over TCP or UDP from an rsyslog server using the syslog protocol.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the syslog receiver.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
mode [required]
enum
Protocol used by the syslog source to receive messages.
Allowed enum values: tcp,udp
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be rsyslog.
Allowed enum values: rsyslog
default: rsyslog
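Since the tls object recurs across most sources, here is a hedged sketch of an rsyslog source with mutual TLS configured. The file paths and env-var names are hypothetical placeholders.

```typescript
// Sketch of an rsyslog source with mutual TLS. Field names follow the
// reference above; paths and env-var names are assumptions.
const rsyslogSource = {
  id: "rsyslog-source",
  type: "rsyslog",
  mode: "tcp", // one of: tcp, udp
  address_key: "RSYSLOG_LISTEN_ADDRESS",
  tls: {
    crt_file: "/etc/pipeline/tls/client.crt", // required when tls is set
    key_file: "/etc/pipeline/tls/client.key", // enables mutual TLS
    key_pass_key: "TLS_KEY_PASSPHRASE", // env var holding the key passphrase
    ca_file: "/etc/pipeline/tls/ca.crt", // CA used to validate the server cert
  },
};
console.log(rsyslogSource.mode);
```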
Option 12
object
The socket source ingests logs over TCP or UDP.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the socket.
framing [required]
<oneOf>
Framing method configuration for the socket source.
Option 1
object
Byte frames which are delimited by a newline character.
method [required]
enum
Byte frames which are delimited by a newline character.
Allowed enum values: newline_delimited
Option 2
object
Byte frames are passed through as-is according to the underlying I/O boundaries (for example, split between messages or stream segments).
method [required]
enum
Byte frames are passed through as-is according to the underlying I/O boundaries (for example, split between messages or stream segments).
Allowed enum values: bytes
Option 3
object
Byte frames which are delimited by a chosen character.
delimiter [required]
string
A single ASCII character used to delimit events.
method [required]
enum
Byte frames which are delimited by a chosen character.
Allowed enum values: character_delimited
Option 4
object
Byte frames according to the octet counting format defined in RFC 6587.
method [required]
enum
Byte frames according to the octet counting format defined in RFC 6587.
Allowed enum values: octet_counting
Option 5
object
Byte frames which are chunked GELF messages.
method [required]
enum
Byte frames which are chunked GELF messages.
Allowed enum values: chunked_gelf
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
mode [required]
enum
Protocol used to receive logs.
Allowed enum values: tcp,udp
tls
object
TLS configuration. Relevant only when mode is tcp.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be socket.
Allowed enum values: socket
default: socket
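Because framing is a oneOf, exactly one of the framing variants above is chosen per socket source. As a hedged sketch, a TCP socket source using character-delimited framing might look like this; the ID, env-var name, and delimiter are hypothetical.

```typescript
// Sketch of a socket source with character-delimited framing. Field names
// follow the reference above; values are assumptions.
const socketSource = {
  id: "socket-source",
  type: "socket",
  mode: "tcp", // one of: tcp, udp
  address_key: "SOCKET_LISTEN_ADDRESS",
  framing: {
    method: "character_delimited",
    delimiter: "|", // a single ASCII character separating events
  },
};
console.log(socketSource.framing.method);
```

The newline_delimited, bytes, octet_counting, and chunked_gelf variants carry only the method field, with no extra parameters.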
Option 13
object
The splunk_hec source implements the Splunk HTTP Event Collector (HEC) API.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the HEC API.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be splunk_hec.
Allowed enum values: splunk_hec
default: splunk_hec
Option 14
object
The splunk_tcp source receives logs from a Splunk Universal Forwarder over TCP.
TLS is supported for secure transmission.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the Splunk TCP receiver.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be splunk_tcp.
Allowed enum values: splunk_tcp
default: splunk_tcp
Option 15
object
The sumo_logic source receives logs from Sumo Logic collectors.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the Sumo Logic receiver.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
type [required]
enum
The source type. The value should always be sumo_logic.
Allowed enum values: sumo_logic
default: sumo_logic
Option 16
object
The syslog_ng source listens for logs over TCP or UDP from a syslog-ng server using the syslog protocol.
Supported pipeline types: logs
address_key
string
Name of the environment variable or secret that holds the listen address for the syslog-ng receiver.
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
mode [required]
enum
Protocol used by the syslog source to receive messages.
Allowed enum values: tcp,udp
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be syslog_ng.
Allowed enum values: syslog_ng
default: syslog_ng
Option 17
object
The opentelemetry source receives telemetry data using the OpenTelemetry Protocol (OTLP) over gRPC and HTTP.
Supported pipeline types: logs
grpc_address_key
string
Environment variable name containing the gRPC server address for receiving OTLP data. Must be a valid environment variable name (alphanumeric characters and underscores only).
http_address_key
string
Environment variable name containing the HTTP server address for receiving OTLP data. Must be a valid environment variable name (alphanumeric characters and underscores only).
id [required]
string
The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).
tls
object
Configuration for enabling TLS encryption between the pipeline component and external services.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
key_pass_key
string
Name of the environment variable or secret that holds the passphrase for the private key file.
type [required]
enum
The source type. The value should always be opentelemetry.
Allowed enum values: opentelemetry
default: opentelemetry
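The opentelemetry source differs from the others in that it takes two address keys, one per OTLP transport. Here is a hedged sketch; the env-var names are hypothetical and only need to satisfy the alphanumeric-and-underscore constraint described above.

```typescript
// Sketch of an opentelemetry source with both OTLP transports configured.
// Field names follow the reference above; env-var names are assumptions.
const otelSource = {
  id: "otel-source",
  type: "opentelemetry",
  grpc_address_key: "OTLP_GRPC_ADDRESS", // env var with the gRPC listen address
  http_address_key: "OTLP_HTTP_ADDRESS", // env var with the HTTP listen address
};

// The stated constraint on key names can be checked with a simple pattern.
const validKeyName = /^[A-Za-z0-9_]+$/;
console.log(validKeyName.test(otelSource.grpc_address_key));
```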
use_legacy_search_syntax
boolean
Set to true to continue using the legacy search syntax while migrating filter queries. After migrating all queries to the new syntax, set to false.
The legacy syntax is deprecated and will eventually be removed.
Requires Observability Pipelines Worker 2.11 or later.
See Upgrade Your Filter Queries to the New Search Syntax for more information.
name [required]
string
Name of the pipeline.
type [required]
string
The resource type identifier. For pipeline resources, this should always be set to pipelines.
# Set DD_SITE to your Datadog site: datadoghq.com, us3.datadoghq.com, us5.datadoghq.com, datadoghq.eu, ap1.datadoghq.com, ap2.datadoghq.com, or ddog-gov.com
DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" cargo run
/**
 * Validate an observability pipeline returns "OK" response
 */
import { client, v2 } from "@datadog/datadog-api-client";

const configuration = client.createConfiguration();
const apiInstance = new v2.ObservabilityPipelinesApi(configuration);

const params: v2.ObservabilityPipelinesApiValidatePipelineRequest = {
  body: {
    data: {
      attributes: {
        config: {
          destinations: [
            {
              id: "datadog-logs-destination",
              inputs: ["my-processor-group"],
              type: "datadog_logs",
            },
          ],
          processorGroups: [
            {
              enabled: true,
              id: "my-processor-group",
              include: "service:my-service",
              inputs: ["datadog-agent-source"],
              processors: [
                {
                  enabled: true,
                  id: "filter-processor",
                  include: "status:error",
                  type: "filter",
                },
              ],
            },
          ],
          sources: [
            {
              id: "datadog-agent-source",
              type: "datadog_agent",
            },
          ],
        },
        name: "Main Observability Pipeline",
      },
      type: "pipelines",
    },
  },
};

apiInstance
  .validatePipeline(params)
  .then((data: v2.ValidationResponse) => {
    console.log(
      "API called successfully. Returned data: " + JSON.stringify(data)
    );
  })
  .catch((error: any) => console.error(error));