Observability Pipelines

Observability Pipelines allows you to collect and process logs within your own infrastructure, and then route them to downstream integrations.

Note: This endpoint is in Preview. Fill out this form to request access.

GET /api/v2/obs-pipelines/pipelines

Available on the following sites:

  • https://api.ap1.datadoghq.com/api/v2/obs-pipelines/pipelines
  • https://api.ap2.datadoghq.com/api/v2/obs-pipelines/pipelines
  • https://api.datadoghq.eu/api/v2/obs-pipelines/pipelines
  • https://api.ddog-gov.com/api/v2/obs-pipelines/pipelines
  • https://api.datadoghq.com/api/v2/obs-pipelines/pipelines
  • https://api.us3.datadoghq.com/api/v2/obs-pipelines/pipelines
  • https://api.us5.datadoghq.com/api/v2/obs-pipelines/pipelines

Overview

Retrieve a list of pipelines. This endpoint requires the observability_pipelines_read permission.

Arguments

Query Strings

Name

Type

Description

page[size]

integer

Size for a given page. The maximum allowed value is 100.

page[number]

integer

Specific page number to return.
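
For illustration, a minimal request sketch using the Python requests library. The host, environment variable names, and the fields printed from each item are assumptions; substitute your own Datadog site and credentials.

import os
import requests

# List pipelines, 50 per page, starting at the first page.
resp = requests.get(
    "https://api.datadoghq.com/api/v2/obs-pipelines/pipelines",
    headers={
        "DD-API-KEY": os.environ["DD_API_KEY"],
        "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
    },
    params={"page[size]": 50, "page[number]": 0},
)
resp.raise_for_status()

# Each item carries the pipeline attributes described in the response schema below.
for pipeline in resp.json().get("data", []):
    print(pipeline.get("id"), pipeline.get("attributes", {}).get("name"))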

Response

OK

Represents the response payload containing a list of pipelines and associated metadata.

Field

Type

Description

data [required]

[object]

The list of pipelines returned by the request.

attributes [required]

object

Defines the pipeline’s name and its components (sources, processors, and destinations).

config [required]

object

Specifies the pipeline's configuration, including its sources, processors, and destinations.

destinations [required]

[ <oneOf>]

A list of destination components where processed logs are sent.

Option 1

object

The http_client destination sends data to an HTTP endpoint.

Supported pipeline types: logs, metrics

auth_strategy

enum

HTTP authentication strategy. Allowed enum values: none,basic,bearer

compression

object

Compression configuration for HTTP requests.

algorithm [required]

enum

Compression algorithm. Allowed enum values: gzip

encoding [required]

enum

Encoding format for log events. Allowed enum values: json

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. The value should always be http_client. Allowed enum values: http_client

default: http_client
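
An illustrative http_client destination fragment, built only from the fields listed above; the id and inputs values are placeholders.

{
  "id": "http-out",
  "type": "http_client",
  "inputs": ["my-processor-group"],
  "auth_strategy": "bearer",
  "encoding": "json",
  "compression": { "algorithm": "gzip" }
}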

Option 2

object

The amazon_opensearch destination writes logs to Amazon OpenSearch.

Supported pipeline types: logs

auth [required]

object

Authentication settings for the Amazon OpenSearch destination. The strategy field determines whether basic or AWS-based authentication is used.

assume_role

string

The ARN of the role to assume (used with aws strategy).

aws_region

string

AWS region.

external_id

string

External ID for the assumed role (used with aws strategy).

session_name

string

Session name for the assumed role (used with aws strategy).

strategy [required]

enum

The authentication strategy to use. Allowed enum values: basic,aws

bulk_index

string

The index to write logs to.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be amazon_opensearch. Allowed enum values: amazon_opensearch

default: amazon_opensearch

Option 3

object

The amazon_s3 destination sends your logs in Datadog-rehydratable format to an Amazon S3 bucket for archiving.

Supported pipeline types: logs

auth

object

AWS authentication credentials used for accessing AWS services such as S3. If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).

assume_role

string

The Amazon Resource Name (ARN) of the role to assume.

external_id

string

A unique identifier for cross-account role assumption.

session_name

string

A session identifier used for logging and tracing the assumed role session.

bucket [required]

string

S3 bucket name.

id [required]

string

Unique identifier for the destination component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

key_prefix

string

Optional prefix for object keys.

region [required]

string

AWS region of the S3 bucket.

storage_class [required]

enum

S3 storage class. Allowed enum values: STANDARD,REDUCED_REDUNDANCY,INTELLIGENT_TIERING,STANDARD_IA,EXPRESS_ONEZONE,ONEZONE_IA,GLACIER,GLACIER_IR,DEEP_ARCHIVE

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. Always amazon_s3. Allowed enum values: amazon_s3

default: amazon_s3
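
An illustrative amazon_s3 destination fragment using the fields above; the bucket, region, and input IDs are placeholders.

{
  "id": "s3-archive",
  "type": "amazon_s3",
  "inputs": ["my-processor-group"],
  "bucket": "example-log-archive",
  "region": "us-east-1",
  "storage_class": "STANDARD",
  "key_prefix": "datadog/"
}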

Option 4

object

The amazon_security_lake destination sends your logs to Amazon Security Lake.

Supported pipeline types: logs

auth

object

AWS authentication credentials used for accessing AWS services such as S3. If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).

assume_role

string

The Amazon Resource Name (ARN) of the role to assume.

external_id

string

A unique identifier for cross-account role assumption.

session_name

string

A session identifier used for logging and tracing the assumed role session.

bucket [required]

string

Name of the Amazon S3 bucket in Security Lake (3-63 characters).

custom_source_name [required]

string

Custom source name for the logs in Security Lake.

id [required]

string

Unique identifier for the destination component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

region [required]

string

AWS region of the S3 bucket.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. Always amazon_security_lake. Allowed enum values: amazon_security_lake

default: amazon_security_lake

Option 5

object

The azure_storage destination forwards logs to an Azure Blob Storage container.

Supported pipeline types: logs

blob_prefix

string

Optional prefix for blobs written to the container.

container_name [required]

string

The name of the Azure Blob Storage container to store logs in.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be azure_storage. Allowed enum values: azure_storage

default: azure_storage

Option 6

object

The cloud_prem destination sends logs to Datadog CloudPrem.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be cloud_prem. Allowed enum values: cloud_prem

default: cloud_prem

Option 7

object

The crowdstrike_next_gen_siem destination forwards logs to CrowdStrike Next Gen SIEM.

Supported pipeline types: logs

compression

object

Compression configuration for log events.

algorithm [required]

enum

Compression algorithm for log events. Allowed enum values: gzip,zlib

level

int64

Compression level.

encoding [required]

enum

Encoding format for log events. Allowed enum values: json,raw_message

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. The value should always be crowdstrike_next_gen_siem. Allowed enum values: crowdstrike_next_gen_siem

default: crowdstrike_next_gen_siem

Option 8

object

The datadog_logs destination forwards logs to Datadog Log Management.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

routes

[object]

A list of routing rules that forward matching logs to Datadog using dedicated API keys.

api_key_key

string

Name of the environment variable or secret that stores the Datadog API key used by this route.

include

string

A Datadog search query that determines which logs are forwarded using this route.

route_id

string

Unique identifier for this route within the destination.

site

string

Datadog site where matching logs are sent (for example, us1).

type [required]

enum

The destination type. The value should always be datadog_logs. Allowed enum values: datadog_logs

default: datadog_logs
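
An illustrative datadog_logs destination fragment with one route; the route values are placeholders.

{
  "id": "dd-logs-out",
  "type": "datadog_logs",
  "inputs": ["my-processor-group"],
  "routes": [
    {
      "route_id": "route-prod",
      "include": "env:prod",
      "api_key_key": "PROD_DD_API_KEY",
      "site": "us1"
    }
  ]
}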

Option 9

object

The elasticsearch destination writes logs to an Elasticsearch cluster.

Supported pipeline types: logs

api_version

enum

The Elasticsearch API version to use. Set to auto to auto-detect. Allowed enum values: auto,v6,v7,v8

bulk_index

string

The index to write logs to in Elasticsearch.

data_stream

object

Configuration options for writing to Elasticsearch Data Streams instead of a fixed index.

dataset

string

The data stream dataset for your logs. This groups logs by their source or application.

dtype

string

The data stream type for your logs. This determines how logs are categorized within the data stream.

namespace

string

The data stream namespace for your logs. This separates logs into different environments or domains.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be elasticsearch. Allowed enum values: elasticsearch

default: elasticsearch

Option 10

object

The google_chronicle destination sends logs to Google Chronicle.

Supported pipeline types: logs

auth

object

GCP credentials used to authenticate with Google Cloud Storage.

credentials_file [required]

string

Path to the GCP service account key file.

customer_id [required]

string

The Google Chronicle customer ID.

encoding

enum

The encoding format for the logs sent to Chronicle. Allowed enum values: json,raw_message

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

log_type

string

The log type metadata associated with the Chronicle destination.

type [required]

enum

The destination type. The value should always be google_chronicle. Allowed enum values: google_chronicle

default: google_chronicle

Option 11

object

The google_cloud_storage destination stores logs in a Google Cloud Storage (GCS) bucket. It requires a bucket name, GCP authentication, and metadata fields.

Supported pipeline types: logs

acl

enum

Access control list setting for objects written to the bucket. Allowed enum values: private,project-private,public-read,authenticated-read,bucket-owner-read,bucket-owner-full-control

auth

object

GCP credentials used to authenticate with Google Cloud Storage.

credentials_file [required]

string

Path to the GCP service account key file.

bucket [required]

string

Name of the GCS bucket.

id [required]

string

Unique identifier for the destination component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

key_prefix

string

Optional prefix for object keys within the GCS bucket.

metadata

[object]

Custom metadata to attach to each object uploaded to the GCS bucket.

name [required]

string

The metadata key.

value [required]

string

The metadata value.

storage_class [required]

enum

Storage class used for objects stored in GCS. Allowed enum values: STANDARD,NEARLINE,COLDLINE,ARCHIVE

type [required]

enum

The destination type. Always google_cloud_storage. Allowed enum values: google_cloud_storage

default: google_cloud_storage

Option 12

object

The google_pubsub destination publishes logs to a Google Cloud Pub/Sub topic.

Supported pipeline types: logs

auth

object

GCP credentials used to authenticate with Google Cloud Storage.

credentials_file [required]

string

Path to the GCP service account key file.

encoding [required]

enum

Encoding format for log events. Allowed enum values: json,raw_message

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

project [required]

string

The GCP project ID that owns the Pub/Sub topic.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

topic [required]

string

The Pub/Sub topic name to publish logs to.

type [required]

enum

The destination type. The value should always be google_pubsub. Allowed enum values: google_pubsub

default: google_pubsub

Option 13

object

The kafka destination sends logs to Apache Kafka topics.

Supported pipeline types: logs

compression

enum

Compression codec for Kafka messages. Allowed enum values: none,gzip,snappy,lz4,zstd

encoding [required]

enum

Encoding format for log events. Allowed enum values: json,raw_message

headers_key

string

The field name to use for Kafka message headers.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

key_field

string

The field name to use as the Kafka message key.

librdkafka_options

[object]

Optional list of advanced Kafka producer configuration options, defined as key-value pairs.

name [required]

string

The name of the librdkafka configuration option to set.

value [required]

string

The value assigned to the specified librdkafka configuration option.

message_timeout_ms

int64

Maximum time in milliseconds to wait for message delivery confirmation.

rate_limit_duration_secs

int64

Duration in seconds for the rate limit window.

rate_limit_num

int64

Maximum number of messages allowed per rate limit duration.

sasl

object

Specifies the SASL mechanism for authenticating with a Kafka cluster.

mechanism

enum

SASL mechanism used for Kafka authentication. Allowed enum values: PLAIN,SCRAM-SHA-256,SCRAM-SHA-512

socket_timeout_ms

int64

Socket timeout in milliseconds for network requests.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

topic [required]

string

The Kafka topic name to publish logs to.

type [required]

enum

The destination type. The value should always be kafka. Allowed enum values: kafka

default: kafka
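
An illustrative kafka destination fragment; the topic, input IDs, and the librdkafka option shown (acks) are placeholders, not a recommended configuration.

{
  "id": "kafka-out",
  "type": "kafka",
  "inputs": ["my-processor-group"],
  "topic": "observability-logs",
  "encoding": "json",
  "compression": "zstd",
  "sasl": { "mechanism": "SCRAM-SHA-512" },
  "librdkafka_options": [
    { "name": "acks", "value": "all" }
  ]
}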

Option 14

object

The microsoft_sentinel destination forwards logs to Microsoft Sentinel.

Supported pipeline types: logs

client_id [required]

string

Azure AD client ID used for authentication.

dcr_immutable_id [required]

string

The immutable ID of the Data Collection Rule (DCR).

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

table [required]

string

The name of the Log Analytics table where logs are sent.

tenant_id [required]

string

Azure AD tenant ID.

type [required]

enum

The destination type. The value should always be microsoft_sentinel. Allowed enum values: microsoft_sentinel

default: microsoft_sentinel

Option 15

object

The new_relic destination sends logs to the New Relic platform.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

region [required]

enum

The New Relic region. Allowed enum values: us,eu

type [required]

enum

The destination type. The value should always be new_relic. Allowed enum values: new_relic

default: new_relic

Option 16

object

The opensearch destination writes logs to an OpenSearch cluster.

Supported pipeline types: logs

bulk_index

string

The index to write logs to.

data_stream

object

Configuration options for writing to OpenSearch Data Streams instead of a fixed index.

dataset

string

The data stream dataset for your logs. This groups logs by their source or application.

dtype

string

The data stream type for your logs. This determines how logs are categorized within the data stream.

namespace

string

The data stream namespace for your logs. This separates logs into different environments or domains.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be opensearch. Allowed enum values: opensearch

default: opensearch

Option 17

object

The rsyslog destination forwards logs to an external rsyslog server over TCP or UDP using the syslog protocol.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

keepalive

int64

Optional socket keepalive duration in milliseconds.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. The value should always be rsyslog. Allowed enum values: rsyslog

default: rsyslog

Option 18

object

The sentinel_one destination sends logs to SentinelOne.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

region [required]

enum

The SentinelOne region to send logs to. Allowed enum values: us,eu,ca,data_set_us

type [required]

enum

The destination type. The value should always be sentinel_one. Allowed enum values: sentinel_one

default: sentinel_one

Option 19

object

The socket destination sends logs over TCP or UDP to a remote server.

Supported pipeline types: logs

encoding [required]

enum

Encoding format for log events. Allowed enum values: json,raw_message

framing [required]

 <oneOf>

Framing method configuration.

Option 1

object

Each log event is delimited by a newline character.

method [required]

enum

The framing method. The value should always be newline_delimited. Allowed enum values: newline_delimited

Option 2

object

Event data is not delimited at all.

method [required]

enum

The framing method. The value should always be bytes. Allowed enum values: bytes

Option 3

object

Each log event is separated using the specified delimiter character.

delimiter [required]

string

A single ASCII character used as a delimiter.

method [required]

enum

The framing method. The value should always be character_delimited. Allowed enum values: character_delimited

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

mode [required]

enum

Protocol used to send logs. Allowed enum values: tcp,udp

tls

object

TLS configuration. Relevant only when mode is tcp.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. The value should always be socket. Allowed enum values: socket

default: socket
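
An illustrative socket destination fragment showing one of the framing options; the IDs are placeholders.

{
  "id": "socket-out",
  "type": "socket",
  "inputs": ["my-processor-group"],
  "mode": "tcp",
  "encoding": "json",
  "framing": { "method": "newline_delimited" }
}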

Option 20

object

The splunk_hec destination forwards logs to Splunk using the HTTP Event Collector (HEC).

Supported pipeline types: logs

auto_extract_timestamp

boolean

If true, Splunk tries to extract timestamps from incoming log events. If false, Splunk assigns the time the event was received.

encoding

enum

Encoding format for log events. Allowed enum values: json,raw_message

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

index

string

Optional name of the Splunk index where logs are written.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

sourcetype

string

The Splunk sourcetype to assign to log events.

type [required]

enum

The destination type. Always splunk_hec. Allowed enum values: splunk_hec

default: splunk_hec

Option 21

object

The sumo_logic destination forwards logs to Sumo Logic.

Supported pipeline types: logs

encoding

enum

The output encoding format. Allowed enum values: json,raw_message,logfmt

header_custom_fields

[object]

A list of custom headers to include in the request to Sumo Logic.

name [required]

string

The header field name.

value [required]

string

The header field value.

header_host_name

string

Optional override for the host name header.

header_source_category

string

Optional override for the source category header.

header_source_name

string

Optional override for the source name header.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be sumo_logic. Allowed enum values: sumo_logic

default: sumo_logic

Option 22

object

The syslog_ng destination forwards logs to an external syslog-ng server over TCP or UDP using the syslog protocol.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

keepalive

int64

Optional socket keepalive duration in milliseconds.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. The value should always be syslog_ng. Allowed enum values: syslog_ng

default: syslog_ng

Option 23

object

The datadog_metrics destination forwards metrics to Datadog.

Supported pipeline types: metrics

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be datadog_metrics. Allowed enum values: datadog_metrics

default: datadog_metrics

pipeline_type

enum

The type of data being ingested. Defaults to logs if not specified. Allowed enum values: logs,metrics

default: logs

processor_groups

[object]

A list of processor groups that transform or enrich log data.

display_name

string

The display name for a component.

enabled [required]

boolean

Whether this processor group is enabled.

id [required]

string

The unique identifier for the processor group.

include [required]

string

Conditional expression for when this processor group should execute.

inputs [required]

[string]

A list of IDs for components whose output is used as the input for this processor group.

processors [required]

[ <oneOf>]

Processors applied sequentially within this group. Events flow through each processor in order.

Option 1

object

The filter processor allows conditional processing of logs/metrics based on a Datadog search query. Logs/metrics that match the include query are passed through; others are discarded.

Supported pipeline types: logs, metrics

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs/metrics should pass through the filter. Logs/metrics that match this query continue to downstream components; others are dropped.

type [required]

enum

The processor type. The value should always be filter. Allowed enum values: filter

default: filter
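
An illustrative filter processor fragment; the query and IDs are placeholders.

{
  "id": "filter-1",
  "type": "filter",
  "enabled": true,
  "display_name": "Keep payment errors",
  "include": "service:payments status:error"
}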

Option 2

object

The add_env_vars processor adds environment variable values to log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used to reference this processor in the pipeline.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be add_env_vars. Allowed enum values: add_env_vars

default: add_env_vars

variables [required]

[object]

A list of environment variable mappings to apply to log fields.

field [required]

string

The target field in the log event.

name [required]

string

The name of the environment variable to read.

Option 3

object

The add_fields processor adds static key-value fields to logs.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[object]

A list of static fields (key-value pairs) that are added to each log event processed by this component.

name [required]

string

The field name.

value [required]

string

The field value.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be add_fields. Allowed enum values: add_fields

default: add_fields

Option 4

object

The add_hostname processor adds the hostname to log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be add_hostname. Allowed enum values: add_hostname

default: add_hostname

Option 5

object

The custom_processor processor transforms events using Vector Remap Language (VRL) scripts with advanced filtering capabilities.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets. This field should always be set to * for the custom_processor processor.

default: *

remaps [required]

[object]

Array of VRL remap rules.

drop_on_error [required]

boolean

Whether to drop events that caused errors during processing.

enabled

boolean

Whether this remap rule is enabled.

include [required]

string

A Datadog search query used to filter events for this specific remap rule.

name [required]

string

A descriptive name for this remap rule.

source [required]

string

The VRL script source code that defines the processing logic.

type [required]

enum

The processor type. The value should always be custom_processor. Allowed enum values: custom_processor

default: custom_processor
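
An illustrative custom_processor fragment with a single remap rule; the VRL script is a placeholder example, not part of this schema.

{
  "id": "custom-1",
  "type": "custom_processor",
  "enabled": true,
  "include": "*",
  "remaps": [
    {
      "name": "normalize status",
      "enabled": true,
      "include": "service:web",
      "drop_on_error": false,
      "source": ".status = downcase(string!(.status))"
    }
  ]
}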

Option 6

object

The datadog_tags processor includes or excludes specific Datadog tags in your logs.

Supported pipeline types: logs

action [required]

enum

The action to take on tags with matching keys. Allowed enum values: include,exclude

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

keys [required]

[string]

A list of tag keys.

mode [required]

enum

The processing mode. Allowed enum values: filter

type [required]

enum

The processor type. The value should always be datadog_tags. Allowed enum values: datadog_tags

default: datadog_tags

Option 7

object

The dedupe processor removes duplicate fields in log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[string]

A list of log field paths to check for duplicates.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

mode [required]

enum

The deduplication mode to apply to the fields. Allowed enum values: match,ignore

type [required]

enum

The processor type. The value should always be dedupe. Allowed enum values: dedupe

default: dedupe

Option 8

object

The enrichment_table processor enriches logs using a static CSV file, GeoIP database, or reference table. Exactly one of file, geoip, or reference_table must be configured.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

file

object

Defines a static enrichment table loaded from a CSV file.

encoding [required]

object

File encoding format.

delimiter [required]

string

The delimiter used between columns in the file.

includes_headers [required]

boolean

Whether the file includes a header row.

type [required]

enum

Specifies the encoding format (e.g., CSV) used for enrichment tables. Allowed enum values: csv

key [required]

[object]

Key fields used to look up enrichment values.

column [required]

string

The column in the enrichment table to match against.

comparison [required]

enum

Defines how to compare key fields for enrichment table lookups. Allowed enum values: equals

field [required]

string

The log field whose value is compared against the column.

path [required]

string

Path to the CSV file.

schema [required]

[object]

Schema defining column names and their types.

column [required]

string

The name of the column.

type [required]

enum

Declares allowed data types for enrichment table columns. Allowed enum values: string,boolean,integer,float,date,timestamp

geoip

object

Uses a GeoIP database to enrich logs based on an IP field.

key_field [required]

string

Path to the IP field in the log.

locale [required]

string

Locale used to resolve geographical names.

path [required]

string

Path to the GeoIP database file.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

reference_table

object

Uses a Datadog reference table to enrich logs.

columns

[string]

List of column names to include from the reference table. If not provided, all columns are included.

key_field [required]

string

Path to the field in the log event to match against the reference table.

table_id [required]

string

The unique identifier of the reference table.

target [required]

string

Path where enrichment results should be stored in the log.

type [required]

enum

The processor type. The value should always be enrichment_table. Allowed enum values: enrichment_table

default: enrichment_table

Option 9

object

The generate_datadog_metrics processor creates custom metrics from logs and sends them to Datadog. Metrics can be counters, gauges, or distributions and optionally grouped by log fields.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline.

include

string

A Datadog search query used to determine which logs this processor targets.

metrics

[object]

Configuration for generating individual metrics.

group_by

[string]

Optional fields used to group the metric series.

include [required]

string

Datadog filter query to match logs for metric generation.

metric_type [required]

enum

Type of metric to create. Allowed enum values: count,gauge,distribution

name [required]

string

Name of the custom metric to be created.

value [required]

 <oneOf>

Specifies how the value of the generated metric is computed.

Option 1

object

Strategy that increments a generated metric by one for each matching event.

strategy [required]

enum

Increments the metric by 1 for each matching event. Allowed enum values: increment_by_one

Option 2

object

Strategy that increments a generated metric based on the value of a log field.

field [required]

string

Name of the log field containing the numeric value to increment the metric by.

strategy [required]

enum

Uses a numeric field in the log event as the metric increment. Allowed enum values: increment_by_field

type [required]

enum

The processor type. Always generate_datadog_metrics. Allowed enum values: generate_datadog_metrics

default: generate_datadog_metrics
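
An illustrative generate_datadog_metrics fragment that counts matching logs; the metric name and queries are placeholders.

{
  "id": "metrics-gen-1",
  "type": "generate_datadog_metrics",
  "enabled": true,
  "include": "*",
  "metrics": [
    {
      "name": "checkout.errors",
      "include": "service:checkout status:error",
      "metric_type": "count",
      "group_by": ["env"],
      "value": { "strategy": "increment_by_one" }
    }
  ]
}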

Option 10

object

The ocsf_mapper processor transforms logs into the OCSF schema using a predefined mapping configuration.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

mappings [required]

[object]

A list of mapping rules to convert events to the OCSF format.

include [required]

string

A Datadog search query used to select the logs that this mapping should apply to.

mapping [required]

 <oneOf>

Defines a single mapping rule for transforming logs into the OCSF schema.

Option 1

enum

Predefined library mappings for common log formats. Allowed enum values: CloudTrail Account Change,GCP Cloud Audit CreateBucket,GCP Cloud Audit CreateSink,GCP Cloud Audit SetIamPolicy,GCP Cloud Audit UpdateSink,Github Audit Log API Activity,Google Workspace Admin Audit addPrivilege,Microsoft 365 Defender Incident,Microsoft 365 Defender UserLoggedIn,Okta System Log Authentication,Palo Alto Networks Firewall Traffic

type [required]

enum

The processor type. The value should always be ocsf_mapper. Allowed enum values: ocsf_mapper

default: ocsf_mapper

Option 11

object

The parse_grok processor extracts structured fields from unstructured log messages using Grok patterns.

Supported pipeline types: logs

disable_library_rules

boolean

If set to true, disables the default Grok rules provided by Datadog.

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

A unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

rules [required]

[object]

The list of Grok parsing rules. If multiple matching rules are provided, they are evaluated in order. The first successful match is applied.

match_rules [required]

[object]

A list of Grok parsing rules that define how to extract fields from the source field. Each rule must contain a name and a valid Grok pattern.

name [required]

string

The name of the rule.

rule [required]

string

The definition of the Grok rule.

source [required]

string

The name of the field in the log event to apply the Grok rules to.

support_rules

[object]

A list of Grok helper rules that can be referenced by the parsing rules.

name [required]

string

The name of the Grok helper rule.

rule [required]

string

The definition of the Grok helper rule.

type [required]

enum

The processor type. The value should always be parse_grok. Allowed enum values: parse_grok

default: parse_grok
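
An illustrative parse_grok fragment with one match rule; the Grok pattern is a placeholder and assumes Datadog's Grok matcher syntax.

{
  "id": "grok-1",
  "type": "parse_grok",
  "enabled": true,
  "include": "source:nginx",
  "disable_library_rules": false,
  "rules": [
    {
      "source": "message",
      "match_rules": [
        { "name": "level_and_message", "rule": "%{word:level} %{data:msg}" }
      ]
    }
  ]
}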

Option 12

object

The parse_json processor extracts JSON from a specified field and flattens it into the event. This is useful when logs contain embedded JSON as a string.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

field [required]

string

The name of the log field that contains a JSON string.

id [required]

string

A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be parse_json. Allowed enum values: parse_json

default: parse_json

Option 13

object

The parse_xml processor parses XML from a specified field and extracts it into the event.

Supported pipeline types: logs

always_use_text_key

boolean

Whether to always use a text key for element content.

attr_prefix

string

The prefix to use for XML attributes in the parsed output.

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

field [required]

string

The name of the log field that contains an XML string.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

include_attr

boolean

Whether to include XML attributes in the parsed output.

parse_bool

boolean

Whether to parse boolean values from strings.

parse_null

boolean

Whether to parse null values.

parse_number

boolean

Whether to parse numeric values from strings.

text_key

string

The key name to use for text content within XML elements. Must be at least 1 character if specified.

type [required]

enum

The processor type. The value should always be parse_xml. Allowed enum values: parse_xml

default: parse_xml

Option 14

object

The quota processor measures logging traffic for logs that match a specified filter. When the configured daily quota is met, the processor can drop or alert.

Supported pipeline types: logs

display_name

string

The display name for a component.

drop_events

boolean

If set to true, logs that match the quota filter and are sent after the quota is exceeded are dropped. Logs that do not match the filter continue through the pipeline. Note: You can set either drop_events or overflow_action, but not both.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

ignore_when_missing_partitions

boolean

If true, the processor skips quota checks when partition fields are missing from the logs.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

limit [required]

object

The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.

enforce [required]

enum

Unit for quota enforcement in bytes for data size or events for count. Allowed enum values: bytes,events

limit [required]

int64

The limit for quota enforcement.

name [required]

string

Name of the quota.

overflow_action

enum

The action to take when the quota or bucket limit is exceeded. Options:

  • drop: Drop the event.
  • no_action: Let the event pass through.
  • overflow_routing: Route to an overflow destination.

Allowed enum values: drop,no_action,overflow_routing

overrides

[object]

A list of alternate quota rules that apply to specific sets of events, identified by matching field values. Each override can define a custom limit.

fields [required]

[object]

A list of field matchers used to apply a specific override. If an event matches all listed key-value pairs, the corresponding override limit is enforced.

name [required]

string

The field name.

value [required]

string

The field value.

limit [required]

object

The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.

enforce [required]

enum

Unit for quota enforcement in bytes for data size or events for count. Allowed enum values: bytes,events

limit [required]

int64

The limit for quota enforcement.

partition_fields

[string]

A list of fields used to segment log traffic for quota enforcement. Quotas are tracked independently by unique combinations of these field values.

too_many_buckets_action

enum

The action to take when the quota or bucket limit is exceeded. Options:

  • drop: Drop the event.
  • no_action: Let the event pass through.
  • overflow_routing: Route to an overflow destination.

Allowed enum values: drop,no_action,overflow_routing

type [required]

enum

The processor type. The value should always be quota. Allowed enum values: quota

default: quota
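
An illustrative quota fragment that enforces a per-service byte limit; the limit value and field names are placeholders. drop_events is omitted because overflow_action is set (you can set one or the other, not both).

{
  "id": "quota-1",
  "type": "quota",
  "enabled": true,
  "include": "*",
  "name": "daily-intake",
  "limit": { "enforce": "bytes", "limit": 107374182400 },
  "partition_fields": ["service"],
  "overflow_action": "drop"
}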

Option 15

object

The reduce processor aggregates and merges logs based on matching keys and merge strategies.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

group_by [required]

[string]

A list of fields used to group log events for merging.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

merge_strategies [required]

[object]

List of merge strategies defining how values from grouped events should be combined.

path [required]

string

The field path in the log event.

strategy [required]

enum

The merge strategy to apply. Allowed enum values: discard,retain,sum,max,min,array,concat,concat_newline,concat_raw,shortest_array,longest_array,flat_unique

type [required]

enum

The processor type. The value should always be reduce. Allowed enum values: reduce

default: reduce

Option 16

object

The remove_fields processor deletes specified fields from logs.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[string]

A list of field names to be removed from each log event.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be remove_fields. Allowed enum values: remove_fields

default: remove_fields

Option 17

object

The rename_fields processor changes field names.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[object]

A list of rename rules specifying which fields to rename in the event, what to rename them to, and whether to preserve the original fields.

destination [required]

string

The field name to assign the renamed value to.

preserve_source [required]

boolean

Indicates whether the original field received from the source should be kept (true) or removed (false) after renaming.

source [required]

string

The original field name in the log event that should be renamed.

id [required]

string

A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be rename_fields. Allowed enum values: rename_fields

default: rename_fields

Option 18

object

The sample processor allows probabilistic sampling of logs at a fixed rate.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

group_by

[string]

Optional list of fields to group events by. Each group is sampled independently.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

percentage [required]

double

The percentage of logs to sample.

type [required]

enum

The processor type. The value should always be sample. Allowed enum values: sample

default: sample

Option 19

object

The sensitive_data_scanner processor detects and optionally redacts sensitive data in log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

rules [required]

[object]

A list of rules for identifying and acting on sensitive data patterns.

keyword_options

object

Configuration for keywords used to reinforce sensitive data pattern detection.

keywords [required]

[string]

A list of keywords to match near the sensitive pattern.

proximity [required]

int64

Maximum number of tokens between a keyword and a sensitive value match.

name [required]

string

A name identifying the rule.

on_match [required]

 <oneOf>

Defines what action to take when sensitive data is matched.

Option 1

object

Configuration for completely redacting matched sensitive data.

action [required]

enum

Action type that completely replaces the matched sensitive data with a fixed replacement string to remove all visibility. Allowed enum values: redact

options [required]

object

Configuration for fully redacting sensitive data.

replace [required]

string

The string used to replace the matched sensitive value.

Option 2

object

Configuration for hashing matched sensitive values.

action [required]

enum

Action type that replaces the matched sensitive data with a hashed representation, preserving structure while securing content. Allowed enum values: hash

options

object

Options for the hash action.

Option 3

object

Configuration for partially redacting matched sensitive data.

action [required]

enum

Action type that redacts part of the sensitive data while preserving a configurable number of characters, typically used for masking purposes (e.g., show last 4 digits of a credit card). Allowed enum values: partial_redact

options [required]

object

Controls how partial redaction is applied, including character count and direction.

characters [required]

int64

The number of characters to redact.

direction [required]

enum

Indicates whether to redact characters from the first or last part of the matched value. Allowed enum values: first,last

pattern [required]

 <oneOf>

Pattern detection configuration for identifying sensitive data using either a custom regex or a library reference.

Option 1

object

Defines a custom regex-based pattern for identifying sensitive data in logs.

options [required]

object

Options for defining a custom regex pattern.

description

string

Human-readable description providing context about a sensitive data scanner rule.

rule [required]

string

A regular expression used to detect sensitive values. Must be a valid regex.

type [required]

enum

Indicates a custom regular expression is used for matching. Allowed enum values: custom

Option 2

object

Specifies a pattern from Datadog’s sensitive data detection library to match known sensitive data types.

options [required]

object

Options for selecting a predefined library pattern and enabling keyword support.

description

string

Human-readable description providing context about a sensitive data scanner rule.

id [required]

string

Identifier for a predefined pattern from the sensitive data scanner pattern library.

use_recommended_keywords

boolean

Whether to augment the pattern with recommended keywords (optional).

type [required]

enum

Indicates that a predefined library pattern is used. Allowed enum values: library

scope [required]

 <oneOf>

Determines which parts of the log the pattern-matching rule should be applied to.

Option 1

object

Includes only specific fields for sensitive data scanning.

options [required]

object

Fields to which the scope rule applies.

fields [required]

[string]

The list of field names the scope rule applies to.

target [required]

enum

Applies the rule only to included fields. Allowed enum values: include

Option 2

object

Excludes specific fields from sensitive data scanning.

options [required]

object

Fields to which the scope rule applies.

fields [required]

[string]

The list of field names the scope rule applies to.

target [required]

enum

Excludes specific fields from processing. Allowed enum values: exclude

Option 3

object

Applies scanning across all available fields.

target [required]

enum

Applies the rule to all fields. Allowed enum values: all

tags [required]

[string]

Tags assigned to this rule for filtering and classification.

type [required]

enum

The processor type. The value should always be sensitive_data_scanner. Allowed enum values: sensitive_data_scanner

default: sensitive_data_scanner
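
An illustrative sensitive_data_scanner fragment with one custom-regex rule that fully redacts matches; the regex and tag are placeholders.

{
  "id": "sds-1",
  "type": "sensitive_data_scanner",
  "enabled": true,
  "include": "*",
  "rules": [
    {
      "name": "redact emails",
      "tags": ["sensitive:email"],
      "pattern": {
        "type": "custom",
        "options": { "rule": "[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+" }
      },
      "scope": { "target": "all" },
      "on_match": {
        "action": "redact",
        "options": { "replace": "[REDACTED]" }
      }
    }
  ]
}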

Option 20

object

The split_array processor splits array fields into separate events based on configured rules.

Supported pipeline types: logs

arrays [required]

[object]

A list of array split configurations.

field [required]

string

The path to the array field to split.

include [required]

string

A Datadog search query used to determine which logs this array split operation targets.

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets. For split_array, this should typically be *.

type [required]

enum

The processor type. The value should always be split_array. Allowed enum values: split_array

default: split_array

Option 21

object

The throttle processor limits the number of events that pass through over a given time window.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

group_by

[string]

Optional list of fields used to group events. The threshold is applied independently to each group.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

threshold [required]

int64

The number of events allowed in a given time window. Events sent after the threshold has been reached are dropped.

type [required]

enum

The processor type. The value should always be throttle. Allowed enum values: throttle

default: throttle

window [required]

double

The time window in seconds over which the threshold applies.

Option 22

object

The metric_tags processor filters metrics based on their tags using Datadog tag key patterns.

Supported pipeline types: metrics

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query that determines which metrics the processor targets.

rules [required]

[object]

A list of rules for filtering metric tags.

action [required]

enum

The action to take on tags with matching keys. Allowed enum values: include,exclude

include [required]

string

A Datadog search query used to determine which metrics this rule targets.

keys [required]

[string]

A list of tag keys to include or exclude.

mode [required]

enum

The processing mode for tag filtering. Allowed enum values: filter

type [required]

enum

The processor type. The value should always be metric_tags. Allowed enum values: metric_tags

default: metric_tags

processors

[object]

DEPRECATED: A list of processor groups that transform or enrich log data.

Deprecated: This field is deprecated. Use the processor_groups field instead.

display_name

string

The display name for a component.

enabled [required]

boolean

Whether this processor group is enabled.

id [required]

string

The unique identifier for the processor group.

include [required]

string

Conditional expression for when this processor group should execute.

inputs [required]

[string]

A list of IDs for components whose output is used as the input for this processor group.

processors [required]

[ <oneOf>]

Processors applied sequentially within this group. Events flow through each processor in order.

Option 1

object

The filter processor allows conditional processing of logs/metrics based on a Datadog search query. Logs/metrics that match the include query are passed through; others are discarded.

Supported pipeline types: logs, metrics

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs/metrics should pass through the filter. Logs/metrics that match this query continue to downstream components; others are dropped.

type [required]

enum

The processor type. The value should always be filter. Allowed enum values: filter

default: filter
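
For reference, a minimal filter processor entry in a pipeline configuration could look like the following sketch; the id and the include query are illustrative values, not defaults:

{
  "id": "filter-processor",
  "type": "filter",
  "enabled": true,
  "include": "service:my-service AND status:error"
}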

Option 2

object

The add_env_vars processor adds environment variable values to log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used to reference this processor in the pipeline.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be add_env_vars. Allowed enum values: add_env_vars

default: add_env_vars

variables [required]

[object]

A list of environment variable mappings to apply to log fields.

field [required]

string

The target field in the log event.

name [required]

string

The name of the environment variable to read.
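
As a sketch, an add_env_vars processor that copies an environment variable into each log might be configured like this (the target field and variable name are hypothetical):

{
  "id": "add-env-vars-processor",
  "type": "add_env_vars",
  "enabled": true,
  "include": "*",
  "variables": [
    { "field": "environment", "name": "DD_ENV" }
  ]
}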

Option 3

object

The add_fields processor adds static key-value fields to logs.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[object]

A list of static fields (key-value pairs) that is added to each log event processed by this component.

name [required]

string

The field name.

value [required]

string

The field value.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be add_fields. Allowed enum values: add_fields

default: add_fields
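
A minimal add_fields entry, assuming an illustrative static key-value pair, might look like:

{
  "id": "add-fields-processor",
  "type": "add_fields",
  "enabled": true,
  "include": "*",
  "fields": [
    { "name": "team", "value": "platform" }
  ]
}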

Option 4

object

The add_hostname processor adds the hostname to log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be add_hostname. Allowed enum values: add_hostname

default: add_hostname

Option 5

object

The custom_processor processor transforms events using Vector Remap Language (VRL) scripts with advanced filtering capabilities.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets. This field should always be set to * for the custom_processor processor.

default: *

remaps [required]

[object]

Array of VRL remap rules.

drop_on_error [required]

boolean

Whether to drop events that caused errors during processing.

enabled

boolean

Whether this remap rule is enabled.

include [required]

string

A Datadog search query used to filter events for this specific remap rule.

name [required]

string

A descriptive name for this remap rule.

source [required]

string

The VRL script source code that defines the processing logic.

type [required]

enum

The processor type. The value should always be custom_processor. Allowed enum values: custom_processor

default: custom_processor
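
A sketch of a custom_processor entry with a single remap rule follows; the rule name, query, and VRL snippet are illustrative only and not validated here:

{
  "id": "custom-processor",
  "type": "custom_processor",
  "enabled": true,
  "include": "*",
  "remaps": [
    {
      "name": "normalize status",
      "include": "service:my-service",
      "drop_on_error": false,
      "source": ".status = downcase!(.status)"
    }
  ]
}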

Option 6

object

The datadog_tags processor includes or excludes specific Datadog tags in your logs.

Supported pipeline types: logs

action [required]

enum

The action to take on tags with matching keys. Allowed enum values: include,exclude

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

keys [required]

[string]

A list of tag keys.

mode [required]

enum

The processing mode. Allowed enum values: filter

type [required]

enum

The processor type. The value should always be datadog_tags. Allowed enum values: datadog_tags

default: datadog_tags
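
As an illustration, a datadog_tags processor that keeps only selected tag keys might be written as follows (the keys and query are hypothetical):

{
  "id": "datadog-tags-processor",
  "type": "datadog_tags",
  "enabled": true,
  "include": "*",
  "action": "include",
  "mode": "filter",
  "keys": ["env", "service", "version"]
}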

Option 7

object

The dedupe processor removes duplicate fields in log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[string]

A list of log field paths to check for duplicates.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

mode [required]

enum

The deduplication mode to apply to the fields. Allowed enum values: match,ignore

type [required]

enum

The processor type. The value should always be dedupe. Allowed enum values: dedupe

default: dedupe
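
A minimal dedupe entry, with illustrative field paths, could look like:

{
  "id": "dedupe-processor",
  "type": "dedupe",
  "enabled": true,
  "include": "*",
  "mode": "match",
  "fields": ["message", "error.stack"]
}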

Option 8

object

The enrichment_table processor enriches logs using a static CSV file, GeoIP database, or reference table. Exactly one of file, geoip, or reference_table must be configured.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

file

object

Defines a static enrichment table loaded from a CSV file.

encoding [required]

object

File encoding format.

delimiter [required]

string

The encoding delimiter.

includes_headers [required]

boolean

The encoding includes_headers.

type [required]

enum

Specifies the encoding format (e.g., CSV) used for enrichment tables. Allowed enum values: csv

key [required]

[object]

Key fields used to look up enrichment values.

column [required]

string

The items column.

comparison [required]

enum

Defines how to compare key fields for enrichment table lookups. Allowed enum values: equals

field [required]

string

The items field.

path [required]

string

Path to the CSV file.

schema [required]

[object]

Schema defining column names and their types.

column [required]

string

The items column.

type [required]

enum

Declares allowed data types for enrichment table columns. Allowed enum values: string,boolean,integer,float,date,timestamp

geoip

object

Uses a GeoIP database to enrich logs based on an IP field.

key_field [required]

string

Path to the IP field in the log.

locale [required]

string

Locale used to resolve geographical names.

path [required]

string

Path to the GeoIP database file.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

reference_table

object

Uses a Datadog reference table to enrich logs.

columns

[string]

List of column names to include from the reference table. If not provided, all columns are included.

key_field [required]

string

Path to the field in the log event to match against the reference table.

table_id [required]

string

The unique identifier of the reference table.

target [required]

string

Path where enrichment results should be stored in the log.

type [required]

enum

The processor type. The value should always be enrichment_table. Allowed enum values: enrichment_table

default: enrichment_table
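
One possible enrichment_table entry using the GeoIP variant is sketched below; the field paths and database path are assumptions for illustration:

{
  "id": "enrichment-processor",
  "type": "enrichment_table",
  "enabled": true,
  "include": "*",
  "target": "network.client.geo",
  "geoip": {
    "key_field": "network.client.ip",
    "locale": "en",
    "path": "/etc/geoip/GeoLite2-City.mmdb"
  }
}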

Option 9

object

The generate_datadog_metrics processor creates custom metrics from logs and sends them to Datadog. Metrics can be counters, gauges, or distributions and optionally grouped by log fields.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline.

include

string

A Datadog search query used to determine which logs this processor targets.

metrics

[object]

Configuration for generating individual metrics.

group_by

[string]

Optional fields used to group the metric series.

include [required]

string

Datadog filter query to match logs for metric generation.

metric_type [required]

enum

Type of metric to create. Allowed enum values: count,gauge,distribution

name [required]

string

Name of the custom metric to be created.

value [required]

 <oneOf>

Specifies how the value of the generated metric is computed.

Option 1

object

Strategy that increments a generated metric by one for each matching event.

strategy [required]

enum

Increments the metric by 1 for each matching event. Allowed enum values: increment_by_one

Option 2

object

Strategy that increments a generated metric based on the value of a log field.

field [required]

string

Name of the log field containing the numeric value to increment the metric by.

strategy [required]

enum

Uses a numeric field in the log event as the metric increment. Allowed enum values: increment_by_field

type [required]

enum

The processor type. Always generate_datadog_metrics. Allowed enum values: generate_datadog_metrics

default: generate_datadog_metrics
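
For illustration, a generate_datadog_metrics processor that counts error logs per service might be configured as follows (the metric name and queries are hypothetical):

{
  "id": "generate-metrics-processor",
  "type": "generate_datadog_metrics",
  "enabled": true,
  "include": "*",
  "metrics": [
    {
      "name": "logs.errors.count",
      "include": "status:error",
      "metric_type": "count",
      "group_by": ["service"],
      "value": { "strategy": "increment_by_one" }
    }
  ]
}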

Option 10

object

The ocsf_mapper processor transforms logs into the OCSF schema using a predefined mapping configuration.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

mappings [required]

[object]

A list of mapping rules to convert events to the OCSF format.

include [required]

string

A Datadog search query used to select the logs that this mapping should apply to.

mapping [required]

 <oneOf>

Defines a single mapping rule for transforming logs into the OCSF schema.

Option 1

enum

Predefined library mappings for common log formats. Allowed enum values: CloudTrail Account Change,GCP Cloud Audit CreateBucket,GCP Cloud Audit CreateSink,GCP Cloud Audit SetIamPolicy,GCP Cloud Audit UpdateSink,Github Audit Log API Activity,Google Workspace Admin Audit addPrivilege,Microsoft 365 Defender Incident,Microsoft 365 Defender UserLoggedIn,Okta System Log Authentication,Palo Alto Networks Firewall Traffic

type [required]

enum

The processor type. The value should always be ocsf_mapper. Allowed enum values: ocsf_mapper

default: ocsf_mapper

Option 11

object

The parse_grok processor extracts structured fields from unstructured log messages using Grok patterns.

Supported pipeline types: logs

disable_library_rules

boolean

If set to true, disables the default Grok rules provided by Datadog.

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

A unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

rules [required]

[object]

The list of Grok parsing rules. If multiple matching rules are provided, they are evaluated in order. The first successful match is applied.

match_rules [required]

[object]

A list of Grok parsing rules that define how to extract fields from the source field. Each rule must contain a name and a valid Grok pattern.

name [required]

string

The name of the rule.

rule [required]

string

The definition of the Grok rule.

source [required]

string

The name of the field in the log event to apply the Grok rules to.

support_rules

[object]

A list of Grok helper rules that can be referenced by the parsing rules.

name [required]

string

The name of the Grok helper rule.

rule [required]

string

The definition of the Grok helper rule.

type [required]

enum

The processor type. The value should always be parse_grok. Allowed enum values: parse_grok

default: parse_grok
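
A parse_grok entry might be sketched as follows; the rule name and Grok pattern are illustrative and not validated here:

{
  "id": "parse-grok-processor",
  "type": "parse_grok",
  "enabled": true,
  "include": "source:nginx",
  "rules": [
    {
      "source": "message",
      "match_rules": [
        {
          "name": "access_log",
          "rule": "%{ipOrHost:network.client.ip} %{notSpace:http.ident} %{notSpace:http.auth}"
        }
      ]
    }
  ]
}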

Option 12

object

The parse_json processor extracts JSON from a specified field and flattens it into the event. This is useful when logs contain embedded JSON as a string.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

field [required]

string

The name of the log field that contains a JSON string.

id [required]

string

A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be parse_json. Allowed enum values: parse_json

default: parse_json
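
A minimal parse_json entry, assuming the embedded JSON lives in a hypothetical field named payload, could be:

{
  "id": "parse-json-processor",
  "type": "parse_json",
  "enabled": true,
  "include": "*",
  "field": "payload"
}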

Option 13

object

The parse_xml processor parses XML from a specified field and extracts it into the event.

Supported pipeline types: logs

always_use_text_key

boolean

Whether to always use a text key for element content.

attr_prefix

string

The prefix to use for XML attributes in the parsed output.

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

field [required]

string

The name of the log field that contains an XML string.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

include_attr

boolean

Whether to include XML attributes in the parsed output.

parse_bool

boolean

Whether to parse boolean values from strings.

parse_null

boolean

Whether to parse null values.

parse_number

boolean

Whether to parse numeric values from strings.

text_key

string

The key name to use for text content within XML elements. Must be at least 1 character if specified.

type [required]

enum

The processor type. The value should always be parse_xml. Allowed enum values: parse_xml

default: parse_xml

Option 14

object

The quota processor measures logging traffic for logs that match a specified filter. When the configured daily quota is met, the processor can drop or alert.

Supported pipeline types: logs

display_name

string

The display name for a component.

drop_events

boolean

If set to true, logs that match the quota filter and are sent after the quota is exceeded are dropped. Logs that do not match the filter continue through the pipeline. Note: You can set either drop_events or overflow_action, but not both.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

ignore_when_missing_partitions

boolean

If true, the processor skips quota checks when partition fields are missing from the logs.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

limit [required]

object

The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.

enforce [required]

enum

Unit for quota enforcement in bytes for data size or events for count. Allowed enum values: bytes,events

limit [required]

int64

The limit for quota enforcement.

name [required]

string

Name of the quota.

overflow_action

enum

The action to take when the quota or bucket limit is exceeded. Options:

  • drop: Drop the event.
  • no_action: Let the event pass through.
  • overflow_routing: Route to an overflow destination. Allowed enum values: drop,no_action,overflow_routing

overrides

[object]

A list of alternate quota rules that apply to specific sets of events, identified by matching field values. Each override can define a custom limit.

fields [required]

[object]

A list of field matchers used to apply a specific override. If an event matches all listed key-value pairs, the corresponding override limit is enforced.

name [required]

string

The field name.

value [required]

string

The field value.

limit [required]

object

The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.

enforce [required]

enum

Unit for quota enforcement in bytes for data size or events for count. Allowed enum values: bytes,events

limit [required]

int64

The limit for quota enforcement.

partition_fields

[string]

A list of fields used to segment log traffic for quota enforcement. Quotas are tracked independently by unique combinations of these field values.

too_many_buckets_action

enum

The action to take when the quota or bucket limit is exceeded. Options:

  • drop: Drop the event.
  • no_action: Let the event pass through.
  • overflow_routing: Route to an overflow destination. Allowed enum values: drop,no_action,overflow_routing

type [required]

enum

The processor type. The value should always be quota. Allowed enum values: quota

default: quota
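
As a sketch, a quota processor that enforces a per-service daily byte limit and drops overflow might look like the following; the name, limit, and partition field are illustrative:

{
  "id": "quota-processor",
  "type": "quota",
  "enabled": true,
  "include": "*",
  "name": "daily-intake-quota",
  "drop_events": true,
  "partition_fields": ["service"],
  "limit": { "enforce": "bytes", "limit": 10000000000 }
}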

Option 15

object

The reduce processor aggregates and merges logs based on matching keys and merge strategies.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

group_by [required]

[string]

A list of fields used to group log events for merging.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

merge_strategies [required]

[object]

List of merge strategies defining how values from grouped events should be combined.

path [required]

string

The field path in the log event.

strategy [required]

enum

The merge strategy to apply. Allowed enum values: discard,retain,sum,max,min,array,concat,concat_newline,concat_raw,shortest_array,longest_array,flat_unique

type [required]

enum

The processor type. The value should always be reduce. Allowed enum values: reduce

default: reduce
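
An illustrative reduce entry that merges grouped events could be written as:

{
  "id": "reduce-processor",
  "type": "reduce",
  "enabled": true,
  "include": "*",
  "group_by": ["host", "service"],
  "merge_strategies": [
    { "path": "message", "strategy": "concat_newline" },
    { "path": "count", "strategy": "sum" }
  ]
}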

Option 16

object

The remove_fields processor deletes specified fields from logs.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[string]

A list of field names to be removed from each log event.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be remove_fields. Allowed enum values: remove_fields

default: remove_fields

Option 17

object

The rename_fields processor changes field names.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[object]

A list of rename rules specifying which fields to rename in the event, what to rename them to, and whether to preserve the original fields.

destination [required]

string

The field name to assign the renamed value to.

preserve_source [required]

boolean

Indicates whether the original field received from the source should be kept (true) or removed (false) after renaming.

source [required]

string

The original field name in the log event that should be renamed.

id [required]

string

A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be rename_fields. Allowed enum values: rename_fields

default: rename_fields
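
A minimal rename_fields entry, with hypothetical source and destination paths, might look like:

{
  "id": "rename-fields-processor",
  "type": "rename_fields",
  "enabled": true,
  "include": "*",
  "fields": [
    {
      "source": "msg",
      "destination": "message",
      "preserve_source": false
    }
  ]
}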

Option 18

object

The sample processor allows probabilistic sampling of logs at a fixed rate.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

group_by

[string]

Optional list of fields to group events by. Each group is sampled independently.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

percentage [required]

double

The percentage of logs to sample.

type [required]

enum

The processor type. The value should always be sample. Allowed enum values: sample

default: sample

Option 19

object

The sensitive_data_scanner processor detects and optionally redacts sensitive data in log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

rules [required]

[object]

A list of rules for identifying and acting on sensitive data patterns.

keyword_options

object

Configuration for keywords used to reinforce sensitive data pattern detection.

keywords [required]

[string]

A list of keywords to match near the sensitive pattern.

proximity [required]

int64

Maximum number of tokens between a keyword and a sensitive value match.

name [required]

string

A name identifying the rule.

on_match [required]

 <oneOf>

Defines what action to take when sensitive data is matched.

Option 1

object

Configuration for completely redacting matched sensitive data.

action [required]

enum

Action type that completely replaces the matched sensitive data with a fixed replacement string to remove all visibility. Allowed enum values: redact

options [required]

object

Configuration for fully redacting sensitive data.

replace [required]

string

The ObservabilityPipelineSensitiveDataScannerProcessorActionRedactOptions replace.

Option 2

object

Configuration for hashing matched sensitive values.

action [required]

enum

Action type that replaces the matched sensitive data with a hashed representation, preserving structure while securing content. Allowed enum values: hash

options

object

The ObservabilityPipelineSensitiveDataScannerProcessorActionHash options.

Option 3

object

Configuration for partially redacting matched sensitive data.

action [required]

enum

Action type that redacts part of the sensitive data while preserving a configurable number of characters, typically used for masking purposes (e.g., show last 4 digits of a credit card). Allowed enum values: partial_redact

options [required]

object

Controls how partial redaction is applied, including character count and direction.

characters [required]

int64

The ObservabilityPipelineSensitiveDataScannerProcessorActionPartialRedactOptions characters.

direction [required]

enum

Indicates whether to redact characters from the first or last part of the matched value. Allowed enum values: first,last

pattern [required]

 <oneOf>

Pattern detection configuration for identifying sensitive data using either a custom regex or a library reference.

Option 1

object

Defines a custom regex-based pattern for identifying sensitive data in logs.

options [required]

object

Options for defining a custom regex pattern.

description

string

Human-readable description providing context about a sensitive data scanner rule.

rule [required]

string

A regular expression used to detect sensitive values. Must be a valid regex.

type [required]

enum

Indicates a custom regular expression is used for matching. Allowed enum values: custom

Option 2

object

Specifies a pattern from Datadog’s sensitive data detection library to match known sensitive data types.

options [required]

object

Options for selecting a predefined library pattern and enabling keyword support.

description

string

Human-readable description providing context about a sensitive data scanner rule.

id [required]

string

Identifier for a predefined pattern from the sensitive data scanner pattern library.

use_recommended_keywords

boolean

Whether to augment the pattern with recommended keywords (optional).

type [required]

enum

Indicates that a predefined library pattern is used. Allowed enum values: library

scope [required]

 <oneOf>

Determines which parts of the log the pattern-matching rule should be applied to.

Option 1

object

Includes only specific fields for sensitive data scanning.

options [required]

object

Fields to which the scope rule applies.

fields [required]

[string]

The ObservabilityPipelineSensitiveDataScannerProcessorScopeOptions fields.

target [required]

enum

Applies the rule only to included fields. Allowed enum values: include

Option 2

object

Excludes specific fields from sensitive data scanning.

options [required]

object

Fields to which the scope rule applies.

fields [required]

[string]

The ObservabilityPipelineSensitiveDataScannerProcessorScopeOptions fields.

target [required]

enum

Excludes specific fields from processing. Allowed enum values: exclude

Option 3

object

Applies scanning across all available fields.

target [required]

enum

Applies the rule to all fields. Allowed enum values: all

tags [required]

[string]

Tags assigned to this rule for filtering and classification.

type [required]

enum

The processor type. The value should always be sensitive_data_scanner. Allowed enum values: sensitive_data_scanner

default: sensitive_data_scanner
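
Putting the rule pieces together, one possible sensitive_data_scanner entry using a custom regex and full redaction is sketched below; the rule name, regex, and tag are illustrative only:

{
  "id": "sds-processor",
  "type": "sensitive_data_scanner",
  "enabled": true,
  "include": "*",
  "rules": [
    {
      "name": "redact-credit-cards",
      "tags": ["sensitive:credit-card"],
      "pattern": {
        "type": "custom",
        "options": { "rule": "\\b(?:\\d[ -]*?){13,16}\\b" }
      },
      "scope": { "target": "all" },
      "on_match": {
        "action": "redact",
        "options": { "replace": "[REDACTED]" }
      }
    }
  ]
}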

Option 20

object

The split_array processor splits array fields into separate events based on configured rules.

Supported pipeline types: logs

arrays [required]

[object]

A list of array split configurations.

field [required]

string

The path to the array field to split.

include [required]

string

A Datadog search query used to determine which logs this array split operation targets.

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets. For split_array, this should typically be *.

type [required]

enum

The processor type. The value should always be split_array. Allowed enum values: split_array

default: split_array

Option 21

object

The throttle processor limits the number of events that pass through over a given time window.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

group_by

[string]

Optional list of fields used to group events; the threshold is applied to each group independently.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

threshold [required]

int64

The number of events allowed in a given time window. Events sent after the threshold has been reached are dropped.

type [required]

enum

The processor type. The value should always be throttle. Allowed enum values: throttle

default: throttle

window [required]

double

The time window in seconds over which the threshold applies.
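
For illustration, a throttle processor that allows at most 1000 events per service in a 60-second window could be configured as follows (the values are hypothetical):

{
  "id": "throttle-processor",
  "type": "throttle",
  "enabled": true,
  "include": "*",
  "group_by": ["service"],
  "threshold": 1000,
  "window": 60.0
}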

Option 22

object

The metric_tags processor filters metrics based on their tags using Datadog tag key patterns.

Supported pipeline types: metrics

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query that determines which metrics the processor targets.

rules [required]

[object]

A list of rules for filtering metric tags.

action [required]

enum

The action to take on tags with matching keys. Allowed enum values: include,exclude

include [required]

string

A Datadog search query used to determine which metrics this rule targets.

keys [required]

[string]

A list of tag keys to include or exclude.

mode [required]

enum

The processing mode for tag filtering. Allowed enum values: filter

type [required]

enum

The processor type. The value should always be metric_tags. Allowed enum values: metric_tags

default: metric_tags
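
A metric_tags entry that strips high-cardinality tag keys might be sketched as follows (the keys listed are hypothetical):

{
  "id": "metric-tags-processor",
  "type": "metric_tags",
  "enabled": true,
  "include": "*",
  "rules": [
    {
      "action": "exclude",
      "include": "*",
      "keys": ["pod_name", "container_id"],
      "mode": "filter"
    }
  ]
}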

sources [required]

[ <oneOf>]

A list of configured data sources for the pipeline.

Option 1

object

The datadog_agent source collects logs/metrics from the Datadog Agent.

Supported pipeline types: logs, metrics

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be datadog_agent. Allowed enum values: datadog_agent

default: datadog_agent

Option 2

object

The amazon_data_firehose source ingests logs from AWS Data Firehose.

Supported pipeline types: logs

auth

object

AWS authentication credentials used for accessing AWS services such as S3. If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).

assume_role

string

The Amazon Resource Name (ARN) of the role to assume.

external_id

string

A unique identifier for cross-account role assumption.

session_name

string

A session identifier used for logging and tracing the assumed role session.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be amazon_data_firehose. Allowed enum values: amazon_data_firehose

default: amazon_data_firehose

Option 3

object

The amazon_s3 source ingests logs from an Amazon S3 bucket. It supports AWS authentication and TLS encryption.

Supported pipeline types: logs

auth

object

AWS authentication credentials used for accessing AWS services such as S3. If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).

assume_role

string

The Amazon Resource Name (ARN) of the role to assume.

external_id

string

A unique identifier for cross-account role assumption.

session_name

string

A session identifier used for logging and tracing the assumed role session.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

region [required]

string

AWS region where the S3 bucket resides.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. Always amazon_s3. Allowed enum values: amazon_s3

default: amazon_s3

Option 4

object

The fluent_bit source ingests logs from Fluent Bit.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be fluent_bit. Allowed enum values: fluent_bit

default: fluent_bit

Option 5

object

The fluentd source ingests logs from a Fluentd-compatible service.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be fluentd. Allowed enum values: fluentd

default: fluentd

Option 6

object

The google_pubsub source ingests logs from a Google Cloud Pub/Sub subscription.

Supported pipeline types: logs

auth

object

GCP credentials used to authenticate with Google Cloud Storage.

credentials_file [required]

string

Path to the GCP service account key file.

decoding [required]

enum

The decoding format used to interpret incoming logs. Allowed enum values: bytes,gelf,json,syslog

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

project [required]

string

The GCP project ID that owns the Pub/Sub subscription.

subscription [required]

string

The Pub/Sub subscription name from which messages are consumed.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be google_pubsub. Allowed enum values: google_pubsub

default: google_pubsub

Option 7

object

The http_client source scrapes logs from HTTP endpoints at regular intervals.

Supported pipeline types: logs

auth_strategy

enum

Optional authentication strategy for HTTP requests. Allowed enum values: none,basic,bearer,custom

decoding [required]

enum

The decoding format used to interpret incoming logs. Allowed enum values: bytes,gelf,json,syslog

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

scrape_interval_secs

int64

The interval (in seconds) between HTTP scrape requests.

scrape_timeout_secs

int64

The timeout (in seconds) for each scrape request.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be http_client. Allowed enum values: http_client

default: http_client

Option 8

object

The http_server source collects logs over HTTP POST from external services.

Supported pipeline types: logs

auth_strategy [required]

enum

HTTP authentication method. Allowed enum values: none,plain

decoding [required]

enum

The decoding format used to interpret incoming logs. Allowed enum values: bytes,gelf,json,syslog

id [required]

string

Unique ID for the HTTP server source.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be http_server. Allowed enum values: http_server

default: http_server

Option 9

object

The kafka source ingests data from Apache Kafka topics.

Supported pipeline types: logs

group_id [required]

string

Consumer group ID used by the Kafka client.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

librdkafka_options

[object]

Optional list of advanced Kafka client configuration options, defined as key-value pairs.

name [required]

string

The name of the librdkafka configuration option to set.

value [required]

string

The value assigned to the specified librdkafka configuration option.

sasl

object

Specifies the SASL mechanism for authenticating with a Kafka cluster.

mechanism

enum

SASL mechanism used for Kafka authentication. Allowed enum values: PLAIN,SCRAM-SHA-256,SCRAM-SHA-512

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

topics [required]

[string]

A list of Kafka topic names to subscribe to. The source ingests messages from each topic specified.

type [required]

enum

The source type. The value should always be kafka. Allowed enum values: kafka

default: kafka
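
As a sketch, a kafka source subscribing to one topic with SASL authentication might look like the following; the group ID, topic, and librdkafka option are illustrative:

{
  "id": "kafka-source",
  "type": "kafka",
  "group_id": "observability-pipelines-consumer",
  "topics": ["app-logs"],
  "sasl": { "mechanism": "SCRAM-SHA-256" },
  "librdkafka_options": [
    { "name": "fetch.message.max.bytes", "value": "1048576" }
  ]
}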

Option 10

object

The logstash source ingests logs from a Logstash forwarder.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be logstash. Allowed enum values: logstash

default: logstash

Option 11

object

The rsyslog source listens for logs over TCP or UDP from an rsyslog server using the syslog protocol.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

mode [required]

enum

Protocol used by the syslog source to receive messages. Allowed enum values: tcp,udp

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be rsyslog. Allowed enum values: rsyslog

default: rsyslog

Option 12

object

The socket source ingests logs over TCP or UDP.

Supported pipeline types: logs

framing [required]

 <oneOf>

Framing method configuration for the socket source.

Option 1

object

Byte frames which are delimited by a newline character.

method [required]

enum

Byte frames which are delimited by a newline character. Allowed enum values: newline_delimited

Option 2

object

Byte frames are passed through as-is according to the underlying I/O boundaries (for example, split between messages or stream segments).

method [required]

enum

Byte frames are passed through as-is according to the underlying I/O boundaries (for example, split between messages or stream segments). Allowed enum values: bytes

Option 3

object

Byte frames which are delimited by a chosen character.

delimiter [required]

string

A single ASCII character used to delimit events.

method [required]

enum

Byte frames which are delimited by a chosen character. Allowed enum values: character_delimited

Option 4

object

Byte frames according to the octet counting format as per RFC6587.

method [required]

enum

Byte frames according to the octet counting format as per RFC6587. Allowed enum values: octet_counting

Option 5

object

Byte frames which are chunked GELF messages.

method [required]

enum

Byte frames which are chunked GELF messages. Allowed enum values: chunked_gelf

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

mode [required]

enum

Protocol used to receive logs. Allowed enum values: tcp,udp

tls

object

TLS configuration. Relevant only when mode is tcp.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be socket. Allowed enum values: socket

default: socket
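
A minimal socket source listening over TCP with newline-delimited framing could be written as follows (the id is illustrative):

{
  "id": "socket-source",
  "type": "socket",
  "mode": "tcp",
  "framing": { "method": "newline_delimited" }
}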

Option 13

object

The splunk_hec source implements the Splunk HTTP Event Collector (HEC) API.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. Always splunk_hec. Allowed enum values: splunk_hec

default: splunk_hec

Option 14

object

The splunk_tcp source receives logs from a Splunk Universal Forwarder over TCP. TLS is supported for secure transmission.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. Always splunk_tcp. Allowed enum values: splunk_tcp

default: splunk_tcp

Option 15

object

The sumo_logic source receives logs from Sumo Logic collectors.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

type [required]

enum

The source type. The value should always be sumo_logic. Allowed enum values: sumo_logic

default: sumo_logic

Option 16

object

The syslog_ng source listens for logs over TCP or UDP from a syslog-ng server using the syslog protocol.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

mode [required]

enum

Protocol used by the syslog source to receive messages. Allowed enum values: tcp,udp

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be syslog_ng. Allowed enum values: syslog_ng

default: syslog_ng

Option 17

object

The opentelemetry source receives telemetry data using the OpenTelemetry Protocol (OTLP) over gRPC and HTTP.

Supported pipeline types: logs

grpc_address_key

string

Environment variable name containing the gRPC server address for receiving OTLP data. Must be a valid environment variable name (alphanumeric characters and underscores only).

http_address_key

string

Environment variable name containing the HTTP server address for receiving OTLP data. Must be a valid environment variable name (alphanumeric characters and underscores only).

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be opentelemetry. Allowed enum values: opentelemetry

default: opentelemetry

name [required]

string

Name of the pipeline.

id [required]

string

Unique identifier for the pipeline.

type [required]

string

The resource type identifier. For pipeline resources, this should always be set to pipelines.

default: pipelines

meta

object

Metadata about the response.

totalCount

int64

The total number of pipelines.

{
  "data": [
    {
      "attributes": {
        "config": {
          "destinations": [
            {
              "auth_strategy": "basic",
              "compression": {
                "algorithm": "gzip"
              },
              "encoding": "json",
              "id": "http-client-destination",
              "inputs": [
                "filter-processor"
              ],
              "tls": {
                "ca_file": "string",
                "crt_file": "/path/to/cert.crt",
                "key_file": "string"
              },
              "type": "http_client"
            }
          ],
          "pipeline_type": "logs",
          "processor_groups": [
            {
              "display_name": "my component",
              "enabled": true,
              "id": "grouped-processors",
              "include": "service:my-service",
              "inputs": [
                "datadog-agent-source"
              ],
              "processors": [
                []
              ]
            }
          ],
          "processors": [
            {
              "display_name": "my component",
              "enabled": true,
              "id": "grouped-processors",
              "include": "service:my-service",
              "inputs": [
                "datadog-agent-source"
              ],
              "processors": [
                []
              ]
            }
          ],
          "sources": [
            {
              "id": "datadog-agent-source",
              "tls": {
                "ca_file": "string",
                "crt_file": "/path/to/cert.crt",
                "key_file": "string"
              },
              "type": "datadog_agent"
            }
          ]
        },
        "name": "Main Observability Pipeline"
      },
      "id": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
      "type": "pipelines"
    }
  ],
  "meta": {
    "totalCount": 42
  }
}

Bad Request

API error response.

Expand All

Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Not Authorized

API error response.

Expand All

Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Too many requests

API error response.

Expand All

Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Code Examples

# Curl command
curl -X GET "https://api.datadoghq.com/api/v2/obs-pipelines/pipelines" \
  -H "Accept: application/json" \
  -H "DD-API-KEY: ${DD_API_KEY}" \
  -H "DD-APPLICATION-KEY: ${DD_APP_KEY}"
"""
List pipelines returns "OK" response
"""

from datadog_api_client import ApiClient, Configuration
from datadog_api_client.v2.api.observability_pipelines_api import ObservabilityPipelinesApi

configuration = Configuration()
configuration.unstable_operations["list_pipelines"] = True
with ApiClient(configuration) as api_client:
    api_instance = ObservabilityPipelinesApi(api_client)
    response = api_instance.list_pipelines()

    print(response)

Instructions

First install the library and its dependencies, then save the example to example.py and run the following commands:

    
DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" python3 "example.py"
# List pipelines returns "OK" response

require "datadog_api_client"
DatadogAPIClient.configure do |config|
  config.unstable_operations["v2.list_pipelines".to_sym] = true
end
api_instance = DatadogAPIClient::V2::ObservabilityPipelinesAPI.new
p api_instance.list_pipelines()

Instructions

First install the library and its dependencies, then save the example to example.rb and run the following commands:

    
DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" ruby "example.rb"
// List pipelines returns "OK" response

package main

import (
	"context"
	"encoding/json"
	"fmt"
	"os"

	"github.com/DataDog/datadog-api-client-go/v2/api/datadog"
	"github.com/DataDog/datadog-api-client-go/v2/api/datadogV2"
)

func main() {
	ctx := datadog.NewDefaultContext(context.Background())
	configuration := datadog.NewConfiguration()
	configuration.SetUnstableOperationEnabled("v2.ListPipelines", true)
	apiClient := datadog.NewAPIClient(configuration)
	api := datadogV2.NewObservabilityPipelinesApi(apiClient)
	resp, r, err := api.ListPipelines(ctx, *datadogV2.NewListPipelinesOptionalParameters())

	if err != nil {
		fmt.Fprintf(os.Stderr, "Error when calling `ObservabilityPipelinesApi.ListPipelines`: %v\n", err)
		fmt.Fprintf(os.Stderr, "Full HTTP response: %v\n", r)
	}

	responseContent, _ := json.MarshalIndent(resp, "", "  ")
	fmt.Fprintf(os.Stdout, "Response from `ObservabilityPipelinesApi.ListPipelines`:\n%s\n", responseContent)
}

Instructions

First install the library and its dependencies, then save the example to main.go and run the following command:

DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" go run "main.go"
// List pipelines returns "OK" response

import com.datadog.api.client.ApiClient;
import com.datadog.api.client.ApiException;
import com.datadog.api.client.v2.api.ObservabilityPipelinesApi;
import com.datadog.api.client.v2.model.ListPipelinesResponse;

public class Example {
  public static void main(String[] args) {
    ApiClient defaultClient = ApiClient.getDefaultApiClient();
    defaultClient.setUnstableOperationEnabled("v2.listPipelines", true);
    ObservabilityPipelinesApi apiInstance = new ObservabilityPipelinesApi(defaultClient);

    try {
      ListPipelinesResponse result = apiInstance.listPipelines();
      System.out.println(result);
    } catch (ApiException e) {
      System.err.println("Exception when calling ObservabilityPipelinesApi#listPipelines");
      System.err.println("Status code: " + e.getCode());
      System.err.println("Reason: " + e.getResponseBody());
      System.err.println("Response headers: " + e.getResponseHeaders());
      e.printStackTrace();
    }
  }
}

Instructions

First install the library and its dependencies, then save the example to Example.java and run the following command:

DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" java "Example.java"
// List pipelines returns "OK" response
use datadog_api_client::datadog;
use datadog_api_client::datadogV2::api_observability_pipelines::ListPipelinesOptionalParams;
use datadog_api_client::datadogV2::api_observability_pipelines::ObservabilityPipelinesAPI;

#[tokio::main]
async fn main() {
    let mut configuration = datadog::Configuration::new();
    configuration.set_unstable_operation_enabled("v2.ListPipelines", true);
    let api = ObservabilityPipelinesAPI::with_config(configuration);
    let resp = api
        .list_pipelines(ListPipelinesOptionalParams::default())
        .await;
    if let Ok(value) = resp {
        println!("{:#?}", value);
    } else {
        println!("{:#?}", resp.unwrap_err());
    }
}

Instructions

First install the library and its dependencies, then save the example to src/main.rs and run the following command:

DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" cargo run
/**
 * List pipelines returns "OK" response
 */

import { client, v2 } from "@datadog/datadog-api-client";

const configuration = client.createConfiguration();
configuration.unstableOperations["v2.listPipelines"] = true;
const apiInstance = new v2.ObservabilityPipelinesApi(configuration);

apiInstance
  .listPipelines()
  .then((data: v2.ListPipelinesResponse) => {
    console.log(
      "API called successfully. Returned data: " + JSON.stringify(data)
    );
  })
  .catch((error: any) => console.error(error));

Instructions

First install the library and its dependencies, then save the example to example.ts and run the following command:

DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" tsc "example.ts"

Note: This endpoint is in Preview. Fill out this form to request access.

POST https://api.ap1.datadoghq.com/api/v2/obs-pipelines/pipelines
     https://api.ap2.datadoghq.com/api/v2/obs-pipelines/pipelines
     https://api.datadoghq.eu/api/v2/obs-pipelines/pipelines
     https://api.ddog-gov.com/api/v2/obs-pipelines/pipelines
     https://api.datadoghq.com/api/v2/obs-pipelines/pipelines
     https://api.us3.datadoghq.com/api/v2/obs-pipelines/pipelines
     https://api.us5.datadoghq.com/api/v2/obs-pipelines/pipelines

Overview

Create a new pipeline. This endpoint requires the observability_pipelines_deploy permission.

Request

Body Data (required)
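
To make the schema below concrete, here is a minimal sketch of a request body assembled only from fields documented on this page. The component IDs, the filter queries, and the single-source/single-destination layout are illustrative placeholders, not a canonical configuration.

{
  "data": {
    "type": "pipelines",
    "attributes": {
      "name": "Main Observability Pipeline",
      "config": {
        "sources": [
          {
            "id": "datadog-agent-source",
            "type": "datadog_agent"
          }
        ],
        "processor_groups": [
          {
            "id": "grouped-processors",
            "enabled": true,
            "include": "service:my-service",
            "inputs": ["datadog-agent-source"],
            "processors": [
              {
                "id": "filter-processor",
                "type": "filter",
                "enabled": true,
                "include": "status:error"
              }
            ]
          }
        ],
        "destinations": [
          {
            "id": "datadog-logs-destination",
            "type": "datadog_logs",
            "inputs": ["grouped-processors"]
          }
        ]
      }
    }
  }
}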


Field

Type

Description

data [required]

object

Contains the pipeline configuration.

attributes [required]

object

Defines the pipeline’s name and its components (sources, processors, and destinations).

config [required]

object

Specifies the pipeline's configuration, including its sources, processors, and destinations.

destinations [required]

[ <oneOf>]

A list of destination components where processed logs are sent.

Option 1

object

The http_client destination sends data to an HTTP endpoint.

Supported pipeline types: logs, metrics

auth_strategy

enum

HTTP authentication strategy. Allowed enum values: none,basic,bearer

compression

object

Compression configuration for HTTP requests.

algorithm [required]

enum

Compression algorithm. Allowed enum values: gzip

encoding [required]

enum

Encoding format for log events. Allowed enum values: json

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. The value should always be http_client. Allowed enum values: http_client

default: http_client
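
As an illustrative sketch using only the fields listed above (the ID, input reference, and auth choice are hypothetical), an http_client entry in config.destinations could look like:

{
  "id": "http-client-destination",
  "type": "http_client",
  "inputs": ["grouped-processors"],
  "auth_strategy": "bearer",
  "encoding": "json",
  "compression": {
    "algorithm": "gzip"
  }
}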

Option 2

object

The amazon_opensearch destination writes logs to Amazon OpenSearch.

Supported pipeline types: logs

auth [required]

object

Authentication settings for the Amazon OpenSearch destination. The strategy field determines whether basic or AWS-based authentication is used.

assume_role

string

The ARN of the role to assume (used with aws strategy).

aws_region

string

AWS region.

external_id

string

External ID for the assumed role (used with aws strategy).

session_name

string

Session name for the assumed role (used with aws strategy).

strategy [required]

enum

The authentication strategy to use. Allowed enum values: basic,aws

bulk_index

string

The index to write logs to.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be amazon_opensearch. Allowed enum values: amazon_opensearch

default: amazon_opensearch
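
A sketch of an amazon_opensearch destination using the aws authentication strategy; the role ARN, region, external ID, and index name are hypothetical values:

{
  "id": "opensearch-destination",
  "type": "amazon_opensearch",
  "inputs": ["grouped-processors"],
  "bulk_index": "observability-logs",
  "auth": {
    "strategy": "aws",
    "aws_region": "us-east-1",
    "assume_role": "arn:aws:iam::123456789012:role/opensearch-writer",
    "external_id": "my-external-id",
    "session_name": "pipeline-session"
  }
}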

Option 3

object

The amazon_s3 destination sends your logs in Datadog-rehydratable format to an Amazon S3 bucket for archiving.

Supported pipeline types: logs

auth

object

AWS authentication credentials used for accessing AWS services such as S3. If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).

assume_role

string

The Amazon Resource Name (ARN) of the role to assume.

external_id

string

A unique identifier for cross-account role assumption.

session_name

string

A session identifier used for logging and tracing the assumed role session.

bucket [required]

string

S3 bucket name.

id [required]

string

Unique identifier for the destination component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

key_prefix

string

Optional prefix for object keys.

region [required]

string

AWS region of the S3 bucket.

storage_class [required]

enum

S3 storage class. Allowed enum values: STANDARD,REDUCED_REDUNDANCY,INTELLIGENT_TIERING,STANDARD_IA,EXPRESS_ONEZONE,ONEZONE_IA,GLACIER,GLACIER_IR,DEEP_ARCHIVE

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. Always amazon_s3. Allowed enum values: amazon_s3

default: amazon_s3
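
A sketch of an amazon_s3 archive destination, omitting the optional auth block so the system's default AWS credentials are used; the bucket name, region, and prefix are placeholders:

{
  "id": "s3-archive-destination",
  "type": "amazon_s3",
  "inputs": ["grouped-processors"],
  "bucket": "example-log-archive",
  "region": "us-east-1",
  "key_prefix": "observability-pipelines/",
  "storage_class": "STANDARD"
}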

Option 4

object

The amazon_security_lake destination sends your logs to Amazon Security Lake.

Supported pipeline types: logs

auth

object

AWS authentication credentials used for accessing AWS services such as S3. If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).

assume_role

string

The Amazon Resource Name (ARN) of the role to assume.

external_id

string

A unique identifier for cross-account role assumption.

session_name

string

A session identifier used for logging and tracing the assumed role session.

bucket [required]

string

Name of the Amazon S3 bucket in Security Lake (3-63 characters).

custom_source_name [required]

string

Custom source name for the logs in Security Lake.

id [required]

string

Unique identifier for the destination component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

region [required]

string

AWS region of the S3 bucket.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. Always amazon_security_lake. Allowed enum values: amazon_security_lake

default: amazon_security_lake

Option 5

object

The azure_storage destination forwards logs to an Azure Blob Storage container.

Supported pipeline types: logs

blob_prefix

string

Optional prefix for blobs written to the container.

container_name [required]

string

The name of the Azure Blob Storage container to store logs in.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be azure_storage. Allowed enum values: azure_storage

default: azure_storage

Option 6

object

The cloud_prem destination sends logs to Datadog CloudPrem.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be cloud_prem. Allowed enum values: cloud_prem

default: cloud_prem

Option 7

object

The crowdstrike_next_gen_siem destination forwards logs to CrowdStrike Next Gen SIEM.

Supported pipeline types: logs

compression

object

Compression configuration for log events.

algorithm [required]

enum

Compression algorithm for log events. Allowed enum values: gzip,zlib

level

int64

Compression level.

encoding [required]

enum

Encoding format for log events. Allowed enum values: json,raw_message

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. The value should always be crowdstrike_next_gen_siem. Allowed enum values: crowdstrike_next_gen_siem

default: crowdstrike_next_gen_siem

Option 8

object

The datadog_logs destination forwards logs to Datadog Log Management.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

routes

[object]

A list of routing rules that forward matching logs to Datadog using dedicated API keys.

api_key_key

string

Name of the environment variable or secret that stores the Datadog API key used by this route.

include

string

A Datadog search query that determines which logs are forwarded using this route.

route_id

string

Unique identifier for this route within the destination.

site

string

Datadog site where matching logs are sent (for example, us1).

type [required]

enum

The destination type. The value should always be datadog_logs. Allowed enum values: datadog_logs

default: datadog_logs
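
A sketch of a datadog_logs destination with one optional route; the route ID, query, site, and the SECURITY_TEAM_API_KEY environment variable name are hypothetical:

{
  "id": "datadog-logs-destination",
  "type": "datadog_logs",
  "inputs": ["grouped-processors"],
  "routes": [
    {
      "route_id": "security-route",
      "include": "team:security",
      "api_key_key": "SECURITY_TEAM_API_KEY",
      "site": "us1"
    }
  ]
}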

Option 9

object

The elasticsearch destination writes logs to an Elasticsearch cluster.

Supported pipeline types: logs

api_version

enum

The Elasticsearch API version to use. Set to auto to auto-detect. Allowed enum values: auto,v6,v7,v8

bulk_index

string

The index to write logs to in Elasticsearch.

data_stream

object

Configuration options for writing to Elasticsearch Data Streams instead of a fixed index.

dataset

string

The data stream dataset for your logs. This groups logs by their source or application.

dtype

string

The data stream type for your logs. This determines how logs are categorized within the data stream.

namespace

string

The data stream namespace for your logs. This separates logs into different environments or domains.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be elasticsearch. Allowed enum values: elasticsearch

default: elasticsearch

Option 10

object

The google_chronicle destination sends logs to Google Chronicle.

Supported pipeline types: logs

auth

object

GCP credentials used to authenticate with Google Cloud Storage.

credentials_file [required]

string

Path to the GCP service account key file.

customer_id [required]

string

The Google Chronicle customer ID.

encoding

enum

The encoding format for the logs sent to Chronicle. Allowed enum values: json,raw_message

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

log_type

string

The log type metadata associated with the Chronicle destination.

type [required]

enum

The destination type. The value should always be google_chronicle. Allowed enum values: google_chronicle

default: google_chronicle

Option 11

object

The google_cloud_storage destination stores logs in a Google Cloud Storage (GCS) bucket. It requires a bucket name, GCP authentication, and metadata fields.

Supported pipeline types: logs

acl

enum

Access control list setting for objects written to the bucket. Allowed enum values: private,project-private,public-read,authenticated-read,bucket-owner-read,bucket-owner-full-control

auth

object

GCP credentials used to authenticate with Google Cloud Storage.

credentials_file [required]

string

Path to the GCP service account key file.

bucket [required]

string

Name of the GCS bucket.

id [required]

string

Unique identifier for the destination component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

key_prefix

string

Optional prefix for object keys within the GCS bucket.

metadata

[object]

Custom metadata to attach to each object uploaded to the GCS bucket.

name [required]

string

The metadata key.

value [required]

string

The metadata value.

storage_class [required]

enum

Storage class used for objects stored in GCS. Allowed enum values: STANDARD,NEARLINE,COLDLINE,ARCHIVE

type [required]

enum

The destination type. Always google_cloud_storage. Allowed enum values: google_cloud_storage

default: google_cloud_storage

Option 12

object

The google_pubsub destination publishes logs to a Google Cloud Pub/Sub topic.

Supported pipeline types: logs

auth

object

GCP credentials used to authenticate with Google Cloud Storage.

credentials_file [required]

string

Path to the GCP service account key file.

encoding [required]

enum

Encoding format for log events. Allowed enum values: json,raw_message

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

project [required]

string

The GCP project ID that owns the Pub/Sub topic.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

topic [required]

string

The Pub/Sub topic name to publish logs to.

type [required]

enum

The destination type. The value should always be google_pubsub. Allowed enum values: google_pubsub

default: google_pubsub

Option 13

object

The kafka destination sends logs to Apache Kafka topics.

Supported pipeline types: logs

compression

enum

Compression codec for Kafka messages. Allowed enum values: none,gzip,snappy,lz4,zstd

encoding [required]

enum

Encoding format for log events. Allowed enum values: json,raw_message

headers_key

string

The field name to use for Kafka message headers.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

key_field

string

The field name to use as the Kafka message key.

librdkafka_options

[object]

Optional list of advanced Kafka producer configuration options, defined as key-value pairs.

name [required]

string

The name of the librdkafka configuration option to set.

value [required]

string

The value assigned to the specified librdkafka configuration option.

message_timeout_ms

int64

Maximum time in milliseconds to wait for message delivery confirmation.

rate_limit_duration_secs

int64

Duration in seconds for the rate limit window.

rate_limit_num

int64

Maximum number of messages allowed per rate limit duration.

sasl

object

Specifies the SASL mechanism for authenticating with a Kafka cluster.

mechanism

enum

SASL mechanism used for Kafka authentication. Allowed enum values: PLAIN,SCRAM-SHA-256,SCRAM-SHA-512

socket_timeout_ms

int64

Socket timeout in milliseconds for network requests.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

topic [required]

string

The Kafka topic name to publish logs to.

type [required]

enum

The destination type. The value should always be kafka. Allowed enum values: kafka

default: kafka
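
A sketch of a kafka destination; the topic, key field, SASL mechanism, and the client.id librdkafka option value are illustrative assumptions:

{
  "id": "kafka-destination",
  "type": "kafka",
  "inputs": ["grouped-processors"],
  "topic": "observability-logs",
  "encoding": "json",
  "compression": "zstd",
  "key_field": "service",
  "sasl": {
    "mechanism": "SCRAM-SHA-512"
  },
  "librdkafka_options": [
    {
      "name": "client.id",
      "value": "observability-pipelines"
    }
  ]
}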

Option 14

object

The microsoft_sentinel destination forwards logs to Microsoft Sentinel.

Supported pipeline types: logs

client_id [required]

string

Azure AD client ID used for authentication.

dcr_immutable_id [required]

string

The immutable ID of the Data Collection Rule (DCR).

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

table [required]

string

The name of the Log Analytics table where logs are sent.

tenant_id [required]

string

Azure AD tenant ID.

type [required]

enum

The destination type. The value should always be microsoft_sentinel. Allowed enum values: microsoft_sentinel

default: microsoft_sentinel

Option 15

object

The new_relic destination sends logs to the New Relic platform.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

region [required]

enum

The New Relic region. Allowed enum values: us,eu

type [required]

enum

The destination type. The value should always be new_relic. Allowed enum values: new_relic

default: new_relic

Option 16

object

The opensearch destination writes logs to an OpenSearch cluster.

Supported pipeline types: logs

bulk_index

string

The index to write logs to.

data_stream

object

Configuration options for writing to OpenSearch Data Streams instead of a fixed index.

dataset

string

The data stream dataset for your logs. This groups logs by their source or application.

dtype

string

The data stream type for your logs. This determines how logs are categorized within the data stream.

namespace

string

The data stream namespace for your logs. This separates logs into different environments or domains.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be opensearch. Allowed enum values: opensearch

default: opensearch

Option 17

object

The rsyslog destination forwards logs to an external rsyslog server over TCP or UDP using the syslog protocol.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

keepalive

int64

Optional socket keepalive duration in milliseconds.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. The value should always be rsyslog. Allowed enum values: rsyslog

default: rsyslog

Option 18

object

The sentinel_one destination sends logs to SentinelOne.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

region [required]

enum

The SentinelOne region to send logs to. Allowed enum values: us,eu,ca,data_set_us

type [required]

enum

The destination type. The value should always be sentinel_one. Allowed enum values: sentinel_one

default: sentinel_one

Option 19

object

The socket destination sends logs over TCP or UDP to a remote server.

Supported pipeline types: logs

encoding [required]

enum

Encoding format for log events. Allowed enum values: json,raw_message

framing [required]

 <oneOf>

Framing method configuration.

Option 1

object

Each log event is delimited by a newline character.

method [required]

enum

The definition of ObservabilityPipelineSocketDestinationFramingNewlineDelimitedMethod object. Allowed enum values: newline_delimited

Option 2

object

Event data is not delimited at all.

method [required]

enum

The definition of ObservabilityPipelineSocketDestinationFramingBytesMethod object. Allowed enum values: bytes

Option 3

object

Each log event is separated using the specified delimiter character.

delimiter [required]

string

A single ASCII character used as a delimiter.

method [required]

enum

The definition of ObservabilityPipelineSocketDestinationFramingCharacterDelimitedMethod object. Allowed enum values: character_delimited

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

mode [required]

enum

Protocol used to send logs. Allowed enum values: tcp,udp

tls

object

TLS configuration. Relevant only when mode is tcp.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. The value should always be socket. Allowed enum values: socket

default: socket
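
A sketch of a socket destination sending newline-delimited JSON over TCP; the component ID and input reference are placeholders:

{
  "id": "socket-destination",
  "type": "socket",
  "inputs": ["grouped-processors"],
  "mode": "tcp",
  "encoding": "json",
  "framing": {
    "method": "newline_delimited"
  }
}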

Option 20

object

The splunk_hec destination forwards logs to Splunk using the HTTP Event Collector (HEC).

Supported pipeline types: logs

auto_extract_timestamp

boolean

If true, Splunk tries to extract timestamps from incoming log events. If false, Splunk assigns the time the event was received.

encoding

enum

Encoding format for log events. Allowed enum values: json,raw_message

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

index

string

Optional name of the Splunk index where logs are written.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

sourcetype

string

The Splunk sourcetype to assign to log events.

type [required]

enum

The destination type. Always splunk_hec. Allowed enum values: splunk_hec

default: splunk_hec

Option 21

object

The sumo_logic destination forwards logs to Sumo Logic.

Supported pipeline types: logs

encoding

enum

The output encoding format. Allowed enum values: json,raw_message,logfmt

header_custom_fields

[object]

A list of custom headers to include in the request to Sumo Logic.

name [required]

string

The header field name.

value [required]

string

The header field value.

header_host_name

string

Optional override for the host name header.

header_source_category

string

Optional override for the source category header.

header_source_name

string

Optional override for the source name header.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be sumo_logic. Allowed enum values: sumo_logic

default: sumo_logic

Option 22

object

The syslog_ng destination forwards logs to an external syslog-ng server over TCP or UDP using the syslog protocol.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

keepalive

int64

Optional socket keepalive duration in milliseconds.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. The value should always be syslog_ng. Allowed enum values: syslog_ng

default: syslog_ng

Option 23

object

The datadog_metrics destination forwards metrics to Datadog.

Supported pipeline types: metrics

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be datadog_metrics. Allowed enum values: datadog_metrics

default: datadog_metrics

pipeline_type

enum

The type of data being ingested. Defaults to logs if not specified. Allowed enum values: logs,metrics

default: logs

processor_groups

[object]

A list of processor groups that transform or enrich log data.

display_name

string

The display name for a component.

enabled [required]

boolean

Whether this processor group is enabled.

id [required]

string

The unique identifier for the processor group.

include [required]

string

Conditional expression for when this processor group should execute.

inputs [required]

[string]

A list of IDs for components whose output is used as the input for this processor group.

processors [required]

[ <oneOf>]

Processors applied sequentially within this group. Events flow through each processor in order.

Option 1

object

The filter processor allows conditional processing of logs/metrics based on a Datadog search query. Logs/metrics that match the include query are passed through; others are discarded.

Supported pipeline types: logs, metrics

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs/metrics should pass through the filter. Logs/metrics that match this query continue to downstream components; others are dropped.

type [required]

enum

The processor type. The value should always be filter. Allowed enum values: filter

default: filter
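
A sketch of a filter processor as it would appear inside a processor group's processors array; the search query is a hypothetical example:

{
  "id": "filter-errors",
  "type": "filter",
  "enabled": true,
  "include": "status:error service:my-service"
}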

Option 2

object

The add_env_vars processor adds environment variable values to log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used to reference this processor in the pipeline.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be add_env_vars. Allowed enum values: add_env_vars

default: add_env_vars

variables [required]

[object]

A list of environment variable mappings to apply to log fields.

field [required]

string

The target field in the log event.

name [required]

string

The name of the environment variable to read.

Option 3

object

The add_fields processor adds static key-value fields to logs.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[object]

A list of static fields (key-value pairs) that is added to each log event processed by this component.

name [required]

string

The field name.

value [required]

string

The field value.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be add_fields. Allowed enum values: add_fields

default: add_fields

Option 4

object

The add_hostname processor adds the hostname to log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be add_hostname. Allowed enum values: add_hostname

default: add_hostname

Option 5

object

The custom_processor processor transforms events using Vector Remap Language (VRL) scripts with advanced filtering capabilities.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets. This field should always be set to * for the custom_processor processor.

default: *

remaps [required]

[object]

Array of VRL remap rules.

drop_on_error [required]

boolean

Whether to drop events that caused errors during processing.

enabled

boolean

Whether this remap rule is enabled.

include [required]

string

A Datadog search query used to filter events for this specific remap rule.

name [required]

string

A descriptive name for this remap rule.

source [required]

string

The VRL script source code that defines the processing logic.

type [required]

enum

The processor type. The value should always be custom_processor. Allowed enum values: custom_processor

default: custom_processor
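
A sketch of a custom_processor with a single remap rule; the rule name, filter query, and the one-line VRL script that adds a team field are illustrative only:

{
  "id": "custom-vrl-processor",
  "type": "custom_processor",
  "enabled": true,
  "include": "*",
  "remaps": [
    {
      "name": "add-team-tag",
      "enabled": true,
      "include": "service:web-store",
      "drop_on_error": false,
      "source": ".team = \"platform\""
    }
  ]
}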

Option 6

object

The datadog_tags processor includes or excludes specific Datadog tags in your logs.

Supported pipeline types: logs

action [required]

enum

The action to take on tags with matching keys. Allowed enum values: include,exclude

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

keys [required]

[string]

A list of tag keys.

mode [required]

enum

The processing mode. Allowed enum values: filter

type [required]

enum

The processor type. The value should always be datadog_tags. Allowed enum values: datadog_tags

default: datadog_tags

Option 7

object

The dedupe processor removes duplicate fields in log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[string]

A list of log field paths to check for duplicates.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

mode [required]

enum

The deduplication mode to apply to the fields. Allowed enum values: match,ignore

type [required]

enum

The processor type. The value should always be dedupe. Allowed enum values: dedupe

default: dedupe

Option 8

object

The enrichment_table processor enriches logs using a static CSV file, GeoIP database, or reference table. Exactly one of file, geoip, or reference_table must be configured.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

file

object

Defines a static enrichment table loaded from a CSV file.

encoding [required]

object

File encoding format.

delimiter [required]

string

The encoding delimiter.

includes_headers [required]

boolean

The encoding includes_headers.

type [required]

enum

Specifies the encoding format (e.g., CSV) used for enrichment tables. Allowed enum values: csv

key [required]

[object]

Key fields used to look up enrichment values.

column [required]

string

The items column.

comparison [required]

enum

Defines how to compare key fields for enrichment table lookups. Allowed enum values: equals

field [required]

string

The items field.

path [required]

string

Path to the CSV file.

schema [required]

[object]

Schema defining column names and their types.

column [required]

string

The items column.

type [required]

enum

Declares allowed data types for enrichment table columns. Allowed enum values: string,boolean,integer,float,date,timestamp

geoip

object

Uses a GeoIP database to enrich logs based on an IP field.

key_field [required]

string

Path to the IP field in the log.

locale [required]

string

Locale used to resolve geographical names.

path [required]

string

Path to the GeoIP database file.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

reference_table

object

Uses a Datadog reference table to enrich logs.

columns

[string]

List of column names to include from the reference table. If not provided, all columns are included.

key_field [required]

string

Path to the field in the log event to match against the reference table.

table_id [required]

string

The unique identifier of the reference table.

target [required]

string

Path where enrichment results should be stored in the log.

type [required]

enum

The processor type. The value should always be enrichment_table. Allowed enum values: enrichment_table

default: enrichment_table

Option 9

object

The generate_datadog_metrics processor creates custom metrics from logs and sends them to Datadog. Metrics can be counters, gauges, or distributions and optionally grouped by log fields.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline.

include

string

A Datadog search query used to determine which logs this processor targets.

metrics

[object]

Configuration for generating individual metrics.

group_by

[string]

Optional fields used to group the metric series.

include [required]

string

Datadog filter query to match logs for metric generation.

metric_type [required]

enum

Type of metric to create. Allowed enum values: count,gauge,distribution

name [required]

string

Name of the custom metric to be created.

value [required]

 <oneOf>

Specifies how the value of the generated metric is computed.

Option 1

object

Strategy that increments a generated metric by one for each matching event.

strategy [required]

enum

Increments the metric by 1 for each matching event. Allowed enum values: increment_by_one

Option 2

object

Strategy that increments a generated metric based on the value of a log field.

field [required]

string

Name of the log field containing the numeric value to increment the metric by.

strategy [required]

enum

Uses a numeric field in the log event as the metric increment. Allowed enum values: increment_by_field

type [required]

enum

The processor type. Always generate_datadog_metrics. Allowed enum values: generate_datadog_metrics

default: generate_datadog_metrics
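
A sketch of a generate_datadog_metrics processor that counts error logs per service using the increment_by_one strategy; the metric name and queries are hypothetical:

{
  "id": "error-count-metric",
  "type": "generate_datadog_metrics",
  "enabled": true,
  "include": "*",
  "metrics": [
    {
      "name": "pipeline.error_logs.count",
      "include": "status:error",
      "metric_type": "count",
      "group_by": ["service"],
      "value": {
        "strategy": "increment_by_one"
      }
    }
  ]
}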

Option 10

object

The ocsf_mapper processor transforms logs into the OCSF schema using a predefined mapping configuration.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

mappings [required]

[object]

A list of mapping rules to convert events to the OCSF format.

include [required]

string

A Datadog search query used to select the logs that this mapping should apply to.

mapping [required]

 <oneOf>

Defines a single mapping rule for transforming logs into the OCSF schema.

Option 1

enum

Predefined library mappings for common log formats. Allowed enum values: CloudTrail Account Change,GCP Cloud Audit CreateBucket,GCP Cloud Audit CreateSink,GCP Cloud Audit SetIamPolicy,GCP Cloud Audit UpdateSink,Github Audit Log API Activity,Google Workspace Admin Audit addPrivilege,Microsoft 365 Defender Incident,Microsoft 365 Defender UserLoggedIn,Okta System Log Authentication,Palo Alto Networks Firewall Traffic

type [required]

enum

The processor type. The value should always be ocsf_mapper. Allowed enum values: ocsf_mapper

default: ocsf_mapper

Option 11

object

The parse_grok processor extracts structured fields from unstructured log messages using Grok patterns.

Supported pipeline types: logs

disable_library_rules

boolean

If set to true, disables the default Grok rules provided by Datadog.

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

A unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

rules [required]

[object]

The list of Grok parsing rules. If multiple matching rules are provided, they are evaluated in order. The first successful match is applied.

match_rules [required]

[object]

A list of Grok parsing rules that define how to extract fields from the source field. Each rule must contain a name and a valid Grok pattern.

name [required]

string

The name of the rule.

rule [required]

string

The definition of the Grok rule.

source [required]

string

The name of the field in the log event to apply the Grok rules to.

support_rules

[object]

A list of Grok helper rules that can be referenced by the parsing rules.

name [required]

string

The name of the Grok helper rule.

rule [required]

string

The definition of the Grok helper rule.

type [required]

enum

The processor type. The value should always be parse_grok. Allowed enum values: parse_grok

default: parse_grok
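
A sketch of a parse_grok processor with one parsing rule applied to the message field; the rule name and the simplified Grok pattern are illustrative assumptions, not a tested pattern:

{
  "id": "parse-access-logs",
  "type": "parse_grok",
  "enabled": true,
  "include": "source:nginx",
  "disable_library_rules": false,
  "rules": [
    {
      "source": "message",
      "match_rules": [
        {
          "name": "access_line",
          "rule": "%{ipOrHost:network.client.ip} %{word:http.method} %{notSpace:http.url_details.path}"
        }
      ],
      "support_rules": []
    }
  ]
}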

Option 12

object

The parse_json processor extracts JSON from a specified field and flattens it into the event. This is useful when logs contain embedded JSON as a string.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

field [required]

string

The name of the log field that contains a JSON string.

id [required]

string

A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be parse_json. Allowed enum values: parse_json

default: parse_json

Option 13

object

The parse_xml processor parses XML from a specified field and extracts it into the event.

Supported pipeline types: logs

always_use_text_key

boolean

Whether to always use a text key for element content.

attr_prefix

string

The prefix to use for XML attributes in the parsed output.

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

field [required]

string

The name of the log field that contains an XML string.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

include_attr

boolean

Whether to include XML attributes in the parsed output.

parse_bool

boolean

Whether to parse boolean values from strings.

parse_null

boolean

Whether to parse null values.

parse_number

boolean

Whether to parse numeric values from strings.

text_key

string

The key name to use for text content within XML elements. Must be at least 1 character if specified.

type [required]

enum

The processor type. The value should always be parse_xml. Allowed enum values: parse_xml

default: parse_xml

Option 14

object

The quota processor measures logging traffic for logs that match a specified filter. When the configured daily quota is met, the processor can drop or alert.

Supported pipeline types: logs

display_name

string

The display name for a component.

drop_events

boolean

If set to true, logs that match the quota filter and are sent after the quota is exceeded are dropped. Logs that do not match the filter continue through the pipeline. Note: You can set either drop_events or overflow_action, but not both.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

ignore_when_missing_partitions

boolean

If true, the processor skips quota checks when partition fields are missing from the logs.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

limit [required]

object

The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.

enforce [required]

enum

Unit for quota enforcement in bytes for data size or events for count. Allowed enum values: bytes,events

limit [required]

int64

The limit for quota enforcement.

name [required]

string

Name of the quota.

overflow_action

enum

The action to take when the quota or bucket limit is exceeded. Options:

  • drop: Drop the event.
  • no_action: Let the event pass through.
  • overflow_routing: Route to an overflow destination.

Allowed enum values: drop,no_action,overflow_routing

overrides

[object]

A list of alternate quota rules that apply to specific sets of events, identified by matching field values. Each override can define a custom limit.

fields [required]

[object]

A list of field matchers used to apply a specific override. If an event matches all listed key-value pairs, the corresponding override limit is enforced.

name [required]

string

The field name.

value [required]

string

The field value.

limit [required]

object

The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.

enforce [required]

enum

Unit for quota enforcement in bytes for data size or events for count. Allowed enum values: bytes,events

limit [required]

int64

The limit for quota enforcement.

partition_fields

[string]

A list of fields used to segment log traffic for quota enforcement. Quotas are tracked independently by unique combinations of these field values.

too_many_buckets_action

enum

The action to take when the quota or bucket limit is exceeded. Options:

  • drop: Drop the event.
  • no_action: Let the event pass through.
  • overflow_routing: Route to an overflow destination.

Allowed enum values: drop,no_action,overflow_routing

type [required]

enum

The processor type. The value should always be quota. Allowed enum values: quota

default: quota
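
A sketch of a quota processor that enforces a byte-based daily limit per service, with a higher override for one service. It uses overflow_action rather than drop_events, since only one of the two may be set; the quota name, limits, and service value are hypothetical:

{
  "id": "daily-intake-quota",
  "type": "quota",
  "enabled": true,
  "include": "*",
  "name": "daily-intake",
  "limit": {
    "enforce": "bytes",
    "limit": 10000000000
  },
  "partition_fields": ["service"],
  "overflow_action": "drop",
  "overrides": [
    {
      "fields": [
        {
          "name": "service",
          "value": "payments"
        }
      ],
      "limit": {
        "enforce": "bytes",
        "limit": 50000000000
      }
    }
  ]
}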

Option 15

object

The reduce processor aggregates and merges logs based on matching keys and merge strategies.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

group_by [required]

[string]

A list of fields used to group log events for merging.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

merge_strategies [required]

[object]

List of merge strategies defining how values from grouped events should be combined.

path [required]

string

The field path in the log event.

strategy [required]

enum

The merge strategy to apply. Allowed enum values: discard,retain,sum,max,min,array,concat,concat_newline,concat_raw,shortest_array,longest_array,flat_unique

type [required]

enum

The processor type. The value should always be reduce. Allowed enum values: reduce

default: reduce

Option 16

object

The remove_fields processor deletes specified fields from logs.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[string]

A list of field names to be removed from each log event.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be remove_fields. Allowed enum values: remove_fields

default: remove_fields

Option 17

object

The rename_fields processor changes field names.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[object]

A list of rename rules specifying which fields to rename in the event, what to rename them to, and whether to preserve the original fields.

destination [required]

string

The field name to assign the renamed value to.

preserve_source [required]

boolean

Indicates whether the original field received from the source should be kept (true) or removed (false) after renaming.

source [required]

string

The original field name in the log event that should be renamed.

id [required]

string

A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be rename_fields. Allowed enum values: rename_fields

default: rename_fields

Option 18

object

The sample processor allows probabilistic sampling of logs at a fixed rate.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

group_by

[string]

Optional list of fields to group events by. Each group is sampled independently.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

percentage [required]

double

The percentage of logs to sample.

type [required]

enum

The processor type. The value should always be sample. Allowed enum values: sample

default: sample

Option 19

object

The sensitive_data_scanner processor detects and optionally redacts sensitive data in log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

rules [required]

[object]

A list of rules for identifying and acting on sensitive data patterns.

keyword_options

object

Configuration for keywords used to reinforce sensitive data pattern detection.

keywords [required]

[string]

A list of keywords to match near the sensitive pattern.

proximity [required]

int64

Maximum number of tokens between a keyword and a sensitive value match.

name [required]

string

A name identifying the rule.

on_match [required]

 <oneOf>

Defines what action to take when sensitive data is matched.

Option 1

object

Configuration for completely redacting matched sensitive data.

action [required]

enum

Action type that completely replaces the matched sensitive data with a fixed replacement string to remove all visibility. Allowed enum values: redact

options [required]

object

Configuration for fully redacting sensitive data.

replace [required]

string

The string that replaces the matched sensitive value.

Option 2

object

Configuration for hashing matched sensitive values.

action [required]

enum

Action type that replaces the matched sensitive data with a hashed representation, preserving structure while securing content. Allowed enum values: hash

options

object

The ObservabilityPipelineSensitiveDataScannerProcessorActionHash options.

Option 3

object

Configuration for partially redacting matched sensitive data.

action [required]

enum

Action type that redacts part of the sensitive data while preserving a configurable number of characters, typically used for masking purposes (e.g., show last 4 digits of a credit card). Allowed enum values: partial_redact

options [required]

object

Controls how partial redaction is applied, including character count and direction.

characters [required]

int64

The ObservabilityPipelineSensitiveDataScannerProcessorActionPartialRedactOptions characters.

direction [required]

enum

Indicates whether to redact characters from the first or last part of the matched value. Allowed enum values: first,last

pattern [required]

 <oneOf>

Pattern detection configuration for identifying sensitive data using either a custom regex or a library reference.

Option 1

object

Defines a custom regex-based pattern for identifying sensitive data in logs.

options [required]

object

Options for defining a custom regex pattern.

description

string

Human-readable description providing context about a sensitive data scanner rule

rule [required]

string

A regular expression used to detect sensitive values. Must be a valid regex.

type [required]

enum

Indicates a custom regular expression is used for matching. Allowed enum values: custom

Option 2

object

Specifies a pattern from Datadog’s sensitive data detection library to match known sensitive data types.

options [required]

object

Options for selecting a predefined library pattern and enabling keyword support.

description

string

Human-readable description providing context about a sensitive data scanner rule

id [required]

string

Identifier for a predefined pattern from the sensitive data scanner pattern library.

use_recommended_keywords

boolean

Whether to augment the pattern with recommended keywords (optional).

type [required]

enum

Indicates that a predefined library pattern is used. Allowed enum values: library

scope [required]

 <oneOf>

Determines which parts of the log the pattern-matching rule should be applied to.

Option 1

object

Includes only specific fields for sensitive data scanning.

options [required]

object

Fields to which the scope rule applies.

fields [required]

[string]

A list of log field paths to which the scope rule applies.

target [required]

enum

Applies the rule only to included fields. Allowed enum values: include

Option 2

object

Excludes specific fields from sensitive data scanning.

options [required]

object

Fields to which the scope rule applies.

fields [required]

[string]

A list of log field paths to which the scope rule applies.

target [required]

enum

Excludes specific fields from processing. Allowed enum values: exclude

Option 3

object

Applies scanning across all available fields.

target [required]

enum

Applies the rule to all fields. Allowed enum values: all

tags [required]

[string]

Tags assigned to this rule for filtering and classification.

type [required]

enum

The processor type. The value should always be sensitive_data_scanner. Allowed enum values: sensitive_data_scanner

default: sensitive_data_scanner
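
For illustration, a minimal sensitive_data_scanner processor entry with a single custom-regex rule might look like the following sketch; the component ID, rule name, tag, regex, and replacement string are placeholders:

{
  "id": "sensitive-data-scanner-processor",
  "type": "sensitive_data_scanner",
  "enabled": true,
  "include": "*",
  "rules": [
    {
      "name": "redact-email-addresses",
      "tags": ["sensitive_data:email"],
      "pattern": {
        "type": "custom",
        "options": {
          "rule": "[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\\.[a-zA-Z]{2,}"
        }
      },
      "scope": {
        "target": "all"
      },
      "on_match": {
        "action": "redact",
        "options": {
          "replace": "[REDACTED]"
        }
      }
    }
  ]
}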

Option 20

object

The split_array processor splits array fields into separate events based on configured rules.

Supported pipeline types: logs

arrays [required]

[object]

A list of array split configurations.

field [required]

string

The path to the array field to split.

include [required]

string

A Datadog search query used to determine which logs this array split operation targets.

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets. For split_array, this should typically be *.

type [required]

enum

The processor type. The value should always be split_array. Allowed enum values: split_array

default: split_array
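
For illustration, a minimal split_array processor entry might look like the following sketch; the component ID and the array field path are placeholders:

{
  "id": "split-array-processor",
  "type": "split_array",
  "enabled": true,
  "include": "*",
  "arrays": [
    {
      "field": "records",
      "include": "*"
    }
  ]
}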

Option 21

object

The throttle processor limits the number of events that pass through over a given time window.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

group_by

[string]

Optional list of fields used to group events. The threshold is applied to each group independently.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

threshold [required]

int64

The number of events allowed in a given time window. Events sent after the threshold has been reached are dropped.

type [required]

enum

The processor type. The value should always be throttle. Allowed enum values: throttle

default: throttle

window [required]

double

The time window in seconds over which the threshold applies.
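
For illustration, a minimal throttle processor entry might look like the following sketch; the component ID, grouping field, threshold, and window values are placeholders:

{
  "id": "throttle-processor",
  "type": "throttle",
  "enabled": true,
  "include": "*",
  "group_by": ["service"],
  "threshold": 1000,
  "window": 60.0
}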

Option 22

object

The metric_tags processor filters metrics based on their tags using Datadog tag key patterns.

Supported pipeline types: metrics

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query that determines which metrics the processor targets.

rules [required]

[object]

A list of rules for filtering metric tags.

action [required]

enum

The action to take on tags with matching keys. Allowed enum values: include,exclude

include [required]

string

A Datadog search query used to determine which metrics this rule targets.

keys [required]

[string]

A list of tag keys to include or exclude.

mode [required]

enum

The processing mode for tag filtering. Allowed enum values: filter

type [required]

enum

The processor type. The value should always be metric_tags. Allowed enum values: metric_tags

default: metric_tags
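
For illustration, a minimal metric_tags processor entry might look like the following sketch; the component ID and tag key are placeholders:

{
  "id": "metric-tags-processor",
  "type": "metric_tags",
  "enabled": true,
  "include": "*",
  "rules": [
    {
      "action": "exclude",
      "include": "*",
      "keys": ["pod_name"],
      "mode": "filter"
    }
  ]
}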

processors

[object]

DEPRECATED: A list of processor groups that transform or enrich log data.

Deprecated: This field is deprecated; use the processor_groups field instead.

display_name

string

The display name for a component.

enabled [required]

boolean

Whether this processor group is enabled.

id [required]

string

The unique identifier for the processor group.

include [required]

string

Conditional expression for when this processor group should execute.

inputs [required]

[string]

A list of IDs for components whose output is used as the input for this processor group.

processors [required]

[ <oneOf>]

Processors applied sequentially within this group. Events flow through each processor in order.

Option 1

object

The filter processor allows conditional processing of logs/metrics based on a Datadog search query. Logs/metrics that match the include query are passed through; others are discarded.

Supported pipeline types: logs, metrics

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs/metrics should pass through the filter. Logs/metrics that match this query continue to downstream components; others are dropped.

type [required]

enum

The processor type. The value should always be filter. Allowed enum values: filter

default: filter

Option 2

object

The add_env_vars processor adds environment variable values to log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used to reference this processor in the pipeline.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be add_env_vars. Allowed enum values: add_env_vars

default: add_env_vars

variables [required]

[object]

A list of environment variable mappings to apply to log fields.

field [required]

string

The target field in the log event.

name [required]

string

The name of the environment variable to read.
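
For illustration, a minimal add_env_vars processor entry might look like the following sketch; the component ID, target field, and environment variable name are placeholders:

{
  "id": "add-env-vars-processor",
  "type": "add_env_vars",
  "enabled": true,
  "include": "*",
  "variables": [
    {
      "field": "deployment.region",
      "name": "AWS_REGION"
    }
  ]
}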

Option 3

object

The add_fields processor adds static key-value fields to logs.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[object]

A list of static fields (key-value pairs) that is added to each log event processed by this component.

name [required]

string

The field name.

value [required]

string

The field value.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be add_fields. Allowed enum values: add_fields

default: add_fields
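
For illustration, a minimal add_fields processor entry might look like the following sketch; the component ID, field name, and value are placeholders:

{
  "id": "add-fields-processor",
  "type": "add_fields",
  "enabled": true,
  "include": "*",
  "fields": [
    {
      "name": "team",
      "value": "platform"
    }
  ]
}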

Option 4

object

The add_hostname processor adds the hostname to log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be add_hostname. Allowed enum values: add_hostname

default: add_hostname

Option 5

object

The custom_processor processor transforms events using Vector Remap Language (VRL) scripts with advanced filtering capabilities.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets. This field should always be set to * for the custom_processor processor.

default: *

remaps [required]

[object]

Array of VRL remap rules.

drop_on_error [required]

boolean

Whether to drop events that caused errors during processing.

enabled

boolean

Whether this remap rule is enabled.

include [required]

string

A Datadog search query used to filter events for this specific remap rule.

name [required]

string

A descriptive name for this remap rule.

source [required]

string

The VRL script source code that defines the processing logic.

type [required]

enum

The processor type. The value should always be custom_processor. Allowed enum values: custom_processor

default: custom_processor
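
For illustration, a minimal custom_processor entry with one VRL remap rule might look like the following sketch; the component ID, remap name, filter query, and VRL source are placeholders:

{
  "id": "custom-vrl-processor",
  "type": "custom_processor",
  "enabled": true,
  "include": "*",
  "remaps": [
    {
      "name": "normalize-service-name",
      "enabled": true,
      "include": "service:web-store",
      "drop_on_error": false,
      "source": ".service = downcase!(.service)"
    }
  ]
}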

Option 6

object

The datadog_tags processor includes or excludes specific Datadog tags in your logs.

Supported pipeline types: logs

action [required]

enum

The action to take on tags with matching keys. Allowed enum values: include,exclude

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

keys [required]

[string]

A list of tag keys.

mode [required]

enum

The processing mode. Allowed enum values: filter

type [required]

enum

The processor type. The value should always be datadog_tags. Allowed enum values: datadog_tags

default: datadog_tags
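
For illustration, a minimal datadog_tags processor entry might look like the following sketch; the component ID and tag keys are placeholders:

{
  "id": "datadog-tags-processor",
  "type": "datadog_tags",
  "enabled": true,
  "include": "*",
  "action": "include",
  "mode": "filter",
  "keys": ["env", "service", "version"]
}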

Option 7

object

The dedupe processor removes duplicate fields in log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[string]

A list of log field paths to check for duplicates.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

mode [required]

enum

The deduplication mode to apply to the fields. Allowed enum values: match,ignore

type [required]

enum

The processor type. The value should always be dedupe. Allowed enum values: dedupe

default: dedupe
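
For illustration, a minimal dedupe processor entry might look like the following sketch; the component ID and field paths are placeholders:

{
  "id": "dedupe-processor",
  "type": "dedupe",
  "enabled": true,
  "include": "*",
  "mode": "match",
  "fields": ["message", "host"]
}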

Option 8

object

The enrichment_table processor enriches logs using a static CSV file, GeoIP database, or reference table. Exactly one of file, geoip, or reference_table must be configured.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

file

object

Defines a static enrichment table loaded from a CSV file.

encoding [required]

object

File encoding format.

delimiter [required]

string

The delimiter used to separate values in the CSV file.

includes_headers [required]

boolean

Whether the CSV file includes a header row.

type [required]

enum

Specifies the encoding format (e.g., CSV) used for enrichment tables. Allowed enum values: csv

key [required]

[object]

Key fields used to look up enrichment values.

column [required]

string

The name of the column in the enrichment table used for the lookup.

comparison [required]

enum

Defines how to compare key fields for enrichment table lookups. Allowed enum values: equals

field [required]

string

The path to the log field whose value is matched against the enrichment table column.

path [required]

string

Path to the CSV file.

schema [required]

[object]

Schema defining column names and their types.

column [required]

string

The name of the column as it appears in the CSV file.

type [required]

enum

Declares allowed data types for enrichment table columns. Allowed enum values: string,boolean,integer,float,date,timestamp

geoip

object

Uses a GeoIP database to enrich logs based on an IP field.

key_field [required]

string

Path to the IP field in the log.

locale [required]

string

Locale used to resolve geographical names.

path [required]

string

Path to the GeoIP database file.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

reference_table

object

Uses a Datadog reference table to enrich logs.

columns

[string]

List of column names to include from the reference table. If not provided, all columns are included.

key_field [required]

string

Path to the field in the log event to match against the reference table.

table_id [required]

string

The unique identifier of the reference table.

target [required]

string

Path where enrichment results should be stored in the log.

type [required]

enum

The processor type. The value should always be enrichment_table. Allowed enum values: enrichment_table

default: enrichment_table
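
For illustration, a minimal enrichment_table processor entry using the GeoIP variant might look like the following sketch; the component ID, field paths, and database path are placeholders:

{
  "id": "enrichment-table-processor",
  "type": "enrichment_table",
  "enabled": true,
  "include": "*",
  "target": "network.client.geo",
  "geoip": {
    "key_field": "network.client.ip",
    "locale": "en",
    "path": "/etc/geoip/GeoLite2-City.mmdb"
  }
}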

Option 9

object

The generate_datadog_metrics processor creates custom metrics from logs and sends them to Datadog. Metrics can be counters, gauges, or distributions and optionally grouped by log fields.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline.

include

string

A Datadog search query used to determine which logs this processor targets.

metrics

[object]

Configuration for generating individual metrics.

group_by

[string]

Optional fields used to group the metric series.

include [required]

string

Datadog filter query to match logs for metric generation.

metric_type [required]

enum

Type of metric to create. Allowed enum values: count,gauge,distribution

name [required]

string

Name of the custom metric to be created.

value [required]

 <oneOf>

Specifies how the value of the generated metric is computed.

Option 1

object

Strategy that increments a generated metric by one for each matching event.

strategy [required]

enum

Increments the metric by 1 for each matching event. Allowed enum values: increment_by_one

Option 2

object

Strategy that increments a generated metric based on the value of a log field.

field [required]

string

Name of the log field containing the numeric value to increment the metric by.

strategy [required]

enum

Uses a numeric field in the log event as the metric increment. Allowed enum values: increment_by_field

type [required]

enum

The processor type. Always generate_datadog_metrics. Allowed enum values: generate_datadog_metrics

default: generate_datadog_metrics
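
For illustration, a minimal generate_datadog_metrics processor entry might look like the following sketch; the component ID, metric name, filter query, and grouping field are placeholders:

{
  "id": "generate-metrics-processor",
  "type": "generate_datadog_metrics",
  "enabled": true,
  "include": "*",
  "metrics": [
    {
      "name": "logs.error.count",
      "metric_type": "count",
      "include": "status:error",
      "group_by": ["service"],
      "value": {
        "strategy": "increment_by_one"
      }
    }
  ]
}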

Option 10

object

The ocsf_mapper processor transforms logs into the OCSF schema using a predefined mapping configuration.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

mappings [required]

[object]

A list of mapping rules to convert events to the OCSF format.

include [required]

string

A Datadog search query used to select the logs that this mapping should apply to.

mapping [required]

 <oneOf>

Defines a single mapping rule for transforming logs into the OCSF schema.

Option 1

enum

Predefined library mappings for common log formats. Allowed enum values: CloudTrail Account Change,GCP Cloud Audit CreateBucket,GCP Cloud Audit CreateSink,GCP Cloud Audit SetIamPolicy,GCP Cloud Audit UpdateSink,Github Audit Log API Activity,Google Workspace Admin Audit addPrivilege,Microsoft 365 Defender Incident,Microsoft 365 Defender UserLoggedIn,Okta System Log Authentication,Palo Alto Networks Firewall Traffic

type [required]

enum

The processor type. The value should always be ocsf_mapper. Allowed enum values: ocsf_mapper

default: ocsf_mapper
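
For illustration, a minimal ocsf_mapper processor entry using a library mapping might look like the following sketch; the component ID and filter query are placeholders:

{
  "id": "ocsf-mapper-processor",
  "type": "ocsf_mapper",
  "enabled": true,
  "include": "*",
  "mappings": [
    {
      "include": "source:cloudtrail",
      "mapping": "CloudTrail Account Change"
    }
  ]
}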

Option 11

object

The parse_grok processor extracts structured fields from unstructured log messages using Grok patterns.

Supported pipeline types: logs

disable_library_rules

boolean

If set to true, disables the default Grok rules provided by Datadog.

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

A unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

rules [required]

[object]

The list of Grok parsing rules. If multiple matching rules are provided, they are evaluated in order. The first successful match is applied.

match_rules [required]

[object]

A list of Grok parsing rules that define how to extract fields from the source field. Each rule must contain a name and a valid Grok pattern.

name [required]

string

The name of the rule.

rule [required]

string

The definition of the Grok rule.

source [required]

string

The name of the field in the log event to apply the Grok rules to.

support_rules

[object]

A list of Grok helper rules that can be referenced by the parsing rules.

name [required]

string

The name of the Grok helper rule.

rule [required]

string

The definition of the Grok helper rule.

type [required]

enum

The processor type. The value should always be parse_grok. Allowed enum values: parse_grok

default: parse_grok
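
For illustration, a minimal parse_grok processor entry might look like the following sketch; the component ID, rule name, and Grok pattern are placeholders:

{
  "id": "parse-grok-processor",
  "type": "parse_grok",
  "enabled": true,
  "include": "*",
  "disable_library_rules": false,
  "rules": [
    {
      "source": "message",
      "match_rules": [
        {
          "name": "parse_level_and_message",
          "rule": "%{word:level} %{data:msg}"
        }
      ],
      "support_rules": []
    }
  ]
}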

Option 12

object

The parse_json processor extracts JSON from a specified field and flattens it into the event. This is useful when logs contain embedded JSON as a string.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

field [required]

string

The name of the log field that contains a JSON string.

id [required]

string

A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be parse_json. Allowed enum values: parse_json

default: parse_json
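
For illustration, a minimal parse_json processor entry might look like the following sketch; the component ID and source field are placeholders:

{
  "id": "parse-json-processor",
  "type": "parse_json",
  "enabled": true,
  "include": "*",
  "field": "message"
}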

Option 13

object

The parse_xml processor parses XML from a specified field and extracts it into the event.

Supported pipeline types: logs

always_use_text_key

boolean

Whether to always use a text key for element content.

attr_prefix

string

The prefix to use for XML attributes in the parsed output.

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

field [required]

string

The name of the log field that contains an XML string.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

include_attr

boolean

Whether to include XML attributes in the parsed output.

parse_bool

boolean

Whether to parse boolean values from strings.

parse_null

boolean

Whether to parse null values.

parse_number

boolean

Whether to parse numeric values from strings.

text_key

string

The key name to use for text content within XML elements. Must be at least 1 character if specified.

type [required]

enum

The processor type. The value should always be parse_xml. Allowed enum values: parse_xml

default: parse_xml
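
For illustration, a minimal parse_xml processor entry might look like the following sketch; the component ID, source field, prefix, and text key are placeholders:

{
  "id": "parse-xml-processor",
  "type": "parse_xml",
  "enabled": true,
  "include": "*",
  "field": "message",
  "include_attr": true,
  "attr_prefix": "@",
  "text_key": "value",
  "parse_bool": true,
  "parse_number": true,
  "parse_null": true
}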

Option 14

object

The quota processor measures logging traffic for logs that match a specified filter. When the configured daily quota is met, the processor can drop or alert.

Supported pipeline types: logs

display_name

string

The display name for a component.

drop_events

boolean

If set to true, logs that match the quota filter and are sent after the quota is exceeded are dropped. Logs that do not match the filter continue through the pipeline. Note: You can set either drop_events or overflow_action, but not both.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

ignore_when_missing_partitions

boolean

If true, the processor skips quota checks when partition fields are missing from the logs.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

limit [required]

object

The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.

enforce [required]

enum

Unit for quota enforcement in bytes for data size or events for count. Allowed enum values: bytes,events

limit [required]

int64

The limit for quota enforcement.

name [required]

string

Name of the quota.

overflow_action

enum

The action to take when the quota or bucket limit is exceeded. Options:

  • drop: Drop the event.
  • no_action: Let the event pass through.
  • overflow_routing: Route to an overflow destination. Allowed enum values: drop,no_action,overflow_routing

overrides

[object]

A list of alternate quota rules that apply to specific sets of events, identified by matching field values. Each override can define a custom limit.

fields [required]

[object]

A list of field matchers used to apply a specific override. If an event matches all listed key-value pairs, the corresponding override limit is enforced.

name [required]

string

The field name.

value [required]

string

The field value.

limit [required]

object

The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.

enforce [required]

enum

Unit for quota enforcement in bytes for data size or events for count. Allowed enum values: bytes,events

limit [required]

int64

The limit for quota enforcement.

partition_fields

[string]

A list of fields used to segment log traffic for quota enforcement. Quotas are tracked independently by unique combinations of these field values.

too_many_buckets_action

enum

The action to take when the quota or bucket limit is exceeded. Options:

  • drop: Drop the event.
  • no_action: Let the event pass through.
  • overflow_routing: Route to an overflow destination. Allowed enum values: drop,no_action,overflow_routing

type [required]

enum

The processor type. The value should always be quota. Allowed enum values: quota

default: quota
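
For illustration, a minimal quota processor entry with one override might look like the following sketch; the component ID, quota name, byte limits, and field values are placeholders:

{
  "id": "quota-processor",
  "type": "quota",
  "enabled": true,
  "include": "*",
  "name": "daily-ingest-quota",
  "limit": {
    "enforce": "bytes",
    "limit": 10000000000
  },
  "partition_fields": ["service"],
  "overflow_action": "drop",
  "overrides": [
    {
      "fields": [
        {
          "name": "service",
          "value": "payments"
        }
      ],
      "limit": {
        "enforce": "bytes",
        "limit": 50000000000
      }
    }
  ]
}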

Option 15

object

The reduce processor aggregates and merges logs based on matching keys and merge strategies.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

group_by [required]

[string]

A list of fields used to group log events for merging.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

merge_strategies [required]

[object]

List of merge strategies defining how values from grouped events should be combined.

path [required]

string

The field path in the log event.

strategy [required]

enum

The merge strategy to apply. Allowed enum values: discard,retain,sum,max,min,array,concat,concat_newline,concat_raw,shortest_array,longest_array,flat_unique

type [required]

enum

The processor type. The value should always be reduce. Allowed enum values: reduce

default: reduce
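
For illustration, a minimal reduce processor entry might look like the following sketch; the component ID, grouping field, and field paths are placeholders:

{
  "id": "reduce-processor",
  "type": "reduce",
  "enabled": true,
  "include": "*",
  "group_by": ["trace_id"],
  "merge_strategies": [
    {
      "path": "message",
      "strategy": "concat_newline"
    },
    {
      "path": "duration",
      "strategy": "sum"
    }
  ]
}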

Option 16

object

The remove_fields processor deletes specified fields from logs.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[string]

A list of field names to be removed from each log event.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be remove_fields. Allowed enum values: remove_fields

default: remove_fields
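
For illustration, a minimal remove_fields processor entry might look like the following sketch; the component ID and field names are placeholders:

{
  "id": "remove-fields-processor",
  "type": "remove_fields",
  "enabled": true,
  "include": "*",
  "fields": ["password", "credit_card_number"]
}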

Option 17

object

The rename_fields processor changes field names.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[object]

A list of rename rules specifying which fields to rename in the event, what to rename them to, and whether to preserve the original fields.

destination [required]

string

The field name to assign the renamed value to.

preserve_source [required]

boolean

Indicates whether the original field received from the source should be kept (true) or removed (false) after renaming.

source [required]

string

The original field name in the log event that should be renamed.

id [required]

string

A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be rename_fields. Allowed enum values: rename_fields

default: rename_fields
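
For illustration, a minimal rename_fields processor entry might look like the following sketch; the component ID and field paths are placeholders:

{
  "id": "rename-fields-processor",
  "type": "rename_fields",
  "enabled": true,
  "include": "*",
  "fields": [
    {
      "source": "user_name",
      "destination": "usr.name",
      "preserve_source": false
    }
  ]
}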

Option 18

object

The sample processor allows probabilistic sampling of logs at a fixed rate.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

group_by

[string]

Optional list of fields to group events by. Each group is sampled independently.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

percentage [required]

double

The percentage of logs to sample.

type [required]

enum

The processor type. The value should always be sample. Allowed enum values: sample

default: sample
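
For illustration, a minimal sample processor entry might look like the following sketch; the component ID, filter query, and percentage are placeholders:

{
  "id": "sample-processor",
  "type": "sample",
  "enabled": true,
  "include": "status:info",
  "percentage": 10.0
}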

Option 19

object

The sensitive_data_scanner processor detects and optionally redacts sensitive data in log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

rules [required]

[object]

A list of rules for identifying and acting on sensitive data patterns.

keyword_options

object

Configuration for keywords used to reinforce sensitive data pattern detection.

keywords [required]

[string]

A list of keywords to match near the sensitive pattern.

proximity [required]

int64

Maximum number of tokens between a keyword and a sensitive value match.

name [required]

string

A name identifying the rule.

on_match [required]

 <oneOf>

Defines what action to take when sensitive data is matched.

Option 1

object

Configuration for completely redacting matched sensitive data.

action [required]

enum

Action type that completely replaces the matched sensitive data with a fixed replacement string to remove all visibility. Allowed enum values: redact

options [required]

object

Configuration for fully redacting sensitive data.

replace [required]

string

The replacement string that substitutes the matched sensitive data.

Option 2

object

Configuration for hashing matched sensitive values.

action [required]

enum

Action type that replaces the matched sensitive data with a hashed representation, preserving structure while securing content. Allowed enum values: hash

options

object

The ObservabilityPipelineSensitiveDataScannerProcessorActionHash options.

Option 3

object

Configuration for partially redacting matched sensitive data.

action [required]

enum

Action type that redacts part of the sensitive data while preserving a configurable number of characters, typically used for masking purposes (e.g., show last 4 digits of a credit card). Allowed enum values: partial_redact

options [required]

object

Controls how partial redaction is applied, including character count and direction.

characters [required]

int64

The number of characters used for the partial redaction.

direction [required]

enum

Indicates whether to redact characters from the first or last part of the matched value. Allowed enum values: first,last

pattern [required]

 <oneOf>

Pattern detection configuration for identifying sensitive data using either a custom regex or a library reference.

Option 1

object

Defines a custom regex-based pattern for identifying sensitive data in logs.

options [required]

object

Options for defining a custom regex pattern.

description

string

Human-readable description providing context about a sensitive data scanner rule.

rule [required]

string

A regular expression used to detect sensitive values. Must be a valid regex.

type [required]

enum

Indicates a custom regular expression is used for matching. Allowed enum values: custom

Option 2

object

Specifies a pattern from Datadog’s sensitive data detection library to match known sensitive data types.

options [required]

object

Options for selecting a predefined library pattern and enabling keyword support.

description

string

Human-readable description providing context about a sensitive data scanner rule.

id [required]

string

Identifier for a predefined pattern from the sensitive data scanner pattern library.

use_recommended_keywords

boolean

Whether to augment the pattern with recommended keywords (optional).

type [required]

enum

Indicates that a predefined library pattern is used. Allowed enum values: library

scope [required]

 <oneOf>

Determines which parts of the log the pattern-matching rule should be applied to.

Option 1

object

Includes only specific fields for sensitive data scanning.

options [required]

object

Fields to which the scope rule applies.

fields [required]

[string]

A list of log field paths to which the scope rule applies.

target [required]

enum

Applies the rule only to included fields. Allowed enum values: include

Option 2

object

Excludes specific fields from sensitive data scanning.

options [required]

object

Fields to which the scope rule applies.

fields [required]

[string]

A list of log field paths to which the scope rule applies.

target [required]

enum

Excludes specific fields from processing. Allowed enum values: exclude

Option 3

object

Applies scanning across all available fields.

target [required]

enum

Applies the rule to all fields. Allowed enum values: all

tags [required]

[string]

Tags assigned to this rule for filtering and classification.

type [required]

enum

The processor type. The value should always be sensitive_data_scanner. Allowed enum values: sensitive_data_scanner

default: sensitive_data_scanner

Option 20

object

The split_array processor splits array fields into separate events based on configured rules.

Supported pipeline types: logs

arrays [required]

[object]

A list of array split configurations.

field [required]

string

The path to the array field to split.

include [required]

string

A Datadog search query used to determine which logs this array split operation targets.

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets. For split_array, this should typically be *.

type [required]

enum

The processor type. The value should always be split_array. Allowed enum values: split_array

default: split_array

Option 21

object

The throttle processor limits the number of events that pass through over a given time window.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

group_by

[string]

Optional list of fields used to group events. The threshold is applied to each group independently.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

threshold [required]

int64

The number of events allowed in a given time window. Events sent after the threshold has been reached are dropped.

type [required]

enum

The processor type. The value should always be throttle. Allowed enum values: throttle

default: throttle

window [required]

double

The time window in seconds over which the threshold applies.

Option 22

object

The metric_tags processor filters metrics based on their tags using Datadog tag key patterns.

Supported pipeline types: metrics

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query that determines which metrics the processor targets.

rules [required]

[object]

A list of rules for filtering metric tags.

action [required]

enum

The action to take on tags with matching keys. Allowed enum values: include,exclude

include [required]

string

A Datadog search query used to determine which metrics this rule targets.

keys [required]

[string]

A list of tag keys to include or exclude.

mode [required]

enum

The processing mode for tag filtering. Allowed enum values: filter

type [required]

enum

The processor type. The value should always be metric_tags. Allowed enum values: metric_tags

default: metric_tags

sources [required]

[ <oneOf>]

A list of configured data sources for the pipeline.

Option 1

object

The datadog_agent source collects logs/metrics from the Datadog Agent.

Supported pipeline types: logs, metrics

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be datadog_agent. Allowed enum values: datadog_agent

default: datadog_agent

Option 2

object

The amazon_data_firehose source ingests logs from AWS Data Firehose.

Supported pipeline types: logs

auth

object

AWS authentication credentials used for accessing AWS services such as S3. If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).

assume_role

string

The Amazon Resource Name (ARN) of the role to assume.

external_id

string

A unique identifier for cross-account role assumption.

session_name

string

A session identifier used for logging and tracing the assumed role session.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be amazon_data_firehose. Allowed enum values: amazon_data_firehose

default: amazon_data_firehose

Option 3

object

The amazon_s3 source ingests logs from an Amazon S3 bucket. It supports AWS authentication and TLS encryption.

Supported pipeline types: logs

auth

object

AWS authentication credentials used for accessing AWS services such as S3. If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).

assume_role

string

The Amazon Resource Name (ARN) of the role to assume.

external_id

string

A unique identifier for cross-account role assumption.

session_name

string

A session identifier used for logging and tracing the assumed role session.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

region [required]

string

AWS region where the S3 bucket resides.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. Always amazon_s3. Allowed enum values: amazon_s3

default: amazon_s3
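
For illustration, a minimal amazon_s3 source entry might look like the following sketch; the component ID, region, and role ARN are placeholders:

{
  "id": "amazon-s3-source",
  "type": "amazon_s3",
  "region": "us-east-1",
  "auth": {
    "assume_role": "arn:aws:iam::123456789012:role/pipeline-s3-reader"
  }
}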

Option 4

object

The fluent_bit source ingests logs from Fluent Bit.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be fluent_bit. Allowed enum values: fluent_bit

default: fluent_bit

Option 5

object

The fluentd source ingests logs from a Fluentd-compatible service.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be fluentd. Allowed enum values: fluentd

default: fluentd

Option 6

object

The google_pubsub source ingests logs from a Google Cloud Pub/Sub subscription.

Supported pipeline types: logs

auth

object

GCP credentials used to authenticate with Google Cloud services.

credentials_file [required]

string

Path to the GCP service account key file.

decoding [required]

enum

The decoding format used to interpret incoming logs. Allowed enum values: bytes,gelf,json,syslog

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

project [required]

string

The GCP project ID that owns the Pub/Sub subscription.

subscription [required]

string

The Pub/Sub subscription name from which messages are consumed.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be google_pubsub. Allowed enum values: google_pubsub

default: google_pubsub
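
For illustration, a minimal google_pubsub source entry might look like the following sketch; the component ID, project, subscription, and credentials path are placeholders:

{
  "id": "google-pubsub-source",
  "type": "google_pubsub",
  "project": "my-gcp-project",
  "subscription": "log-export-subscription",
  "decoding": "json",
  "auth": {
    "credentials_file": "/var/secrets/gcp/service-account.json"
  }
}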

Option 7

object

The http_client source scrapes logs from HTTP endpoints at regular intervals.

Supported pipeline types: logs

auth_strategy

enum

Optional authentication strategy for HTTP requests. Allowed enum values: none,basic,bearer,custom

decoding [required]

enum

The decoding format used to interpret incoming logs. Allowed enum values: bytes,gelf,json,syslog

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

scrape_interval_secs

int64

The interval (in seconds) between HTTP scrape requests.

scrape_timeout_secs

int64

The timeout (in seconds) for each scrape request.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be http_client. Allowed enum values: http_client

default: http_client

Option 8

object

The http_server source collects logs over HTTP POST from external services.

Supported pipeline types: logs

auth_strategy [required]

enum

HTTP authentication method. Allowed enum values: none,plain

decoding [required]

enum

The decoding format used to interpret incoming logs. Allowed enum values: bytes,gelf,json,syslog

id [required]

string

Unique ID for the HTTP server source.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be http_server. Allowed enum values: http_server

default: http_server

Option 9

object

The kafka source ingests data from Apache Kafka topics.

Supported pipeline types: logs

group_id [required]

string

Consumer group ID used by the Kafka client.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

librdkafka_options

[object]

Optional list of advanced Kafka client configuration options, defined as key-value pairs.

name [required]

string

The name of the librdkafka configuration option to set.

value [required]

string

The value assigned to the specified librdkafka configuration option.

sasl

object

Specifies the SASL mechanism for authenticating with a Kafka cluster.

mechanism

enum

SASL mechanism used for Kafka authentication. Allowed enum values: PLAIN,SCRAM-SHA-256,SCRAM-SHA-512

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

topics [required]

[string]

A list of Kafka topic names to subscribe to. The source ingests messages from each topic specified.

type [required]

enum

The source type. The value should always be kafka. Allowed enum values: kafka

default: kafka
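
For illustration, a minimal kafka source entry might look like the following sketch; the component ID, consumer group, topic name, and librdkafka option value are placeholders:

{
  "id": "kafka-source",
  "type": "kafka",
  "group_id": "observability-pipelines-consumer",
  "topics": ["application-logs"],
  "sasl": {
    "mechanism": "SCRAM-SHA-256"
  },
  "librdkafka_options": [
    {
      "name": "fetch.message.max.bytes",
      "value": "1048576"
    }
  ]
}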

Option 10

object

The logstash source ingests logs from a Logstash forwarder.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be logstash. Allowed enum values: logstash

default: logstash

Option 11

object

The rsyslog source listens for logs over TCP or UDP from an rsyslog server using the syslog protocol.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

mode [required]

enum

Protocol used by the syslog source to receive messages. Allowed enum values: tcp,udp

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be rsyslog. Allowed enum values: rsyslog

default: rsyslog

Option 12

object

The socket source ingests logs over TCP or UDP.

Supported pipeline types: logs

framing [required]

 <oneOf>

Framing method configuration for the socket source.

Option 1

object

Byte frames which are delimited by a newline character.

method [required]

enum

Byte frames which are delimited by a newline character. Allowed enum values: newline_delimited

Option 2

object

Byte frames are passed through as-is according to the underlying I/O boundaries (for example, split between messages or stream segments).

method [required]

enum

Byte frames are passed through as-is according to the underlying I/O boundaries (for example, split between messages or stream segments). Allowed enum values: bytes

Option 3

object

Byte frames which are delimited by a chosen character.

delimiter [required]

string

A single ASCII character used to delimit events.

method [required]

enum

Byte frames which are delimited by a chosen character. Allowed enum values: character_delimited

Option 4

object

Byte frames according to the octet counting format as per RFC6587.

method [required]

enum

Byte frames according to the octet counting format as per RFC6587. Allowed enum values: octet_counting

Option 5

object

Byte frames which are chunked GELF messages.

method [required]

enum

Byte frames which are chunked GELF messages. Allowed enum values: chunked_gelf

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

mode [required]

enum

Protocol used to receive logs. Allowed enum values: tcp,udp

tls

object

TLS configuration. Relevant only when mode is tcp.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be socket. Allowed enum values: socket

default: socket
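
For illustration, a minimal socket source entry using TCP with newline-delimited framing might look like the following sketch; the component ID is a placeholder:

{
  "id": "socket-source",
  "type": "socket",
  "mode": "tcp",
  "framing": {
    "method": "newline_delimited"
  }
}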

Option 13

object

The splunk_hec source implements the Splunk HTTP Event Collector (HEC) API.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. Always splunk_hec. Allowed enum values: splunk_hec

default: splunk_hec

Option 14

object

The splunk_tcp source receives logs from a Splunk Universal Forwarder over TCP. TLS is supported for secure transmission.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. Always splunk_tcp. Allowed enum values: splunk_tcp

default: splunk_tcp

Option 15

object

The sumo_logic source receives logs from Sumo Logic collectors.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

type [required]

enum

The source type. The value should always be sumo_logic. Allowed enum values: sumo_logic

default: sumo_logic

Option 16

object

The syslog_ng source listens for logs over TCP or UDP from a syslog-ng server using the syslog protocol.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

mode [required]

enum

Protocol used by the syslog source to receive messages. Allowed enum values: tcp,udp

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be syslog_ng. Allowed enum values: syslog_ng

default: syslog_ng

Option 17

object

The opentelemetry source receives telemetry data using the OpenTelemetry Protocol (OTLP) over gRPC and HTTP.

Supported pipeline types: logs

grpc_address_key

string

Environment variable name containing the gRPC server address for receiving OTLP data. Must be a valid environment variable name (alphanumeric characters and underscores only).

http_address_key

string

Environment variable name containing the HTTP server address for receiving OTLP data. Must be a valid environment variable name (alphanumeric characters and underscores only).

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be opentelemetry. Allowed enum values: opentelemetry

default: opentelemetry

name [required]

string

Name of the pipeline.

type [required]

string

The resource type identifier. For pipeline resources, this should always be set to pipelines.

default: pipelines

{
  "data": {
    "attributes": {
      "config": {
        "destinations": [
          {
            "id": "datadog-logs-destination",
            "inputs": [
              "my-processor-group"
            ],
            "type": "datadog_logs"
          }
        ],
        "processor_groups": [
          {
            "enabled": true,
            "id": "my-processor-group",
            "include": "service:my-service",
            "inputs": [
              "datadog-agent-source"
            ],
            "processors": [
              {
                "enabled": true,
                "id": "filter-processor",
                "include": "status:error",
                "type": "filter"
              }
            ]
          }
        ],
        "sources": [
          {
            "id": "datadog-agent-source",
            "type": "datadog_agent"
          }
        ]
      },
      "name": "Main Observability Pipeline"
    },
    "type": "pipelines"
  }
}

Response

OK

Top-level schema representing a pipeline.

Field

Type

Description

data [required]

object

Contains the pipeline’s ID, type, and configuration attributes.

attributes [required]

object

Defines the pipeline’s name and its components (sources, processors, and destinations).

config [required]

object

Specifies the pipeline's configuration, including its sources, processors, and destinations.

destinations [required]

[ <oneOf>]

A list of destination components where processed logs are sent.

Option 1

object

The http_client destination sends data to an HTTP endpoint.

Supported pipeline types: logs, metrics

auth_strategy

enum

HTTP authentication strategy. Allowed enum values: none,basic,bearer

compression

object

Compression configuration for HTTP requests.

algorithm [required]

enum

Compression algorithm. Allowed enum values: gzip

encoding [required]

enum

Encoding format for log events. Allowed enum values: json

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. The value should always be http_client. Allowed enum values: http_client

default: http_client
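
A minimal sketch of an http_client destination entry for the config.destinations array; the component ID and inputs are illustrative placeholders:

{
  "id": "http-destination",
  "inputs": ["my-processor-group"],
  "type": "http_client",
  "encoding": "json",
  "auth_strategy": "bearer",
  "compression": { "algorithm": "gzip" }
}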

Option 2

object

The amazon_opensearch destination writes logs to Amazon OpenSearch.

Supported pipeline types: logs

auth [required]

object

Authentication settings for the Amazon OpenSearch destination. The strategy field determines whether basic or AWS-based authentication is used.

assume_role

string

The ARN of the role to assume (used with aws strategy).

aws_region

string

AWS region.

external_id

string

External ID for the assumed role (used with aws strategy).

session_name

string

Session name for the assumed role (used with aws strategy).

strategy [required]

enum

The authentication strategy to use. Allowed enum values: basic,aws

bulk_index

string

The index to write logs to.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be amazon_opensearch. Allowed enum values: amazon_opensearch

default: amazon_opensearch

Option 3

object

The amazon_s3 destination sends your logs in Datadog-rehydratable format to an Amazon S3 bucket for archiving.

Supported pipeline types: logs

auth

object

AWS authentication credentials used for accessing AWS services such as S3. If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).

assume_role

string

The Amazon Resource Name (ARN) of the role to assume.

external_id

string

A unique identifier for cross-account role assumption.

session_name

string

A session identifier used for logging and tracing the assumed role session.

bucket [required]

string

S3 bucket name.

id [required]

string

Unique identifier for the destination component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

key_prefix

string

Optional prefix for object keys.

region [required]

string

AWS region of the S3 bucket.

storage_class [required]

enum

S3 storage class. Allowed enum values: STANDARD,REDUCED_REDUNDANCY,INTELLIGENT_TIERING,STANDARD_IA,EXPRESS_ONEZONE,ONEZONE_IA,GLACIER,GLACIER_IR,DEEP_ARCHIVE

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. Always amazon_s3. Allowed enum values: amazon_s3

default: amazon_s3
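
A minimal sketch of an amazon_s3 archive destination; the bucket name, region, prefix, and inputs are illustrative placeholders:

{
  "id": "s3-archive",
  "inputs": ["my-processor-group"],
  "type": "amazon_s3",
  "bucket": "my-log-archive",
  "region": "us-east-1",
  "storage_class": "STANDARD",
  "key_prefix": "observability-pipelines/"
}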

Option 4

object

The amazon_security_lake destination sends your logs to Amazon Security Lake.

Supported pipeline types: logs

auth

object

AWS authentication credentials used for accessing AWS services such as S3. If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).

assume_role

string

The Amazon Resource Name (ARN) of the role to assume.

external_id

string

A unique identifier for cross-account role assumption.

session_name

string

A session identifier used for logging and tracing the assumed role session.

bucket [required]

string

Name of the Amazon S3 bucket in Security Lake (3-63 characters).

custom_source_name [required]

string

Custom source name for the logs in Security Lake.

id [required]

string

Unique identifier for the destination component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

region [required]

string

AWS region of the S3 bucket.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. Always amazon_security_lake. Allowed enum values: amazon_security_lake

default: amazon_security_lake

Option 5

object

The azure_storage destination forwards logs to an Azure Blob Storage container.

Supported pipeline types: logs

blob_prefix

string

Optional prefix for blobs written to the container.

container_name [required]

string

The name of the Azure Blob Storage container to store logs in.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be azure_storage. Allowed enum values: azure_storage

default: azure_storage

Option 6

object

The cloud_prem destination sends logs to Datadog CloudPrem.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be cloud_prem. Allowed enum values: cloud_prem

default: cloud_prem

Option 7

object

The crowdstrike_next_gen_siem destination forwards logs to CrowdStrike Next Gen SIEM.

Supported pipeline types: logs

compression

object

Compression configuration for log events.

algorithm [required]

enum

Compression algorithm for log events. Allowed enum values: gzip,zlib

level

int64

Compression level.

encoding [required]

enum

Encoding format for log events. Allowed enum values: json,raw_message

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. The value should always be crowdstrike_next_gen_siem. Allowed enum values: crowdstrike_next_gen_siem

default: crowdstrike_next_gen_siem

Option 8

object

The datadog_logs destination forwards logs to Datadog Log Management.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

routes

[object]

A list of routing rules that forward matching logs to Datadog using dedicated API keys.

api_key_key

string

Name of the environment variable or secret that stores the Datadog API key used by this route.

include

string

A Datadog search query that determines which logs are forwarded using this route.

route_id

string

Unique identifier for this route within the destination.

site

string

Datadog site where matching logs are sent (for example, us1).

type [required]

enum

The destination type. The value should always be datadog_logs. Allowed enum values: datadog_logs

default: datadog_logs
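
A sketch of a datadog_logs destination with one optional route; the route ID, query, environment variable name, and site are illustrative placeholders:

{
  "id": "datadog-logs-destination",
  "inputs": ["my-processor-group"],
  "type": "datadog_logs",
  "routes": [
    {
      "route_id": "error-logs-route",
      "include": "status:error",
      "api_key_key": "DD_API_KEY_ERRORS",
      "site": "us1"
    }
  ]
}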

Option 9

object

The elasticsearch destination writes logs to an Elasticsearch cluster.

Supported pipeline types: logs

api_version

enum

The Elasticsearch API version to use. Set to auto to auto-detect. Allowed enum values: auto,v6,v7,v8

bulk_index

string

The index to write logs to in Elasticsearch.

data_stream

object

Configuration options for writing to Elasticsearch Data Streams instead of a fixed index.

dataset

string

The data stream dataset for your logs. This groups logs by their source or application.

dtype

string

The data stream type for your logs. This determines how logs are categorized within the data stream.

namespace

string

The data stream namespace for your logs. This separates logs into different environments or domains.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be elasticsearch. Allowed enum values: elasticsearch

default: elasticsearch

Option 10

object

The google_chronicle destination sends logs to Google Chronicle.

Supported pipeline types: logs

auth

object

GCP credentials used to authenticate with Google Cloud Storage.

credentials_file [required]

string

Path to the GCP service account key file.

customer_id [required]

string

The Google Chronicle customer ID.

encoding

enum

The encoding format for the logs sent to Chronicle. Allowed enum values: json,raw_message

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

log_type

string

The log type metadata associated with the Chronicle destination.

type [required]

enum

The destination type. The value should always be google_chronicle. Allowed enum values: google_chronicle

default: google_chronicle

Option 11

object

The google_cloud_storage destination stores logs in a Google Cloud Storage (GCS) bucket. It requires a bucket name, GCP authentication, and metadata fields.

Supported pipeline types: logs

acl

enum

Access control list setting for objects written to the bucket. Allowed enum values: private,project-private,public-read,authenticated-read,bucket-owner-read,bucket-owner-full-control

auth

object

GCP credentials used to authenticate with Google Cloud Storage.

credentials_file [required]

string

Path to the GCP service account key file.

bucket [required]

string

Name of the GCS bucket.

id [required]

string

Unique identifier for the destination component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

key_prefix

string

Optional prefix for object keys within the GCS bucket.

metadata

[object]

Custom metadata to attach to each object uploaded to the GCS bucket.

name [required]

string

The metadata key.

value [required]

string

The metadata value.

storage_class [required]

enum

Storage class used for objects stored in GCS. Allowed enum values: STANDARD,NEARLINE,COLDLINE,ARCHIVE

type [required]

enum

The destination type. Always google_cloud_storage. Allowed enum values: google_cloud_storage

default: google_cloud_storage

Option 12

object

The google_pubsub destination publishes logs to a Google Cloud Pub/Sub topic.

Supported pipeline types: logs

auth

object

GCP credentials used to authenticate with Google Cloud Storage.

credentials_file [required]

string

Path to the GCP service account key file.

encoding [required]

enum

Encoding format for log events. Allowed enum values: json,raw_message

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

project [required]

string

The GCP project ID that owns the Pub/Sub topic.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

topic [required]

string

The Pub/Sub topic name to publish logs to.

type [required]

enum

The destination type. The value should always be google_pubsub. Allowed enum values: google_pubsub

default: google_pubsub

Option 13

object

The kafka destination sends logs to Apache Kafka topics.

Supported pipeline types: logs

compression

enum

Compression codec for Kafka messages. Allowed enum values: none,gzip,snappy,lz4,zstd

encoding [required]

enum

Encoding format for log events. Allowed enum values: json,raw_message

headers_key

string

The field name to use for Kafka message headers.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

key_field

string

The field name to use as the Kafka message key.

librdkafka_options

[object]

Optional list of advanced Kafka producer configuration options, defined as key-value pairs.

name [required]

string

The name of the librdkafka configuration option to set.

value [required]

string

The value assigned to the specified librdkafka configuration option.

message_timeout_ms

int64

Maximum time in milliseconds to wait for message delivery confirmation.

rate_limit_duration_secs

int64

Duration in seconds for the rate limit window.

rate_limit_num

int64

Maximum number of messages allowed per rate limit duration.

sasl

object

Specifies the SASL mechanism for authenticating with a Kafka cluster.

mechanism

enum

SASL mechanism used for Kafka authentication. Allowed enum values: PLAIN,SCRAM-SHA-256,SCRAM-SHA-512

socket_timeout_ms

int64

Socket timeout in milliseconds for network requests.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

topic [required]

string

The Kafka topic name to publish logs to.

type [required]

enum

The destination type. The value should always be kafka. Allowed enum values: kafka

default: kafka
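
A minimal sketch of a kafka destination; the topic, compression codec, SASL mechanism, and inputs are illustrative placeholders:

{
  "id": "kafka-destination",
  "inputs": ["my-processor-group"],
  "type": "kafka",
  "topic": "observability-logs",
  "encoding": "json",
  "compression": "zstd",
  "sasl": { "mechanism": "SCRAM-SHA-256" }
}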

Option 14

object

The microsoft_sentinel destination forwards logs to Microsoft Sentinel.

Supported pipeline types: logs

client_id [required]

string

Azure AD client ID used for authentication.

dcr_immutable_id [required]

string

The immutable ID of the Data Collection Rule (DCR).

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

table [required]

string

The name of the Log Analytics table where logs are sent.

tenant_id [required]

string

Azure AD tenant ID.

type [required]

enum

The destination type. The value should always be microsoft_sentinel. Allowed enum values: microsoft_sentinel

default: microsoft_sentinel

Option 15

object

The new_relic destination sends logs to the New Relic platform.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

region [required]

enum

The New Relic region. Allowed enum values: us,eu

type [required]

enum

The destination type. The value should always be new_relic. Allowed enum values: new_relic

default: new_relic

Option 16

object

The opensearch destination writes logs to an OpenSearch cluster.

Supported pipeline types: logs

bulk_index

string

The index to write logs to.

data_stream

object

Configuration options for writing to OpenSearch Data Streams instead of a fixed index.

dataset

string

The data stream dataset for your logs. This groups logs by their source or application.

dtype

string

The data stream type for your logs. This determines how logs are categorized within the data stream.

namespace

string

The data stream namespace for your logs. This separates logs into different environments or domains.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be opensearch. Allowed enum values: opensearch

default: opensearch

Option 17

object

The rsyslog destination forwards logs to an external rsyslog server over TCP or UDP using the syslog protocol.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

keepalive

int64

Optional socket keepalive duration in milliseconds.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. The value should always be rsyslog. Allowed enum values: rsyslog

default: rsyslog

Option 18

object

The sentinel_one destination sends logs to SentinelOne.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

region [required]

enum

The SentinelOne region to send logs to. Allowed enum values: us,eu,ca,data_set_us

type [required]

enum

The destination type. The value should always be sentinel_one. Allowed enum values: sentinel_one

default: sentinel_one

Option 19

object

The socket destination sends logs over TCP or UDP to a remote server.

Supported pipeline types: logs

encoding [required]

enum

Encoding format for log events. Allowed enum values: json,raw_message

framing [required]

 <oneOf>

Framing method configuration.

Option 1

object

Each log event is delimited by a newline character.

method [required]

enum

The framing method. Allowed enum values: newline_delimited

Option 2

object

Event data is not delimited at all.

method [required]

enum

The framing method. Allowed enum values: bytes

Option 3

object

Each log event is separated using the specified delimiter character.

delimiter [required]

string

A single ASCII character used as a delimiter.

method [required]

enum

The framing method. Allowed enum values: character_delimited

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

mode [required]

enum

Protocol used to send logs. Allowed enum values: tcp,udp

tls

object

TLS configuration. Relevant only when mode is tcp.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. The value should always be socket. Allowed enum values: socket

default: socket
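
A minimal sketch of a socket destination sending newline-delimited JSON over TCP; the component ID and inputs are illustrative placeholders:

{
  "id": "socket-destination",
  "inputs": ["my-processor-group"],
  "type": "socket",
  "mode": "tcp",
  "encoding": "json",
  "framing": { "method": "newline_delimited" }
}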

Option 20

object

The splunk_hec destination forwards logs to Splunk using the HTTP Event Collector (HEC).

Supported pipeline types: logs

auto_extract_timestamp

boolean

If true, Splunk tries to extract timestamps from incoming log events. If false, Splunk assigns the time the event was received.

encoding

enum

Encoding format for log events. Allowed enum values: json,raw_message

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

index

string

Optional name of the Splunk index where logs are written.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

sourcetype

string

The Splunk sourcetype to assign to log events.

type [required]

enum

The destination type. Always splunk_hec. Allowed enum values: splunk_hec

default: splunk_hec
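
A minimal sketch of a splunk_hec destination; the index, sourcetype, and inputs are illustrative placeholders:

{
  "id": "splunk-destination",
  "inputs": ["my-processor-group"],
  "type": "splunk_hec",
  "encoding": "json",
  "index": "main",
  "sourcetype": "observability_pipelines"
}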

Option 21

object

The sumo_logic destination forwards logs to Sumo Logic.

Supported pipeline types: logs

encoding

enum

The output encoding format. Allowed enum values: json,raw_message,logfmt

header_custom_fields

[object]

A list of custom headers to include in the request to Sumo Logic.

name [required]

string

The header field name.

value [required]

string

The header field value.

header_host_name

string

Optional override for the host name header.

header_source_category

string

Optional override for the source category header.

header_source_name

string

Optional override for the source name header.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be sumo_logic. Allowed enum values: sumo_logic

default: sumo_logic

Option 22

object

The syslog_ng destination forwards logs to an external syslog-ng server over TCP or UDP using the syslog protocol.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

keepalive

int64

Optional socket keepalive duration in milliseconds.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. The value should always be syslog_ng. Allowed enum values: syslog_ng

default: syslog_ng

Option 23

object

The datadog_metrics destination forwards metrics to Datadog.

Supported pipeline types: metrics

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be datadog_metrics. Allowed enum values: datadog_metrics

default: datadog_metrics

pipeline_type

enum

The type of data being ingested. Defaults to logs if not specified. Allowed enum values: logs,metrics

default: logs

processor_groups

[object]

A list of processor groups that transform or enrich log data.

display_name

string

The display name for a component.

enabled [required]

boolean

Whether this processor group is enabled.

id [required]

string

The unique identifier for the processor group.

include [required]

string

Conditional expression for when this processor group should execute.

inputs [required]

[string]

A list of IDs for components whose output is used as the input for this processor group.

processors [required]

[ <oneOf>]

Processors applied sequentially within this group. Events flow through each processor in order.

Option 1

object

The filter processor allows conditional processing of logs/metrics based on a Datadog search query. Logs/metrics that match the include query are passed through; others are discarded.

Supported pipeline types: logs, metrics

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs/metrics should pass through the filter. Logs/metrics that match this query continue to downstream components; others are dropped.

type [required]

enum

The processor type. The value should always be filter. Allowed enum values: filter

default: filter
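
A minimal filter processor entry, as it appears inside a processor group's processors array (matching the example payload above); the ID and query are illustrative:

{
  "id": "filter-processor",
  "type": "filter",
  "enabled": true,
  "include": "status:error"
}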

Option 2

object

The add_env_vars processor adds environment variable values to log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used to reference this processor in the pipeline.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be add_env_vars. Allowed enum values: add_env_vars

default: add_env_vars

variables [required]

[object]

A list of environment variable mappings to apply to log fields.

field [required]

string

The target field in the log event.

name [required]

string

The name of the environment variable to read.

Option 3

object

The add_fields processor adds static key-value fields to logs.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[object]

A list of static fields (key-value pairs) that is added to each log event processed by this component.

name [required]

string

The field name.

value [required]

string

The field value.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be add_fields. Allowed enum values: add_fields

default: add_fields
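
A minimal sketch of an add_fields processor entry for a processor group's processors array; the field name and value are illustrative placeholders:

{
  "id": "add-fields-processor",
  "type": "add_fields",
  "enabled": true,
  "include": "service:my-service",
  "fields": [
    { "name": "team", "value": "platform" }
  ]
}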

Option 4

object

The add_hostname processor adds the hostname to log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be add_hostname. Allowed enum values: add_hostname

default: add_hostname

Option 5

object

The custom_processor processor transforms events using Vector Remap Language (VRL) scripts with advanced filtering capabilities.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets. This field should always be set to * for the custom_processor processor.

default: *

remaps [required]

[object]

Array of VRL remap rules.

drop_on_error [required]

boolean

Whether to drop events that caused errors during processing.

enabled

boolean

Whether this remap rule is enabled.

include [required]

string

A Datadog search query used to filter events for this specific remap rule.

name [required]

string

A descriptive name for this remap rule.

source [required]

string

The VRL script source code that defines the processing logic.

type [required]

enum

The processor type. The value should always be custom_processor. Allowed enum values: custom_processor

default: custom_processor
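
A sketch of a custom_processor entry with a single remap rule; the VRL snippet, rule name, and queries are illustrative placeholders:

{
  "id": "custom-vrl-processor",
  "type": "custom_processor",
  "enabled": true,
  "include": "*",
  "remaps": [
    {
      "name": "normalize-status",
      "include": "service:my-service",
      "drop_on_error": false,
      "source": ".status = downcase(string!(.status))"
    }
  ]
}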

Option 6

object

The datadog_tags processor includes or excludes specific Datadog tags in your logs.

Supported pipeline types: logs

action [required]

enum

The action to take on tags with matching keys. Allowed enum values: include,exclude

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

keys [required]

[string]

A list of tag keys.

mode [required]

enum

The processing mode. Allowed enum values: filter

type [required]

enum

The processor type. The value should always be datadog_tags. Allowed enum values: datadog_tags

default: datadog_tags

Option 7

object

The dedupe processor removes duplicate fields in log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[string]

A list of log field paths to check for duplicates.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

mode [required]

enum

The deduplication mode to apply to the fields. Allowed enum values: match,ignore

type [required]

enum

The processor type. The value should always be dedupe. Allowed enum values: dedupe

default: dedupe

Option 8

object

The enrichment_table processor enriches logs using a static CSV file, GeoIP database, or reference table. Exactly one of file, geoip, or reference_table must be configured.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

file

object

Defines a static enrichment table loaded from a CSV file.

encoding [required]

object

File encoding format.

delimiter [required]

string

The delimiter character used in the file.

includes_headers [required]

boolean

Whether the file includes a header row.

type [required]

enum

Specifies the encoding format (e.g., CSV) used for enrichment tables. Allowed enum values: csv

key [required]

[object]

Key fields used to look up enrichment values.

column [required]

string

The name of the column in the enrichment table used for the lookup.

comparison [required]

enum

Defines how to compare key fields for enrichment table lookups. Allowed enum values: equals

field [required]

string

The field in the log event to compare against the column.

path [required]

string

Path to the CSV file.

schema [required]

[object]

Schema defining column names and their types.

column [required]

string

The column name.

type [required]

enum

Declares allowed data types for enrichment table columns. Allowed enum values: string,boolean,integer,float,date,timestamp

geoip

object

Uses a GeoIP database to enrich logs based on an IP field.

key_field [required]

string

Path to the IP field in the log.

locale [required]

string

Locale used to resolve geographical names.

path [required]

string

Path to the GeoIP database file.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

reference_table

object

Uses a Datadog reference table to enrich logs.

columns

[string]

List of column names to include from the reference table. If not provided, all columns are included.

key_field [required]

string

Path to the field in the log event to match against the reference table.

table_id [required]

string

The unique identifier of the reference table.

target [required]

string

Path where enrichment results should be stored in the log.

type [required]

enum

The processor type. The value should always be enrichment_table. Allowed enum values: enrichment_table

default: enrichment_table

Option 9

object

The generate_datadog_metrics processor creates custom metrics from logs and sends them to Datadog. Metrics can be counters, gauges, or distributions and optionally grouped by log fields.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline.

include

string

A Datadog search query used to determine which logs this processor targets.

metrics

[object]

Configuration for generating individual metrics.

group_by

[string]

Optional fields used to group the metric series.

include [required]

string

Datadog filter query to match logs for metric generation.

metric_type [required]

enum

Type of metric to create. Allowed enum values: count,gauge,distribution

name [required]

string

Name of the custom metric to be created.

value [required]

 <oneOf>

Specifies how the value of the generated metric is computed.

Option 1

object

Strategy that increments a generated metric by one for each matching event.

strategy [required]

enum

Increments the metric by 1 for each matching event. Allowed enum values: increment_by_one

Option 2

object

Strategy that increments a generated metric based on the value of a log field.

field [required]

string

Name of the log field containing the numeric value to increment the metric by.

strategy [required]

enum

Uses a numeric field in the log event as the metric increment. Allowed enum values: increment_by_field

type [required]

enum

The processor type. Always generate_datadog_metrics. Allowed enum values: generate_datadog_metrics

default: generate_datadog_metrics

Option 10

object

The ocsf_mapper processor transforms logs into the OCSF schema using a predefined mapping configuration.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

mappings [required]

[object]

A list of mapping rules to convert events to the OCSF format.

include [required]

string

A Datadog search query used to select the logs that this mapping should apply to.

mapping [required]

 <oneOf>

Defines a single mapping rule for transforming logs into the OCSF schema.

Option 1

enum

Predefined library mappings for common log formats. Allowed enum values: CloudTrail Account Change,GCP Cloud Audit CreateBucket,GCP Cloud Audit CreateSink,GCP Cloud Audit SetIamPolicy,GCP Cloud Audit UpdateSink,Github Audit Log API Activity,Google Workspace Admin Audit addPrivilege,Microsoft 365 Defender Incident,Microsoft 365 Defender UserLoggedIn,Okta System Log Authentication,Palo Alto Networks Firewall Traffic

type [required]

enum

The processor type. The value should always be ocsf_mapper. Allowed enum values: ocsf_mapper

default: ocsf_mapper

Option 11

object

The parse_grok processor extracts structured fields from unstructured log messages using Grok patterns.

Supported pipeline types: logs

disable_library_rules

boolean

If set to true, disables the default Grok rules provided by Datadog.

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

A unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

rules [required]

[object]

The list of Grok parsing rules. If multiple matching rules are provided, they are evaluated in order. The first successful match is applied.

match_rules [required]

[object]

A list of Grok parsing rules that define how to extract fields from the source field. Each rule must contain a name and a valid Grok pattern.

name [required]

string

The name of the rule.

rule [required]

string

The definition of the Grok rule.

source [required]

string

The name of the field in the log event to apply the Grok rules to.

support_rules

[object]

A list of Grok helper rules that can be referenced by the parsing rules.

name [required]

string

The name of the Grok helper rule.

rule [required]

string

The definition of the Grok helper rule.

type [required]

enum

The processor type. The value should always be parse_grok. Allowed enum values: parse_grok

default: parse_grok
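
A sketch of a parse_grok processor with one parsing rule; the rule name, source field, and Grok pattern are illustrative placeholders:

{
  "id": "grok-parser",
  "type": "parse_grok",
  "enabled": true,
  "include": "source:nginx",
  "rules": [
    {
      "source": "message",
      "match_rules": [
        {
          "name": "parse_access_log",
          "rule": "%{notSpace:http.method} %{notSpace:http.url} %{number:http.status_code}"
        }
      ]
    }
  ]
}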

Option 12

object

The parse_json processor extracts JSON from a specified field and flattens it into the event. This is useful when logs contain embedded JSON as a string.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

field [required]

string

The name of the log field that contains a JSON string.

id [required]

string

A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be parse_json. Allowed enum values: parse_json

default: parse_json

Option 13

object

The parse_xml processor parses XML from a specified field and extracts it into the event.

Supported pipeline types: logs

always_use_text_key

boolean

Whether to always use a text key for element content.

attr_prefix

string

The prefix to use for XML attributes in the parsed output.

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

field [required]

string

The name of the log field that contains an XML string.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

include_attr

boolean

Whether to include XML attributes in the parsed output.

parse_bool

boolean

Whether to parse boolean values from strings.

parse_null

boolean

Whether to parse null values.

parse_number

boolean

Whether to parse numeric values from strings.

text_key

string

The key name to use for text content within XML elements. Must be at least 1 character if specified.

type [required]

enum

The processor type. The value should always be parse_xml. Allowed enum values: parse_xml

default: parse_xml

Option 14

object

The quota processor measures logging traffic for logs that match a specified filter. When the configured daily quota is met, the processor can drop or alert.

Supported pipeline types: logs

display_name

string

The display name for a component.

drop_events

boolean

If set to true, logs that match the quota filter and are sent after the quota is exceeded are dropped. Logs that do not match the filter continue through the pipeline. Note: You can set either drop_events or overflow_action, but not both.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

ignore_when_missing_partitions

boolean

If true, the processor skips quota checks when partition fields are missing from the logs.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

limit [required]

object

The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.

enforce [required]

enum

Unit for quota enforcement in bytes for data size or events for count. Allowed enum values: bytes,events

limit [required]

int64

The limit for quota enforcement.

name [required]

string

Name of the quota.

overflow_action

enum

The action to take when the quota or bucket limit is exceeded. Options:

  • drop: Drop the event.
  • no_action: Let the event pass through.
  • overflow_routing: Route to an overflow destination. Allowed enum values: drop,no_action,overflow_routing

overrides

[object]

A list of alternate quota rules that apply to specific sets of events, identified by matching field values. Each override can define a custom limit.

fields [required]

[object]

A list of field matchers used to apply a specific override. If an event matches all listed key-value pairs, the corresponding override limit is enforced.

name [required]

string

The field name.

value [required]

string

The field value.

limit [required]

object

The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.

enforce [required]

enum

Unit for quota enforcement in bytes for data size or events for count. Allowed enum values: bytes,events

limit [required]

int64

The limit for quota enforcement.

partition_fields

[string]

A list of fields used to segment log traffic for quota enforcement. Quotas are tracked independently by unique combinations of these field values.

too_many_buckets_action

enum

The action to take when the quota or bucket limit is exceeded. Options:

  • drop: Drop the event.
  • no_action: Let the event pass through.
  • overflow_routing: Route to an overflow destination. Allowed enum values: drop,no_action,overflow_routing

type [required]

enum

The processor type. The value should always be quota. Allowed enum values: quota

default: quota
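
A minimal sketch of a quota processor that drops matching logs after a 10 GB daily limit; the quota name, query, and limit are illustrative placeholders:

{
  "id": "daily-quota",
  "type": "quota",
  "enabled": true,
  "include": "service:my-service",
  "name": "my-service-daily-quota",
  "drop_events": true,
  "limit": { "enforce": "bytes", "limit": 10000000000 }
}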

Option 15

object

The reduce processor aggregates and merges logs based on matching keys and merge strategies.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

group_by [required]

[string]

A list of fields used to group log events for merging.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

merge_strategies [required]

[object]

List of merge strategies defining how values from grouped events should be combined.

path [required]

string

The field path in the log event.

strategy [required]

enum

The merge strategy to apply. Allowed enum values: discard,retain,sum,max,min,array,concat,concat_newline,concat_raw,shortest_array,longest_array,flat_unique

type [required]

enum

The processor type. The value should always be reduce. Allowed enum values: reduce

default: reduce

Option 16

object

The remove_fields processor deletes specified fields from logs.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[string]

A list of field names to be removed from each log event.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be remove_fields. Allowed enum values: remove_fields

default: remove_fields

Option 17

object

The rename_fields processor changes field names.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[object]

A list of rename rules specifying which fields to rename in the event, what to rename them to, and whether to preserve the original fields.

destination [required]

string

The field name to assign the renamed value to.

preserve_source [required]

boolean

Indicates whether the original field received from the source should be kept (true) or removed (false) after renaming.

source [required]

string

The original field name in the log event that should be renamed.

id [required]

string

A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be rename_fields. Allowed enum values: rename_fields

default: rename_fields

Option 18

object

The sample processor allows probabilistic sampling of logs at a fixed rate.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

group_by

[string]

Optional list of fields to group events by. Each group is sampled independently.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

percentage [required]

double

The percentage of logs to sample.

type [required]

enum

The processor type. The value should always be sample. Allowed enum values: sample

default: sample
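
A minimal sketch of a sample processor that keeps 10% of matching logs; the query and percentage are illustrative placeholders:

{
  "id": "sample-processor",
  "type": "sample",
  "enabled": true,
  "include": "status:info",
  "percentage": 10
}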

Option 19

object

The sensitive_data_scanner processor detects and optionally redacts sensitive data in log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

rules [required]

[object]

A list of rules for identifying and acting on sensitive data patterns.

keyword_options

object

Configuration for keywords used to reinforce sensitive data pattern detection.

keywords [required]

[string]

A list of keywords to match near the sensitive pattern.

proximity [required]

int64

Maximum number of tokens between a keyword and a sensitive value match.

name [required]

string

A name identifying the rule.

on_match [required]

 <oneOf>

Defines what action to take when sensitive data is matched.

Option 1

object

Configuration for completely redacting matched sensitive data.

action [required]

enum

Action type that completely replaces the matched sensitive data with a fixed replacement string to remove all visibility. Allowed enum values: redact

options [required]

object

Configuration for fully redacting sensitive data.

replace [required]

string

The replacement string used in place of the matched sensitive data.

Option 2

object

Configuration for hashing matched sensitive values.

action [required]

enum

Action type that replaces the matched sensitive data with a hashed representation, preserving structure while securing content. Allowed enum values: hash

options

object

Options for the hash action.

Option 3

object

Configuration for partially redacting matched sensitive data.

action [required]

enum

Action type that redacts part of the sensitive data while preserving a configurable number of characters, typically used for masking purposes (e.g., show last 4 digits of a credit card). Allowed enum values: partial_redact

options [required]

object

Controls how partial redaction is applied, including character count and direction.

characters [required]

int64

The number of characters to redact.

direction [required]

enum

Indicates whether to redact characters from the first or last part of the matched value. Allowed enum values: first,last

pattern [required]

 <oneOf>

Pattern detection configuration for identifying sensitive data using either a custom regex or a library reference.

Option 1

object

Defines a custom regex-based pattern for identifying sensitive data in logs.

options [required]

object

Options for defining a custom regex pattern.

description

string

Human-readable description providing context about a sensitive data scanner rule.

rule [required]

string

A regular expression used to detect sensitive values. Must be a valid regex.

type [required]

enum

Indicates a custom regular expression is used for matching. Allowed enum values: custom

Option 2

object

Specifies a pattern from Datadog’s sensitive data detection library to match known sensitive data types.

options [required]

object

Options for selecting a predefined library pattern and enabling keyword support.

description

string

Human-readable description providing context about a sensitive data scanner rule.

id [required]

string

Identifier for a predefined pattern from the sensitive data scanner pattern library.

use_recommended_keywords

boolean

Whether to augment the pattern with recommended keywords (optional).

type [required]

enum

Indicates that a predefined library pattern is used. Allowed enum values: library

scope [required]

 <oneOf>

Determines which parts of the log the pattern-matching rule should be applied to.

Option 1

object

Includes only specific fields for sensitive data scanning.

options [required]

object

Fields to which the scope rule applies.

fields [required]

[string]

The list of fields the scope rule applies to.

target [required]

enum

Applies the rule only to included fields. Allowed enum values: include

Option 2

object

Excludes specific fields from sensitive data scanning.

options [required]

object

Fields to which the scope rule applies.

fields [required]

[string]

The list of fields the scope rule applies to.

target [required]

enum

Excludes specific fields from processing. Allowed enum values: exclude

Option 3

object

Applies scanning across all available fields.

target [required]

enum

Applies the rule to all fields. Allowed enum values: all

tags [required]

[string]

Tags assigned to this rule for filtering and classification.

type [required]

enum

The processor type. The value should always be sensitive_data_scanner. Allowed enum values: sensitive_data_scanner

default: sensitive_data_scanner

Option 20

object

The split_array processor splits array fields into separate events based on configured rules.

Supported pipeline types: logs

arrays [required]

[object]

A list of array split configurations.

field [required]

string

The path to the array field to split.

include [required]

string

A Datadog search query used to determine which logs this array split operation targets.

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets. For split_array, this should typically be *.

type [required]

enum

The processor type. The value should always be split_array. Allowed enum values: split_array

default: split_array

Option 21

object

The throttle processor limits the number of events that pass through over a given time window.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

group_by

[string]

Optional list of fields used to group events before the threshold has been reached.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

threshold [required]

int64

The number of events allowed in a given time window. Events sent after the threshold has been reached are dropped.

type [required]

enum

The processor type. The value should always be throttle. Allowed enum values: throttle

default: throttle

window [required]

double

The time window in seconds over which the threshold applies.
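
A minimal sketch of a throttle processor allowing 1,000 events per 60-second window per service; the threshold, window, and group_by values are illustrative placeholders:

{
  "id": "throttle-processor",
  "type": "throttle",
  "enabled": true,
  "include": "*",
  "threshold": 1000,
  "window": 60,
  "group_by": ["service"]
}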

Option 22

object

The metric_tags processor filters metrics based on their tags using Datadog tag key patterns.

Supported pipeline types: metrics

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query that determines which metrics the processor targets.

rules [required]

[object]

A list of rules for filtering metric tags.

action [required]

enum

The action to take on tags with matching keys. Allowed enum values: include,exclude

include [required]

string

A Datadog search query used to determine which metrics this rule targets.

keys [required]

[string]

A list of tag keys to include or exclude.

mode [required]

enum

The processing mode for tag filtering. Allowed enum values: filter

type [required]

enum

The processor type. The value should always be metric_tags. Allowed enum values: metric_tags

default: metric_tags

processors

[object]

DEPRECATED: A list of processor groups that transform or enrich log data.

Deprecated: This field is deprecated; use the processor_groups field instead.

display_name

string

The display name for a component.

enabled [required]

boolean

Whether this processor group is enabled.

id [required]

string

The unique identifier for the processor group.

include [required]

string

Conditional expression for when this processor group should execute.

inputs [required]

[string]

A list of IDs for components whose output is used as the input for this processor group.

processors [required]

[ <oneOf>]

Processors applied sequentially within this group. Events flow through each processor in order.

Option 1

object

The filter processor allows conditional processing of logs/metrics based on a Datadog search query. Logs/metrics that match the include query are passed through; others are discarded.

Supported pipeline types: logs, metrics

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs/metrics should pass through the filter. Logs/metrics that match this query continue to downstream components; others are dropped.

type [required]

enum

The processor type. The value should always be filter. Allowed enum values: filter

default: filter
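For reference, a minimal filter processor object using only the required fields above might look like this (the id and include query are illustrative and mirror the request example later on this page):

{
  "id": "filter-processor",
  "enabled": true,
  "include": "status:error",
  "type": "filter"
}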

Option 2

object

The add_env_vars processor adds environment variable values to log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used to reference this processor in the pipeline.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be add_env_vars. Allowed enum values: add_env_vars

default: add_env_vars

variables [required]

[object]

A list of environment variable mappings to apply to log fields.

field [required]

string

The target field in the log event.

name [required]

string

The name of the environment variable to read.
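A sketch of an add_env_vars processor built from the fields above; the environment variable name and target field are hypothetical examples.

{
  "id": "add-env-vars-processor",
  "enabled": true,
  "include": "*",
  "type": "add_env_vars",
  "variables": [
    {
      "field": "env.region",
      "name": "AWS_REGION"
    }
  ]
}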

Option 3

object

The add_fields processor adds static key-value fields to logs.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[object]

A list of static fields (key-value pairs) that is added to each log event processed by this component.

name [required]

string

The field name.

value [required]

string

The field value.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be add_fields. Allowed enum values: add_fields

default: add_fields
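A minimal add_fields processor sketch based on the schema above; the field name and value are placeholders.

{
  "id": "add-fields-processor",
  "enabled": true,
  "include": "*",
  "type": "add_fields",
  "fields": [
    {
      "name": "team",
      "value": "platform"
    }
  ]
}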

Option 4

object

The add_hostname processor adds the hostname to log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be add_hostname. Allowed enum values: add_hostname

default: add_hostname

Option 5

object

The custom_processor processor transforms events using Vector Remap Language (VRL) scripts with advanced filtering capabilities.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets. This field should always be set to * for the custom_processor processor.

default: *

remaps [required]

[object]

Array of VRL remap rules.

drop_on_error [required]

boolean

Whether to drop events that caused errors during processing.

enabled

boolean

Whether this remap rule is enabled.

include [required]

string

A Datadog search query used to filter events for this specific remap rule.

name [required]

string

A descriptive name for this remap rule.

source [required]

string

The VRL script source code that defines the processing logic.

type [required]

enum

The processor type. The value should always be custom_processor. Allowed enum values: custom_processor

default: custom_processor
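An illustrative custom_processor object assembled from the fields above; the remap rule name, its include query, and the one-line VRL script (which simply sets a field) are placeholder examples.

{
  "id": "custom-processor",
  "enabled": true,
  "include": "*",
  "type": "custom_processor",
  "remaps": [
    {
      "name": "mark-processed",
      "enabled": true,
      "include": "service:my-service",
      "drop_on_error": false,
      "source": ".processed = true"
    }
  ]
}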

Option 6

object

The datadog_tags processor includes or excludes specific Datadog tags in your logs.

Supported pipeline types: logs

action [required]

enum

The action to take on tags with matching keys. Allowed enum values: include,exclude

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

keys [required]

[string]

A list of tag keys.

mode [required]

enum

The processing mode. Allowed enum values: filter

type [required]

enum

The processor type. The value should always be datadog_tags. Allowed enum values: datadog_tags

default: datadog_tags

Option 7

object

The dedupe processor removes duplicate fields in log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[string]

A list of log field paths to check for duplicates.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

mode [required]

enum

The deduplication mode to apply to the fields. Allowed enum values: match,ignore

type [required]

enum

The processor type. The value should always be dedupe. Allowed enum values: dedupe

default: dedupe

Option 8

object

The enrichment_table processor enriches logs using a static CSV file, GeoIP database, or reference table. Exactly one of file, geoip, or reference_table must be configured.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

file

object

Defines a static enrichment table loaded from a CSV file.

encoding [required]

object

File encoding format.

delimiter [required]

string

The delimiter character used to separate values in the CSV file.

includes_headers [required]

boolean

Whether the CSV file includes a header row.

type [required]

enum

Specifies the encoding format (e.g., CSV) used for enrichment tables. Allowed enum values: csv

key [required]

[object]

Key fields used to look up enrichment values.

column [required]

string

The name of the column in the enrichment table used for the lookup.

comparison [required]

enum

Defines how to compare key fields for enrichment table lookups. Allowed enum values: equals

field [required]

string

The path of the log field whose value is compared against the column.

path [required]

string

Path to the CSV file.

schema [required]

[object]

Schema defining column names and their types.

column [required]

string

The name of the column as it appears in the CSV file.

type [required]

enum

Declares allowed data types for enrichment table columns. Allowed enum values: string,boolean,integer,float,date,timestamp

geoip

object

Uses a GeoIP database to enrich logs based on an IP field.

key_field [required]

string

Path to the IP field in the log.

locale [required]

string

Locale used to resolve geographical names.

path [required]

string

Path to the GeoIP database file.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

reference_table

object

Uses a Datadog reference table to enrich logs.

columns

[string]

List of column names to include from the reference table. If not provided, all columns are included.

key_field [required]

string

Path to the field in the log event to match against the reference table.

table_id [required]

string

The unique identifier of the reference table.

target [required]

string

Path where enrichment results should be stored in the log.

type [required]

enum

The processor type. The value should always be enrichment_table. Allowed enum values: enrichment_table

default: enrichment_table
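As a sketch, an enrichment_table processor configured with the geoip option (one of the three mutually exclusive options above) could look like the following; the field paths and database path are hypothetical.

{
  "id": "enrichment-table-processor",
  "enabled": true,
  "include": "*",
  "type": "enrichment_table",
  "target": "geo",
  "geoip": {
    "key_field": "network.client.ip",
    "locale": "en",
    "path": "/etc/geoip/GeoLite2-City.mmdb"
  }
}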

Option 9

object

The generate_datadog_metrics processor creates custom metrics from logs and sends them to Datadog. Metrics can be counters, gauges, or distributions and optionally grouped by log fields.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline.

include

string

A Datadog search query used to determine which logs this processor targets.

metrics

[object]

Configuration for generating individual metrics.

group_by

[string]

Optional fields used to group the metric series.

include [required]

string

Datadog filter query to match logs for metric generation.

metric_type [required]

enum

Type of metric to create. Allowed enum values: count,gauge,distribution

name [required]

string

Name of the custom metric to be created.

value [required]

 <oneOf>

Specifies how the value of the generated metric is computed.

Option 1

object

Strategy that increments a generated metric by one for each matching event.

strategy [required]

enum

Increments the metric by 1 for each matching event. Allowed enum values: increment_by_one

Option 2

object

Strategy that increments a generated metric based on the value of a log field.

field [required]

string

Name of the log field containing the numeric value to increment the metric by.

strategy [required]

enum

Uses a numeric field in the log event as the metric increment. Allowed enum values: increment_by_field

type [required]

enum

The processor type. Always generate_datadog_metrics. Allowed enum values: generate_datadog_metrics

default: generate_datadog_metrics
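A minimal generate_datadog_metrics processor sketch based on the fields above, using the increment_by_one strategy; the metric name, filter query, and group_by field are placeholders.

{
  "id": "generate-metrics-processor",
  "enabled": true,
  "include": "*",
  "type": "generate_datadog_metrics",
  "metrics": [
    {
      "name": "logs.error.count",
      "include": "status:error",
      "metric_type": "count",
      "group_by": ["service"],
      "value": {
        "strategy": "increment_by_one"
      }
    }
  ]
}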

Option 10

object

The ocsf_mapper processor transforms logs into the OCSF schema using a predefined mapping configuration.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

mappings [required]

[object]

A list of mapping rules to convert events to the OCSF format.

include [required]

string

A Datadog search query used to select the logs that this mapping should apply to.

mapping [required]

 <oneOf>

Defines a single mapping rule for transforming logs into the OCSF schema.

Option 1

enum

Predefined library mappings for common log formats. Allowed enum values: CloudTrail Account Change,GCP Cloud Audit CreateBucket,GCP Cloud Audit CreateSink,GCP Cloud Audit SetIamPolicy,GCP Cloud Audit UpdateSink,Github Audit Log API Activity,Google Workspace Admin Audit addPrivilege,Microsoft 365 Defender Incident,Microsoft 365 Defender UserLoggedIn,Okta System Log Authentication,Palo Alto Networks Firewall Traffic

type [required]

enum

The processor type. The value should always be ocsf_mapper. Allowed enum values: ocsf_mapper

default: ocsf_mapper

Option 11

object

The parse_grok processor extracts structured fields from unstructured log messages using Grok patterns.

Supported pipeline types: logs

disable_library_rules

boolean

If set to true, disables the default Grok rules provided by Datadog.

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

A unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

rules [required]

[object]

The list of Grok parsing rules. If multiple matching rules are provided, they are evaluated in order. The first successful match is applied.

match_rules [required]

[object]

A list of Grok parsing rules that define how to extract fields from the source field. Each rule must contain a name and a valid Grok pattern.

name [required]

string

The name of the rule.

rule [required]

string

The definition of the Grok rule.

source [required]

string

The name of the field in the log event to apply the Grok rules to.

support_rules

[object]

A list of Grok helper rules that can be referenced by the parsing rules.

name [required]

string

The name of the Grok helper rule.

rule [required]

string

The definition of the Grok helper rule.

type [required]

enum

The processor type. The value should always be parse_grok. Allowed enum values: parse_grok

default: parse_grok
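An illustrative parse_grok processor built from the schema above; the rule name and Grok pattern are simplified placeholders, not a recommended pattern.

{
  "id": "parse-grok-processor",
  "enabled": true,
  "include": "source:nginx",
  "type": "parse_grok",
  "disable_library_rules": false,
  "rules": [
    {
      "source": "message",
      "match_rules": [
        {
          "name": "level_and_body",
          "rule": "%{word:level} %{data:message_body}"
        }
      ]
    }
  ]
}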

Option 12

object

The parse_json processor extracts JSON from a specified field and flattens it into the event. This is useful when logs contain embedded JSON as a string.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

field [required]

string

The name of the log field that contains a JSON string.

id [required]

string

A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be parse_json. Allowed enum values: parse_json

default: parse_json
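A minimal parse_json processor sketch using only the fields listed above; the source field name is a placeholder.

{
  "id": "parse-json-processor",
  "enabled": true,
  "include": "*",
  "field": "message",
  "type": "parse_json"
}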

Option 13

object

The parse_xml processor parses XML from a specified field and extracts it into the event.

Supported pipeline types: logs

always_use_text_key

boolean

Whether to always use a text key for element content.

attr_prefix

string

The prefix to use for XML attributes in the parsed output.

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

field [required]

string

The name of the log field that contains an XML string.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

include_attr

boolean

Whether to include XML attributes in the parsed output.

parse_bool

boolean

Whether to parse boolean values from strings.

parse_null

boolean

Whether to parse null values.

parse_number

boolean

Whether to parse numeric values from strings.

text_key

string

The key name to use for text content within XML elements. Must be at least 1 character if specified.

type [required]

enum

The processor type. The value should always be parse_xml. Allowed enum values: parse_xml

default: parse_xml

Option 14

object

The quota processor measures logging traffic for logs that match a specified filter. When the configured daily quota is met, the processor can drop or alert.

Supported pipeline types: logs

display_name

string

The display name for a component.

drop_events

boolean

If set to true, logs that match the quota filter and are sent after the quota is exceeded are dropped. Logs that do not match the filter continue through the pipeline. Note: You can set either drop_events or overflow_action, but not both.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

ignore_when_missing_partitions

boolean

If true, the processor skips quota checks when partition fields are missing from the logs.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

limit [required]

object

The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.

enforce [required]

enum

Unit for quota enforcement in bytes for data size or events for count. Allowed enum values: bytes,events

limit [required]

int64

The limit for quota enforcement.

name [required]

string

Name of the quota.

overflow_action

enum

The action to take when the quota or bucket limit is exceeded. Options:

  • drop: Drop the event.
  • no_action: Let the event pass through.
  • overflow_routing: Route to an overflow destination. Allowed enum values: drop,no_action,overflow_routing

overrides

[object]

A list of alternate quota rules that apply to specific sets of events, identified by matching field values. Each override can define a custom limit.

fields [required]

[object]

A list of field matchers used to apply a specific override. If an event matches all listed key-value pairs, the corresponding override limit is enforced.

name [required]

string

The field name.

value [required]

string

The field value.

limit [required]

object

The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.

enforce [required]

enum

Unit for quota enforcement in bytes for data size or events for count. Allowed enum values: bytes,events

limit [required]

int64

The limit for quota enforcement.

partition_fields

[string]

A list of fields used to segment log traffic for quota enforcement. Quotas are tracked independently by unique combinations of these field values.

too_many_buckets_action

enum

The action to take when the quota or bucket limit is exceeded. Options:

  • drop: Drop the event.
  • no_action: Let the event pass through.
  • overflow_routing: Route to an overflow destination. Allowed enum values: drop,no_action,overflow_routing

type [required]

enum

The processor type. The value should always be quota. Allowed enum values: quota

default: quota
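A sketch of a quota processor that drops matching logs once a daily byte limit is reached (note that drop_events and overflow_action are mutually exclusive, so only drop_events is set here); the quota name, limit, and partition field are illustrative values.

{
  "id": "quota-processor",
  "enabled": true,
  "include": "service:my-service",
  "name": "daily-intake-quota",
  "drop_events": true,
  "limit": {
    "enforce": "bytes",
    "limit": 10000000000
  },
  "partition_fields": ["service"],
  "type": "quota"
}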

Option 15

object

The reduce processor aggregates and merges logs based on matching keys and merge strategies.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

group_by [required]

[string]

A list of fields used to group log events for merging.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

merge_strategies [required]

[object]

List of merge strategies defining how values from grouped events should be combined.

path [required]

string

The field path in the log event.

strategy [required]

enum

The merge strategy to apply. Allowed enum values: discard,retain,sum,max,min,array,concat,concat_newline,concat_raw,shortest_array,longest_array,flat_unique

type [required]

enum

The processor type. The value should always be reduce. Allowed enum values: reduce

default: reduce
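An illustrative reduce processor based on the fields above; the group_by fields and merge strategy are placeholder choices.

{
  "id": "reduce-processor",
  "enabled": true,
  "include": "*",
  "group_by": ["host", "service"],
  "merge_strategies": [
    {
      "path": "message",
      "strategy": "concat_newline"
    }
  ],
  "type": "reduce"
}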

Option 16

object

The remove_fields processor deletes specified fields from logs.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[string]

A list of field names to be removed from each log event.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be remove_fields. Allowed enum values: remove_fields

default: remove_fields

Option 17

object

The rename_fields processor changes field names.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[object]

A list of rename rules specifying which fields to rename in the event, what to rename them to, and whether to preserve the original fields.

destination [required]

string

The field name to assign the renamed value to.

preserve_source [required]

boolean

Indicates whether the original field received from the source should be kept (true) or removed (false) after renaming.

source [required]

string

The original field name in the log event that should be renamed.

id [required]

string

A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be rename_fields. Allowed enum values: rename_fields

default: rename_fields
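A minimal rename_fields processor sketch; the source and destination field names are hypothetical.

{
  "id": "rename-fields-processor",
  "enabled": true,
  "include": "*",
  "type": "rename_fields",
  "fields": [
    {
      "source": "msg",
      "destination": "message",
      "preserve_source": false
    }
  ]
}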

Option 18

object

The sample processor allows probabilistic sampling of logs at a fixed rate.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

group_by

[string]

Optional list of fields to group events by. Each group is sampled independently.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

percentage [required]

double

The percentage of logs to sample.

type [required]

enum

The processor type. The value should always be sample. Allowed enum values: sample

default: sample

Option 19

object

The sensitive_data_scanner processor detects and optionally redacts sensitive data in log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

rules [required]

[object]

A list of rules for identifying and acting on sensitive data patterns.

keyword_options

object

Configuration for keywords used to reinforce sensitive data pattern detection.

keywords [required]

[string]

A list of keywords to match near the sensitive pattern.

proximity [required]

int64

Maximum number of tokens between a keyword and a sensitive value match.

name [required]

string

A name identifying the rule.

on_match [required]

 <oneOf>

Defines what action to take when sensitive data is matched.

Option 1

object

Configuration for completely redacting matched sensitive data.

action [required]

enum

Action type that completely replaces the matched sensitive data with a fixed replacement string to remove all visibility. Allowed enum values: redact

options [required]

object

Configuration for fully redacting sensitive data.

replace [required]

string

The string used to replace matched sensitive data.

Option 2

object

Configuration for hashing matched sensitive values.

action [required]

enum

Action type that replaces the matched sensitive data with a hashed representation, preserving structure while securing content. Allowed enum values: hash

options

object

Options for the hash action.

Option 3

object

Configuration for partially redacting matched sensitive data.

action [required]

enum

Action type that redacts part of the sensitive data while preserving a configurable number of characters, typically used for masking purposes (e.g., show last 4 digits of a credit card). Allowed enum values: partial_redact

options [required]

object

Controls how partial redaction is applied, including character count and direction.

characters [required]

int64

The number of characters to redact from the matched value.

direction [required]

enum

Indicates whether to redact characters from the first or last part of the matched value. Allowed enum values: first,last

pattern [required]

 <oneOf>

Pattern detection configuration for identifying sensitive data using either a custom regex or a library reference.

Option 1

object

Defines a custom regex-based pattern for identifying sensitive data in logs.

options [required]

object

Options for defining a custom regex pattern.

description

string

Human-readable description providing context about a sensitive data scanner rule.

rule [required]

string

A regular expression used to detect sensitive values. Must be a valid regex.

type [required]

enum

Indicates a custom regular expression is used for matching. Allowed enum values: custom

Option 2

object

Specifies a pattern from Datadog’s sensitive data detection library to match known sensitive data types.

options [required]

object

Options for selecting a predefined library pattern and enabling keyword support.

description

string

Human-readable description providing context about a sensitive data scanner rule.

id [required]

string

Identifier for a predefined pattern from the sensitive data scanner pattern library.

use_recommended_keywords

boolean

Whether to augment the pattern with recommended keywords (optional).

type [required]

enum

Indicates that a predefined library pattern is used. Allowed enum values: library

scope [required]

 <oneOf>

Determines which parts of the log the pattern-matching rule should be applied to.

Option 1

object

Includes only specific fields for sensitive data scanning.

options [required]

object

Fields to which the scope rule applies.

fields [required]

[string]

A list of log field paths to which the scope rule applies.

target [required]

enum

Applies the rule only to included fields. Allowed enum values: include

Option 2

object

Excludes specific fields from sensitive data scanning.

options [required]

object

Fields to which the scope rule applies.

fields [required]

[string]

A list of log field paths to which the scope rule applies.

target [required]

enum

Excludes specific fields from processing. Allowed enum values: exclude

Option 3

object

Applies scanning across all available fields.

target [required]

enum

Applies the rule to all fields. Allowed enum values: all

tags [required]

[string]

Tags assigned to this rule for filtering and classification.

type [required]

enum

The processor type. The value should always be sensitive_data_scanner. Allowed enum values: sensitive_data_scanner

default: sensitive_data_scanner
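A sketch of a sensitive_data_scanner processor combining a custom regex pattern, an all-fields scope, and a redact action from the options above; the rule name, regex, tag, and replacement string are illustrative only.

{
  "id": "sds-processor",
  "enabled": true,
  "include": "*",
  "type": "sensitive_data_scanner",
  "rules": [
    {
      "name": "redact-api-keys",
      "tags": ["sensitive:api-key"],
      "pattern": {
        "type": "custom",
        "options": {
          "rule": "api_key=[A-Za-z0-9]{32}"
        }
      },
      "scope": {
        "target": "all"
      },
      "on_match": {
        "action": "redact",
        "options": {
          "replace": "[REDACTED]"
        }
      }
    }
  ]
}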

Option 20

object

The split_array processor splits array fields into separate events based on configured rules.

Supported pipeline types: logs

arrays [required]

[object]

A list of array split configurations.

field [required]

string

The path to the array field to split.

include [required]

string

A Datadog search query used to determine which logs this array split operation targets.

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets. For split_array, this should typically be *.

type [required]

enum

The processor type. The value should always be split_array. Allowed enum values: split_array

default: split_array
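A minimal split_array processor sketch based on the fields above; the array field path is a placeholder.

{
  "id": "split-array-processor",
  "enabled": true,
  "include": "*",
  "type": "split_array",
  "arrays": [
    {
      "field": "records",
      "include": "*"
    }
  ]
}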

Option 21

object

The throttle processor limits the number of events that pass through over a given time window.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

group_by

[string]

Optional list of fields used to group events before the threshold has been reached.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

threshold [required]

int64

The number of events allowed in a given time window. Events sent after the threshold has been reached are dropped.

type [required]

enum

The processor type. The value should always be throttle. Allowed enum values: throttle

default: throttle

window [required]

double

The time window in seconds over which the threshold applies.

Option 22

object

The metric_tags processor filters metrics based on their tags using Datadog tag key patterns.

Supported pipeline types: metrics

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query that determines which metrics the processor targets.

rules [required]

[object]

A list of rules for filtering metric tags.

action [required]

enum

The action to take on tags with matching keys. Allowed enum values: include,exclude

include [required]

string

A Datadog search query used to determine which metrics this rule targets.

keys [required]

[string]

A list of tag keys to include or exclude.

mode [required]

enum

The processing mode for tag filtering. Allowed enum values: filter

type [required]

enum

The processor type. The value should always be metric_tags. Allowed enum values: metric_tags

default: metric_tags

sources [required]

[ <oneOf>]

A list of configured data sources for the pipeline.

Option 1

object

The datadog_agent source collects logs/metrics from the Datadog Agent.

Supported pipeline types: logs, metrics

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be datadog_agent. Allowed enum values: datadog_agent

default: datadog_agent

Option 2

object

The amazon_data_firehose source ingests logs from AWS Data Firehose.

Supported pipeline types: logs

auth

object

AWS authentication credentials used for accessing AWS services such as S3. If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).

assume_role

string

The Amazon Resource Name (ARN) of the role to assume.

external_id

string

A unique identifier for cross-account role assumption.

session_name

string

A session identifier used for logging and tracing the assumed role session.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be amazon_data_firehose. Allowed enum values: amazon_data_firehose

default: amazon_data_firehose

Option 3

object

The amazon_s3 source ingests logs from an Amazon S3 bucket. It supports AWS authentication and TLS encryption.

Supported pipeline types: logs

auth

object

AWS authentication credentials used for accessing AWS services such as S3. If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).

assume_role

string

The Amazon Resource Name (ARN) of the role to assume.

external_id

string

A unique identifier for cross-account role assumption.

session_name

string

A session identifier used for logging and tracing the assumed role session.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

region [required]

string

AWS region where the S3 bucket resides.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. Always amazon_s3. Allowed enum values: amazon_s3

default: amazon_s3

Option 4

object

The fluent_bit source ingests logs from Fluent Bit.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be fluent_bit. Allowed enum values: fluent_bit

default: fluent_bit

Option 5

object

The fluentd source ingests logs from a Fluentd-compatible service.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be fluentd. Allowed enum values: fluentd

default: fluentd

Option 6

object

The google_pubsub source ingests logs from a Google Cloud Pub/Sub subscription.

Supported pipeline types: logs

auth

object

GCP credentials used to authenticate with Google Cloud Storage.

credentials_file [required]

string

Path to the GCP service account key file.

decoding [required]

enum

The decoding format used to interpret incoming logs. Allowed enum values: bytes,gelf,json,syslog

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

project [required]

string

The GCP project ID that owns the Pub/Sub subscription.

subscription [required]

string

The Pub/Sub subscription name from which messages are consumed.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be google_pubsub. Allowed enum values: google_pubsub

default: google_pubsub

Option 7

object

The http_client source scrapes logs from HTTP endpoints at regular intervals.

Supported pipeline types: logs

auth_strategy

enum

Optional authentication strategy for HTTP requests. Allowed enum values: none,basic,bearer,custom

decoding [required]

enum

The decoding format used to interpret incoming logs. Allowed enum values: bytes,gelf,json,syslog

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

scrape_interval_secs

int64

The interval (in seconds) between HTTP scrape requests.

scrape_timeout_secs

int64

The timeout (in seconds) for each scrape request.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be http_client. Allowed enum values: http_client

default: http_client

Option 8

object

The http_server source collects logs over HTTP POST from external services.

Supported pipeline types: logs

auth_strategy [required]

enum

HTTP authentication method. Allowed enum values: none,plain

decoding [required]

enum

The decoding format used to interpret incoming logs. Allowed enum values: bytes,gelf,json,syslog

id [required]

string

Unique ID for the HTTP server source.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be http_server. Allowed enum values: http_server

default: http_server

Option 9

object

The kafka source ingests data from Apache Kafka topics.

Supported pipeline types: logs

group_id [required]

string

Consumer group ID used by the Kafka client.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

librdkafka_options

[object]

Optional list of advanced Kafka client configuration options, defined as key-value pairs.

name [required]

string

The name of the librdkafka configuration option to set.

value [required]

string

The value assigned to the specified librdkafka configuration option.

sasl

object

Specifies the SASL mechanism for authenticating with a Kafka cluster.

mechanism

enum

SASL mechanism used for Kafka authentication. Allowed enum values: PLAIN,SCRAM-SHA-256,SCRAM-SHA-512

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

topics [required]

[string]

A list of Kafka topic names to subscribe to. The source ingests messages from each topic specified.

type [required]

enum

The source type. The value should always be kafka. Allowed enum values: kafka

default: kafka
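An illustrative kafka source assembled from the fields above; the consumer group, topic name, and librdkafka option value are hypothetical examples.

{
  "id": "kafka-source",
  "type": "kafka",
  "group_id": "observability-pipelines",
  "topics": ["app-logs"],
  "sasl": {
    "mechanism": "SCRAM-SHA-256"
  },
  "librdkafka_options": [
    {
      "name": "fetch.message.max.bytes",
      "value": "1048576"
    }
  ]
}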

Option 10

object

The logstash source ingests logs from a Logstash forwarder.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be logstash. Allowed enum values: logstash

default: logstash

Option 11

object

The rsyslog source listens for logs over TCP or UDP from an rsyslog server using the syslog protocol.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

mode [required]

enum

Protocol used by the syslog source to receive messages. Allowed enum values: tcp,udp

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be rsyslog. Allowed enum values: rsyslog

default: rsyslog

Option 12

object

The socket source ingests logs over TCP or UDP.

Supported pipeline types: logs

framing [required]

 <oneOf>

Framing method configuration for the socket source.

Option 1

object

Byte frames which are delimited by a newline character.

method [required]

enum

Byte frames which are delimited by a newline character. Allowed enum values: newline_delimited

Option 2

object

Byte frames are passed through as-is according to the underlying I/O boundaries (for example, split between messages or stream segments).

method [required]

enum

Byte frames are passed through as-is according to the underlying I/O boundaries (for example, split between messages or stream segments). Allowed enum values: bytes

Option 3

object

Byte frames which are delimited by a chosen character.

delimiter [required]

string

A single ASCII character used to delimit events.

method [required]

enum

Byte frames which are delimited by a chosen character. Allowed enum values: character_delimited

Option 4

object

Byte frames according to the octet counting format as per RFC6587.

method [required]

enum

Byte frames according to the octet counting format as per RFC6587. Allowed enum values: octet_counting

Option 5

object

Byte frames which are chunked GELF messages.

method [required]

enum

Byte frames which are chunked GELF messages. Allowed enum values: chunked_gelf

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

mode [required]

enum

Protocol used to receive logs. Allowed enum values: tcp,udp

tls

object

TLS configuration. Relevant only when mode is tcp.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be socket. Allowed enum values: socket

default: socket
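A minimal socket source sketch using newline-delimited framing over TCP; the component id is a placeholder, and TLS is omitted since it is only relevant for tcp mode and optional.

{
  "id": "socket-source",
  "type": "socket",
  "mode": "tcp",
  "framing": {
    "method": "newline_delimited"
  }
}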

Option 13

object

The splunk_hec source implements the Splunk HTTP Event Collector (HEC) API.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. Always splunk_hec. Allowed enum values: splunk_hec

default: splunk_hec

Option 14

object

The splunk_tcp source receives logs from a Splunk Universal Forwarder over TCP. TLS is supported for secure transmission.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. Always splunk_tcp. Allowed enum values: splunk_tcp

default: splunk_tcp

Option 15

object

The sumo_logic source receives logs from Sumo Logic collectors.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

type [required]

enum

The source type. The value should always be sumo_logic. Allowed enum values: sumo_logic

default: sumo_logic

Option 16

object

The syslog_ng source listens for logs over TCP or UDP from a syslog-ng server using the syslog protocol.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

mode [required]

enum

Protocol used by the syslog source to receive messages. Allowed enum values: tcp,udp

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be syslog_ng. Allowed enum values: syslog_ng

default: syslog_ng

Option 17

object

The opentelemetry source receives telemetry data using the OpenTelemetry Protocol (OTLP) over gRPC and HTTP.

Supported pipeline types: logs

grpc_address_key

string

Environment variable name containing the gRPC server address for receiving OTLP data. Must be a valid environment variable name (alphanumeric characters and underscores only).

http_address_key

string

Environment variable name containing the HTTP server address for receiving OTLP data. Must be a valid environment variable name (alphanumeric characters and underscores only).

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be opentelemetry. Allowed enum values: opentelemetry

default: opentelemetry

name [required]

string

Name of the pipeline.

id [required]

string

Unique identifier for the pipeline.

type [required]

string

The resource type identifier. For pipeline resources, this should always be set to pipelines.

default: pipelines

{
  "data": {
    "attributes": {
      "config": {
        "destinations": [
          {
            "auth_strategy": "basic",
            "compression": {
              "algorithm": "gzip"
            },
            "encoding": "json",
            "id": "http-client-destination",
            "inputs": [
              "filter-processor"
            ],
            "tls": {
              "ca_file": "string",
              "crt_file": "/path/to/cert.crt",
              "key_file": "string"
            },
            "type": "http_client"
          }
        ],
        "pipeline_type": "logs",
        "processor_groups": [
          {
            "display_name": "my component",
            "enabled": true,
            "id": "grouped-processors",
            "include": "service:my-service",
            "inputs": [
              "datadog-agent-source"
            ],
            "processors": [
              []
            ]
          }
        ],
        "processors": [
          {
            "display_name": "my component",
            "enabled": true,
            "id": "grouped-processors",
            "include": "service:my-service",
            "inputs": [
              "datadog-agent-source"
            ],
            "processors": [
              []
            ]
          }
        ],
        "sources": [
          {
            "id": "datadog-agent-source",
            "tls": {
              "ca_file": "string",
              "crt_file": "/path/to/cert.crt",
              "key_file": "string"
            },
            "type": "datadog_agent"
          }
        ]
      },
      "name": "Main Observability Pipeline"
    },
    "id": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
    "type": "pipelines"
  }
}

Bad Request

API error response.

Expand All

Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Not Authorized

API error response.

Expand All

Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Conflict

API error response.

Expand All

Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Too many requests

API error response.

Expand All

Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Code examples

# Curl command
# Replace api.datadoghq.com with the endpoint for your Datadog site.
curl -X POST "https://api.datadoghq.com/api/v2/obs-pipelines/pipelines" \
  -H "Accept: application/json" \
  -H "Content-Type: application/json" \
  -H "DD-API-KEY: ${DD_API_KEY}" \
  -H "DD-APPLICATION-KEY: ${DD_APP_KEY}" \
  -d @- << EOF
{
  "data": {
    "attributes": {
      "config": {
        "destinations": [
          {
            "id": "datadog-logs-destination",
            "inputs": [
              "my-processor-group"
            ],
            "type": "datadog_logs"
          }
        ],
        "processor_groups": [
          {
            "enabled": true,
            "id": "my-processor-group",
            "include": "service:my-service",
            "inputs": [
              "datadog-agent-source"
            ],
            "processors": [
              {
                "enabled": true,
                "id": "filter-processor",
                "include": "status:error",
                "type": "filter"
              }
            ]
          }
        ],
        "sources": [
          {
            "id": "datadog-agent-source",
            "type": "datadog_agent"
          }
        ]
      },
      "name": "Main Observability Pipeline"
    },
    "type": "pipelines"
  }
}
EOF
// Create a new pipeline returns "OK" response

package main

import (
	"context"
	"encoding/json"
	"fmt"
	"os"

	"github.com/DataDog/datadog-api-client-go/v2/api/datadog"
	"github.com/DataDog/datadog-api-client-go/v2/api/datadogV2"
)

func main() {
	body := datadogV2.ObservabilityPipelineSpec{
		Data: datadogV2.ObservabilityPipelineSpecData{
			Attributes: datadogV2.ObservabilityPipelineDataAttributes{
				Config: datadogV2.ObservabilityPipelineConfig{
					Destinations: []datadogV2.ObservabilityPipelineConfigDestinationItem{
						datadogV2.ObservabilityPipelineConfigDestinationItem{
							ObservabilityPipelineDatadogLogsDestination: &datadogV2.ObservabilityPipelineDatadogLogsDestination{
								Id: "datadog-logs-destination",
								Inputs: []string{
									"my-processor-group",
								},
								Type: datadogV2.OBSERVABILITYPIPELINEDATADOGLOGSDESTINATIONTYPE_DATADOG_LOGS,
							}},
					},
					Processors: []datadogV2.ObservabilityPipelineConfigProcessorGroup{
						{
							Enabled: true,
							Id:      "my-processor-group",
							Include: "service:my-service",
							Inputs: []string{
								"datadog-agent-source",
							},
							Processors: []datadogV2.ObservabilityPipelineConfigProcessorItem{
								datadogV2.ObservabilityPipelineConfigProcessorItem{
									ObservabilityPipelineFilterProcessor: &datadogV2.ObservabilityPipelineFilterProcessor{
										Enabled: true,
										Id:      "filter-processor",
										Include: "status:error",
										Type:    datadogV2.OBSERVABILITYPIPELINEFILTERPROCESSORTYPE_FILTER,
									}},
							},
						},
					},
					Sources: []datadogV2.ObservabilityPipelineConfigSourceItem{
						datadogV2.ObservabilityPipelineConfigSourceItem{
							ObservabilityPipelineDatadogAgentSource: &datadogV2.ObservabilityPipelineDatadogAgentSource{
								Id:   "datadog-agent-source",
								Type: datadogV2.OBSERVABILITYPIPELINEDATADOGAGENTSOURCETYPE_DATADOG_AGENT,
							}},
					},
				},
				Name: "Main Observability Pipeline",
			},
			Type: "pipelines",
		},
	}
	ctx := datadog.NewDefaultContext(context.Background())
	configuration := datadog.NewConfiguration()
	configuration.SetUnstableOperationEnabled("v2.CreatePipeline", true)
	apiClient := datadog.NewAPIClient(configuration)
	api := datadogV2.NewObservabilityPipelinesApi(apiClient)
	resp, r, err := api.CreatePipeline(ctx, body)

	if err != nil {
		fmt.Fprintf(os.Stderr, "Error when calling `ObservabilityPipelinesApi.CreatePipeline`: %v\n", err)
		fmt.Fprintf(os.Stderr, "Full HTTP response: %v\n", r)
	}

	responseContent, _ := json.MarshalIndent(resp, "", "  ")
	fmt.Fprintf(os.Stdout, "Response from `ObservabilityPipelinesApi.CreatePipeline`:\n%s\n", responseContent)
}

Instructions

First install the library and its dependencies, then save the example to main.go and run the following commands:

    
DD_SITE="datadoghq.comus3.datadoghq.comus5.datadoghq.comdatadoghq.euap1.datadoghq.comap2.datadoghq.comddog-gov.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" go run "main.go"
// Create a new pipeline returns "OK" response

import com.datadog.api.client.ApiClient;
import com.datadog.api.client.ApiException;
import com.datadog.api.client.v2.api.ObservabilityPipelinesApi;
import com.datadog.api.client.v2.model.ObservabilityPipeline;
import com.datadog.api.client.v2.model.ObservabilityPipelineConfig;
import com.datadog.api.client.v2.model.ObservabilityPipelineConfigDestinationItem;
import com.datadog.api.client.v2.model.ObservabilityPipelineConfigProcessorGroup;
import com.datadog.api.client.v2.model.ObservabilityPipelineConfigProcessorItem;
import com.datadog.api.client.v2.model.ObservabilityPipelineConfigSourceItem;
import com.datadog.api.client.v2.model.ObservabilityPipelineDataAttributes;
import com.datadog.api.client.v2.model.ObservabilityPipelineDatadogAgentSource;
import com.datadog.api.client.v2.model.ObservabilityPipelineDatadogAgentSourceType;
import com.datadog.api.client.v2.model.ObservabilityPipelineDatadogLogsDestination;
import com.datadog.api.client.v2.model.ObservabilityPipelineDatadogLogsDestinationType;
import com.datadog.api.client.v2.model.ObservabilityPipelineFilterProcessor;
import com.datadog.api.client.v2.model.ObservabilityPipelineFilterProcessorType;
import com.datadog.api.client.v2.model.ObservabilityPipelineSpec;
import com.datadog.api.client.v2.model.ObservabilityPipelineSpecData;
import java.util.Collections;

public class Example {
  public static void main(String[] args) {
    ApiClient defaultClient = ApiClient.getDefaultApiClient();
    defaultClient.setUnstableOperationEnabled("v2.createPipeline", true);
    ObservabilityPipelinesApi apiInstance = new ObservabilityPipelinesApi(defaultClient);

    ObservabilityPipelineSpec body =
        new ObservabilityPipelineSpec()
            .data(
                new ObservabilityPipelineSpecData()
                    .attributes(
                        new ObservabilityPipelineDataAttributes()
                            .config(
                                new ObservabilityPipelineConfig()
                                    .destinations(
                                        Collections.singletonList(
                                            new ObservabilityPipelineConfigDestinationItem(
                                                new ObservabilityPipelineDatadogLogsDestination()
                                                    .id("datadog-logs-destination")
                                                    .inputs(
                                                        Collections.singletonList(
                                                            "my-processor-group"))
                                                    .type(
                                                        ObservabilityPipelineDatadogLogsDestinationType
                                                            .DATADOG_LOGS))))
                                    .processors(
                                        Collections.singletonList(
                                            new ObservabilityPipelineConfigProcessorGroup()
                                                .enabled(true)
                                                .id("my-processor-group")
                                                .include("service:my-service")
                                                .inputs(
                                                    Collections.singletonList(
                                                        "datadog-agent-source"))
                                                .processors(
                                                    Collections.singletonList(
                                                        new ObservabilityPipelineConfigProcessorItem(
                                                            new ObservabilityPipelineFilterProcessor()
                                                                .enabled(true)
                                                                .id("filter-processor")
                                                                .include("status:error")
                                                                .type(
                                                                    ObservabilityPipelineFilterProcessorType
                                                                        .FILTER))))))
                                    .sources(
                                        Collections.singletonList(
                                            new ObservabilityPipelineConfigSourceItem(
                                                new ObservabilityPipelineDatadogAgentSource()
                                                    .id("datadog-agent-source")
                                                    .type(
                                                        ObservabilityPipelineDatadogAgentSourceType
                                                            .DATADOG_AGENT)))))
                            .name("Main Observability Pipeline"))
                    .type("pipelines"));

    try {
      ObservabilityPipeline result = apiInstance.createPipeline(body);
      System.out.println(result);
    } catch (ApiException e) {
      System.err.println("Exception when calling ObservabilityPipelinesApi#createPipeline");
      System.err.println("Status code: " + e.getCode());
      System.err.println("Reason: " + e.getResponseBody());
      System.err.println("Response headers: " + e.getResponseHeaders());
      e.printStackTrace();
    }
  }
}

Instructions

First install the library and its dependencies, then save the example to Example.java and run the following command:

    
DD_SITE="datadoghq.comus3.datadoghq.comus5.datadoghq.comdatadoghq.euap1.datadoghq.comap2.datadoghq.comddog-gov.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" java "Example.java"
"""
Create a new pipeline returns "OK" response
"""

from datadog_api_client import ApiClient, Configuration
from datadog_api_client.v2.api.observability_pipelines_api import ObservabilityPipelinesApi
from datadog_api_client.v2.model.observability_pipeline_config import ObservabilityPipelineConfig
from datadog_api_client.v2.model.observability_pipeline_config_processor_group import (
    ObservabilityPipelineConfigProcessorGroup,
)
from datadog_api_client.v2.model.observability_pipeline_data_attributes import ObservabilityPipelineDataAttributes
from datadog_api_client.v2.model.observability_pipeline_datadog_agent_source import (
    ObservabilityPipelineDatadogAgentSource,
)
from datadog_api_client.v2.model.observability_pipeline_datadog_agent_source_type import (
    ObservabilityPipelineDatadogAgentSourceType,
)
from datadog_api_client.v2.model.observability_pipeline_datadog_logs_destination import (
    ObservabilityPipelineDatadogLogsDestination,
)
from datadog_api_client.v2.model.observability_pipeline_datadog_logs_destination_type import (
    ObservabilityPipelineDatadogLogsDestinationType,
)
from datadog_api_client.v2.model.observability_pipeline_filter_processor import ObservabilityPipelineFilterProcessor
from datadog_api_client.v2.model.observability_pipeline_filter_processor_type import (
    ObservabilityPipelineFilterProcessorType,
)
from datadog_api_client.v2.model.observability_pipeline_spec import ObservabilityPipelineSpec
from datadog_api_client.v2.model.observability_pipeline_spec_data import ObservabilityPipelineSpecData

body = ObservabilityPipelineSpec(
    data=ObservabilityPipelineSpecData(
        attributes=ObservabilityPipelineDataAttributes(
            config=ObservabilityPipelineConfig(
                destinations=[
                    ObservabilityPipelineDatadogLogsDestination(
                        id="datadog-logs-destination",
                        inputs=[
                            "my-processor-group",
                        ],
                        type=ObservabilityPipelineDatadogLogsDestinationType.DATADOG_LOGS,
                    ),
                ],
                processors=[
                    ObservabilityPipelineConfigProcessorGroup(
                        enabled=True,
                        id="my-processor-group",
                        include="service:my-service",
                        inputs=[
                            "datadog-agent-source",
                        ],
                        processors=[
                            ObservabilityPipelineFilterProcessor(
                                enabled=True,
                                id="filter-processor",
                                include="status:error",
                                type=ObservabilityPipelineFilterProcessorType.FILTER,
                            ),
                        ],
                    ),
                ],
                sources=[
                    ObservabilityPipelineDatadogAgentSource(
                        id="datadog-agent-source",
                        type=ObservabilityPipelineDatadogAgentSourceType.DATADOG_AGENT,
                    ),
                ],
            ),
            name="Main Observability Pipeline",
        ),
        type="pipelines",
    ),
)

configuration = Configuration()
configuration.unstable_operations["create_pipeline"] = True
with ApiClient(configuration) as api_client:
    api_instance = ObservabilityPipelinesApi(api_client)
    response = api_instance.create_pipeline(body=body)

    print(response)

Instructions

First install the library and its dependencies, then save the example to example.py and run the following command:

    
DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" python3 "example.py"
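If the client library is not yet installed, it is available on PyPI under the name matching the datadog_api_client import above:

# Install the Datadog API client used by the example.
pip install datadog-api-client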
# Create a new pipeline returns "OK" response

require "datadog_api_client"
DatadogAPIClient.configure do |config|
  config.unstable_operations["v2.create_pipeline".to_sym] = true
end
api_instance = DatadogAPIClient::V2::ObservabilityPipelinesAPI.new

body = DatadogAPIClient::V2::ObservabilityPipelineSpec.new({
  data: DatadogAPIClient::V2::ObservabilityPipelineSpecData.new({
    attributes: DatadogAPIClient::V2::ObservabilityPipelineDataAttributes.new({
      config: DatadogAPIClient::V2::ObservabilityPipelineConfig.new({
        destinations: [
          DatadogAPIClient::V2::ObservabilityPipelineDatadogLogsDestination.new({
            id: "datadog-logs-destination",
            inputs: [
              "my-processor-group",
            ],
            type: DatadogAPIClient::V2::ObservabilityPipelineDatadogLogsDestinationType::DATADOG_LOGS,
          }),
        ],
        processors: [
          DatadogAPIClient::V2::ObservabilityPipelineConfigProcessorGroup.new({
            enabled: true,
            id: "my-processor-group",
            include: "service:my-service",
            inputs: [
              "datadog-agent-source",
            ],
            processors: [
              DatadogAPIClient::V2::ObservabilityPipelineFilterProcessor.new({
                enabled: true,
                id: "filter-processor",
                include: "status:error",
                type: DatadogAPIClient::V2::ObservabilityPipelineFilterProcessorType::FILTER,
              }),
            ],
          }),
        ],
        sources: [
          DatadogAPIClient::V2::ObservabilityPipelineDatadogAgentSource.new({
            id: "datadog-agent-source",
            type: DatadogAPIClient::V2::ObservabilityPipelineDatadogAgentSourceType::DATADOG_AGENT,
          }),
        ],
      }),
      name: "Main Observability Pipeline",
    }),
    type: "pipelines",
  }),
})
p api_instance.create_pipeline(body)

Instructions

First install the library and its dependencies, then save the example to example.rb and run the following command:

    
DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" ruby "example.rb"
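If the client gem is not yet installed, a sketch of the install step (assuming the gem is published under the same name as the require statement, datadog_api_client):

# Install the Datadog API client gem used by the example (gem name assumed from the require).
gem install datadog_api_client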
// Create a new pipeline returns "OK" response
use datadog_api_client::datadog;
use datadog_api_client::datadogV2::api_observability_pipelines::ObservabilityPipelinesAPI;
use datadog_api_client::datadogV2::model::ObservabilityPipelineConfig;
use datadog_api_client::datadogV2::model::ObservabilityPipelineConfigDestinationItem;
use datadog_api_client::datadogV2::model::ObservabilityPipelineConfigProcessorGroup;
use datadog_api_client::datadogV2::model::ObservabilityPipelineConfigProcessorItem;
use datadog_api_client::datadogV2::model::ObservabilityPipelineConfigSourceItem;
use datadog_api_client::datadogV2::model::ObservabilityPipelineDataAttributes;
use datadog_api_client::datadogV2::model::ObservabilityPipelineDatadogAgentSource;
use datadog_api_client::datadogV2::model::ObservabilityPipelineDatadogAgentSourceType;
use datadog_api_client::datadogV2::model::ObservabilityPipelineDatadogLogsDestination;
use datadog_api_client::datadogV2::model::ObservabilityPipelineDatadogLogsDestinationType;
use datadog_api_client::datadogV2::model::ObservabilityPipelineFilterProcessor;
use datadog_api_client::datadogV2::model::ObservabilityPipelineFilterProcessorType;
use datadog_api_client::datadogV2::model::ObservabilityPipelineSpec;
use datadog_api_client::datadogV2::model::ObservabilityPipelineSpecData;

#[tokio::main]
async fn main() {
    let body =
        ObservabilityPipelineSpec::new(
            ObservabilityPipelineSpecData::new(
                ObservabilityPipelineDataAttributes::new(
                    ObservabilityPipelineConfig::new(
                        vec![
                            ObservabilityPipelineConfigDestinationItem::ObservabilityPipelineDatadogLogsDestination(
                                Box::new(
                                    ObservabilityPipelineDatadogLogsDestination::new(
                                        "datadog-logs-destination".to_string(),
                                        vec!["my-processor-group".to_string()],
                                        ObservabilityPipelineDatadogLogsDestinationType::DATADOG_LOGS,
                                    ),
                                ),
                            )
                        ],
                        vec![
                            ObservabilityPipelineConfigSourceItem::ObservabilityPipelineDatadogAgentSource(
                                Box::new(
                                    ObservabilityPipelineDatadogAgentSource::new(
                                        "datadog-agent-source".to_string(),
                                        ObservabilityPipelineDatadogAgentSourceType::DATADOG_AGENT,
                                    ),
                                ),
                            )
                        ],
                    ).processors(
                        vec![
                            ObservabilityPipelineConfigProcessorGroup::new(
                                true,
                                "my-processor-group".to_string(),
                                "service:my-service".to_string(),
                                vec!["datadog-agent-source".to_string()],
                                vec![
                                    ObservabilityPipelineConfigProcessorItem::ObservabilityPipelineFilterProcessor(
                                        Box::new(
                                            ObservabilityPipelineFilterProcessor::new(
                                                true,
                                                "filter-processor".to_string(),
                                                "status:error".to_string(),
                                                ObservabilityPipelineFilterProcessorType::FILTER,
                                            ),
                                        ),
                                    )
                                ],
                            )
                        ],
                    ),
                    "Main Observability Pipeline".to_string(),
                ),
                "pipelines".to_string(),
            ),
        );
    let mut configuration = datadog::Configuration::new();
    configuration.set_unstable_operation_enabled("v2.CreatePipeline", true);
    let api = ObservabilityPipelinesAPI::with_config(configuration);
    let resp = api.create_pipeline(body).await;
    if let Ok(value) = resp {
        println!("{:#?}", value);
    } else {
        println!("{:#?}", resp.unwrap_err());
    }
}

Instructions

First install the library and its dependencies, then save the example to src/main.rs and run the following command:

    
DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" cargo run
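If the dependencies are not yet declared in Cargo.toml, a sketch of the setup (assuming the client crate is published as datadog-api-client, matching the datadog_api_client paths above):

# Add the Datadog API client (crate name assumed) and a Tokio runtime for the async main.
cargo add datadog-api-client
cargo add tokio --features macros,rt-multi-thread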
/**
 * Create a new pipeline returns "OK" response
 */

import { client, v2 } from "@datadog/datadog-api-client";

const configuration = client.createConfiguration();
configuration.unstableOperations["v2.createPipeline"] = true;
const apiInstance = new v2.ObservabilityPipelinesApi(configuration);

const params: v2.ObservabilityPipelinesApiCreatePipelineRequest = {
  body: {
    data: {
      attributes: {
        config: {
          destinations: [
            {
              id: "datadog-logs-destination",
              inputs: ["my-processor-group"],
              type: "datadog_logs",
            },
          ],
          processors: [
            {
              enabled: true,
              id: "my-processor-group",
              include: "service:my-service",
              inputs: ["datadog-agent-source"],
              processors: [
                {
                  enabled: true,
                  id: "filter-processor",
                  include: "status:error",
                  type: "filter",
                },
              ],
            },
          ],
          sources: [
            {
              id: "datadog-agent-source",
              type: "datadog_agent",
            },
          ],
        },
        name: "Main Observability Pipeline",
      },
      type: "pipelines",
    },
  },
};

apiInstance
  .createPipeline(params)
  .then((data: v2.ObservabilityPipeline) => {
    console.log(
      "API called successfully. Returned data: " + JSON.stringify(data)
    );
  })
  .catch((error: any) => console.error(error));

Instructions

First install the library and its dependencies, then save the example to example.ts and run the following command:

    
DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" tsc "example.ts"
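If the client package is not yet installed, it can be added from npm under the name used in the import above:

# Install the Datadog API client used by the example.
npm install @datadog/datadog-api-client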

Note: This endpoint is in Preview. Fill out this form to request access.

GET https://api.ap1.datadoghq.com/api/v2/obs-pipelines/pipelines/{pipeline_id}
GET https://api.ap2.datadoghq.com/api/v2/obs-pipelines/pipelines/{pipeline_id}
GET https://api.datadoghq.eu/api/v2/obs-pipelines/pipelines/{pipeline_id}
GET https://api.ddog-gov.com/api/v2/obs-pipelines/pipelines/{pipeline_id}
GET https://api.datadoghq.com/api/v2/obs-pipelines/pipelines/{pipeline_id}
GET https://api.us3.datadoghq.com/api/v2/obs-pipelines/pipelines/{pipeline_id}
GET https://api.us5.datadoghq.com/api/v2/obs-pipelines/pipelines/{pipeline_id}

Overview

Get a specific pipeline by its ID. This endpoint requires the observability_pipelines_read permission.
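For reference, a minimal curl sketch of this request (assuming the default datadoghq.com site; replace the placeholder with a real pipeline ID):

# Retrieve a single pipeline by its ID.
curl -X GET "https://api.datadoghq.com/api/v2/obs-pipelines/pipelines/<PIPELINE_ID>" \
  -H "Accept: application/json" \
  -H "DD-API-KEY: ${DD_API_KEY}" \
  -H "DD-APPLICATION-KEY: ${DD_APP_KEY}"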

Arguments

Path parameters

Name

Type

Description

pipeline_id [required]

string

The ID of the pipeline to retrieve.

Response

OK

Top-level schema representing a pipeline.


Field

Type

Description

data [required]

object

Contains the pipeline’s ID, type, and configuration attributes.

attributes [required]

object

Defines the pipeline’s name and its components (sources, processors, and destinations).

config [required]

object

Specifies the pipeline's configuration, including its sources, processors, and destinations.

destinations [required]

[ <oneOf>]

A list of destination components where processed logs are sent.

Option 1

object

The http_client destination sends data to an HTTP endpoint.

Supported pipeline types: logs, metrics

auth_strategy

enum

HTTP authentication strategy. Allowed enum values: none,basic,bearer

compression

object

Compression configuration for HTTP requests.

algorithm [required]

enum

Compression algorithm. Allowed enum values: gzip

encoding [required]

enum

Encoding format for log events. Allowed enum values: json

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. The value should always be http_client. Allowed enum values: http_client

default: http_client

Option 2

object

The amazon_opensearch destination writes logs to Amazon OpenSearch.

Supported pipeline types: logs

auth [required]

object

Authentication settings for the Amazon OpenSearch destination. The strategy field determines whether basic or AWS-based authentication is used.

assume_role

string

The ARN of the role to assume (used with aws strategy).

aws_region

string

AWS region.

external_id

string

External ID for the assumed role (used with aws strategy).

session_name

string

Session name for the assumed role (used with aws strategy).

strategy [required]

enum

The authentication strategy to use. Allowed enum values: basic,aws

bulk_index

string

The index to write logs to.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be amazon_opensearch. Allowed enum values: amazon_opensearch

default: amazon_opensearch

Option 3

object

The amazon_s3 destination sends your logs in Datadog-rehydratable format to an Amazon S3 bucket for archiving.

Supported pipeline types: logs

auth

object

AWS authentication credentials used for accessing AWS services such as S3. If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).

assume_role

string

The Amazon Resource Name (ARN) of the role to assume.

external_id

string

A unique identifier for cross-account role assumption.

session_name

string

A session identifier used for logging and tracing the assumed role session.

bucket [required]

string

S3 bucket name.

id [required]

string

Unique identifier for the destination component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

key_prefix

string

Optional prefix for object keys.

region [required]

string

AWS region of the S3 bucket.

storage_class [required]

enum

S3 storage class. Allowed enum values: STANDARD,REDUCED_REDUNDANCY,INTELLIGENT_TIERING,STANDARD_IA,EXPRESS_ONEZONE,ONEZONE_IA,GLACIER,GLACIER_IR,DEEP_ARCHIVE

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. Always amazon_s3. Allowed enum values: amazon_s3

default: amazon_s3

Option 4

object

The amazon_security_lake destination sends your logs to Amazon Security Lake.

Supported pipeline types: logs

auth

object

AWS authentication credentials used for accessing AWS services such as S3. If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).

assume_role

string

The Amazon Resource Name (ARN) of the role to assume.

external_id

string

A unique identifier for cross-account role assumption.

session_name

string

A session identifier used for logging and tracing the assumed role session.

bucket [required]

string

Name of the Amazon S3 bucket in Security Lake (3-63 characters).

custom_source_name [required]

string

Custom source name for the logs in Security Lake.

id [required]

string

Unique identifier for the destination component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

region [required]

string

AWS region of the S3 bucket.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. Always amazon_security_lake. Allowed enum values: amazon_security_lake

default: amazon_security_lake

Option 5

object

The azure_storage destination forwards logs to an Azure Blob Storage container.

Supported pipeline types: logs

blob_prefix

string

Optional prefix for blobs written to the container.

container_name [required]

string

The name of the Azure Blob Storage container to store logs in.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be azure_storage. Allowed enum values: azure_storage

default: azure_storage

Option 6

object

The cloud_prem destination sends logs to Datadog CloudPrem.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be cloud_prem. Allowed enum values: cloud_prem

default: cloud_prem

Option 7

object

The crowdstrike_next_gen_siem destination forwards logs to CrowdStrike Next Gen SIEM.

Supported pipeline types: logs

compression

object

Compression configuration for log events.

algorithm [required]

enum

Compression algorithm for log events. Allowed enum values: gzip,zlib

level

int64

Compression level.

encoding [required]

enum

Encoding format for log events. Allowed enum values: json,raw_message

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. The value should always be crowdstrike_next_gen_siem. Allowed enum values: crowdstrike_next_gen_siem

default: crowdstrike_next_gen_siem

Option 8

object

The datadog_logs destination forwards logs to Datadog Log Management.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

routes

[object]

A list of routing rules that forward matching logs to Datadog using dedicated API keys.

api_key_key

string

Name of the environment variable or secret that stores the Datadog API key used by this route.

include

string

A Datadog search query that determines which logs are forwarded using this route.

route_id

string

Unique identifier for this route within the destination.

site

string

Datadog site where matching logs are sent (for example, us1).

type [required]

enum

The destination type. The value should always be datadog_logs. Allowed enum values: datadog_logs

default: datadog_logs

Option 9

object

The elasticsearch destination writes logs to an Elasticsearch cluster.

Supported pipeline types: logs

api_version

enum

The Elasticsearch API version to use. Set to auto to auto-detect. Allowed enum values: auto,v6,v7,v8

bulk_index

string

The index to write logs to in Elasticsearch.

data_stream

object

Configuration options for writing to Elasticsearch Data Streams instead of a fixed index.

dataset

string

The data stream dataset for your logs. This groups logs by their source or application.

dtype

string

The data stream type for your logs. This determines how logs are categorized within the data stream.

namespace

string

The data stream namespace for your logs. This separates logs into different environments or domains.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be elasticsearch. Allowed enum values: elasticsearch

default: elasticsearch

Option 10

object

The google_chronicle destination sends logs to Google Chronicle.

Supported pipeline types: logs

auth

object

GCP credentials used to authenticate with Google Cloud Storage.

credentials_file [required]

string

Path to the GCP service account key file.

customer_id [required]

string

The Google Chronicle customer ID.

encoding

enum

The encoding format for the logs sent to Chronicle. Allowed enum values: json,raw_message

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

log_type

string

The log type metadata associated with the Chronicle destination.

type [required]

enum

The destination type. The value should always be google_chronicle. Allowed enum values: google_chronicle

default: google_chronicle

Option 11

object

The google_cloud_storage destination stores logs in a Google Cloud Storage (GCS) bucket. It requires a bucket name, GCP authentication, and metadata fields.

Supported pipeline types: logs

acl

enum

Access control list setting for objects written to the bucket. Allowed enum values: private,project-private,public-read,authenticated-read,bucket-owner-read,bucket-owner-full-control

auth

object

GCP credentials used to authenticate with Google Cloud Storage.

credentials_file [required]

string

Path to the GCP service account key file.

bucket [required]

string

Name of the GCS bucket.

id [required]

string

Unique identifier for the destination component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

key_prefix

string

Optional prefix for object keys within the GCS bucket.

metadata

[object]

Custom metadata to attach to each object uploaded to the GCS bucket.

name [required]

string

The metadata key.

value [required]

string

The metadata value.

storage_class [required]

enum

Storage class used for objects stored in GCS. Allowed enum values: STANDARD,NEARLINE,COLDLINE,ARCHIVE

type [required]

enum

The destination type. Always google_cloud_storage. Allowed enum values: google_cloud_storage

default: google_cloud_storage

Option 12

object

The google_pubsub destination publishes logs to a Google Cloud Pub/Sub topic.

Supported pipeline types: logs

auth

object

GCP credentials used to authenticate with Google Cloud Storage.

credentials_file [required]

string

Path to the GCP service account key file.

encoding [required]

enum

Encoding format for log events. Allowed enum values: json,raw_message

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

project [required]

string

The GCP project ID that owns the Pub/Sub topic.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

topic [required]

string

The Pub/Sub topic name to publish logs to.

type [required]

enum

The destination type. The value should always be google_pubsub. Allowed enum values: google_pubsub

default: google_pubsub

Option 13

object

The kafka destination sends logs to Apache Kafka topics.

Supported pipeline types: logs

compression

enum

Compression codec for Kafka messages. Allowed enum values: none,gzip,snappy,lz4,zstd

encoding [required]

enum

Encoding format for log events. Allowed enum values: json,raw_message

headers_key

string

The field name to use for Kafka message headers.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

key_field

string

The field name to use as the Kafka message key.

librdkafka_options

[object]

Optional list of advanced Kafka producer configuration options, defined as key-value pairs.

name [required]

string

The name of the librdkafka configuration option to set.

value [required]

string

The value assigned to the specified librdkafka configuration option.

message_timeout_ms

int64

Maximum time in milliseconds to wait for message delivery confirmation.

rate_limit_duration_secs

int64

Duration in seconds for the rate limit window.

rate_limit_num

int64

Maximum number of messages allowed per rate limit duration.

sasl

object

Specifies the SASL mechanism for authenticating with a Kafka cluster.

mechanism

enum

SASL mechanism used for Kafka authentication. Allowed enum values: PLAIN,SCRAM-SHA-256,SCRAM-SHA-512

socket_timeout_ms

int64

Socket timeout in milliseconds for network requests.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

topic [required]

string

The Kafka topic name to publish logs to.

type [required]

enum

The destination type. The value should always be kafka. Allowed enum values: kafka

default: kafka

Option 14

object

The microsoft_sentinel destination forwards logs to Microsoft Sentinel.

Supported pipeline types: logs

client_id [required]

string

Azure AD client ID used for authentication.

dcr_immutable_id [required]

string

The immutable ID of the Data Collection Rule (DCR).

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

table [required]

string

The name of the Log Analytics table where logs are sent.

tenant_id [required]

string

Azure AD tenant ID.

type [required]

enum

The destination type. The value should always be microsoft_sentinel. Allowed enum values: microsoft_sentinel

default: microsoft_sentinel

Option 15

object

The new_relic destination sends logs to the New Relic platform.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

region [required]

enum

The New Relic region. Allowed enum values: us,eu

type [required]

enum

The destination type. The value should always be new_relic. Allowed enum values: new_relic

default: new_relic

Option 16

object

The opensearch destination writes logs to an OpenSearch cluster.

Supported pipeline types: logs

bulk_index

string

The index to write logs to.

data_stream

object

Configuration options for writing to OpenSearch Data Streams instead of a fixed index.

dataset

string

The data stream dataset for your logs. This groups logs by their source or application.

dtype

string

The data stream type for your logs. This determines how logs are categorized within the data stream.

namespace

string

The data stream namespace for your logs. This separates logs into different environments or domains.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be opensearch. Allowed enum values: opensearch

default: opensearch

Option 17

object

The rsyslog destination forwards logs to an external rsyslog server over TCP or UDP using the syslog protocol.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

keepalive

int64

Optional socket keepalive duration in milliseconds.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. The value should always be rsyslog. Allowed enum values: rsyslog

default: rsyslog

Option 18

object

The sentinel_one destination sends logs to SentinelOne.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

region [required]

enum

The SentinelOne region to send logs to. Allowed enum values: us,eu,ca,data_set_us

type [required]

enum

The destination type. The value should always be sentinel_one. Allowed enum values: sentinel_one

default: sentinel_one

Option 19

object

The socket destination sends logs over TCP or UDP to a remote server.

Supported pipeline types: logs

encoding [required]

enum

Encoding format for log events. Allowed enum values: json,raw_message

framing [required]

 <oneOf>

Framing method configuration.

Option 1

object

Each log event is delimited by a newline character.

method [required]

enum

The framing method. Allowed enum values: newline_delimited

Option 2

object

Event data is not delimited at all.

method [required]

enum

The framing method. Allowed enum values: bytes

Option 3

object

Each log event is separated using the specified delimiter character.

delimiter [required]

string

A single ASCII character used as a delimiter.

method [required]

enum

The framing method. Allowed enum values: character_delimited

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

mode [required]

enum

Protocol used to send logs. Allowed enum values: tcp,udp

tls

object

TLS configuration. Relevant only when mode is tcp.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. The value should always be socket. Allowed enum values: socket

default: socket

Option 20

object

The splunk_hec destination forwards logs to Splunk using the HTTP Event Collector (HEC).

Supported pipeline types: logs

auto_extract_timestamp

boolean

If true, Splunk tries to extract timestamps from incoming log events. If false, Splunk assigns the time the event was received.

encoding

enum

Encoding format for log events. Allowed enum values: json,raw_message

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

index

string

Optional name of the Splunk index where logs are written.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

sourcetype

string

The Splunk sourcetype to assign to log events.

type [required]

enum

The destination type. Always splunk_hec. Allowed enum values: splunk_hec

default: splunk_hec

Option 21

object

The sumo_logic destination forwards logs to Sumo Logic.

Supported pipeline types: logs

encoding

enum

The output encoding format. Allowed enum values: json,raw_message,logfmt

header_custom_fields

[object]

A list of custom headers to include in the request to Sumo Logic.

name [required]

string

The header field name.

value [required]

string

The header field value.

header_host_name

string

Optional override for the host name header.

header_source_category

string

Optional override for the source category header.

header_source_name

string

Optional override for the source name header.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be sumo_logic. Allowed enum values: sumo_logic

default: sumo_logic

Option 22

object

The syslog_ng destination forwards logs to an external syslog-ng server over TCP or UDP using the syslog protocol.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

keepalive

int64

Optional socket keepalive duration in milliseconds.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. The value should always be syslog_ng. Allowed enum values: syslog_ng

default: syslog_ng

Option 23

object

The datadog_metrics destination forwards metrics to Datadog.

Supported pipeline types: metrics

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be datadog_metrics. Allowed enum values: datadog_metrics

default: datadog_metrics

pipeline_type

enum

The type of data being ingested. Defaults to logs if not specified. Allowed enum values: logs,metrics

default: logs

processor_groups

[object]

A list of processor groups that transform or enrich log data.

display_name

string

The display name for a component.

enabled [required]

boolean

Whether this processor group is enabled.

id [required]

string

The unique identifier for the processor group.

include [required]

string

Conditional expression for when this processor group should execute.

inputs [required]

[string]

A list of IDs for components whose output is used as the input for this processor group.

processors [required]

[ <oneOf>]

Processors applied sequentially within this group. Events flow through each processor in order.

Option 1

object

The filter processor allows conditional processing of logs/metrics based on a Datadog search query. Logs/metrics that match the include query are passed through; others are discarded.

Supported pipeline types: logs, metrics

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs/metrics should pass through the filter. Logs/metrics that match this query continue to downstream components; others are dropped.

type [required]

enum

The processor type. The value should always be filter. Allowed enum values: filter

default: filter

Option 2

object

The add_env_vars processor adds environment variable values to log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used to reference this processor in the pipeline.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be add_env_vars. Allowed enum values: add_env_vars

default: add_env_vars

variables [required]

[object]

A list of environment variable mappings to apply to log fields.

field [required]

string

The target field in the log event.

name [required]

string

The name of the environment variable to read.

Option 3

object

The add_fields processor adds static key-value fields to logs.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[object]

A list of static fields (key-value pairs) that is added to each log event processed by this component.

name [required]

string

The field name.

value [required]

string

The field value.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be add_fields. Allowed enum values: add_fields

default: add_fields

Option 4

object

The add_hostname processor adds the hostname to log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be add_hostname. Allowed enum values: add_hostname

default: add_hostname

Option 5

object

The custom_processor processor transforms events using Vector Remap Language (VRL) scripts with advanced filtering capabilities.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets. This field should always be set to * for the custom_processor processor.

default: *

remaps [required]

[object]

Array of VRL remap rules.

drop_on_error [required]

boolean

Whether to drop events that caused errors during processing.

enabled

boolean

Whether this remap rule is enabled.

include [required]

string

A Datadog search query used to filter events for this specific remap rule.

name [required]

string

A descriptive name for this remap rule.

source [required]

string

The VRL script source code that defines the processing logic.

type [required]

enum

The processor type. The value should always be custom_processor. Allowed enum values: custom_processor

default: custom_processor

Option 6

object

The datadog_tags processor includes or excludes specific Datadog tags in your logs.

Supported pipeline types: logs

action [required]

enum

The action to take on tags with matching keys. Allowed enum values: include,exclude

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

keys [required]

[string]

A list of tag keys.

mode [required]

enum

The processing mode. Allowed enum values: filter

type [required]

enum

The processor type. The value should always be datadog_tags. Allowed enum values: datadog_tags

default: datadog_tags

Option 7

object

The dedupe processor removes duplicate fields in log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[string]

A list of log field paths to check for duplicates.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

mode [required]

enum

The deduplication mode to apply to the fields. Allowed enum values: match,ignore

type [required]

enum

The processor type. The value should always be dedupe. Allowed enum values: dedupe

default: dedupe

Option 8

object

The enrichment_table processor enriches logs using a static CSV file, GeoIP database, or reference table. Exactly one of file, geoip, or reference_table must be configured.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

file

object

Defines a static enrichment table loaded from a CSV file.

encoding [required]

object

File encoding format.

delimiter [required]

string

The encoding delimiter.

includes_headers [required]

boolean

Whether the CSV file includes a header row.

type [required]

enum

Specifies the encoding format (e.g., CSV) used for enrichment tables. Allowed enum values: csv

key [required]

[object]

Key fields used to look up enrichment values.

column [required]

string

The column in the enrichment table to match on.

comparison [required]

enum

Defines how to compare key fields for enrichment table lookups. Allowed enum values: equals

field [required]

string

The log field whose value is used for the lookup.

path [required]

string

Path to the CSV file.

schema [required]

[object]

Schema defining column names and their types.

column [required]

string

The column name.

type [required]

enum

Declares allowed data types for enrichment table columns. Allowed enum values: string,boolean,integer,float,date,timestamp

geoip

object

Uses a GeoIP database to enrich logs based on an IP field.

key_field [required]

string

Path to the IP field in the log.

locale [required]

string

Locale used to resolve geographical names.

path [required]

string

Path to the GeoIP database file.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

reference_table

object

Uses a Datadog reference table to enrich logs.

columns

[string]

List of column names to include from the reference table. If not provided, all columns are included.

key_field [required]

string

Path to the field in the log event to match against the reference table.

table_id [required]

string

The unique identifier of the reference table.

target [required]

string

Path where enrichment results should be stored in the log.

type [required]

enum

The processor type. The value should always be enrichment_table. Allowed enum values: enrichment_table

default: enrichment_table

Option 9

object

The generate_datadog_metrics processor creates custom metrics from logs and sends them to Datadog. Metrics can be counters, gauges, or distributions and optionally grouped by log fields.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline.

include

string

A Datadog search query used to determine which logs this processor targets.

metrics

[object]

Configuration for generating individual metrics.

group_by

[string]

Optional fields used to group the metric series.

include [required]

string

Datadog filter query to match logs for metric generation.

metric_type [required]

enum

Type of metric to create. Allowed enum values: count,gauge,distribution

name [required]

string

Name of the custom metric to be created.

value [required]

 <oneOf>

Specifies how the value of the generated metric is computed.

Option 1

object

Strategy that increments a generated metric by one for each matching event.

strategy [required]

enum

Increments the metric by 1 for each matching event. Allowed enum values: increment_by_one

Option 2

object

Strategy that increments a generated metric based on the value of a log field.

field [required]

string

Name of the log field containing the numeric value to increment the metric by.

strategy [required]

enum

Uses a numeric field in the log event as the metric increment. Allowed enum values: increment_by_field

type [required]

enum

The processor type. Always generate_datadog_metrics. Allowed enum values: generate_datadog_metrics

default: generate_datadog_metrics

Option 10

object

The ocsf_mapper processor transforms logs into the OCSF schema using a predefined mapping configuration.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

mappings [required]

[object]

A list of mapping rules to convert events to the OCSF format.

include [required]

string

A Datadog search query used to select the logs that this mapping should apply to.

mapping [required]

 <oneOf>

Defines a single mapping rule for transforming logs into the OCSF schema.

Option 1

enum

Predefined library mappings for common log formats. Allowed enum values: CloudTrail Account Change,GCP Cloud Audit CreateBucket,GCP Cloud Audit CreateSink,GCP Cloud Audit SetIamPolicy,GCP Cloud Audit UpdateSink,Github Audit Log API Activity,Google Workspace Admin Audit addPrivilege,Microsoft 365 Defender Incident,Microsoft 365 Defender UserLoggedIn,Okta System Log Authentication,Palo Alto Networks Firewall Traffic

type [required]

enum

The processor type. The value should always be ocsf_mapper. Allowed enum values: ocsf_mapper

default: ocsf_mapper

Option 11

object

The parse_grok processor extracts structured fields from unstructured log messages using Grok patterns.

Supported pipeline types: logs

disable_library_rules

boolean

If set to true, disables the default Grok rules provided by Datadog.

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

A unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

rules [required]

[object]

The list of Grok parsing rules. If multiple matching rules are provided, they are evaluated in order. The first successful match is applied.

match_rules [required]

[object]

A list of Grok parsing rules that define how to extract fields from the source field. Each rule must contain a name and a valid Grok pattern.

name [required]

string

The name of the rule.

rule [required]

string

The definition of the Grok rule.

source [required]

string

The name of the field in the log event to apply the Grok rules to.

support_rules

[object]

A list of Grok helper rules that can be referenced by the parsing rules.

name [required]

string

The name of the Grok helper rule.

rule [required]

string

The definition of the Grok helper rule.

type [required]

enum

The processor type. The value should always be parse_grok. Allowed enum values: parse_grok

default: parse_grok
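
As an illustration, a parse_grok processor with a single match rule might be configured as follows (Python dict; the rule name and Grok pattern are illustrative, not library defaults):

parse_grok_processor = {
    "id": "parse-grok",                        # hypothetical component ID
    "type": "parse_grok",
    "enabled": True,
    "include": "source:nginx",                 # hypothetical filter query
    "disable_library_rules": False,
    "rules": [
        {
            "source": "message",               # field the Grok rules are applied to
            "match_rules": [
                {
                    "name": "access_log",
                    "rule": "%{ipOrHost:network.client.ip} %{word:http.method} %{notSpace:http.url_details.path}",
                }
            ],
        }
    ],
}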

Option 12

object

The parse_json processor extracts JSON from a specified field and flattens it into the event. This is useful when logs contain embedded JSON as a string.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

field [required]

string

The name of the log field that contains a JSON string.

id [required]

string

A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be parse_json. Allowed enum values: parse_json

default: parse_json
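
For example, a parse_json processor that flattens an embedded JSON string stored in the message field could look like this (Python dict; the ID and query are hypothetical):

parse_json_processor = {
    "id": "parse-json",            # hypothetical component ID
    "type": "parse_json",
    "enabled": True,
    "include": "*",
    "field": "message",            # log field containing the embedded JSON string
}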

Option 13

object

The parse_xml processor parses XML from a specified field and extracts it into the event.

Supported pipeline types: logs

always_use_text_key

boolean

Whether to always use a text key for element content.

attr_prefix

string

The prefix to use for XML attributes in the parsed output.

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

field [required]

string

The name of the log field that contains an XML string.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

include_attr

boolean

Whether to include XML attributes in the parsed output.

parse_bool

boolean

Whether to parse boolean values from strings.

parse_null

boolean

Whether to parse null values.

parse_number

boolean

Whether to parse numeric values from strings.

text_key

string

The key name to use for text content within XML elements. Must be at least 1 character if specified.

type [required]

enum

The processor type. The value should always be parse_xml. Allowed enum values: parse_xml

default: parse_xml

Option 14

object

The quota processor measures logging traffic for logs that match a specified filter. When the configured daily quota is met, the processor can drop or alert.

Supported pipeline types: logs

display_name

string

The display name for a component.

drop_events

boolean

If set to true, logs that match the quota filter and are sent after the quota is exceeded are dropped. Logs that do not match the filter continue through the pipeline. Note: You can set either drop_events or overflow_action, but not both.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

ignore_when_missing_partitions

boolean

If true, the processor skips quota checks when partition fields are missing from the logs.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

limit [required]

object

The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.

enforce [required]

enum

Unit for quota enforcement in bytes for data size or events for count. Allowed enum values: bytes,events

limit [required]

int64

The limit for quota enforcement.

name [required]

string

Name of the quota.

overflow_action

enum

The action to take when the quota or bucket limit is exceeded. Options:

  • drop: Drop the event.
  • no_action: Let the event pass through.
  • overflow_routing: Route to an overflow destination.

Allowed enum values: drop,no_action,overflow_routing

overrides

[object]

A list of alternate quota rules that apply to specific sets of events, identified by matching field values. Each override can define a custom limit.

fields [required]

[object]

A list of field matchers used to apply a specific override. If an event matches all listed key-value pairs, the corresponding override limit is enforced.

name [required]

string

The field name.

value [required]

string

The field value.

limit [required]

object

The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.

enforce [required]

enum

Unit for quota enforcement in bytes for data size or events for count. Allowed enum values: bytes,events

limit [required]

int64

The limit for quota enforcement.

partition_fields

[string]

A list of fields used to segment log traffic for quota enforcement. Quotas are tracked independently by unique combinations of these field values.

too_many_buckets_action

enum

The action to take when the quota or bucket limit is exceeded. Options:

  • drop: Drop the event.
  • no_action: Let the event pass through.
  • overflow_routing: Route to an overflow destination.

Allowed enum values: drop,no_action,overflow_routing

type [required]

enum

The processor type. The value should always be quota. Allowed enum values: quota

default: quota
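
A sketch of a quota processor that enforces a per-service byte limit with one override, expressed as a Python dict over the fields above; the quota name, limits, and service values are hypothetical.

quota_processor = {
    "id": "daily-quota",                       # hypothetical component ID
    "type": "quota",
    "enabled": True,
    "include": "*",
    "name": "per-service-daily-quota",
    "limit": {"enforce": "bytes", "limit": 10_000_000_000},   # roughly 10 GB per day
    "partition_fields": ["service"],           # track the quota per service
    "overflow_action": "drop",                 # drop_events and overflow_action are mutually exclusive
    "overrides": [
        {
            "fields": [{"name": "service", "value": "payments"}],
            "limit": {"enforce": "bytes", "limit": 50_000_000_000},
        }
    ],
}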

Option 15

object

The reduce processor aggregates and merges logs based on matching keys and merge strategies.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

group_by [required]

[string]

A list of fields used to group log events for merging.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

merge_strategies [required]

[object]

List of merge strategies defining how values from grouped events should be combined.

path [required]

string

The field path in the log event.

strategy [required]

enum

The merge strategy to apply. Allowed enum values: discard,retain,sum,max,min,array,concat,concat_newline,concat_raw,shortest_array,longest_array,flat_unique

type [required]

enum

The processor type. The value should always be reduce. Allowed enum values: reduce

default: reduce
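
For illustration, a reduce processor that merges events sharing the same host and trace_id might be configured as follows (Python dict; the grouping fields and strategies are hypothetical choices):

reduce_processor = {
    "id": "reduce-events",                     # hypothetical component ID
    "type": "reduce",
    "enabled": True,
    "include": "*",
    "group_by": ["host", "trace_id"],
    "merge_strategies": [
        {"path": "message", "strategy": "concat_newline"},   # join messages with newlines
        {"path": "duration", "strategy": "sum"},              # sum numeric durations
    ],
}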

Option 16

object

The remove_fields processor deletes specified fields from logs.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[string]

A list of field names to be removed from each log event.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be remove_fields. Allowed enum values: remove_fields

default: remove_fields

Option 17

object

The rename_fields processor changes field names.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[object]

A list of rename rules specifying which fields to rename in the event, what to rename them to, and whether to preserve the original fields.

destination [required]

string

The field name to assign the renamed value to.

preserve_source [required]

boolean

Indicates whether the original field received from the source should be kept (true) or removed (false) after renaming.

source [required]

string

The original field name in the log event that should be renamed.

id [required]

string

A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be rename_fields. Allowed enum values: rename_fields

default: rename_fields
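
A minimal rename_fields processor that renames msg to message and drops the original field, sketched as a Python dict (the ID and field names are hypothetical):

rename_fields_processor = {
    "id": "rename-fields",                     # hypothetical component ID
    "type": "rename_fields",
    "enabled": True,
    "include": "*",
    "fields": [
        {
            "source": "msg",                   # original field name
            "destination": "message",          # new field name
            "preserve_source": False,          # remove the original field after renaming
        }
    ],
}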

Option 18

object

The sample processor allows probabilistic sampling of logs at a fixed rate.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

group_by

[string]

Optional list of fields to group events by. Each group is sampled independently.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

percentage [required]

double

The percentage of logs to sample.

type [required]

enum

The processor type. The value should always be sample. Allowed enum values: sample

default: sample

Option 19

object

The sensitive_data_scanner processor detects and optionally redacts sensitive data in log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

rules [required]

[object]

A list of rules for identifying and acting on sensitive data patterns.

keyword_options

object

Configuration for keywords used to reinforce sensitive data pattern detection.

keywords [required]

[string]

A list of keywords to match near the sensitive pattern.

proximity [required]

int64

Maximum number of tokens between a keyword and a sensitive value match.

name [required]

string

A name identifying the rule.

on_match [required]

 <oneOf>

Defines what action to take when sensitive data is matched.

Option 1

object

Configuration for completely redacting matched sensitive data.

action [required]

enum

Action type that completely replaces the matched sensitive data with a fixed replacement string to remove all visibility. Allowed enum values: redact

options [required]

object

Configuration for fully redacting sensitive data.

replace [required]

string

The replacement string used in place of the matched sensitive data.

Option 2

object

Configuration for hashing matched sensitive values.

action [required]

enum

Action type that replaces the matched sensitive data with a hashed representation, preserving structure while securing content. Allowed enum values: hash

options

object

Options for the hash action.

Option 3

object

Configuration for partially redacting matched sensitive data.

action [required]

enum

Action type that redacts part of the sensitive data while preserving a configurable number of characters, typically used for masking purposes (e.g., show last 4 digits of a credit card). Allowed enum values: partial_redact

options [required]

object

Controls how partial redaction is applied, including character count and direction.

characters [required]

int64

The number of characters to which the partial redaction applies.

direction [required]

enum

Indicates whether to redact characters from the first or last part of the matched value. Allowed enum values: first,last

pattern [required]

 <oneOf>

Pattern detection configuration for identifying sensitive data using either a custom regex or a library reference.

Option 1

object

Defines a custom regex-based pattern for identifying sensitive data in logs.

options [required]

object

Options for defining a custom regex pattern.

description

string

Human-readable description providing context about a sensitive data scanner rule.

rule [required]

string

A regular expression used to detect sensitive values. Must be a valid regex.

type [required]

enum

Indicates a custom regular expression is used for matching. Allowed enum values: custom

Option 2

object

Specifies a pattern from Datadog’s sensitive data detection library to match known sensitive data types.

options [required]

object

Options for selecting a predefined library pattern and enabling keyword support.

description

string

Human-readable description providing context about a sensitive data scanner rule.

id [required]

string

Identifier for a predefined pattern from the sensitive data scanner pattern library.

use_recommended_keywords

boolean

Whether to augment the pattern with recommended keywords (optional).

type [required]

enum

Indicates that a predefined library pattern is used. Allowed enum values: library

scope [required]

 <oneOf>

Determines which parts of the log the pattern-matching rule should be applied to.

Option 1

object

Includes only specific fields for sensitive data scanning.

options [required]

object

Fields to which the scope rule applies.

fields [required]

[string]

The list of fields the scope rule applies to.

target [required]

enum

Applies the rule only to included fields. Allowed enum values: include

Option 2

object

Excludes specific fields from sensitive data scanning.

options [required]

object

Fields to which the scope rule applies.

fields [required]

[string]

The list of fields the scope rule applies to.

target [required]

enum

Excludes specific fields from processing. Allowed enum values: exclude

Option 3

object

Applies scanning across all available fields.

target [required]

enum

Applies the rule to all fields. Allowed enum values: all

tags [required]

[string]

Tags assigned to this rule for filtering and classification.

type [required]

enum

The processor type. The value should always be sensitive_data_scanner. Allowed enum values: sensitive_data_scanner

default: sensitive_data_scanner
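
Tying the rule, pattern, scope, and on_match pieces together, a single-rule sensitive_data_scanner processor might look like the sketch below (Python dict; the rule name, regex, field names, and tag are hypothetical):

sensitive_data_scanner_processor = {
    "id": "scan-sensitive-data",               # hypothetical component ID
    "type": "sensitive_data_scanner",
    "enabled": True,
    "include": "*",
    "rules": [
        {
            "name": "mask-api-keys",
            "tags": ["sensitive_data:api_key"],
            "pattern": {
                "type": "custom",
                "options": {"rule": "api_key=[A-Za-z0-9]{32}"},   # illustrative regex
            },
            "scope": {
                "target": "include",
                "options": {"fields": ["message"]},               # scan only the message field
            },
            "on_match": {
                "action": "partial_redact",
                "options": {"characters": 4, "direction": "last"},
            },
        }
    ],
}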

Option 20

object

The split_array processor splits array fields into separate events based on configured rules.

Supported pipeline types: logs

arrays [required]

[object]

A list of array split configurations.

field [required]

string

The path to the array field to split.

include [required]

string

A Datadog search query used to determine which logs this array split operation targets.

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets. For split_array, this should typically be *.

type [required]

enum

The processor type. The value should always be split_array. Allowed enum values: split_array

default: split_array

Option 21

object

The throttle processor limits the number of events that pass through over a given time window.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

group_by

[string]

Optional list of fields used to group events; the threshold is applied to each group independently.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

threshold [required]

int64

The number of events allowed in a given time window. Events sent after the threshold has been reached are dropped.

type [required]

enum

The processor type. The value should always be throttle. Allowed enum values: throttle

default: throttle

window [required]

double

The time window in seconds over which the threshold applies.
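
For example, a throttle processor allowing 1000 events per minute per service could be configured as follows (Python dict; the threshold, window, and grouping field are hypothetical):

throttle_processor = {
    "id": "throttle-logs",                     # hypothetical component ID
    "type": "throttle",
    "enabled": True,
    "include": "*",
    "threshold": 1000,                         # events allowed per window
    "window": 60.0,                            # window length in seconds
    "group_by": ["service"],                   # throttle each service independently
}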

Option 22

object

The metric_tags processor filters metrics based on their tags using Datadog tag key patterns.

Supported pipeline types: metrics

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query that determines which metrics the processor targets.

rules [required]

[object]

A list of rules for filtering metric tags.

action [required]

enum

The action to take on tags with matching keys. Allowed enum values: include,exclude

include [required]

string

A Datadog search query used to determine which metrics this rule targets.

keys [required]

[string]

A list of tag keys to include or exclude.

mode [required]

enum

The processing mode for tag filtering. Allowed enum values: filter

type [required]

enum

The processor type. The value should always be metric_tags. Allowed enum values: metric_tags

default: metric_tags
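
A sketch of a metric_tags processor that strips two high-cardinality tag keys from all metrics, written as a Python dict over the fields above (the tag keys are hypothetical):

metric_tags_processor = {
    "id": "filter-metric-tags",                # hypothetical component ID
    "type": "metric_tags",
    "enabled": True,
    "include": "*",
    "rules": [
        {
            "include": "*",
            "mode": "filter",
            "action": "exclude",
            "keys": ["pod_name", "container_id"],   # tag keys to drop
        }
    ],
}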

processors

[object]

DEPRECATED: A list of processor groups that transform or enrich log data.

Deprecated: This field is deprecated; use the processor_groups field instead.

display_name

string

The display name for a component.

enabled [required]

boolean

Whether this processor group is enabled.

id [required]

string

The unique identifier for the processor group.

include [required]

string

Conditional expression for when this processor group should execute.

inputs [required]

[string]

A list of IDs for components whose output is used as the input for this processor group.

processors [required]

[ <oneOf>]

Processors applied sequentially within this group. Events flow through each processor in order.

Option 1

object

The filter processor allows conditional processing of logs/metrics based on a Datadog search query. Logs/metrics that match the include query are passed through; others are discarded.

Supported pipeline types: logs, metrics

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs/metrics should pass through the filter. Logs/metrics that match this query continue to downstream components; others are dropped.

type [required]

enum

The processor type. The value should always be filter. Allowed enum values: filter

default: filter

Option 2

object

The add_env_vars processor adds environment variable values to log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used to reference this processor in the pipeline.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be add_env_vars. Allowed enum values: add_env_vars

default: add_env_vars

variables [required]

[object]

A list of environment variable mappings to apply to log fields.

field [required]

string

The target field in the log event.

name [required]

string

The name of the environment variable to read.

Option 3

object

The add_fields processor adds static key-value fields to logs.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[object]

A list of static fields (key-value pairs) that are added to each log event processed by this component.

name [required]

string

The field name.

value [required]

string

The field value.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be add_fields. Allowed enum values: add_fields

default: add_fields

Option 4

object

The add_hostname processor adds the hostname to log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be add_hostname. Allowed enum values: add_hostname

default: add_hostname

Option 5

object

The custom_processor processor transforms events using Vector Remap Language (VRL) scripts with advanced filtering capabilities.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets. This field should always be set to * for the custom_processor processor.

default: *

remaps [required]

[object]

Array of VRL remap rules.

drop_on_error [required]

boolean

Whether to drop events that caused errors during processing.

enabled

boolean

Whether this remap rule is enabled.

include [required]

string

A Datadog search query used to filter events for this specific remap rule.

name [required]

string

A descriptive name for this remap rule.

source [required]

string

The VRL script source code that defines the processing logic.

type [required]

enum

The processor type. The value should always be custom_processor. Allowed enum values: custom_processor

default: custom_processor
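
As an illustration, a custom_processor with one VRL remap rule might be configured as follows (Python dict; the rule name, filter query, and VRL script are hypothetical examples, not defaults):

custom_processor_example = {
    "id": "vrl-transform",                     # hypothetical component ID
    "type": "custom_processor",
    "enabled": True,
    "include": "*",                            # always * for this processor
    "remaps": [
        {
            "name": "normalize-status",
            "enabled": True,
            "include": "source:nginx",
            "drop_on_error": False,
            "source": '.status = downcase(string!(.status))',   # illustrative VRL script
        }
    ],
}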

Option 6

object

The datadog_tags processor includes or excludes specific Datadog tags in your logs.

Supported pipeline types: logs

action [required]

enum

The action to take on tags with matching keys. Allowed enum values: include,exclude

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

keys [required]

[string]

A list of tag keys.

mode [required]

enum

The processing mode. Allowed enum values: filter

type [required]

enum

The processor type. The value should always be datadog_tags. Allowed enum values: datadog_tags

default: datadog_tags

Option 7

object

The dedupe processor removes duplicate fields in log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[string]

A list of log field paths to check for duplicates.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

mode [required]

enum

The deduplication mode to apply to the fields. Allowed enum values: match,ignore

type [required]

enum

The processor type. The value should always be dedupe. Allowed enum values: dedupe

default: dedupe

Option 8

object

The enrichment_table processor enriches logs using a static CSV file, GeoIP database, or reference table. Exactly one of file, geoip, or reference_table must be configured.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

file

object

Defines a static enrichment table loaded from a CSV file.

encoding [required]

object

File encoding format.

delimiter [required]

string

The delimiter used to separate values in the CSV file.

includes_headers [required]

boolean

Whether the CSV file includes a header row.

type [required]

enum

Specifies the encoding format (e.g., CSV) used for enrichment tables. Allowed enum values: csv

key [required]

[object]

Key fields used to look up enrichment values.

column [required]

string

The name of the enrichment table column to compare against.

comparison [required]

enum

Defines how to compare key fields for enrichment table lookups. Allowed enum values: equals

field [required]

string

The path of the log field whose value is compared with the column.

path [required]

string

Path to the CSV file.

schema [required]

[object]

Schema defining column names and their types.

column [required]

string

The name of the column.

type [required]

enum

Declares allowed data types for enrichment table columns. Allowed enum values: string,boolean,integer,float,date,timestamp

geoip

object

Uses a GeoIP database to enrich logs based on an IP field.

key_field [required]

string

Path to the IP field in the log.

locale [required]

string

Locale used to resolve geographical names.

path [required]

string

Path to the GeoIP database file.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

reference_table

object

Uses a Datadog reference table to enrich logs.

columns

[string]

List of column names to include from the reference table. If not provided, all columns are included.

key_field [required]

string

Path to the field in the log event to match against the reference table.

table_id [required]

string

The unique identifier of the reference table.

target [required]

string

Path where enrichment results should be stored in the log.

type [required]

enum

The processor type. The value should always be enrichment_table. Allowed enum values: enrichment_table

default: enrichment_table

Option 9

object

The generate_datadog_metrics processor creates custom metrics from logs and sends them to Datadog. Metrics can be counters, gauges, or distributions and optionally grouped by log fields.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline.

include

string

A Datadog search query used to determine which logs this processor targets.

metrics

[object]

Configuration for generating individual metrics.

group_by

[string]

Optional fields used to group the metric series.

include [required]

string

Datadog filter query to match logs for metric generation.

metric_type [required]

enum

Type of metric to create. Allowed enum values: count,gauge,distribution

name [required]

string

Name of the custom metric to be created.

value [required]

 <oneOf>

Specifies how the value of the generated metric is computed.

Option 1

object

Strategy that increments a generated metric by one for each matching event.

strategy [required]

enum

Increments the metric by 1 for each matching event. Allowed enum values: increment_by_one

Option 2

object

Strategy that increments a generated metric based on the value of a log field.

field [required]

string

Name of the log field containing the numeric value to increment the metric by.

strategy [required]

enum

Uses a numeric field in the log event as the metric increment. Allowed enum values: increment_by_field

type [required]

enum

The processor type. Always generate_datadog_metrics. Allowed enum values: generate_datadog_metrics

default: generate_datadog_metrics

Option 10

object

The ocsf_mapper processor transforms logs into the OCSF schema using a predefined mapping configuration.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

mappings [required]

[object]

A list of mapping rules to convert events to the OCSF format.

include [required]

string

A Datadog search query used to select the logs that this mapping should apply to.

mapping [required]

 <oneOf>

Defines a single mapping rule for transforming logs into the OCSF schema.

Option 1

enum

Predefined library mappings for common log formats. Allowed enum values: CloudTrail Account Change,GCP Cloud Audit CreateBucket,GCP Cloud Audit CreateSink,GCP Cloud Audit SetIamPolicy,GCP Cloud Audit UpdateSink,Github Audit Log API Activity,Google Workspace Admin Audit addPrivilege,Microsoft 365 Defender Incident,Microsoft 365 Defender UserLoggedIn,Okta System Log Authentication,Palo Alto Networks Firewall Traffic

type [required]

enum

The processor type. The value should always be ocsf_mapper. Allowed enum values: ocsf_mapper

default: ocsf_mapper

Option 11

object

The parse_grok processor extracts structured fields from unstructured log messages using Grok patterns.

Supported pipeline types: logs

disable_library_rules

boolean

If set to true, disables the default Grok rules provided by Datadog.

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

A unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

rules [required]

[object]

The list of Grok parsing rules. If multiple matching rules are provided, they are evaluated in order. The first successful match is applied.

match_rules [required]

[object]

A list of Grok parsing rules that define how to extract fields from the source field. Each rule must contain a name and a valid Grok pattern.

name [required]

string

The name of the rule.

rule [required]

string

The definition of the Grok rule.

source [required]

string

The name of the field in the log event to apply the Grok rules to.

support_rules

[object]

A list of Grok helper rules that can be referenced by the parsing rules.

name [required]

string

The name of the Grok helper rule.

rule [required]

string

The definition of the Grok helper rule.

type [required]

enum

The processor type. The value should always be parse_grok. Allowed enum values: parse_grok

default: parse_grok

Option 12

object

The parse_json processor extracts JSON from a specified field and flattens it into the event. This is useful when logs contain embedded JSON as a string.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

field [required]

string

The name of the log field that contains a JSON string.

id [required]

string

A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be parse_json. Allowed enum values: parse_json

default: parse_json

Option 13

object

The parse_xml processor parses XML from a specified field and extracts it into the event.

Supported pipeline types: logs

always_use_text_key

boolean

Whether to always use a text key for element content.

attr_prefix

string

The prefix to use for XML attributes in the parsed output.

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

field [required]

string

The name of the log field that contains an XML string.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

include_attr

boolean

Whether to include XML attributes in the parsed output.

parse_bool

boolean

Whether to parse boolean values from strings.

parse_null

boolean

Whether to parse null values.

parse_number

boolean

Whether to parse numeric values from strings.

text_key

string

The key name to use for text content within XML elements. Must be at least 1 character if specified.

type [required]

enum

The processor type. The value should always be parse_xml. Allowed enum values: parse_xml

default: parse_xml

Option 14

object

The quota processor measures logging traffic for logs that match a specified filter. When the configured daily quota is met, the processor can drop or alert.

Supported pipeline types: logs

display_name

string

The display name for a component.

drop_events

boolean

If set to true, logs that match the quota filter and are sent after the quota is exceeded are dropped. Logs that do not match the filter continue through the pipeline. Note: You can set either drop_events or overflow_action, but not both.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

ignore_when_missing_partitions

boolean

If true, the processor skips quota checks when partition fields are missing from the logs.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

limit [required]

object

The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.

enforce [required]

enum

Unit for quota enforcement in bytes for data size or events for count. Allowed enum values: bytes,events

limit [required]

int64

The limit for quota enforcement.

name [required]

string

Name of the quota.

overflow_action

enum

The action to take when the quota or bucket limit is exceeded. Options:

  • drop: Drop the event.
  • no_action: Let the event pass through.
  • overflow_routing: Route to an overflow destination.

Allowed enum values: drop,no_action,overflow_routing

overrides

[object]

A list of alternate quota rules that apply to specific sets of events, identified by matching field values. Each override can define a custom limit.

fields [required]

[object]

A list of field matchers used to apply a specific override. If an event matches all listed key-value pairs, the corresponding override limit is enforced.

name [required]

string

The field name.

value [required]

string

The field value.

limit [required]

object

The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.

enforce [required]

enum

Unit for quota enforcement in bytes for data size or events for count. Allowed enum values: bytes,events

limit [required]

int64

The limit for quota enforcement.

partition_fields

[string]

A list of fields used to segment log traffic for quota enforcement. Quotas are tracked independently by unique combinations of these field values.

too_many_buckets_action

enum

The action to take when the quota or bucket limit is exceeded. Options:

  • drop: Drop the event.
  • no_action: Let the event pass through.
  • overflow_routing: Route to an overflow destination.

Allowed enum values: drop,no_action,overflow_routing

type [required]

enum

The processor type. The value should always be quota. Allowed enum values: quota

default: quota

Option 15

object

The reduce processor aggregates and merges logs based on matching keys and merge strategies.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

group_by [required]

[string]

A list of fields used to group log events for merging.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

merge_strategies [required]

[object]

List of merge strategies defining how values from grouped events should be combined.

path [required]

string

The field path in the log event.

strategy [required]

enum

The merge strategy to apply. Allowed enum values: discard,retain,sum,max,min,array,concat,concat_newline,concat_raw,shortest_array,longest_array,flat_unique

type [required]

enum

The processor type. The value should always be reduce. Allowed enum values: reduce

default: reduce

Option 16

object

The remove_fields processor deletes specified fields from logs.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[string]

A list of field names to be removed from each log event.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be remove_fields. Allowed enum values: remove_fields

default: remove_fields

Option 17

object

The rename_fields processor changes field names.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[object]

A list of rename rules specifying which fields to rename in the event, what to rename them to, and whether to preserve the original fields.

destination [required]

string

The field name to assign the renamed value to.

preserve_source [required]

boolean

Indicates whether the original field received from the source should be kept (true) or removed (false) after renaming.

source [required]

string

The original field name in the log event that should be renamed.

id [required]

string

A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be rename_fields. Allowed enum values: rename_fields

default: rename_fields

Option 18

object

The sample processor allows probabilistic sampling of logs at a fixed rate.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

group_by

[string]

Optional list of fields to group events by. Each group is sampled independently.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

percentage [required]

double

The percentage of logs to sample.

type [required]

enum

The processor type. The value should always be sample. Allowed enum values: sample

default: sample

Option 19

object

The sensitive_data_scanner processor detects and optionally redacts sensitive data in log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

rules [required]

[object]

A list of rules for identifying and acting on sensitive data patterns.

keyword_options

object

Configuration for keywords used to reinforce sensitive data pattern detection.

keywords [required]

[string]

A list of keywords to match near the sensitive pattern.

proximity [required]

int64

Maximum number of tokens between a keyword and a sensitive value match.

name [required]

string

A name identifying the rule.

on_match [required]

 <oneOf>

Defines what action to take when sensitive data is matched.

Option 1

object

Configuration for completely redacting matched sensitive data.

action [required]

enum

Action type that completely replaces the matched sensitive data with a fixed replacement string to remove all visibility. Allowed enum values: redact

options [required]

object

Configuration for fully redacting sensitive data.

replace [required]

string

The replacement string used in place of the matched sensitive data.

Option 2

object

Configuration for hashing matched sensitive values.

action [required]

enum

Action type that replaces the matched sensitive data with a hashed representation, preserving structure while securing content. Allowed enum values: hash

options

object

Options for the hash action.

Option 3

object

Configuration for partially redacting matched sensitive data.

action [required]

enum

Action type that redacts part of the sensitive data while preserving a configurable number of characters, typically used for masking purposes (e.g., show last 4 digits of a credit card). Allowed enum values: partial_redact

options [required]

object

Controls how partial redaction is applied, including character count and direction.

characters [required]

int64

The number of characters to which the partial redaction applies.

direction [required]

enum

Indicates whether to redact characters from the first or last part of the matched value. Allowed enum values: first,last

pattern [required]

 <oneOf>

Pattern detection configuration for identifying sensitive data using either a custom regex or a library reference.

Option 1

object

Defines a custom regex-based pattern for identifying sensitive data in logs.

options [required]

object

Options for defining a custom regex pattern.

description

string

Human-readable description providing context about a sensitive data scanner rule.

rule [required]

string

A regular expression used to detect sensitive values. Must be a valid regex.

type [required]

enum

Indicates a custom regular expression is used for matching. Allowed enum values: custom

Option 2

object

Specifies a pattern from Datadog’s sensitive data detection library to match known sensitive data types.

options [required]

object

Options for selecting a predefined library pattern and enabling keyword support.

description

string

Human-readable description providing context about a sensitive data scanner rule.

id [required]

string

Identifier for a predefined pattern from the sensitive data scanner pattern library.

use_recommended_keywords

boolean

Whether to augment the pattern with recommended keywords (optional).

type [required]

enum

Indicates that a predefined library pattern is used. Allowed enum values: library

scope [required]

 <oneOf>

Determines which parts of the log the pattern-matching rule should be applied to.

Option 1

object

Includes only specific fields for sensitive data scanning.

options [required]

object

Fields to which the scope rule applies.

fields [required]

[string]

The list of fields the scope rule applies to.

target [required]

enum

Applies the rule only to included fields. Allowed enum values: include

Option 2

object

Excludes specific fields from sensitive data scanning.

options [required]

object

Fields to which the scope rule applies.

fields [required]

[string]

The list of fields the scope rule applies to.

target [required]

enum

Excludes specific fields from processing. Allowed enum values: exclude

Option 3

object

Applies scanning across all available fields.

target [required]

enum

Applies the rule to all fields. Allowed enum values: all

tags [required]

[string]

Tags assigned to this rule for filtering and classification.

type [required]

enum

The processor type. The value should always be sensitive_data_scanner. Allowed enum values: sensitive_data_scanner

default: sensitive_data_scanner

Option 20

object

The split_array processor splits array fields into separate events based on configured rules.

Supported pipeline types: logs

arrays [required]

[object]

A list of array split configurations.

field [required]

string

The path to the array field to split.

include [required]

string

A Datadog search query used to determine which logs this array split operation targets.

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets. For split_array, this should typically be *.

type [required]

enum

The processor type. The value should always be split_array. Allowed enum values: split_array

default: split_array

Option 21

object

The throttle processor limits the number of events that pass through over a given time window.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

group_by

[string]

Optional list of fields used to group events; the threshold is applied to each group independently.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

threshold [required]

int64

The number of events allowed in a given time window. Events sent after the threshold has been reached are dropped.

type [required]

enum

The processor type. The value should always be throttle. Allowed enum values: throttle

default: throttle

window [required]

double

The time window in seconds over which the threshold applies.

Option 22

object

The metric_tags processor filters metrics based on their tags using Datadog tag key patterns.

Supported pipeline types: metrics

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query that determines which metrics the processor targets.

rules [required]

[object]

A list of rules for filtering metric tags.

action [required]

enum

The action to take on tags with matching keys. Allowed enum values: include,exclude

include [required]

string

A Datadog search query used to determine which metrics this rule targets.

keys [required]

[string]

A list of tag keys to include or exclude.

mode [required]

enum

The processing mode for tag filtering. Allowed enum values: filter

type [required]

enum

The processor type. The value should always be metric_tags. Allowed enum values: metric_tags

default: metric_tags
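
Putting the group-level fields together, a sketch of one processor group entry with two nested processors is shown below as a Python dict; the IDs, input references, and queries are hypothetical.

processor_group = {
    "id": "nginx-group",                       # hypothetical group ID
    "enabled": True,
    "include": "source:nginx",                 # condition for the group to execute
    "inputs": ["datadog-agent-source"],        # upstream component IDs
    "display_name": "NGINX processing",
    "processors": [
        {
            "id": "keep-errors",
            "type": "filter",
            "enabled": True,
            "include": "status:error",         # only matching logs continue
        },
        {
            "id": "drop-debug-fields",
            "type": "remove_fields",
            "enabled": True,
            "include": "*",
            "fields": ["debug_payload"],
        },
    ],
}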

sources [required]

[ <oneOf>]

A list of configured data sources for the pipeline.

Option 1

object

The datadog_agent source collects logs/metrics from the Datadog Agent.

Supported pipeline types: logs, metrics

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be datadog_agent. Allowed enum values: datadog_agent

default: datadog_agent
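
For example, a datadog_agent source with optional mutual TLS might be declared as follows (Python dict; the certificate paths are hypothetical):

datadog_agent_source = {
    "id": "datadog-agent-source",              # hypothetical component ID
    "type": "datadog_agent",
    "tls": {                                   # optional; omit to use the default transport
        "crt_file": "/etc/certs/pipeline.crt",
        "key_file": "/etc/certs/pipeline.key",
        "ca_file": "/etc/certs/ca.crt",
    },
}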

Option 2

object

The amazon_data_firehose source ingests logs from AWS Data Firehose.

Supported pipeline types: logs

auth

object

AWS authentication credentials used for accessing AWS services such as S3. If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).

assume_role

string

The Amazon Resource Name (ARN) of the role to assume.

external_id

string

A unique identifier for cross-account role assumption.

session_name

string

A session identifier used for logging and tracing the assumed role session.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be amazon_data_firehose. Allowed enum values: amazon_data_firehose

default: amazon_data_firehose

Option 3

object

The amazon_s3 source ingests logs from an Amazon S3 bucket. It supports AWS authentication and TLS encryption.

Supported pipeline types: logs

auth

object

AWS authentication credentials used for accessing AWS services such as S3. If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).

assume_role

string

The Amazon Resource Name (ARN) of the role to assume.

external_id

string

A unique identifier for cross-account role assumption.

session_name

string

A session identifier used for logging and tracing the assumed role session.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

region [required]

string

AWS region where the S3 bucket resides.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. Always amazon_s3. Allowed enum values: amazon_s3

default: amazon_s3
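
A sketch of an amazon_s3 source that assumes an IAM role, written as a Python dict over the fields above; the region, ARN, and identifiers are hypothetical.

amazon_s3_source = {
    "id": "s3-source",                         # hypothetical component ID
    "type": "amazon_s3",
    "region": "us-east-1",
    "auth": {                                  # optional; omit to use the system's default credentials
        "assume_role": "arn:aws:iam::123456789012:role/pipeline-reader",   # hypothetical role ARN
        "external_id": "pipeline-external-id",
        "session_name": "obs-pipelines",
    },
}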

Option 4

object

The fluent_bit source ingests logs from Fluent Bit.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be fluent_bit. Allowed enum values: fluent_bit

default: fluent_bit

Option 5

object

The fluentd source ingests logs from a Fluentd-compatible service.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be fluentd. Allowed enum values: fluentd

default: fluentd

Option 6

object

The google_pubsub source ingests logs from a Google Cloud Pub/Sub subscription.

Supported pipeline types: logs

auth

object

GCP credentials used to authenticate with Google Cloud Pub/Sub.

credentials_file [required]

string

Path to the GCP service account key file.

decoding [required]

enum

The decoding format used to interpret incoming logs. Allowed enum values: bytes,gelf,json,syslog

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

project [required]

string

The GCP project ID that owns the Pub/Sub subscription.

subscription [required]

string

The Pub/Sub subscription name from which messages are consumed.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be google_pubsub. Allowed enum values: google_pubsub

default: google_pubsub
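
As a rough illustration, a google_pubsub source entry in the sources array could look like the following; the project, subscription, and credentials path are placeholder values, not taken from this reference.

{
  "id": "pubsub-source",
  "type": "google_pubsub",
  "project": "my-gcp-project",
  "subscription": "log-export-subscription",
  "decoding": "json",
  "auth": {
    "credentials_file": "/var/secrets/gcp-service-account.json"
  }
}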

Option 7

object

The http_client source scrapes logs from HTTP endpoints at regular intervals.

Supported pipeline types: logs

auth_strategy

enum

Optional authentication strategy for HTTP requests. Allowed enum values: none,basic,bearer,custom

decoding [required]

enum

The decoding format used to interpret incoming logs. Allowed enum values: bytes,gelf,json,syslog

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

scrape_interval_secs

int64

The interval (in seconds) between HTTP scrape requests.

scrape_timeout_secs

int64

The timeout (in seconds) for each scrape request.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be http_client. Allowed enum values: http_client

default: http_client
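
For illustration only, an http_client source that scrapes a JSON endpoint once a minute might be expressed as follows; the interval and timeout values are arbitrary examples.

{
  "id": "http-client-source",
  "type": "http_client",
  "decoding": "json",
  "auth_strategy": "bearer",
  "scrape_interval_secs": 60,
  "scrape_timeout_secs": 10
}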

Option 8

object

The http_server source collects logs over HTTP POST from external services.

Supported pipeline types: logs

auth_strategy [required]

enum

HTTP authentication method. Allowed enum values: none,plain

decoding [required]

enum

The decoding format used to interpret incoming logs. Allowed enum values: bytes,gelf,json,syslog

id [required]

string

Unique ID for the HTTP server source.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be http_server. Allowed enum values: http_server

default: http_server

Option 9

object

The kafka source ingests data from Apache Kafka topics.

Supported pipeline types: logs

group_id [required]

string

Consumer group ID used by the Kafka client.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

librdkafka_options

[object]

Optional list of advanced Kafka client configuration options, defined as key-value pairs.

name [required]

string

The name of the librdkafka configuration option to set.

value [required]

string

The value assigned to the specified librdkafka configuration option.

sasl

object

Specifies the SASL mechanism for authenticating with a Kafka cluster.

mechanism

enum

SASL mechanism used for Kafka authentication. Allowed enum values: PLAIN,SCRAM-SHA-256,SCRAM-SHA-512

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

topics [required]

[string]

A list of Kafka topic names to subscribe to. The source ingests messages from each topic specified.

type [required]

enum

The source type. The value should always be kafka. Allowed enum values: kafka

default: kafka
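
A hypothetical kafka source entry combining the fields above could look like this; the consumer group, topic names, and librdkafka option are illustrative.

{
  "id": "kafka-source",
  "type": "kafka",
  "group_id": "observability-pipelines-consumer",
  "topics": ["app-logs", "audit-logs"],
  "sasl": {
    "mechanism": "SCRAM-SHA-256"
  },
  "librdkafka_options": [
    { "name": "fetch.message.max.bytes", "value": "1048576" }
  ]
}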

Option 10

object

The logstash source ingests logs from a Logstash forwarder.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be logstash. Allowed enum values: logstash

default: logstash

Option 11

object

The rsyslog source listens for logs over TCP or UDP from an rsyslog server using the syslog protocol.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

mode [required]

enum

Protocol used by the syslog source to receive messages. Allowed enum values: tcp,udp

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be rsyslog. Allowed enum values: rsyslog

default: rsyslog

Option 12

object

The socket source ingests logs over TCP or UDP.

Supported pipeline types: logs

framing [required]

 <oneOf>

Framing method configuration for the socket source.

Option 1

object

Byte frames which are delimited by a newline character.

method [required]

enum

Byte frames which are delimited by a newline character. Allowed enum values: newline_delimited

Option 2

object

Byte frames are passed through as-is according to the underlying I/O boundaries (for example, split between messages or stream segments).

method [required]

enum

Byte frames are passed through as-is according to the underlying I/O boundaries (for example, split between messages or stream segments). Allowed enum values: bytes

Option 3

object

Byte frames which are delimited by a chosen character.

delimiter [required]

string

A single ASCII character used to delimit events.

method [required]

enum

Byte frames which are delimited by a chosen character. Allowed enum values: character_delimited

Option 4

object

Byte frames according to the octet counting format as per RFC6587.

method [required]

enum

Byte frames according to the octet counting format as per RFC6587. Allowed enum values: octet_counting

Option 5

object

Byte frames which are chunked GELF messages.

method [required]

enum

Byte frames which are chunked GELF messages. Allowed enum values: chunked_gelf

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

mode [required]

enum

Protocol used to receive logs. Allowed enum values: tcp,udp

tls

object

TLS configuration. Relevant only when mode is tcp.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be socket. Allowed enum values: socket

default: socket
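
Assuming the framing variants above are expressed as an object with a method discriminator, a TCP socket source using character-delimited framing might look like the following sketch; the delimiter is an arbitrary example.

{
  "id": "socket-source",
  "type": "socket",
  "mode": "tcp",
  "framing": {
    "method": "character_delimited",
    "delimiter": "|"
  }
}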

Option 13

object

The splunk_hec source implements the Splunk HTTP Event Collector (HEC) API.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. Always splunk_hec. Allowed enum values: splunk_hec

default: splunk_hec

Option 14

object

The splunk_tcp source receives logs from a Splunk Universal Forwarder over TCP. TLS is supported for secure transmission.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. Always splunk_tcp. Allowed enum values: splunk_tcp

default: splunk_tcp

Option 15

object

The sumo_logic source receives logs from Sumo Logic collectors.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

type [required]

enum

The source type. The value should always be sumo_logic. Allowed enum values: sumo_logic

default: sumo_logic

Option 16

object

The syslog_ng source listens for logs over TCP or UDP from a syslog-ng server using the syslog protocol.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

mode [required]

enum

Protocol used by the syslog source to receive messages. Allowed enum values: tcp,udp

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be syslog_ng. Allowed enum values: syslog_ng

default: syslog_ng

Option 17

object

The opentelemetry source receives telemetry data using the OpenTelemetry Protocol (OTLP) over gRPC and HTTP.

Supported pipeline types: logs

grpc_address_key

string

Environment variable name containing the gRPC server address for receiving OTLP data. Must be a valid environment variable name (alphanumeric characters and underscores only).

http_address_key

string

Environment variable name containing the HTTP server address for receiving OTLP data. Must be a valid environment variable name (alphanumeric characters and underscores only).

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be opentelemetry. Allowed enum values: opentelemetry

default: opentelemetry
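
As an illustrative sketch, an opentelemetry source that reads its listen addresses from environment variables could be configured as follows; the variable names are placeholders.

{
  "id": "otel-source",
  "type": "opentelemetry",
  "grpc_address_key": "OTLP_GRPC_ADDRESS",
  "http_address_key": "OTLP_HTTP_ADDRESS"
}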

name [required]

string

Name of the pipeline.

id [required]

string

Unique identifier for the pipeline.

type [required]

string

The resource type identifier. For pipeline resources, this should always be set to pipelines.

default: pipelines

{
  "data": {
    "attributes": {
      "config": {
        "destinations": [
          {
            "auth_strategy": "basic",
            "compression": {
              "algorithm": "gzip"
            },
            "encoding": "json",
            "id": "http-client-destination",
            "inputs": [
              "filter-processor"
            ],
            "tls": {
              "ca_file": "string",
              "crt_file": "/path/to/cert.crt",
              "key_file": "string"
            },
            "type": "http_client"
          }
        ],
        "pipeline_type": "logs",
        "processor_groups": [
          {
            "display_name": "my component",
            "enabled": true,
            "id": "grouped-processors",
            "include": "service:my-service",
            "inputs": [
              "datadog-agent-source"
            ],
            "processors": [
              []
            ]
          }
        ],
        "processors": [
          {
            "display_name": "my component",
            "enabled": true,
            "id": "grouped-processors",
            "include": "service:my-service",
            "inputs": [
              "datadog-agent-source"
            ],
            "processors": [
              []
            ]
          }
        ],
        "sources": [
          {
            "id": "datadog-agent-source",
            "tls": {
              "ca_file": "string",
              "crt_file": "/path/to/cert.crt",
              "key_file": "string"
            },
            "type": "datadog_agent"
          }
        ]
      },
      "name": "Main Observability Pipeline"
    },
    "id": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
    "type": "pipelines"
  }
}

Forbidden

API error response.

Expand All

Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Too many requests

API error response.

Expand All

Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Code example

# Path parameters
export pipeline_id="CHANGE_ME"
# Curl command (replace api.datadoghq.com with the API host for your Datadog site)
curl -X GET "https://api.datadoghq.com/api/v2/obs-pipelines/pipelines/${pipeline_id}" \
  -H "Accept: application/json" \
  -H "DD-API-KEY: ${DD_API_KEY}" \
  -H "DD-APPLICATION-KEY: ${DD_APP_KEY}"
"""
Get a specific pipeline returns "OK" response
"""

from os import environ
from datadog_api_client import ApiClient, Configuration
from datadog_api_client.v2.api.observability_pipelines_api import ObservabilityPipelinesApi

# there is a valid "pipeline" in the system
PIPELINE_DATA_ID = environ["PIPELINE_DATA_ID"]

configuration = Configuration()
configuration.unstable_operations["get_pipeline"] = True
with ApiClient(configuration) as api_client:
    api_instance = ObservabilityPipelinesApi(api_client)
    response = api_instance.get_pipeline(
        pipeline_id=PIPELINE_DATA_ID,
    )

    print(response)

Instructions

First install the library and its dependencies, then save the example to example.py and run the following command (set DD_SITE to your Datadog site, for example datadoghq.com):

DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" python3 "example.py"
# Get a specific pipeline returns "OK" response

require "datadog_api_client"
DatadogAPIClient.configure do |config|
  config.unstable_operations["v2.get_pipeline".to_sym] = true
end
api_instance = DatadogAPIClient::V2::ObservabilityPipelinesAPI.new

# there is a valid "pipeline" in the system
PIPELINE_DATA_ID = ENV["PIPELINE_DATA_ID"]
p api_instance.get_pipeline(PIPELINE_DATA_ID)

Instructions

First install the library and its dependencies, then save the example to example.rb and run the following command (set DD_SITE to your Datadog site, for example datadoghq.com):

DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" ruby "example.rb"
// Get a specific pipeline returns "OK" response

package main

import (
	"context"
	"encoding/json"
	"fmt"
	"os"

	"github.com/DataDog/datadog-api-client-go/v2/api/datadog"
	"github.com/DataDog/datadog-api-client-go/v2/api/datadogV2"
)

func main() {
	// there is a valid "pipeline" in the system
	PipelineDataID := os.Getenv("PIPELINE_DATA_ID")

	ctx := datadog.NewDefaultContext(context.Background())
	configuration := datadog.NewConfiguration()
	configuration.SetUnstableOperationEnabled("v2.GetPipeline", true)
	apiClient := datadog.NewAPIClient(configuration)
	api := datadogV2.NewObservabilityPipelinesApi(apiClient)
	resp, r, err := api.GetPipeline(ctx, PipelineDataID)

	if err != nil {
		fmt.Fprintf(os.Stderr, "Error when calling `ObservabilityPipelinesApi.GetPipeline`: %v\n", err)
		fmt.Fprintf(os.Stderr, "Full HTTP response: %v\n", r)
	}

	responseContent, _ := json.MarshalIndent(resp, "", "  ")
	fmt.Fprintf(os.Stdout, "Response from `ObservabilityPipelinesApi.GetPipeline`:\n%s\n", responseContent)
}

Instructions

First install the library and its dependencies, then save the example to main.go and run the following command (set DD_SITE to your Datadog site, for example datadoghq.com):

DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" go run "main.go"
// Get a specific pipeline returns "OK" response

import com.datadog.api.client.ApiClient;
import com.datadog.api.client.ApiException;
import com.datadog.api.client.v2.api.ObservabilityPipelinesApi;
import com.datadog.api.client.v2.model.ObservabilityPipeline;

public class Example {
  public static void main(String[] args) {
    ApiClient defaultClient = ApiClient.getDefaultApiClient();
    defaultClient.setUnstableOperationEnabled("v2.getPipeline", true);
    ObservabilityPipelinesApi apiInstance = new ObservabilityPipelinesApi(defaultClient);

    // there is a valid "pipeline" in the system
    String PIPELINE_DATA_ID = System.getenv("PIPELINE_DATA_ID");

    try {
      ObservabilityPipeline result = apiInstance.getPipeline(PIPELINE_DATA_ID);
      System.out.println(result);
    } catch (ApiException e) {
      System.err.println("Exception when calling ObservabilityPipelinesApi#getPipeline");
      System.err.println("Status code: " + e.getCode());
      System.err.println("Reason: " + e.getResponseBody());
      System.err.println("Response headers: " + e.getResponseHeaders());
      e.printStackTrace();
    }
  }
}

Instructions

First install the library and its dependencies, then save the example to Example.java and run the following command (set DD_SITE to your Datadog site, for example datadoghq.com):

DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" java "Example.java"
// Get a specific pipeline returns "OK" response
use datadog_api_client::datadog;
use datadog_api_client::datadogV2::api_observability_pipelines::ObservabilityPipelinesAPI;

#[tokio::main]
async fn main() {
    // there is a valid "pipeline" in the system
    let pipeline_data_id = std::env::var("PIPELINE_DATA_ID").unwrap();
    let mut configuration = datadog::Configuration::new();
    configuration.set_unstable_operation_enabled("v2.GetPipeline", true);
    let api = ObservabilityPipelinesAPI::with_config(configuration);
    let resp = api.get_pipeline(pipeline_data_id.clone()).await;
    if let Ok(value) = resp {
        println!("{:#?}", value);
    } else {
        println!("{:#?}", resp.unwrap_err());
    }
}

Instructions

First install the library and its dependencies, then save the example to src/main.rs and run the following command (set DD_SITE to your Datadog site, for example datadoghq.com):

DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" cargo run
/**
 * Get a specific pipeline returns "OK" response
 */

import { client, v2 } from "@datadog/datadog-api-client";

const configuration = client.createConfiguration();
configuration.unstableOperations["v2.getPipeline"] = true;
const apiInstance = new v2.ObservabilityPipelinesApi(configuration);

// there is a valid "pipeline" in the system
const PIPELINE_DATA_ID = process.env.PIPELINE_DATA_ID as string;

const params: v2.ObservabilityPipelinesApiGetPipelineRequest = {
  pipelineId: PIPELINE_DATA_ID,
};

apiInstance
  .getPipeline(params)
  .then((data: v2.ObservabilityPipeline) => {
    console.log(
      "API called successfully. Returned data: " + JSON.stringify(data)
    );
  })
  .catch((error: any) => console.error(error));

Instructions

First install the library and its dependencies, then save the example to example.ts and run the following command (set DD_SITE to your Datadog site, for example datadoghq.com):

DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" tsc "example.ts"

Note: This endpoint is in Preview. Fill out this form to request access.

PUT https://api.ap1.datadoghq.com/api/v2/obs-pipelines/pipelines/{pipeline_id}https://api.ap2.datadoghq.com/api/v2/obs-pipelines/pipelines/{pipeline_id}https://api.datadoghq.eu/api/v2/obs-pipelines/pipelines/{pipeline_id}https://api.ddog-gov.com/api/v2/obs-pipelines/pipelines/{pipeline_id}https://api.datadoghq.com/api/v2/obs-pipelines/pipelines/{pipeline_id}https://api.us3.datadoghq.com/api/v2/obs-pipelines/pipelines/{pipeline_id}https://api.us5.datadoghq.com/api/v2/obs-pipelines/pipelines/{pipeline_id}

Overview

Update a pipeline. This endpoint requires the observability_pipelines_deploy permission.

Arguments

Path parameters

Name

Type

Description

pipeline_id [required]

string

The ID of the pipeline to update.

Request

Body Data (required)

Expand All

Field

Type

Description

data [required]

object

Contains the pipeline’s ID, type, and configuration attributes.

attributes [required]

object

Defines the pipeline’s name and its components (sources, processors, and destinations).

config [required]

object

Specifies the pipeline's configuration, including its sources, processors, and destinations.

destinations [required]

[ <oneOf>]

A list of destination components where processed logs are sent.

Option 1

object

The http_client destination sends data to an HTTP endpoint.

Supported pipeline types: logs, metrics

auth_strategy

enum

HTTP authentication strategy. Allowed enum values: none,basic,bearer

compression

object

Compression configuration for HTTP requests.

algorithm [required]

enum

Compression algorithm. Allowed enum values: gzip

encoding [required]

enum

Encoding format for log events. Allowed enum values: json

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. The value should always be http_client. Allowed enum values: http_client

default: http_client

Option 2

object

The amazon_opensearch destination writes logs to Amazon OpenSearch.

Supported pipeline types: logs

auth [required]

object

Authentication settings for the Amazon OpenSearch destination. The strategy field determines whether basic or AWS-based authentication is used.

assume_role

string

The ARN of the role to assume (used with aws strategy).

aws_region

string

AWS region

external_id

string

External ID for the assumed role (used with aws strategy).

session_name

string

Session name for the assumed role (used with aws strategy).

strategy [required]

enum

The authentication strategy to use. Allowed enum values: basic,aws

bulk_index

string

The index to write logs to.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be amazon_opensearch. Allowed enum values: amazon_opensearch

default: amazon_opensearch

Option 3

object

The amazon_s3 destination sends your logs in Datadog-rehydratable format to an Amazon S3 bucket for archiving.

Supported pipeline types: logs

auth

object

AWS authentication credentials used for accessing AWS services such as S3. If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).

assume_role

string

The Amazon Resource Name (ARN) of the role to assume.

external_id

string

A unique identifier for cross-account role assumption.

session_name

string

A session identifier used for logging and tracing the assumed role session.

bucket [required]

string

S3 bucket name.

id [required]

string

Unique identifier for the destination component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

key_prefix

string

Optional prefix for object keys.

region [required]

string

AWS region of the S3 bucket.

storage_class [required]

enum

S3 storage class. Allowed enum values: STANDARD,REDUCED_REDUNDANCY,INTELLIGENT_TIERING,STANDARD_IA,EXPRESS_ONEZONE,ONEZONE_IA,GLACIER,GLACIER_IR,DEEP_ARCHIVE

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. Always amazon_s3. Allowed enum values: amazon_s3

default: amazon_s3
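
Putting these fields together, a hypothetical amazon_s3 destination entry could look like the following; the bucket name, role ARN, and other values are placeholders.

{
  "id": "s3-archive-destination",
  "type": "amazon_s3",
  "inputs": ["filter-processor"],
  "bucket": "my-log-archive",
  "region": "us-east-1",
  "key_prefix": "observability-pipelines/",
  "storage_class": "STANDARD",
  "auth": {
    "assume_role": "arn:aws:iam::123456789012:role/pipeline-archiver",
    "external_id": "op-external-id",
    "session_name": "op-s3-session"
  }
}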

Option 4

object

The amazon_security_lake destination sends your logs to Amazon Security Lake.

Supported pipeline types: logs

auth

object

AWS authentication credentials used for accessing AWS services such as S3. If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).

assume_role

string

The Amazon Resource Name (ARN) of the role to assume.

external_id

string

A unique identifier for cross-account role assumption.

session_name

string

A session identifier used for logging and tracing the assumed role session.

bucket [required]

string

Name of the Amazon S3 bucket in Security Lake (3-63 characters).

custom_source_name [required]

string

Custom source name for the logs in Security Lake.

id [required]

string

Unique identifier for the destination component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

region [required]

string

AWS region of the S3 bucket.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. Always amazon_security_lake. Allowed enum values: amazon_security_lake

default: amazon_security_lake

Option 5

object

The azure_storage destination forwards logs to an Azure Blob Storage container.

Supported pipeline types: logs

blob_prefix

string

Optional prefix for blobs written to the container.

container_name [required]

string

The name of the Azure Blob Storage container to store logs in.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be azure_storage. Allowed enum values: azure_storage

default: azure_storage

Option 6

object

The cloud_prem destination sends logs to Datadog CloudPrem.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be cloud_prem. Allowed enum values: cloud_prem

default: cloud_prem

Option 7

object

The crowdstrike_next_gen_siem destination forwards logs to CrowdStrike Next Gen SIEM.

Supported pipeline types: logs

compression

object

Compression configuration for log events.

algorithm [required]

enum

Compression algorithm for log events. Allowed enum values: gzip,zlib

level

int64

Compression level.

encoding [required]

enum

Encoding format for log events. Allowed enum values: json,raw_message

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. The value should always be crowdstrike_next_gen_siem. Allowed enum values: crowdstrike_next_gen_siem

default: crowdstrike_next_gen_siem

Option 8

object

The datadog_logs destination forwards logs to Datadog Log Management.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

routes

[object]

A list of routing rules that forward matching logs to Datadog using dedicated API keys.

api_key_key

string

Name of the environment variable or secret that stores the Datadog API key used by this route.

include

string

A Datadog search query that determines which logs are forwarded using this route.

route_id

string

Unique identifier for this route within the destination.

site

string

Datadog site where matching logs are sent (for example, us1).

type [required]

enum

The destination type. The value should always be datadog_logs. Allowed enum values: datadog_logs

default: datadog_logs
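
For illustration, a datadog_logs destination that forwards matching logs through a dedicated route might be written as follows; the route values and environment variable name are examples, not required names.

{
  "id": "datadog-logs-destination",
  "type": "datadog_logs",
  "inputs": ["filter-processor"],
  "routes": [
    {
      "route_id": "security-route",
      "include": "service:auth",
      "api_key_key": "SECURITY_TEAM_API_KEY",
      "site": "us1"
    }
  ]
}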

Option 9

object

The elasticsearch destination writes logs to an Elasticsearch cluster.

Supported pipeline types: logs

api_version

enum

The Elasticsearch API version to use. Set to auto to auto-detect. Allowed enum values: auto,v6,v7,v8

bulk_index

string

The index to write logs to in Elasticsearch.

data_stream

object

Configuration options for writing to Elasticsearch Data Streams instead of a fixed index.

dataset

string

The data stream dataset for your logs. This groups logs by their source or application.

dtype

string

The data stream type for your logs. This determines how logs are categorized within the data stream.

namespace

string

The data stream namespace for your logs. This separates logs into different environments or domains.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be elasticsearch. Allowed enum values: elasticsearch

default: elasticsearch
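
A sketch of an elasticsearch destination writing to a data stream (rather than a fixed bulk_index) could look like this; the dataset and namespace values are illustrative.

{
  "id": "elasticsearch-destination",
  "type": "elasticsearch",
  "inputs": ["filter-processor"],
  "api_version": "v8",
  "data_stream": {
    "dtype": "logs",
    "dataset": "myapp",
    "namespace": "production"
  }
}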

Option 10

object

The google_chronicle destination sends logs to Google Chronicle.

Supported pipeline types: logs

auth

object

GCP credentials used to authenticate with Google Chronicle.

credentials_file [required]

string

Path to the GCP service account key file.

customer_id [required]

string

The Google Chronicle customer ID.

encoding

enum

The encoding format for the logs sent to Chronicle. Allowed enum values: json,raw_message

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

log_type

string

The log type metadata associated with the Chronicle destination.

type [required]

enum

The destination type. The value should always be google_chronicle. Allowed enum values: google_chronicle

default: google_chronicle

Option 11

object

The google_cloud_storage destination stores logs in a Google Cloud Storage (GCS) bucket. It requires a bucket name, GCP authentication, and metadata fields.

Supported pipeline types: logs

acl

enum

Access control list setting for objects written to the bucket. Allowed enum values: private,project-private,public-read,authenticated-read,bucket-owner-read,bucket-owner-full-control

auth

object

GCP credentials used to authenticate with Google Cloud Storage.

credentials_file [required]

string

Path to the GCP service account key file.

bucket [required]

string

Name of the GCS bucket.

id [required]

string

Unique identifier for the destination component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

key_prefix

string

Optional prefix for object keys within the GCS bucket.

metadata

[object]

Custom metadata to attach to each object uploaded to the GCS bucket.

name [required]

string

The metadata key.

value [required]

string

The metadata value.

storage_class [required]

enum

Storage class used for objects stored in GCS. Allowed enum values: STANDARD,NEARLINE,COLDLINE,ARCHIVE

type [required]

enum

The destination type. Always google_cloud_storage. Allowed enum values: google_cloud_storage

default: google_cloud_storage
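
Combining the fields above, a hypothetical google_cloud_storage destination might look like the following; the bucket name, prefix, and metadata are placeholder values.

{
  "id": "gcs-destination",
  "type": "google_cloud_storage",
  "inputs": ["filter-processor"],
  "bucket": "my-log-archive-bucket",
  "key_prefix": "pipelines/",
  "storage_class": "NEARLINE",
  "acl": "project-private",
  "auth": {
    "credentials_file": "/var/secrets/gcp-service-account.json"
  },
  "metadata": [
    { "name": "team", "value": "platform" }
  ]
}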

Option 12

object

The google_pubsub destination publishes logs to a Google Cloud Pub/Sub topic.

Supported pipeline types: logs

auth

object

GCP credentials used to authenticate with Google Cloud Pub/Sub.

credentials_file [required]

string

Path to the GCP service account key file.

encoding [required]

enum

Encoding format for log events. Allowed enum values: json,raw_message

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

project [required]

string

The GCP project ID that owns the Pub/Sub topic.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

topic [required]

string

The Pub/Sub topic name to publish logs to.

type [required]

enum

The destination type. The value should always be google_pubsub. Allowed enum values: google_pubsub

default: google_pubsub

Option 13

object

The kafka destination sends logs to Apache Kafka topics.

Supported pipeline types: logs

compression

enum

Compression codec for Kafka messages. Allowed enum values: none,gzip,snappy,lz4,zstd

encoding [required]

enum

Encoding format for log events. Allowed enum values: json,raw_message

headers_key

string

The field name to use for Kafka message headers.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

key_field

string

The field name to use as the Kafka message key.

librdkafka_options

[object]

Optional list of advanced Kafka producer configuration options, defined as key-value pairs.

name [required]

string

The name of the librdkafka configuration option to set.

value [required]

string

The value assigned to the specified librdkafka configuration option.

message_timeout_ms

int64

Maximum time in milliseconds to wait for message delivery confirmation.

rate_limit_duration_secs

int64

Duration in seconds for the rate limit window.

rate_limit_num

int64

Maximum number of messages allowed per rate limit duration.

sasl

object

Specifies the SASL mechanism for authenticating with a Kafka cluster.

mechanism

enum

SASL mechanism used for Kafka authentication. Allowed enum values: PLAIN,SCRAM-SHA-256,SCRAM-SHA-512

socket_timeout_ms

int64

Socket timeout in milliseconds for network requests.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

topic [required]

string

The Kafka topic name to publish logs to.

type [required]

enum

The destination type. The value should always be kafka. Allowed enum values: kafka

default: kafka
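
As an example sketch, a kafka destination publishing JSON-encoded events with rate limiting could be expressed as follows; the topic name and numeric limits are illustrative.

{
  "id": "kafka-destination",
  "type": "kafka",
  "inputs": ["filter-processor"],
  "topic": "processed-logs",
  "encoding": "json",
  "compression": "zstd",
  "key_field": "service",
  "message_timeout_ms": 30000,
  "rate_limit_duration_secs": 1,
  "rate_limit_num": 5000
}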

Option 14

object

The microsoft_sentinel destination forwards logs to Microsoft Sentinel.

Supported pipeline types: logs

client_id [required]

string

Azure AD client ID used for authentication.

dcr_immutable_id [required]

string

The immutable ID of the Data Collection Rule (DCR).

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

table [required]

string

The name of the Log Analytics table where logs are sent.

tenant_id [required]

string

Azure AD tenant ID.

type [required]

enum

The destination type. The value should always be microsoft_sentinel. Allowed enum values: microsoft_sentinel

default: microsoft_sentinel

Option 15

object

The new_relic destination sends logs to the New Relic platform.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

region [required]

enum

The New Relic region. Allowed enum values: us,eu

type [required]

enum

The destination type. The value should always be new_relic. Allowed enum values: new_relic

default: new_relic

Option 16

object

The opensearch destination writes logs to an OpenSearch cluster.

Supported pipeline types: logs

bulk_index

string

The index to write logs to.

data_stream

object

Configuration options for writing to OpenSearch Data Streams instead of a fixed index.

dataset

string

The data stream dataset for your logs. This groups logs by their source or application.

dtype

string

The data stream type for your logs. This determines how logs are categorized within the data stream.

namespace

string

The data stream namespace for your logs. This separates logs into different environments or domains.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be opensearch. Allowed enum values: opensearch

default: opensearch

Option 17

object

The rsyslog destination forwards logs to an external rsyslog server over TCP or UDP using the syslog protocol.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

keepalive

int64

Optional socket keepalive duration in milliseconds.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. The value should always be rsyslog. Allowed enum values: rsyslog

default: rsyslog

Option 18

object

The sentinel_one destination sends logs to SentinelOne.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

region [required]

enum

The SentinelOne region to send logs to. Allowed enum values: us,eu,ca,data_set_us

type [required]

enum

The destination type. The value should always be sentinel_one. Allowed enum values: sentinel_one

default: sentinel_one

Option 19

object

The socket destination sends logs over TCP or UDP to a remote server.

Supported pipeline types: logs

encoding [required]

enum

Encoding format for log events. Allowed enum values: json,raw_message

framing [required]

 <oneOf>

Framing method configuration.

Option 1

object

Each log event is delimited by a newline character.

method [required]

enum

The framing method. Each log event is delimited by a newline character. Allowed enum values: newline_delimited

Option 2

object

Event data is not delimited at all.

method [required]

enum

The framing method. Event data is not delimited at all. Allowed enum values: bytes

Option 3

object

Each log event is separated using the specified delimiter character.

delimiter [required]

string

A single ASCII character used as a delimiter.

method [required]

enum

The framing method. Each log event is separated using the specified delimiter character. Allowed enum values: character_delimited

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

mode [required]

enum

Protocol used to send logs. Allowed enum values: tcp,udp

tls

object

TLS configuration. Relevant only when mode is tcp.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. The value should always be socket. Allowed enum values: socket

default: socket

Option 20

object

The splunk_hec destination forwards logs to Splunk using the HTTP Event Collector (HEC).

Supported pipeline types: logs

auto_extract_timestamp

boolean

If true, Splunk tries to extract timestamps from incoming log events. If false, Splunk assigns the time the event was received.

encoding

enum

Encoding format for log events. Allowed enum values: json,raw_message

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

index

string

Optional name of the Splunk index where logs are written.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

sourcetype

string

The Splunk sourcetype to assign to log events.

type [required]

enum

The destination type. Always splunk_hec. Allowed enum values: splunk_hec

default: splunk_hec
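
For illustration, a splunk_hec destination entry might look like the following; the index and sourcetype values are placeholders.

{
  "id": "splunk-hec-destination",
  "type": "splunk_hec",
  "inputs": ["filter-processor"],
  "encoding": "json",
  "index": "main",
  "sourcetype": "datadog_pipeline",
  "auto_extract_timestamp": true
}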

Option 21

object

The sumo_logic destination forwards logs to Sumo Logic.

Supported pipeline types: logs

encoding

enum

The output encoding format. Allowed enum values: json,raw_message,logfmt

header_custom_fields

[object]

A list of custom headers to include in the request to Sumo Logic.

name [required]

string

The header field name.

value [required]

string

The header field value.

header_host_name

string

Optional override for the host name header.

header_source_category

string

Optional override for the source category header.

header_source_name

string

Optional override for the source name header.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be sumo_logic. Allowed enum values: sumo_logic

default: sumo_logic

Option 22

object

The syslog_ng destination forwards logs to an external syslog-ng server over TCP or UDP using the syslog protocol.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

keepalive

int64

Optional socket keepalive duration in milliseconds.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. The value should always be syslog_ng. Allowed enum values: syslog_ng

default: syslog_ng

Option 23

object

The datadog_metrics destination forwards metrics to Datadog.

Supported pipeline types: metrics

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be datadog_metrics. Allowed enum values: datadog_metrics

default: datadog_metrics

pipeline_type

enum

The type of data being ingested. Defaults to logs if not specified. Allowed enum values: logs,metrics

default: logs

processor_groups

[object]

A list of processor groups that transform or enrich log data.

display_name

string

The display name for a component.

enabled [required]

boolean

Whether this processor group is enabled.

id [required]

string

The unique identifier for the processor group.

include [required]

string

Conditional expression for when this processor group should execute.

inputs [required]

[string]

A list of IDs for components whose output is used as the input for this processor group.

processors [required]

[ <oneOf>]

Processors applied sequentially within this group. Events flow through each processor in order.

Option 1

object

The filter processor allows conditional processing of logs/metrics based on a Datadog search query. Logs/metrics that match the include query are passed through; others are discarded.

Supported pipeline types: logs, metrics

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs/metrics should pass through the filter. Logs/metrics that match this query continue to downstream components; others are dropped.

type [required]

enum

The processor type. The value should always be filter. Allowed enum values: filter

default: filter
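
To show how a processor group wraps its processors, here is a hypothetical group containing a single filter processor; the queries and IDs are examples only.

{
  "id": "grouped-processors",
  "display_name": "drop debug logs",
  "enabled": true,
  "include": "service:my-service",
  "inputs": ["datadog-agent-source"],
  "processors": [
    {
      "id": "filter-processor",
      "type": "filter",
      "enabled": true,
      "include": "-status:debug"
    }
  ]
}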

Option 2

object

The add_env_vars processor adds environment variable values to log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used to reference this processor in the pipeline.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be add_env_vars. Allowed enum values: add_env_vars

default: add_env_vars

variables [required]

[object]

A list of environment variable mappings to apply to log fields.

field [required]

string

The target field in the log event.

name [required]

string

The name of the environment variable to read.

Option 3

object

The add_fields processor adds static key-value fields to logs.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[object]

A list of static fields (key-value pairs) added to each log event processed by this component.

name [required]

string

The field name.

value [required]

string

The field value.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be add_fields. Allowed enum values: add_fields

default: add_fields

Option 4

object

The add_hostname processor adds the hostname to log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be add_hostname. Allowed enum values: add_hostname

default: add_hostname

Option 5

object

The custom_processor processor transforms events using Vector Remap Language (VRL) scripts with advanced filtering capabilities.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets. This field should always be set to * for the custom_processor processor.

default: *

remaps [required]

[object]

Array of VRL remap rules.

drop_on_error [required]

boolean

Whether to drop events that caused errors during processing.

enabled

boolean

Whether this remap rule is enabled.

include [required]

string

A Datadog search query used to filter events for this specific remap rule.

name [required]

string

A descriptive name for this remap rule.

source [required]

string

The VRL script source code that defines the processing logic.

type [required]

enum

The processor type. The value should always be custom_processor. Allowed enum values: custom_processor

default: custom_processor
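
A minimal sketch of a custom_processor with one VRL remap rule follows; the VRL script and search query are illustrative and assume the goal is to redact email addresses from the message field.

{
  "id": "custom-vrl-processor",
  "type": "custom_processor",
  "enabled": true,
  "include": "*",
  "remaps": [
    {
      "name": "redact emails",
      "enabled": true,
      "include": "service:checkout",
      "drop_on_error": false,
      "source": ".message = replace(string!(.message), r'[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+', '[redacted]')"
    }
  ]
}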

Option 6

object

The datadog_tags processor includes or excludes specific Datadog tags in your logs.

Supported pipeline types: logs

action [required]

enum

The action to take on tags with matching keys. Allowed enum values: include,exclude

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

keys [required]

[string]

A list of tag keys.

mode [required]

enum

The processing mode. Allowed enum values: filter

type [required]

enum

The processor type. The value should always be datadog_tags. Allowed enum values: datadog_tags

default: datadog_tags

Option 7

object

The dedupe processor removes duplicate fields in log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[string]

A list of log field paths to check for duplicates.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

mode [required]

enum

The deduplication mode to apply to the fields. Allowed enum values: match,ignore

type [required]

enum

The processor type. The value should always be dedupe. Allowed enum values: dedupe

default: dedupe

Option 8

object

The enrichment_table processor enriches logs using a static CSV file, GeoIP database, or reference table. Exactly one of file, geoip, or reference_table must be configured.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

file

object

Defines a static enrichment table loaded from a CSV file.

encoding [required]

object

File encoding format.

delimiter [required]

string

The character used to delimit fields in the CSV file.

includes_headers [required]

boolean

Whether the CSV file includes a header row.

type [required]

enum

Specifies the encoding format (e.g., CSV) used for enrichment tables. Allowed enum values: csv

key [required]

[object]

Key fields used to look up enrichment values.

column [required]

string

The name of the enrichment table column to match against.

comparison [required]

enum

Defines how to compare key fields for enrichment table lookups. Allowed enum values: equals

field [required]

string

The path of the log field whose value is used for the lookup.

path [required]

string

Path to the CSV file.

schema [required]

[object]

Schema defining column names and their types.

column [required]

string

The name of the column in the enrichment table.

type [required]

enum

Declares allowed data types for enrichment table columns. Allowed enum values: string,boolean,integer,float,date,timestamp

geoip

object

Uses a GeoIP database to enrich logs based on an IP field.

key_field [required]

string

Path to the IP field in the log.

locale [required]

string

Locale used to resolve geographical names.

path [required]

string

Path to the GeoIP database file.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

reference_table

object

Uses a Datadog reference table to enrich logs.

columns

[string]

List of column names to include from the reference table. If not provided, all columns are included.

key_field [required]

string

Path to the field in the log event to match against the reference table.

table_id [required]

string

The unique identifier of the reference table.

target [required]

string

Path where enrichment results should be stored in the log.

type [required]

enum

The processor type. The value should always be enrichment_table. Allowed enum values: enrichment_table

default: enrichment_table
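
As an illustrative sketch, a file-based enrichment_table processor might be configured as follows; the CSV path, column names, and lookup field are hypothetical:

```json
{
  "type": "enrichment_table",
  "id": "enrichment-table-1",
  "enabled": true,
  "include": "*",
  "target": "service_info",
  "file": {
    "path": "/etc/enrichment/service-owners.csv",
    "encoding": {
      "type": "csv",
      "delimiter": ",",
      "includes_headers": true
    },
    "key": [
      { "column": "service", "comparison": "equals", "field": "service" }
    ],
    "schema": [
      { "column": "service", "type": "string" },
      { "column": "owner", "type": "string" }
    ]
  }
}
```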

Option 9

object

The generate_datadog_metrics processor creates custom metrics from logs and sends them to Datadog. Metrics can be counters, gauges, or distributions and optionally grouped by log fields.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline.

include

string

A Datadog search query used to determine which logs this processor targets.

metrics

[object]

Configuration for generating individual metrics.

group_by

[string]

Optional fields used to group the metric series.

include [required]

string

Datadog filter query to match logs for metric generation.

metric_type [required]

enum

Type of metric to create. Allowed enum values: count,gauge,distribution

name [required]

string

Name of the custom metric to be created.

value [required]

 <oneOf>

Specifies how the value of the generated metric is computed.

Option 1

object

Strategy that increments a generated metric by one for each matching event.

strategy [required]

enum

Increments the metric by 1 for each matching event. Allowed enum values: increment_by_one

Option 2

object

Strategy that increments a generated metric based on the value of a log field.

field [required]

string

Name of the log field containing the numeric value to increment the metric by.

strategy [required]

enum

Uses a numeric field in the log event as the metric increment. Allowed enum values: increment_by_field

type [required]

enum

The processor type. Always generate_datadog_metrics. Allowed enum values: generate_datadog_metrics

default: generate_datadog_metrics
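
For example, a generate_datadog_metrics processor that counts error logs per service might be configured as follows (the metric name and queries are placeholders):

```json
{
  "type": "generate_datadog_metrics",
  "id": "generate-metrics-1",
  "enabled": true,
  "include": "*",
  "metrics": [
    {
      "name": "logs.error.count",
      "metric_type": "count",
      "include": "status:error",
      "group_by": ["service"],
      "value": { "strategy": "increment_by_one" }
    }
  ]
}
```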

Option 10

object

The ocsf_mapper processor transforms logs into the OCSF schema using a predefined mapping configuration.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

mappings [required]

[object]

A list of mapping rules to convert events to the OCSF format.

include [required]

string

A Datadog search query used to select the logs that this mapping should apply to.

mapping [required]

 <oneOf>

Defines a single mapping rule for transforming logs into the OCSF schema.

Option 1

enum

Predefined library mappings for common log formats. Allowed enum values: CloudTrail Account Change,GCP Cloud Audit CreateBucket,GCP Cloud Audit CreateSink,GCP Cloud Audit SetIamPolicy,GCP Cloud Audit UpdateSink,Github Audit Log API Activity,Google Workspace Admin Audit addPrivilege,Microsoft 365 Defender Incident,Microsoft 365 Defender UserLoggedIn,Okta System Log Authentication,Palo Alto Networks Firewall Traffic

type [required]

enum

The processor type. The value should always be ocsf_mapper. Allowed enum values: ocsf_mapper

default: ocsf_mapper

Option 11

object

The parse_grok processor extracts structured fields from unstructured log messages using Grok patterns.

Supported pipeline types: logs

disable_library_rules

boolean

If set to true, disables the default Grok rules provided by Datadog.

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

A unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

rules [required]

[object]

The list of Grok parsing rules. If multiple matching rules are provided, they are evaluated in order. The first successful match is applied.

match_rules [required]

[object]

A list of Grok parsing rules that define how to extract fields from the source field. Each rule must contain a name and a valid Grok pattern.

name [required]

string

The name of the rule.

rule [required]

string

The definition of the Grok rule.

source [required]

string

The name of the field in the log event to apply the Grok rules to.

support_rules

[object]

A list of Grok helper rules that can be referenced by the parsing rules.

name [required]

string

The name of the Grok helper rule.

rule [required]

string

The definition of the Grok helper rule.

type [required]

enum

The processor type. The value should always be parse_grok. Allowed enum values: parse_grok

default: parse_grok
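
A sketch of a parse_grok processor with one parsing rule and one helper rule follows; the rule names and Grok patterns are illustrative only:

```json
{
  "type": "parse_grok",
  "id": "parse-grok-1",
  "enabled": true,
  "include": "source:nginx",
  "disable_library_rules": false,
  "rules": [
    {
      "source": "message",
      "match_rules": [
        {
          "name": "access_log",
          "rule": "%{ipOrHost:network.client.ip} %{word:http.method} %{notSpace:http.url}"
        }
      ],
      "support_rules": [
        { "name": "ipOrHost", "rule": "(%{ip}|%{hostname})" }
      ]
    }
  ]
}
```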

Option 12

object

The parse_json processor extracts JSON from a specified field and flattens it into the event. This is useful when logs contain embedded JSON as a string.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

field [required]

string

The name of the log field that contains a JSON string.

id [required]

string

A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be parse_json. Allowed enum values: parse_json

default: parse_json
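
For instance, to expand a JSON string stored in the message field, a parse_json processor could be configured as follows (the ID and field name are illustrative):

```json
{
  "type": "parse_json",
  "id": "parse-json-1",
  "enabled": true,
  "include": "*",
  "field": "message"
}
```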

Option 13

object

The parse_xml processor parses XML from a specified field and extracts it into the event.

Supported pipeline types: logs

always_use_text_key

boolean

Whether to always use a text key for element content.

attr_prefix

string

The prefix to use for XML attributes in the parsed output.

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

field [required]

string

The name of the log field that contains an XML string.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

include_attr

boolean

Whether to include XML attributes in the parsed output.

parse_bool

boolean

Whether to parse boolean values from strings.

parse_null

boolean

Whether to parse null values.

parse_number

boolean

Whether to parse numeric values from strings.

text_key

string

The key name to use for text content within XML elements. Must be at least 1 character if specified.

type [required]

enum

The processor type. The value should always be parse_xml. Allowed enum values: parse_xml

default: parse_xml

Option 14

object

The quota processor measures logging traffic for logs that match a specified filter. When the configured daily quota is met, the processor can drop or alert.

Supported pipeline types: logs

display_name

string

The display name for a component.

drop_events

boolean

If set to true, logs that match the quota filter and are sent after the quota is exceeded are dropped. Logs that do not match the filter continue through the pipeline. Note: You can set either drop_events or overflow_action, but not both.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

ignore_when_missing_partitions

boolean

If true, the processor skips quota checks when partition fields are missing from the logs.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

limit [required]

object

The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.

enforce [required]

enum

Unit for quota enforcement in bytes for data size or events for count. Allowed enum values: bytes,events

limit [required]

int64

The limit for quota enforcement.

name [required]

string

Name of the quota.

overflow_action

enum

The action to take when the quota or bucket limit is exceeded. Options:

  • drop: Drop the event.
  • no_action: Let the event pass through.
  • overflow_routing: Route to an overflow destination.

Allowed enum values: drop,no_action,overflow_routing

overrides

[object]

A list of alternate quota rules that apply to specific sets of events, identified by matching field values. Each override can define a custom limit.

fields [required]

[object]

A list of field matchers used to apply a specific override. If an event matches all listed key-value pairs, the corresponding override limit is enforced.

name [required]

string

The field name.

value [required]

string

The field value.

limit [required]

object

The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.

enforce [required]

enum

Unit for quota enforcement in bytes for data size or events for count. Allowed enum values: bytes,events

limit [required]

int64

The limit for quota enforcement.

partition_fields

[string]

A list of fields used to segment log traffic for quota enforcement. Quotas are tracked independently by unique combinations of these field values.

too_many_buckets_action

enum

The action to take when the quota or bucket limit is exceeded. Options:

  • drop: Drop the event.
  • no_action: Let the event pass through.
  • overflow_routing: Route to an overflow destination.

Allowed enum values: drop,no_action,overflow_routing

type [required]

enum

The processor type. The value should always be quota. Allowed enum values: quota

default: quota
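
As a sketch, the following quota processor enforces a per-service byte limit with a single override; the limits, field values, and IDs are hypothetical:

```json
{
  "type": "quota",
  "id": "quota-1",
  "enabled": true,
  "include": "*",
  "name": "daily-intake-quota",
  "limit": { "enforce": "bytes", "limit": 10000000000 },
  "partition_fields": ["service"],
  "overflow_action": "drop",
  "overrides": [
    {
      "fields": [{ "name": "service", "value": "checkout" }],
      "limit": { "enforce": "bytes", "limit": 50000000000 }
    }
  ]
}
```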

Option 15

object

The reduce processor aggregates and merges logs based on matching keys and merge strategies.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

group_by [required]

[string]

A list of fields used to group log events for merging.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

merge_strategies [required]

[object]

List of merge strategies defining how values from grouped events should be combined.

path [required]

string

The field path in the log event.

strategy [required]

enum

The merge strategy to apply. Allowed enum values: discard,retain,sum,max,min,array,concat,concat_newline,concat_raw,shortest_array,longest_array,flat_unique

type [required]

enum

The processor type. The value should always be reduce. Allowed enum values: reduce

default: reduce
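
For illustration, a reduce processor that merges events sharing a trace ID might look like this (field paths and strategies are placeholders):

```json
{
  "type": "reduce",
  "id": "reduce-1",
  "enabled": true,
  "include": "*",
  "group_by": ["trace_id"],
  "merge_strategies": [
    { "path": "message", "strategy": "concat_newline" },
    { "path": "duration_ms", "strategy": "sum" }
  ]
}
```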

Option 16

object

The remove_fields processor deletes specified fields from logs.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[string]

A list of field names to be removed from each log event.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be remove_fields. Allowed enum values: remove_fields

default: remove_fields

Option 17

object

The rename_fields processor changes field names.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[object]

A list of rename rules specifying which fields to rename in the event, what to rename them to, and whether to preserve the original fields.

destination [required]

string

The field name to assign the renamed value to.

preserve_source [required]

boolean

Indicates whether the original field received from the source should be kept (true) or removed (false) after renaming.

source [required]

string

The original field name in the log event that should be renamed.

id [required]

string

A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be rename_fields. Allowed enum values: rename_fields

default: rename_fields

Option 18

object

The sample processor allows probabilistic sampling of logs at a fixed rate.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

group_by

[string]

Optional list of fields to group events by. Each group is sampled independently.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

percentage [required]

double

The percentage of logs to sample.

type [required]

enum

The processor type. The value should always be sample. Allowed enum values: sample

default: sample

Option 19

object

The sensitive_data_scanner processor detects and optionally redacts sensitive data in log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

rules [required]

[object]

A list of rules for identifying and acting on sensitive data patterns.

keyword_options

object

Configuration for keywords used to reinforce sensitive data pattern detection.

keywords [required]

[string]

A list of keywords to match near the sensitive pattern.

proximity [required]

int64

Maximum number of tokens between a keyword and a sensitive value match.

name [required]

string

A name identifying the rule.

on_match [required]

 <oneOf>

Defines what action to take when sensitive data is matched.

Option 1

object

Configuration for completely redacting matched sensitive data.

action [required]

enum

Action type that completely replaces the matched sensitive data with a fixed replacement string to remove all visibility. Allowed enum values: redact

options [required]

object

Configuration for fully redacting sensitive data.

replace [required]

string

The string that replaces the matched sensitive data.

Option 2

object

Configuration for hashing matched sensitive values.

action [required]

enum

Action type that replaces the matched sensitive data with a hashed representation, preserving structure while securing content. Allowed enum values: hash

options

object

Options for the hash action.

Option 3

object

Configuration for partially redacting matched sensitive data.

action [required]

enum

Action type that redacts part of the sensitive data while preserving a configurable number of characters, typically used for masking purposes (e.g., show last 4 digits of a credit card). Allowed enum values: partial_redact

options [required]

object

Controls how partial redaction is applied, including character count and direction.

characters [required]

int64

The ObservabilityPipelineSensitiveDataScannerProcessorActionPartialRedactOptions characters.

direction [required]

enum

Indicates whether to redact characters from the first or last part of the matched value. Allowed enum values: first,last

pattern [required]

 <oneOf>

Pattern detection configuration for identifying sensitive data using either a custom regex or a library reference.

Option 1

object

Defines a custom regex-based pattern for identifying sensitive data in logs.

options [required]

object

Options for defining a custom regex pattern.

description

string

Human-readable description providing context about a sensitive data scanner rule.

rule [required]

string

A regular expression used to detect sensitive values. Must be a valid regex.

type [required]

enum

Indicates a custom regular expression is used for matching. Allowed enum values: custom

Option 2

object

Specifies a pattern from Datadog’s sensitive data detection library to match known sensitive data types.

options [required]

object

Options for selecting a predefined library pattern and enabling keyword support.

description

string

Human-readable description providing context about a sensitive data scanner rule.

id [required]

string

Identifier for a predefined pattern from the sensitive data scanner pattern library.

use_recommended_keywords

boolean

Whether to augment the pattern with recommended keywords (optional).

type [required]

enum

Indicates that a predefined library pattern is used. Allowed enum values: library

scope [required]

 <oneOf>

Determines which parts of the log the pattern-matching rule should be applied to.

Option 1

object

Includes only specific fields for sensitive data scanning.

options [required]

object

Fields to which the scope rule applies.

fields [required]

[string]

The list of field names the scope rule applies to.

target [required]

enum

Applies the rule only to included fields. Allowed enum values: include

Option 2

object

Excludes specific fields from sensitive data scanning.

options [required]

object

Fields to which the scope rule applies.

fields [required]

[string]

The list of field names the scope rule applies to.

target [required]

enum

Excludes specific fields from processing. Allowed enum values: exclude

Option 3

object

Applies scanning across all available fields.

target [required]

enum

Applies the rule to all fields. Allowed enum values: all

tags [required]

[string]

Tags assigned to this rule for filtering and classification.

type [required]

enum

The processor type. The value should always be sensitive_data_scanner. Allowed enum values: sensitive_data_scanner

default: sensitive_data_scanner
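
A minimal sensitive_data_scanner sketch with one custom-regex rule that fully redacts matches across all fields; the rule name, regex, and replacement string are illustrative:

```json
{
  "type": "sensitive_data_scanner",
  "id": "sds-1",
  "enabled": true,
  "include": "*",
  "rules": [
    {
      "name": "redact-api-keys",
      "tags": ["sensitive:api-key"],
      "pattern": {
        "type": "custom",
        "options": { "rule": "api_key=[A-Za-z0-9]{32}" }
      },
      "scope": { "target": "all" },
      "on_match": {
        "action": "redact",
        "options": { "replace": "[REDACTED]" }
      }
    }
  ]
}
```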

Option 20

object

The split_array processor splits array fields into separate events based on configured rules.

Supported pipeline types: logs

arrays [required]

[object]

A list of array split configurations.

field [required]

string

The path to the array field to split.

include [required]

string

A Datadog search query used to determine which logs this array split operation targets.

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets. For split_array, this should typically be *.

type [required]

enum

The processor type. The value should always be split_array. Allowed enum values: split_array

default: split_array

Option 21

object

The throttle processor limits the number of events that pass through over a given time window.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

group_by

[string]

Optional list of fields used to group events before the threshold is applied.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

threshold [required]

int64

The number of events allowed in a given time window. Events sent after the threshold has been reached are dropped.

type [required]

enum

The processor type. The value should always be throttle. Allowed enum values: throttle

default: throttle

window [required]

double

The time window in seconds over which the threshold applies.
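
For example, a throttle processor allowing 1000 events per service every 60 seconds could be sketched as follows (the values are placeholders):

```json
{
  "type": "throttle",
  "id": "throttle-1",
  "enabled": true,
  "include": "*",
  "threshold": 1000,
  "window": 60.0,
  "group_by": ["service"]
}
```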

Option 22

object

The metric_tags processor filters metrics based on their tags using Datadog tag key patterns.

Supported pipeline types: metrics

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query that determines which metrics the processor targets.

rules [required]

[object]

A list of rules for filtering metric tags.

action [required]

enum

The action to take on tags with matching keys. Allowed enum values: include,exclude

include [required]

string

A Datadog search query used to determine which metrics this rule targets.

keys [required]

[string]

A list of tag keys to include or exclude.

mode [required]

enum

The processing mode for tag filtering. Allowed enum values: filter

type [required]

enum

The processor type. The value should always be metric_tags. Allowed enum values: metric_tags

default: metric_tags
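
A sketch of a metric_tags processor that excludes high-cardinality tag keys; the tag keys and queries are illustrative:

```json
{
  "type": "metric_tags",
  "id": "metric-tags-1",
  "enabled": true,
  "include": "*",
  "rules": [
    {
      "action": "exclude",
      "include": "*",
      "mode": "filter",
      "keys": ["pod_name", "container_id"]
    }
  ]
}
```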

processors

[object]

DEPRECATED: A list of processor groups that transform or enrich log data.

Deprecated: This field is deprecated. Use the processor_groups field instead.

display_name

string

The display name for a component.

enabled [required]

boolean

Whether this processor group is enabled.

id [required]

string

The unique identifier for the processor group.

include [required]

string

Conditional expression for when this processor group should execute.

inputs [required]

[string]

A list of IDs for components whose output is used as the input for this processor group.

processors [required]

[ <oneOf>]

Processors applied sequentially within this group. Events flow through each processor in order.

Option 1

object

The filter processor allows conditional processing of logs/metrics based on a Datadog search query. Logs/metrics that match the include query are passed through; others are discarded.

Supported pipeline types: logs, metrics

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs/metrics should pass through the filter. Logs/metrics that match this query continue to downstream components; others are dropped.

type [required]

enum

The processor type. The value should always be filter. Allowed enum values: filter

default: filter
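
For illustration, a filter processor that keeps only error and warning logs might look like this (the ID and query are placeholders):

```json
{
  "type": "filter",
  "id": "filter-1",
  "enabled": true,
  "include": "status:error OR status:warn"
}
```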

Option 2

object

The add_env_vars processor adds environment variable values to log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used to reference this processor in the pipeline.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be add_env_vars. Allowed enum values: add_env_vars

default: add_env_vars

variables [required]

[object]

A list of environment variable mappings to apply to log fields.

field [required]

string

The target field in the log event.

name [required]

string

The name of the environment variable to read.
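
As a sketch, an add_env_vars processor that copies an environment variable into each log could be configured as follows (the variable name and target field are hypothetical):

```json
{
  "type": "add_env_vars",
  "id": "add-env-vars-1",
  "enabled": true,
  "include": "*",
  "variables": [
    { "field": "deployment.region", "name": "AWS_REGION" }
  ]
}
```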

Option 3

object

The add_fields processor adds static key-value fields to logs.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[object]

A list of static fields (key-value pairs) that is added to each log event processed by this component.

name [required]

string

The field name.

value [required]

string

The field value.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be add_fields. Allowed enum values: add_fields

default: add_fields
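
For example, an add_fields processor that stamps every log with a static field might be sketched as follows (names and values are illustrative):

```json
{
  "type": "add_fields",
  "id": "add-fields-1",
  "enabled": true,
  "include": "*",
  "fields": [
    { "name": "pipeline", "value": "observability-pipelines" }
  ]
}
```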

Option 4

object

The add_hostname processor adds the hostname to log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be add_hostname. Allowed enum values: add_hostname

default: add_hostname

Option 5

object

The custom_processor processor transforms events using Vector Remap Language (VRL) scripts with advanced filtering capabilities.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets. This field should always be set to * for the custom_processor processor.

default: *

remaps [required]

[object]

Array of VRL remap rules.

drop_on_error [required]

boolean

Whether to drop events that caused errors during processing.

enabled

boolean

Whether this remap rule is enabled.

include [required]

string

A Datadog search query used to filter events for this specific remap rule.

name [required]

string

A descriptive name for this remap rule.

source [required]

string

The VRL script source code that defines the processing logic.

type [required]

enum

The processor type. The value should always be custom_processor. Allowed enum values: custom_processor

default: custom_processor

Option 6

object

The datadog_tags processor includes or excludes specific Datadog tags in your logs.

Supported pipeline types: logs

action [required]

enum

The action to take on tags with matching keys. Allowed enum values: include,exclude

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

keys [required]

[string]

A list of tag keys.

mode [required]

enum

The processing mode. Allowed enum values: filter

type [required]

enum

The processor type. The value should always be datadog_tags. Allowed enum values: datadog_tags

default: datadog_tags

Option 7

object

The dedupe processor removes duplicate fields in log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[string]

A list of log field paths to check for duplicates.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

mode [required]

enum

The deduplication mode to apply to the fields. Allowed enum values: match,ignore

type [required]

enum

The processor type. The value should always be dedupe. Allowed enum values: dedupe

default: dedupe

Option 8

object

The enrichment_table processor enriches logs using a static CSV file, GeoIP database, or reference table. Exactly one of file, geoip, or reference_table must be configured.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

file

object

Defines a static enrichment table loaded from a CSV file.

encoding [required]

object

File encoding format.

delimiter [required]

string

The character used to separate values in the CSV file.

includes_headers [required]

boolean

Whether the first row of the CSV file contains column headers.

type [required]

enum

Specifies the encoding format (e.g., CSV) used for enrichment tables. Allowed enum values: csv

key [required]

[object]

Key fields used to look up enrichment values.

column [required]

string

The name of the enrichment table column to match against.

comparison [required]

enum

Defines how to compare key fields for enrichment table lookups. Allowed enum values: equals

field [required]

string

The path of the log field whose value is used for the lookup.

path [required]

string

Path to the CSV file.

schema [required]

[object]

Schema defining column names and their types.

column [required]

string

The name of the column in the enrichment table.

type [required]

enum

Declares allowed data types for enrichment table columns. Allowed enum values: string,boolean,integer,float,date,timestamp

geoip

object

Uses a GeoIP database to enrich logs based on an IP field.

key_field [required]

string

Path to the IP field in the log.

locale [required]

string

Locale used to resolve geographical names.

path [required]

string

Path to the GeoIP database file.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

reference_table

object

Uses a Datadog reference table to enrich logs.

columns

[string]

List of column names to include from the reference table. If not provided, all columns are included.

key_field [required]

string

Path to the field in the log event to match against the reference table.

table_id [required]

string

The unique identifier of the reference table.

target [required]

string

Path where enrichment results should be stored in the log.

type [required]

enum

The processor type. The value should always be enrichment_table. Allowed enum values: enrichment_table

default: enrichment_table

Option 9

object

The generate_datadog_metrics processor creates custom metrics from logs and sends them to Datadog. Metrics can be counters, gauges, or distributions and optionally grouped by log fields.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline.

include

string

A Datadog search query used to determine which logs this processor targets.

metrics

[object]

Configuration for generating individual metrics.

group_by

[string]

Optional fields used to group the metric series.

include [required]

string

Datadog filter query to match logs for metric generation.

metric_type [required]

enum

Type of metric to create. Allowed enum values: count,gauge,distribution

name [required]

string

Name of the custom metric to be created.

value [required]

 <oneOf>

Specifies how the value of the generated metric is computed.

Option 1

object

Strategy that increments a generated metric by one for each matching event.

strategy [required]

enum

Increments the metric by 1 for each matching event. Allowed enum values: increment_by_one

Option 2

object

Strategy that increments a generated metric based on the value of a log field.

field [required]

string

Name of the log field containing the numeric value to increment the metric by.

strategy [required]

enum

Uses a numeric field in the log event as the metric increment. Allowed enum values: increment_by_field

type [required]

enum

The processor type. Always generate_datadog_metrics. Allowed enum values: generate_datadog_metrics

default: generate_datadog_metrics

Option 10

object

The ocsf_mapper processor transforms logs into the OCSF schema using a predefined mapping configuration.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

mappings [required]

[object]

A list of mapping rules to convert events to the OCSF format.

include [required]

string

A Datadog search query used to select the logs that this mapping should apply to.

mapping [required]

 <oneOf>

Defines a single mapping rule for transforming logs into the OCSF schema.

Option 1

enum

Predefined library mappings for common log formats. Allowed enum values: CloudTrail Account Change,GCP Cloud Audit CreateBucket,GCP Cloud Audit CreateSink,GCP Cloud Audit SetIamPolicy,GCP Cloud Audit UpdateSink,Github Audit Log API Activity,Google Workspace Admin Audit addPrivilege,Microsoft 365 Defender Incident,Microsoft 365 Defender UserLoggedIn,Okta System Log Authentication,Palo Alto Networks Firewall Traffic

type [required]

enum

The processor type. The value should always be ocsf_mapper. Allowed enum values: ocsf_mapper

default: ocsf_mapper

Option 11

object

The parse_grok processor extracts structured fields from unstructured log messages using Grok patterns.

Supported pipeline types: logs

disable_library_rules

boolean

If set to true, disables the default Grok rules provided by Datadog.

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

A unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

rules [required]

[object]

The list of Grok parsing rules. If multiple matching rules are provided, they are evaluated in order. The first successful match is applied.

match_rules [required]

[object]

A list of Grok parsing rules that define how to extract fields from the source field. Each rule must contain a name and a valid Grok pattern.

name [required]

string

The name of the rule.

rule [required]

string

The definition of the Grok rule.

source [required]

string

The name of the field in the log event to apply the Grok rules to.

support_rules

[object]

A list of Grok helper rules that can be referenced by the parsing rules.

name [required]

string

The name of the Grok helper rule.

rule [required]

string

The definition of the Grok helper rule.

type [required]

enum

The processor type. The value should always be parse_grok. Allowed enum values: parse_grok

default: parse_grok

Option 12

object

The parse_json processor extracts JSON from a specified field and flattens it into the event. This is useful when logs contain embedded JSON as a string.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

field [required]

string

The name of the log field that contains a JSON string.

id [required]

string

A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be parse_json. Allowed enum values: parse_json

default: parse_json

Option 13

object

The parse_xml processor parses XML from a specified field and extracts it into the event.

Supported pipeline types: logs

always_use_text_key

boolean

Whether to always use a text key for element content.

attr_prefix

string

The prefix to use for XML attributes in the parsed output.

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

field [required]

string

The name of the log field that contains an XML string.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

include_attr

boolean

Whether to include XML attributes in the parsed output.

parse_bool

boolean

Whether to parse boolean values from strings.

parse_null

boolean

Whether to parse null values.

parse_number

boolean

Whether to parse numeric values from strings.

text_key

string

The key name to use for text content within XML elements. Must be at least 1 character if specified.

type [required]

enum

The processor type. The value should always be parse_xml. Allowed enum values: parse_xml

default: parse_xml

Option 14

object

The quota processor measures logging traffic for logs that match a specified filter. When the configured daily quota is met, the processor can drop or alert.

Supported pipeline types: logs

display_name

string

The display name for a component.

drop_events

boolean

If set to true, logs that match the quota filter and are sent after the quota is exceeded are dropped. Logs that do not match the filter continue through the pipeline. Note: You can set either drop_events or overflow_action, but not both.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

ignore_when_missing_partitions

boolean

If true, the processor skips quota checks when partition fields are missing from the logs.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

limit [required]

object

The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.

enforce [required]

enum

Unit for quota enforcement in bytes for data size or events for count. Allowed enum values: bytes,events

limit [required]

int64

The limit for quota enforcement.

name [required]

string

Name of the quota.

overflow_action

enum

The action to take when the quota or bucket limit is exceeded. Options:

  • drop: Drop the event.
  • no_action: Let the event pass through.
  • overflow_routing: Route to an overflow destination.

Allowed enum values: drop,no_action,overflow_routing

overrides

[object]

A list of alternate quota rules that apply to specific sets of events, identified by matching field values. Each override can define a custom limit.

fields [required]

[object]

A list of field matchers used to apply a specific override. If an event matches all listed key-value pairs, the corresponding override limit is enforced.

name [required]

string

The field name.

value [required]

string

The field value.

limit [required]

object

The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.

enforce [required]

enum

Unit for quota enforcement in bytes for data size or events for count. Allowed enum values: bytes,events

limit [required]

int64

The limit for quota enforcement.

partition_fields

[string]

A list of fields used to segment log traffic for quota enforcement. Quotas are tracked independently by unique combinations of these field values.

too_many_buckets_action

enum

The action to take when the quota or bucket limit is exceeded. Options:

  • drop: Drop the event.
  • no_action: Let the event pass through.
  • overflow_routing: Route to an overflow destination.

Allowed enum values: drop,no_action,overflow_routing

type [required]

enum

The processor type. The value should always be quota. Allowed enum values: quota

default: quota

Option 15

object

The reduce processor aggregates and merges logs based on matching keys and merge strategies.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

group_by [required]

[string]

A list of fields used to group log events for merging.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

merge_strategies [required]

[object]

List of merge strategies defining how values from grouped events should be combined.

path [required]

string

The field path in the log event.

strategy [required]

enum

The merge strategy to apply. Allowed enum values: discard,retain,sum,max,min,array,concat,concat_newline,concat_raw,shortest_array,longest_array,flat_unique

type [required]

enum

The processor type. The value should always be reduce. Allowed enum values: reduce

default: reduce

Option 16

object

The remove_fields processor deletes specified fields from logs.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[string]

A list of field names to be removed from each log event.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be remove_fields. Allowed enum values: remove_fields

default: remove_fields

Option 17

object

The rename_fields processor changes field names.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[object]

A list of rename rules specifying which fields to rename in the event, what to rename them to, and whether to preserve the original fields.

destination [required]

string

The field name to assign the renamed value to.

preserve_source [required]

boolean

Indicates whether the original field received from the source should be kept (true) or removed (false) after renaming.

source [required]

string

The original field name in the log event that should be renamed.

id [required]

string

A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be rename_fields. Allowed enum values: rename_fields

default: rename_fields

Option 18

object

The sample processor allows probabilistic sampling of logs at a fixed rate.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

group_by

[string]

Optional list of fields to group events by. Each group is sampled independently.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

percentage [required]

double

The percentage of logs to sample.

type [required]

enum

The processor type. The value should always be sample. Allowed enum values: sample

default: sample

Option 19

object

The sensitive_data_scanner processor detects and optionally redacts sensitive data in log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

rules [required]

[object]

A list of rules for identifying and acting on sensitive data patterns.

keyword_options

object

Configuration for keywords used to reinforce sensitive data pattern detection.

keywords [required]

[string]

A list of keywords to match near the sensitive pattern.

proximity [required]

int64

Maximum number of tokens between a keyword and a sensitive value match.

name [required]

string

A name identifying the rule.

on_match [required]

 <oneOf>

Defines what action to take when sensitive data is matched.

Option 1

object

Configuration for completely redacting matched sensitive data.

action [required]

enum

Action type that completely replaces the matched sensitive data with a fixed replacement string to remove all visibility. Allowed enum values: redact

options [required]

object

Configuration for fully redacting sensitive data.

replace [required]

string

The string that replaces the matched sensitive data.

Option 2

object

Configuration for hashing matched sensitive values.

action [required]

enum

Action type that replaces the matched sensitive data with a hashed representation, preserving structure while securing content. Allowed enum values: hash

options

object

Options for the hash action.

Option 3

object

Configuration for partially redacting matched sensitive data.

action [required]

enum

Action type that redacts part of the sensitive data while preserving a configurable number of characters, typically used for masking purposes (e.g., show last 4 digits of a credit card). Allowed enum values: partial_redact

options [required]

object

Controls how partial redaction is applied, including character count and direction.

characters [required]

int64

The ObservabilityPipelineSensitiveDataScannerProcessorActionPartialRedactOptions characters.

direction [required]

enum

Indicates whether to redact characters from the first or last part of the matched value. Allowed enum values: first,last

pattern [required]

 <oneOf>

Pattern detection configuration for identifying sensitive data using either a custom regex or a library reference.

Option 1

object

Defines a custom regex-based pattern for identifying sensitive data in logs.

options [required]

object

Options for defining a custom regex pattern.

description

string

Human-readable description providing context about a sensitive data scanner rule.

rule [required]

string

A regular expression used to detect sensitive values. Must be a valid regex.

type [required]

enum

Indicates a custom regular expression is used for matching. Allowed enum values: custom

Option 2

object

Specifies a pattern from Datadog’s sensitive data detection library to match known sensitive data types.

options [required]

object

Options for selecting a predefined library pattern and enabling keyword support.

description

string

Human-readable description providing context about a sensitive data scanner rule.

id [required]

string

Identifier for a predefined pattern from the sensitive data scanner pattern library.

use_recommended_keywords

boolean

Whether to augment the pattern with recommended keywords (optional).

type [required]

enum

Indicates that a predefined library pattern is used. Allowed enum values: library

scope [required]

 <oneOf>

Determines which parts of the log the pattern-matching rule should be applied to.

Option 1

object

Includes only specific fields for sensitive data scanning.

options [required]

object

Fields to which the scope rule applies.

fields [required]

[string]

The list of field names the scope rule applies to.

target [required]

enum

Applies the rule only to included fields. Allowed enum values: include

Option 2

object

Excludes specific fields from sensitive data scanning.

options [required]

object

Fields to which the scope rule applies.

fields [required]

[string]

The list of field names the scope rule applies to.

target [required]

enum

Excludes specific fields from processing. Allowed enum values: exclude

Option 3

object

Applies scanning across all available fields.

target [required]

enum

Applies the rule to all fields. Allowed enum values: all

tags [required]

[string]

Tags assigned to this rule for filtering and classification.

type [required]

enum

The processor type. The value should always be sensitive_data_scanner. Allowed enum values: sensitive_data_scanner

default: sensitive_data_scanner

Option 20

object

The split_array processor splits array fields into separate events based on configured rules.

Supported pipeline types: logs

arrays [required]

[object]

A list of array split configurations.

field [required]

string

The path to the array field to split.

include [required]

string

A Datadog search query used to determine which logs this array split operation targets.

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets. For split_array, this should typically be *.

type [required]

enum

The processor type. The value should always be split_array. Allowed enum values: split_array

default: split_array

Option 21

object

The throttle processor limits the number of events that pass through over a given time window.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

group_by

[string]

Optional list of fields used to group events before the threshold is applied.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

threshold [required]

int64

The number of events allowed in a given time window. Events sent after the threshold has been reached are dropped.

type [required]

enum

The processor type. The value should always be throttle. Allowed enum values: throttle

default: throttle

window [required]

double

The time window in seconds over which the threshold applies.

Option 22

object

The metric_tags processor filters metrics based on their tags using Datadog tag key patterns.

Supported pipeline types: metrics

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query that determines which metrics the processor targets.

rules [required]

[object]

A list of rules for filtering metric tags.

action [required]

enum

The action to take on tags with matching keys. Allowed enum values: include,exclude

include [required]

string

A Datadog search query used to determine which metrics this rule targets.

keys [required]

[string]

A list of tag keys to include or exclude.

mode [required]

enum

The processing mode for tag filtering. Allowed enum values: filter

type [required]

enum

The processor type. The value should always be metric_tags. Allowed enum values: metric_tags

default: metric_tags

sources [required]

[ <oneOf>]

A list of configured data sources for the pipeline.

Option 1

object

The datadog_agent source collects logs/metrics from the Datadog Agent.

Supported pipeline types: logs, metrics

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be datadog_agent. Allowed enum values: datadog_agent

default: datadog_agent

Option 2

object

The amazon_data_firehose source ingests logs from AWS Data Firehose.

Supported pipeline types: logs

auth

object

AWS authentication credentials used for accessing AWS services such as S3. If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).

assume_role

string

The Amazon Resource Name (ARN) of the role to assume.

external_id

string

A unique identifier for cross-account role assumption.

session_name

string

A session identifier used for logging and tracing the assumed role session.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be amazon_data_firehose. Allowed enum values: amazon_data_firehose

default: amazon_data_firehose

Option 3

object

The amazon_s3 source ingests logs from an Amazon S3 bucket. It supports AWS authentication and TLS encryption.

Supported pipeline types: logs

auth

object

AWS authentication credentials used for accessing AWS services such as S3. If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).

assume_role

string

The Amazon Resource Name (ARN) of the role to assume.

external_id

string

A unique identifier for cross-account role assumption.

session_name

string

A session identifier used for logging and tracing the assumed role session.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

region [required]

string

AWS region where the S3 bucket resides.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. Always amazon_s3. Allowed enum values: amazon_s3

default: amazon_s3
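
As an illustration, an amazon_s3 source entry in the sources array could look like the following; the ID, role ARN, external ID, session name, and region are placeholder values.

          {
            "id": "s3-logs-source",
            "type": "amazon_s3",
            "region": "us-east-1",
            "auth": {
              "assume_role": "arn:aws:iam::123456789012:role/pipeline-read",
              "external_id": "my-external-id",
              "session_name": "obs-pipelines-session"
            }
          }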

Option 4

object

The fluent_bit source ingests logs from Fluent Bit.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be fluent_bit. Allowed enum values: fluent_bit

default: fluent_bit

Option 5

object

The fluentd source ingests logs from a Fluentd-compatible service.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be fluentd. Allowed enum values: fluentd

default: fluentd

Option 6

object

The google_pubsub source ingests logs from a Google Cloud Pub/Sub subscription.

Supported pipeline types: logs

auth

object

GCP credentials used to authenticate with Google Cloud Storage.

credentials_file [required]

string

Path to the GCP service account key file.

decoding [required]

enum

The decoding format used to interpret incoming logs. Allowed enum values: bytes,gelf,json,syslog

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

project [required]

string

The GCP project ID that owns the Pub/Sub subscription.

subscription [required]

string

The Pub/Sub subscription name from which messages are consumed.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be google_pubsub. Allowed enum values: google_pubsub

default: google_pubsub
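
For reference, a google_pubsub source entry using the fields above might be written as follows; the project, subscription, and credentials path are illustrative only.

          {
            "id": "pubsub-source",
            "type": "google_pubsub",
            "project": "my-gcp-project",
            "subscription": "logs-subscription",
            "decoding": "json",
            "auth": {
              "credentials_file": "/var/secrets/gcp-service-account.json"
            }
          }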

Option 7

object

The http_client source scrapes logs from HTTP endpoints at regular intervals.

Supported pipeline types: logs

auth_strategy

enum

Optional authentication strategy for HTTP requests. Allowed enum values: none,basic,bearer,custom

decoding [required]

enum

The decoding format used to interpret incoming logs. Allowed enum values: bytes,gelf,json,syslog

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

scrape_interval_secs

int64

The interval (in seconds) between HTTP scrape requests.

scrape_timeout_secs

int64

The timeout (in seconds) for each scrape request.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be http_client. Allowed enum values: http_client

default: http_client

Option 8

object

The http_server source collects logs over HTTP POST from external services.

Supported pipeline types: logs

auth_strategy [required]

enum

HTTP authentication method. Allowed enum values: none,plain

decoding [required]

enum

The decoding format used to interpret incoming logs. Allowed enum values: bytes,gelf,json,syslog

id [required]

string

Unique ID for the HTTP server source.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be http_server. Allowed enum values: http_server

default: http_server

Option 9

object

The kafka source ingests data from Apache Kafka topics.

Supported pipeline types: logs

group_id [required]

string

Consumer group ID used by the Kafka client.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

librdkafka_options

[object]

Optional list of advanced Kafka client configuration options, defined as key-value pairs.

name [required]

string

The name of the librdkafka configuration option to set.

value [required]

string

The value assigned to the specified librdkafka configuration option.

sasl

object

Specifies the SASL mechanism for authenticating with a Kafka cluster.

mechanism

enum

SASL mechanism used for Kafka authentication. Allowed enum values: PLAIN,SCRAM-SHA-256,SCRAM-SHA-512

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

topics [required]

[string]

A list of Kafka topic names to subscribe to. The source ingests messages from each topic specified.

type [required]

enum

The source type. The value should always be kafka. Allowed enum values: kafka

default: kafka
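
A sketch of a kafka source entry is shown below; the group ID, topic names, and librdkafka option values are placeholders chosen for illustration.

          {
            "id": "kafka-source",
            "type": "kafka",
            "group_id": "observability-pipelines",
            "topics": ["app-logs", "audit-logs"],
            "sasl": { "mechanism": "SCRAM-SHA-256" },
            "librdkafka_options": [
              { "name": "fetch.message.max.bytes", "value": "1048576" }
            ]
          }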

Option 10

object

The logstash source ingests logs from a Logstash forwarder.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be logstash. Allowed enum values: logstash

default: logstash

Option 11

object

The rsyslog source listens for logs over TCP or UDP from an rsyslog server using the syslog protocol.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

mode [required]

enum

Protocol used by the syslog source to receive messages. Allowed enum values: tcp,udp

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be rsyslog. Allowed enum values: rsyslog

default: rsyslog

Option 12

object

The socket source ingests logs over TCP or UDP.

Supported pipeline types: logs

framing [required]

 <oneOf>

Framing method configuration for the socket source.

Option 1

object

Byte frames which are delimited by a newline character.

method [required]

enum

Byte frames which are delimited by a newline character. Allowed enum values: newline_delimited

Option 2

object

Byte frames are passed through as-is according to the underlying I/O boundaries (for example, split between messages or stream segments).

method [required]

enum

Byte frames are passed through as-is according to the underlying I/O boundaries (for example, split between messages or stream segments). Allowed enum values: bytes

Option 3

object

Byte frames which are delimited by a chosen character.

delimiter [required]

string

A single ASCII character used to delimit events.

method [required]

enum

Byte frames which are delimited by a chosen character. Allowed enum values: character_delimited

Option 4

object

Byte frames according to the octet counting format as per RFC6587.

method [required]

enum

Byte frames according to the octet counting format as per RFC6587. Allowed enum values: octet_counting

Option 5

object

Byte frames which are chunked GELF messages.

method [required]

enum

Byte frames which are chunked GELF messages. Allowed enum values: chunked_gelf

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

mode [required]

enum

Protocol used to receive logs. Allowed enum values: tcp,udp

tls

object

TLS configuration. Relevant only when mode is tcp.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be socket. Allowed enum values: socket

default: socket
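
For example, a minimal socket source entry listening over TCP with newline-delimited framing might look like the following; the component ID is a placeholder.

          {
            "id": "socket-source",
            "type": "socket",
            "mode": "tcp",
            "framing": { "method": "newline_delimited" }
          }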

Option 13

object

The splunk_hec source implements the Splunk HTTP Event Collector (HEC) API.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. Always splunk_hec. Allowed enum values: splunk_hec

default: splunk_hec

Option 14

object

The splunk_tcp source receives logs from a Splunk Universal Forwarder over TCP. TLS is supported for secure transmission.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. Always splunk_tcp. Allowed enum values: splunk_tcp

default: splunk_tcp

Option 15

object

The sumo_logic source receives logs from Sumo Logic collectors.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

type [required]

enum

The source type. The value should always be sumo_logic. Allowed enum values: sumo_logic

default: sumo_logic

Option 16

object

The syslog_ng source listens for logs over TCP or UDP from a syslog-ng server using the syslog protocol.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

mode [required]

enum

Protocol used by the syslog source to receive messages. Allowed enum values: tcp,udp

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be syslog_ng. Allowed enum values: syslog_ng

default: syslog_ng

Option 17

object

The opentelemetry source receives telemetry data using the OpenTelemetry Protocol (OTLP) over gRPC and HTTP.

Supported pipeline types: logs

grpc_address_key

string

Environment variable name containing the gRPC server address for receiving OTLP data. Must be a valid environment variable name (alphanumeric characters and underscores only).

http_address_key

string

Environment variable name containing the HTTP server address for receiving OTLP data. Must be a valid environment variable name (alphanumeric characters and underscores only).

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be opentelemetry. Allowed enum values: opentelemetry

default: opentelemetry

name [required]

string

Name of the pipeline.

id [required]

string

Unique identifier for the pipeline.

type [required]

string

The resource type identifier. For pipeline resources, this should always be set to pipelines.

default: pipelines

{
  "data": {
    "attributes": {
      "config": {
        "destinations": [
          {
            "id": "updated-datadog-logs-destination-id",
            "inputs": [
              "my-processor-group"
            ],
            "type": "datadog_logs"
          }
        ],
        "processor_groups": [
          {
            "enabled": true,
            "id": "my-processor-group",
            "include": "service:my-service",
            "inputs": [
              "datadog-agent-source"
            ],
            "processors": [
              {
                "enabled": true,
                "id": "filter-processor",
                "include": "status:error",
                "type": "filter"
              }
            ]
          }
        ],
        "sources": [
          {
            "id": "datadog-agent-source",
            "type": "datadog_agent"
          }
        ]
      },
      "name": "Updated Pipeline Name"
    },
    "id": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
    "type": "pipelines"
  }
}

Response

OK

Top-level schema representing a pipeline.

Field

Type

Description

data [required]

object

Contains the pipeline’s ID, type, and configuration attributes.

attributes [required]

object

Defines the pipeline’s name and its components (sources, processors, and destinations).

config [required]

object

Specifies the pipeline's configuration, including its sources, processors, and destinations.

destinations [required]

[ <oneOf>]

A list of destination components where processed logs are sent.

Option 1

object

The http_client destination sends data to an HTTP endpoint.

Supported pipeline types: logs, metrics

auth_strategy

enum

HTTP authentication strategy. Allowed enum values: none,basic,bearer

compression

object

Compression configuration for HTTP requests.

algorithm [required]

enum

Compression algorithm. Allowed enum values: gzip

encoding [required]

enum

Encoding format for log events. Allowed enum values: json

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. The value should always be http_client. Allowed enum values: http_client

default: http_client

Option 2

object

The amazon_opensearch destination writes logs to Amazon OpenSearch.

Supported pipeline types: logs

auth [required]

object

Authentication settings for the Amazon OpenSearch destination. The strategy field determines whether basic or AWS-based authentication is used.

assume_role

string

The ARN of the role to assume (used with aws strategy).

aws_region

string

AWS region.

external_id

string

External ID for the assumed role (used with aws strategy).

session_name

string

Session name for the assumed role (used with aws strategy).

strategy [required]

enum

The authentication strategy to use. Allowed enum values: basic,aws

bulk_index

string

The index to write logs to.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be amazon_opensearch. Allowed enum values: amazon_opensearch

default: amazon_opensearch

Option 3

object

The amazon_s3 destination sends your logs in Datadog-rehydratable format to an Amazon S3 bucket for archiving.

Supported pipeline types: logs

auth

object

AWS authentication credentials used for accessing AWS services such as S3. If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).

assume_role

string

The Amazon Resource Name (ARN) of the role to assume.

external_id

string

A unique identifier for cross-account role assumption.

session_name

string

A session identifier used for logging and tracing the assumed role session.

bucket [required]

string

S3 bucket name.

id [required]

string

Unique identifier for the destination component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

key_prefix

string

Optional prefix for object keys.

region [required]

string

AWS region of the S3 bucket.

storage_class [required]

enum

S3 storage class. Allowed enum values: STANDARD,REDUCED_REDUNDANCY,INTELLIGENT_TIERING,STANDARD_IA,EXPRESS_ONEZONE,ONEZONE_IA,GLACIER,GLACIER_IR,DEEP_ARCHIVE

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. Always amazon_s3. Allowed enum values: amazon_s3

default: amazon_s3
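
As a sketch, an amazon_s3 destination entry in the destinations array could be configured as follows; the bucket name, prefix, region, and input ID are illustrative.

          {
            "id": "s3-archive-destination",
            "type": "amazon_s3",
            "inputs": ["my-processor-group"],
            "bucket": "my-log-archive",
            "region": "us-east-1",
            "key_prefix": "archives/",
            "storage_class": "STANDARD"
          }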

Option 4

object

The amazon_security_lake destination sends your logs to Amazon Security Lake.

Supported pipeline types: logs

auth

object

AWS authentication credentials used for accessing AWS services such as S3. If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).

assume_role

string

The Amazon Resource Name (ARN) of the role to assume.

external_id

string

A unique identifier for cross-account role assumption.

session_name

string

A session identifier used for logging and tracing the assumed role session.

bucket [required]

string

Name of the Amazon S3 bucket in Security Lake (3-63 characters).

custom_source_name [required]

string

Custom source name for the logs in Security Lake.

id [required]

string

Unique identifier for the destination component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

region [required]

string

AWS region of the S3 bucket.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. Always amazon_security_lake. Allowed enum values: amazon_security_lake

default: amazon_security_lake

Option 5

object

The azure_storage destination forwards logs to an Azure Blob Storage container.

Supported pipeline types: logs

blob_prefix

string

Optional prefix for blobs written to the container.

container_name [required]

string

The name of the Azure Blob Storage container to store logs in.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be azure_storage. Allowed enum values: azure_storage

default: azure_storage

Option 6

object

The cloud_prem destination sends logs to Datadog CloudPrem.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be cloud_prem. Allowed enum values: cloud_prem

default: cloud_prem

Option 7

object

The crowdstrike_next_gen_siem destination forwards logs to CrowdStrike Next Gen SIEM.

Supported pipeline types: logs

compression

object

Compression configuration for log events.

algorithm [required]

enum

Compression algorithm for log events. Allowed enum values: gzip,zlib

level

int64

Compression level.

encoding [required]

enum

Encoding format for log events. Allowed enum values: json,raw_message

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. The value should always be crowdstrike_next_gen_siem. Allowed enum values: crowdstrike_next_gen_siem

default: crowdstrike_next_gen_siem

Option 8

object

The datadog_logs destination forwards logs to Datadog Log Management.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

routes

[object]

A list of routing rules that forward matching logs to Datadog using dedicated API keys.

api_key_key

string

Name of the environment variable or secret that stores the Datadog API key used by this route.

include

string

A Datadog search query that determines which logs are forwarded using this route.

route_id

string

Unique identifier for this route within the destination.

site

string

Datadog site where matching logs are sent (for example, us1).

type [required]

enum

The destination type. The value should always be datadog_logs. Allowed enum values: datadog_logs

default: datadog_logs
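
For illustration, a datadog_logs destination with a single routing rule might look like the following; the route ID, query, environment variable name, and site are placeholder values.

          {
            "id": "datadog-logs-destination",
            "type": "datadog_logs",
            "inputs": ["my-processor-group"],
            "routes": [
              {
                "route_id": "errors-to-secondary-org",
                "include": "status:error",
                "api_key_key": "SECONDARY_ORG_API_KEY",
                "site": "us1"
              }
            ]
          }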

Option 9

object

The elasticsearch destination writes logs to an Elasticsearch cluster.

Supported pipeline types: logs

api_version

enum

The Elasticsearch API version to use. Set to auto to auto-detect. Allowed enum values: auto,v6,v7,v8

bulk_index

string

The index to write logs to in Elasticsearch.

data_stream

object

Configuration options for writing to Elasticsearch Data Streams instead of a fixed index.

dataset

string

The data stream dataset for your logs. This groups logs by their source or application.

dtype

string

The data stream type for your logs. This determines how logs are categorized within the data stream.

namespace

string

The data stream namespace for your logs. This separates logs into different environments or domains.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be elasticsearch. Allowed enum values: elasticsearch

default: elasticsearch
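
A minimal elasticsearch destination entry writing to a data stream could be expressed as shown below; the dataset and namespace values are illustrative.

          {
            "id": "elasticsearch-destination",
            "type": "elasticsearch",
            "inputs": ["my-processor-group"],
            "api_version": "v8",
            "data_stream": {
              "dtype": "logs",
              "dataset": "my-service",
              "namespace": "production"
            }
          }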

Option 10

object

The google_chronicle destination sends logs to Google Chronicle.

Supported pipeline types: logs

auth

object

GCP credentials used to authenticate with Google Cloud Storage.

credentials_file [required]

string

Path to the GCP service account key file.

customer_id [required]

string

The Google Chronicle customer ID.

encoding

enum

The encoding format for the logs sent to Chronicle. Allowed enum values: json,raw_message

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

log_type

string

The log type metadata associated with the Chronicle destination.

type [required]

enum

The destination type. The value should always be google_chronicle. Allowed enum values: google_chronicle

default: google_chronicle

Option 11

object

The google_cloud_storage destination stores logs in a Google Cloud Storage (GCS) bucket. It requires a bucket name, GCP authentication, and metadata fields.

Supported pipeline types: logs

acl

enum

Access control list setting for objects written to the bucket. Allowed enum values: private,project-private,public-read,authenticated-read,bucket-owner-read,bucket-owner-full-control

auth

object

GCP credentials used to authenticate with Google Cloud Storage.

credentials_file [required]

string

Path to the GCP service account key file.

bucket [required]

string

Name of the GCS bucket.

id [required]

string

Unique identifier for the destination component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

key_prefix

string

Optional prefix for object keys within the GCS bucket.

metadata

[object]

Custom metadata to attach to each object uploaded to the GCS bucket.

name [required]

string

The metadata key.

value [required]

string

The metadata value.

storage_class [required]

enum

Storage class used for objects stored in GCS. Allowed enum values: STANDARD,NEARLINE,COLDLINE,ARCHIVE

type [required]

enum

The destination type. Always google_cloud_storage. Allowed enum values: google_cloud_storage

default: google_cloud_storage

Option 12

object

The google_pubsub destination publishes logs to a Google Cloud Pub/Sub topic.

Supported pipeline types: logs

auth

object

GCP credentials used to authenticate with Google Cloud Storage.

credentials_file [required]

string

Path to the GCP service account key file.

encoding [required]

enum

Encoding format for log events. Allowed enum values: json,raw_message

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

project [required]

string

The GCP project ID that owns the Pub/Sub topic.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

topic [required]

string

The Pub/Sub topic name to publish logs to.

type [required]

enum

The destination type. The value should always be google_pubsub. Allowed enum values: google_pubsub

default: google_pubsub

Option 13

object

The kafka destination sends logs to Apache Kafka topics.

Supported pipeline types: logs

compression

enum

Compression codec for Kafka messages. Allowed enum values: none,gzip,snappy,lz4,zstd

encoding [required]

enum

Encoding format for log events. Allowed enum values: json,raw_message

headers_key

string

The field name to use for Kafka message headers.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

key_field

string

The field name to use as the Kafka message key.

librdkafka_options

[object]

Optional list of advanced Kafka producer configuration options, defined as key-value pairs.

name [required]

string

The name of the librdkafka configuration option to set.

value [required]

string

The value assigned to the specified librdkafka configuration option.

message_timeout_ms

int64

Maximum time in milliseconds to wait for message delivery confirmation.

rate_limit_duration_secs

int64

Duration in seconds for the rate limit window.

rate_limit_num

int64

Maximum number of messages allowed per rate limit duration.

sasl

object

Specifies the SASL mechanism for authenticating with a Kafka cluster.

mechanism

enum

SASL mechanism used for Kafka authentication. Allowed enum values: PLAIN,SCRAM-SHA-256,SCRAM-SHA-512

socket_timeout_ms

int64

Socket timeout in milliseconds for network requests.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

topic [required]

string

The Kafka topic name to publish logs to.

type [required]

enum

The destination type. The value should always be kafka. Allowed enum values: kafka

default: kafka
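
As an example, a kafka destination entry publishing JSON-encoded events to a single topic might look like the following; the topic name, key field, and SASL mechanism are placeholders.

          {
            "id": "kafka-destination",
            "type": "kafka",
            "inputs": ["my-processor-group"],
            "topic": "processed-logs",
            "encoding": "json",
            "compression": "gzip",
            "key_field": "service",
            "sasl": { "mechanism": "PLAIN" }
          }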

Option 14

object

The microsoft_sentinel destination forwards logs to Microsoft Sentinel.

Supported pipeline types: logs

client_id [required]

string

Azure AD client ID used for authentication.

dcr_immutable_id [required]

string

The immutable ID of the Data Collection Rule (DCR).

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

table [required]

string

The name of the Log Analytics table where logs are sent.

tenant_id [required]

string

Azure AD tenant ID.

type [required]

enum

The destination type. The value should always be microsoft_sentinel. Allowed enum values: microsoft_sentinel

default: microsoft_sentinel

Option 15

object

The new_relic destination sends logs to the New Relic platform.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

region [required]

enum

The New Relic region. Allowed enum values: us,eu

type [required]

enum

The destination type. The value should always be new_relic. Allowed enum values: new_relic

default: new_relic

Option 16

object

The opensearch destination writes logs to an OpenSearch cluster.

Supported pipeline types: logs

bulk_index

string

The index to write logs to.

data_stream

object

Configuration options for writing to OpenSearch Data Streams instead of a fixed index.

dataset

string

The data stream dataset for your logs. This groups logs by their source or application.

dtype

string

The data stream type for your logs. This determines how logs are categorized within the data stream.

namespace

string

The data stream namespace for your logs. This separates logs into different environments or domains.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be opensearch. Allowed enum values: opensearch

default: opensearch

Option 17

object

The rsyslog destination forwards logs to an external rsyslog server over TCP or UDP using the syslog protocol.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

keepalive

int64

Optional socket keepalive duration in milliseconds.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. The value should always be rsyslog. Allowed enum values: rsyslog

default: rsyslog

Option 18

object

The sentinel_one destination sends logs to SentinelOne.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

region [required]

enum

The SentinelOne region to send logs to. Allowed enum values: us,eu,ca,data_set_us

type [required]

enum

The destination type. The value should always be sentinel_one. Allowed enum values: sentinel_one

default: sentinel_one

Option 19

object

The socket destination sends logs over TCP or UDP to a remote server.

Supported pipeline types: logs

encoding [required]

enum

Encoding format for log events. Allowed enum values: json,raw_message

framing [required]

 <oneOf>

Framing method configuration.

Option 1

object

Each log event is delimited by a newline character.

method [required]

enum

The framing method. Each log event is delimited by a newline character. Allowed enum values: newline_delimited

Option 2

object

Event data is not delimited at all.

method [required]

enum

The framing method. Event data is not delimited at all. Allowed enum values: bytes

Option 3

object

Each log event is separated using the specified delimiter character.

delimiter [required]

string

A single ASCII character used as a delimiter.

method [required]

enum

The framing method. Each log event is separated using the specified delimiter character. Allowed enum values: character_delimited

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

mode [required]

enum

Protocol used to send logs. Allowed enum values: tcp,udp

tls

object

TLS configuration. Relevant only when mode is tcp.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. The value should always be socket. Allowed enum values: socket

default: socket

Option 20

object

The splunk_hec destination forwards logs to Splunk using the HTTP Event Collector (HEC).

Supported pipeline types: logs

auto_extract_timestamp

boolean

If true, Splunk tries to extract timestamps from incoming log events. If false, Splunk assigns the time the event was received.

encoding

enum

Encoding format for log events. Allowed enum values: json,raw_message

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

index

string

Optional name of the Splunk index where logs are written.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

sourcetype

string

The Splunk sourcetype to assign to log events.

type [required]

enum

The destination type. Always splunk_hec. Allowed enum values: splunk_hec

default: splunk_hec

Option 21

object

The sumo_logic destination forwards logs to Sumo Logic.

Supported pipeline types: logs

encoding

enum

The output encoding format. Allowed enum values: json,raw_message,logfmt

header_custom_fields

[object]

A list of custom headers to include in the request to Sumo Logic.

name [required]

string

The header field name.

value [required]

string

The header field value.

header_host_name

string

Optional override for the host name header.

header_source_category

string

Optional override for the source category header.

header_source_name

string

Optional override for the source name header.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be sumo_logic. Allowed enum values: sumo_logic

default: sumo_logic

Option 22

object

The syslog_ng destination forwards logs to an external syslog-ng server over TCP or UDP using the syslog protocol.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

keepalive

int64

Optional socket keepalive duration in milliseconds.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. The value should always be syslog_ng. Allowed enum values: syslog_ng

default: syslog_ng

Option 23

object

The datadog_metrics destination forwards metrics to Datadog.

Supported pipeline types: metrics

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be datadog_metrics. Allowed enum values: datadog_metrics

default: datadog_metrics

pipeline_type

enum

The type of data being ingested. Defaults to logs if not specified. Allowed enum values: logs,metrics

default: logs

processor_groups

[object]

A list of processor groups that transform or enrich log data.

display_name

string

The display name for a component.

enabled [required]

boolean

Whether this processor group is enabled.

id [required]

string

The unique identifier for the processor group.

include [required]

string

Conditional expression for when this processor group should execute.

inputs [required]

[string]

A list of IDs for components whose output is used as the input for this processor group.

processors [required]

[ <oneOf>]

Processors applied sequentially within this group. Events flow through each processor in order.

Option 1

object

The filter processor allows conditional processing of logs/metrics based on a Datadog search query. Logs/metrics that match the include query are passed through; others are discarded.

Supported pipeline types: logs, metrics

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs/metrics should pass through the filter. Logs/metrics that match this query continue to downstream components; others are dropped.

type [required]

enum

The processor type. The value should always be filter. Allowed enum values: filter

default: filter

Option 2

object

The add_env_vars processor adds environment variable values to log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used to reference this processor in the pipeline.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be add_env_vars. Allowed enum values: add_env_vars

default: add_env_vars

variables [required]

[object]

A list of environment variable mappings to apply to log fields.

field [required]

string

The target field in the log event.

name [required]

string

The name of the environment variable to read.

Option 3

object

The add_fields processor adds static key-value fields to logs.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[object]

A list of static fields (key-value pairs) that is added to each log event processed by this component.

name [required]

string

The field name.

value [required]

string

The field value.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be add_fields. Allowed enum values: add_fields

default: add_fields
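
For reference, an add_fields processor entry inside a processor group's processors array could look like the following; the field names and values are illustrative.

              {
                "id": "add-fields-processor",
                "type": "add_fields",
                "enabled": true,
                "include": "service:my-service",
                "fields": [
                  { "name": "team", "value": "platform" },
                  { "name": "env", "value": "production" }
                ]
              }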

Option 4

object

The add_hostname processor adds the hostname to log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be add_hostname. Allowed enum values: add_hostname

default: add_hostname

Option 5

object

The custom_processor processor transforms events using Vector Remap Language (VRL) scripts with advanced filtering capabilities.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets. This field should always be set to * for the custom_processor processor.

default: *

remaps [required]

[object]

Array of VRL remap rules.

drop_on_error [required]

boolean

Whether to drop events that caused errors during processing.

enabled

boolean

Whether this remap rule is enabled.

include [required]

string

A Datadog search query used to filter events for this specific remap rule.

name [required]

string

A descriptive name for this remap rule.

source [required]

string

The VRL script source code that defines the processing logic.

type [required]

enum

The processor type. The value should always be custom_processor. Allowed enum values: custom_processor

default: custom_processor
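
A sketch of a custom_processor entry with one VRL remap rule is shown below; the rule name, filter query, and VRL script are placeholder examples, not a prescribed redaction approach.

              {
                "id": "custom-vrl-processor",
                "type": "custom_processor",
                "enabled": true,
                "include": "*",
                "remaps": [
                  {
                    "name": "redact-email",
                    "enabled": true,
                    "include": "service:my-service",
                    "drop_on_error": false,
                    "source": ".email = \"[REDACTED]\""
                  }
                ]
              }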

Option 6

object

The datadog_tags processor includes or excludes specific Datadog tags in your logs.

Supported pipeline types: logs

action [required]

enum

The action to take on tags with matching keys. Allowed enum values: include,exclude

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

keys [required]

[string]

A list of tag keys.

mode [required]

enum

The processing mode. Allowed enum values: filter

type [required]

enum

The processor type. The value should always be datadog_tags. Allowed enum values: datadog_tags

default: datadog_tags

Option 7

object

The dedupe processor removes duplicate fields in log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[string]

A list of log field paths to check for duplicates.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

mode [required]

enum

The deduplication mode to apply to the fields. Allowed enum values: match,ignore

type [required]

enum

The processor type. The value should always be dedupe. Allowed enum values: dedupe

default: dedupe

Option 8

object

The enrichment_table processor enriches logs using a static CSV file, GeoIP database, or reference table. Exactly one of file, geoip, or reference_table must be configured.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

file

object

Defines a static enrichment table loaded from a CSV file.

encoding [required]

object

File encoding format.

delimiter [required]

string

The character used to separate columns in the CSV file.

includes_headers [required]

boolean

Whether the CSV file includes a header row.

type [required]

enum

Specifies the encoding format (e.g., CSV) used for enrichment tables. Allowed enum values: csv

key [required]

[object]

Key fields used to look up enrichment values.

column [required]

string

The column in the enrichment file to match against.

comparison [required]

enum

Defines how to compare key fields for enrichment table lookups. Allowed enum values: equals

field [required]

string

The log event field whose value is compared against the column.

path [required]

string

Path to the CSV file.

schema [required]

[object]

Schema defining column names and their types.

column [required]

string

The name of the column as it appears in the enrichment file.

type [required]

enum

Declares allowed data types for enrichment table columns. Allowed enum values: string,boolean,integer,float,date,timestamp

geoip

object

Uses a GeoIP database to enrich logs based on an IP field.

key_field [required]

string

Path to the IP field in the log.

locale [required]

string

Locale used to resolve geographical names.

path [required]

string

Path to the GeoIP database file.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

reference_table

object

Uses a Datadog reference table to enrich logs.

columns

[string]

List of column names to include from the reference table. If not provided, all columns are included.

key_field [required]

string

Path to the field in the log event to match against the reference table.

table_id [required]

string

The unique identifier of the reference table.

target [required]

string

Path where enrichment results should be stored in the log.

type [required]

enum

The processor type. The value should always be enrichment_table. Allowed enum values: enrichment_table

default: enrichment_table
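
For illustration, an enrichment_table processor configured with a CSV file (one of the three mutually exclusive lookup sources) might look like the following; the file path, column names, and target path are placeholders.

              {
                "id": "enrichment-table-processor",
                "type": "enrichment_table",
                "enabled": true,
                "include": "*",
                "target": "host_metadata",
                "file": {
                  "path": "/etc/pipeline/hosts.csv",
                  "encoding": {
                    "type": "csv",
                    "delimiter": ",",
                    "includes_headers": true
                  },
                  "key": [
                    { "column": "hostname", "field": "host", "comparison": "equals" }
                  ],
                  "schema": [
                    { "column": "hostname", "type": "string" },
                    { "column": "datacenter", "type": "string" }
                  ]
                }
              }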

Option 9

object

The generate_datadog_metrics processor creates custom metrics from logs and sends them to Datadog. Metrics can be counters, gauges, or distributions and optionally grouped by log fields.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline.

include

string

A Datadog search query used to determine which logs this processor targets.

metrics

[object]

Configuration for generating individual metrics.

group_by

[string]

Optional fields used to group the metric series.

include [required]

string

Datadog filter query to match logs for metric generation.

metric_type [required]

enum

Type of metric to create. Allowed enum values: count,gauge,distribution

name [required]

string

Name of the custom metric to be created.

value [required]

 <oneOf>

Specifies how the value of the generated metric is computed.

Option 1

object

Strategy that increments a generated metric by one for each matching event.

strategy [required]

enum

Increments the metric by 1 for each matching event. Allowed enum values: increment_by_one

Option 2

object

Strategy that increments a generated metric based on the value of a log field.

field [required]

string

Name of the log field containing the numeric value to increment the metric by.

strategy [required]

enum

Uses a numeric field in the log event as the metric increment. Allowed enum values: increment_by_field

type [required]

enum

The processor type. Always generate_datadog_metrics. Allowed enum values: generate_datadog_metrics

default: generate_datadog_metrics
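
As a sketch, a generate_datadog_metrics processor that counts error logs per service could be written as follows; the metric name and queries are illustrative.

              {
                "id": "generate-metrics-processor",
                "type": "generate_datadog_metrics",
                "enabled": true,
                "include": "*",
                "metrics": [
                  {
                    "name": "logs.error.count",
                    "metric_type": "count",
                    "include": "status:error",
                    "group_by": ["service"],
                    "value": { "strategy": "increment_by_one" }
                  }
                ]
              }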

Option 10

object

The ocsf_mapper processor transforms logs into the OCSF schema using a predefined mapping configuration.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

mappings [required]

[object]

A list of mapping rules to convert events to the OCSF format.

include [required]

string

A Datadog search query used to select the logs that this mapping should apply to.

mapping [required]

 <oneOf>

Defines a single mapping rule for transforming logs into the OCSF schema.

Option 1

enum

Predefined library mappings for common log formats. Allowed enum values: CloudTrail Account Change,GCP Cloud Audit CreateBucket,GCP Cloud Audit CreateSink,GCP Cloud Audit SetIamPolicy,GCP Cloud Audit UpdateSink,Github Audit Log API Activity,Google Workspace Admin Audit addPrivilege,Microsoft 365 Defender Incident,Microsoft 365 Defender UserLoggedIn,Okta System Log Authentication,Palo Alto Networks Firewall Traffic

type [required]

enum

The processor type. The value should always be ocsf_mapper. Allowed enum values: ocsf_mapper

default: ocsf_mapper

Option 11

object

The parse_grok processor extracts structured fields from unstructured log messages using Grok patterns.

Supported pipeline types: logs

disable_library_rules

boolean

If set to true, disables the default Grok rules provided by Datadog.

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

A unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

rules [required]

[object]

The list of Grok parsing rules. If multiple matching rules are provided, they are evaluated in order. The first successful match is applied.

match_rules [required]

[object]

A list of Grok parsing rules that define how to extract fields from the source field. Each rule must contain a name and a valid Grok pattern.

name [required]

string

The name of the rule.

rule [required]

string

The definition of the Grok rule.

source [required]

string

The name of the field in the log event to apply the Grok rules to.

support_rules

[object]

A list of Grok helper rules that can be referenced by the parsing rules.

name [required]

string

The name of the Grok helper rule.

rule [required]

string

The definition of the Grok helper rule.

type [required]

enum

The processor type. The value should always be parse_grok. Allowed enum values: parse_grok

default: parse_grok
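
A minimal, hypothetical parse_grok configuration using only the documented fields is sketched below; the id, include query, and the Grok pattern itself are illustrative examples rather than values taken from a real pipeline.

```json
{
  "id": "parse-access-logs",
  "type": "parse_grok",
  "enabled": true,
  "include": "source:nginx",
  "disable_library_rules": false,
  "rules": [
    {
      "source": "message",
      "match_rules": [
        {
          "name": "access_line",
          "rule": "%{ip:network.client.ip} %{word:http.method} %{notSpace:http.url} %{number:http.status_code}"
        }
      ],
      "support_rules": []
    }
  ]
}
```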

Option 12

object

The parse_json processor extracts JSON from a specified field and flattens it into the event. This is useful when logs contain embedded JSON as a string.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

field [required]

string

The name of the log field that contains a JSON string.

id [required]

string

A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be parse_json. Allowed enum values: parse_json

default: parse_json
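
For example, a parse_json processor that flattens a JSON string stored in a message field might be configured as in this sketch; the id and field name are placeholders.

```json
{
  "id": "parse-embedded-json",
  "type": "parse_json",
  "enabled": true,
  "include": "*",
  "field": "message"
}
```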

Option 13

object

The parse_xml processor parses XML from a specified field and extracts it into the event.

Supported pipeline types: logs

always_use_text_key

boolean

Whether to always use a text key for element content.

attr_prefix

string

The prefix to use for XML attributes in the parsed output.

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

field [required]

string

The name of the log field that contains an XML string.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

include_attr

boolean

Whether to include XML attributes in the parsed output.

parse_bool

boolean

Whether to parse boolean values from strings.

parse_null

boolean

Whether to parse null values.

parse_number

boolean

Whether to parse numeric values from strings.

text_key

string

The key name to use for text content within XML elements. Must be at least 1 character if specified.

type [required]

enum

The processor type. The value should always be parse_xml. Allowed enum values: parse_xml

default: parse_xml

Option 14

object

The quota processor measures logging traffic for logs that match a specified filter. When the configured daily quota is met, the processor can drop or alert.

Supported pipeline types: logs

display_name

string

The display name for a component.

drop_events

boolean

If set to true, logs that match the quota filter and are sent after the quota is exceeded are dropped. Logs that do not match the filter continue through the pipeline. Note: You can set either drop_events or overflow_action, but not both.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

ignore_when_missing_partitions

boolean

If true, the processor skips quota checks when partition fields are missing from the logs.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

limit [required]

object

The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.

enforce [required]

enum

Unit for quota enforcement in bytes for data size or events for count. Allowed enum values: bytes,events

limit [required]

int64

The limit for quota enforcement.

name [required]

string

Name of the quota.

overflow_action

enum

The action to take when the quota or bucket limit is exceeded. Options:

  • drop: Drop the event.
  • no_action: Let the event pass through.
  • overflow_routing: Route to an overflow destination.

Allowed enum values: drop,no_action,overflow_routing

overrides

[object]

A list of alternate quota rules that apply to specific sets of events, identified by matching field values. Each override can define a custom limit.

fields [required]

[object]

A list of field matchers used to apply a specific override. If an event matches all listed key-value pairs, the corresponding override limit is enforced.

name [required]

string

The field name.

value [required]

string

The field value.

limit [required]

object

The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.

enforce [required]

enum

Unit for quota enforcement in bytes for data size or events for count. Allowed enum values: bytes,events

limit [required]

int64

The limit for quota enforcement.

partition_fields

[string]

A list of fields used to segment log traffic for quota enforcement. Quotas are tracked independently by unique combinations of these field values.

too_many_buckets_action

enum

The action to take when the quota or bucket limit is exceeded. Options:

  • drop: Drop the event.
  • no_action: Let the event pass through.
  • overflow_routing: Route to an overflow destination.

Allowed enum values: drop,no_action,overflow_routing

type [required]

enum

The processor type. The value should always be quota. Allowed enum values: quota

default: quota
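
To make the quota fields concrete, the sketch below shows a hypothetical per-service byte quota with one override; the id, quota name, partition field, and limits are placeholder values. Because drop_events and overflow_action cannot both be set, this sketch uses only overflow_action.

```json
{
  "id": "daily-quota",
  "type": "quota",
  "enabled": true,
  "include": "*",
  "name": "per-service-daily-quota",
  "limit": { "enforce": "bytes", "limit": 10000000000 },
  "partition_fields": ["service"],
  "ignore_when_missing_partitions": true,
  "overflow_action": "drop",
  "overrides": [
    {
      "fields": [{ "name": "service", "value": "payments" }],
      "limit": { "enforce": "bytes", "limit": 50000000000 }
    }
  ]
}
```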

Option 15

object

The reduce processor aggregates and merges logs based on matching keys and merge strategies.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

group_by [required]

[string]

A list of fields used to group log events for merging.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

merge_strategies [required]

[object]

List of merge strategies defining how values from grouped events should be combined.

path [required]

string

The field path in the log event.

strategy [required]

enum

The merge strategy to apply. Allowed enum values: discard,retain,sum,max,min,array,concat,concat_newline,concat_raw,shortest_array,longest_array,flat_unique

type [required]

enum

The processor type. The value should always be reduce. Allowed enum values: reduce

default: reduce
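
As an illustrative sketch built from the fields above, a reduce processor might group events by a trace identifier and combine selected fields; the id, group_by field, and paths are placeholders.

```json
{
  "id": "reduce-retries",
  "type": "reduce",
  "enabled": true,
  "include": "*",
  "group_by": ["trace_id"],
  "merge_strategies": [
    { "path": "message", "strategy": "concat_newline" },
    { "path": "retry_count", "strategy": "sum" }
  ]
}
```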

Option 16

object

The remove_fields processor deletes specified fields from logs.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[string]

A list of field names to be removed from each log event.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be remove_fields. Allowed enum values: remove_fields

default: remove_fields

Option 17

object

The rename_fields processor changes field names.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[object]

A list of rename rules specifying which fields to rename in the event, what to rename them to, and whether to preserve the original fields.

destination [required]

string

The field name to assign the renamed value to.

preserve_source [required]

boolean

Indicates whether the original field, as received from the source, should be kept (true) or removed (false) after renaming.

source [required]

string

The original field name in the log event that should be renamed.

id [required]

string

A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be rename_fields. Allowed enum values: rename_fields

default: rename_fields
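
A hypothetical rename_fields configuration is sketched below; the id and field names are placeholders. Note that preserve_source controls whether the original field is kept alongside the renamed one.

```json
{
  "id": "rename-legacy-fields",
  "type": "rename_fields",
  "enabled": true,
  "include": "*",
  "fields": [
    { "source": "msg", "destination": "message", "preserve_source": false },
    { "source": "lvl", "destination": "status", "preserve_source": true }
  ]
}
```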

Option 18

object

The sample processor allows probabilistic sampling of logs at a fixed rate.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

group_by

[string]

Optional list of fields to group events by. Each group is sampled independently.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

percentage [required]

double

The percentage of logs to sample.

type [required]

enum

The processor type. The value should always be sample. Allowed enum values: sample

default: sample
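
For example, a sample processor keeping roughly 10% of debug logs per service could be configured as in this sketch; the id, include query, and group_by field are placeholders.

```json
{
  "id": "sample-debug-logs",
  "type": "sample",
  "enabled": true,
  "include": "status:debug",
  "percentage": 10.0,
  "group_by": ["service"]
}
```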

Option 19

object

The sensitive_data_scanner processor detects and optionally redacts sensitive data in log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

rules [required]

[object]

A list of rules for identifying and acting on sensitive data patterns.

keyword_options

object

Configuration for keywords used to reinforce sensitive data pattern detection.

keywords [required]

[string]

A list of keywords to match near the sensitive pattern.

proximity [required]

int64

Maximum number of tokens between a keyword and a sensitive value match.

name [required]

string

A name identifying the rule.

on_match [required]

 <oneOf>

Defines what action to take when sensitive data is matched.

Option 1

object

Configuration for completely redacting matched sensitive data.

action [required]

enum

Action type that completely replaces the matched sensitive data with a fixed replacement string to remove all visibility. Allowed enum values: redact

options [required]

object

Configuration for fully redacting sensitive data.

replace [required]

string

The replacement string substituted for the matched sensitive data.

Option 2

object

Configuration for hashing matched sensitive values.

action [required]

enum

Action type that replaces the matched sensitive data with a hashed representation, preserving structure while securing content. Allowed enum values: hash

options

object

The ObservabilityPipelineSensitiveDataScannerProcessorActionHash options.

Option 3

object

Configuration for partially redacting matched sensitive data.

action [required]

enum

Action type that redacts part of the sensitive data while preserving a configurable number of characters, typically used for masking purposes (e.g., show last 4 digits of a credit card). Allowed enum values: partial_redact

options [required]

object

Controls how partial redaction is applied, including character count and direction.

characters [required]

int64

The ObservabilityPipelineSensitiveDataScannerProcessorActionPartialRedactOptions characters.

direction [required]

enum

Indicates whether to redact characters from the first or last part of the matched value. Allowed enum values: first,last

pattern [required]

 <oneOf>

Pattern detection configuration for identifying sensitive data using either a custom regex or a library reference.

Option 1

object

Defines a custom regex-based pattern for identifying sensitive data in logs.

options [required]

object

Options for defining a custom regex pattern.

description

string

Human-readable description providing context about a sensitive data scanner rule.

rule [required]

string

A regular expression used to detect sensitive values. Must be a valid regex.

type [required]

enum

Indicates a custom regular expression is used for matching. Allowed enum values: custom

Option 2

object

Specifies a pattern from Datadog’s sensitive data detection library to match known sensitive data types.

options [required]

object

Options for selecting a predefined library pattern and enabling keyword support.

description

string

Human-readable description providing context about a sensitive data scanner rule.

id [required]

string

Identifier for a predefined pattern from the sensitive data scanner pattern library.

use_recommended_keywords

boolean

Whether to augment the pattern with recommended keywords (optional).

type [required]

enum

Indicates that a predefined library pattern is used. Allowed enum values: library

scope [required]

 <oneOf>

Determines which parts of the log the pattern-matching rule should be applied to.

Option 1

object

Includes only specific fields for sensitive data scanning.

options [required]

object

Fields to which the scope rule applies.

fields [required]

[string]

The ObservabilityPipelineSensitiveDataScannerProcessorScopeOptions fields.

target [required]

enum

Applies the rule only to included fields. Allowed enum values: include

Option 2

object

Excludes specific fields from sensitive data scanning.

options [required]

object

Fields to which the scope rule applies.

fields [required]

[string]

The ObservabilityPipelineSensitiveDataScannerProcessorScopeOptions fields.

target [required]

enum

Excludes specific fields from processing. Allowed enum values: exclude

Option 3

object

Applies scanning across all available fields.

target [required]

enum

Applies the rule to all fields. Allowed enum values: all

tags [required]

[string]

Tags assigned to this rule for filtering and classification.

type [required]

enum

The processor type. The value should always be sensitive_data_scanner. Allowed enum values: sensitive_data_scanner

default: sensitive_data_scanner
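
Tying the rule, pattern, scope, and on_match pieces together, a hypothetical sensitive_data_scanner processor with one custom-regex rule is sketched below; the id, rule name, tags, keywords, regex, and replacement string are placeholders. Here the scope target all applies the rule to every field; an include or exclude scope would instead list specific fields under options.fields.

```json
{
  "id": "scan-sensitive-data",
  "type": "sensitive_data_scanner",
  "enabled": true,
  "include": "*",
  "rules": [
    {
      "name": "redact-api-keys",
      "tags": ["sensitive:api_key"],
      "keyword_options": { "keywords": ["api_key", "token"], "proximity": 10 },
      "pattern": {
        "type": "custom",
        "options": { "rule": "sk_[a-zA-Z0-9]{32}" }
      },
      "scope": { "target": "all" },
      "on_match": {
        "action": "redact",
        "options": { "replace": "[REDACTED]" }
      }
    }
  ]
}
```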

Option 20

object

The split_array processor splits array fields into separate events based on configured rules.

Supported pipeline types: logs

arrays [required]

[object]

A list of array split configurations.

field [required]

string

The path to the array field to split.

include [required]

string

A Datadog search query used to determine which logs this array split operation targets.

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets. For split_array, this should typically be *.

type [required]

enum

The processor type. The value should always be split_array. Allowed enum values: split_array

default: split_array
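
A minimal illustrative split_array configuration, using only the documented fields, might look like the following; the id and array field path are placeholders.

```json
{
  "id": "split-record-arrays",
  "type": "split_array",
  "enabled": true,
  "include": "*",
  "arrays": [
    { "field": "records", "include": "*" }
  ]
}
```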

Option 21

object

The throttle processor limits the number of events that pass through over a given time window.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

group_by

[string]

Optional list of fields used to group events before the threshold is applied.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

threshold [required]

int64

The number of events allowed in a given time window. Events sent after the threshold has been reached are dropped.

type [required]

enum

The processor type. The value should always be throttle. Allowed enum values: throttle

default: throttle

window [required]

double

The time window in seconds over which the threshold applies.
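
As a sketch, a throttle processor allowing up to 1,000 events per 60-second window, grouped by host, could be configured as below; the id, include query, and values are placeholders.

```json
{
  "id": "throttle-noisy-service",
  "type": "throttle",
  "enabled": true,
  "include": "service:checkout",
  "threshold": 1000,
  "window": 60.0,
  "group_by": ["host"]
}
```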

Option 22

object

The metric_tags processor filters metrics based on their tags using Datadog tag key patterns.

Supported pipeline types: metrics

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query that determines which metrics the processor targets.

rules [required]

[object]

A list of rules for filtering metric tags.

action [required]

enum

The action to take on tags with matching keys. Allowed enum values: include,exclude

include [required]

string

A Datadog search query used to determine which metrics this rule targets.

keys [required]

[string]

A list of tag keys to include or exclude.

mode [required]

enum

The processing mode for tag filtering. Allowed enum values: filter

type [required]

enum

The processor type. The value should always be metric_tags. Allowed enum values: metric_tags

default: metric_tags
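
For illustration, a hypothetical metric_tags processor that strips high-cardinality tag keys is sketched below; the id, include queries, and tag keys are placeholders.

```json
{
  "id": "filter-metric-tags",
  "type": "metric_tags",
  "enabled": true,
  "include": "*",
  "rules": [
    {
      "action": "exclude",
      "mode": "filter",
      "include": "*",
      "keys": ["pod_name", "container_id"]
    }
  ]
}
```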

processors

[object]

DEPRECATED: A list of processor groups that transform or enrich log data.

Deprecated: This field is deprecated; use the processor_groups field instead.

display_name

string

The display name for a component.

enabled [required]

boolean

Whether this processor group is enabled.

id [required]

string

The unique identifier for the processor group.

include [required]

string

Conditional expression for when this processor group should execute.

inputs [required]

[string]

A list of IDs for components whose output is used as the input for this processor group.

processors [required]

[ <oneOf>]

Processors applied sequentially within this group. Events flow through each processor in order.

Option 1

object

The filter processor allows conditional processing of logs/metrics based on a Datadog search query. Logs/metrics that match the include query are passed through; others are discarded.

Supported pipeline types: logs, metrics

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs/metrics should pass through the filter. Logs/metrics that match this query continue to downstream components; others are dropped.

type [required]

enum

The processor type. The value should always be filter. Allowed enum values: filter

default: filter

Option 2

object

The add_env_vars processor adds environment variable values to log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used to reference this processor in the pipeline.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be add_env_vars. Allowed enum values: add_env_vars

default: add_env_vars

variables [required]

[object]

A list of environment variable mappings to apply to log fields.

field [required]

string

The target field in the log event.

name [required]

string

The name of the environment variable to read.

Option 3

object

The add_fields processor adds static key-value fields to logs.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[object]

A list of static fields (key-value pairs) that is added to each log event processed by this component.

name [required]

string

The field name.

value [required]

string

The field value.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be add_fields. Allowed enum values: add_fields

default: add_fields

Option 4

object

The add_hostname processor adds the hostname to log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be add_hostname. Allowed enum values: add_hostname

default: add_hostname

Option 5

object

The custom_processor processor transforms events using Vector Remap Language (VRL) scripts with advanced filtering capabilities.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets. This field should always be set to * for the custom_processor processor.

default: *

remaps [required]

[object]

Array of VRL remap rules.

drop_on_error [required]

boolean

Whether to drop events that caused errors during processing.

enabled

boolean

Whether this remap rule is enabled.

include [required]

string

A Datadog search query used to filter events for this specific remap rule.

name [required]

string

A descriptive name for this remap rule.

source [required]

string

The VRL script source code that defines the processing logic.

type [required]

enum

The processor type. The value should always be custom_processor. Allowed enum values: custom_processor

default: custom_processor

Option 6

object

The datadog_tags processor includes or excludes specific Datadog tags in your logs.

Supported pipeline types: logs

action [required]

enum

The action to take on tags with matching keys. Allowed enum values: include,exclude

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

keys [required]

[string]

A list of tag keys.

mode [required]

enum

The processing mode. Allowed enum values: filter

type [required]

enum

The processor type. The value should always be datadog_tags. Allowed enum values: datadog_tags

default: datadog_tags

Option 7

object

The dedupe processor removes duplicate fields in log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[string]

A list of log field paths to check for duplicates.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

mode [required]

enum

The deduplication mode to apply to the fields. Allowed enum values: match,ignore

type [required]

enum

The processor type. The value should always be dedupe. Allowed enum values: dedupe

default: dedupe

Option 8

object

The enrichment_table processor enriches logs using a static CSV file, GeoIP database, or reference table. Exactly one of file, geoip, or reference_table must be configured.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

file

object

Defines a static enrichment table loaded from a CSV file.

encoding [required]

object

File encoding format.

delimiter [required]

string

The encoding delimiter.

includes_headers [required]

boolean

Whether the CSV file includes a header row.

type [required]

enum

Specifies the encoding format (e.g., CSV) used for enrichment tables. Allowed enum values: csv

key [required]

[object]

Key fields used to look up enrichment values.

column [required]

string

The items column.

comparison [required]

enum

Defines how to compare key fields for enrichment table lookups. Allowed enum values: equals

field [required]

string

The items field.

path [required]

string

Path to the CSV file.

schema [required]

[object]

Schema defining column names and their types.

column [required]

string

The items column.

type [required]

enum

Declares allowed data types for enrichment table columns. Allowed enum values: string,boolean,integer,float,date,timestamp

geoip

object

Uses a GeoIP database to enrich logs based on an IP field.

key_field [required]

string

Path to the IP field in the log.

locale [required]

string

Locale used to resolve geographical names.

path [required]

string

Path to the GeoIP database file.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

reference_table

object

Uses a Datadog reference table to enrich logs.

columns

[string]

List of column names to include from the reference table. If not provided, all columns are included.

key_field [required]

string

Path to the field in the log event to match against the reference table.

table_id [required]

string

The unique identifier of the reference table.

target [required]

string

Path where enrichment results should be stored in the log.

type [required]

enum

The processor type. The value should always be enrichment_table. Allowed enum values: enrichment_table

default: enrichment_table

Option 9

object

The generate_datadog_metrics processor creates custom metrics from logs and sends them to Datadog. Metrics can be counters, gauges, or distributions and optionally grouped by log fields.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline.

include

string

A Datadog search query used to determine which logs this processor targets.

metrics

[object]

Configuration for generating individual metrics.

group_by

[string]

Optional fields used to group the metric series.

include [required]

string

Datadog filter query to match logs for metric generation.

metric_type [required]

enum

Type of metric to create. Allowed enum values: count,gauge,distribution

name [required]

string

Name of the custom metric to be created.

value [required]

 <oneOf>

Specifies how the value of the generated metric is computed.

Option 1

object

Strategy that increments a generated metric by one for each matching event.

strategy [required]

enum

Increments the metric by 1 for each matching event. Allowed enum values: increment_by_one

Option 2

object

Strategy that increments a generated metric based on the value of a log field.

field [required]

string

Name of the log field containing the numeric value to increment the metric by.

strategy [required]

enum

Uses a numeric field in the log event as the metric increment. Allowed enum values: increment_by_field

type [required]

enum

The processor type. Always generate_datadog_metrics. Allowed enum values: generate_datadog_metrics

default: generate_datadog_metrics

Option 10

object

The ocsf_mapper processor transforms logs into the OCSF schema using a predefined mapping configuration.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

mappings [required]

[object]

A list of mapping rules to convert events to the OCSF format.

include [required]

string

A Datadog search query used to select the logs that this mapping should apply to.

mapping [required]

 <oneOf>

Defines a single mapping rule for transforming logs into the OCSF schema.

Option 1

enum

Predefined library mappings for common log formats. Allowed enum values: CloudTrail Account Change,GCP Cloud Audit CreateBucket,GCP Cloud Audit CreateSink,GCP Cloud Audit SetIamPolicy,GCP Cloud Audit UpdateSink,Github Audit Log API Activity,Google Workspace Admin Audit addPrivilege,Microsoft 365 Defender Incident,Microsoft 365 Defender UserLoggedIn,Okta System Log Authentication,Palo Alto Networks Firewall Traffic

type [required]

enum

The processor type. The value should always be ocsf_mapper. Allowed enum values: ocsf_mapper

default: ocsf_mapper

Option 11

object

The parse_grok processor extracts structured fields from unstructured log messages using Grok patterns.

Supported pipeline types: logs

disable_library_rules

boolean

If set to true, disables the default Grok rules provided by Datadog.

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

A unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

rules [required]

[object]

The list of Grok parsing rules. If multiple matching rules are provided, they are evaluated in order. The first successful match is applied.

match_rules [required]

[object]

A list of Grok parsing rules that define how to extract fields from the source field. Each rule must contain a name and a valid Grok pattern.

name [required]

string

The name of the rule.

rule [required]

string

The definition of the Grok rule.

source [required]

string

The name of the field in the log event to apply the Grok rules to.

support_rules

[object]

A list of Grok helper rules that can be referenced by the parsing rules.

name [required]

string

The name of the Grok helper rule.

rule [required]

string

The definition of the Grok helper rule.

type [required]

enum

The processor type. The value should always be parse_grok. Allowed enum values: parse_grok

default: parse_grok

Option 12

object

The parse_json processor extracts JSON from a specified field and flattens it into the event. This is useful when logs contain embedded JSON as a string.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

field [required]

string

The name of the log field that contains a JSON string.

id [required]

string

A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be parse_json. Allowed enum values: parse_json

default: parse_json

Option 13

object

The parse_xml processor parses XML from a specified field and extracts it into the event.

Supported pipeline types: logs

always_use_text_key

boolean

Whether to always use a text key for element content.

attr_prefix

string

The prefix to use for XML attributes in the parsed output.

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

field [required]

string

The name of the log field that contains an XML string.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

include_attr

boolean

Whether to include XML attributes in the parsed output.

parse_bool

boolean

Whether to parse boolean values from strings.

parse_null

boolean

Whether to parse null values.

parse_number

boolean

Whether to parse numeric values from strings.

text_key

string

The key name to use for text content within XML elements. Must be at least 1 character if specified.

type [required]

enum

The processor type. The value should always be parse_xml. Allowed enum values: parse_xml

default: parse_xml

Option 14

object

The quota processor measures logging traffic for logs that match a specified filter. When the configured daily quota is met, the processor can drop or alert.

Supported pipeline types: logs

display_name

string

The display name for a component.

drop_events

boolean

If set to true, logs that match the quota filter and are sent after the quota is exceeded are dropped. Logs that do not match the filter continue through the pipeline. Note: You can set either drop_events or overflow_action, but not both.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

ignore_when_missing_partitions

boolean

If true, the processor skips quota checks when partition fields are missing from the logs.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

limit [required]

object

The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.

enforce [required]

enum

Unit for quota enforcement in bytes for data size or events for count. Allowed enum values: bytes,events

limit [required]

int64

The limit for quota enforcement.

name [required]

string

Name of the quota.

overflow_action

enum

The action to take when the quota or bucket limit is exceeded. Options:

  • drop: Drop the event.
  • no_action: Let the event pass through.
  • overflow_routing: Route to an overflow destination.

Allowed enum values: drop,no_action,overflow_routing

overrides

[object]

A list of alternate quota rules that apply to specific sets of events, identified by matching field values. Each override can define a custom limit.

fields [required]

[object]

A list of field matchers used to apply a specific override. If an event matches all listed key-value pairs, the corresponding override limit is enforced.

name [required]

string

The field name.

value [required]

string

The field value.

limit [required]

object

The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.

enforce [required]

enum

Unit for quota enforcement in bytes for data size or events for count. Allowed enum values: bytes,events

limit [required]

int64

The limit for quota enforcement.

partition_fields

[string]

A list of fields used to segment log traffic for quota enforcement. Quotas are tracked independently by unique combinations of these field values.

too_many_buckets_action

enum

The action to take when the quota or bucket limit is exceeded. Options:

  • drop: Drop the event.
  • no_action: Let the event pass through.
  • overflow_routing: Route to an overflow destination.

Allowed enum values: drop,no_action,overflow_routing

type [required]

enum

The processor type. The value should always be quota. Allowed enum values: quota

default: quota

Option 15

object

The reduce processor aggregates and merges logs based on matching keys and merge strategies.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

group_by [required]

[string]

A list of fields used to group log events for merging.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

merge_strategies [required]

[object]

List of merge strategies defining how values from grouped events should be combined.

path [required]

string

The field path in the log event.

strategy [required]

enum

The merge strategy to apply. Allowed enum values: discard,retain,sum,max,min,array,concat,concat_newline,concat_raw,shortest_array,longest_array,flat_unique

type [required]

enum

The processor type. The value should always be reduce. Allowed enum values: reduce

default: reduce

Option 16

object

The remove_fields processor deletes specified fields from logs.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[string]

A list of field names to be removed from each log event.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be remove_fields. Allowed enum values: remove_fields

default: remove_fields

Option 17

object

The rename_fields processor changes field names.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[object]

A list of rename rules specifying which fields to rename in the event, what to rename them to, and whether to preserve the original fields.

destination [required]

string

The field name to assign the renamed value to.

preserve_source [required]

boolean

Indicates whether the original field, as received from the source, should be kept (true) or removed (false) after renaming.

source [required]

string

The original field name in the log event that should be renamed.

id [required]

string

A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be rename_fields. Allowed enum values: rename_fields

default: rename_fields

Option 18

object

The sample processor allows probabilistic sampling of logs at a fixed rate.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

group_by

[string]

Optional list of fields to group events by. Each group is sampled independently.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

percentage [required]

double

The percentage of logs to sample.

type [required]

enum

The processor type. The value should always be sample. Allowed enum values: sample

default: sample

Option 19

object

The sensitive_data_scanner processor detects and optionally redacts sensitive data in log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

rules [required]

[object]

A list of rules for identifying and acting on sensitive data patterns.

keyword_options

object

Configuration for keywords used to reinforce sensitive data pattern detection.

keywords [required]

[string]

A list of keywords to match near the sensitive pattern.

proximity [required]

int64

Maximum number of tokens between a keyword and a sensitive value match.

name [required]

string

A name identifying the rule.

on_match [required]

 <oneOf>

Defines what action to take when sensitive data is matched.

Option 1

object

Configuration for completely redacting matched sensitive data.

action [required]

enum

Action type that completely replaces the matched sensitive data with a fixed replacement string to remove all visibility. Allowed enum values: redact

options [required]

object

Configuration for fully redacting sensitive data.

replace [required]

string

The replacement string substituted for the matched sensitive data.

Option 2

object

Configuration for hashing matched sensitive values.

action [required]

enum

Action type that replaces the matched sensitive data with a hashed representation, preserving structure while securing content. Allowed enum values: hash

options

object

The ObservabilityPipelineSensitiveDataScannerProcessorActionHash options.

Option 3

object

Configuration for partially redacting matched sensitive data.

action [required]

enum

Action type that redacts part of the sensitive data while preserving a configurable number of characters, typically used for masking purposes (e.g., show last 4 digits of a credit card). Allowed enum values: partial_redact

options [required]

object

Controls how partial redaction is applied, including character count and direction.

characters [required]

int64

The ObservabilityPipelineSensitiveDataScannerProcessorActionPartialRedactOptions characters.

direction [required]

enum

Indicates whether to redact characters from the first or last part of the matched value. Allowed enum values: first,last

pattern [required]

 <oneOf>

Pattern detection configuration for identifying sensitive data using either a custom regex or a library reference.

Option 1

object

Defines a custom regex-based pattern for identifying sensitive data in logs.

options [required]

object

Options for defining a custom regex pattern.

description

string

Human-readable description providing context about a sensitive data scanner rule.

rule [required]

string

A regular expression used to detect sensitive values. Must be a valid regex.

type [required]

enum

Indicates a custom regular expression is used for matching. Allowed enum values: custom

Option 2

object

Specifies a pattern from Datadog’s sensitive data detection library to match known sensitive data types.

options [required]

object

Options for selecting a predefined library pattern and enabling keyword support.

description

string

Human-readable description providing context about a sensitive data scanner rule.

id [required]

string

Identifier for a predefined pattern from the sensitive data scanner pattern library.

use_recommended_keywords

boolean

Whether to augment the pattern with recommended keywords (optional).

type [required]

enum

Indicates that a predefined library pattern is used. Allowed enum values: library

scope [required]

 <oneOf>

Determines which parts of the log the pattern-matching rule should be applied to.

Option 1

object

Includes only specific fields for sensitive data scanning.

options [required]

object

Fields to which the scope rule applies.

fields [required]

[string]

The ObservabilityPipelineSensitiveDataScannerProcessorScopeOptions fields.

target [required]

enum

Applies the rule only to included fields. Allowed enum values: include

Option 2

object

Excludes specific fields from sensitive data scanning.

options [required]

object

Fields to which the scope rule applies.

fields [required]

[string]

The ObservabilityPipelineSensitiveDataScannerProcessorScopeOptions fields.

target [required]

enum

Excludes specific fields from processing. Allowed enum values: exclude

Option 3

object

Applies scanning across all available fields.

target [required]

enum

Applies the rule to all fields. Allowed enum values: all

tags [required]

[string]

Tags assigned to this rule for filtering and classification.

type [required]

enum

The processor type. The value should always be sensitive_data_scanner. Allowed enum values: sensitive_data_scanner

default: sensitive_data_scanner

Option 20

object

The split_array processor splits array fields into separate events based on configured rules.

Supported pipeline types: logs

arrays [required]

[object]

A list of array split configurations.

field [required]

string

The path to the array field to split.

include [required]

string

A Datadog search query used to determine which logs this array split operation targets.

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets. For split_array, this should typically be *.

type [required]

enum

The processor type. The value should always be split_array. Allowed enum values: split_array

default: split_array

Option 21

object

The throttle processor limits the number of events that pass through over a given time window.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

group_by

[string]

Optional list of fields used to group events before the threshold is applied.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

threshold [required]

int64

The number of events allowed in a given time window. Events sent after the threshold has been reached are dropped.

type [required]

enum

The processor type. The value should always be throttle. Allowed enum values: throttle

default: throttle

window [required]

double

The time window in seconds over which the threshold applies.

Option 22

object

The metric_tags processor filters metrics based on their tags using Datadog tag key patterns.

Supported pipeline types: metrics

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query that determines which metrics the processor targets.

rules [required]

[object]

A list of rules for filtering metric tags.

action [required]

enum

The action to take on tags with matching keys. Allowed enum values: include,exclude

include [required]

string

A Datadog search query used to determine which metrics this rule targets.

keys [required]

[string]

A list of tag keys to include or exclude.

mode [required]

enum

The processing mode for tag filtering. Allowed enum values: filter

type [required]

enum

The processor type. The value should always be metric_tags. Allowed enum values: metric_tags

default: metric_tags
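
Putting the group structure together, a single processor group entry (the schema shared by the processor_groups field and this deprecated processors field) might look like the following sketch; the ids, input reference, queries, and field names are placeholder values, and the nested processors are only two of the many documented options.

```json
{
  "id": "error-logs-group",
  "display_name": "Error log handling",
  "enabled": true,
  "include": "*",
  "inputs": ["datadog-agent-source"],
  "processors": [
    {
      "id": "filter-errors",
      "type": "filter",
      "enabled": true,
      "include": "status:error"
    },
    {
      "id": "drop-debug-fields",
      "type": "remove_fields",
      "enabled": true,
      "include": "*",
      "fields": ["debug_payload"]
    }
  ]
}
```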

sources [required]

[ <oneOf>]

A list of configured data sources for the pipeline.

Option 1

object

The datadog_agent source collects logs/metrics from the Datadog Agent.

Supported pipeline types: logs, metrics

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be datadog_agent. Allowed enum values: datadog_agent

default: datadog_agent
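
For example, a datadog_agent source with optional TLS might be configured as in this sketch; the id and certificate paths are placeholders.

```json
{
  "id": "datadog-agent-source",
  "type": "datadog_agent",
  "tls": {
    "crt_file": "/etc/certs/pipeline.crt",
    "key_file": "/etc/certs/pipeline.key",
    "ca_file": "/etc/certs/ca.crt"
  }
}
```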

Option 2

object

The amazon_data_firehose source ingests logs from AWS Data Firehose.

Supported pipeline types: logs

auth

object

AWS authentication credentials used for accessing AWS services such as S3. If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).

assume_role

string

The Amazon Resource Name (ARN) of the role to assume.

external_id

string

A unique identifier for cross-account role assumption.

session_name

string

A session identifier used for logging and tracing the assumed role session.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be amazon_data_firehose. Allowed enum values: amazon_data_firehose

default: amazon_data_firehose

Option 3

object

The amazon_s3 source ingests logs from an Amazon S3 bucket. It supports AWS authentication and TLS encryption.

Supported pipeline types: logs

auth

object

AWS authentication credentials used for accessing AWS services such as S3. If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).

assume_role

string

The Amazon Resource Name (ARN) of the role to assume.

external_id

string

A unique identifier for cross-account role assumption.

session_name

string

A session identifier used for logging and tracing the assumed role session.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

region [required]

string

AWS region where the S3 bucket resides.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. Always amazon_s3. Allowed enum values: amazon_s3

default: amazon_s3
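
A hypothetical amazon_s3 source using an assumed role is sketched below; the id, region, role ARN, external ID, and session name are placeholder values.

```json
{
  "id": "s3-log-source",
  "type": "amazon_s3",
  "region": "us-east-1",
  "auth": {
    "assume_role": "arn:aws:iam::123456789012:role/pipeline-read",
    "external_id": "pipeline-external-id",
    "session_name": "obs-pipeline-session"
  }
}
```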

Option 4

object

The fluent_bit source ingests logs from Fluent Bit.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be fluent_bit. Allowed enum values: fluent_bit

default: fluent_bit

Option 5

object

The fluentd source ingests logs from a Fluentd-compatible service.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be fluentd. Allowed enum values: fluentd

default: fluentd

Option 6

object

The google_pubsub source ingests logs from a Google Cloud Pub/Sub subscription.

Supported pipeline types: logs

auth

object

GCP credentials used to authenticate with Google Cloud.

credentials_file [required]

string

Path to the GCP service account key file.

decoding [required]

enum

The decoding format used to interpret incoming logs. Allowed enum values: bytes,gelf,json,syslog

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

project [required]

string

The GCP project ID that owns the Pub/Sub subscription.

subscription [required]

string

The Pub/Sub subscription name from which messages are consumed.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be google_pubsub. Allowed enum values: google_pubsub

default: google_pubsub
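
For illustration, a google_pubsub source entry in the sources array could be expressed as the following Python dictionary; the project ID, subscription name, and credentials path are placeholder values, not defaults defined by the API.

# Hypothetical google_pubsub source entry for the "sources" array.
google_pubsub_source = {
    "id": "pubsub-source",                    # referenced by downstream components as an input
    "type": "google_pubsub",
    "project": "my-gcp-project",              # placeholder GCP project ID
    "subscription": "my-log-subscription",    # placeholder Pub/Sub subscription name
    "decoding": "json",                       # one of: bytes, gelf, json, syslog
    "auth": {"credentials_file": "/var/secrets/gcp/key.json"},  # placeholder key file path
}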

Option 7

object

The http_client source scrapes logs from HTTP endpoints at regular intervals.

Supported pipeline types: logs

auth_strategy

enum

Optional authentication strategy for HTTP requests. Allowed enum values: none,basic,bearer,custom

decoding [required]

enum

The decoding format used to interpret incoming logs. Allowed enum values: bytes,gelf,json,syslog

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

scrape_interval_secs

int64

The interval (in seconds) between HTTP scrape requests.

scrape_timeout_secs

int64

The timeout (in seconds) for each scrape request.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be http_client. Allowed enum values: http_client

default: http_client
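
As a sketch, an http_client source that scrapes an endpoint on a fixed interval might look like the snippet below; the interval, timeout, and component ID are illustrative choices.

# Hypothetical http_client source entry for the "sources" array.
http_client_source = {
    "id": "http-client-source",
    "type": "http_client",
    "decoding": "json",              # one of: bytes, gelf, json, syslog
    "auth_strategy": "bearer",       # optional: none, basic, bearer, custom
    "scrape_interval_secs": 60,      # seconds between scrape requests
    "scrape_timeout_secs": 10,       # per-request timeout in seconds
}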

Option 8

object

The http_server source collects logs over HTTP POST from external services.

Supported pipeline types: logs

auth_strategy [required]

enum

HTTP authentication method. Allowed enum values: none,plain

decoding [required]

enum

The decoding format used to interpret incoming logs. Allowed enum values: bytes,gelf,json,syslog

id [required]

string

Unique ID for the HTTP server source.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be http_server. Allowed enum values: http_server

default: http_server
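
A minimal http_server source, assuming no authentication, could be declared as follows; the field values are placeholders.

# Hypothetical http_server source entry for the "sources" array.
http_server_source = {
    "id": "http-server-source",
    "type": "http_server",
    "auth_strategy": "none",   # required: none or plain
    "decoding": "json",        # required: bytes, gelf, json, or syslog
}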

Option 9

object

The kafka source ingests data from Apache Kafka topics.

Supported pipeline types: logs

group_id [required]

string

Consumer group ID used by the Kafka client.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

librdkafka_options

[object]

Optional list of advanced Kafka client configuration options, defined as key-value pairs.

name [required]

string

The name of the librdkafka configuration option to set.

value [required]

string

The value assigned to the specified librdkafka configuration option.

sasl

object

Specifies the SASL mechanism for authenticating with a Kafka cluster.

mechanism

enum

SASL mechanism used for Kafka authentication. Allowed enum values: PLAIN,SCRAM-SHA-256,SCRAM-SHA-512

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

topics [required]

[string]

A list of Kafka topic names to subscribe to. The source ingests messages from each topic specified.

type [required]

enum

The source type. The value should always be kafka. Allowed enum values: kafka

default: kafka
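
For example, a kafka source that joins a consumer group and subscribes to two topics might be sketched as below; the SASL mechanism and the librdkafka option shown are illustrative choices, not defaults.

# Hypothetical kafka source entry for the "sources" array.
kafka_source = {
    "id": "kafka-source",
    "type": "kafka",
    "group_id": "observability-pipelines",        # Kafka consumer group ID
    "topics": ["app-logs", "audit-logs"],         # topics to subscribe to
    "sasl": {"mechanism": "SCRAM-SHA-512"},       # PLAIN, SCRAM-SHA-256, or SCRAM-SHA-512
    "librdkafka_options": [
        # Advanced client options are passed as name/value string pairs.
        {"name": "fetch.message.max.bytes", "value": "1048576"},
    ],
}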

Option 10

object

The logstash source ingests logs from a Logstash forwarder.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be logstash. Allowed enum values: logstash

default: logstash

Option 11

object

The rsyslog source listens for logs over TCP or UDP from an rsyslog server using the syslog protocol.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

mode [required]

enum

Protocol used by the syslog source to receive messages. Allowed enum values: tcp,udp

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be rsyslog. Allowed enum values: rsyslog

default: rsyslog

Option 12

object

The socket source ingests logs over TCP or UDP.

Supported pipeline types: logs

framing [required]

 <oneOf>

Framing method configuration for the socket source.

Option 1

object

Byte frames which are delimited by a newline character.

method [required]

enum

Byte frames which are delimited by a newline character. Allowed enum values: newline_delimited

Option 2

object

Byte frames are passed through as-is according to the underlying I/O boundaries (for example, split between messages or stream segments).

method [required]

enum

Byte frames are passed through as-is according to the underlying I/O boundaries (for example, split between messages or stream segments). Allowed enum values: bytes

Option 3

object

Byte frames which are delimited by a chosen character.

delimiter [required]

string

A single ASCII character used to delimit events.

method [required]

enum

Byte frames which are delimited by a chosen character. Allowed enum values: character_delimited

Option 4

object

Byte frames according to the octet counting format as per RFC6587.

method [required]

enum

Byte frames according to the octet counting format as per RFC6587. Allowed enum values: octet_counting

Option 5

object

Byte frames which are chunked GELF messages.

method [required]

enum

Byte frames which are chunked GELF messages. Allowed enum values: chunked_gelf

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

mode [required]

enum

Protocol used to receive logs. Allowed enum values: tcp,udp

tls

object

TLS configuration. Relevant only when mode is tcp.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be socket. Allowed enum values: socket

default: socket
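
To make the framing options concrete, a TCP socket source using newline-delimited framing could look like the sketch below; the component ID and the chosen framing method are illustrative.

# Hypothetical socket source entry for the "sources" array.
socket_source = {
    "id": "socket-source",
    "type": "socket",
    "mode": "tcp",                                # tcp or udp
    "framing": {"method": "newline_delimited"},   # see the framing options above
    # For character-delimited framing you would instead pass, for example:
    # "framing": {"method": "character_delimited", "delimiter": "|"},
}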

Option 13

object

The splunk_hec source implements the Splunk HTTP Event Collector (HEC) API.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. Always splunk_hec. Allowed enum values: splunk_hec

default: splunk_hec

Option 14

object

The splunk_tcp source receives logs from a Splunk Universal Forwarder over TCP. TLS is supported for secure transmission.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. Always splunk_tcp. Allowed enum values: splunk_tcp

default: splunk_tcp

Option 15

object

The sumo_logic source receives logs from Sumo Logic collectors.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

type [required]

enum

The source type. The value should always be sumo_logic. Allowed enum values: sumo_logic

default: sumo_logic

Option 16

object

The syslog_ng source listens for logs over TCP or UDP from a syslog-ng server using the syslog protocol.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

mode [required]

enum

Protocol used by the syslog source to receive messages. Allowed enum values: tcp,udp

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be syslog_ng. Allowed enum values: syslog_ng

default: syslog_ng

Option 17

object

The opentelemetry source receives telemetry data using the OpenTelemetry Protocol (OTLP) over gRPC and HTTP.

Supported pipeline types: logs

grpc_address_key

string

Environment variable name containing the gRPC server address for receiving OTLP data. Must be a valid environment variable name (alphanumeric characters and underscores only).

http_address_key

string

Environment variable name containing the HTTP server address for receiving OTLP data. Must be a valid environment variable name (alphanumeric characters and underscores only).

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be opentelemetry. Allowed enum values: opentelemetry

default: opentelemetry
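
As an illustration, an opentelemetry source that reads its gRPC and HTTP listener addresses from environment variables might be configured as follows; the variable names are placeholders you would define in your own deployment.

# Hypothetical opentelemetry source entry for the "sources" array.
otel_source = {
    "id": "otel-source",
    "type": "opentelemetry",
    "grpc_address_key": "OTLP_GRPC_ADDRESS",   # env var holding the gRPC listener address
    "http_address_key": "OTLP_HTTP_ADDRESS",   # env var holding the HTTP listener address
}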

name [required]

string

Name of the pipeline.

id [required]

string

Unique identifier for the pipeline.

type [required]

string

The resource type identifier. For pipeline resources, this should always be set to pipelines.

default: pipelines

{
  "data": {
    "attributes": {
      "config": {
        "destinations": [
          {
            "auth_strategy": "basic",
            "compression": {
              "algorithm": "gzip"
            },
            "encoding": "json",
            "id": "http-client-destination",
            "inputs": [
              "filter-processor"
            ],
            "tls": {
              "ca_file": "string",
              "crt_file": "/path/to/cert.crt",
              "key_file": "string"
            },
            "type": "http_client"
          }
        ],
        "pipeline_type": "logs",
        "processor_groups": [
          {
            "display_name": "my component",
            "enabled": true,
            "id": "grouped-processors",
            "include": "service:my-service",
            "inputs": [
              "datadog-agent-source"
            ],
            "processors": [
              []
            ]
          }
        ],
        "processors": [
          {
            "display_name": "my component",
            "enabled": true,
            "id": "grouped-processors",
            "include": "service:my-service",
            "inputs": [
              "datadog-agent-source"
            ],
            "processors": [
              []
            ]
          }
        ],
        "sources": [
          {
            "id": "datadog-agent-source",
            "tls": {
              "ca_file": "string",
              "crt_file": "/path/to/cert.crt",
              "key_file": "string"
            },
            "type": "datadog_agent"
          }
        ]
      },
      "name": "Main Observability Pipeline"
    },
    "id": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
    "type": "pipelines"
  }
}

Bad Request

API error response.

Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Not Authorized

API error response.

Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Not Found

API error response.

Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Conflict

API error response.

Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Too many requests

API error response.

Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Code example

# Path parameters
export pipeline_id="CHANGE_ME"
# Curl command
curl -X PUT "https://api.datadoghq.com/api/v2/obs-pipelines/pipelines/${pipeline_id}" \
  -H "Accept: application/json" \
  -H "Content-Type: application/json" \
  -H "DD-API-KEY: ${DD_API_KEY}" \
  -H "DD-APPLICATION-KEY: ${DD_APP_KEY}" \
  -d @- << EOF
{
  "data": {
    "attributes": {
      "config": {
        "destinations": [
          {
            "id": "updated-datadog-logs-destination-id",
            "inputs": [
              "my-processor-group"
            ],
            "type": "datadog_logs"
          }
        ],
        "processor_groups": [
          {
            "enabled": true,
            "id": "my-processor-group",
            "include": "service:my-service",
            "inputs": [
              "datadog-agent-source"
            ],
            "processors": [
              {
                "enabled": true,
                "id": "filter-processor",
                "include": "status:error",
                "type": "filter"
              }
            ]
          }
        ],
        "sources": [
          {
            "id": "datadog-agent-source",
            "type": "datadog_agent"
          }
        ]
      },
      "name": "Updated Pipeline Name"
    },
    "id": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
    "type": "pipelines"
  }
}
EOF
// Update a pipeline returns "OK" response

package main

import (
	"context"
	"encoding/json"
	"fmt"
	"os"

	"github.com/DataDog/datadog-api-client-go/v2/api/datadog"
	"github.com/DataDog/datadog-api-client-go/v2/api/datadogV2"
)

func main() {
	// there is a valid "pipeline" in the system
	PipelineDataID := os.Getenv("PIPELINE_DATA_ID")

	body := datadogV2.ObservabilityPipeline{
		Data: datadogV2.ObservabilityPipelineData{
			Attributes: datadogV2.ObservabilityPipelineDataAttributes{
				Config: datadogV2.ObservabilityPipelineConfig{
					Destinations: []datadogV2.ObservabilityPipelineConfigDestinationItem{
						datadogV2.ObservabilityPipelineConfigDestinationItem{
							ObservabilityPipelineDatadogLogsDestination: &datadogV2.ObservabilityPipelineDatadogLogsDestination{
								Id: "updated-datadog-logs-destination-id",
								Inputs: []string{
									"my-processor-group",
								},
								Type: datadogV2.OBSERVABILITYPIPELINEDATADOGLOGSDESTINATIONTYPE_DATADOG_LOGS,
							}},
					},
					Processors: []datadogV2.ObservabilityPipelineConfigProcessorGroup{
						{
							Enabled: true,
							Id:      "my-processor-group",
							Include: "service:my-service",
							Inputs: []string{
								"datadog-agent-source",
							},
							Processors: []datadogV2.ObservabilityPipelineConfigProcessorItem{
								datadogV2.ObservabilityPipelineConfigProcessorItem{
									ObservabilityPipelineFilterProcessor: &datadogV2.ObservabilityPipelineFilterProcessor{
										Enabled: true,
										Id:      "filter-processor",
										Include: "status:error",
										Type:    datadogV2.OBSERVABILITYPIPELINEFILTERPROCESSORTYPE_FILTER,
									}},
							},
						},
					},
					Sources: []datadogV2.ObservabilityPipelineConfigSourceItem{
						datadogV2.ObservabilityPipelineConfigSourceItem{
							ObservabilityPipelineDatadogAgentSource: &datadogV2.ObservabilityPipelineDatadogAgentSource{
								Id:   "datadog-agent-source",
								Type: datadogV2.OBSERVABILITYPIPELINEDATADOGAGENTSOURCETYPE_DATADOG_AGENT,
							}},
					},
				},
				Name: "Updated Pipeline Name",
			},
			Id:   PipelineDataID,
			Type: "pipelines",
		},
	}
	ctx := datadog.NewDefaultContext(context.Background())
	configuration := datadog.NewConfiguration()
	configuration.SetUnstableOperationEnabled("v2.UpdatePipeline", true)
	apiClient := datadog.NewAPIClient(configuration)
	api := datadogV2.NewObservabilityPipelinesApi(apiClient)
	resp, r, err := api.UpdatePipeline(ctx, PipelineDataID, body)

	if err != nil {
		fmt.Fprintf(os.Stderr, "Error when calling `ObservabilityPipelinesApi.UpdatePipeline`: %v\n", err)
		fmt.Fprintf(os.Stderr, "Full HTTP response: %v\n", r)
	}

	responseContent, _ := json.MarshalIndent(resp, "", "  ")
	fmt.Fprintf(os.Stdout, "Response from `ObservabilityPipelinesApi.UpdatePipeline`:\n%s\n", responseContent)
}

Instructions

First install the library and its dependencies, then save the example to main.go and run the following command:

DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" go run "main.go"
// Update a pipeline returns "OK" response

import com.datadog.api.client.ApiClient;
import com.datadog.api.client.ApiException;
import com.datadog.api.client.v2.api.ObservabilityPipelinesApi;
import com.datadog.api.client.v2.model.ObservabilityPipeline;
import com.datadog.api.client.v2.model.ObservabilityPipelineConfig;
import com.datadog.api.client.v2.model.ObservabilityPipelineConfigDestinationItem;
import com.datadog.api.client.v2.model.ObservabilityPipelineConfigProcessorGroup;
import com.datadog.api.client.v2.model.ObservabilityPipelineConfigProcessorItem;
import com.datadog.api.client.v2.model.ObservabilityPipelineConfigSourceItem;
import com.datadog.api.client.v2.model.ObservabilityPipelineData;
import com.datadog.api.client.v2.model.ObservabilityPipelineDataAttributes;
import com.datadog.api.client.v2.model.ObservabilityPipelineDatadogAgentSource;
import com.datadog.api.client.v2.model.ObservabilityPipelineDatadogAgentSourceType;
import com.datadog.api.client.v2.model.ObservabilityPipelineDatadogLogsDestination;
import com.datadog.api.client.v2.model.ObservabilityPipelineDatadogLogsDestinationType;
import com.datadog.api.client.v2.model.ObservabilityPipelineFilterProcessor;
import com.datadog.api.client.v2.model.ObservabilityPipelineFilterProcessorType;
import java.util.Collections;

public class Example {
  public static void main(String[] args) {
    ApiClient defaultClient = ApiClient.getDefaultApiClient();
    defaultClient.setUnstableOperationEnabled("v2.updatePipeline", true);
    ObservabilityPipelinesApi apiInstance = new ObservabilityPipelinesApi(defaultClient);

    // there is a valid "pipeline" in the system
    String PIPELINE_DATA_ID = System.getenv("PIPELINE_DATA_ID");

    ObservabilityPipeline body =
        new ObservabilityPipeline()
            .data(
                new ObservabilityPipelineData()
                    .attributes(
                        new ObservabilityPipelineDataAttributes()
                            .config(
                                new ObservabilityPipelineConfig()
                                    .destinations(
                                        Collections.singletonList(
                                            new ObservabilityPipelineConfigDestinationItem(
                                                new ObservabilityPipelineDatadogLogsDestination()
                                                    .id("updated-datadog-logs-destination-id")
                                                    .inputs(
                                                        Collections.singletonList(
                                                            "my-processor-group"))
                                                    .type(
                                                        ObservabilityPipelineDatadogLogsDestinationType
                                                            .DATADOG_LOGS))))
                                    .processors(
                                        Collections.singletonList(
                                            new ObservabilityPipelineConfigProcessorGroup()
                                                .enabled(true)
                                                .id("my-processor-group")
                                                .include("service:my-service")
                                                .inputs(
                                                    Collections.singletonList(
                                                        "datadog-agent-source"))
                                                .processors(
                                                    Collections.singletonList(
                                                        new ObservabilityPipelineConfigProcessorItem(
                                                            new ObservabilityPipelineFilterProcessor()
                                                                .enabled(true)
                                                                .id("filter-processor")
                                                                .include("status:error")
                                                                .type(
                                                                    ObservabilityPipelineFilterProcessorType
                                                                        .FILTER))))))
                                    .sources(
                                        Collections.singletonList(
                                            new ObservabilityPipelineConfigSourceItem(
                                                new ObservabilityPipelineDatadogAgentSource()
                                                    .id("datadog-agent-source")
                                                    .type(
                                                        ObservabilityPipelineDatadogAgentSourceType
                                                            .DATADOG_AGENT)))))
                            .name("Updated Pipeline Name"))
                    .id(PIPELINE_DATA_ID)
                    .type("pipelines"));

    try {
      ObservabilityPipeline result = apiInstance.updatePipeline(PIPELINE_DATA_ID, body);
      System.out.println(result);
    } catch (ApiException e) {
      System.err.println("Exception when calling ObservabilityPipelinesApi#updatePipeline");
      System.err.println("Status code: " + e.getCode());
      System.err.println("Reason: " + e.getResponseBody());
      System.err.println("Response headers: " + e.getResponseHeaders());
      e.printStackTrace();
    }
  }
}

Instructions

First install the library and its dependencies, then save the example to Example.java and run the following command:

DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" java "Example.java"
"""
Update a pipeline returns "OK" response
"""

from os import environ
from datadog_api_client import ApiClient, Configuration
from datadog_api_client.v2.api.observability_pipelines_api import ObservabilityPipelinesApi
from datadog_api_client.v2.model.observability_pipeline import ObservabilityPipeline
from datadog_api_client.v2.model.observability_pipeline_config import ObservabilityPipelineConfig
from datadog_api_client.v2.model.observability_pipeline_config_processor_group import (
    ObservabilityPipelineConfigProcessorGroup,
)
from datadog_api_client.v2.model.observability_pipeline_data import ObservabilityPipelineData
from datadog_api_client.v2.model.observability_pipeline_data_attributes import ObservabilityPipelineDataAttributes
from datadog_api_client.v2.model.observability_pipeline_datadog_agent_source import (
    ObservabilityPipelineDatadogAgentSource,
)
from datadog_api_client.v2.model.observability_pipeline_datadog_agent_source_type import (
    ObservabilityPipelineDatadogAgentSourceType,
)
from datadog_api_client.v2.model.observability_pipeline_datadog_logs_destination import (
    ObservabilityPipelineDatadogLogsDestination,
)
from datadog_api_client.v2.model.observability_pipeline_datadog_logs_destination_type import (
    ObservabilityPipelineDatadogLogsDestinationType,
)
from datadog_api_client.v2.model.observability_pipeline_filter_processor import ObservabilityPipelineFilterProcessor
from datadog_api_client.v2.model.observability_pipeline_filter_processor_type import (
    ObservabilityPipelineFilterProcessorType,
)

# there is a valid "pipeline" in the system
PIPELINE_DATA_ID = environ["PIPELINE_DATA_ID"]

body = ObservabilityPipeline(
    data=ObservabilityPipelineData(
        attributes=ObservabilityPipelineDataAttributes(
            config=ObservabilityPipelineConfig(
                destinations=[
                    ObservabilityPipelineDatadogLogsDestination(
                        id="updated-datadog-logs-destination-id",
                        inputs=[
                            "my-processor-group",
                        ],
                        type=ObservabilityPipelineDatadogLogsDestinationType.DATADOG_LOGS,
                    ),
                ],
                processors=[
                    ObservabilityPipelineConfigProcessorGroup(
                        enabled=True,
                        id="my-processor-group",
                        include="service:my-service",
                        inputs=[
                            "datadog-agent-source",
                        ],
                        processors=[
                            ObservabilityPipelineFilterProcessor(
                                enabled=True,
                                id="filter-processor",
                                include="status:error",
                                type=ObservabilityPipelineFilterProcessorType.FILTER,
                            ),
                        ],
                    ),
                ],
                sources=[
                    ObservabilityPipelineDatadogAgentSource(
                        id="datadog-agent-source",
                        type=ObservabilityPipelineDatadogAgentSourceType.DATADOG_AGENT,
                    ),
                ],
            ),
            name="Updated Pipeline Name",
        ),
        id=PIPELINE_DATA_ID,
        type="pipelines",
    ),
)

configuration = Configuration()
configuration.unstable_operations["update_pipeline"] = True
with ApiClient(configuration) as api_client:
    api_instance = ObservabilityPipelinesApi(api_client)
    response = api_instance.update_pipeline(pipeline_id=PIPELINE_DATA_ID, body=body)

    print(response)

Instructions

First install the library and its dependencies, then save the example to example.py and run the following command:

DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" python3 "example.py"
# Update a pipeline returns "OK" response

require "datadog_api_client"
DatadogAPIClient.configure do |config|
  config.unstable_operations["v2.update_pipeline".to_sym] = true
end
api_instance = DatadogAPIClient::V2::ObservabilityPipelinesAPI.new

# there is a valid "pipeline" in the system
PIPELINE_DATA_ID = ENV["PIPELINE_DATA_ID"]

body = DatadogAPIClient::V2::ObservabilityPipeline.new({
  data: DatadogAPIClient::V2::ObservabilityPipelineData.new({
    attributes: DatadogAPIClient::V2::ObservabilityPipelineDataAttributes.new({
      config: DatadogAPIClient::V2::ObservabilityPipelineConfig.new({
        destinations: [
          DatadogAPIClient::V2::ObservabilityPipelineDatadogLogsDestination.new({
            id: "updated-datadog-logs-destination-id",
            inputs: [
              "my-processor-group",
            ],
            type: DatadogAPIClient::V2::ObservabilityPipelineDatadogLogsDestinationType::DATADOG_LOGS,
          }),
        ],
        processors: [
          DatadogAPIClient::V2::ObservabilityPipelineConfigProcessorGroup.new({
            enabled: true,
            id: "my-processor-group",
            include: "service:my-service",
            inputs: [
              "datadog-agent-source",
            ],
            processors: [
              DatadogAPIClient::V2::ObservabilityPipelineFilterProcessor.new({
                enabled: true,
                id: "filter-processor",
                include: "status:error",
                type: DatadogAPIClient::V2::ObservabilityPipelineFilterProcessorType::FILTER,
              }),
            ],
          }),
        ],
        sources: [
          DatadogAPIClient::V2::ObservabilityPipelineDatadogAgentSource.new({
            id: "datadog-agent-source",
            type: DatadogAPIClient::V2::ObservabilityPipelineDatadogAgentSourceType::DATADOG_AGENT,
          }),
        ],
      }),
      name: "Updated Pipeline Name",
    }),
    id: PIPELINE_DATA_ID,
    type: "pipelines",
  }),
})
p api_instance.update_pipeline(PIPELINE_DATA_ID, body)

Instructions

First install the library and its dependencies, then save the example to example.rb and run the following command:

DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" ruby "example.rb"
// Update a pipeline returns "OK" response
use datadog_api_client::datadog;
use datadog_api_client::datadogV2::api_observability_pipelines::ObservabilityPipelinesAPI;
use datadog_api_client::datadogV2::model::ObservabilityPipeline;
use datadog_api_client::datadogV2::model::ObservabilityPipelineConfig;
use datadog_api_client::datadogV2::model::ObservabilityPipelineConfigDestinationItem;
use datadog_api_client::datadogV2::model::ObservabilityPipelineConfigProcessorGroup;
use datadog_api_client::datadogV2::model::ObservabilityPipelineConfigProcessorItem;
use datadog_api_client::datadogV2::model::ObservabilityPipelineConfigSourceItem;
use datadog_api_client::datadogV2::model::ObservabilityPipelineData;
use datadog_api_client::datadogV2::model::ObservabilityPipelineDataAttributes;
use datadog_api_client::datadogV2::model::ObservabilityPipelineDatadogAgentSource;
use datadog_api_client::datadogV2::model::ObservabilityPipelineDatadogAgentSourceType;
use datadog_api_client::datadogV2::model::ObservabilityPipelineDatadogLogsDestination;
use datadog_api_client::datadogV2::model::ObservabilityPipelineDatadogLogsDestinationType;
use datadog_api_client::datadogV2::model::ObservabilityPipelineFilterProcessor;
use datadog_api_client::datadogV2::model::ObservabilityPipelineFilterProcessorType;

#[tokio::main]
async fn main() {
    // there is a valid "pipeline" in the system
    let pipeline_data_id = std::env::var("PIPELINE_DATA_ID").unwrap();
    let body =
        ObservabilityPipeline::new(
            ObservabilityPipelineData::new(
                ObservabilityPipelineDataAttributes::new(
                    ObservabilityPipelineConfig::new(
                        vec![
                            ObservabilityPipelineConfigDestinationItem::ObservabilityPipelineDatadogLogsDestination(
                                Box::new(
                                    ObservabilityPipelineDatadogLogsDestination::new(
                                        "updated-datadog-logs-destination-id".to_string(),
                                        vec!["my-processor-group".to_string()],
                                        ObservabilityPipelineDatadogLogsDestinationType::DATADOG_LOGS,
                                    ),
                                ),
                            )
                        ],
                        vec![
                            ObservabilityPipelineConfigSourceItem::ObservabilityPipelineDatadogAgentSource(
                                Box::new(
                                    ObservabilityPipelineDatadogAgentSource::new(
                                        "datadog-agent-source".to_string(),
                                        ObservabilityPipelineDatadogAgentSourceType::DATADOG_AGENT,
                                    ),
                                ),
                            )
                        ],
                    ).processors(
                        vec![
                            ObservabilityPipelineConfigProcessorGroup::new(
                                true,
                                "my-processor-group".to_string(),
                                "service:my-service".to_string(),
                                vec!["datadog-agent-source".to_string()],
                                vec![
                                    ObservabilityPipelineConfigProcessorItem::ObservabilityPipelineFilterProcessor(
                                        Box::new(
                                            ObservabilityPipelineFilterProcessor::new(
                                                true,
                                                "filter-processor".to_string(),
                                                "status:error".to_string(),
                                                ObservabilityPipelineFilterProcessorType::FILTER,
                                            ),
                                        ),
                                    )
                                ],
                            )
                        ],
                    ),
                    "Updated Pipeline Name".to_string(),
                ),
                pipeline_data_id.clone(),
                "pipelines".to_string(),
            ),
        );
    let mut configuration = datadog::Configuration::new();
    configuration.set_unstable_operation_enabled("v2.UpdatePipeline", true);
    let api = ObservabilityPipelinesAPI::with_config(configuration);
    let resp = api.update_pipeline(pipeline_data_id.clone(), body).await;
    if let Ok(value) = resp {
        println!("{:#?}", value);
    } else {
        println!("{:#?}", resp.unwrap_err());
    }
}

Instructions

First install the library and its dependencies, then save the example to src/main.rs and run the following command:

DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" cargo run
/**
 * Update a pipeline returns "OK" response
 */

import { client, v2 } from "@datadog/datadog-api-client";

const configuration = client.createConfiguration();
configuration.unstableOperations["v2.updatePipeline"] = true;
const apiInstance = new v2.ObservabilityPipelinesApi(configuration);

// there is a valid "pipeline" in the system
const PIPELINE_DATA_ID = process.env.PIPELINE_DATA_ID as string;

const params: v2.ObservabilityPipelinesApiUpdatePipelineRequest = {
  body: {
    data: {
      attributes: {
        config: {
          destinations: [
            {
              id: "updated-datadog-logs-destination-id",
              inputs: ["my-processor-group"],
              type: "datadog_logs",
            },
          ],
          processors: [
            {
              enabled: true,
              id: "my-processor-group",
              include: "service:my-service",
              inputs: ["datadog-agent-source"],
              processors: [
                {
                  enabled: true,
                  id: "filter-processor",
                  include: "status:error",
                  type: "filter",
                },
              ],
            },
          ],
          sources: [
            {
              id: "datadog-agent-source",
              type: "datadog_agent",
            },
          ],
        },
        name: "Updated Pipeline Name",
      },
      id: PIPELINE_DATA_ID,
      type: "pipelines",
    },
  },
  pipelineId: PIPELINE_DATA_ID,
};

apiInstance
  .updatePipeline(params)
  .then((data: v2.ObservabilityPipeline) => {
    console.log(
      "API called successfully. Returned data: " + JSON.stringify(data)
    );
  })
  .catch((error: any) => console.error(error));

Instructions

First install the library and its dependencies, then save the example to example.ts and run the following command:

DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" tsc "example.ts"

Note: This endpoint is in Preview. Fill out this form to request access.

DELETE https://api.ap1.datadoghq.com/api/v2/obs-pipelines/pipelines/{pipeline_id}
https://api.ap2.datadoghq.com/api/v2/obs-pipelines/pipelines/{pipeline_id}
https://api.datadoghq.eu/api/v2/obs-pipelines/pipelines/{pipeline_id}
https://api.ddog-gov.com/api/v2/obs-pipelines/pipelines/{pipeline_id}
https://api.datadoghq.com/api/v2/obs-pipelines/pipelines/{pipeline_id}
https://api.us3.datadoghq.com/api/v2/obs-pipelines/pipelines/{pipeline_id}
https://api.us5.datadoghq.com/api/v2/obs-pipelines/pipelines/{pipeline_id}

Overview

Delete a pipeline. This endpoint requires the observability_pipelines_delete permission.

Arguments

Path parameters

Name

Type

Description

pipeline_id [required]

string

The ID of the pipeline to delete.

Response

OK

Forbidden

API error response.

Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Not Found

API error response.

Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Conflict

API error response.

Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Too many requests

API error response.

Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Code example

# Path parameters
export pipeline_id="CHANGE_ME"
# Curl command
curl -X DELETE "https://api.datadoghq.com/api/v2/obs-pipelines/pipelines/${pipeline_id}" \
  -H "DD-API-KEY: ${DD_API_KEY}" \
  -H "DD-APPLICATION-KEY: ${DD_APP_KEY}"
"""
Delete a pipeline returns "OK" response
"""

from os import environ
from datadog_api_client import ApiClient, Configuration
from datadog_api_client.v2.api.observability_pipelines_api import ObservabilityPipelinesApi

# there is a valid "pipeline" in the system
PIPELINE_DATA_ID = environ["PIPELINE_DATA_ID"]

configuration = Configuration()
configuration.unstable_operations["delete_pipeline"] = True
with ApiClient(configuration) as api_client:
    api_instance = ObservabilityPipelinesApi(api_client)
    api_instance.delete_pipeline(
        pipeline_id=PIPELINE_DATA_ID,
    )

Instructions

First install the library and its dependencies, then save the example to example.py and run the following command:

DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" python3 "example.py"
# Delete a pipeline returns "OK" response

require "datadog_api_client"
DatadogAPIClient.configure do |config|
  config.unstable_operations["v2.delete_pipeline".to_sym] = true
end
api_instance = DatadogAPIClient::V2::ObservabilityPipelinesAPI.new

# there is a valid "pipeline" in the system
PIPELINE_DATA_ID = ENV["PIPELINE_DATA_ID"]
api_instance.delete_pipeline(PIPELINE_DATA_ID)

Instructions

First install the library and its dependencies, then save the example to example.rb and run the following command:

DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" ruby "example.rb"
// Delete a pipeline returns "OK" response

package main

import (
	"context"
	"fmt"
	"os"

	"github.com/DataDog/datadog-api-client-go/v2/api/datadog"
	"github.com/DataDog/datadog-api-client-go/v2/api/datadogV2"
)

func main() {
	// there is a valid "pipeline" in the system
	PipelineDataID := os.Getenv("PIPELINE_DATA_ID")

	ctx := datadog.NewDefaultContext(context.Background())
	configuration := datadog.NewConfiguration()
	configuration.SetUnstableOperationEnabled("v2.DeletePipeline", true)
	apiClient := datadog.NewAPIClient(configuration)
	api := datadogV2.NewObservabilityPipelinesApi(apiClient)
	r, err := api.DeletePipeline(ctx, PipelineDataID)

	if err != nil {
		fmt.Fprintf(os.Stderr, "Error when calling `ObservabilityPipelinesApi.DeletePipeline`: %v\n", err)
		fmt.Fprintf(os.Stderr, "Full HTTP response: %v\n", r)
	}
}

Instructions

First install the library and its dependencies, then save the example to main.go and run the following command:

DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" go run "main.go"
// Delete a pipeline returns "OK" response

import com.datadog.api.client.ApiClient;
import com.datadog.api.client.ApiException;
import com.datadog.api.client.v2.api.ObservabilityPipelinesApi;

public class Example {
  public static void main(String[] args) {
    ApiClient defaultClient = ApiClient.getDefaultApiClient();
    defaultClient.setUnstableOperationEnabled("v2.deletePipeline", true);
    ObservabilityPipelinesApi apiInstance = new ObservabilityPipelinesApi(defaultClient);

    // there is a valid "pipeline" in the system
    String PIPELINE_DATA_ID = System.getenv("PIPELINE_DATA_ID");

    try {
      apiInstance.deletePipeline(PIPELINE_DATA_ID);
    } catch (ApiException e) {
      System.err.println("Exception when calling ObservabilityPipelinesApi#deletePipeline");
      System.err.println("Status code: " + e.getCode());
      System.err.println("Reason: " + e.getResponseBody());
      System.err.println("Response headers: " + e.getResponseHeaders());
      e.printStackTrace();
    }
  }
}

Instructions

First install the library and its dependencies, then save the example to Example.java and run the following command:

DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" java "Example.java"
// Delete a pipeline returns "OK" response
use datadog_api_client::datadog;
use datadog_api_client::datadogV2::api_observability_pipelines::ObservabilityPipelinesAPI;

#[tokio::main]
async fn main() {
    // there is a valid "pipeline" in the system
    let pipeline_data_id = std::env::var("PIPELINE_DATA_ID").unwrap();
    let mut configuration = datadog::Configuration::new();
    configuration.set_unstable_operation_enabled("v2.DeletePipeline", true);
    let api = ObservabilityPipelinesAPI::with_config(configuration);
    let resp = api.delete_pipeline(pipeline_data_id.clone()).await;
    if let Ok(value) = resp {
        println!("{:#?}", value);
    } else {
        println!("{:#?}", resp.unwrap_err());
    }
}

Instructions

First install the library and its dependencies, then save the example to src/main.rs and run the following command:

DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" cargo run
/**
 * Delete a pipeline returns "OK" response
 */

import { client, v2 } from "@datadog/datadog-api-client";

const configuration = client.createConfiguration();
configuration.unstableOperations["v2.deletePipeline"] = true;
const apiInstance = new v2.ObservabilityPipelinesApi(configuration);

// there is a valid "pipeline" in the system
const PIPELINE_DATA_ID = process.env.PIPELINE_DATA_ID as string;

const params: v2.ObservabilityPipelinesApiDeletePipelineRequest = {
  pipelineId: PIPELINE_DATA_ID,
};

apiInstance
  .deletePipeline(params)
  .then((data: any) => {
    console.log(
      "API called successfully. Returned data: " + JSON.stringify(data)
    );
  })
  .catch((error: any) => console.error(error));

Instructions

First install the library and its dependencies, then save the example to example.ts and run the following command:

DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" tsc "example.ts"

Note: This endpoint is in Preview. Fill out this form to request access.

POST https://api.ap1.datadoghq.com/api/v2/obs-pipelines/pipelines/validate
https://api.ap2.datadoghq.com/api/v2/obs-pipelines/pipelines/validate
https://api.datadoghq.eu/api/v2/obs-pipelines/pipelines/validate
https://api.ddog-gov.com/api/v2/obs-pipelines/pipelines/validate
https://api.datadoghq.com/api/v2/obs-pipelines/pipelines/validate
https://api.us3.datadoghq.com/api/v2/obs-pipelines/pipelines/validate
https://api.us5.datadoghq.com/api/v2/obs-pipelines/pipelines/validate

Overview

Validates a pipeline configuration without creating or updating any resources. Returns a list of validation errors, if any. This endpoint requires the observability_pipelines_read permission.
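
As a sketch of calling this endpoint directly, the snippet below posts a pipeline configuration with Python's requests library. The payload mirrors the request schema described below; the site, component IDs, and pipeline name are placeholders, and the API/application keys are assumed to be set in the environment.

# Minimal sketch: validate a pipeline configuration without creating it.
import os
import requests

payload = {
    "data": {
        "type": "pipelines",
        "attributes": {
            "name": "Candidate Pipeline",
            "config": {
                "sources": [{"id": "datadog-agent-source", "type": "datadog_agent"}],
                "destinations": [
                    {
                        "id": "datadog-logs-destination",
                        "inputs": ["datadog-agent-source"],
                        "type": "datadog_logs",
                    }
                ],
            },
        },
    }
}

resp = requests.post(
    "https://api.datadoghq.com/api/v2/obs-pipelines/pipelines/validate",
    headers={
        "DD-API-KEY": os.environ["DD_API_KEY"],
        "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
        "Content-Type": "application/json",
    },
    json=payload,
)
# Validation errors, if any, are returned in the response body.
print(resp.status_code, resp.text)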

Request

Body Data (required)

Field

Type

Description

data [required]

object

Contains the pipeline configuration.

attributes [required]

object

Defines the pipeline’s name and its components (sources, processors, and destinations).

config [required]

object

Specifies the pipeline's configuration, including its sources, processors, and destinations.

destinations [required]

[ <oneOf>]

A list of destination components where processed logs are sent.

Option 1

object

The http_client destination sends data to an HTTP endpoint.

Supported pipeline types: logs, metrics

auth_strategy

enum

HTTP authentication strategy. Allowed enum values: none,basic,bearer

compression

object

Compression configuration for HTTP requests.

algorithm [required]

enum

Compression algorithm. Allowed enum values: gzip

encoding [required]

enum

Encoding format for log events. Allowed enum values: json

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. The value should always be http_client. Allowed enum values: http_client

default: http_client

Option 2

object

The amazon_opensearch destination writes logs to Amazon OpenSearch.

Supported pipeline types: logs

auth [required]

object

Authentication settings for the Amazon OpenSearch destination. The strategy field determines whether basic or AWS-based authentication is used.

assume_role

string

The ARN of the role to assume (used with aws strategy).

aws_region

string

AWS region.

external_id

string

External ID for the assumed role (used with aws strategy).

session_name

string

Session name for the assumed role (used with aws strategy).

strategy [required]

enum

The authentication strategy to use. Allowed enum values: basic,aws

bulk_index

string

The index to write logs to.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be amazon_opensearch. Allowed enum values: amazon_opensearch

default: amazon_opensearch

Option 3

object

The amazon_s3 destination sends your logs in Datadog-rehydratable format to an Amazon S3 bucket for archiving.

Supported pipeline types: logs

auth

object

AWS authentication credentials used for accessing AWS services such as S3. If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).

assume_role

string

The Amazon Resource Name (ARN) of the role to assume.

external_id

string

A unique identifier for cross-account role assumption.

session_name

string

A session identifier used for logging and tracing the assumed role session.

bucket [required]

string

S3 bucket name.

id [required]

string

Unique identifier for the destination component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

key_prefix

string

Optional prefix for object keys.

region [required]

string

AWS region of the S3 bucket.

storage_class [required]

enum

S3 storage class. Allowed enum values: STANDARD,REDUCED_REDUNDANCY,INTELLIGENT_TIERING,STANDARD_IA,EXPRESS_ONEZONE,ONEZONE_IA,GLACIER,GLACIER_IR,DEEP_ARCHIVE

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. Always amazon_s3. Allowed enum values: amazon_s3

default: amazon_s3
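
For instance, an amazon_s3 destination that archives logs under a key prefix could be sketched as follows; the bucket name, region, prefix, and role ARN are placeholder values.

# Hypothetical amazon_s3 destination entry for the "destinations" array.
s3_destination = {
    "id": "s3-archive-destination",
    "type": "amazon_s3",
    "inputs": ["filter-processor"],        # IDs of upstream components
    "bucket": "my-log-archive-bucket",     # placeholder S3 bucket name
    "region": "us-east-1",                 # AWS region of the bucket
    "key_prefix": "observability/",        # optional object key prefix
    "storage_class": "STANDARD",           # see the allowed storage classes above
    "auth": {"assume_role": "arn:aws:iam::123456789012:role/op-archiver"},  # placeholder ARN
}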

Option 4

object

The amazon_security_lake destination sends your logs to Amazon Security Lake.

Supported pipeline types: logs

auth

object

AWS authentication credentials used for accessing AWS services such as S3. If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).

assume_role

string

The Amazon Resource Name (ARN) of the role to assume.

external_id

string

A unique identifier for cross-account role assumption.

session_name

string

A session identifier used for logging and tracing the assumed role session.

bucket [required]

string

Name of the Amazon S3 bucket in Security Lake (3-63 characters).

custom_source_name [required]

string

Custom source name for the logs in Security Lake.

id [required]

string

Unique identifier for the destination component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

region [required]

string

AWS region of the S3 bucket.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. Always amazon_security_lake. Allowed enum values: amazon_security_lake

default: amazon_security_lake

Option 5

object

The azure_storage destination forwards logs to an Azure Blob Storage container.

Supported pipeline types: logs

blob_prefix

string

Optional prefix for blobs written to the container.

container_name [required]

string

The name of the Azure Blob Storage container to store logs in.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be azure_storage. Allowed enum values: azure_storage

default: azure_storage

Option 6

object

The cloud_prem destination sends logs to Datadog CloudPrem.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be cloud_prem. Allowed enum values: cloud_prem

default: cloud_prem

Option 7

object

The crowdstrike_next_gen_siem destination forwards logs to CrowdStrike Next Gen SIEM.

Supported pipeline types: logs

compression

object

Compression configuration for log events.

algorithm [required]

enum

Compression algorithm for log events. Allowed enum values: gzip,zlib

level

int64

Compression level.

encoding [required]

enum

Encoding format for log events. Allowed enum values: json,raw_message

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. The value should always be crowdstrike_next_gen_siem. Allowed enum values: crowdstrike_next_gen_siem

default: crowdstrike_next_gen_siem

Option 8

object

The datadog_logs destination forwards logs to Datadog Log Management.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

routes

[object]

A list of routing rules that forward matching logs to Datadog using dedicated API keys.

api_key_key

string

Name of the environment variable or secret that stores the Datadog API key used by this route.

include

string

A Datadog search query that determines which logs are forwarded using this route.

route_id

string

Unique identifier for this route within the destination.

site

string

Datadog site where matching logs are sent (for example, us1).

type [required]

enum

The destination type. The value should always be datadog_logs. Allowed enum values: datadog_logs

default: datadog_logs
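
For illustration only, a datadog_logs destination with a single routing rule might be sketched as below; the route ID, query, environment variable name, and site are placeholder assumptions, not values from this document:

```python
# Hypothetical datadog_logs destination with one routing rule (placeholder values).
datadog_logs_destination = {
    "id": "datadog-logs-dest",
    "type": "datadog_logs",
    "inputs": ["quota-processor"],
    "routes": [
        {
            "route_id": "prod-route",           # unique route ID within the destination
            "include": "env:prod",              # Datadog search query for this route
            "api_key_key": "DD_API_KEY_PROD",   # env var or secret holding the API key
            "site": "us1",                      # Datadog site for matching logs
        }
    ],
}
```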

Option 9

object

The elasticsearch destination writes logs to an Elasticsearch cluster.

Supported pipeline types: logs

api_version

enum

The Elasticsearch API version to use. Set to auto to auto-detect. Allowed enum values: auto,v6,v7,v8

bulk_index

string

The index to write logs to in Elasticsearch.

data_stream

object

Configuration options for writing to Elasticsearch Data Streams instead of a fixed index.

dataset

string

The data stream dataset for your logs. This groups logs by their source or application.

dtype

string

The data stream type for your logs. This determines how logs are categorized within the data stream.

namespace

string

The data stream namespace for your logs. This separates logs into different environments or domains.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be elasticsearch. Allowed enum values: elasticsearch

default: elasticsearch
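
A minimal sketch of an elasticsearch destination that writes to a data stream rather than a fixed index could look like the following; the dataset, namespace, and component IDs are hypothetical:

```python
# Hypothetical elasticsearch destination writing to a data stream (placeholder values).
elasticsearch_destination = {
    "id": "elasticsearch-dest",
    "type": "elasticsearch",
    "inputs": ["parse-json-processor"],
    "api_version": "v8",                 # or "auto" to auto-detect
    "data_stream": {
        "dtype": "logs",                 # data stream type
        "dataset": "nginx",              # data stream dataset
        "namespace": "production",       # data stream namespace
    },
}
```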

Option 10

object

The google_chronicle destination sends logs to Google Chronicle.

Supported pipeline types: logs

auth

object

GCP credentials used to authenticate with Google Chronicle.

credentials_file [required]

string

Path to the GCP service account key file.

customer_id [required]

string

The Google Chronicle customer ID.

encoding

enum

The encoding format for the logs sent to Chronicle. Allowed enum values: json,raw_message

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

log_type

string

The log type metadata associated with the Chronicle destination.

type [required]

enum

The destination type. The value should always be google_chronicle. Allowed enum values: google_chronicle

default: google_chronicle

Option 11

object

The google_cloud_storage destination stores logs in a Google Cloud Storage (GCS) bucket. It requires a bucket name, GCP authentication, and metadata fields.

Supported pipeline types: logs

acl

enum

Access control list setting for objects written to the bucket. Allowed enum values: private,project-private,public-read,authenticated-read,bucket-owner-read,bucket-owner-full-control

auth

object

GCP credentials used to authenticate with Google Cloud Storage.

credentials_file [required]

string

Path to the GCP service account key file.

bucket [required]

string

Name of the GCS bucket.

id [required]

string

Unique identifier for the destination component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

key_prefix

string

Optional prefix for object keys within the GCS bucket.

metadata

[object]

Custom metadata to attach to each object uploaded to the GCS bucket.

name [required]

string

The metadata key.

value [required]

string

The metadata value.

storage_class [required]

enum

Storage class used for objects stored in GCS. Allowed enum values: STANDARD,NEARLINE,COLDLINE,ARCHIVE

type [required]

enum

The destination type. Always google_cloud_storage. Allowed enum values: google_cloud_storage

default: google_cloud_storage
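
To show how the bucket, storage class, authentication, and metadata fields fit together, here is a rough sketch of a google_cloud_storage destination; the bucket name, credentials path, and metadata pair are placeholders:

```python
# Hypothetical google_cloud_storage destination (placeholder bucket and paths).
gcs_destination = {
    "id": "gcs-archive-dest",
    "type": "google_cloud_storage",
    "inputs": ["filter-processor"],
    "bucket": "example-log-archive",
    "storage_class": "NEARLINE",
    "acl": "project-private",
    "key_prefix": "observability-pipelines/",
    "auth": {"credentials_file": "/var/secrets/gcp/service-account.json"},
    "metadata": [{"name": "team", "value": "platform"}],
}
```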

Option 12

object

The google_pubsub destination publishes logs to a Google Cloud Pub/Sub topic.

Supported pipeline types: logs

auth

object

GCP credentials used to authenticate with Google Cloud Pub/Sub.

credentials_file [required]

string

Path to the GCP service account key file.

encoding [required]

enum

Encoding format for log events. Allowed enum values: json,raw_message

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

project [required]

string

The GCP project ID that owns the Pub/Sub topic.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

topic [required]

string

The Pub/Sub topic name to publish logs to.

type [required]

enum

The destination type. The value should always be google_pubsub. Allowed enum values: google_pubsub

default: google_pubsub

Option 13

object

The kafka destination sends logs to Apache Kafka topics.

Supported pipeline types: logs

compression

enum

Compression codec for Kafka messages. Allowed enum values: none,gzip,snappy,lz4,zstd

encoding [required]

enum

Encoding format for log events. Allowed enum values: json,raw_message

headers_key

string

The field name to use for Kafka message headers.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

key_field

string

The field name to use as the Kafka message key.

librdkafka_options

[object]

Optional list of advanced Kafka producer configuration options, defined as key-value pairs.

name [required]

string

The name of the librdkafka configuration option to set.

value [required]

string

The value assigned to the specified librdkafka configuration option.

message_timeout_ms

int64

Maximum time in milliseconds to wait for message delivery confirmation.

rate_limit_duration_secs

int64

Duration in seconds for the rate limit window.

rate_limit_num

int64

Maximum number of messages allowed per rate limit duration.

sasl

object

Specifies the SASL mechanism for authenticating with a Kafka cluster.

mechanism

enum

SASL mechanism used for Kafka authentication. Allowed enum values: PLAIN,SCRAM-SHA-256,SCRAM-SHA-512

socket_timeout_ms

int64

Socket timeout in milliseconds for network requests.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

topic [required]

string

The Kafka topic name to publish logs to.

type [required]

enum

The destination type. The value should always be kafka. Allowed enum values: kafka

default: kafka
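
As an illustrative (not authoritative) example, a kafka destination combining encoding, compression, SASL, and an advanced librdkafka option could be sketched as follows; the topic, key field, and option values are assumptions:

```python
# Hypothetical kafka destination entry (placeholder topic and options).
kafka_destination = {
    "id": "kafka-dest",
    "type": "kafka",
    "inputs": ["filter-processor"],
    "topic": "observability-logs",
    "encoding": "json",
    "compression": "zstd",
    "key_field": "service",                  # log field used as the message key
    "sasl": {"mechanism": "SCRAM-SHA-256"},
    "librdkafka_options": [
        {"name": "client.id", "value": "obs-pipelines"}  # advanced producer option
    ],
}
```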

Option 14

object

The microsoft_sentinel destination forwards logs to Microsoft Sentinel.

Supported pipeline types: logs

client_id [required]

string

Azure AD client ID used for authentication.

dcr_immutable_id [required]

string

The immutable ID of the Data Collection Rule (DCR).

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

table [required]

string

The name of the Log Analytics table where logs are sent.

tenant_id [required]

string

Azure AD tenant ID.

type [required]

enum

The destination type. The value should always be microsoft_sentinel. Allowed enum values: microsoft_sentinel

default: microsoft_sentinel

Option 15

object

The new_relic destination sends logs to the New Relic platform.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

region [required]

enum

The New Relic region. Allowed enum values: us,eu

type [required]

enum

The destination type. The value should always be new_relic. Allowed enum values: new_relic

default: new_relic

Option 16

object

The opensearch destination writes logs to an OpenSearch cluster.

Supported pipeline types: logs

bulk_index

string

The index to write logs to.

data_stream

object

Configuration options for writing to OpenSearch Data Streams instead of a fixed index.

dataset

string

The data stream dataset for your logs. This groups logs by their source or application.

dtype

string

The data stream type for your logs. This determines how logs are categorized within the data stream.

namespace

string

The data stream namespace for your logs. This separates logs into different environments or domains.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be opensearch. Allowed enum values: opensearch

default: opensearch

Option 17

object

The rsyslog destination forwards logs to an external rsyslog server over TCP or UDP using the syslog protocol.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

keepalive

int64

Optional socket keepalive duration in milliseconds.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. The value should always be rsyslog. Allowed enum values: rsyslog

default: rsyslog

Option 18

object

The sentinel_one destination sends logs to SentinelOne.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

region [required]

enum

The SentinelOne region to send logs to. Allowed enum values: us,eu,ca,data_set_us

type [required]

enum

The destination type. The value should always be sentinel_one. Allowed enum values: sentinel_one

default: sentinel_one

Option 19

object

The socket destination sends logs over TCP or UDP to a remote server.

Supported pipeline types: logs

encoding [required]

enum

Encoding format for log events. Allowed enum values: json,raw_message

framing [required]

 <oneOf>

Framing method configuration.

Option 1

object

Each log event is delimited by a newline character.

method [required]

enum

The framing method. Allowed enum values: newline_delimited

Option 2

object

Event data is not delimited at all.

method [required]

enum

The framing method. Allowed enum values: bytes

Option 3

object

Each log event is separated using the specified delimiter character.

delimiter [required]

string

A single ASCII character used as a delimiter.

method [required]

enum

The framing method. Allowed enum values: character_delimited

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

mode [required]

enum

Protocol used to send logs. Allowed enum values: tcp,udp

tls

object

TLS configuration. Relevant only when mode is tcp.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. The value should always be socket. Allowed enum values: socket

default: socket
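
A minimal sketch of a socket destination over TCP with newline-delimited framing, assuming hypothetical component IDs, follows:

```python
# Hypothetical socket destination using TCP with newline-delimited framing.
socket_destination = {
    "id": "socket-dest",
    "type": "socket",
    "inputs": ["filter-processor"],
    "mode": "tcp",
    "encoding": "json",
    "framing": {"method": "newline_delimited"},
}
```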

Option 20

object

The splunk_hec destination forwards logs to Splunk using the HTTP Event Collector (HEC).

Supported pipeline types: logs

auto_extract_timestamp

boolean

If true, Splunk tries to extract timestamps from incoming log events. If false, Splunk assigns the time the event was received.

encoding

enum

Encoding format for log events. Allowed enum values: json,raw_message

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

index

string

Optional name of the Splunk index where logs are written.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

sourcetype

string

The Splunk sourcetype to assign to log events.

type [required]

enum

The destination type. Always splunk_hec. Allowed enum values: splunk_hec

default: splunk_hec
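
For illustration, a splunk_hec destination could be sketched like this; the index and sourcetype values are placeholders, not recommendations:

```python
# Hypothetical splunk_hec destination (placeholder index and sourcetype).
splunk_hec_destination = {
    "id": "splunk-hec-dest",
    "type": "splunk_hec",
    "inputs": ["sensitive-data-scanner"],
    "encoding": "json",
    "index": "observability_logs",
    "sourcetype": "datadog_pipeline",
    "auto_extract_timestamp": True,      # let Splunk extract timestamps from events
}
```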

Option 21

object

The sumo_logic destination forwards logs to Sumo Logic.

Supported pipeline types: logs

encoding

enum

The output encoding format. Allowed enum values: json,raw_message,logfmt

header_custom_fields

[object]

A list of custom headers to include in the request to Sumo Logic.

name [required]

string

The header field name.

value [required]

string

The header field value.

header_host_name

string

Optional override for the host name header.

header_source_category

string

Optional override for the source category header.

header_source_name

string

Optional override for the source name header.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be sumo_logic. Allowed enum values: sumo_logic

default: sumo_logic

Option 22

object

The syslog_ng destination forwards logs to an external syslog-ng server over TCP or UDP using the syslog protocol.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

keepalive

int64

Optional socket keepalive duration in milliseconds.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. The value should always be syslog_ng. Allowed enum values: syslog_ng

default: syslog_ng

Option 23

object

The datadog_metrics destination forwards metrics to Datadog.

Supported pipeline types: metrics

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be datadog_metrics. Allowed enum values: datadog_metrics

default: datadog_metrics

pipeline_type

enum

The type of data being ingested. Defaults to logs if not specified. Allowed enum values: logs,metrics

default: logs

processor_groups

[object]

A list of processor groups that transform or enrich log data.

display_name

string

The display name for a component.

enabled [required]

boolean

Whether this processor group is enabled.

id [required]

string

The unique identifier for the processor group.

include [required]

string

Conditional expression for when this processor group should execute.

inputs [required]

[string]

A list of IDs for components whose output is used as the input for this processor group.

processors [required]

[ <oneOf>]

Processors applied sequentially within this group. Events flow through each processor in order.

Option 1

object

The filter processor allows conditional processing of logs/metrics based on a Datadog search query. Logs/metrics that match the include query are passed through; others are discarded.

Supported pipeline types: logs, metrics

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs/metrics should pass through the filter. Logs/metrics that match this query continue to downstream components; others are dropped.

type [required]

enum

The processor type. The value should always be filter. Allowed enum values: filter

default: filter
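
As a rough sketch, a filter processor inside a processor group could look like the following; the search query and display name are hypothetical:

```python
# Hypothetical filter processor inside a processor group (placeholder query).
filter_processor = {
    "id": "filter-processor",
    "type": "filter",
    "enabled": True,
    "display_name": "Keep checkout errors",
    "include": "service:checkout AND status:error",  # only matching events pass through
}
```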

Option 2

object

The add_env_vars processor adds environment variable values to log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used to reference this processor in the pipeline.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be add_env_vars. Allowed enum values: add_env_vars

default: add_env_vars

variables [required]

[object]

A list of environment variable mappings to apply to log fields.

field [required]

string

The target field in the log event.

name [required]

string

The name of the environment variable to read.

Option 3

object

The add_fields processor adds static key-value fields to logs.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[object]

A list of static fields (key-value pairs) that are added to each log event processed by this component.

name [required]

string

The field name.

value [required]

string

The field value.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be add_fields. Allowed enum values: add_fields

default: add_fields

Option 4

object

The add_hostname processor adds the hostname to log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be add_hostname. Allowed enum values: add_hostname

default: add_hostname

Option 5

object

The custom_processor processor transforms events using Vector Remap Language (VRL) scripts with advanced filtering capabilities.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets. This field should always be set to * for the custom_processor processor.

default: *

remaps [required]

[object]

Array of VRL remap rules.

drop_on_error [required]

boolean

Whether to drop events that caused errors during processing.

enabled

boolean

Whether this remap rule is enabled.

include [required]

string

A Datadog search query used to filter events for this specific remap rule.

name [required]

string

A descriptive name for this remap rule.

source [required]

string

The VRL script source code that defines the processing logic.

type [required]

enum

The processor type. The value should always be custom_processor. Allowed enum values: custom_processor

default: custom_processor
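
A non-authoritative sketch of a custom_processor with a single VRL remap rule follows; the rule name, filter query, and VRL script are illustrative assumptions:

```python
# Hypothetical custom_processor with one VRL remap rule (placeholder values).
custom_processor = {
    "id": "custom-processor",
    "type": "custom_processor",
    "enabled": True,
    "include": "*",                         # always "*" for custom_processor
    "remaps": [
        {
            "name": "normalize-status",
            "enabled": True,
            "include": "service:checkout",
            "drop_on_error": False,
            "source": ".status = downcase(string!(.status))",  # VRL script
        }
    ],
}
```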

Option 6

object

The datadog_tags processor includes or excludes specific Datadog tags in your logs.

Supported pipeline types: logs

action [required]

enum

The action to take on tags with matching keys. Allowed enum values: include,exclude

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

keys [required]

[string]

A list of tag keys.

mode [required]

enum

The processing mode. Allowed enum values: filter

type [required]

enum

The processor type. The value should always be datadog_tags. Allowed enum values: datadog_tags

default: datadog_tags

Option 7

object

The dedupe processor removes duplicate fields in log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[string]

A list of log field paths to check for duplicates.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

mode [required]

enum

The deduplication mode to apply to the fields. Allowed enum values: match,ignore

type [required]

enum

The processor type. The value should always be dedupe. Allowed enum values: dedupe

default: dedupe

Option 8

object

The enrichment_table processor enriches logs using a static CSV file, GeoIP database, or reference table. Exactly one of file, geoip, or reference_table must be configured.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

file

object

Defines a static enrichment table loaded from a CSV file.

encoding [required]

object

File encoding format.

delimiter [required]

string

The delimiter character used in the CSV file.

includes_headers [required]

boolean

Whether the CSV file includes a header row.

type [required]

enum

Specifies the encoding format (e.g., CSV) used for enrichment tables. Allowed enum values: csv

key [required]

[object]

Key fields used to look up enrichment values.

column [required]

string

The name of the column in the enrichment table to match against.

comparison [required]

enum

Defines how to compare key fields for enrichment table lookups. Allowed enum values: equals

field [required]

string

The path of the log field whose value is compared against the column.

path [required]

string

Path to the CSV file.

schema [required]

[object]

Schema defining column names and their types.

column [required]

string

The name of the column as defined in the CSV file.

type [required]

enum

Declares allowed data types for enrichment table columns. Allowed enum values: string,boolean,integer,float,date,timestamp

geoip

object

Uses a GeoIP database to enrich logs based on an IP field.

key_field [required]

string

Path to the IP field in the log.

locale [required]

string

Locale used to resolve geographical names.

path [required]

string

Path to the GeoIP database file.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

reference_table

object

Uses a Datadog reference table to enrich logs.

columns

[string]

List of column names to include from the reference table. If not provided, all columns are included.

key_field [required]

string

Path to the field in the log event to match against the reference table.

table_id [required]

string

The unique identifier of the reference table.

target [required]

string

Path where enrichment results should be stored in the log.

type [required]

enum

The processor type. The value should always be enrichment_table. Allowed enum values: enrichment_table

default: enrichment_table
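
Since exactly one of file, geoip, or reference_table must be set, here is an illustrative sketch that uses the geoip option; the field paths, locale, and database path are hypothetical placeholders:

```python
# Hypothetical enrichment_table processor using a GeoIP database (placeholder paths).
enrichment_table_processor = {
    "id": "geoip-enrichment",
    "type": "enrichment_table",
    "enabled": True,
    "include": "*",
    "target": "network.client.geo",        # where enrichment results are written
    "geoip": {
        "key_field": "network.client.ip",  # log field containing the IP address
        "locale": "en",
        "path": "/etc/geoip/GeoLite2-City.mmdb",
    },
}
```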

Option 9

object

The generate_datadog_metrics processor creates custom metrics from logs and sends them to Datadog. Metrics can be counters, gauges, or distributions and optionally grouped by log fields.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline.

include

string

A Datadog search query used to determine which logs this processor targets.

metrics

[object]

Configuration for generating individual metrics.

group_by

[string]

Optional fields used to group the metric series.

include [required]

string

Datadog filter query to match logs for metric generation.

metric_type [required]

enum

Type of metric to create. Allowed enum values: count,gauge,distribution

name [required]

string

Name of the custom metric to be created.

value [required]

 <oneOf>

Specifies how the value of the generated metric is computed.

Option 1

object

Strategy that increments a generated metric by one for each matching event.

strategy [required]

enum

Increments the metric by 1 for each matching event. Allowed enum values: increment_by_one

Option 2

object

Strategy that increments a generated metric based on the value of a log field.

field [required]

string

Name of the log field containing the numeric value to increment the metric by.

strategy [required]

enum

Uses a numeric field in the log event as the metric increment. Allowed enum values: increment_by_field

type [required]

enum

The processor type. Always generate_datadog_metrics. Allowed enum values: generate_datadog_metrics

default: generate_datadog_metrics
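
As a sketch only, a generate_datadog_metrics processor that emits a count metric for matching logs could be written as follows; the metric name, query, and group-by fields are assumptions:

```python
# Hypothetical generate_datadog_metrics processor creating one count metric.
generate_metrics_processor = {
    "id": "generate-metrics",
    "type": "generate_datadog_metrics",
    "enabled": True,
    "include": "*",
    "metrics": [
        {
            "name": "checkout.error.count",           # custom metric name
            "metric_type": "count",
            "include": "service:checkout AND status:error",
            "group_by": ["env", "version"],
            "value": {"strategy": "increment_by_one"},
        }
    ],
}
```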

Option 10

object

The ocsf_mapper processor transforms logs into the OCSF schema using a predefined mapping configuration.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

mappings [required]

[object]

A list of mapping rules to convert events to the OCSF format.

include [required]

string

A Datadog search query used to select the logs that this mapping should apply to.

mapping [required]

 <oneOf>

Defines a single mapping rule for transforming logs into the OCSF schema.

Option 1

enum

Predefined library mappings for common log formats. Allowed enum values: CloudTrail Account Change,GCP Cloud Audit CreateBucket,GCP Cloud Audit CreateSink,GCP Cloud Audit SetIamPolicy,GCP Cloud Audit UpdateSink,Github Audit Log API Activity,Google Workspace Admin Audit addPrivilege,Microsoft 365 Defender Incident,Microsoft 365 Defender UserLoggedIn,Okta System Log Authentication,Palo Alto Networks Firewall Traffic

type [required]

enum

The processor type. The value should always be ocsf_mapper. Allowed enum values: ocsf_mapper

default: ocsf_mapper

Option 11

object

The parse_grok processor extracts structured fields from unstructured log messages using Grok patterns.

Supported pipeline types: logs

disable_library_rules

boolean

If set to true, disables the default Grok rules provided by Datadog.

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

A unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

rules [required]

[object]

The list of Grok parsing rules. If multiple matching rules are provided, they are evaluated in order. The first successful match is applied.

match_rules [required]

[object]

A list of Grok parsing rules that define how to extract fields from the source field. Each rule must contain a name and a valid Grok pattern.

name [required]

string

The name of the rule.

rule [required]

string

The definition of the Grok rule.

source [required]

string

The name of the field in the log event to apply the Grok rules to.

support_rules

[object]

A list of Grok helper rules that can be referenced by the parsing rules.

name [required]

string

The name of the Grok helper rule.

rule [required]

string

The definition of the Grok helper rule.

type [required]

enum

The processor type. The value should always be parse_grok. Allowed enum values: parse_grok

default: parse_grok
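
The rules structure above nests match rules and optional helper rules under a source field. A hedged sketch, with illustrative Grok patterns that are not taken from this reference, could look like this:

```python
# Hypothetical parse_grok processor; the Grok patterns shown are illustrative only.
parse_grok_processor = {
    "id": "parse-grok",
    "type": "parse_grok",
    "enabled": True,
    "include": "source:nginx",
    "disable_library_rules": False,
    "rules": [
        {
            "source": "message",                      # field the Grok rules apply to
            "match_rules": [
                {
                    "name": "access_log",
                    "rule": "%{ipOrHost:client.ip} %{word:http.method} %{notSpace:http.url_path}",
                }
            ],
            "support_rules": [
                {"name": "ipOrHost", "rule": "(%{ip}|%{hostname})"}
            ],
        }
    ],
}
```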

Option 12

object

The parse_json processor extracts JSON from a specified field and flattens it into the event. This is useful when logs contain embedded JSON as a string.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

field [required]

string

The name of the log field that contains a JSON string.

id [required]

string

A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be parse_json. Allowed enum values: parse_json

default: parse_json

Option 13

object

The parse_xml processor parses XML from a specified field and extracts it into the event.

Supported pipeline types: logs

always_use_text_key

boolean

Whether to always use a text key for element content.

attr_prefix

string

The prefix to use for XML attributes in the parsed output.

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

field [required]

string

The name of the log field that contains an XML string.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

include_attr

boolean

Whether to include XML attributes in the parsed output.

parse_bool

boolean

Whether to parse boolean values from strings.

parse_null

boolean

Whether to parse null values.

parse_number

boolean

Whether to parse numeric values from strings.

text_key

string

The key name to use for text content within XML elements. Must be at least 1 character if specified.

type [required]

enum

The processor type. The value should always be parse_xml. Allowed enum values: parse_xml

default: parse_xml

Option 14

object

The quota processor measures logging traffic for logs that match a specified filter. When the configured daily quota is met, the processor can drop or alert.

Supported pipeline types: logs

display_name

string

The display name for a component.

drop_events

boolean

If set to true, logs that match the quota filter and are sent after the quota is exceeded are dropped. Logs that do not match the filter continue through the pipeline. Note: You can set either drop_events or overflow_action, but not both.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

ignore_when_missing_partitions

boolean

If true, the processor skips quota checks when partition fields are missing from the logs.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

limit [required]

object

The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.

enforce [required]

enum

Unit for quota enforcement in bytes for data size or events for count. Allowed enum values: bytes,events

limit [required]

int64

The limit for quota enforcement.

name [required]

string

Name of the quota.

overflow_action

enum

The action to take when the quota or bucket limit is exceeded. Options:

  • drop: Drop the event.
  • no_action: Let the event pass through.
  • overflow_routing: Route to an overflow destination.

Allowed enum values: drop,no_action,overflow_routing

overrides

[object]

A list of alternate quota rules that apply to specific sets of events, identified by matching field values. Each override can define a custom limit.

fields [required]

[object]

A list of field matchers used to apply a specific override. If an event matches all listed key-value pairs, the corresponding override limit is enforced.

name [required]

string

The field name.

value [required]

string

The field value.

limit [required]

object

The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.

enforce [required]

enum

Unit for quota enforcement in bytes for data size or events for count. Allowed enum values: bytes,events

limit [required]

int64

The limit for quota enforcement.

partition_fields

[string]

A list of fields used to segment log traffic for quota enforcement. Quotas are tracked independently by unique combinations of these field values.

too_many_buckets_action

enum

The action to take when the quota or bucket limit is exceeded. Options:

  • drop: Drop the event.
  • no_action: Let the event pass through.
  • overflow_routing: Route to an overflow destination.

Allowed enum values: drop,no_action,overflow_routing

type [required]

enum

The processor type. The value should always be quota. Allowed enum values: quota

default: quota
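
To show how the limit, partition, and override fields relate, here is a hypothetical quota processor sketch; the quota name, byte limits, and service values are placeholders, and only one of drop_events or overflow_action is set, as required:

```python
# Hypothetical quota processor with a per-service partition and one override.
quota_processor = {
    "id": "daily-quota",
    "type": "quota",
    "enabled": True,
    "include": "*",
    "name": "daily-intake-quota",
    "limit": {"enforce": "bytes", "limit": 10_000_000_000},   # overall byte limit
    "partition_fields": ["service"],            # quota tracked per service value
    "overflow_action": "drop",                  # do not also set drop_events
    "overrides": [
        {
            "fields": [{"name": "service", "value": "checkout"}],
            "limit": {"enforce": "bytes", "limit": 50_000_000_000},
        }
    ],
}
```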

Option 15

object

The reduce processor aggregates and merges logs based on matching keys and merge strategies.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

group_by [required]

[string]

A list of fields used to group log events for merging.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

merge_strategies [required]

[object]

List of merge strategies defining how values from grouped events should be combined.

path [required]

string

The field path in the log event.

strategy [required]

enum

The merge strategy to apply. Allowed enum values: discard,retain,sum,max,min,array,concat,concat_newline,concat_raw,shortest_array,longest_array,flat_unique

type [required]

enum

The processor type. The value should always be reduce. Allowed enum values: reduce

default: reduce
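
A brief, non-authoritative sketch of a reduce processor that merges grouped events follows; the group-by fields and merge paths are assumptions:

```python
# Hypothetical reduce processor merging events per host and service.
reduce_processor = {
    "id": "reduce-processor",
    "type": "reduce",
    "enabled": True,
    "include": "*",
    "group_by": ["host", "service"],
    "merge_strategies": [
        {"path": "message", "strategy": "concat_newline"},  # join messages with newlines
        {"path": "error.count", "strategy": "sum"},          # sum numeric counters
    ],
}
```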

Option 16

object

The remove_fields processor deletes specified fields from logs.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[string]

A list of field names to be removed from each log event.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be remove_fields. Allowed enum values: remove_fields

default: remove_fields

Option 17

object

The rename_fields processor changes field names.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[object]

A list of rename rules specifying which fields to rename in the event, what to rename them to, and whether to preserve the original fields.

destination [required]

string

The field name to assign the renamed value to.

preserve_source [required]

boolean

Indicates whether the original field received from the source should be kept (true) or removed (false) after renaming.

source [required]

string

The original field name in the log event that should be renamed.

id [required]

string

A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be rename_fields. Allowed enum values: rename_fields

default: rename_fields

Option 18

object

The sample processor allows probabilistic sampling of logs at a fixed rate.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

group_by

[string]

Optional list of fields to group events by. Each group is sampled independently.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

percentage [required]

double

The percentage of logs to sample.

type [required]

enum

The processor type. The value should always be sample. Allowed enum values: sample

default: sample

Option 19

object

The sensitive_data_scanner processor detects and optionally redacts sensitive data in log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

rules [required]

[object]

A list of rules for identifying and acting on sensitive data patterns.

keyword_options

object

Configuration for keywords used to reinforce sensitive data pattern detection.

keywords [required]

[string]

A list of keywords to match near the sensitive pattern.

proximity [required]

int64

Maximum number of tokens between a keyword and a sensitive value match.

name [required]

string

A name identifying the rule.

on_match [required]

 <oneOf>

Defines what action to take when sensitive data is matched.

Option 1

object

Configuration for completely redacting matched sensitive data.

action [required]

enum

Action type that completely replaces the matched sensitive data with a fixed replacement string to remove all visibility. Allowed enum values: redact

options [required]

object

Configuration for fully redacting sensitive data.

replace [required]

string

The replacement string that substitutes the matched sensitive data.

Option 2

object

Configuration for hashing matched sensitive values.

action [required]

enum

Action type that replaces the matched sensitive data with a hashed representation, preserving structure while securing content. Allowed enum values: hash

options

object

Options for the hash action.

Option 3

object

Configuration for partially redacting matched sensitive data.

action [required]

enum

Action type that redacts part of the sensitive data while preserving a configurable number of characters, typically used for masking purposes (e.g., show last 4 digits of a credit card). Allowed enum values: partial_redact

options [required]

object

Controls how partial redaction is applied, including character count and direction.

characters [required]

int64

The number of characters to redact from the matched value.

direction [required]

enum

Indicates whether to redact characters from the first or last part of the matched value. Allowed enum values: first,last

pattern [required]

 <oneOf>

Pattern detection configuration for identifying sensitive data using either a custom regex or a library reference.

Option 1

object

Defines a custom regex-based pattern for identifying sensitive data in logs.

options [required]

object

Options for defining a custom regex pattern.

description

string

Human-readable description providing context about a sensitive data scanner rule.

rule [required]

string

A regular expression used to detect sensitive values. Must be a valid regex.

type [required]

enum

Indicates a custom regular expression is used for matching. Allowed enum values: custom

Option 2

object

Specifies a pattern from Datadog’s sensitive data detection library to match known sensitive data types.

options [required]

object

Options for selecting a predefined library pattern and enabling keyword support.

description

string

Human-readable description providing context about a sensitive data scanner rule.

id [required]

string

Identifier for a predefined pattern from the sensitive data scanner pattern library.

use_recommended_keywords

boolean

Whether to augment the pattern with recommended keywords (optional).

type [required]

enum

Indicates that a predefined library pattern is used. Allowed enum values: library

scope [required]

 <oneOf>

Determines which parts of the log the pattern-matching rule should be applied to.

Option 1

object

Includes only specific fields for sensitive data scanning.

options [required]

object

Fields to which the scope rule applies.

fields [required]

[string]

The list of log fields to which the scope rule applies.

target [required]

enum

Applies the rule only to included fields. Allowed enum values: include

Option 2

object

Excludes specific fields from sensitive data scanning.

options [required]

object

Fields to which the scope rule applies.

fields [required]

[string]

The list of log fields to which the scope rule applies.

target [required]

enum

Excludes specific fields from processing. Allowed enum values: exclude

Option 3

object

Applies scanning across all available fields.

target [required]

enum

Applies the rule to all fields. Allowed enum values: all

tags [required]

[string]

Tags assigned to this rule for filtering and classification.

type [required]

enum

The processor type. The value should always be sensitive_data_scanner. Allowed enum values: sensitive_data_scanner

default: sensitive_data_scanner
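
Because each rule combines a pattern, a scope, and an on_match action, a single worked sketch helps; the rule name, regex, keywords, and replacement string below are hypothetical and only the field names and enum values come from the schema above:

```python
# Hypothetical sensitive_data_scanner processor with one custom-regex rule.
sds_processor = {
    "id": "sds-processor",
    "type": "sensitive_data_scanner",
    "enabled": True,
    "include": "*",
    "rules": [
        {
            "name": "redact-card-numbers",
            "tags": ["sensitive_data:credit_card"],
            "keyword_options": {"keywords": ["card", "credit"], "proximity": 10},
            "pattern": {
                "type": "custom",
                "options": {"rule": r"\b\d{13,16}\b"},        # illustrative regex
            },
            "scope": {"target": "include", "options": {"fields": ["message"]}},
            "on_match": {"action": "redact", "options": {"replace": "[REDACTED]"}},
        }
    ],
}
```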

Option 20

object

The split_array processor splits array fields into separate events based on configured rules.

Supported pipeline types: logs

arrays [required]

[object]

A list of array split configurations.

field [required]

string

The path to the array field to split.

include [required]

string

A Datadog search query used to determine which logs this array split operation targets.

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets. For split_array, this should typically be *.

type [required]

enum

The processor type. The value should always be split_array. Allowed enum values: split_array

default: split_array

Option 21

object

The throttle processor limits the number of events that pass through over a given time window.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

group_by

[string]

Optional list of fields used to group events. The threshold is applied to each group independently.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

threshold [required]

int64

The number of events allowed in a given time window. Events sent after the threshold has been reached are dropped.

type [required]

enum

The processor type. The value should always be throttle. Allowed enum values: throttle

default: throttle

window [required]

double

The time window in seconds over which the threshold applies.
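
As a final illustrative sketch, a throttle processor that allows a fixed number of events per group per window could look like this; the threshold, window, and group-by values are placeholders:

```python
# Hypothetical throttle processor: up to 1000 events per service per 60-second window.
throttle_processor = {
    "id": "throttle-processor",
    "type": "throttle",
    "enabled": True,
    "include": "*",
    "threshold": 1000,
    "window": 60.0,
    "group_by": ["service"],
}
```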

Option 22

object

The metric_tags processor filters metrics based on their tags using Datadog tag key patterns.

Supported pipeline types: metrics

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query that determines which metrics the processor targets.

rules [required]

[object]

A list of rules for filtering metric tags.

action [required]

enum

The action to take on tags with matching keys. Allowed enum values: include,exclude

include [required]

string

A Datadog search query used to determine which metrics this rule targets.

keys [required]

[string]

A list of tag keys to include or exclude.

mode [required]

enum

The processing mode for tag filtering. Allowed enum values: filter

type [required]

enum

The processor type. The value should always be metric_tags. Allowed enum values: metric_tags

default: metric_tags

processors

[object]

DEPRECATED: A list of processor groups that transform or enrich log data.

Deprecated: This field is deprecated. Use the processor_groups field instead.

display_name

string

The display name for a component.

enabled [required]

boolean

Whether this processor group is enabled.

id [required]

string

The unique identifier for the processor group.

include [required]

string

Conditional expression for when this processor group should execute.

inputs [required]

[string]

A list of IDs for components whose output is used as the input for this processor group.

processors [required]

[ <oneOf>]

Processors applied sequentially within this group. Events flow through each processor in order.

Option 1

object

The filter processor allows conditional processing of logs/metrics based on a Datadog search query. Logs/metrics that match the include query are passed through; others are discarded.

Supported pipeline types: logs, metrics

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs/metrics should pass through the filter. Logs/metrics that match this query continue to downstream components; others are dropped.

type [required]

enum

The processor type. The value should always be filter. Allowed enum values: filter

default: filter

Option 2

object

The add_env_vars processor adds environment variable values to log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used to reference this processor in the pipeline.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be add_env_vars. Allowed enum values: add_env_vars

default: add_env_vars

variables [required]

[object]

A list of environment variable mappings to apply to log fields.

field [required]

string

The target field in the log event.

name [required]

string

The name of the environment variable to read.

Option 3

object

The add_fields processor adds static key-value fields to logs.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[object]

A list of static fields (key-value pairs) that are added to each log event processed by this component.

name [required]

string

The field name.

value [required]

string

The field value.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be add_fields. Allowed enum values: add_fields

default: add_fields

Option 4

object

The add_hostname processor adds the hostname to log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be add_hostname. Allowed enum values: add_hostname

default: add_hostname

Option 5

object

The custom_processor processor transforms events using Vector Remap Language (VRL) scripts with advanced filtering capabilities.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets. This field should always be set to * for the custom_processor processor.

default: *

remaps [required]

[object]

Array of VRL remap rules.

drop_on_error [required]

boolean

Whether to drop events that caused errors during processing.

enabled

boolean

Whether this remap rule is enabled.

include [required]

string

A Datadog search query used to filter events for this specific remap rule.

name [required]

string

A descriptive name for this remap rule.

source [required]

string

The VRL script source code that defines the processing logic.

type [required]

enum

The processor type. The value should always be custom_processor. Allowed enum values: custom_processor

default: custom_processor

Option 6

object

The datadog_tags processor includes or excludes specific Datadog tags in your logs.

Supported pipeline types: logs

action [required]

enum

The action to take on tags with matching keys. Allowed enum values: include,exclude

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

keys [required]

[string]

A list of tag keys.

mode [required]

enum

The processing mode. Allowed enum values: filter

type [required]

enum

The processor type. The value should always be datadog_tags. Allowed enum values: datadog_tags

default: datadog_tags

Option 7

object

The dedupe processor removes duplicate fields in log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[string]

A list of log field paths to check for duplicates.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

mode [required]

enum

The deduplication mode to apply to the fields. Allowed enum values: match,ignore

type [required]

enum

The processor type. The value should always be dedupe. Allowed enum values: dedupe

default: dedupe

Option 8

object

The enrichment_table processor enriches logs using a static CSV file, GeoIP database, or reference table. Exactly one of file, geoip, or reference_table must be configured.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

file

object

Defines a static enrichment table loaded from a CSV file.

encoding [required]

object

File encoding format.

delimiter [required]

string

The delimiter character used in the CSV file.

includes_headers [required]

boolean

Whether the CSV file includes a header row.

type [required]

enum

Specifies the encoding format (e.g., CSV) used for enrichment tables. Allowed enum values: csv

key [required]

[object]

Key fields used to look up enrichment values.

column [required]

string

The name of the column in the enrichment table to match against.

comparison [required]

enum

Defines how to compare key fields for enrichment table lookups. Allowed enum values: equals

field [required]

string

The path of the log field whose value is compared against the column.

path [required]

string

Path to the CSV file.

schema [required]

[object]

Schema defining column names and their types.

column [required]

string

The name of the column as defined in the CSV file.

type [required]

enum

Declares allowed data types for enrichment table columns. Allowed enum values: string,boolean,integer,float,date,timestamp

geoip

object

Uses a GeoIP database to enrich logs based on an IP field.

key_field [required]

string

Path to the IP field in the log.

locale [required]

string

Locale used to resolve geographical names.

path [required]

string

Path to the GeoIP database file.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

reference_table

object

Uses a Datadog reference table to enrich logs.

columns

[string]

List of column names to include from the reference table. If not provided, all columns are included.

key_field [required]

string

Path to the field in the log event to match against the reference table.

table_id [required]

string

The unique identifier of the reference table.

target [required]

string

Path where enrichment results should be stored in the log.

type [required]

enum

The processor type. The value should always be enrichment_table. Allowed enum values: enrichment_table

default: enrichment_table
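
As a sketch, an enrichment_table processor configured with a CSV file (one of the three mutually exclusive options) might look like the following; the paths, column names, and field names are illustrative:

{
  "id": "enrichment-processor",
  "type": "enrichment_table",
  "enabled": true,
  "include": "service:my-service",
  "target": "host_metadata",
  "file": {
    "path": "/etc/pipeline/hosts.csv",
    "encoding": {
      "type": "csv",
      "delimiter": ",",
      "includes_headers": true
    },
    "key": [
      {
        "column": "hostname",
        "comparison": "equals",
        "field": "host"
      }
    ],
    "schema": [
      {
        "column": "hostname",
        "type": "string"
      },
      {
        "column": "owner",
        "type": "string"
      }
    ]
  }
}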

Option 9

object

The generate_datadog_metrics processor creates custom metrics from logs and sends them to Datadog. Metrics can be counters, gauges, or distributions and optionally grouped by log fields.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline.

include

string

A Datadog search query used to determine which logs this processor targets.

metrics

[object]

Configuration for generating individual metrics.

group_by

[string]

Optional fields used to group the metric series.

include [required]

string

Datadog filter query to match logs for metric generation.

metric_type [required]

enum

Type of metric to create. Allowed enum values: count,gauge,distribution

name [required]

string

Name of the custom metric to be created.

value [required]

 <oneOf>

Specifies how the value of the generated metric is computed.

Option 1

object

Strategy that increments a generated metric by one for each matching event.

strategy [required]

enum

Increments the metric by 1 for each matching event. Allowed enum values: increment_by_one

Option 2

object

Strategy that increments a generated metric based on the value of a log field.

field [required]

string

Name of the log field containing the numeric value to increment the metric by.

strategy [required]

enum

Uses a numeric field in the log event as the metric increment. Allowed enum values: increment_by_field

type [required]

enum

The processor type. Always generate_datadog_metrics. Allowed enum values: generate_datadog_metrics

default: generate_datadog_metrics
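
The sketch below shows one possible generate_datadog_metrics processor that emits a counter and a distribution; the metric names, queries, and the duration_ms field are illustrative:

{
  "id": "metrics-generator",
  "type": "generate_datadog_metrics",
  "enabled": true,
  "include": "*",
  "metrics": [
    {
      "name": "logs.errors.count",
      "include": "status:error",
      "metric_type": "count",
      "group_by": [
        "service"
      ],
      "value": {
        "strategy": "increment_by_one"
      }
    },
    {
      "name": "logs.request.duration",
      "include": "service:web",
      "metric_type": "distribution",
      "value": {
        "strategy": "increment_by_field",
        "field": "duration_ms"
      }
    }
  ]
}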

Option 10

object

The ocsf_mapper processor transforms logs into the OCSF schema using a predefined mapping configuration.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

mappings [required]

[object]

A list of mapping rules to convert events to the OCSF format.

include [required]

string

A Datadog search query used to select the logs that this mapping should apply to.

mapping [required]

 <oneOf>

Defines a single mapping rule for transforming logs into the OCSF schema.

Option 1

enum

Predefined library mappings for common log formats. Allowed enum values: CloudTrail Account Change,GCP Cloud Audit CreateBucket,GCP Cloud Audit CreateSink,GCP Cloud Audit SetIamPolicy,GCP Cloud Audit UpdateSink,Github Audit Log API Activity,Google Workspace Admin Audit addPrivilege,Microsoft 365 Defender Incident,Microsoft 365 Defender UserLoggedIn,Okta System Log Authentication,Palo Alto Networks Firewall Traffic

type [required]

enum

The processor type. The value should always be ocsf_mapper. Allowed enum values: ocsf_mapper

default: ocsf_mapper
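
A minimal ocsf_mapper processor, assuming a single library mapping applied to CloudTrail logs, might look like this (the include queries are placeholders):

{
  "id": "ocsf-mapper",
  "type": "ocsf_mapper",
  "enabled": true,
  "include": "*",
  "mappings": [
    {
      "include": "source:cloudtrail",
      "mapping": "CloudTrail Account Change"
    }
  ]
}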

Option 11

object

The parse_grok processor extracts structured fields from unstructured log messages using Grok patterns.

Supported pipeline types: logs

disable_library_rules

boolean

If set to true, disables the default Grok rules provided by Datadog.

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

A unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

rules [required]

[object]

The list of Grok parsing rules. If multiple matching rules are provided, they are evaluated in order. The first successful match is applied.

match_rules [required]

[object]

A list of Grok parsing rules that define how to extract fields from the source field. Each rule must contain a name and a valid Grok pattern.

name [required]

string

The name of the rule.

rule [required]

string

The definition of the Grok rule.

source [required]

string

The name of the field in the log event to apply the Grok rules to.

support_rules

[object]

A list of Grok helper rules that can be referenced by the parsing rules.

name [required]

string

The name of the Grok helper rule.

rule [required]

string

The definition of the Grok helper rule.

type [required]

enum

The processor type. The value should always be parse_grok. Allowed enum values: parse_grok

default: parse_grok
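
As a sketch, a parse_grok processor with one parsing rule and one helper rule might be configured as follows; the rule names and Grok patterns are illustrative only:

{
  "id": "grok-parser",
  "type": "parse_grok",
  "enabled": true,
  "include": "source:nginx",
  "disable_library_rules": false,
  "rules": [
    {
      "source": "message",
      "match_rules": [
        {
          "name": "extract_level",
          "rule": "%{word:level}%{_sep}%{data:msg}"
        }
      ],
      "support_rules": [
        {
          "name": "_sep",
          "rule": "\\s+"
        }
      ]
    }
  ]
}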

Option 12

object

The parse_json processor extracts JSON from a specified field and flattens it into the event. This is useful when logs contain embedded JSON as a string.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

field [required]

string

The name of the log field that contains a JSON string.

id [required]

string

A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be parse_json. Allowed enum values: parse_json

default: parse_json
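
For example, a parse_json processor that expands an embedded JSON string from the message field might look like the following (the field name and query are placeholders):

{
  "id": "json-parser",
  "type": "parse_json",
  "enabled": true,
  "include": "*",
  "field": "message"
}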

Option 13

object

The parse_xml processor parses XML from a specified field and extracts it into the event.

Supported pipeline types: logs

always_use_text_key

boolean

Whether to always use a text key for element content.

attr_prefix

string

The prefix to use for XML attributes in the parsed output.

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

field [required]

string

The name of the log field that contains an XML string.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

include_attr

boolean

Whether to include XML attributes in the parsed output.

parse_bool

boolean

Whether to parse boolean values from strings.

parse_null

boolean

Whether to parse null values.

parse_number

boolean

Whether to parse numeric values from strings.

text_key

string

The key name to use for text content within XML elements. Must be at least 1 character if specified.

type [required]

enum

The processor type. The value should always be parse_xml. Allowed enum values: parse_xml

default: parse_xml

Option 14

object

The quota processor measures logging traffic for logs that match a specified filter. When the configured daily quota is met, the processor can drop or alert.

Supported pipeline types: logs

display_name

string

The display name for a component.

drop_events

boolean

If set to true, logs that match the quota filter and are sent after the quota is exceeded are dropped. Logs that do not match the filter continue through the pipeline. Note: You can set either drop_events or overflow_action, but not both.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

ignore_when_missing_partitions

boolean

If true, the processor skips quota checks when partition fields are missing from the logs.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

limit [required]

object

The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.

enforce [required]

enum

Unit for quota enforcement in bytes for data size or events for count. Allowed enum values: bytes,events

limit [required]

int64

The limit for quota enforcement.

name [required]

string

Name of the quota.

overflow_action

enum

The action to take when the quota or bucket limit is exceeded. Options:

  • drop: Drop the event.
  • no_action: Let the event pass through.
  • overflow_routing: Route to an overflow destination.

Allowed enum values: drop,no_action,overflow_routing

overrides

[object]

A list of alternate quota rules that apply to specific sets of events, identified by matching field values. Each override can define a custom limit.

fields [required]

[object]

A list of field matchers used to apply a specific override. If an event matches all listed key-value pairs, the corresponding override limit is enforced.

name [required]

string

The field name.

value [required]

string

The field value.

limit [required]

object

The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.

enforce [required]

enum

Unit for quota enforcement in bytes for data size or events for count. Allowed enum values: bytes,events

limit [required]

int64

The limit for quota enforcement.

partition_fields

[string]

A list of fields used to segment log traffic for quota enforcement. Quotas are tracked independently by unique combinations of these field values.

too_many_buckets_action

enum

The action to take when the quota or bucket limit is exceeded. Options:

  • drop: Drop the event.
  • no_action: Let the event pass through.
  • overflow_routing: Route to an overflow destination.

Allowed enum values: drop,no_action,overflow_routing

type [required]

enum

The processor type. The value should always be quota. Allowed enum values: quota

default: quota
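
As a sketch, a quota processor that enforces a daily byte limit per service, with a higher limit for one service, might look like this; the names, limits, and field values are illustrative, and only one of drop_events or overflow_action is set, per the note above:

{
  "id": "daily-quota",
  "type": "quota",
  "enabled": true,
  "include": "*",
  "name": "per-service-quota",
  "limit": {
    "enforce": "bytes",
    "limit": 10000000000
  },
  "partition_fields": [
    "service"
  ],
  "ignore_when_missing_partitions": true,
  "overflow_action": "drop",
  "overrides": [
    {
      "fields": [
        {
          "name": "service",
          "value": "billing"
        }
      ],
      "limit": {
        "enforce": "bytes",
        "limit": 50000000000
      }
    }
  ]
}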

Option 15

object

The reduce processor aggregates and merges logs based on matching keys and merge strategies.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

group_by [required]

[string]

A list of fields used to group log events for merging.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

merge_strategies [required]

[object]

List of merge strategies defining how values from grouped events should be combined.

path [required]

string

The field path in the log event.

strategy [required]

enum

The merge strategy to apply. Allowed enum values: discard,retain,sum,max,min,array,concat,concat_newline,concat_raw,shortest_array,longest_array,flat_unique

type [required]

enum

The processor type. The value should always be reduce. Allowed enum values: reduce

default: reduce
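
A reduce processor that merges grouped events, summing a numeric field and concatenating messages, might be configured like this (the field paths and group keys are illustrative):

{
  "id": "reduce-processor",
  "type": "reduce",
  "enabled": true,
  "include": "*",
  "group_by": [
    "host",
    "service"
  ],
  "merge_strategies": [
    {
      "path": "message",
      "strategy": "concat_newline"
    },
    {
      "path": "retry_count",
      "strategy": "sum"
    }
  ]
}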

Option 16

object

The remove_fields processor deletes specified fields from logs.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[string]

A list of field names to be removed from each log event.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be remove_fields. Allowed enum values: remove_fields

default: remove_fields

Option 17

object

The rename_fields processor changes field names.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

fields [required]

[object]

A list of rename rules specifying which fields to rename in the event, what to rename them to, and whether to preserve the original fields.

destination [required]

string

The field name to assign the renamed value to.

preserve_source [required]

boolean

Indicates whether the original field, that is received from the source, should be kept (true) or removed (false) after renaming.

source [required]

string

The original field name in the log event that should be renamed.

id [required]

string

A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

type [required]

enum

The processor type. The value should always be rename_fields. Allowed enum values: rename_fields

default: rename_fields
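
For illustration, a rename_fields processor that renames one field while keeping the original might look like the following (the source and destination names are placeholders):

{
  "id": "rename-processor",
  "type": "rename_fields",
  "enabled": true,
  "include": "*",
  "fields": [
    {
      "source": "hostname",
      "destination": "host.name",
      "preserve_source": true
    }
  ]
}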

Option 18

object

The sample processor allows probabilistic sampling of logs at a fixed rate.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

group_by

[string]

Optional list of fields to group events by. Each group is sampled independently.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

percentage [required]

double

The percentage of logs to sample.

type [required]

enum

The processor type. The value should always be sample. Allowed enum values: sample

default: sample
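
A sample processor that keeps 10% of informational logs, sampled independently per service, might be configured as follows (the values are illustrative):

{
  "id": "sample-processor",
  "type": "sample",
  "enabled": true,
  "include": "status:info",
  "percentage": 10.0,
  "group_by": [
    "service"
  ]
}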

Option 19

object

The sensitive_data_scanner processor detects and optionally redacts sensitive data in log events.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

rules [required]

[object]

A list of rules for identifying and acting on sensitive data patterns.

keyword_options

object

Configuration for keywords used to reinforce sensitive data pattern detection.

keywords [required]

[string]

A list of keywords to match near the sensitive pattern.

proximity [required]

int64

Maximum number of tokens between a keyword and a sensitive value match.

name [required]

string

A name identifying the rule.

on_match [required]

 <oneOf>

Defines what action to take when sensitive data is matched.

Option 1

object

Configuration for completely redacting matched sensitive data.

action [required]

enum

Action type that completely replaces the matched sensitive data with a fixed replacement string to remove all visibility. Allowed enum values: redact

options [required]

object

Configuration for fully redacting sensitive data.

replace [required]

string

The replacement string used in place of the matched sensitive data.

Option 2

object

Configuration for hashing matched sensitive values.

action [required]

enum

Action type that replaces the matched sensitive data with a hashed representation, preserving structure while securing content. Allowed enum values: hash

options

object

Options for the hash action.

Option 3

object

Configuration for partially redacting matched sensitive data.

action [required]

enum

Action type that redacts part of the sensitive data while preserving a configurable number of characters, typically used for masking purposes (e.g., show last 4 digits of a credit card). Allowed enum values: partial_redact

options [required]

object

Controls how partial redaction is applied, including character count and direction.

characters [required]

int64

The number of characters to redact from the matched value.

direction [required]

enum

Indicates whether to redact characters from the first or last part of the matched value. Allowed enum values: first,last

pattern [required]

 <oneOf>

Pattern detection configuration for identifying sensitive data using either a custom regex or a library reference.

Option 1

object

Defines a custom regex-based pattern for identifying sensitive data in logs.

options [required]

object

Options for defining a custom regex pattern.

description

string

Human-readable description providing context about a sensitive data scanner rule.

rule [required]

string

A regular expression used to detect sensitive values. Must be a valid regex.

type [required]

enum

Indicates a custom regular expression is used for matching. Allowed enum values: custom

Option 2

object

Specifies a pattern from Datadog’s sensitive data detection library to match known sensitive data types.

options [required]

object

Options for selecting a predefined library pattern and enabling keyword support.

description

string

Human-readable description providing context about a sensitive data scanner rule.

id [required]

string

Identifier for a predefined pattern from the sensitive data scanner pattern library.

use_recommended_keywords

boolean

Whether to augment the pattern with recommended keywords (optional).

type [required]

enum

Indicates that a predefined library pattern is used. Allowed enum values: library

scope [required]

 <oneOf>

Determines which parts of the log the pattern-matching rule should be applied to.

Option 1

object

Includes only specific fields for sensitive data scanning.

options [required]

object

Fields to which the scope rule applies.

fields [required]

[string]

The list of log field paths to which the scope rule applies.

target [required]

enum

Applies the rule only to included fields. Allowed enum values: include

Option 2

object

Excludes specific fields from sensitive data scanning.

options [required]

object

Fields to which the scope rule applies.

fields [required]

[string]

The list of log field paths to which the scope rule applies.

target [required]

enum

Excludes specific fields from processing. Allowed enum values: exclude

Option 3

object

Applies scanning across all available fields.

target [required]

enum

Applies the rule to all fields. Allowed enum values: all

tags [required]

[string]

Tags assigned to this rule for filtering and classification.

type [required]

enum

The processor type. The value should always be sensitive_data_scanner. Allowed enum values: sensitive_data_scanner

default: sensitive_data_scanner
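
The sketch below shows one possible sensitive_data_scanner rule that matches a custom token format in the message field and redacts it; the regex, keywords, and names are illustrative and not a recommended detection pattern:

{
  "id": "sds-processor",
  "type": "sensitive_data_scanner",
  "enabled": true,
  "include": "*",
  "rules": [
    {
      "name": "redact-internal-token",
      "tags": [
        "sensitive_data:token"
      ],
      "pattern": {
        "type": "custom",
        "options": {
          "rule": "tok_[A-Za-z0-9]{16}",
          "description": "Internal API token format"
        }
      },
      "scope": {
        "target": "include",
        "options": {
          "fields": [
            "message"
          ]
        }
      },
      "keyword_options": {
        "keywords": [
          "token"
        ],
        "proximity": 10
      },
      "on_match": {
        "action": "redact",
        "options": {
          "replace": "[REDACTED]"
        }
      }
    }
  ]
}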

Option 20

object

The split_array processor splits array fields into separate events based on configured rules.

Supported pipeline types: logs

arrays [required]

[object]

A list of array split configurations.

field [required]

string

The path to the array field to split.

include [required]

string

A Datadog search query used to determine which logs this array split operation targets.

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets. For split_array, this should typically be *.

type [required]

enum

The processor type. The value should always be split_array. Allowed enum values: split_array

default: split_array
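
For example, a split_array processor that fans each element of a records array out into its own event might look like this (the field path and queries are placeholders):

{
  "id": "split-array-processor",
  "type": "split_array",
  "enabled": true,
  "include": "*",
  "arrays": [
    {
      "field": "records",
      "include": "*"
    }
  ]
}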

Option 21

object

The throttle processor limits the number of events that pass through over a given time window.

Supported pipeline types: logs

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

group_by

[string]

Optional list of fields used to group events; the threshold is applied to each group independently.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

threshold [required]

int64

The number of events allowed in a given time window. Events sent after the threshold has been reached are dropped.

type [required]

enum

The processor type. The value should always be throttle. Allowed enum values: throttle

default: throttle

window [required]

double

The time window in seconds over which the threshold applies.
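
A throttle processor that allows at most 1000 events per 60-second window for each service might be configured as follows (the values are illustrative):

{
  "id": "throttle-processor",
  "type": "throttle",
  "enabled": true,
  "include": "*",
  "threshold": 1000,
  "window": 60.0,
  "group_by": [
    "service"
  ]
}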

Option 22

object

The metric_tags processor filters metrics based on their tags using Datadog tag key patterns.

Supported pipeline types: metrics

display_name

string

The display name for a component.

enabled [required]

boolean

Indicates whether the processor is enabled.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

include [required]

string

A Datadog search query that determines which metrics the processor targets.

rules [required]

[object]

A list of rules for filtering metric tags.

action [required]

enum

The action to take on tags with matching keys. Allowed enum values: include,exclude

include [required]

string

A Datadog search query used to determine which metrics this rule targets.

keys [required]

[string]

A list of tag keys to include or exclude.

mode [required]

enum

The processing mode for tag filtering. Allowed enum values: filter

type [required]

enum

The processor type. The value should always be metric_tags. Allowed enum values: metric_tags

default: metric_tags
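
As a sketch, a metric_tags processor that strips high-cardinality tags from all matching metrics might look like the following (the tag keys are illustrative):

{
  "id": "metric-tags-processor",
  "type": "metric_tags",
  "enabled": true,
  "include": "*",
  "rules": [
    {
      "action": "exclude",
      "include": "*",
      "keys": [
        "pod_name",
        "container_id"
      ],
      "mode": "filter"
    }
  ]
}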

sources [required]

[ <oneOf>]

A list of configured data sources for the pipeline.

Option 1

object

The datadog_agent source collects logs/metrics from the Datadog Agent.

Supported pipeline types: logs, metrics

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be datadog_agent. Allowed enum values: datadog_agent

default: datadog_agent

Option 2

object

The amazon_data_firehose source ingests logs from AWS Data Firehose.

Supported pipeline types: logs

auth

object

AWS authentication credentials used for accessing AWS services such as S3. If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).

assume_role

string

The Amazon Resource Name (ARN) of the role to assume.

external_id

string

A unique identifier for cross-account role assumption.

session_name

string

A session identifier used for logging and tracing the assumed role session.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be amazon_data_firehose. Allowed enum values: amazon_data_firehose

default: amazon_data_firehose

Option 3

object

The amazon_s3 source ingests logs from an Amazon S3 bucket. It supports AWS authentication and TLS encryption.

Supported pipeline types: logs

auth

object

AWS authentication credentials used for accessing AWS services such as S3. If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).

assume_role

string

The Amazon Resource Name (ARN) of the role to assume.

external_id

string

A unique identifier for cross-account role assumption.

session_name

string

A session identifier used for logging and tracing the assumed role session.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

region [required]

string

AWS region where the S3 bucket resides.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. Always amazon_s3. Allowed enum values: amazon_s3

default: amazon_s3

Option 4

object

The fluent_bit source ingests logs from Fluent Bit.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be fluent_bit. Allowed enum values: fluent_bit

default: fluent_bit

Option 5

object

The fluentd source ingests logs from a Fluentd-compatible service.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be fluentd. Allowed enum values: fluentd

default: fluentd

Option 6

object

The google_pubsub source ingests logs from a Google Cloud Pub/Sub subscription.

Supported pipeline types: logs

auth

object

GCP credentials used to authenticate with Google Cloud services.

credentials_file [required]

string

Path to the GCP service account key file.

decoding [required]

enum

The decoding format used to interpret incoming logs. Allowed enum values: bytes,gelf,json,syslog

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

project [required]

string

The GCP project ID that owns the Pub/Sub subscription.

subscription [required]

string

The Pub/Sub subscription name from which messages are consumed.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be google_pubsub. Allowed enum values: google_pubsub

default: google_pubsub

Option 7

object

The http_client source scrapes logs from HTTP endpoints at regular intervals.

Supported pipeline types: logs

auth_strategy

enum

Optional authentication strategy for HTTP requests. Allowed enum values: none,basic,bearer,custom

decoding [required]

enum

The decoding format used to interpret incoming logs. Allowed enum values: bytes,gelf,json,syslog

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

scrape_interval_secs

int64

The interval (in seconds) between HTTP scrape requests.

scrape_timeout_secs

int64

The timeout (in seconds) for each scrape request.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be http_client. Allowed enum values: http_client

default: http_client

Option 8

object

The http_server source collects logs over HTTP POST from external services.

Supported pipeline types: logs

auth_strategy [required]

enum

HTTP authentication method. Allowed enum values: none,plain

decoding [required]

enum

The decoding format used to interpret incoming logs. Allowed enum values: bytes,gelf,json,syslog

id [required]

string

Unique ID for the HTTP server source.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be http_server. Allowed enum values: http_server

default: http_server

Option 9

object

The kafka source ingests data from Apache Kafka topics.

Supported pipeline types: logs

group_id [required]

string

Consumer group ID used by the Kafka client.

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

librdkafka_options

[object]

Optional list of advanced Kafka client configuration options, defined as key-value pairs.

name [required]

string

The name of the librdkafka configuration option to set.

value [required]

string

The value assigned to the specified librdkafka configuration option.

sasl

object

Specifies the SASL mechanism for authenticating with a Kafka cluster.

mechanism

enum

SASL mechanism used for Kafka authentication. Allowed enum values: PLAIN,SCRAM-SHA-256,SCRAM-SHA-512

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

topics [required]

[string]

A list of Kafka topic names to subscribe to. The source ingests messages from each topic specified.

type [required]

enum

The source type. The value should always be kafka. Allowed enum values: kafka

default: kafka
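
For illustration, a kafka source with SASL authentication, TLS, and one advanced client option might look like this; the topic names, file paths, and librdkafka option are placeholders:

{
  "id": "kafka-source",
  "type": "kafka",
  "group_id": "observability-pipelines-consumer",
  "topics": [
    "app-logs"
  ],
  "sasl": {
    "mechanism": "SCRAM-SHA-256"
  },
  "librdkafka_options": [
    {
      "name": "fetch.message.max.bytes",
      "value": "1048576"
    }
  ],
  "tls": {
    "crt_file": "/etc/pipeline/kafka-client.crt",
    "key_file": "/etc/pipeline/kafka-client.key",
    "ca_file": "/etc/pipeline/ca.crt"
  }
}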

Option 10

object

The logstash source ingests logs from a Logstash forwarder.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be logstash. Allowed enum values: logstash

default: logstash

Option 11

object

The rsyslog source listens for logs over TCP or UDP from an rsyslog server using the syslog protocol.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

mode [required]

enum

Protocol used by the syslog source to receive messages. Allowed enum values: tcp,udp

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be rsyslog. Allowed enum values: rsyslog

default: rsyslog

Option 12

object

The socket source ingests logs over TCP or UDP.

Supported pipeline types: logs

framing [required]

 <oneOf>

Framing method configuration for the socket source.

Option 1

object

Byte frames which are delimited by a newline character.

method [required]

enum

Byte frames which are delimited by a newline character. Allowed enum values: newline_delimited

Option 2

object

Byte frames are passed through as-is according to the underlying I/O boundaries (for example, split between messages or stream segments).

method [required]

enum

Byte frames are passed through as-is according to the underlying I/O boundaries (for example, split between messages or stream segments). Allowed enum values: bytes

Option 3

object

Byte frames which are delimited by a chosen character.

delimiter [required]

string

A single ASCII character used to delimit events.

method [required]

enum

Byte frames which are delimited by a chosen character. Allowed enum values: character_delimited

Option 4

object

Byte frames according to the octet counting format as per RFC6587.

method [required]

enum

Byte frames according to the octet counting format as per RFC6587. Allowed enum values: octet_counting

Option 5

object

Byte frames which are chunked GELF messages.

method [required]

enum

Byte frames which are chunked GELF messages. Allowed enum values: chunked_gelf

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

mode [required]

enum

Protocol used to receive logs. Allowed enum values: tcp,udp

tls

object

TLS configuration. Relevant only when mode is tcp.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be socket. Allowed enum values: socket

default: socket
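
A minimal socket source listening over TCP with newline-delimited framing might look like the following (the id is a placeholder):

{
  "id": "socket-source",
  "type": "socket",
  "mode": "tcp",
  "framing": {
    "method": "newline_delimited"
  }
}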

Option 13

object

The splunk_hec source implements the Splunk HTTP Event Collector (HEC) API.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. Always splunk_hec. Allowed enum values: splunk_hec

default: splunk_hec

Option 14

object

The splunk_tcp source receives logs from a Splunk Universal Forwarder over TCP. TLS is supported for secure transmission.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. Always splunk_tcp. Allowed enum values: splunk_tcp

default: splunk_tcp

Option 15

object

The sumo_logic source receives logs from Sumo Logic collectors.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

type [required]

enum

The source type. The value should always be sumo_logic. Allowed enum values: sumo_logic

default: sumo_logic

Option 16

object

The syslog_ng source listens for logs over TCP or UDP from a syslog-ng server using the syslog protocol.

Supported pipeline types: logs

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

mode [required]

enum

Protocol used by the syslog source to receive messages. Allowed enum values: tcp,udp

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be syslog_ng. Allowed enum values: syslog_ng

default: syslog_ng

Option 17

object

The opentelemetry source receives telemetry data using the OpenTelemetry Protocol (OTLP) over gRPC and HTTP.

Supported pipeline types: logs

grpc_address_key

string

Environment variable name containing the gRPC server address for receiving OTLP data. Must be a valid environment variable name (alphanumeric characters and underscores only).

http_address_key

string

Environment variable name containing the HTTP server address for receiving OTLP data. Must be a valid environment variable name (alphanumeric characters and underscores only).

id [required]

string

The unique identifier for this component. Used in other parts of the pipeline to reference this component (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be opentelemetry. Allowed enum values: opentelemetry

default: opentelemetry

name [required]

string

Name of the pipeline.

type [required]

string

The resource type identifier. For pipeline resources, this should always be set to pipelines.

default: pipelines

{
  "data": {
    "attributes": {
      "config": {
        "destinations": [
          {
            "id": "datadog-logs-destination",
            "inputs": [
              "my-processor-group"
            ],
            "type": "datadog_logs"
          }
        ],
        "processor_groups": [
          {
            "enabled": true,
            "id": "my-processor-group",
            "include": "service:my-service",
            "inputs": [
              "datadog-agent-source"
            ],
            "processors": [
              {
                "enabled": true,
                "id": "filter-processor",
                "include": "status:error",
                "type": "filter"
              }
            ]
          }
        ],
        "sources": [
          {
            "id": "datadog-agent-source",
            "type": "datadog_agent"
          }
        ]
      },
      "name": "Main Observability Pipeline"
    },
    "type": "pipelines"
  }
}

Response

OK

Response containing validation errors.

Expand All

Field

Type

Description

errors

[object]

The ValidationResponse errors.

meta [required]

object

Describes additional metadata for validation errors, including field names and error messages.

field

string

The field name that caused the error.

id

string

The ID of the component in which the error occurred.

message [required]

string

The detailed error message.

title [required]

string

A short, human-readable summary of the error.

{
  "errors": [
    {
      "meta": {
        "field": "region",
        "id": "datadog-agent-source",
        "message": "Field 'region' is required"
      },
      "title": "Field 'region' is required"
    }
  ]
}

Bad Request

API error response.

Expand All

Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Not Authorized

API error response.

Expand All

Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Too many requests

API error response.

Expand All

Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Code example

# Curl command
curl -X POST "https://api.datadoghq.com/api/v2/obs-pipelines/pipelines/validate" \
  -H "Accept: application/json" \
  -H "Content-Type: application/json" \
  -H "DD-API-KEY: ${DD_API_KEY}" \
  -H "DD-APPLICATION-KEY: ${DD_APP_KEY}" \
  -d @- << EOF
{
  "data": {
    "attributes": {
      "config": {
        "destinations": [
          {
            "id": "datadog-logs-destination",
            "inputs": [
              "my-processor-group"
            ],
            "type": "datadog_logs"
          }
        ],
        "processor_groups": [
          {
            "enabled": true,
            "id": "my-processor-group",
            "include": "service:my-service",
            "inputs": [
              "datadog-agent-source"
            ],
            "processors": [
              {
                "enabled": true,
                "id": "filter-processor",
                "include": "status:error",
                "type": "filter"
              }
            ]
          }
        ],
        "sources": [
          {
            "id": "datadog-agent-source",
            "type": "datadog_agent"
          }
        ]
      },
      "name": "Main Observability Pipeline"
    },
    "type": "pipelines"
  }
}
EOF
// Validate an observability pipeline returns "OK" response

package main

import (
	"context"
	"encoding/json"
	"fmt"
	"os"

	"github.com/DataDog/datadog-api-client-go/v2/api/datadog"
	"github.com/DataDog/datadog-api-client-go/v2/api/datadogV2"
)

func main() {
	body := datadogV2.ObservabilityPipelineSpec{
		Data: datadogV2.ObservabilityPipelineSpecData{
			Attributes: datadogV2.ObservabilityPipelineDataAttributes{
				Config: datadogV2.ObservabilityPipelineConfig{
					Destinations: []datadogV2.ObservabilityPipelineConfigDestinationItem{
						datadogV2.ObservabilityPipelineConfigDestinationItem{
							ObservabilityPipelineDatadogLogsDestination: &datadogV2.ObservabilityPipelineDatadogLogsDestination{
								Id: "datadog-logs-destination",
								Inputs: []string{
									"my-processor-group",
								},
								Type: datadogV2.OBSERVABILITYPIPELINEDATADOGLOGSDESTINATIONTYPE_DATADOG_LOGS,
							}},
					},
					Processors: []datadogV2.ObservabilityPipelineConfigProcessorGroup{
						{
							Enabled: true,
							Id:      "my-processor-group",
							Include: "service:my-service",
							Inputs: []string{
								"datadog-agent-source",
							},
							Processors: []datadogV2.ObservabilityPipelineConfigProcessorItem{
								datadogV2.ObservabilityPipelineConfigProcessorItem{
									ObservabilityPipelineFilterProcessor: &datadogV2.ObservabilityPipelineFilterProcessor{
										Enabled: true,
										Id:      "filter-processor",
										Include: "status:error",
										Type:    datadogV2.OBSERVABILITYPIPELINEFILTERPROCESSORTYPE_FILTER,
									}},
							},
						},
					},
					Sources: []datadogV2.ObservabilityPipelineConfigSourceItem{
						datadogV2.ObservabilityPipelineConfigSourceItem{
							ObservabilityPipelineDatadogAgentSource: &datadogV2.ObservabilityPipelineDatadogAgentSource{
								Id:   "datadog-agent-source",
								Type: datadogV2.OBSERVABILITYPIPELINEDATADOGAGENTSOURCETYPE_DATADOG_AGENT,
							}},
					},
				},
				Name: "Main Observability Pipeline",
			},
			Type: "pipelines",
		},
	}
	ctx := datadog.NewDefaultContext(context.Background())
	configuration := datadog.NewConfiguration()
	configuration.SetUnstableOperationEnabled("v2.ValidatePipeline", true)
	apiClient := datadog.NewAPIClient(configuration)
	api := datadogV2.NewObservabilityPipelinesApi(apiClient)
	resp, r, err := api.ValidatePipeline(ctx, body)

	if err != nil {
		fmt.Fprintf(os.Stderr, "Error when calling `ObservabilityPipelinesApi.ValidatePipeline`: %v\n", err)
		fmt.Fprintf(os.Stderr, "Full HTTP response: %v\n", r)
	}

	responseContent, _ := json.MarshalIndent(resp, "", "  ")
	fmt.Fprintf(os.Stdout, "Response from `ObservabilityPipelinesApi.ValidatePipeline`:\n%s\n", responseContent)
}

Instructions

First install the library and its dependencies, then save the example to main.go and run the following command:

    
DD_SITE="datadoghq.comus3.datadoghq.comus5.datadoghq.comdatadoghq.euap1.datadoghq.comap2.datadoghq.comddog-gov.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" go run "main.go"
// Validate an observability pipeline returns "OK" response

import com.datadog.api.client.ApiClient;
import com.datadog.api.client.ApiException;
import com.datadog.api.client.v2.api.ObservabilityPipelinesApi;
import com.datadog.api.client.v2.model.ObservabilityPipelineConfig;
import com.datadog.api.client.v2.model.ObservabilityPipelineConfigDestinationItem;
import com.datadog.api.client.v2.model.ObservabilityPipelineConfigProcessorGroup;
import com.datadog.api.client.v2.model.ObservabilityPipelineConfigProcessorItem;
import com.datadog.api.client.v2.model.ObservabilityPipelineConfigSourceItem;
import com.datadog.api.client.v2.model.ObservabilityPipelineDataAttributes;
import com.datadog.api.client.v2.model.ObservabilityPipelineDatadogAgentSource;
import com.datadog.api.client.v2.model.ObservabilityPipelineDatadogAgentSourceType;
import com.datadog.api.client.v2.model.ObservabilityPipelineDatadogLogsDestination;
import com.datadog.api.client.v2.model.ObservabilityPipelineDatadogLogsDestinationType;
import com.datadog.api.client.v2.model.ObservabilityPipelineFilterProcessor;
import com.datadog.api.client.v2.model.ObservabilityPipelineFilterProcessorType;
import com.datadog.api.client.v2.model.ObservabilityPipelineSpec;
import com.datadog.api.client.v2.model.ObservabilityPipelineSpecData;
import com.datadog.api.client.v2.model.ValidationResponse;
import java.util.Collections;

public class Example {
  public static void main(String[] args) {
    ApiClient defaultClient = ApiClient.getDefaultApiClient();
    defaultClient.setUnstableOperationEnabled("v2.validatePipeline", true);
    ObservabilityPipelinesApi apiInstance = new ObservabilityPipelinesApi(defaultClient);

    ObservabilityPipelineSpec body =
        new ObservabilityPipelineSpec()
            .data(
                new ObservabilityPipelineSpecData()
                    .attributes(
                        new ObservabilityPipelineDataAttributes()
                            .config(
                                new ObservabilityPipelineConfig()
                                    .destinations(
                                        Collections.singletonList(
                                            new ObservabilityPipelineConfigDestinationItem(
                                                new ObservabilityPipelineDatadogLogsDestination()
                                                    .id("datadog-logs-destination")
                                                    .inputs(
                                                        Collections.singletonList(
                                                            "my-processor-group"))
                                                    .type(
                                                        ObservabilityPipelineDatadogLogsDestinationType
                                                            .DATADOG_LOGS))))
                                    .processors(
                                        Collections.singletonList(
                                            new ObservabilityPipelineConfigProcessorGroup()
                                                .enabled(true)
                                                .id("my-processor-group")
                                                .include("service:my-service")
                                                .inputs(
                                                    Collections.singletonList(
                                                        "datadog-agent-source"))
                                                .processors(
                                                    Collections.singletonList(
                                                        new ObservabilityPipelineConfigProcessorItem(
                                                            new ObservabilityPipelineFilterProcessor()
                                                                .enabled(true)
                                                                .id("filter-processor")
                                                                .include("status:error")
                                                                .type(
                                                                    ObservabilityPipelineFilterProcessorType
                                                                        .FILTER))))))
                                    .sources(
                                        Collections.singletonList(
                                            new ObservabilityPipelineConfigSourceItem(
                                                new ObservabilityPipelineDatadogAgentSource()
                                                    .id("datadog-agent-source")
                                                    .type(
                                                        ObservabilityPipelineDatadogAgentSourceType
                                                            .DATADOG_AGENT)))))
                            .name("Main Observability Pipeline"))
                    .type("pipelines"));

    try {
      ValidationResponse result = apiInstance.validatePipeline(body);
      System.out.println(result);
    } catch (ApiException e) {
      System.err.println("Exception when calling ObservabilityPipelinesApi#validatePipeline");
      System.err.println("Status code: " + e.getCode());
      System.err.println("Reason: " + e.getResponseBody());
      System.err.println("Response headers: " + e.getResponseHeaders());
      e.printStackTrace();
    }
  }
}

Instructions

First install the library and its dependencies, then save the example to Example.java and run the following command:

    
DD_SITE="datadoghq.comus3.datadoghq.comus5.datadoghq.comdatadoghq.euap1.datadoghq.comap2.datadoghq.comddog-gov.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" java "Example.java"
"""
Validate an observability pipeline returns "OK" response
"""

from datadog_api_client import ApiClient, Configuration
from datadog_api_client.v2.api.observability_pipelines_api import ObservabilityPipelinesApi
from datadog_api_client.v2.model.observability_pipeline_config import ObservabilityPipelineConfig
from datadog_api_client.v2.model.observability_pipeline_config_processor_group import (
    ObservabilityPipelineConfigProcessorGroup,
)
from datadog_api_client.v2.model.observability_pipeline_data_attributes import ObservabilityPipelineDataAttributes
from datadog_api_client.v2.model.observability_pipeline_datadog_agent_source import (
    ObservabilityPipelineDatadogAgentSource,
)
from datadog_api_client.v2.model.observability_pipeline_datadog_agent_source_type import (
    ObservabilityPipelineDatadogAgentSourceType,
)
from datadog_api_client.v2.model.observability_pipeline_datadog_logs_destination import (
    ObservabilityPipelineDatadogLogsDestination,
)
from datadog_api_client.v2.model.observability_pipeline_datadog_logs_destination_type import (
    ObservabilityPipelineDatadogLogsDestinationType,
)
from datadog_api_client.v2.model.observability_pipeline_filter_processor import ObservabilityPipelineFilterProcessor
from datadog_api_client.v2.model.observability_pipeline_filter_processor_type import (
    ObservabilityPipelineFilterProcessorType,
)
from datadog_api_client.v2.model.observability_pipeline_spec import ObservabilityPipelineSpec
from datadog_api_client.v2.model.observability_pipeline_spec_data import ObservabilityPipelineSpecData

body = ObservabilityPipelineSpec(
    data=ObservabilityPipelineSpecData(
        attributes=ObservabilityPipelineDataAttributes(
            config=ObservabilityPipelineConfig(
                destinations=[
                    ObservabilityPipelineDatadogLogsDestination(
                        id="datadog-logs-destination",
                        inputs=[
                            "my-processor-group",
                        ],
                        type=ObservabilityPipelineDatadogLogsDestinationType.DATADOG_LOGS,
                    ),
                ],
                processors=[
                    ObservabilityPipelineConfigProcessorGroup(
                        enabled=True,
                        id="my-processor-group",
                        include="service:my-service",
                        inputs=[
                            "datadog-agent-source",
                        ],
                        processors=[
                            ObservabilityPipelineFilterProcessor(
                                enabled=True,
                                id="filter-processor",
                                include="status:error",
                                type=ObservabilityPipelineFilterProcessorType.FILTER,
                            ),
                        ],
                    ),
                ],
                sources=[
                    ObservabilityPipelineDatadogAgentSource(
                        id="datadog-agent-source",
                        type=ObservabilityPipelineDatadogAgentSourceType.DATADOG_AGENT,
                    ),
                ],
            ),
            name="Main Observability Pipeline",
        ),
        type="pipelines",
    ),
)

configuration = Configuration()
configuration.unstable_operations["validate_pipeline"] = True
with ApiClient(configuration) as api_client:
    api_instance = ObservabilityPipelinesApi(api_client)
    response = api_instance.validate_pipeline(body=body)

    print(response)

Instructions

First install the library and its dependencies, then save the example to example.py and run the following command, replacing datadoghq.com with your Datadog site if needed:

    
DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" python3 "example.py"
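
For the install step referenced above, the Python client is published on PyPI as datadog-api-client, so a typical install looks like:

# Install the Datadog API client for Python from PyPI
pip3 install datadog-api-client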
# Validate an observability pipeline returns "OK" response

require "datadog_api_client"
DatadogAPIClient.configure do |config|
  config.unstable_operations["v2.validate_pipeline".to_sym] = true
end
api_instance = DatadogAPIClient::V2::ObservabilityPipelinesAPI.new

body = DatadogAPIClient::V2::ObservabilityPipelineSpec.new({
  data: DatadogAPIClient::V2::ObservabilityPipelineSpecData.new({
    attributes: DatadogAPIClient::V2::ObservabilityPipelineDataAttributes.new({
      config: DatadogAPIClient::V2::ObservabilityPipelineConfig.new({
        destinations: [
          DatadogAPIClient::V2::ObservabilityPipelineDatadogLogsDestination.new({
            id: "datadog-logs-destination",
            inputs: [
              "my-processor-group",
            ],
            type: DatadogAPIClient::V2::ObservabilityPipelineDatadogLogsDestinationType::DATADOG_LOGS,
          }),
        ],
        processors: [
          DatadogAPIClient::V2::ObservabilityPipelineConfigProcessorGroup.new({
            enabled: true,
            id: "my-processor-group",
            include: "service:my-service",
            inputs: [
              "datadog-agent-source",
            ],
            processors: [
              DatadogAPIClient::V2::ObservabilityPipelineFilterProcessor.new({
                enabled: true,
                id: "filter-processor",
                include: "status:error",
                type: DatadogAPIClient::V2::ObservabilityPipelineFilterProcessorType::FILTER,
              }),
            ],
          }),
        ],
        sources: [
          DatadogAPIClient::V2::ObservabilityPipelineDatadogAgentSource.new({
            id: "datadog-agent-source",
            type: DatadogAPIClient::V2::ObservabilityPipelineDatadogAgentSourceType::DATADOG_AGENT,
          }),
        ],
      }),
      name: "Main Observability Pipeline",
    }),
    type: "pipelines",
  }),
})
p api_instance.validate_pipeline(body)

Instructions

First install the library and its dependencies, then save the example to example.rb and run the following command, replacing datadoghq.com with your Datadog site if needed:

    
DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" ruby "example.rb"
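
For the install step referenced above, the Ruby client is published as the datadog_api_client gem, so a typical install looks like:

# Install the Datadog API client gem
gem install datadog_api_client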
// Validate an observability pipeline returns "OK" response
use datadog_api_client::datadog;
use datadog_api_client::datadogV2::api_observability_pipelines::ObservabilityPipelinesAPI;
use datadog_api_client::datadogV2::model::ObservabilityPipelineConfig;
use datadog_api_client::datadogV2::model::ObservabilityPipelineConfigDestinationItem;
use datadog_api_client::datadogV2::model::ObservabilityPipelineConfigProcessorGroup;
use datadog_api_client::datadogV2::model::ObservabilityPipelineConfigProcessorItem;
use datadog_api_client::datadogV2::model::ObservabilityPipelineConfigSourceItem;
use datadog_api_client::datadogV2::model::ObservabilityPipelineDataAttributes;
use datadog_api_client::datadogV2::model::ObservabilityPipelineDatadogAgentSource;
use datadog_api_client::datadogV2::model::ObservabilityPipelineDatadogAgentSourceType;
use datadog_api_client::datadogV2::model::ObservabilityPipelineDatadogLogsDestination;
use datadog_api_client::datadogV2::model::ObservabilityPipelineDatadogLogsDestinationType;
use datadog_api_client::datadogV2::model::ObservabilityPipelineFilterProcessor;
use datadog_api_client::datadogV2::model::ObservabilityPipelineFilterProcessorType;
use datadog_api_client::datadogV2::model::ObservabilityPipelineSpec;
use datadog_api_client::datadogV2::model::ObservabilityPipelineSpecData;

#[tokio::main]
async fn main() {
    let body =
        ObservabilityPipelineSpec::new(
            ObservabilityPipelineSpecData::new(
                ObservabilityPipelineDataAttributes::new(
                    ObservabilityPipelineConfig::new(
                        vec![
                            ObservabilityPipelineConfigDestinationItem::ObservabilityPipelineDatadogLogsDestination(
                                Box::new(
                                    ObservabilityPipelineDatadogLogsDestination::new(
                                        "datadog-logs-destination".to_string(),
                                        vec!["my-processor-group".to_string()],
                                        ObservabilityPipelineDatadogLogsDestinationType::DATADOG_LOGS,
                                    ),
                                ),
                            )
                        ],
                        vec![
                            ObservabilityPipelineConfigSourceItem::ObservabilityPipelineDatadogAgentSource(
                                Box::new(
                                    ObservabilityPipelineDatadogAgentSource::new(
                                        "datadog-agent-source".to_string(),
                                        ObservabilityPipelineDatadogAgentSourceType::DATADOG_AGENT,
                                    ),
                                ),
                            )
                        ],
                    ).processors(
                        vec![
                            ObservabilityPipelineConfigProcessorGroup::new(
                                true,
                                "my-processor-group".to_string(),
                                "service:my-service".to_string(),
                                vec!["datadog-agent-source".to_string()],
                                vec![
                                    ObservabilityPipelineConfigProcessorItem::ObservabilityPipelineFilterProcessor(
                                        Box::new(
                                            ObservabilityPipelineFilterProcessor::new(
                                                true,
                                                "filter-processor".to_string(),
                                                "status:error".to_string(),
                                                ObservabilityPipelineFilterProcessorType::FILTER,
                                            ),
                                        ),
                                    )
                                ],
                            )
                        ],
                    ),
                    "Main Observability Pipeline".to_string(),
                ),
                "pipelines".to_string(),
            ),
        );
    let mut configuration = datadog::Configuration::new();
    configuration.set_unstable_operation_enabled("v2.ValidatePipeline", true);
    let api = ObservabilityPipelinesAPI::with_config(configuration);
    let resp = api.validate_pipeline(body).await;
    if let Ok(value) = resp {
        println!("{:#?}", value);
    } else {
        println!("{:#?}", resp.unwrap_err());
    }
}

Instructions

First install the library and its dependencies, then save the example to src/main.rs and run the following command, replacing datadoghq.com with your Datadog site if needed:

    
DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" cargo run
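
For the install step referenced above, the Rust client is published on crates.io as datadog-api-client; the example also needs an async runtime for #[tokio::main], so a typical setup in the Cargo project looks like:

# Add the Datadog API client crate and the tokio async runtime to the Cargo project
cargo add datadog-api-client
cargo add tokio --features full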
/**
 * Validate an observability pipeline returns "OK" response
 */

import { client, v2 } from "@datadog/datadog-api-client";

const configuration = client.createConfiguration();
configuration.unstableOperations["v2.validatePipeline"] = true;
const apiInstance = new v2.ObservabilityPipelinesApi(configuration);

const params: v2.ObservabilityPipelinesApiValidatePipelineRequest = {
  body: {
    data: {
      attributes: {
        config: {
          destinations: [
            {
              id: "datadog-logs-destination",
              inputs: ["my-processor-group"],
              type: "datadog_logs",
            },
          ],
          processors: [
            {
              enabled: true,
              id: "my-processor-group",
              include: "service:my-service",
              inputs: ["datadog-agent-source"],
              processors: [
                {
                  enabled: true,
                  id: "filter-processor",
                  include: "status:error",
                  type: "filter",
                },
              ],
            },
          ],
          sources: [
            {
              id: "datadog-agent-source",
              type: "datadog_agent",
            },
          ],
        },
        name: "Main Observability Pipeline",
      },
      type: "pipelines",
    },
  },
};

apiInstance
  .validatePipeline(params)
  .then((data: v2.ValidationResponse) => {
    console.log(
      "API called successfully. Returned data: " + JSON.stringify(data)
    );
  })
  .catch((error: any) => console.error(error));

Instructions

First install the library and its dependencies, then save the example to example.ts and compile and run it with the following commands, replacing datadoghq.com with your Datadog site if needed:

    
export DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>"
tsc "example.ts" && node "example.js"
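
For the install step referenced above, the Node.js client is published as @datadog/datadog-api-client; with the tsc/node workflow shown, TypeScript itself is also needed as a dev dependency:

# Install the Datadog API client for Node.js plus the TypeScript tooling used above
npm install @datadog/datadog-api-client
npm install --save-dev typescript @types/node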