Contains the pipeline’s ID, type, and configuration attributes.
attributes [required]
object
Defines the pipeline’s name and its components (sources, processors, and destinations).
config [required]
object
Specifies the pipeline's configuration, including its sources, processors, and destinations.
destinations [required]
[ <oneOf>]
A list of destination components where processed logs are sent.
Option 1
object
The datadog_logs destination forwards logs to Datadog Log Management.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The destination type. The value should always be datadog_logs.
Allowed enum values: datadog_logs
default: datadog_logs
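Put together, a minimal datadog_logs destination object might look like the following sketch; the component IDs are illustrative:

```json
{
  "id": "datadog-logs-destination",
  "type": "datadog_logs",
  "inputs": ["filter-processor"]
}
```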
processors [required]
[ <oneOf>]
A list of processors that transform or enrich log data.
Option 1
object
The filter processor allows conditional processing of logs based on a Datadog search query. Logs that match the include query are passed through; others are discarded.
id [required]
string
The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs should pass through the filter. Logs that match this query continue to downstream components; others are dropped.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The processor type. The value should always be filter.
Allowed enum values: filter
default: filter
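As a sketch, a filter processor that only passes error logs from one service might look like this; the IDs and search query are illustrative:

```json
{
  "id": "filter-processor",
  "type": "filter",
  "include": "service:my-service status:error",
  "inputs": ["datadog-agent-source"]
}
```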
Option 2
object
The parse_json processor extracts JSON from a specified field and flattens it into the event. This is useful when logs contain embedded JSON as a string.
field [required]
string
The name of the log field that contains a JSON string.
id [required]
string
A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The processor type. The value should always be parse_json.
Allowed enum values: parse_json
default: parse_json
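For example, assuming the embedded JSON string lives in a field named message, a parse_json processor could be configured like this sketch (IDs are illustrative):

```json
{
  "id": "parse-json-processor",
  "type": "parse_json",
  "field": "message",
  "include": "*",
  "inputs": ["filter-processor"]
}
```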
Option 3
object
The quota processor measures logging traffic for logs that match a specified filter. When the configured daily quota is met, the processor can drop further events or trigger an alert.
drop_events [required]
boolean
If set to true, logs that match the quota filter and arrive after the quota has been met are dropped; only logs that do not match the filter query continue through the pipeline.
id [required]
string
The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).
ignore_when_missing_partitions
boolean
If true, the processor skips quota checks when partition fields are missing from the logs.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
limit [required]
object
The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.
enforce [required]
enum
The unit used for quota enforcement: bytes for data size or events for event count.
Allowed enum values: bytes,events
limit [required]
int64
The limit for quota enforcement.
name [required]
string
Name for identifying the processor.
overrides
[object]
A list of alternate quota rules that apply to specific sets of events, identified by matching field values. Each override can define a custom limit.
fields [required]
[object]
A list of field matchers used to apply a specific override. If an event matches all listed key-value pairs, the corresponding override limit is enforced.
name [required]
string
The field name.
value [required]
string
The field value.
limit [required]
object
The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.
enforce [required]
enum
The unit used for quota enforcement: bytes for data size or events for event count.
Allowed enum values: bytes,events
limit [required]
int64
The limit for quota enforcement.
partition_fields
[string]
A list of fields used to segment log traffic for quota enforcement. Quotas are tracked independently by unique combinations of these field values.
type [required]
enum
The processor type. The value should always be quota.
Allowed enum values: quota
default: quota
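Combining these fields, a quota processor with a 10 GB daily limit per service and a higher events-based override for one service might look like this sketch; the name, query, field values, and limits are all illustrative:

```json
{
  "id": "quota-processor",
  "type": "quota",
  "name": "daily-ingest-quota",
  "include": "*",
  "inputs": ["parse-json-processor"],
  "drop_events": true,
  "limit": { "enforce": "bytes", "limit": 10000000000 },
  "partition_fields": ["service"],
  "overrides": [
    {
      "fields": [{ "name": "service", "value": "checkout" }],
      "limit": { "enforce": "events", "limit": 500000 }
    }
  ]
}
```

Because partition_fields is set, the quota is tracked separately for each unique service value, and the override applies only to events whose service field equals checkout.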
Option 4
object
The add_fields processor adds static key-value fields to logs.
fields [required]
[object]
A list of static fields (key-value pairs) that are added to each log event processed by this component.
name [required]
string
The field name.
value [required]
string
The field value.
id [required]
string
The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The processor type. The value should always be add_fields.
Allowed enum values: add_fields
default: add_fields
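For instance, an add_fields processor that tags every log with a static env field could be sketched as follows (IDs and values are illustrative):

```json
{
  "id": "add-fields-processor",
  "type": "add_fields",
  "include": "*",
  "inputs": ["quota-processor"],
  "fields": [{ "name": "env", "value": "prod" }]
}
```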
Option 5
object
The remove_fields processor deletes specified fields from logs.
fields [required]
[string]
A list of field names to be removed from each log event.
id [required]
string
The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The processor type. The value should always be remove_fields.
Allowed enum values: remove_fields
default: remove_fields
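As a sketch, a remove_fields processor that strips sensitive fields from every log might look like this; the IDs and field names are illustrative:

```json
{
  "id": "remove-fields-processor",
  "type": "remove_fields",
  "include": "*",
  "inputs": ["add-fields-processor"],
  "fields": ["password", "credit_card"]
}
```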
Option 6
object
The rename_fields processor changes field names.
fields [required]
[object]
A list of rename rules specifying which fields to rename in the event, what to rename them to, and whether to preserve the original fields.
destination [required]
string
The field name to assign the renamed value to.
preserve_source [required]
boolean
Indicates whether the original source field should be kept (true) or removed (false) after renaming.
source [required]
string
The original field name in the log event that should be renamed.
id [required]
string
A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The processor type. The value should always be rename_fields.
Allowed enum values: rename_fields
default: rename_fields
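Putting the rename rule fields together, a rename_fields processor that renames msg to message and drops the original field might be sketched as follows (IDs and field names are illustrative):

```json
{
  "id": "rename-fields-processor",
  "type": "rename_fields",
  "include": "*",
  "inputs": ["remove-fields-processor"],
  "fields": [
    { "source": "msg", "destination": "message", "preserve_source": false }
  ]
}
```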
sources [required]
[ <oneOf>]
A list of configured data sources for the pipeline.
Option 1
object
The kafka source ingests data from Apache Kafka topics.
group_id [required]
string
Consumer group ID used by the Kafka client.
id [required]
string
The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).
librdkafka_options
[object]
Optional list of advanced Kafka client configuration options, defined as key-value pairs.
name [required]
string
The name of the librdkafka configuration option to set.
value [required]
string
The value assigned to the specified librdkafka configuration option.
sasl
object
Specifies the SASL mechanism for authenticating with a Kafka cluster.
mechanism
enum
SASL mechanism used for Kafka authentication.
Allowed enum values: PLAIN,SCRAM-SHA-256,SCRAM-SHA-512
tls
object
Configuration for enabling TLS encryption.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
topics [required]
[string]
A list of Kafka topic names to subscribe to. The source ingests messages from each topic specified.
type [required]
enum
The source type. The value should always be kafka.
Allowed enum values: kafka
default: kafka
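Combining these fields, a kafka source with SASL authentication, mutual TLS, and one advanced client option might look like this sketch; the IDs, topic, paths, and option value are illustrative (fetch.message.max.bytes is a standard librdkafka option):

```json
{
  "id": "kafka-source",
  "type": "kafka",
  "group_id": "pipeline-consumer-group",
  "topics": ["app-logs"],
  "sasl": { "mechanism": "SCRAM-SHA-256" },
  "tls": {
    "crt_file": "/etc/certs/client.crt",
    "key_file": "/etc/certs/client.key",
    "ca_file": "/etc/certs/ca.crt"
  },
  "librdkafka_options": [
    { "name": "fetch.message.max.bytes", "value": "1048576" }
  ]
}
```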
Option 2
object
The datadog_agent source collects logs from the Datadog Agent.
id [required]
string
The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).
tls
object
Configuration for enabling TLS encryption.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
type [required]
enum
The source type. The value should always be datadog_agent.
Allowed enum values: datadog_agent
default: datadog_agent
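A datadog_agent source is simpler; a TLS-enabled sketch might look like this (the ID and certificate paths are illustrative):

```json
{
  "id": "datadog-agent-source",
  "type": "datadog_agent",
  "tls": {
    "crt_file": "/etc/certs/server.crt",
    "key_file": "/etc/certs/server.key"
  }
}
```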
name [required]
string
Name of the pipeline.
id [required]
string
Unique identifier for the pipeline.
type [required]
string
The resource type identifier. For pipeline resources, this should always be set to pipelines.
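Assembled from the fields above, a complete pipeline resource wiring one source through one processor to one destination might look like this sketch; the JSON:API-style data envelope, the pipeline name, and all component IDs are illustrative:

```json
{
  "data": {
    "type": "pipelines",
    "attributes": {
      "name": "my-pipeline",
      "config": {
        "sources": [
          { "id": "datadog-agent-source", "type": "datadog_agent" }
        ],
        "processors": [
          {
            "id": "filter-processor",
            "type": "filter",
            "include": "status:error",
            "inputs": ["datadog-agent-source"]
          }
        ],
        "destinations": [
          {
            "id": "datadog-logs-destination",
            "type": "datadog_logs",
            "inputs": ["filter-processor"]
          }
        ]
      }
    }
  }
}
```

Note how each component's inputs array references the id of the component upstream of it; this is how the pipeline graph is defined.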
Contains the pipeline’s ID, type, and configuration attributes.
attributes [required]
object
Defines the pipeline’s name and its components (sources, processors, and destinations).
config [required]
object
Specifies the pipeline's configuration, including its sources, processors, and destinations.
destinations [required]
[ <oneOf>]
A list of destination components where processed logs are sent.
Option 1
object
The datadog_logs destination forwards logs to Datadog Log Management.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The destination type. The value should always be datadog_logs.
Allowed enum values: datadog_logs
default: datadog_logs
processors [required]
[ <oneOf>]
A list of processors that transform or enrich log data.
Option 1
object
The filter processor allows conditional processing of logs based on a Datadog search query. Logs that match the include query are passed through; others are discarded.
id [required]
string
The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs should pass through the filter. Logs that match this query continue to downstream components; others are dropped.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The processor type. The value should always be filter.
Allowed enum values: filter
default: filter
Option 2
object
The parse_json processor extracts JSON from a specified field and flattens it into the event. This is useful when logs contain embedded JSON as a string.
field [required]
string
The name of the log field that contains a JSON string.
id [required]
string
A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The processor type. The value should always be parse_json.
Allowed enum values: parse_json
default: parse_json
Option 3
object
The Quota Processor measures logging traffic for logs that match a specified filter. When the configured daily quota is met, the processor can drop or alert.
drop_events [required]
boolean
If set to true, logs that matched the quota filter and sent after the quota has been met are dropped; only logs that did not match the filter query continue through the pipeline.
id [required]
string
The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).
ignore_when_missing_partitions
boolean
If true, the processor skips quota checks when partition fields are missing from the logs.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
limit [required]
object
The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.
enforce [required]
enum
Unit for quota enforcement in bytes for data size or events for count.
Allowed enum values: bytes,events
limit [required]
int64
The limit for quota enforcement.
name [required]
string
Name for identifying the processor.
overrides
[object]
A list of alternate quota rules that apply to specific sets of events, identified by matching field values. Each override can define a custom limit.
fields [required]
[object]
A list of field matchers used to apply a specific override. If an event matches all listed key-value pairs, the corresponding override limit is enforced.
name [required]
string
The field name.
value [required]
string
The field value.
limit [required]
object
The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.
enforce [required]
enum
Unit for quota enforcement in bytes for data size or events for count.
Allowed enum values: bytes,events
limit [required]
int64
The limit for quota enforcement.
partition_fields
[string]
A list of fields used to segment log traffic for quota enforcement. Quotas are tracked independently by unique combinations of these field values.
type [required]
enum
The processor type. The value should always be quota.
Allowed enum values: quota
default: quota
Option 4
object
The add_fields processor adds static key-value fields to logs.
fields [required]
[object]
A list of static fields (key-value pairs) that is added to each log event processed by this component.
name [required]
string
The field name.
value [required]
string
The field value.
id [required]
string
The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The processor type. The value should always be add_fields.
Allowed enum values: add_fields
default: add_fields
Option 5
object
The remove_fields processor deletes specified fields from logs.
fields [required]
[string]
A list of field names to be removed from each log event.
id [required]
string
The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
inputs [required]
[string]
The PipelineRemoveFieldsProcessorinputs.
type [required]
enum
The processor type. The value should always be remove_fields.
Allowed enum values: remove_fields
default: remove_fields
Option 6
object
The rename_fields processor changes field names.
fields [required]
[object]
A list of rename rules specifying which fields to rename in the event, what to rename them to, and whether to preserve the original fields.
destination [required]
string
The field name to assign the renamed value to.
preserve_source [required]
boolean
Indicates whether the original field, that is received from the source, should be kept (true) or removed (false) after renaming.
source [required]
string
The original field name in the log event that should be renamed.
id [required]
string
A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The processor type. The value should always be rename_fields.
Allowed enum values: rename_fields
default: rename_fields
sources [required]
[ <oneOf>]
A list of configured data sources for the pipeline.
Option 1
object
The kafka source ingests data from Apache Kafka topics.
group_id [required]
string
Consumer group ID used by the Kafka client.
id [required]
string
The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).
librdkafka_options
[object]
Optional list of advanced Kafka client configuration options, defined as key-value pairs.
name [required]
string
The name of the librdkafka configuration option to set.
value [required]
string
The value assigned to the specified librdkafka configuration option.
sasl
object
Specifies the SASL mechanism for authenticating with a Kafka cluster.
mechanism
enum
SASL mechanism used for Kafka authentication.
Allowed enum values: PLAIN,SCRAM-SHA-256,SCRAM-SHA-512
tls
object
Configuration for enabling TLS encryption.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
topics [required]
[string]
A list of Kafka topic names to subscribe to. The source ingests messages from each topic specified.
type [required]
enum
The source type. The value should always be kafka.
Allowed enum values: kafka
default: kafka
Option 2
object
The datadog_agent source collects logs from the Datadog Agent.
id [required]
string
The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).
tls
object
Configuration for enabling TLS encryption.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
type [required]
enum
The source type. The value should always be datadog_agent.
Allowed enum values: datadog_agent
default: datadog_agent
name [required]
string
Name of the pipeline.
id [required]
string
Unique identifier for the pipeline.
type [required]
string
The resource type identifier. For pipeline resources, this should always be set to pipelines.
Contains the pipeline’s ID, type, and configuration attributes.
attributes [required]
object
Defines the pipeline’s name and its components (sources, processors, and destinations).
config [required]
object
Specifies the pipeline's configuration, including its sources, processors, and destinations.
destinations [required]
[ <oneOf>]
A list of destination components where processed logs are sent.
Option 1
object
The datadog_logs destination forwards logs to Datadog Log Management.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The destination type. The value should always be datadog_logs.
Allowed enum values: datadog_logs
default: datadog_logs
processors [required]
[ <oneOf>]
A list of processors that transform or enrich log data.
Option 1
object
The filter processor allows conditional processing of logs based on a Datadog search query. Logs that match the include query are passed through; others are discarded.
id [required]
string
The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs should pass through the filter. Logs that match this query continue to downstream components; others are dropped.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The processor type. The value should always be filter.
Allowed enum values: filter
default: filter
Option 2
object
The parse_json processor extracts JSON from a specified field and flattens it into the event. This is useful when logs contain embedded JSON as a string.
field [required]
string
The name of the log field that contains a JSON string.
id [required]
string
A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The processor type. The value should always be parse_json.
Allowed enum values: parse_json
default: parse_json
Option 3
object
The Quota Processor measures logging traffic for logs that match a specified filter. When the configured daily quota is met, the processor can drop or alert.
drop_events [required]
boolean
If set to true, logs that matched the quota filter and sent after the quota has been met are dropped; only logs that did not match the filter query continue through the pipeline.
id [required]
string
The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).
ignore_when_missing_partitions
boolean
If true, the processor skips quota checks when partition fields are missing from the logs.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
limit [required]
object
The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.
enforce [required]
enum
Unit for quota enforcement in bytes for data size or events for count.
Allowed enum values: bytes,events
limit [required]
int64
The limit for quota enforcement.
name [required]
string
Name for identifying the processor.
overrides
[object]
A list of alternate quota rules that apply to specific sets of events, identified by matching field values. Each override can define a custom limit.
fields [required]
[object]
A list of field matchers used to apply a specific override. If an event matches all listed key-value pairs, the corresponding override limit is enforced.
name [required]
string
The field name.
value [required]
string
The field value.
limit [required]
object
The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.
enforce [required]
enum
Unit for quota enforcement in bytes for data size or events for count.
Allowed enum values: bytes,events
limit [required]
int64
The limit for quota enforcement.
partition_fields
[string]
A list of fields used to segment log traffic for quota enforcement. Quotas are tracked independently by unique combinations of these field values.
type [required]
enum
The processor type. The value should always be quota.
Allowed enum values: quota
default: quota
Option 4
object
The add_fields processor adds static key-value fields to logs.
fields [required]
[object]
A list of static fields (key-value pairs) that is added to each log event processed by this component.
name [required]
string
The field name.
value [required]
string
The field value.
id [required]
string
The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The processor type. The value should always be add_fields.
Allowed enum values: add_fields
default: add_fields
Option 5
object
The remove_fields processor deletes specified fields from logs.
fields [required]
[string]
A list of field names to be removed from each log event.
id [required]
string
The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
inputs [required]
[string]
The PipelineRemoveFieldsProcessorinputs.
type [required]
enum
The processor type. The value should always be remove_fields.
Allowed enum values: remove_fields
default: remove_fields
Option 6
object
The rename_fields processor changes field names.
fields [required]
[object]
A list of rename rules specifying which fields to rename in the event, what to rename them to, and whether to preserve the original fields.
destination [required]
string
The field name to assign the renamed value to.
preserve_source [required]
boolean
Indicates whether the original field, that is received from the source, should be kept (true) or removed (false) after renaming.
source [required]
string
The original field name in the log event that should be renamed.
id [required]
string
A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The processor type. The value should always be rename_fields.
Allowed enum values: rename_fields
default: rename_fields
sources [required]
[ <oneOf>]
A list of configured data sources for the pipeline.
Option 1
object
The kafka source ingests data from Apache Kafka topics.
group_id [required]
string
Consumer group ID used by the Kafka client.
id [required]
string
The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).
librdkafka_options
[object]
Optional list of advanced Kafka client configuration options, defined as key-value pairs.
name [required]
string
The name of the librdkafka configuration option to set.
value [required]
string
The value assigned to the specified librdkafka configuration option.
sasl
object
Specifies the SASL mechanism for authenticating with a Kafka cluster.
mechanism
enum
SASL mechanism used for Kafka authentication.
Allowed enum values: PLAIN,SCRAM-SHA-256,SCRAM-SHA-512
tls
object
Configuration for enabling TLS encryption.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
topics [required]
[string]
A list of Kafka topic names to subscribe to. The source ingests messages from each topic specified.
type [required]
enum
The source type. The value should always be kafka.
Allowed enum values: kafka
default: kafka
Option 2
object
The datadog_agent source collects logs from the Datadog Agent.
id [required]
string
The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).
tls
object
Configuration for enabling TLS encryption.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
type [required]
enum
The source type. The value should always be datadog_agent.
Allowed enum values: datadog_agent
default: datadog_agent
name [required]
string
Name of the pipeline.
id [required]
string
Unique identifier for the pipeline.
type [required]
string
The resource type identifier. For pipeline resources, this should always be set to pipelines.
Contains the pipeline’s ID, type, and configuration attributes.
attributes [required]
object
Defines the pipeline’s name and its components (sources, processors, and destinations).
config [required]
object
Specifies the pipeline's configuration, including its sources, processors, and destinations.
destinations [required]
[ <oneOf>]
A list of destination components where processed logs are sent.
Option 1
object
The datadog_logs destination forwards logs to Datadog Log Management.
id [required]
string
The unique identifier for this component.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The destination type. The value should always be datadog_logs.
Allowed enum values: datadog_logs
default: datadog_logs
processors [required]
[ <oneOf>]
A list of processors that transform or enrich log data.
Option 1
object
The filter processor allows conditional processing of logs based on a Datadog search query. Logs that match the include query are passed through; others are discarded.
id [required]
string
The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs should pass through the filter. Logs that match this query continue to downstream components; others are dropped.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The processor type. The value should always be filter.
Allowed enum values: filter
default: filter
Option 2
object
The parse_json processor extracts JSON from a specified field and flattens it into the event. This is useful when logs contain embedded JSON as a string.
field [required]
string
The name of the log field that contains a JSON string.
id [required]
string
A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The processor type. The value should always be parse_json.
Allowed enum values: parse_json
default: parse_json
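A hypothetical parse_json processor that flattens a JSON string stored in the `message` field might look like this (all values are placeholders):

```json
{
  "id": "parse-json-processor",
  "type": "parse_json",
  "field": "message",
  "include": "*",
  "inputs": ["datadog-agent-source"]
}
```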
Option 3
object
The quota processor measures the volume of logs that match a specified filter. When the configured daily quota is met, the processor can drop further matching logs or trigger an alert.
drop_events [required]
boolean
If set to true, logs that match the quota filter and arrive after the quota has been met are dropped; only logs that do not match the filter query continue through the pipeline.
id [required]
string
The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).
ignore_when_missing_partitions
boolean
If true, the processor skips quota checks when partition fields are missing from the logs.
include [required]
string
A Datadog search query used to determine which logs this processor targets.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
limit [required]
object
The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.
enforce [required]
enum
The unit for quota enforcement: bytes for data size or events for event count.
Allowed enum values: bytes,events
limit [required]
int64
The limit for quota enforcement.
name [required]
string
Name for identifying the processor.
overrides
[object]
A list of alternate quota rules that apply to specific sets of events, identified by matching field values. Each override can define a custom limit.
fields [required]
[object]
A list of field matchers used to apply a specific override. If an event matches all listed key-value pairs, the corresponding override limit is enforced.
name [required]
string
The field name.
value [required]
string
The field value.
limit [required]
object
The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.
enforce [required]
enum
The unit for quota enforcement: bytes for data size or events for event count.
Allowed enum values: bytes,events
limit [required]
int64
The limit for quota enforcement.
partition_fields
[string]
A list of fields used to segment log traffic for quota enforcement. Quotas are tracked independently by unique combinations of these field values.
type [required]
enum
The processor type. The value should always be quota.
Allowed enum values: quota
default: quota
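Putting these fields together, a quota processor enforcing a hypothetical 10 GB daily limit per service, with a stricter event-count override for one service, might be sketched as follows (all names, limits, and field values are illustrative):

```json
{
  "id": "quota-processor",
  "type": "quota",
  "name": "daily-ingest-quota",
  "include": "*",
  "inputs": ["filter-processor"],
  "drop_events": true,
  "limit": { "enforce": "bytes", "limit": 10000000000 },
  "partition_fields": ["service"],
  "ignore_when_missing_partitions": true,
  "overrides": [
    {
      "fields": [{ "name": "service", "value": "checkout" }],
      "limit": { "enforce": "events", "limit": 500000 }
    }
  ]
}
```

Because partition_fields is set to `["service"]`, the quota is tracked independently for each distinct service value, and the override applies only to events whose service field equals `checkout`.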
Option 4
object
The add_fields processor adds static key-value fields to logs.
fields [required]
[object]
A list of static fields (key-value pairs) that are added to each log event processed by this component.
name [required]
string
The field name.
value [required]
string
The field value.
id [required]
string
The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The processor type. The value should always be add_fields.
Allowed enum values: add_fields
default: add_fields
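A hypothetical add_fields processor that stamps every log with a static `env` field might look like this (all values are placeholders):

```json
{
  "id": "add-fields-processor",
  "type": "add_fields",
  "include": "*",
  "inputs": ["parse-json-processor"],
  "fields": [{ "name": "env", "value": "production" }]
}
```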
Option 5
object
The remove_fields processor deletes specified fields from logs.
fields [required]
[string]
A list of field names to be removed from each log event.
id [required]
string
The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The processor type. The value should always be remove_fields.
Allowed enum values: remove_fields
default: remove_fields
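For example, a remove_fields processor stripping sensitive fields before logs leave the pipeline might be configured like this (field names and IDs are hypothetical):

```json
{
  "id": "remove-fields-processor",
  "type": "remove_fields",
  "include": "*",
  "inputs": ["add-fields-processor"],
  "fields": ["password", "credit_card"]
}
```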
Option 6
object
The rename_fields processor changes field names.
fields [required]
[object]
A list of rename rules specifying which fields to rename in the event, what to rename them to, and whether to preserve the original fields.
destination [required]
string
The field name to assign the renamed value to.
preserve_source [required]
boolean
Indicates whether the original source field should be kept (true) or removed (false) after renaming.
source [required]
string
The original field name in the log event that should be renamed.
id [required]
string
A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).
include [required]
string
A Datadog search query used to determine which logs this processor targets.
inputs [required]
[string]
A list of component IDs whose output is used as the input for this component.
type [required]
enum
The processor type. The value should always be rename_fields.
Allowed enum values: rename_fields
default: rename_fields
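As a sketch, a rename_fields processor that renames a hypothetical `msg` field to `message` and discards the original might look like this:

```json
{
  "id": "rename-fields-processor",
  "type": "rename_fields",
  "include": "*",
  "inputs": ["filter-processor"],
  "fields": [
    { "source": "msg", "destination": "message", "preserve_source": false }
  ]
}
```

With preserve_source set to false, only the destination field remains on the event after renaming.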
sources [required]
[ <oneOf>]
A list of configured data sources for the pipeline.
Option 1
object
The kafka source ingests data from Apache Kafka topics.
group_id [required]
string
Consumer group ID used by the Kafka client.
id [required]
string
The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).
librdkafka_options
[object]
Optional list of advanced Kafka client configuration options, defined as key-value pairs.
name [required]
string
The name of the librdkafka configuration option to set.
value [required]
string
The value assigned to the specified librdkafka configuration option.
sasl
object
Specifies the SASL mechanism for authenticating with a Kafka cluster.
mechanism
enum
SASL mechanism used for Kafka authentication.
Allowed enum values: PLAIN,SCRAM-SHA-256,SCRAM-SHA-512
tls
object
Configuration for enabling TLS encryption.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
topics [required]
[string]
A list of Kafka topic names to subscribe to. The source ingests messages from each topic specified.
type [required]
enum
The source type. The value should always be kafka.
Allowed enum values: kafka
default: kafka
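Combining these fields, a kafka source with SASL authentication, mutual TLS, and one advanced client option might be sketched as follows (all paths, topic names, and option values are hypothetical):

```json
{
  "id": "kafka-source",
  "type": "kafka",
  "group_id": "consumer-group-0",
  "topics": ["app-logs"],
  "sasl": { "mechanism": "SCRAM-SHA-256" },
  "tls": {
    "crt_file": "/etc/certs/client.crt",
    "key_file": "/etc/certs/client.key",
    "ca_file": "/etc/certs/ca.crt"
  },
  "librdkafka_options": [
    { "name": "fetch.message.max.bytes", "value": "1048576" }
  ]
}
```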
Option 2
object
The datadog_agent source collects logs from the Datadog Agent.
id [required]
string
The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).
tls
object
Configuration for enabling TLS encryption.
ca_file
string
Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.
crt_file [required]
string
Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.
key_file
string
Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.
type [required]
enum
The source type. The value should always be datadog_agent.
Allowed enum values: datadog_agent
default: datadog_agent
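A minimal datadog_agent source with TLS enabled might look like this (the id and certificate paths are placeholders):

```json
{
  "id": "datadog-agent-source",
  "type": "datadog_agent",
  "tls": {
    "crt_file": "/etc/certs/server.crt",
    "key_file": "/etc/certs/server.key"
  }
}
```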
name [required]
string
Name of the pipeline.
id [required]
string
Unique identifier for the pipeline.
type [required]
string
The resource type identifier. For pipeline resources, this should always be set to pipelines.
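Putting the schema together, a minimal pipeline resource object might look like the following sketch (the id, name, and all component IDs are placeholders; the top-level id is typically assigned by the server on creation):

```json
{
  "id": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
  "type": "pipelines",
  "attributes": {
    "name": "main-observability-pipeline",
    "config": {
      "sources": [
        { "id": "datadog-agent-source", "type": "datadog_agent" }
      ],
      "processors": [
        {
          "id": "filter-processor",
          "type": "filter",
          "include": "service:web-app",
          "inputs": ["datadog-agent-source"]
        }
      ],
      "destinations": [
        {
          "id": "datadog-logs-destination",
          "type": "datadog_logs",
          "inputs": ["filter-processor"]
        }
      ]
    }
  }
}
```

Note how the inputs arrays chain the components: the processor consumes the source's output, and the destination consumes the processor's output.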