Observability Pipelines

Observability Pipelines allows you to collect and process logs within your own infrastructure, and then route them to downstream integrations.

Note: This endpoint is in Preview.

POST https://api.ap1.datadoghq.com/api/v2/remote_config/products/obs_pipelines/pipelines
POST https://api.datadoghq.eu/api/v2/remote_config/products/obs_pipelines/pipelines
POST https://api.ddog-gov.com/api/v2/remote_config/products/obs_pipelines/pipelines
POST https://api.datadoghq.com/api/v2/remote_config/products/obs_pipelines/pipelines
POST https://api.us3.datadoghq.com/api/v2/remote_config/products/obs_pipelines/pipelines
POST https://api.us5.datadoghq.com/api/v2/remote_config/products/obs_pipelines/pipelines

Overview

Create a new pipeline.

Request

Body Data (required)

Field

Type

Description

data [required]

object

Contains the pipeline’s ID, type, and configuration attributes.

attributes [required]

object

Defines the pipeline’s name and its components (sources, processors, and destinations).

config [required]

object

Specifies the pipeline's configuration, including its sources, processors, and destinations.

destinations [required]

[ <oneOf>]

A list of destination components where processed logs are sent.

Option 1

object

The datadog_logs destination forwards logs to Datadog Log Management.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be datadog_logs. Allowed enum values: datadog_logs

default: datadog_logs

processors [required]

[ <oneOf>]

A list of processors that transform or enrich log data.

Option 1

object

The filter processor allows conditional processing of logs based on a Datadog search query. Logs that match the include query are passed through; others are discarded.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs should pass through the filter. Logs that match this query continue to downstream components; others are dropped.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be filter. Allowed enum values: filter

default: filter

Option 2

object

The parse_json processor extracts JSON from a specified field and flattens it into the event. This is useful when logs contain embedded JSON as a string.

field [required]

string

The name of the log field that contains a JSON string.

id [required]

string

A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be parse_json. Allowed enum values: parse_json

default: parse_json

Option 3

object

The quota processor measures logging traffic for logs that match a specified filter. When the configured daily quota is met, the processor can drop or alert.

drop_events [required]

boolean

If set to true, logs that match the quota filter and are sent after the quota has been met are dropped; only logs that did not match the filter query continue through the pipeline.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

ignore_when_missing_partitions

boolean

If true, the processor skips quota checks when partition fields are missing from the logs.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

limit [required]

object

The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.

enforce [required]

enum

Unit for quota enforcement: bytes for data size, or events for event count. Allowed enum values: bytes,events

limit [required]

int64

The limit for quota enforcement.

name [required]

string

Name for identifying the processor.

overrides

[object]

A list of alternate quota rules that apply to specific sets of events, identified by matching field values. Each override can define a custom limit.

fields [required]

[object]

A list of field matchers used to apply a specific override. If an event matches all listed key-value pairs, the corresponding override limit is enforced.

name [required]

string

The field name.

value [required]

string

The field value.

limit [required]

object

The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.

enforce [required]

enum

Unit for quota enforcement: bytes for data size, or events for event count. Allowed enum values: bytes,events

limit [required]

int64

The limit for quota enforcement.

partition_fields

[string]

A list of fields used to segment log traffic for quota enforcement. Quotas are tracked independently by unique combinations of these field values.

type [required]

enum

The processor type. The value should always be quota. Allowed enum values: quota

default: quota

Option 4

object

The add_fields processor adds static key-value fields to logs.

fields [required]

[object]

A list of static fields (key-value pairs) added to each log event processed by this component.

name [required]

string

The field name.

value [required]

string

The field value.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be add_fields. Allowed enum values: add_fields

default: add_fields

Option 5

object

The remove_fields processor deletes specified fields from logs.

fields [required]

[string]

A list of field names to be removed from each log event.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be remove_fields. Allowed enum values: remove_fields

default: remove_fields

Option 6

object

The rename_fields processor changes field names.

fields [required]

[object]

A list of rename rules specifying which fields to rename in the event, what to rename them to, and whether to preserve the original fields.

destination [required]

string

The field name to assign the renamed value to.

preserve_source [required]

boolean

Indicates whether the original field, as received from the source, should be kept (true) or removed (false) after renaming.

source [required]

string

The original field name in the log event that should be renamed.

id [required]

string

A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be rename_fields. Allowed enum values: rename_fields

default: rename_fields

sources [required]

[ <oneOf>]

A list of configured data sources for the pipeline.

Option 1

object

The kafka source ingests data from Apache Kafka topics.

group_id [required]

string

Consumer group ID used by the Kafka client.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

librdkafka_options

[object]

Optional list of advanced Kafka client configuration options, defined as key-value pairs.

name [required]

string

The name of the librdkafka configuration option to set.

value [required]

string

The value assigned to the specified librdkafka configuration option.

sasl

object

Specifies the SASL mechanism for authenticating with a Kafka cluster.

mechanism

enum

SASL mechanism used for Kafka authentication. Allowed enum values: PLAIN,SCRAM-SHA-256,SCRAM-SHA-512

tls

object

Configuration for enabling TLS encryption.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

topics [required]

[string]

A list of Kafka topic names to subscribe to. The source ingests messages from each topic specified.

type [required]

enum

The source type. The value should always be kafka. Allowed enum values: kafka

default: kafka

Option 2

object

The datadog_agent source collects logs from the Datadog Agent.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

tls

object

Configuration for enabling TLS encryption.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be datadog_agent. Allowed enum values: datadog_agent

default: datadog_agent

name [required]

string

Name of the pipeline.

type [required]

string

The resource type identifier. For pipeline resources, this should always be set to pipelines.

default: pipelines

{
  "data": {
    "attributes": {
      "config": {
        "destinations": [
          {
            "id": "datadog-logs-destination",
            "inputs": [
              "filter-processor"
            ],
            "type": "datadog_logs"
          }
        ],
        "processors": [
          {
            "id": "filter-processor",
            "include": "service:my-service",
            "inputs": [
              "datadog-agent-source"
            ],
            "type": "filter"
          }
        ],
        "sources": [
          {
            "id": "datadog-agent-source",
            "type": "datadog_agent"
          }
        ]
      },
      "name": "Main Observability Pipeline"
    },
    "type": "pipelines"
  }
}
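
For illustration, the following hypothetical config combines several of the processor options documented above: a parse_json processor that flattens an embedded JSON message field, followed by a quota processor that enforces a per-service daily limit with an override for one service. This is a sketch against the schema only; the component IDs, the quota limits, and the billing override are invented for the example.

{
  "destinations": [
    {
      "id": "datadog-logs-destination",
      "inputs": [
        "daily-quota"
      ],
      "type": "datadog_logs"
    }
  ],
  "processors": [
    {
      "id": "parse-json-processor",
      "field": "message",
      "include": "*",
      "inputs": [
        "datadog-agent-source"
      ],
      "type": "parse_json"
    },
    {
      "id": "daily-quota",
      "name": "per-service-daily-quota",
      "include": "*",
      "inputs": [
        "parse-json-processor"
      ],
      "drop_events": true,
      "partition_fields": [
        "service"
      ],
      "limit": {
        "enforce": "bytes",
        "limit": 10000000000
      },
      "overrides": [
        {
          "fields": [
            {
              "name": "service",
              "value": "billing"
            }
          ],
          "limit": {
            "enforce": "bytes",
            "limit": 50000000000
          }
        }
      ],
      "type": "quota"
    }
  ],
  "sources": [
    {
      "id": "datadog-agent-source",
      "type": "datadog_agent"
    }
  ]
}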

Response

OK

Top-level schema representing a pipeline.

Field

Type

Description

data [required]

object

Contains the pipeline’s ID, type, and configuration attributes.

attributes [required]

object

Defines the pipeline’s name and its components (sources, processors, and destinations).

config [required]

object

Specifies the pipeline's configuration, including its sources, processors, and destinations.

destinations [required]

[ <oneOf>]

A list of destination components where processed logs are sent.

Option 1

object

The datadog_logs destination forwards logs to Datadog Log Management.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be datadog_logs. Allowed enum values: datadog_logs

default: datadog_logs

processors [required]

[ <oneOf>]

A list of processors that transform or enrich log data.

Option 1

object

The filter processor allows conditional processing of logs based on a Datadog search query. Logs that match the include query are passed through; others are discarded.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs should pass through the filter. Logs that match this query continue to downstream components; others are dropped.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be filter. Allowed enum values: filter

default: filter

Option 2

object

The parse_json processor extracts JSON from a specified field and flattens it into the event. This is useful when logs contain embedded JSON as a string.

field [required]

string

The name of the log field that contains a JSON string.

id [required]

string

A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be parse_json. Allowed enum values: parse_json

default: parse_json

Option 3

object

The quota processor measures logging traffic for logs that match a specified filter. When the configured daily quota is met, the processor can drop or alert.

drop_events [required]

boolean

If set to true, logs that match the quota filter and are sent after the quota has been met are dropped; only logs that did not match the filter query continue through the pipeline.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

ignore_when_missing_partitions

boolean

If true, the processor skips quota checks when partition fields are missing from the logs.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

limit [required]

object

The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.

enforce [required]

enum

Unit for quota enforcement: bytes for data size, or events for event count. Allowed enum values: bytes,events

limit [required]

int64

The limit for quota enforcement.

name [required]

string

Name for identifying the processor.

overrides

[object]

A list of alternate quota rules that apply to specific sets of events, identified by matching field values. Each override can define a custom limit.

fields [required]

[object]

A list of field matchers used to apply a specific override. If an event matches all listed key-value pairs, the corresponding override limit is enforced.

name [required]

string

The field name.

value [required]

string

The field value.

limit [required]

object

The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.

enforce [required]

enum

Unit for quota enforcement: bytes for data size, or events for event count. Allowed enum values: bytes,events

limit [required]

int64

The limit for quota enforcement.

partition_fields

[string]

A list of fields used to segment log traffic for quota enforcement. Quotas are tracked independently by unique combinations of these field values.

type [required]

enum

The processor type. The value should always be quota. Allowed enum values: quota

default: quota

Option 4

object

The add_fields processor adds static key-value fields to logs.

fields [required]

[object]

A list of static fields (key-value pairs) added to each log event processed by this component.

name [required]

string

The field name.

value [required]

string

The field value.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be add_fields. Allowed enum values: add_fields

default: add_fields

Option 5

object

The remove_fields processor deletes specified fields from logs.

fields [required]

[string]

A list of field names to be removed from each log event.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be remove_fields. Allowed enum values: remove_fields

default: remove_fields

Option 6

object

The rename_fields processor changes field names.

fields [required]

[object]

A list of rename rules specifying which fields to rename in the event, what to rename them to, and whether to preserve the original fields.

destination [required]

string

The field name to assign the renamed value to.

preserve_source [required]

boolean

Indicates whether the original field, as received from the source, should be kept (true) or removed (false) after renaming.

source [required]

string

The original field name in the log event that should be renamed.

id [required]

string

A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be rename_fields. Allowed enum values: rename_fields

default: rename_fields

sources [required]

[ <oneOf>]

A list of configured data sources for the pipeline.

Option 1

object

The kafka source ingests data from Apache Kafka topics.

group_id [required]

string

Consumer group ID used by the Kafka client.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

librdkafka_options

[object]

Optional list of advanced Kafka client configuration options, defined as key-value pairs.

name [required]

string

The name of the librdkafka configuration option to set.

value [required]

string

The value assigned to the specified librdkafka configuration option.

sasl

object

Specifies the SASL mechanism for authenticating with a Kafka cluster.

mechanism

enum

SASL mechanism used for Kafka authentication. Allowed enum values: PLAIN,SCRAM-SHA-256,SCRAM-SHA-512

tls

object

Configuration for enabling TLS encryption.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

topics [required]

[string]

A list of Kafka topic names to subscribe to. The source ingests messages from each topic specified.

type [required]

enum

The source type. The value should always be kafka. Allowed enum values: kafka

default: kafka

Option 2

object

The datadog_agent source collects logs from the Datadog Agent.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

tls

object

Configuration for enabling TLS encryption.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be datadog_agent. Allowed enum values: datadog_agent

default: datadog_agent

name [required]

string

Name of the pipeline.

id [required]

string

Unique identifier for the pipeline.

type [required]

string

The resource type identifier. For pipeline resources, this should always be set to pipelines.

default: pipelines

{
  "data": {
    "attributes": {
      "config": {
        "destinations": [
          {
            "id": "datadog-logs-destination",
            "inputs": [
              "filter-processor"
            ],
            "type": "datadog_logs"
          }
        ],
        "processors": [
          {
            "id": "filter-processor",
            "include": "service:my-service",
            "inputs": [
              "datadog-agent-source"
            ],
            "type": "filter"
          }
        ],
        "sources": [
          {
            "group_id": "consumer-group-0",
            "id": "kafka-source",
            "librdkafka_options": [
              {
                "name": "fetch.message.max.bytes",
                "value": "1048576"
              }
            ],
            "sasl": {
              "mechanism": "string"
            },
            "tls": {
              "ca_file": "string",
              "crt_file": "/path/to/cert.crt",
              "key_file": "string"
            },
            "topics": [
              "topic1",
              "topic2"
            ],
            "type": "kafka"
          }
        ]
      },
      "name": "Main Observability Pipeline"
    },
    "id": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
    "type": "pipelines"
  }
}

Bad Request

API error response.

Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Forbidden

API error response.

Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Conflict

API error response.

Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Too many requests

API error response.

Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Code Example

# Curl command
# Replace api.datadoghq.com with your site's endpoint if needed:
# api.us3.datadoghq.com, api.us5.datadoghq.com, api.ap1.datadoghq.com,
# api.datadoghq.eu, or api.ddog-gov.com
curl -X POST "https://api.datadoghq.com/api/v2/remote_config/products/obs_pipelines/pipelines" \
  -H "Accept: application/json" \
  -H "Content-Type: application/json" \
  -H "DD-API-KEY: ${DD_API_KEY}" \
  -H "DD-APPLICATION-KEY: ${DD_APP_KEY}" \
  -d @- << EOF
{
  "data": {
    "attributes": {
      "config": {
        "destinations": [
          {
            "id": "datadog-logs-destination",
            "inputs": [
              "filter-processor"
            ],
            "type": "datadog_logs"
          }
        ],
        "processors": [
          {
            "id": "filter-processor",
            "include": "service:my-service",
            "inputs": [
              "datadog-agent-source"
            ],
            "type": "filter"
          }
        ],
        "sources": [
          {
            "id": "datadog-agent-source",
            "type": "datadog_agent"
          }
        ]
      },
      "name": "Main Observability Pipeline"
    },
    "type": "pipelines"
  }
}
EOF
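
When scripting against this endpoint, it can be convenient to capture the server-assigned pipeline ID from the response for use in later GET or PUT calls. A minimal sketch, assuming the jq utility is installed and the request body shown above is saved as pipeline.json:

# Create the pipeline and capture its ID (sketch; assumes jq and pipeline.json)
pipeline_id=$(curl -s -X POST "https://api.datadoghq.com/api/v2/remote_config/products/obs_pipelines/pipelines" \
  -H "Accept: application/json" \
  -H "Content-Type: application/json" \
  -H "DD-API-KEY: ${DD_API_KEY}" \
  -H "DD-APPLICATION-KEY: ${DD_APP_KEY}" \
  -d @pipeline.json | jq -r '.data.id')
echo "Created pipeline ${pipeline_id}"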

Note: This endpoint is in Preview.

GET https://api.ap1.datadoghq.com/api/v2/remote_config/products/obs_pipelines/pipelines/{pipeline_id}
GET https://api.datadoghq.eu/api/v2/remote_config/products/obs_pipelines/pipelines/{pipeline_id}
GET https://api.ddog-gov.com/api/v2/remote_config/products/obs_pipelines/pipelines/{pipeline_id}
GET https://api.datadoghq.com/api/v2/remote_config/products/obs_pipelines/pipelines/{pipeline_id}
GET https://api.us3.datadoghq.com/api/v2/remote_config/products/obs_pipelines/pipelines/{pipeline_id}
GET https://api.us5.datadoghq.com/api/v2/remote_config/products/obs_pipelines/pipelines/{pipeline_id}

Overview

Get a specific pipeline by its ID.

Arguments

Path Parameters

Name

Type

Description

pipeline_id [required]

string

The ID of the pipeline to retrieve.

Response

OK

Top-level schema representing a pipeline.

Field

Type

Description

data [required]

object

Contains the pipeline’s ID, type, and configuration attributes.

attributes [required]

object

Defines the pipeline’s name and its components (sources, processors, and destinations).

config [required]

object

Specifies the pipeline's configuration, including its sources, processors, and destinations.

destinations [required]

[ <oneOf>]

A list of destination components where processed logs are sent.

Option 1

object

The datadog_logs destination forwards logs to Datadog Log Management.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be datadog_logs. Allowed enum values: datadog_logs

default: datadog_logs

processors [required]

[ <oneOf>]

A list of processors that transform or enrich log data.

Option 1

object

The filter processor allows conditional processing of logs based on a Datadog search query. Logs that match the include query are passed through; others are discarded.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs should pass through the filter. Logs that match this query continue to downstream components; others are dropped.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be filter. Allowed enum values: filter

default: filter

Option 2

object

The parse_json processor extracts JSON from a specified field and flattens it into the event. This is useful when logs contain embedded JSON as a string.

field [required]

string

The name of the log field that contains a JSON string.

id [required]

string

A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be parse_json. Allowed enum values: parse_json

default: parse_json

Option 3

object

The quota processor measures logging traffic for logs that match a specified filter. When the configured daily quota is met, the processor can drop or alert.

drop_events [required]

boolean

If set to true, logs that match the quota filter and are sent after the quota has been met are dropped; only logs that did not match the filter query continue through the pipeline.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

ignore_when_missing_partitions

boolean

If true, the processor skips quota checks when partition fields are missing from the logs.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

limit [required]

object

The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.

enforce [required]

enum

Unit for quota enforcement: bytes for data size, or events for event count. Allowed enum values: bytes,events

limit [required]

int64

The limit for quota enforcement.

name [required]

string

Name for identifying the processor.

overrides

[object]

A list of alternate quota rules that apply to specific sets of events, identified by matching field values. Each override can define a custom limit.

fields [required]

[object]

A list of field matchers used to apply a specific override. If an event matches all listed key-value pairs, the corresponding override limit is enforced.

name [required]

string

The field name.

value [required]

string

The field value.

limit [required]

object

The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.

enforce [required]

enum

Unit for quota enforcement: bytes for data size, or events for event count. Allowed enum values: bytes,events

limit [required]

int64

The limit for quota enforcement.

partition_fields

[string]

A list of fields used to segment log traffic for quota enforcement. Quotas are tracked independently by unique combinations of these field values.

type [required]

enum

The processor type. The value should always be quota. Allowed enum values: quota

default: quota

Option 4

object

The add_fields processor adds static key-value fields to logs.

fields [required]

[object]

A list of static fields (key-value pairs) added to each log event processed by this component.

name [required]

string

The field name.

value [required]

string

The field value.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be add_fields. Allowed enum values: add_fields

default: add_fields

Option 5

object

The remove_fields processor deletes specified fields from logs.

fields [required]

[string]

A list of field names to be removed from each log event.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be remove_fields. Allowed enum values: remove_fields

default: remove_fields

Option 6

object

The rename_fields processor changes field names.

fields [required]

[object]

A list of rename rules specifying which fields to rename in the event, what to rename them to, and whether to preserve the original fields.

destination [required]

string

The field name to assign the renamed value to.

preserve_source [required]

boolean

Indicates whether the original field, as received from the source, should be kept (true) or removed (false) after renaming.

source [required]

string

The original field name in the log event that should be renamed.

id [required]

string

A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be rename_fields. Allowed enum values: rename_fields

default: rename_fields

sources [required]

[ <oneOf>]

A list of configured data sources for the pipeline.

Option 1

object

The kafka source ingests data from Apache Kafka topics.

group_id [required]

string

Consumer group ID used by the Kafka client.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

librdkafka_options

[object]

Optional list of advanced Kafka client configuration options, defined as key-value pairs.

name [required]

string

The name of the librdkafka configuration option to set.

value [required]

string

The value assigned to the specified librdkafka configuration option.

sasl

object

Specifies the SASL mechanism for authenticating with a Kafka cluster.

mechanism

enum

SASL mechanism used for Kafka authentication. Allowed enum values: PLAIN,SCRAM-SHA-256,SCRAM-SHA-512

tls

object

Configuration for enabling TLS encryption.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

topics [required]

[string]

A list of Kafka topic names to subscribe to. The source ingests messages from each topic specified.

type [required]

enum

The source type. The value should always be kafka. Allowed enum values: kafka

default: kafka

Option 2

object

The datadog_agent source collects logs from the Datadog Agent.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

tls

object

Configuration for enabling TLS encryption.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be datadog_agent. Allowed enum values: datadog_agent

default: datadog_agent

name [required]

string

Name of the pipeline.

id [required]

string

Unique identifier for the pipeline.

type [required]

string

The resource type identifier. For pipeline resources, this should always be set to pipelines.

default: pipelines

{
  "data": {
    "attributes": {
      "config": {
        "destinations": [
          {
            "id": "datadog-logs-destination",
            "inputs": [
              "filter-processor"
            ],
            "type": "datadog_logs"
          }
        ],
        "processors": [
          {
            "id": "filter-processor",
            "include": "service:my-service",
            "inputs": [
              "datadog-agent-source"
            ],
            "type": "filter"
          }
        ],
        "sources": [
          {
            "group_id": "consumer-group-0",
            "id": "kafka-source",
            "librdkafka_options": [
              {
                "name": "fetch.message.max.bytes",
                "value": "1048576"
              }
            ],
            "sasl": {
              "mechanism": "string"
            },
            "tls": {
              "ca_file": "string",
              "crt_file": "/path/to/cert.crt",
              "key_file": "string"
            },
            "topics": [
              "topic1",
              "topic2"
            ],
            "type": "kafka"
          }
        ]
      },
      "name": "Main Observability Pipeline"
    },
    "id": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
    "type": "pipelines"
  }
}

Forbidden

API error response.

Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Too many requests

API error response.

Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Code Example

# Path parameters
export pipeline_id="CHANGE_ME"
# Curl command
# Replace api.datadoghq.com with your site's endpoint if needed:
# api.us3.datadoghq.com, api.us5.datadoghq.com, api.ap1.datadoghq.com,
# api.datadoghq.eu, or api.ddog-gov.com
curl -X GET "https://api.datadoghq.com/api/v2/remote_config/products/obs_pipelines/pipelines/${pipeline_id}" \
  -H "Accept: application/json" \
  -H "DD-API-KEY: ${DD_API_KEY}" \
  -H "DD-APPLICATION-KEY: ${DD_APP_KEY}"
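
To snapshot a pipeline's configuration before editing it, the response can be filtered client-side. A small sketch, again assuming jq; the backup file name is arbitrary:

# Back up the current config locally (sketch; assumes jq)
curl -s -X GET "https://api.datadoghq.com/api/v2/remote_config/products/obs_pipelines/pipelines/${pipeline_id}" \
  -H "Accept: application/json" \
  -H "DD-API-KEY: ${DD_API_KEY}" \
  -H "DD-APPLICATION-KEY: ${DD_APP_KEY}" \
  | jq '.data.attributes.config' > pipeline-config-backup.json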

Note: This endpoint is in Preview.

PUT https://api.ap1.datadoghq.com/api/v2/remote_config/products/obs_pipelines/pipelines/{pipeline_id}
PUT https://api.datadoghq.eu/api/v2/remote_config/products/obs_pipelines/pipelines/{pipeline_id}
PUT https://api.ddog-gov.com/api/v2/remote_config/products/obs_pipelines/pipelines/{pipeline_id}
PUT https://api.datadoghq.com/api/v2/remote_config/products/obs_pipelines/pipelines/{pipeline_id}
PUT https://api.us3.datadoghq.com/api/v2/remote_config/products/obs_pipelines/pipelines/{pipeline_id}
PUT https://api.us5.datadoghq.com/api/v2/remote_config/products/obs_pipelines/pipelines/{pipeline_id}

Overview

Update a pipeline.

Arguments

Path Parameters

Name

Type

Description

pipeline_id [required]

string

The ID of the pipeline to update.

Request

Body Data (required)

Field

Type

Description

data [required]

object

Contains the pipeline’s ID, type, and configuration attributes.

attributes [required]

object

Defines the pipeline’s name and its components (sources, processors, and destinations).

config [required]

object

Specifies the pipeline's configuration, including its sources, processors, and destinations.

destinations [required]

[ <oneOf>]

A list of destination components where processed logs are sent.

Option 1

object

The datadog_logs destination forwards logs to Datadog Log Management.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be datadog_logs. Allowed enum values: datadog_logs

default: datadog_logs

processors [required]

[ <oneOf>]

A list of processors that transform or enrich log data.

Option 1

object

The filter processor allows conditional processing of logs based on a Datadog search query. Logs that match the include query are passed through; others are discarded.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs should pass through the filter. Logs that match this query continue to downstream components; others are dropped.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be filter. Allowed enum values: filter

default: filter

Option 2

object

The parse_json processor extracts JSON from a specified field and flattens it into the event. This is useful when logs contain embedded JSON as a string.

field [required]

string

The name of the log field that contains a JSON string.

id [required]

string

A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be parse_json. Allowed enum values: parse_json

default: parse_json

Option 3

object

The quota processor measures logging traffic for logs that match a specified filter. When the configured daily quota is met, the processor can drop or alert.

drop_events [required]

boolean

If set to true, logs that match the quota filter and are sent after the quota has been met are dropped; only logs that did not match the filter query continue through the pipeline.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

ignore_when_missing_partitions

boolean

If true, the processor skips quota checks when partition fields are missing from the logs.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

limit [required]

object

The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.

enforce [required]

enum

Unit for quota enforcement: bytes for data size, or events for event count. Allowed enum values: bytes,events

limit [required]

int64

The limit for quota enforcement.

name [required]

string

Name for identifying the processor.

overrides

[object]

A list of alternate quota rules that apply to specific sets of events, identified by matching field values. Each override can define a custom limit.

fields [required]

[object]

A list of field matchers used to apply a specific override. If an event matches all listed key-value pairs, the corresponding override limit is enforced.

name [required]

string

The field name.

value [required]

string

The field value.

limit [required]

object

The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.

enforce [required]

enum

Unit for quota enforcement: bytes for data size, or events for event count. Allowed enum values: bytes,events

limit [required]

int64

The limit for quota enforcement.

partition_fields

[string]

A list of fields used to segment log traffic for quota enforcement. Quotas are tracked independently by unique combinations of these field values.

type [required]

enum

The processor type. The value should always be quota. Allowed enum values: quota

default: quota

Option 4

object

The add_fields processor adds static key-value fields to logs.

fields [required]

[object]

A list of static fields (key-value pairs) added to each log event processed by this component.

name [required]

string

The field name.

value [required]

string

The field value.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be add_fields. Allowed enum values: add_fields

default: add_fields

Option 5

object

The remove_fields processor deletes specified fields from logs.

fields [required]

[string]

A list of field names to be removed from each log event.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be remove_fields. Allowed enum values: remove_fields

default: remove_fields

Option 6

object

The rename_fields processor changes field names.

fields [required]

[object]

A list of rename rules specifying which fields to rename in the event, what to rename them to, and whether to preserve the original fields.

destination [required]

string

The field name to assign the renamed value to.

preserve_source [required]

boolean

Indicates whether the original field, as received from the source, should be kept (true) or removed (false) after renaming.

source [required]

string

The original field name in the log event that should be renamed.

id [required]

string

A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be rename_fields. Allowed enum values: rename_fields

default: rename_fields

sources [required]

[ <oneOf>]

A list of configured data sources for the pipeline.

Option 1

object

The kafka source ingests data from Apache Kafka topics.

group_id [required]

string

Consumer group ID used by the Kafka client.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

librdkafka_options

[object]

Optional list of advanced Kafka client configuration options, defined as key-value pairs.

name [required]

string

The name of the librdkafka configuration option to set.

value [required]

string

The value assigned to the specified librdkafka configuration option.

sasl

object

Specifies the SASL mechanism for authenticating with a Kafka cluster.

mechanism

enum

SASL mechanism used for Kafka authentication. Allowed enum values: PLAIN,SCRAM-SHA-256,SCRAM-SHA-512

tls

object

Configuration for enabling TLS encryption.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

topics [required]

[string]

A list of Kafka topic names to subscribe to. The source ingests messages from each topic specified.

type [required]

enum

The source type. The value should always be kafka. Allowed enum values: kafka

default: kafka

Option 2

object

The datadog_agent source collects logs from the Datadog Agent.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

tls

object

Configuration for enabling TLS encryption.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be datadog_agent. Allowed enum values: datadog_agent

default: datadog_agent

name [required]

string

Name of the pipeline.

id [required]

string

Unique identifier for the pipeline.

type [required]

string

The resource type identifier. For pipeline resources, this should always be set to pipelines.

default: pipelines

{
  "data": {
    "attributes": {
      "config": {
        "destinations": [
          {
            "id": "updated-datadog-logs-destination-id",
            "inputs": [
              "filter-processor"
            ],
            "type": "datadog_logs"
          }
        ],
        "processors": [
          {
            "id": "filter-processor",
            "include": "service:my-service",
            "inputs": [
              "datadog-agent-source"
            ],
            "type": "filter"
          }
        ],
        "sources": [
          {
            "id": "datadog-agent-source",
            "type": "datadog_agent"
          }
        ]
      },
      "name": "Updated Pipeline Name"
    },
    "id": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
    "type": "pipelines"
  }
}
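
Because PUT replaces the entire pipeline definition, a safe pattern is read-modify-write: fetch the current resource, change only the fields you need, and send the result back. A sketch of that flow, assuming jq and a ${pipeline_id} variable as in the earlier examples; only the pipeline name is changed here:

# Read-modify-write sketch: rename the pipeline without touching its components (assumes jq)
curl -s -X GET "https://api.datadoghq.com/api/v2/remote_config/products/obs_pipelines/pipelines/${pipeline_id}" \
  -H "Accept: application/json" \
  -H "DD-API-KEY: ${DD_API_KEY}" \
  -H "DD-APPLICATION-KEY: ${DD_APP_KEY}" \
  | jq '.data.attributes.name = "Updated Pipeline Name"' \
  | curl -s -X PUT "https://api.datadoghq.com/api/v2/remote_config/products/obs_pipelines/pipelines/${pipeline_id}" \
      -H "Accept: application/json" \
      -H "Content-Type: application/json" \
      -H "DD-API-KEY: ${DD_API_KEY}" \
      -H "DD-APPLICATION-KEY: ${DD_APP_KEY}" \
      -d @-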

Response

OK

Top-level schema representing a pipeline.

Field

Type

Description

data [required]

object

Contains the pipeline’s ID, type, and configuration attributes.

attributes [required]

object

Defines the pipeline’s name and its components (sources, processors, and destinations).

config [required]

object

Specifies the pipeline's configuration, including its sources, processors, and destinations.

destinations [required]

[ <oneOf>]

A list of destination components where processed logs are sent.

Option 1

object

The datadog_logs destination forwards logs to Datadog Log Management.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be datadog_logs. Allowed enum values: datadog_logs

default: datadog_logs

processors [required]

[ <oneOf>]

A list of processors that transform or enrich log data.

Option 1

object

The filter processor allows conditional processing of logs based on a Datadog search query. Logs that match the include query are passed through; others are discarded.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs should pass through the filter. Logs that match this query continue to downstream components; others are dropped.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be filter. Allowed enum values: filter

default: filter

Option 2

object

The parse_json processor extracts JSON from a specified field and flattens it into the event. This is useful when logs contain embedded JSON as a string.

field [required]

string

The name of the log field that contains a JSON string.

id [required]

string

A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be parse_json. Allowed enum values: parse_json

default: parse_json

Option 3

object

The quota processor measures logging traffic for logs that match a specified filter. When the configured daily quota is met, the processor can drop or alert.

drop_events [required]

boolean

If set to true, logs that match the quota filter and are sent after the quota has been met are dropped; only logs that did not match the filter query continue through the pipeline.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

ignore_when_missing_partitions

boolean

If true, the processor skips quota checks when partition fields are missing from the logs.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

limit [required]

object

The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.

enforce [required]

enum

Unit for quota enforcement: bytes for data size, or events for event count. Allowed enum values: bytes,events

limit [required]

int64

The limit for quota enforcement.

name [required]

string

Name for identifying the processor.

overrides

[object]

A list of alternate quota rules that apply to specific sets of events, identified by matching field values. Each override can define a custom limit.

fields [required]

[object]

A list of field matchers used to apply a specific override. If an event matches all listed key-value pairs, the corresponding override limit is enforced.

name [required]

string

The field name.

value [required]

string

The field value.

limit [required]

object

The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.

enforce [required]

enum

Unit for quota enforcement: bytes for data size, or events for event count. Allowed enum values: bytes,events

limit [required]

int64

The limit for quota enforcement.

partition_fields

[string]

A list of fields used to segment log traffic for quota enforcement. Quotas are tracked independently by unique combinations of these field values.

type [required]

enum

The processor type. The value should always be quota. Allowed enum values: quota

default: quota

Option 4

object

The add_fields processor adds static key-value fields to logs.

fields [required]

[object]

A list of static fields (key-value pairs) added to each log event processed by this component.

name [required]

string

The field name.

value [required]

string

The field value.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be add_fields. Allowed enum values: add_fields

default: add_fields

Option 5

object

The remove_fields processor deletes specified fields from logs.

fields [required]

[string]

A list of field names to be removed from each log event.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be remove_fields. Allowed enum values: remove_fields

default: remove_fields

Option 6

object

The rename_fields processor changes field names.

fields [required]

[object]

A list of rename rules specifying which fields to rename in the event, what to rename them to, and whether to preserve the original fields.

destination [required]

string

The field name to assign the renamed value to.

preserve_source [required]

boolean

Indicates whether the original field, as received from the source, should be kept (true) or removed (false) after renaming.

source [required]

string

The original field name in the log event that should be renamed.

id [required]

string

A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be rename_fields. Allowed enum values: rename_fields

default: rename_fields
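
For reference, a minimal rename_fields entry might look like this sketch; the component IDs and field names are illustrative:

{
  "id": "rename-fields-processor",
  "type": "rename_fields",
  "include": "*",
  "inputs": [
    "remove-fields-processor"
  ],
  "fields": [
    {
      "source": "msg",
      "destination": "message",
      "preserve_source": false
    }
  ]
}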

sources [required]

[ <oneOf>]

A list of configured data sources for the pipeline.

Option 1

object

The kafka source ingests data from Apache Kafka topics.

group_id [required]

string

Consumer group ID used by the Kafka client.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

librdkafka_options

[object]

Optional list of advanced Kafka client configuration options, defined as key-value pairs.

name [required]

string

The name of the librdkafka configuration option to set.

value [required]

string

The value assigned to the specified librdkafka configuration option.

sasl

object

Specifies the SASL mechanism for authenticating with a Kafka cluster.

mechanism

enum

SASL mechanism used for Kafka authentication. Allowed enum values: PLAIN,SCRAM-SHA-256,SCRAM-SHA-512

tls

object

Configuration for enabling TLS encryption.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

topics [required]

[string]

A list of Kafka topic names to subscribe to. The source ingests messages from each topic specified.

type [required]

enum

The source type. The value should always be kafka. Allowed enum values: kafka

default: kafka

Option 2

object

The datadog_agent source collects logs from the Datadog Agent.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

tls

object

Configuration for enabling TLS encryption.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be datadog_agent. Allowed enum values: datadog_agent

default: datadog_agent
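
For reference, a minimal datadog_agent source entry might look like this sketch; the component ID and certificate paths are illustrative, and the tls object can be omitted entirely if TLS is not used:

{
  "id": "datadog-agent-source",
  "type": "datadog_agent",
  "tls": {
    "ca_file": "/etc/ssl/certs/ca.crt",
    "crt_file": "/etc/ssl/certs/pipeline-client.crt",
    "key_file": "/etc/ssl/private/pipeline-client.key"
  }
}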

name [required]

string

Name of the pipeline.

id [required]

string

Unique identifier for the pipeline.

type [required]

string

The resource type identifier. For pipeline resources, this should always be set to pipelines.

default: pipelines

{
  "data": {
    "attributes": {
      "config": {
        "destinations": [
          {
            "id": "datadog-logs-destination",
            "inputs": [
              "filter-processor"
            ],
            "type": "datadog_logs"
          }
        ],
        "processors": [
          {
            "id": "filter-processor",
            "include": "service:my-service",
            "inputs": [
              "datadog-agent-source"
            ],
            "type": "filter"
          }
        ],
        "sources": [
          {
            "group_id": "consumer-group-0",
            "id": "kafka-source",
            "librdkafka_options": [
              {
                "name": "fetch.message.max.bytes",
                "value": "1048576"
              }
            ],
            "sasl": {
              "mechanism": "string"
            },
            "tls": {
              "ca_file": "string",
              "crt_file": "/path/to/cert.crt",
              "key_file": "string"
            },
            "topics": [
              "topic1",
              "topic2"
            ],
            "type": "kafka"
          }
        ]
      },
      "name": "Main Observability Pipeline"
    },
    "id": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
    "type": "pipelines"
  }
}

Bad Request

API error response.

Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Forbidden

API error response.

Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Not Found

API error response.

Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Conflict

API error response.

Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Too many requests

API error response.

Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Code Example

# Path parameters
export pipeline_id="CHANGE_ME"
# Curl command (api.datadoghq.com shown; substitute your site's endpoint, for example api.us3.datadoghq.com)
curl -X PUT "https://api.datadoghq.com/api/v2/remote_config/products/obs_pipelines/pipelines/${pipeline_id}" \
  -H "Accept: application/json" \
  -H "Content-Type: application/json" \
  -H "DD-API-KEY: ${DD_API_KEY}" \
  -H "DD-APPLICATION-KEY: ${DD_APP_KEY}" \
  -d @- << EOF
{
  "data": {
    "attributes": {
      "config": {
        "destinations": [
          {
            "id": "updated-datadog-logs-destination-id",
            "inputs": [
              "filter-processor"
            ],
            "type": "datadog_logs"
          }
        ],
        "processors": [
          {
            "id": "filter-processor",
            "include": "service:my-service",
            "inputs": [
              "datadog-agent-source"
            ],
            "type": "filter"
          }
        ],
        "sources": [
          {
            "id": "datadog-agent-source",
            "type": "datadog_agent"
          }
        ]
      },
      "name": "Updated Pipeline Name"
    },
    "id": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
    "type": "pipelines"
  }
}
EOF

Note: This endpoint is in Preview.

DELETE https://api.ap1.datadoghq.com/api/v2/remote_config/products/obs_pipelines/pipelines/{pipeline_id}
DELETE https://api.datadoghq.eu/api/v2/remote_config/products/obs_pipelines/pipelines/{pipeline_id}
DELETE https://api.ddog-gov.com/api/v2/remote_config/products/obs_pipelines/pipelines/{pipeline_id}
DELETE https://api.datadoghq.com/api/v2/remote_config/products/obs_pipelines/pipelines/{pipeline_id}
DELETE https://api.us3.datadoghq.com/api/v2/remote_config/products/obs_pipelines/pipelines/{pipeline_id}
DELETE https://api.us5.datadoghq.com/api/v2/remote_config/products/obs_pipelines/pipelines/{pipeline_id}

Overview

Delete a pipeline.

Arguments

Path Parameters

Name

Type

Description

pipeline_id [required]

string

The ID of the pipeline to delete.

Response

OK

Forbidden

API error response.

Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Not Found

API error response.

Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Conflict

API error response.

Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Too many requests

API error response.

Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Code Example

# Path parameters
export pipeline_id="CHANGE_ME"
# Curl command (api.datadoghq.com shown; substitute your site's endpoint, for example api.us3.datadoghq.com)
curl -X DELETE "https://api.datadoghq.com/api/v2/remote_config/products/obs_pipelines/pipelines/${pipeline_id}" \
  -H "DD-API-KEY: ${DD_API_KEY}" \
  -H "DD-APPLICATION-KEY: ${DD_APP_KEY}"