---
isPrivate: true
title: Sinks
description: Reference documentation for (LEGACY) Observability Pipelines sinks.
breadcrumbs: >-
  Docs > Observability Pipelines > (LEGACY) Observability Pipelines
  Documentation > Reference > Sinks
---

# (LEGACY) Sinks

A sink is a destination for events. Each sink's design and transmission method is determined by the downstream service with which it interacts. For example, the `socket` sink streams individual events, while the `aws_s3` sink buffers and flushes data.
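For example, a pipeline that tails a file and ships batched events to S3 can be wired up as follows. This is a minimal sketch: the source name, file path, bucket, and region are placeholders.

```yaml
sources:
  app_logs:
    type: file
    include:
      - /var/log/app/*.log   # placeholder path

sinks:
  archive:
    type: aws_s3             # a buffering/flushing sink
    inputs:
      - app_logs
    bucket: my-log-archive   # placeholder bucket name
    region: us-east-1
    encoding:
      codec: json
```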

### AMQP{% #amqpsink %}

Supports AMQP version 0.9.1.
{% tab title="amqpsink-request-model" %}
- **`acknowledgements`** (optional, object): Controls how acknowledgements are handled for this sink. See [End-to-end Acknowledgements](https://vector.dev/docs/about/under-the-hood/architecture/end-to-end-acknowledgements/) for more information on how event acknowledgement is handled.
  - **`enabled`** (optional, boolean or null; default: `null`): Whether or not end-to-end acknowledgements are enabled. When enabled for a sink, any source connected to that sink that also supports end-to-end acknowledgements waits for events to be acknowledged by the sink before acknowledging them at the source. Enabling or disabling acknowledgements at the sink level takes precedence over any global [`acknowledgements`](https://vector.dev/docs/reference/configuration/global-options/#acknowledgements) configuration.
- **`encoding`** (required): Configures how events are encoded into raw bytes. The required `codec` field selects one of the following encodings:
  - **`avro`**: Encodes an event as an [Apache Avro](https://avro.apache.org/) message. Requires an **`avro`** object (Apache Avro-specific encoder options) with a required **`schema`** (string): the Avro schema.
  - **`csv`**: Encodes an event as a CSV message. This codec must be configured with fields to encode, via a required **`csv`** object (the CSV serializer options):
    - **`capacity`** (optional, integer): The capacity (in bytes) of the internal buffer used by the CSV writer. Defaults to a reasonable setting.
    - **`delimiter`** (optional, integer): The field delimiter to use when writing CSV.
    - **`double_quote`** (optional, boolean): Enables double quote escapes. This is enabled by default, but may be disabled. When disabled, quotes in field data are escaped instead of doubled.
    - **`escape`** (optional, integer): The escape character to use when writing CSV. In some variants of CSV, quotes are escaped using a special escape character like `\` instead of by doubling them. To use this, `double_quote` must also be disabled; otherwise it is ignored.
    - **`fields`** (required, [string]): The fields to encode, in the order in which they appear in the output. If a field is not present in the event, the output is an empty string. Values of type `Array`, `Object`, and `Regex` are not supported, and their output is an empty string.
    - **`quote`** (optional, integer): The quote character to use when writing CSV.
    - **`quote_style`** (optional): The quoting style to use when writing CSV data. One of:
      - `always`: Puts quotes around every field, always.
      - `necessary`: Puts quotes around fields only when necessary, that is, when a field contains a quote, delimiter, or record terminator, or when writing an empty record (which is indistinguishable from a record with one empty field).
      - `non_numeric`: Puts quotes around all fields that are non-numeric. When a field does not parse as a valid float or integer, quotes are used even if they aren't strictly necessary.
      - `never`: Never writes quotes, even if that produces invalid CSV data.
  - **`gelf`**: Encodes an event as a [GELF](https://docs.graylog.org/docs/gelf) message.
  - **`json`**: Encodes an event as [JSON](https://www.json.org/). Accepts an optional **`metric_tag_values`** setting that controls how metric tag values are encoded:
    - `single`: Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
    - `full`: All tags are exposed as arrays of either string or null values.
  - **`logfmt`**: Encodes an event as a [logfmt](https://brandur.org/logfmt) message.
  - **`native`**: Encodes an event in the [native Protocol Buffers format](https://github.com/vectordotdev/vector/blob/master/lib/vector-core/proto/event.proto). This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
  - **`native_json`**: Encodes an event in the [native JSON format](https://github.com/vectordotdev/vector/blob/master/lib/codecs/tests/data/native_encoding/schema.cue). This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
  - **`raw_message`**: No encoding. This encoding uses the `message` field of a log event. Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
  - **`text`**: Plain text encoding. This encoding uses the `message` field of a log event; for metrics, it uses an encoding that resembles the Prometheus export format. The same caution about removing the `message` field applies as for `raw_message`. Accepts the same optional **`metric_tag_values`** setting as the `json` codec.

  The following encoding options apply regardless of codec:
  - **`except_fields`** (optional, array or null): List of fields that are excluded from the encoded event.
  - **`only_fields`** (optional, array or null): List of fields that are included in the encoded event.
  - **`timestamp_format`** (optional): Format used for timestamp fields. Either `unix` (represent the timestamp as a Unix timestamp) or `rfc3339` (represent the timestamp as an RFC 3339 timestamp).
- **`exchange`** (required, string, templated): The exchange to publish messages to.
- **`properties`** (optional, object): The AMQP message properties.
  - **`content_encoding`** (optional, string or null): Content-Encoding for the AMQP messages.
  - **`content_type`** (optional, string or null): Content-Type for the AMQP messages.
- **`routing_key`** (optional, string, templated): Template used to generate a routing key which corresponds to a queue binding. In many cases, part of a component's functionality can be customized on a per-event basis; for example, a sink that writes events to a file can use an event field to determine which file each event goes to. Templated fields accept either fixed strings or templated strings. Templated strings use a common syntax to refer to fields in the event, which is used as the input data when rendering the template. An example of a fixed string is `my-file.log`; an example of a template string is `my-file-{{key}}.log`, where `{{key}}` is replaced with the key's value when the template is rendered.
- **`connection_string`** (required, string): URI for the AMQP server, in the format `amqp://<user>:<password>@<host>:<port>/<vhost>?timeout=<seconds>`. The default vhost can be specified by using a value of `%2f`. To connect over TLS, specify the `amqps` scheme instead (for example, `amqps://...`). Additional TLS settings, such as client certificate verification, can be configured under the `tls` section.
- **`tls`** (optional, object): TLS configuration.
  - **`alpn_protocols`** (optional, array or null): Sets the list of supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order that they are defined.
  - **`ca_file`** (optional, string): Absolute path to an additional CA certificate file. The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
  - **`crt_file`** (optional, string): Absolute path to a certificate file used to identify this server. The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format. If this is set, and is not a PKCS#12 archive, `key_file` must also be set.
  - **`key_file`** (optional, string): Absolute path to a private key file used to identify this server. The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
  - **`key_pass`** (optional, string or null): Passphrase used to unlock the encrypted key file. This has no effect unless `key_file` is set.
  - **`verify_certificate`** (optional, boolean or null): Enables certificate verification. If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner: it checks that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate. Relevant for both incoming and outgoing connections. Do NOT set this to `false` unless you understand the risks of not verifying the validity of certificates.
  - **`verify_hostname`** (optional, boolean or null): Enables hostname verification. If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension. Only relevant for outgoing connections. Do NOT set this to `false` unless you understand the risks of not verifying the remote hostname.
{% /tab %}

{% tab title="amqpsink-request-example" %}

```yaml
acknowledgements:
  enabled: null
connection_string: string
encoding:
  codec: json
exchange: string
properties: {}
routing_key: string
type: amqp
```

{% /tab %}
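Because `exchange` and `routing_key` are templated fields, a field from each event can select the queue binding at publish time. A sketch, where the `service` field, the `my_source` input, and the connection details are illustrative:

```yaml
sinks:
  amqp_out:
    type: amqp
    inputs:
      - my_source
    connection_string: 'amqp://guest:guest@rabbitmq:5672/%2f'
    exchange: logs
    routing_key: '{{ service }}'   # rendered per event from the `service` field
    encoding:
      codec: json
```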

### AppSignal{% #appsignal %}

Configuration for the `appsignal` sink.
{% tab title="appsignal-request-model" %}
- **`acknowledgements`** (optional, object): Controls how acknowledgements are handled for this sink. See [End-to-end Acknowledgements](https://vector.dev/docs/about/under-the-hood/architecture/end-to-end-acknowledgements/) for more information on how event acknowledgement is handled.
  - **`enabled`** (optional, boolean or null; default: `null`): Whether or not end-to-end acknowledgements are enabled. When enabled for a sink, any source connected to that sink that also supports end-to-end acknowledgements waits for events to be acknowledged by the sink before acknowledging them at the source. Enabling or disabling acknowledgements at the sink level takes precedence over any global [`acknowledgements`](https://vector.dev/docs/reference/configuration/global-options/#acknowledgements) configuration.
- **`batch`** (optional, object): Event batching behavior.
  - **`max_bytes`** (optional, integer or null; default: `null`): The maximum size of a batch that is processed by a sink. This is based on the uncompressed size of the batched events, before they are serialized/compressed.
  - **`max_events`** (optional, integer or null; default: `null`): The maximum size of a batch before it is flushed.
  - **`timeout_secs`** (optional, number or null; default: `null`): The maximum age of a batch before it is flushed.
- **`compression`** (optional): Compression configuration. All compression algorithms use the default compression level unless otherwise specified. Either a bare algorithm name, or an object with a required **`algorithm`** and an optional **`level`**:
  - Algorithms: `none` (no compression), `gzip` ([Gzip](https://www.gzip.org/) compression), `zlib` ([Zlib](https://zlib.net/) compression), or `zstd` ([Zstandard](https://facebook.github.io/zstd/) compression).
  - **`level`** (optional, enum): Compression level. Allowed values: `none`, `fast`, `best`, `default`, or an integer from `0` to `21`.
- **`encoding`** (optional, object): Transformations to prepare an event for serialization.
  - **`except_fields`** (optional, array or null): List of fields that are excluded from the encoded event.
  - **`only_fields`** (optional, array or null): List of fields that are included in the encoded event.
  - **`timestamp_format`** (optional): Format used for timestamp fields. Either `unix` (represent the timestamp as a Unix timestamp) or `rfc3339` (represent the timestamp as an RFC 3339 timestamp).
- **`endpoint`** (optional, uri): The URI for the AppSignal API to send data to.
- **`push_api_key`** (required, string): A valid app-level AppSignal Push API key.
- **`request`** (optional, object): Middleware settings for outbound requests. Various settings can be configured, such as concurrency and rate limits, timeouts, and so on.
  - **`adaptive_concurrency`** (optional, object; default: `{"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}`): Configuration of adaptive concurrency parameters. These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or unstable performance and sink behavior. Proceed with caution.
    - **`decrease_ratio`** (optional, number; default: `0.9`): The fraction of the current value to set the new concurrency limit to when decreasing the limit. Valid values are greater than `0` and less than `1`. Smaller values cause the algorithm to scale back rapidly when latency increases. Note that the new limit is rounded down after applying this ratio.
    - **`ewma_alpha`** (optional, number; default: `0.4`): The weighting of new measurements compared to older measurements. Valid values are greater than `0` and less than `1`. ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
    - **`initial_concurrency`** (optional, integer; default: `1`): The initial concurrency limit to use. If not specified, the initial limit is 1 (no concurrency). It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the `adaptive_concurrency_limit` metric.
    - **`rtt_deviation_scale`** (optional, number; default: `2.5`): Scale of RTT deviations which are not considered anomalous. Valid values are greater than or equal to `0`; reasonable values range from `1.0` to `3.0`. When calculating the past RTT average, a secondary "deviation" value is also computed that indicates how variable those values are. That deviation is used when comparing the past RTT average to the current measurements, so that increases in RTT within an expected range can be ignored. This factor scales the deviation up to an appropriate range; larger values cause the algorithm to ignore larger increases in the RTT.
  - **`concurrency`** (optional): Configuration for outbound request concurrency. One of:
    - `none`: A fixed concurrency of 1. Only one request can be outstanding at any given time.
    - `adaptive`: Concurrency is managed by Vector's [Adaptive Request Concurrency](https://vector.dev/docs/about/under-the-hood/networking/arc/) feature.
    - An integer: A fixed amount of concurrency is allowed.
  - **`rate_limit_duration_secs`** (optional, integer or null; default: `1`): The time window used for the `rate_limit_num` option.
  - **`rate_limit_num`** (optional, integer or null; default: `9223372036854776000`): The maximum number of requests allowed within the `rate_limit_duration_secs` time window.
  - **`retry_attempts`** (optional, integer or null; default: `9223372036854776000`): The maximum number of retries to make for failed requests. The default, for all intents and purposes, represents an infinite number of retries.
  - **`retry_initial_backoff_secs`** (optional, integer or null; default: `1`): The amount of time to wait before attempting the first retry for a failed request. After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
  - **`retry_max_duration_secs`** (optional, integer or null; default: `3600`): The maximum amount of time to wait between retries.
  - **`timeout_secs`** (optional, integer or null; default: `60`): The time a request can take before being aborted. Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
- **`tls`** (optional, object): Configures the TLS options for incoming/outgoing connections.
  - **`enabled`** (optional, boolean or null): Whether or not to require TLS for incoming or outgoing connections. When enabled and used for incoming connections, an identity certificate is also required. See `tls.crt_file` for more information.
  - **`alpn_protocols`** (optional, array or null): Sets the list of supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order that they are defined.
  - **`ca_file`** (optional, string): Absolute path to an additional CA certificate file. The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
  - **`crt_file`** (optional, string): Absolute path to a certificate file used to identify this server. The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format. If this is set, and is not a PKCS#12 archive, `key_file` must also be set.
  - **`key_file`** (optional, string): Absolute path to a private key file used to identify this server. The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
  - **`key_pass`** (optional, string or null): Passphrase used to unlock the encrypted key file. This has no effect unless `key_file` is set.
  - **`verify_certificate`** (optional, boolean or null): Enables certificate verification. If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner: it checks that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate. Relevant for both incoming and outgoing connections. Do NOT set this to `false` unless you understand the risks of not verifying the validity of certificates.
  - **`verify_hostname`** (optional, boolean or null): Enables hostname verification. If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension. Only relevant for outgoing connections. Do NOT set this to `false` unless you understand the risks of not verifying the remote hostname.
{% /tab %}

{% tab title="appsignal-request-example" %}

```yaml
acknowledgements:
  enabled: null
batch:
  max_bytes: null
  max_events: null
  timeout_secs: null
compression: gzip
encoding: {}
endpoint: 'https://appsignal-endpoint.net'
push_api_key: string
request:
  adaptive_concurrency:
    decrease_ratio: 0.9
    ewma_alpha: 0.4
    initial_concurrency: 1
    rtt_deviation_scale: 2.5
  rate_limit_duration_secs: 1
  rate_limit_num: 9223372036854776000
  retry_attempts: 9223372036854776000
  retry_initial_backoff_secs: 1
  retry_max_duration_secs: 3600
  timeout_secs: 60
type: appsignal
```

{% /tab %}
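The `compression` option accepts either a bare algorithm name (as in the example above) or an object form that also pins the compression level. A sketch of the object form, using an illustrative level:

```yaml
compression:
  algorithm: zstd
  level: 3   # one of none, fast, best, default, or 0-21
```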

### AWS CloudWatch Logs{% #cloudwatchlogssink %}

Configuration for the `aws_cloudwatch_logs` sink.
{% tab title="cloudwatchlogssink-request-model" %}
FieldrequiredTypeDescription acknowledgements
optional

object

Controls how acknowledgements are handled for this sink.

See [End-to-end Acknowledgements](https://vector.dev/docs/about/under-the-hood/architecture/end-to-end-acknowledgements/) for more information on how event acknowledgement is handled.
enabled
optional

boolean,​null

Whether or not end-to-end acknowledgements are enabled.

When enabled for a sink, any source connected to that sink, where the source supports end-to-end acknowledgements as well, waits for events to be acknowledged by the sink before acknowledging them at the source.

Enabling or disabling acknowledgements at the sink level takes precedence over any global [`acknowledgements`](https://vector.dev/docs/reference/configuration/global-options/#acknowledgements) configuration.
default: nullassume_role
optional

string,​null

The ARN of an [IAM role][iam_role] to assume at startup.

**DEPRECATED**: [iam_role]: [https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html)
auth
optional



Configuration of the authentication strategy for interacting with AWS services.
 batch
optional

object

Event batching behavior.
max_bytes
optional

integer,​null

The maximum size of a batch that is processed by a sink.

This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: nullmax_events
optional

integer,​null

The maximum size of a batch before it is flushed.
default: nulltimeout_secs
optional

number,​null

The maximum age of a batch before it is flushed.
default: null compression
optional

 <oneOf>

Compression configuration.

All compression algorithms use the default compression level unless otherwise specified.
 Option 1
optional

 <oneOf>

Compression algorithm.
none
optional

none

No compression.
gzip
optional

gzip

[Gzip](https://www.gzip.org/) compression.
zlib
optional

zlib

[Zlib](https://zlib.net/) compression.
zstd
optional

zstd

[Zstandard](https://facebook.github.io/zstd/) compression.
 Option 2
optional

object

Compression algorithm and compression level.
 algorithm
required

 <oneOf>

Compression algorithm.
none
optional

none

No compression.
gzip
optional

gzip

[Gzip](https://www.gzip.org/) compression.
zlib
optional

zlib

[Zlib](https://zlib.net/) compression.
zstd
optional

zstd

[Zstandard](https://facebook.github.io/zstd/) compression.
level
optional

enum

Compression level. Allowed enum values: `none,fast,best,default,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21`
create_missing_group
optional

boolean

Dynamically create a [log group](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html) if it does not already exist.

This ignores `create_missing_stream` directly after creating the group and creates the first stream.
create_missing_stream
optional

boolean

Dynamically create a [log stream](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html) if it does not already exist.
 encoding
required



Configures how events are encoded into raw bytes.
 Avro
optional

object

Encodes an event as an [Apache Avro](https://avro.apache.org/) message.
 avro
required

object

Apache Avro-specific encoder options.
schema
required

string

The Avro schema.
codec
required

avro

Encodes an event as an [Apache Avro](https://avro.apache.org/) message.
 Csv
optional



Encodes an event as a CSV message.

This codec must be configured with fields to encode.
 csv
required

object

The CSV Serializer Options.
capacity
optional

integer

Set the capacity (in bytes) of the internal buffer used in the CSV writer. This defaults to a reasonable setting.
delimiter
optional

integer

The field delimiter to use when writing CSV.
double_quote
optional

boolean

Enable double quote escapes.

This is enabled by default, but it may be disabled. When disabled, quotes in field data are escaped instead of doubled.
escape
optional

integer

The escape character to use when writing CSV.

In some variants of CSV, quotes are escaped using a special escape character like \ (instead of escaping quotes by doubling them).

To use this `double_quotes` needs to be disabled as well otherwise it is ignored
fields
required

[string]

Configures the fields that will be encoded, as well as the order in which they appear in the output.

If a field is not present in the event, the output will be an empty string.

Values of type `Array`, `Object`, and `Regex` are not supported and the output will be an empty string.
quote
optional

integer

The quote character to use when writing CSV.
 quote_style
optional

 <oneOf>

The quoting style to use when writing CSV data.
always
optional

always

This puts quotes around every field. Always.
necessary
optional

necessary

This puts quotes around fields only when necessary. They are necessary when fields contain a quote, delimiter or record terminator. Quotes are also necessary when writing an empty record (which is indistinguishable from a record with one empty field).
non_numeric
optional

non_numeric

This puts quotes around all fields that are non-numeric. Namely, when writing a field that does not parse as a valid float or integer, then quotes will be used even if they aren't strictly necessary.
never
optional

never

This never writes quotes, even if it would produce invalid CSV data.
codec
required

csv

Encodes an event as a CSV message.

This codec must be configured with fields to encode.
 Gelf
optional

object

Encodes an event as a [GELF](https://docs.graylog.org/docs/gelf) message.
codec
required

gelf

Encodes an event as a [GELF](https://docs.graylog.org/docs/gelf) message.
 Json
optional



Encodes an event as [JSON](https://www.json.org/).
 metric_tag_values
optional

 <oneOf>

Controls how metric tag values are encoded.

When set to `single`, only the last non-bare value of tags are displayed with the metric. When set to `full`, all metric tags are exposed as separate assignments.
single
optional

single

Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
full
optional

full

All tags are exposed as arrays of either string or null values.
codec
required

json

Encodes an event as [JSON](https://www.json.org/).
 Logfmt
optional

object

Encodes an event as a [logfmt](https://brandur.org/logfmt) message.
codec
required

logfmt

Encodes an event as a [logfmt](https://brandur.org/logfmt) message.
 Native
optional

object

Encodes an event in the [native Protocol Buffers format](https://github.com/vectordotdev/vector/blob/master/lib/vector-core/proto/event.proto).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
codec
required

native

Encodes an event in the [native Protocol Buffers format](https://github.com/vectordotdev/vector/blob/master/lib/vector-core/proto/event.proto).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
 NativeJson
optional

object

Encodes an event in the [native JSON format](https://github.com/vectordotdev/vector/blob/master/lib/codecs/tests/data/native_encoding/schema.cue).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
codec
required

native_json

Encodes an event in the [native JSON format](https://github.com/vectordotdev/vector/blob/master/lib/codecs/tests/data/native_encoding/schema.cue).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
 RawMessage
optional

object

No encoding.

This encoding uses the `message` field of a log event.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
codec
required

raw_message

No encoding.

This encoding uses the `message` field of a log event.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
 Text
optional



Plain text encoding.

This encoding uses the `message` field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
 metric_tag_values
optional

 <oneOf>

Controls how metric tag values are encoded.

When set to `single`, only the last non-bare value of each tag is displayed with the metric. When set to `full`, all metric tags are exposed as separate assignments.
single
optional

single

Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
full
optional

full

All tags are exposed as arrays of either string or null values.
codec
required

text

Plain text encoding.

This encoding uses the `message` field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
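As a sketch, an `encoding` block selecting the text codec with full tag encoding might look like the following (the values shown are illustrative choices, not defaults):

```yaml
encoding:
  codec: text
  metric_tag_values: full
```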
except_fields
optional

array,​null

List of fields that are excluded from the encoded event.
only_fields
optional

array,​null

List of fields that are included in the encoded event.
 timestamp_format
optional

 <oneOf>

Format used for timestamp fields.
 Option 1
optional

 <oneOf>

The format in which a timestamp should be represented.
unix
optional

unix

Represent the timestamp as a Unix timestamp.
rfc3339
optional

rfc3339

Represent the timestamp as an RFC 3339 timestamp.
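For instance, to emit timestamp fields as Unix timestamps, a sink's `encoding` block could combine a codec with the `timestamp_format` option (the codec choice here is illustrative):

```yaml
encoding:
  codec: json
  timestamp_format: unix
```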
group_name
required

string

The [group name](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html) of the target CloudWatch Logs stream.
 request
optional



Outbound HTTP request settings.
headers
optional

object

Additional HTTP headers to add to every HTTP request.
 adaptive_concurrency
optional

object

Configuration of adaptive concurrency parameters.

These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or unstable performance and sink behavior. Proceed with caution.
decrease_ratio
optional

number

The fraction of the current value to set the new concurrency limit when decreasing the limit.

Valid values are greater than `0` and less than `1`. Smaller values cause the algorithm to scale back rapidly when latency increases.

Note that the new limit is rounded down after applying this ratio.
default: 0.9
ewma_alpha
optional

number

The weighting of new measurements compared to older measurements.

Valid values are greater than `0` and less than `1`.

ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4
initial_concurrency
optional

integer

The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).

It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the `adaptive_concurrency_limit` metric.
default: 1
rtt_deviation_scale
optional

number

Scale of RTT deviations which are not considered anomalous.

Valid values are greater than or equal to `0`, and we expect reasonable values to range from `1.0` to `3.0`.

When calculating the past RTT average, we also compute a secondary "deviation" value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
 concurrency
optional

 <oneOf>

Configuration for outbound request concurrency.
none
optional

none

A fixed concurrency of 1.

Only one request can be outstanding at any given time.
adaptive
optional

adaptive

Concurrency will be managed by Vector's [Adaptive Request Concurrency](https://vector.dev/docs/about/under-the-hood/networking/arc/) feature.
Fixed
optional

integer

A fixed amount of concurrency will be allowed.
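A minimal sketch of the `concurrency` setting under `request` (the fixed limit of `10` below is an arbitrary illustrative value):

```yaml
request:
  concurrency: adaptive   # managed by Adaptive Request Concurrency
```

A fixed limit is configured by setting an integer instead, for example `concurrency: 10`; omitting the option keeps the default of one outstanding request at a time.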
rate_limit_duration_secs
optional

integer,​null

The time window used for the `rate_limit_num` option.
rate_limit_num
optional

integer,​null

The maximum number of requests allowed within the `rate_limit_duration_secs` time window.
retry_attempts
optional

integer,​null

The maximum number of retries to make for failed requests.

The default, for all intents and purposes, represents an infinite number of retries.
retry_initial_backoff_secs
optional

integer,​null

The amount of time to wait before attempting the first retry for a failed request.

After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
retry_max_duration_secs
optional

integer,​null

The maximum amount of time to wait between retries.
timeout_secs
optional

integer,​null

The time a request can take before being aborted.

Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
stream_name
required

string

The [stream name](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html) of the target CloudWatch Logs stream.

There can only be one writer to a log stream at a time. If multiple instances are writing to the same log group, the stream name must include an identifier that is guaranteed to be unique per instance.
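For example, assuming your Vector version supports template syntax in `stream_name`, interpolating a per-instance field keeps each writer on its own stream (the group name and the `{{ host }}` field reference are illustrative):

```yaml
group_name: /my/app/logs
stream_name: "app-logs-{{ host }}"
```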
 tls
optional

 <oneOf>

TLS configuration.
 Option 1
optional

object

TLS configuration.
alpn_protocols
optional

array,​null

Sets the list of supported ALPN protocols.

Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order that they are defined.
 ca_file
optional

 <oneOf>

Absolute path to an additional CA certificate file.

The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
 crt_file
optional

 <oneOf>

Absolute path to a certificate file used to identify this server.

The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format.

If this is set, and is not a PKCS#12 archive, `key_file` must also be set.
Option 1
optional

string

A file path.
 key_file
optional

 <oneOf>

Absolute path to a private key file used to identify this server.

The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
key_pass
optional

string,​null

Passphrase used to unlock the encrypted key file.

This has no effect unless `key_file` is set.
verify_certificate
optional

boolean,​null

Enables certificate verification.

If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.

Relevant for both incoming and outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the validity of certificates.
verify_hostname
optional

boolean,​null

Enables hostname verification.

If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.

Only relevant for outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the remote hostname.
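A minimal TLS sketch using the fields above; all file paths and the environment variable are placeholders for your own material:

```yaml
tls:
  ca_file: /etc/ssl/certs/internal-ca.pem
  crt_file: /etc/vector/client.pem
  key_file: /etc/vector/client.key
  key_pass: ${TLS_KEY_PASS}
  verify_certificate: true
  verify_hostname: true
```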
endpoint
optional

string,​null

Custom endpoint for use with AWS-compatible services.
region
optional

string,​null

The [AWS region](https://docs.aws.amazon.com/general/latest/gr/rande.html#regional-endpoints) of the target service.
{% /tab %}

{% tab title="cloudwatchlogssink-request-example" %}

```yaml
acknowledgements:
  enabled: null
assume_role: string
auth:
  imds:
    connect_timeout_seconds: 1
    max_attempts: 4
    read_timeout_seconds: 1
  load_timeout_secs: null
  region: null
batch:
  max_bytes: null
  max_events: null
  timeout_secs: null
compression: none
create_missing_group: true
create_missing_stream: true
encoding: ''
group_name: string
request:
  adaptive_concurrency:
    decrease_ratio: 0.9
    ewma_alpha: 0.4
    initial_concurrency: 1
    rtt_deviation_scale: 2.5
  headers: {}
  rate_limit_duration_secs: 1
  rate_limit_num: 9223372036854776000
  retry_attempts: 9223372036854776000
  retry_initial_backoff_secs: 1
  retry_max_duration_secs: 3600
  timeout_secs: 60
stream_name: string
tls: ''
type: aws_cloudwatch_logs
```

{% /tab %}

### AWS CloudWatch Metrics{% #cloudwatchmetricssink %}

Configuration for the `aws_cloudwatch_metrics` sink.
OptionsSchema
{% tab title="cloudwatchmetricssink-request-model" %}
FieldrequiredTypeDescription acknowledgements
optional

object

Controls how acknowledgements are handled for this sink.

See [End-to-end Acknowledgements](https://vector.dev/docs/about/under-the-hood/architecture/end-to-end-acknowledgements/) for more information on how event acknowledgement is handled.
enabled
optional

boolean,​null

Whether or not end-to-end acknowledgements are enabled.

When enabled for a sink, any source connected to that sink, where the source supports end-to-end acknowledgements as well, waits for events to be acknowledged by the sink before acknowledging them at the source.

Enabling or disabling acknowledgements at the sink level takes precedence over any global [`acknowledgements`](https://vector.dev/docs/reference/configuration/global-options/#acknowledgements) configuration.
default: null
assume_role
optional

string,​null

The ARN of an [IAM role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) to assume at startup.

**DEPRECATED**
auth
optional



Configuration of the authentication strategy for interacting with AWS services.
 batch
optional

object

Event batching behavior.
max_bytes
optional

integer,​null

The maximum size of a batch that is processed by a sink.

This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
max_events
optional

integer,​null

The maximum size of a batch before it is flushed.
default: null
timeout_secs
optional

number,​null

The maximum age of a batch before it is flushed.
default: null
 compression
optional

 <oneOf>

Compression configuration.

All compression algorithms use the default compression level unless otherwise specified.
 Option 1
optional

 <oneOf>

Compression algorithm.
none
optional

none

No compression.
gzip
optional

gzip

[Gzip](https://www.gzip.org/) compression.
zlib
optional

zlib

[Zlib](https://zlib.net/) compression.
zstd
optional

zstd

[Zstandard](https://facebook.github.io/zstd/) compression.
 Option 2
optional

object

Compression algorithm and compression level.
 algorithm
required

 <oneOf>

Compression algorithm.
none
optional

none

No compression.
gzip
optional

gzip

[Gzip](https://www.gzip.org/) compression.
zlib
optional

zlib

[Zlib](https://zlib.net/) compression.
zstd
optional

zstd

[Zstandard](https://facebook.github.io/zstd/) compression.
level
optional

enum

Compression level. Allowed enum values: `none,fast,best,default,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21`
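Putting the two forms together, compression can be specified either as a bare algorithm name or as an object with an explicit level (the choices below are illustrative):

```yaml
compression:
  algorithm: gzip
  level: best
```

The shorthand form `compression: gzip` selects the same algorithm at its default level.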
default_namespace
required

string

The default [namespace](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_concepts.html#Namespace) to use for metrics that do not have one.

Metrics with the same name can only be differentiated by their namespace, and not all metrics have their own namespace.
 request
optional

object

Middleware settings for outbound requests.

Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
 adaptive_concurrency
optional

object

Configuration of adaptive concurrency parameters.

These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}decrease_ratio
optional

number

The fraction of the current value to set the new concurrency limit when decreasing the limit.

Valid values are greater than `0` and less than `1`. Smaller values cause the algorithm to scale back rapidly when latency increases.

Note that the new limit is rounded down after applying this ratio.
default: 0.9
ewma_alpha
optional

number

The weighting of new measurements compared to older measurements.

Valid values are greater than `0` and less than `1`.

ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4
initial_concurrency
optional

integer

The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).

It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the `adaptive_concurrency_limit` metric.
default: 1
rtt_deviation_scale
optional

number

Scale of RTT deviations which are not considered anomalous.

Valid values are greater than or equal to `0`, and we expect reasonable values to range from `1.0` to `3.0`.

When calculating the past RTT average, we also compute a secondary "deviation" value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
 concurrency
optional

 <oneOf>

Configuration for outbound request concurrency.
none
optional

none

A fixed concurrency of 1.

Only one request can be outstanding at any given time.
adaptive
optional

adaptive

Concurrency will be managed by Vector's [Adaptive Request Concurrency](https://vector.dev/docs/about/under-the-hood/networking/arc/) feature.
Fixed
optional

integer

A fixed amount of concurrency will be allowed.
rate_limit_duration_secs
optional

integer,​null

The time window used for the `rate_limit_num` option.
default: 1
rate_limit_num
optional

integer,​null

The maximum number of requests allowed within the `rate_limit_duration_secs` time window.
default: 9223372036854776000
retry_attempts
optional

integer,​null

The maximum number of retries to make for failed requests.

The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000
retry_initial_backoff_secs
optional

integer,​null

The amount of time to wait before attempting the first retry for a failed request.

After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1
retry_max_duration_secs
optional

integer,​null

The maximum amount of time to wait between retries.
default: 3600
timeout_secs
optional

integer,​null

The time a request can take before being aborted.

Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60
 tls
optional

 <oneOf>

TLS configuration.
 Option 1
optional

object

TLS configuration.
alpn_protocols
optional

array,​null

Sets the list of supported ALPN protocols.

Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order that they are defined.
 ca_file
optional

 <oneOf>

Absolute path to an additional CA certificate file.

The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
 crt_file
optional

 <oneOf>

Absolute path to a certificate file used to identify this server.

The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format.

If this is set, and is not a PKCS#12 archive, `key_file` must also be set.
Option 1
optional

string

A file path.
 key_file
optional

 <oneOf>

Absolute path to a private key file used to identify this server.

The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
key_pass
optional

string,​null

Passphrase used to unlock the encrypted key file.

This has no effect unless `key_file` is set.
verify_certificate
optional

boolean,​null

Enables certificate verification.

If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.

Relevant for both incoming and outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the validity of certificates.
verify_hostname
optional

boolean,​null

Enables hostname verification.

If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.

Only relevant for outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the remote hostname.
endpoint
optional

string,​null

Custom endpoint for use with AWS-compatible services.
region
optional

string,​null

The [AWS region](https://docs.aws.amazon.com/general/latest/gr/rande.html#regional-endpoints) of the target service.
{% /tab %}

{% tab title="cloudwatchmetricssink-request-example" %}

```yaml
acknowledgements:
  enabled: null
assume_role: string
auth:
  imds:
    connect_timeout_seconds: 1
    max_attempts: 4
    read_timeout_seconds: 1
  load_timeout_secs: null
  region: null
batch:
  max_bytes: null
  max_events: null
  timeout_secs: null
compression: none
default_namespace: string
request:
  adaptive_concurrency:
    decrease_ratio: 0.9
    ewma_alpha: 0.4
    initial_concurrency: 1
    rtt_deviation_scale: 2.5
  rate_limit_duration_secs: 1
  rate_limit_num: 9223372036854776000
  retry_attempts: 9223372036854776000
  retry_initial_backoff_secs: 1
  retry_max_duration_secs: 3600
  timeout_secs: 60
tls: ''
type: aws_cloudwatch_metrics
```

{% /tab %}

### AWS Kinesis Firehose{% #kinesisfirehosesink %}

Configuration for the `aws_kinesis_firehose` sink.
OptionsSchema
{% tab title="kinesisfirehosesink-request-model" %}
FieldrequiredTypeDescription batch
optional

object

Event batching behavior.
max_bytes
optional

integer,​null

The maximum size of a batch that is processed by a sink.

This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
max_events
optional

integer,​null

The maximum size of a batch before it is flushed.
default: null
timeout_secs
optional

number,​null

The maximum age of a batch before it is flushed.
default: null
 acknowledgements
optional

object

Controls how acknowledgements are handled for this sink.

See [End-to-end Acknowledgements](https://vector.dev/docs/about/under-the-hood/architecture/end-to-end-acknowledgements/) for more information on how event acknowledgement is handled.
enabled
optional

boolean,​null

Whether or not end-to-end acknowledgements are enabled.

When enabled for a sink, any source connected to that sink, where the source supports end-to-end acknowledgements as well, waits for events to be acknowledged by the sink before acknowledging them at the source.

Enabling or disabling acknowledgements at the sink level takes precedence over any global [`acknowledgements`](https://vector.dev/docs/reference/configuration/global-options/#acknowledgements) configuration.
default: null
auth
optional



Configuration of the authentication strategy for interacting with AWS services.
 compression
optional

 <oneOf>

Compression configuration.

All compression algorithms use the default compression level unless otherwise specified.
 Option 1
optional

 <oneOf>

Compression algorithm.
none
optional

none

No compression.
gzip
optional

gzip

[Gzip](https://www.gzip.org/) compression.
zlib
optional

zlib

[Zlib](https://zlib.net/) compression.
zstd
optional

zstd

[Zstandard](https://facebook.github.io/zstd/) compression.
 Option 2
optional

object

Compression algorithm and compression level.
 algorithm
required

 <oneOf>

Compression algorithm.
none
optional

none

No compression.
gzip
optional

gzip

[Gzip](https://www.gzip.org/) compression.
zlib
optional

zlib

[Zlib](https://zlib.net/) compression.
zstd
optional

zstd

[Zstandard](https://facebook.github.io/zstd/) compression.
level
optional

enum

Compression level. Allowed enum values: `none,fast,best,default,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21`
 encoding
optional



Configures how events are encoded into raw bytes.
 Avro
optional

object

Encodes an event as an [Apache Avro](https://avro.apache.org/) message.
 avro
required

object

Apache Avro-specific encoder options.
schema
required

string

The Avro schema.
codec
required

avro

Encodes an event as an [Apache Avro](https://avro.apache.org/) message.
 Csv
optional



Encodes an event as a CSV message.

This codec must be configured with fields to encode.
 csv
required

object

The CSV Serializer Options.
capacity
optional

integer

Set the capacity (in bytes) of the internal buffer used in the CSV writer. This defaults to a reasonable setting.
delimiter
optional

integer

The field delimiter to use when writing CSV.
double_quote
optional

boolean

Enable double quote escapes.

This is enabled by default, but it may be disabled. When disabled, quotes in field data are escaped instead of doubled.
escape
optional

integer

The escape character to use when writing CSV.

In some variants of CSV, quotes are escaped using a special escape character such as `\` (instead of escaping quotes by doubling them).

To use this option, `double_quote` must also be disabled; otherwise it is ignored.
fields
required

[string]

Configures the fields that will be encoded, as well as the order in which they appear in the output.

If a field is not present in the event, the output will be an empty string.

Values of type `Array`, `Object`, and `Regex` are not supported and the output will be an empty string.
quote
optional

integer

The quote character to use when writing CSV.
 quote_style
optional

 <oneOf>

The quoting style to use when writing CSV data.
always
optional

always

This puts quotes around every field. Always.
necessary
optional

necessary

This puts quotes around fields only when necessary. They are necessary when fields contain a quote, delimiter or record terminator. Quotes are also necessary when writing an empty record (which is indistinguishable from a record with one empty field).
non_numeric
optional

non_numeric

This puts quotes around all fields that are non-numeric. Namely, when writing a field that does not parse as a valid float or integer, then quotes will be used even if they aren't strictly necessary.
never
optional

never

This never writes quotes, even if it would produce invalid CSV data.
codec
required

csv

Encodes an event as a CSV message.

This codec must be configured with fields to encode.
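A sketch of a CSV `encoding` block using the options above; the field names are placeholders, and note that the schema gives `delimiter` as an integer (an ASCII byte value):

```yaml
encoding:
  codec: csv
  csv:
    fields:
      - timestamp
      - host
      - message
    delimiter: 44          # ASCII code for ","
    quote_style: necessary
```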
 Gelf
optional

object

Encodes an event as a [GELF](https://docs.graylog.org/docs/gelf) message.
codec
required

gelf

Encodes an event as a [GELF](https://docs.graylog.org/docs/gelf) message.
 Json
optional



Encodes an event as [JSON](https://www.json.org/).
 metric_tag_values
optional

 <oneOf>

Controls how metric tag values are encoded.

When set to `single`, only the last non-bare value of each tag is displayed with the metric. When set to `full`, all metric tags are exposed as separate assignments.
single
optional

single

Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
full
optional

full

All tags are exposed as arrays of either string or null values.
codec
required

json

Encodes an event as [JSON](https://www.json.org/).
 Logfmt
optional

object

Encodes an event as a [logfmt](https://brandur.org/logfmt) message.
codec
required

logfmt

Encodes an event as a [logfmt](https://brandur.org/logfmt) message.
 Native
optional

object

Encodes an event in the [native Protocol Buffers format](https://github.com/vectordotdev/vector/blob/master/lib/vector-core/proto/event.proto).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
codec
required

native

Encodes an event in the [native Protocol Buffers format](https://github.com/vectordotdev/vector/blob/master/lib/vector-core/proto/event.proto).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
 NativeJson
optional

object

Encodes an event in the [native JSON format](https://github.com/vectordotdev/vector/blob/master/lib/codecs/tests/data/native_encoding/schema.cue).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
codec
required

native_json

Encodes an event in the [native JSON format](https://github.com/vectordotdev/vector/blob/master/lib/codecs/tests/data/native_encoding/schema.cue).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
 RawMessage
optional

object

No encoding.

This encoding uses the `message` field of a log event.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
codec
required

raw_message

No encoding.

This encoding uses the `message` field of a log event.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
 Text
optional



Plain text encoding.

This encoding uses the `message` field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
 metric_tag_values
optional

 <oneOf>

Controls how metric tag values are encoded.

When set to `single`, only the last non-bare value of each tag is displayed with the metric. When set to `full`, all metric tags are exposed as separate assignments.
single
optional

single

Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
full
optional

full

All tags are exposed as arrays of either string or null values.
codec
required

text

Plain text encoding.

This encoding uses the `message` field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
except_fields
optional

array,​null

List of fields that are excluded from the encoded event.
only_fields
optional

array,​null

List of fields that are included in the encoded event.
 timestamp_format
optional

 <oneOf>

Format used for timestamp fields.
 Option 1
optional

 <oneOf>

The format in which a timestamp should be represented.
unix
optional

unix

Represent the timestamp as a Unix timestamp.
rfc3339
optional

rfc3339

Represent the timestamp as an RFC 3339 timestamp.
 request
optional

object

Middleware settings for outbound requests.

Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
 adaptive_concurrency
optional

object

Configuration of adaptive concurrency parameters.

These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}decrease_ratio
optional

number

The fraction of the current value to set the new concurrency limit when decreasing the limit.

Valid values are greater than `0` and less than `1`. Smaller values cause the algorithm to scale back rapidly when latency increases.

Note that the new limit is rounded down after applying this ratio.
default: 0.9
ewma_alpha
optional

number

The weighting of new measurements compared to older measurements.

Valid values are greater than `0` and less than `1`.

ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4
initial_concurrency
optional

integer

The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).

It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the `adaptive_concurrency_limit` metric.
default: 1
rtt_deviation_scale
optional

number

Scale of RTT deviations which are not considered anomalous.

Valid values are greater than or equal to `0`, and we expect reasonable values to range from `1.0` to `3.0`.

When calculating the past RTT average, we also compute a secondary "deviation" value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
 concurrency
optional

 <oneOf>

Configuration for outbound request concurrency.
none
optional

none

A fixed concurrency of 1.

Only one request can be outstanding at any given time.
adaptive
optional

adaptive

Concurrency will be managed by Vector's [Adaptive Request Concurrency](https://vector.dev/docs/about/under-the-hood/networking/arc/) feature.
Fixed
optional

integer

A fixed amount of concurrency will be allowed.
rate_limit_duration_secs
optional

integer,​null

The time window used for the `rate_limit_num` option.
default: 1
rate_limit_num
optional

integer,​null

The maximum number of requests allowed within the `rate_limit_duration_secs` time window.
default: 9223372036854776000
retry_attempts
optional

integer,​null

The maximum number of retries to make for failed requests.

The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000
retry_initial_backoff_secs
optional

integer,​null

The amount of time to wait before attempting the first retry for a failed request.

After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1
retry_max_duration_secs
optional

integer,​null

The maximum amount of time to wait between retries.
default: 3600
timeout_secs
optional

integer,​null

The time a request can take before being aborted.

Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60
request_retry_partial
optional

boolean

Whether or not to retry successful requests containing partial failures.
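The retry options above combine as follows: the first retry waits `retry_initial_backoff_secs`, subsequent waits follow the Fibonacci sequence, and each wait is capped at `retry_max_duration_secs`. A sketch under those assumptions (not Vector's exact code):

```python
def retry_backoffs(initial=1, max_duration=3600, attempts=8):
    """Yield successive retry waits in seconds: Fibonacci-scaled, capped."""
    a, b = initial, initial
    for _ in range(attempts):
        yield min(a, max_duration)
        a, b = b, a + b

# With the defaults, the first waits are 1, 1, 2, 3, 5, 8, ... seconds,
# never exceeding retry_max_duration_secs.
```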
stream_name
optional

string

The [stream name](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html) of the target Kinesis Firehose delivery stream.
 tls
optional

 <oneOf>

TLS configuration.
 Option 1
optional

object

TLS configuration.
alpn_protocols
optional

array,​null

Sets the list of supported ALPN protocols.

Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order that they are defined.
 ca_file
optional

 <oneOf>

Absolute path to an additional CA certificate file.

The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
 crt_file
optional

 <oneOf>

Absolute path to a certificate file used to identify this server.

The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format.

If this is set, and is not a PKCS#12 archive, `key_file` must also be set.
Option 1
optional

string

A file path.
 key_file
optional

 <oneOf>

Absolute path to a private key file used to identify this server.

The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
key_pass
optional

string,​null

Passphrase used to unlock the encrypted key file.

This has no effect unless `key_file` is set.
verify_certificate
optional

boolean,​null

Enables certificate verification.

If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.

Relevant for both incoming and outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the validity of certificates.
verify_hostname
optional

boolean,​null

Enables hostname verification.

If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.

Only relevant for outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the remote hostname.
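Putting the TLS options above together, a hypothetical sink configuration with mutual TLS might look like the following (all paths and the environment variable are placeholders):

```yaml
tls:
  ca_file: /etc/vector/certs/ca.pem
  crt_file: /etc/vector/certs/client.pem
  key_file: /etc/vector/certs/client.key
  key_pass: ${TLS_KEY_PASS}
  verify_certificate: true
  verify_hostname: true
```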
endpoint
optional

string,​null

Custom endpoint for use with AWS-compatible services.
region
optional

string,​null

The [AWS region](https://docs.aws.amazon.com/general/latest/gr/rande.html#regional-endpoints) of the target service.
{% /tab %}

{% tab title="kinesisfirehosesink-request-example" %}

```yaml
batch:
  max_bytes: null
  max_events: null
  timeout_secs: null
type: aws_kinesis_firehose
```

{% /tab %}
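Beyond the minimal example above, a fuller (hypothetical) `aws_kinesis_firehose` configuration combining options documented in this section might look like:

```yaml
type: aws_kinesis_firehose
region: us-east-1
stream_name: my-delivery-stream
encoding:
  codec: json
compression: gzip
batch:
  max_events: 500
  timeout_secs: 1
request:
  rate_limit_duration_secs: 1
  rate_limit_num: 1000
  retry_initial_backoff_secs: 1
  retry_max_duration_secs: 3600
```

The stream name, region, and tuning values here are illustrative, not recommendations.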

### AWS Kinesis Streams{% #kinesisstreamssink %}

Configuration for the `aws_kinesis_streams` sink.
Options
{% tab title="kinesisstreamssink-request-model" %}
batch
optional

object

Event batching behavior.
max_bytes
optional

integer,​null

The maximum size of a batch that is processed by a sink.

This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
max_events
optional

integer,​null

The maximum size of a batch before it is flushed.
default: null
timeout_secs
optional

number,​null

The maximum age of a batch before it is flushed.
default: null
 partition_key_field
optional

 <oneOf>

The log field used as the Kinesis record's partition key value.

If not specified, a unique partition key is generated for each Kinesis record.
Option 1
optional

string

A field path. This must be a valid path to a field in the event.
 acknowledgements
optional

object

Controls how acknowledgements are handled for this sink.

See [End-to-end Acknowledgements](https://vector.dev/docs/about/under-the-hood/architecture/end-to-end-acknowledgements/) for more information on how event acknowledgement is handled.
enabled
optional

boolean,​null

Whether or not end-to-end acknowledgements are enabled.

When enabled for a sink, any source connected to that sink, where the source supports end-to-end acknowledgements as well, waits for events to be acknowledged by the sink before acknowledging them at the source.

Enabling or disabling acknowledgements at the sink level takes precedence over any global [`acknowledgements`](https://vector.dev/docs/reference/configuration/global-options/#acknowledgements) configuration.
default: null
auth
optional



Configuration of the authentication strategy for interacting with AWS services.
 compression
optional

 <oneOf>

Compression configuration.

All compression algorithms use the default compression level unless otherwise specified.
 Option 1
optional

 <oneOf>

Compression algorithm.
none
optional

none

No compression.
gzip
optional

gzip

[Gzip](https://www.gzip.org/) compression.
zlib
optional

zlib

[Zlib](https://zlib.net/) compression.
zstd
optional

zstd

[Zstandard](https://facebook.github.io/zstd/) compression.
 Option 2
optional

object

Compression algorithm and compression level.
 algorithm
required

 <oneOf>

Compression algorithm.
none
optional

none

No compression.
gzip
optional

gzip

[Gzip](https://www.gzip.org/) compression.
zlib
optional

zlib

[Zlib](https://zlib.net/) compression.
zstd
optional

zstd

[Zstandard](https://facebook.github.io/zstd/) compression.
level
optional

enum

Compression level. Allowed enum values: `none,fast,best,default,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21`
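The two compression shapes above correspond to either a bare algorithm name or an object with an explicit level; for example (illustrative):

```yaml
# Option 1: algorithm only, default level
compression: gzip

# Option 2: algorithm with an explicit level
compression:
  algorithm: zstd
  level: 3
```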
 encoding
optional



Configures how events are encoded into raw bytes.
 Avro
optional

object

Encodes an event as an [Apache Avro](https://avro.apache.org/) message.
 avro
required

object

Apache Avro-specific encoder options.
schema
required

string

The Avro schema.
codec
required

avro

Encodes an event as an [Apache Avro](https://avro.apache.org/) message.
 Csv
optional



Encodes an event as a CSV message.

This codec must be configured with fields to encode.
 csv
required

object

The CSV Serializer Options.
capacity
optional

integer

Set the capacity (in bytes) of the internal buffer used in the CSV writer. This defaults to a reasonable setting.
delimiter
optional

integer

The field delimiter to use when writing CSV.
double_quote
optional

boolean

Enable double quote escapes.

This is enabled by default, but it may be disabled. When disabled, quotes in field data are escaped instead of doubled.
escape
optional

integer

The escape character to use when writing CSV.

In some variants of CSV, quotes are escaped using a special escape character like \ (instead of escaping quotes by doubling them).

To use this, `double_quote` must be disabled; otherwise, this option is ignored.
fields
required

[string]

Configures the fields that will be encoded, as well as the order in which they appear in the output.

If a field is not present in the event, the output will be an empty string.

Values of type `Array`, `Object`, and `Regex` are not supported and the output will be an empty string.
quote
optional

integer

The quote character to use when writing CSV.
 quote_style
optional

 <oneOf>

The quoting style to use when writing CSV data.
always
optional

always

This puts quotes around every field. Always.
necessary
optional

necessary

This puts quotes around fields only when necessary. They are necessary when fields contain a quote, delimiter or record terminator. Quotes are also necessary when writing an empty record (which is indistinguishable from a record with one empty field).
non_numeric
optional

non_numeric

This puts quotes around all fields that are non-numeric. Namely, when writing a field that does not parse as a valid float or integer, quotes are used even if they aren't strictly necessary.
never
optional

never

This never writes quotes, even if it would produce invalid CSV data.
codec
required

csv

Encodes an event as a CSV message.

This codec must be configured with fields to encode.
 Gelf
optional

object

Encodes an event as a [GELF](https://docs.graylog.org/docs/gelf) message.
codec
required

gelf

Encodes an event as a [GELF](https://docs.graylog.org/docs/gelf) message.
 Json
optional



Encodes an event as [JSON](https://www.json.org/).
 metric_tag_values
optional

 <oneOf>

Controls how metric tag values are encoded.

When set to `single`, only the last non-bare value of tags are displayed with the metric. When set to `full`, all metric tags are exposed as separate assignments.
single
optional

single

Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
full
optional

full

All tags are exposed as arrays of either string or null values.
codec
required

json

Encodes an event as [JSON](https://www.json.org/).
 Logfmt
optional

object

Encodes an event as a [logfmt](https://brandur.org/logfmt) message.
codec
required

logfmt

Encodes an event as a [logfmt](https://brandur.org/logfmt) message.
 Native
optional

object

Encodes an event in the [native Protocol Buffers format](https://github.com/vectordotdev/vector/blob/master/lib/vector-core/proto/event.proto).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
codec
required

native

Encodes an event in the [native Protocol Buffers format](https://github.com/vectordotdev/vector/blob/master/lib/vector-core/proto/event.proto).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
 NativeJson
optional

object

Encodes an event in the [native JSON format](https://github.com/vectordotdev/vector/blob/master/lib/codecs/tests/data/native_encoding/schema.cue).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
codec
required

native_json

Encodes an event in the [native JSON format](https://github.com/vectordotdev/vector/blob/master/lib/codecs/tests/data/native_encoding/schema.cue).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
 RawMessage
optional

object

No encoding.

This encoding uses the `message` field of a log event.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
codec
required

raw_message

No encoding.

This encoding uses the `message` field of a log event.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
 Text
optional



Plain text encoding.

This encoding uses the `message` field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
 metric_tag_values
optional

 <oneOf>

Controls how metric tag values are encoded.

When set to `single`, only the last non-bare value of tags are displayed with the metric. When set to `full`, all metric tags are exposed as separate assignments.
single
optional

single

Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
full
optional

full

All tags are exposed as arrays of either string or null values.
codec
required

text

Plain text encoding.

This encoding uses the `message` field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
except_fields
optional

array,​null

List of fields that are excluded from the encoded event.
only_fields
optional

array,​null

List of fields that are included in the encoded event.
 timestamp_format
optional

 <oneOf>

Format used for timestamp fields.
 Option 1
optional

 <oneOf>

The format in which a timestamp should be represented.
unix
optional

unix

Represent the timestamp as a Unix timestamp.
rfc3339
optional

rfc3339

Represent the timestamp as an RFC 3339 timestamp.
 request
optional

object

Middleware settings for outbound requests.

Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
 adaptive_concurrency
optional

object

Configuration of adaptive concurrency parameters.

These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}
decrease_ratio
optional

number

The fraction of the current value to set the new concurrency limit when decreasing the limit.

Valid values are greater than `0` and less than `1`. Smaller values cause the algorithm to scale back rapidly when latency increases.

Note that the new limit is rounded down after applying this ratio.
default: 0.9
ewma_alpha
optional

number

The weighting of new measurements compared to older measurements.

Valid values are greater than `0` and less than `1`.

ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4
initial_concurrency
optional

integer

The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).

It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the `adaptive_concurrency_limit` metric.
default: 1
rtt_deviation_scale
optional

number

Scale of RTT deviations which are not considered anomalous.

Valid values are greater than or equal to `0`, and we expect reasonable values to range from `1.0` to `3.0`.

When calculating the past RTT average, we also compute a secondary "deviation" value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
 concurrency
optional

 <oneOf>

Configuration for outbound request concurrency.
none
optional

none

A fixed concurrency of 1.

Only one request can be outstanding at any given time.
adaptive
optional

adaptive

Concurrency will be managed by Vector's [Adaptive Request Concurrency](https://vector.dev/docs/about/under-the-hood/networking/arc/) feature.
Fixed
optional

integer

A fixed amount of concurrency will be allowed.
rate_limit_duration_secs
optional

integer,​null

The time window used for the `rate_limit_num` option.
default: 1
rate_limit_num
optional

integer,​null

The maximum number of requests allowed within the `rate_limit_duration_secs` time window.
default: 9223372036854776000
retry_attempts
optional

integer,​null

The maximum number of retries to make for failed requests.

The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000
retry_initial_backoff_secs
optional

integer,​null

The amount of time to wait before attempting the first retry for a failed request.

After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1
retry_max_duration_secs
optional

integer,​null

The maximum amount of time to wait between retries.
default: 3600
timeout_secs
optional

integer,​null

The time a request can take before being aborted.

Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60
request_retry_partial
optional

boolean

Whether or not to retry successful requests containing partial failures.
stream_name
optional

string

The [stream name](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html) of the target Kinesis stream.
 tls
optional

 <oneOf>

TLS configuration.
 Option 1
optional

object

TLS configuration.
alpn_protocols
optional

array,​null

Sets the list of supported ALPN protocols.

Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order that they are defined.
 ca_file
optional

 <oneOf>

Absolute path to an additional CA certificate file.

The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
 crt_file
optional

 <oneOf>

Absolute path to a certificate file used to identify this server.

The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format.

If this is set, and is not a PKCS#12 archive, `key_file` must also be set.
Option 1
optional

string

A file path.
 key_file
optional

 <oneOf>

Absolute path to a private key file used to identify this server.

The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
key_pass
optional

string,​null

Passphrase used to unlock the encrypted key file.

This has no effect unless `key_file` is set.
verify_certificate
optional

boolean,​null

Enables certificate verification.

If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.

Relevant for both incoming and outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the validity of certificates.
verify_hostname
optional

boolean,​null

Enables hostname verification.

If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.

Only relevant for outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the remote hostname.
endpoint
optional

string,​null

Custom endpoint for use with AWS-compatible services.
region
optional

string,​null

The [AWS region](https://docs.aws.amazon.com/general/latest/gr/rande.html#regional-endpoints) of the target service.
{% /tab %}

{% tab title="kinesisstreamssink-request-example" %}

```yaml
batch:
  max_bytes: null
  max_events: null
  timeout_secs: null
partition_key_field: ''
type: aws_kinesis_streams
```

{% /tab %}
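A fuller (hypothetical) `aws_kinesis_streams` configuration that uses a log field as the partition key might look like:

```yaml
type: aws_kinesis_streams
region: us-west-2
stream_name: my-stream
partition_key_field: host
encoding:
  codec: json
compression: none
```

The stream name, region, and partition key field here are illustrative placeholders.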

### AWS S3{% #s3sink %}

Configuration for the `aws_s3` sink.
Options
{% tab title="s3sink-request-model" %}
acknowledgements
optional

object

Controls how acknowledgements are handled for this sink.

See [End-to-end Acknowledgements](https://vector.dev/docs/about/under-the-hood/architecture/end-to-end-acknowledgements/) for more information on how event acknowledgement is handled.
enabled
optional

boolean,​null

Whether or not end-to-end acknowledgements are enabled.

When enabled for a sink, any source connected to that sink, where the source supports end-to-end acknowledgements as well, waits for events to be acknowledged by the sink before acknowledging them at the source.

Enabling or disabling acknowledgements at the sink level takes precedence over any global [`acknowledgements`](https://vector.dev/docs/reference/configuration/global-options/#acknowledgements) configuration.
default: null
auth
optional



Configuration of the authentication strategy for interacting with AWS services.
 batch
optional

object

Event batching behavior.
max_bytes
optional

integer,​null

The maximum size of a batch that is processed by a sink.

This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
max_events
optional

integer,​null

The maximum size of a batch before it is flushed.
default: null
timeout_secs
optional

number,​null

The maximum age of a batch before it is flushed.
default: null
bucket
required

string

The S3 bucket name.

This must not include a leading `s3://` or a trailing `/`.
 compression
optional

 <oneOf>

Compression configuration.

All compression algorithms use the default compression level unless otherwise specified.

Some cloud storage API clients and browsers handle decompression transparently, so depending on how they are accessed, files may not always appear to be compressed.
 Option 1
optional

 <oneOf>

Compression algorithm.
none
optional

none

No compression.
gzip
optional

gzip

[Gzip](https://www.gzip.org/) compression.
zlib
optional

zlib

[Zlib](https://zlib.net/) compression.
zstd
optional

zstd

[Zstandard](https://facebook.github.io/zstd/) compression.
 Option 2
optional

object

Compression algorithm and compression level.
 algorithm
required

 <oneOf>

Compression algorithm.
none
optional

none

No compression.
gzip
optional

gzip

[Gzip](https://www.gzip.org/) compression.
zlib
optional

zlib

[Zlib](https://zlib.net/) compression.
zstd
optional

zstd

[Zstandard](https://facebook.github.io/zstd/) compression.
level
optional

enum

Compression level. Allowed enum values: `none,fast,best,default,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21`
filename_append_uuid
optional

boolean

Whether or not to append a UUID v4 token to the end of the object key.

The UUID is appended to the timestamp portion of the object key, such that if the object key generated is `date=2022-07-18/1658176486`, setting this field to `true` results in an object key that looks like `date=2022-07-18/1658176486-30f6652c-71da-4f9f-800d-a1189c47c547`.

This ensures there are no name collisions, and can be useful in high-volume workloads where object keys must be unique.
filename_extension
optional

string,​null

The filename extension to use in the object key.

This overrides setting the extension based on the configured `compression`.
filename_time_format
optional

string

The timestamp format for the time component of the object key.

By default, object keys are appended with a timestamp that reflects when the objects are sent to S3, such that the resulting object key is functionally equivalent to joining the key prefix with the formatted timestamp, such as `date=2022-07-18/1658176486`.

This would represent a `key_prefix` set to `date=%F/` and the timestamp of Mon Jul 18 2022 20:34:44 GMT+0000, with the `filename_time_format` being set to `%s`, which renders timestamps in seconds since the Unix epoch.

Supports the common [`strftime`](https://docs.rs/chrono/latest/chrono/format/strftime/index.html#specifiers) specifiers found in most languages.

When set to an empty string, no timestamp is appended to the key prefix.
key_prefix
optional

string

A prefix to apply to all object keys.

Prefixes are useful for partitioning objects, such as by creating an object key that stores objects under a particular directory. If using a prefix for this purpose, it must end in `/` to act as a directory path. A trailing `/` is **not** automatically added.
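Taken together, `key_prefix`, `filename_time_format`, and `filename_append_uuid` control the object key. For example, settings like the following (illustrative) would yield keys of the form `date=2022-07-18/1658176486-<uuid>` described above:

```yaml
key_prefix: date=%F/
filename_time_format: '%s'
filename_append_uuid: true
compression: gzip
```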
 request
optional

object

Middleware settings for outbound requests.

Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
 adaptive_concurrency
optional

object

Configuration of adaptive concurrency parameters.

These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}
decrease_ratio
optional

number

The fraction of the current value to set the new concurrency limit when decreasing the limit.

Valid values are greater than `0` and less than `1`. Smaller values cause the algorithm to scale back rapidly when latency increases.

Note that the new limit is rounded down after applying this ratio.
default: 0.9
ewma_alpha
optional

number

The weighting of new measurements compared to older measurements.

Valid values are greater than `0` and less than `1`.

ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4
initial_concurrency
optional

integer

The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).

It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the `adaptive_concurrency_limit` metric.
default: 1
rtt_deviation_scale
optional

number

Scale of RTT deviations which are not considered anomalous.

Valid values are greater than or equal to `0`, and we expect reasonable values to range from `1.0` to `3.0`.

When calculating the past RTT average, we also compute a secondary "deviation" value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
 concurrency
optional

 <oneOf>

Configuration for outbound request concurrency.
none
optional

none

A fixed concurrency of 1.

Only one request can be outstanding at any given time.
adaptive
optional

adaptive

Concurrency will be managed by Vector's [Adaptive Request Concurrency](https://vector.dev/docs/about/under-the-hood/networking/arc/) feature.
Fixed
optional

integer

A fixed amount of concurrency will be allowed.
rate_limit_duration_secs
optional

integer,​null

The time window used for the `rate_limit_num` option.
default: 1
rate_limit_num
optional

integer,​null

The maximum number of requests allowed within the `rate_limit_duration_secs` time window.
default: 9223372036854776000
retry_attempts
optional

integer,​null

The maximum number of retries to make for failed requests.

The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000
retry_initial_backoff_secs
optional

integer,​null

The amount of time to wait before attempting the first retry for a failed request.

After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1
retry_max_duration_secs
optional

integer,​null

The maximum amount of time to wait between retries.
default: 3600
timeout_secs
optional

integer,​null

The time a request can take before being aborted.

Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60
 tls
optional

 <oneOf>

TLS configuration.
 Option 1
optional

object

TLS configuration.
alpn_protocols
optional

array,​null

Sets the list of supported ALPN protocols.

Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order that they are defined.
 ca_file
optional

 <oneOf>

Absolute path to an additional CA certificate file.

The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
 crt_file
optional

 <oneOf>

Absolute path to a certificate file used to identify this server.

The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format.

If this is set, and is not a PKCS#12 archive, `key_file` must also be set.
Option 1
optional

string

A file path.
 key_file
optional

 <oneOf>

Absolute path to a private key file used to identify this server.

The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
key_pass
optional

string,​null

Passphrase used to unlock the encrypted key file.

This has no effect unless `key_file` is set.
verify_certificate
optional

boolean,​null

Enables certificate verification.

If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.

Relevant for both incoming and outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the validity of certificates.
verify_hostname
optional

boolean,​null

Enables hostname verification.

If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.

Only relevant for outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the remote hostname.
 acl
optional

 <oneOf>

Canned ACL to apply to the created objects.

For more information, see [Canned ACL](https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl).
 Option 1
optional

 <oneOf>

S3 Canned ACLs.

For more information, see [Canned ACL](https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl).
private
optional

private

Bucket/object are private.

The bucket/object owner is granted the `FULL_CONTROL` permission, and no one else has access.

This is the default.
public-read
optional

public-read

Bucket/object can be read publicly.

The bucket/object owner is granted the `FULL_CONTROL` permission, and anyone in the `AllUsers` grantee group is granted the `READ` permission.
public-read-write
optional

public-read-write

Bucket/object can be read and written publicly.

The bucket/object owner is granted the `FULL_CONTROL` permission, and anyone in the `AllUsers` grantee group is granted the `READ` and `WRITE` permissions.

This is generally not recommended.
aws-exec-read
optional

aws-exec-read

Bucket/object are private, and readable by EC2.

The bucket/object owner is granted the `FULL_CONTROL` permission, and the AWS EC2 service is granted the `READ` permission for the purpose of reading Amazon Machine Image (AMI) bundles from the given bucket.
authenticated-read
optional

authenticated-read

Bucket/object can be read by authenticated users.

The bucket/object owner is granted the `FULL_CONTROL` permission, and anyone in the `AuthenticatedUsers` grantee group is granted the `READ` permission.
bucket-owner-read
optional

bucket-owner-read

Object is private, except to the bucket owner.

The object owner is granted the `FULL_CONTROL` permission, and the bucket owner is granted the `READ` permission.

Only relevant when specified for an object: this canned ACL is otherwise ignored when specified for a bucket.
bucket-owner-full-control
optional

bucket-owner-full-control

Object is semi-private.

Both the object owner and bucket owner are granted the `FULL_CONTROL` permission.

Only relevant when specified for an object: this canned ACL is otherwise ignored when specified for a bucket.
log-delivery-write
optional

log-delivery-write

Bucket can have logs written.

The `LogDelivery` grantee group is granted `WRITE` and `READ_ACP` permissions.

Only relevant when specified for a bucket: this canned ACL is otherwise ignored when specified for an object.

For more information about logs, see [Amazon S3 Server Access Logging](https://docs.aws.amazon.com/AmazonS3/latest/dev/ServerLogs.html).
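For example, to keep created objects private except for read access by the bucket owner, the `acl` option can be set to one of the canned ACLs listed above (the value shown is illustrative):

```yaml
acl: bucket-owner-read  # or: private, public-read, authenticated-read, log-delivery-write, ...
```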
content_encoding
optional

string,​null

Overrides what content encoding has been applied to the object.

Directly comparable to the `Content-Encoding` HTTP header.

If not specified, the compression scheme used dictates this value.
content_type
optional

string,​null

Overrides the MIME type of the object.

Directly comparable to the `Content-Type` HTTP header.

If not specified, the compression scheme used dictates this value. When `compression` is set to `none`, the value `text/x-log` is used.
grant_full_control
optional

string,​null

Grants `READ`, `READ_ACP`, and `WRITE_ACP` permissions on the created objects to the named [grantee](https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#specifying-grantee).

This allows the grantee to read the created objects and their metadata, as well as read and modify the ACL on the created objects.
grant_read
optional

string,​null

Grants `READ` permissions on the created objects to the named [grantee](https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#specifying-grantee).

This allows the grantee to read the created objects and their metadata.
grant_read_acp
optional

string,​null

Grants `READ_ACP` permissions on the created objects to the named [grantee](https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#specifying-grantee).

This allows the grantee to read the ACL on the created objects.
grant_write_acp
optional

string,​null

Grants `WRITE_ACP` permissions on the created objects to the named [grantee](https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#specifying-grantee).

This allows the grantee to modify the ACL on the created objects.
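A sketch combining the content and grant options above; the grantee value is a hypothetical placeholder, not a real canonical user ID:

```yaml
content_encoding: gzip
content_type: application/json
grant_read: id=EXAMPLE_CANONICAL_USER_ID  # placeholder grantee
```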
 server_side_encryption
optional

 <oneOf>

AWS S3 Server-Side Encryption algorithms.

The Server-side Encryption algorithm used when storing these objects.
 Option 1
optional

 <oneOf>

AWS S3 Server-Side Encryption algorithms.

More information on each algorithm can be found in the [AWS documentation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/serv-side-encryption.html).
AES256
optional

AES256

Each object is encrypted with AES-256 using a unique key.

This corresponds to the `SSE-S3` option.
aws:kms
optional

aws:kms

Each object is encrypted with AES-256 using keys managed by AWS KMS.

Depending on whether or not a KMS key ID is specified, this corresponds either to the `SSE-KMS` option (keys generated/managed by KMS) or the `SSE-C` option (keys generated by the customer, managed by KMS).
ssekms_key_id
optional

string,​null

Specifies the ID of the AWS Key Management Service (AWS KMS) symmetric, customer-managed customer master key (CMK) that is used for the created objects.

Only applies when `server_side_encryption` is configured to use KMS.

If not specified, Amazon S3 uses the AWS managed CMK in AWS to protect the data.
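For example, to use KMS-managed encryption with a specific key, the two options above might be set as follows; the key ID is a placeholder:

```yaml
server_side_encryption: aws:kms
ssekms_key_id: abcd1234-placeholder-key-id  # only applies when using aws:kms
```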
 storage_class
optional

 <oneOf>

The storage class for the created objects.

See the [S3 Storage Classes](https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html) for more details.
STANDARD
optional

STANDARD

Standard Redundancy.
REDUCED_REDUNDANCY
optional

REDUCED_REDUNDANCY

Reduced Redundancy.
INTELLIGENT_TIERING
optional

INTELLIGENT_TIERING

Intelligent Tiering.
STANDARD_IA
optional

STANDARD_IA

Infrequently Accessed.
ONEZONE_IA
optional

ONEZONE_IA

Infrequently Accessed (single Availability zone).
GLACIER
optional

GLACIER

Glacier Flexible Retrieval.
DEEP_ARCHIVE
optional

DEEP_ARCHIVE

Glacier Deep Archive.
tags
optional

object,​null

The tag-set for the object.
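The storage class and tag-set options above can be sketched together; the tag names and values are illustrative:

```yaml
storage_class: STANDARD_IA  # one of the classes listed above
tags:
  team: observability  # illustrative tag-set
  env: production
```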
endpoint
optional

string,​null

Custom endpoint for use with AWS-compatible services.
region
optional

string,​null

The [AWS region](https://docs.aws.amazon.com/general/latest/gr/rande.html#regional-endpoints) of the target service.
 encoding
required



Configures how events are encoded into raw bytes.
 Avro
optional

object

Encodes an event as an [Apache Avro](https://avro.apache.org/) message.
 avro
required

object

Apache Avro-specific encoder options.
schema
required

string

The Avro schema.
codec
required

avro

Encodes an event as an [Apache Avro](https://avro.apache.org/) message.
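The Avro encoder above requires a schema. A minimal sketch, where the schema itself is illustrative:

```yaml
encoding:
  codec: avro
  avro:
    schema: '{"type":"record","name":"Log","fields":[{"name":"message","type":"string"}]}'
```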
 Csv
optional



Encodes an event as a CSV message.

This codec must be configured with fields to encode.
 csv
required

object

The CSV Serializer Options.
capacity
optional

integer

Set the capacity (in bytes) of the internal buffer used in the CSV writer. This defaults to a reasonable setting.
delimiter
optional

integer

The field delimiter to use when writing CSV.
double_quote
optional

boolean

Enable double quote escapes.

This is enabled by default, but it may be disabled. When disabled, quotes in field data are escaped instead of doubled.
escape
optional

integer

The escape character to use when writing CSV.

In some variants of CSV, quotes are escaped using a special escape character like \ (instead of escaping quotes by doubling them).

To use this, `double_quote` must also be disabled; otherwise, this option is ignored.
fields
required

[string]

Configures the fields that will be encoded, as well as the order in which they appear in the output.

If a field is not present in the event, the output will be an empty string.

Values of type `Array`, `Object`, and `Regex` are not supported and the output will be an empty string.
quote
optional

integer

The quote character to use when writing CSV.
 quote_style
optional

 <oneOf>

The quoting style to use when writing CSV data.
always
optional

always

This puts quotes around every field. Always.
necessary
optional

necessary

This puts quotes around fields only when necessary. They are necessary when fields contain a quote, delimiter or record terminator. Quotes are also necessary when writing an empty record (which is indistinguishable from a record with one empty field).
non_numeric
optional

non_numeric

This puts quotes around all fields that are non-numeric. Namely, when writing a field that does not parse as a valid float or integer, then quotes will be used even if they aren't strictly necessary.
never
optional

never

This never writes quotes, even if it would produce invalid CSV data.
codec
required

csv

Encodes an event as a CSV message.

This codec must be configured with fields to encode.
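Putting the CSV options above together, a sketch in which the field names are illustrative (`fields` is the only required option, and the character options take ASCII codes):

```yaml
encoding:
  codec: csv
  csv:
    fields: [timestamp, host, message]  # illustrative field names; order determines output order
    delimiter: 44          # ASCII code for ","
    quote_style: necessary # quote only when a field contains a quote, delimiter, or terminator
```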
 Gelf
optional

object

Encodes an event as a [GELF](https://docs.graylog.org/docs/gelf) message.
codec
required

gelf

Encodes an event as a [GELF](https://docs.graylog.org/docs/gelf) message.
 Json
optional



Encodes an event as [JSON](https://www.json.org/).
 metric_tag_values
optional

 <oneOf>

Controls how metric tag values are encoded.

When set to `single`, only the last non-bare value of tags are displayed with the metric. When set to `full`, all metric tags are exposed as separate assignments.
single
optional

single

Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
full
optional

full

All tags are exposed as arrays of either string or null values.
codec
required

json

Encodes an event as [JSON](https://www.json.org/).
 Logfmt
optional

object

Encodes an event as a [logfmt](https://brandur.org/logfmt) message.
codec
required

logfmt

Encodes an event as a [logfmt](https://brandur.org/logfmt) message.
 Native
optional

object

Encodes an event in the [native Protocol Buffers format](https://github.com/vectordotdev/vector/blob/master/lib/vector-core/proto/event.proto).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
codec
required

native

Encodes an event in the [native Protocol Buffers format](https://github.com/vectordotdev/vector/blob/master/lib/vector-core/proto/event.proto).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
 NativeJson
optional

object

Encodes an event in the [native JSON format](https://github.com/vectordotdev/vector/blob/master/lib/codecs/tests/data/native_encoding/schema.cue).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
codec
required

native_json

Encodes an event in the [native JSON format](https://github.com/vectordotdev/vector/blob/master/lib/codecs/tests/data/native_encoding/schema.cue).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
 RawMessage
optional

object

No encoding.

This encoding uses the `message` field of a log event.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
codec
required

raw_message

No encoding.

This encoding uses the `message` field of a log event.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
 Text
optional



Plain text encoding.

This encoding uses the `message` field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
 metric_tag_values
optional

 <oneOf>

Controls how metric tag values are encoded.

When set to `single`, only the last non-bare value of tags are displayed with the metric. When set to `full`, all metric tags are exposed as separate assignments.
single
optional

single

Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
full
optional

full

All tags are exposed as arrays of either string or null values.
codec
required

text

Plain text encoding.

This encoding uses the `message` field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
except_fields
optional

array,​null

List of fields that are excluded from the encoded event.
only_fields
optional

array,​null

List of fields that are included in the encoded event.
 timestamp_format
optional

 <oneOf>

Format used for timestamp fields.
 Option 1
optional

 <oneOf>

The format in which a timestamp should be represented.
unix
optional

unix

Represent the timestamp as a Unix timestamp.
rfc3339
optional

rfc3339

Represent the timestamp as an RFC 3339 timestamp.
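The field-filtering and timestamp options above apply alongside any codec; for example (the excluded field name is illustrative):

```yaml
encoding:
  codec: json
  except_fields: [internal_metadata]  # illustrative field name to drop
  timestamp_format: rfc3339
```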
 framing
optional

 <oneOf>

Framing configuration.
 Option 1
optional

 <oneOf>

Framing configuration.
 Bytes
optional

object

Event data is not delimited at all.
method
required

bytes

Event data is not delimited at all.
 CharacterDelimited
optional



Event data is delimited by a single ASCII (7-bit) character.
 character_delimited
required

object

Options for the character delimited encoder.
delimiter
required

integer

The ASCII (7-bit) character that delimits byte sequences.
method
required

character_delimited

Event data is delimited by a single ASCII (7-bit) character.
 LengthDelimited
optional

object

Event data is prefixed with its length in bytes.

The prefix is a 32-bit unsigned integer, little endian.
method
required

length_delimited

Event data is prefixed with its length in bytes.

The prefix is a 32-bit unsigned integer, little endian.
 NewlineDelimited
optional

object

Event data is delimited by a newline (LF) character.
method
required

newline_delimited

Event data is delimited by a newline (LF) character.
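For instance, the character-delimited framing method above could be configured with a comma as the delimiter (the delimiter is given as an ASCII code):

```yaml
framing:
  method: character_delimited
  character_delimited:
    delimiter: 44  # ASCII code of the delimiting character (",")
```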
{% /tab %}

{% tab title="s3sink-request-example" %}

```yaml
acknowledgements:
  enabled: null
auth:
  imds:
    connect_timeout_seconds: 1
    max_attempts: 4
    read_timeout_seconds: 1
  load_timeout_secs: null
  region: null
batch:
  max_bytes: null
  max_events: null
  timeout_secs: null
bucket: string
compression: gzip
filename_append_uuid: true
filename_extension: string
filename_time_format: '%s'
key_prefix: date=%F
request:
  adaptive_concurrency:
    decrease_ratio: 0.9
    ewma_alpha: 0.4
    initial_concurrency: 1
    rtt_deviation_scale: 2.5
  rate_limit_duration_secs: 1
  rate_limit_num: 9223372036854776000
  retry_attempts: 9223372036854776000
  retry_initial_backoff_secs: 1
  retry_max_duration_secs: 3600
  timeout_secs: 60
tls: ''
type: aws_s3
```

{% /tab %}

### AWS SQS{% #sqssink %}

Configuration for the `aws_sqs` sink.
OptionsSchema
{% tab title="sqssink-request-model" %}
FieldrequiredTypeDescriptionqueue_url
required

uri

The URL of the Amazon SQS queue to which messages are sent.
endpoint
optional

string,​null

Custom endpoint for use with AWS-compatible services.
region
optional

string,​null

The [AWS region](https://docs.aws.amazon.com/general/latest/gr/rande.html#regional-endpoints) of the target service.
 acknowledgements
optional

object

Controls how acknowledgements are handled for this sink.

See [End-to-end Acknowledgements](https://vector.dev/docs/about/under-the-hood/architecture/end-to-end-acknowledgements/) for more information on how event acknowledgement is handled.
enabled
optional

boolean,​null

Whether or not end-to-end acknowledgements are enabled.

When enabled for a sink, any source connected to that sink, where the source supports end-to-end acknowledgements as well, waits for events to be acknowledged by the sink before acknowledging them at the source.

Enabling or disabling acknowledgements at the sink level takes precedence over any global [`acknowledgements`](https://vector.dev/docs/reference/configuration/global-options/#acknowledgements) configuration.
default: null
assume_role
optional

string,​null

The ARN of an [IAM role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) to assume at startup.

**DEPRECATED**
auth
optional



Configuration of the authentication strategy for interacting with AWS services.
 encoding
required



Configures how events are encoded into raw bytes.
 Avro
optional

object

Encodes an event as an [Apache Avro](https://avro.apache.org/) message.
 avro
required

object

Apache Avro-specific encoder options.
schema
required

string

The Avro schema.
codec
required

avro

Encodes an event as an [Apache Avro](https://avro.apache.org/) message.
 Csv
optional



Encodes an event as a CSV message.

This codec must be configured with fields to encode.
 csv
required

object

The CSV Serializer Options.
capacity
optional

integer

Set the capacity (in bytes) of the internal buffer used in the CSV writer. This defaults to a reasonable setting.
delimiter
optional

integer

The field delimiter to use when writing CSV.
double_quote
optional

boolean

Enable double quote escapes.

This is enabled by default, but it may be disabled. When disabled, quotes in field data are escaped instead of doubled.
escape
optional

integer

The escape character to use when writing CSV.

In some variants of CSV, quotes are escaped using a special escape character like \ (instead of escaping quotes by doubling them).

To use this, `double_quote` must also be disabled; otherwise, this option is ignored.
fields
required

[string]

Configures the fields that will be encoded, as well as the order in which they appear in the output.

If a field is not present in the event, the output will be an empty string.

Values of type `Array`, `Object`, and `Regex` are not supported and the output will be an empty string.
quote
optional

integer

The quote character to use when writing CSV.
 quote_style
optional

 <oneOf>

The quoting style to use when writing CSV data.
always
optional

always

This puts quotes around every field. Always.
necessary
optional

necessary

This puts quotes around fields only when necessary. They are necessary when fields contain a quote, delimiter or record terminator. Quotes are also necessary when writing an empty record (which is indistinguishable from a record with one empty field).
non_numeric
optional

non_numeric

This puts quotes around all fields that are non-numeric. Namely, when writing a field that does not parse as a valid float or integer, then quotes will be used even if they aren't strictly necessary.
never
optional

never

This never writes quotes, even if it would produce invalid CSV data.
codec
required

csv

Encodes an event as a CSV message.

This codec must be configured with fields to encode.
 Gelf
optional

object

Encodes an event as a [GELF](https://docs.graylog.org/docs/gelf) message.
codec
required

gelf

Encodes an event as a [GELF](https://docs.graylog.org/docs/gelf) message.
 Json
optional



Encodes an event as [JSON](https://www.json.org/).
 metric_tag_values
optional

 <oneOf>

Controls how metric tag values are encoded.

When set to `single`, only the last non-bare value of tags are displayed with the metric. When set to `full`, all metric tags are exposed as separate assignments.
single
optional

single

Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
full
optional

full

All tags are exposed as arrays of either string or null values.
codec
required

json

Encodes an event as [JSON](https://www.json.org/).
 Logfmt
optional

object

Encodes an event as a [logfmt](https://brandur.org/logfmt) message.
codec
required

logfmt

Encodes an event as a [logfmt](https://brandur.org/logfmt) message.
 Native
optional

object

Encodes an event in the [native Protocol Buffers format](https://github.com/vectordotdev/vector/blob/master/lib/vector-core/proto/event.proto).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
codec
required

native

Encodes an event in the [native Protocol Buffers format](https://github.com/vectordotdev/vector/blob/master/lib/vector-core/proto/event.proto).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
 NativeJson
optional

object

Encodes an event in the [native JSON format](https://github.com/vectordotdev/vector/blob/master/lib/codecs/tests/data/native_encoding/schema.cue).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
codec
required

native_json

Encodes an event in the [native JSON format](https://github.com/vectordotdev/vector/blob/master/lib/codecs/tests/data/native_encoding/schema.cue).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
 RawMessage
optional

object

No encoding.

This encoding uses the `message` field of a log event.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
codec
required

raw_message

No encoding.

This encoding uses the `message` field of a log event.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
 Text
optional



Plain text encoding.

This encoding uses the `message` field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
 metric_tag_values
optional

 <oneOf>

Controls how metric tag values are encoded.

When set to `single`, only the last non-bare value of tags are displayed with the metric. When set to `full`, all metric tags are exposed as separate assignments.
single
optional

single

Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
full
optional

full

All tags are exposed as arrays of either string or null values.
codec
required

text

Plain text encoding.

This encoding uses the `message` field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
except_fields
optional

array,​null

List of fields that are excluded from the encoded event.
only_fields
optional

array,​null

List of fields that are included in the encoded event.
 timestamp_format
optional

 <oneOf>

Format used for timestamp fields.
 Option 1
optional

 <oneOf>

The format in which a timestamp should be represented.
unix
optional

unix

Represent the timestamp as a Unix timestamp.
rfc3339
optional

rfc3339

Represent the timestamp as an RFC 3339 timestamp.
message_deduplication_id
optional

string,​null

The message deduplication ID that allows AWS to identify duplicate messages.

This value is a template which should result in a unique string for each event. See the [AWS documentation](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/using-messagededuplicationid-property.html) for more about how AWS does message deduplication.
message_group_id
optional

string,​null

The tag that specifies that a message belongs to a specific message group.

Can be applied only to FIFO queues.
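For a FIFO queue, the two options above might be combined as follows; the template values are illustrative:

```yaml
message_group_id: "{{ service }}"          # illustrative template; FIFO queues only
message_deduplication_id: "{{ event_id }}" # illustrative template for deduplication
```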
 request
optional

object

Middleware settings for outbound requests.

Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
 adaptive_concurrency
optional

object

Configuration of adaptive concurrency parameters.

These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}decrease_ratio
optional

number

The fraction of the current value to set the new concurrency limit when decreasing the limit.

Valid values are greater than `0` and less than `1`. Smaller values cause the algorithm to scale back rapidly when latency increases.

Note that the new limit is rounded down after applying this ratio.
default: 0.9
ewma_alpha
optional

number

The weighting of new measurements compared to older measurements.

Valid values are greater than `0` and less than `1`.

ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4
initial_concurrency
optional

integer

The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).

It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the `adaptive_concurrency_limit` metric.
default: 1
rtt_deviation_scale
optional

number

Scale of RTT deviations which are not considered anomalous.

Valid values are greater than or equal to `0`, and we expect reasonable values to range from `1.0` to `3.0`.

When calculating the past RTT average, we also compute a secondary "deviation" value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
 concurrency
optional

 <oneOf>

Configuration for outbound request concurrency.
none
optional

none

A fixed concurrency of 1.

Only one request can be outstanding at any given time.
adaptive
optional

adaptive

Concurrency will be managed by Vector's [Adaptive Request Concurrency](https://vector.dev/docs/about/under-the-hood/networking/arc/) feature.
Fixed
optional

integer

A fixed amount of concurrency will be allowed.
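The request middleware and concurrency settings above can be sketched as follows; the nested values shown are the documented defaults:

```yaml
request:
  concurrency: adaptive  # or: none, or a fixed integer
  adaptive_concurrency:
    decrease_ratio: 0.9
    ewma_alpha: 0.4
    initial_concurrency: 1
    rtt_deviation_scale: 2.5
```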
rate_limit_duration_secs
optional

integer,​null

The time window used for the `rate_limit_num` option.
default: 1
rate_limit_num
optional

integer,​null

The maximum number of requests allowed within the `rate_limit_duration_secs` time window.
default: 9223372036854776000
retry_attempts
optional

integer,​null

The maximum number of retries to make for failed requests.

The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000
retry_initial_backoff_secs
optional

integer,​null

The amount of time to wait before attempting the first retry for a failed request.

After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1
retry_max_duration_secs
optional

integer,​null

The maximum amount of time to wait between retries.
default: 3600
timeout_secs
optional

integer,​null

The time a request can take before being aborted.

Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60
 tls
optional

 <oneOf>

TLS configuration.
 Option 1
optional

object

TLS configuration.
alpn_protocols
optional

array,​null

Sets the list of supported ALPN protocols.

Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order that they are defined.
 ca_file
optional

 <oneOf>

Absolute path to an additional CA certificate file.

The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
 crt_file
optional

 <oneOf>

Absolute path to a certificate file used to identify this server.

The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format.

If this is set, and is not a PKCS#12 archive, `key_file` must also be set.
Option 1
optional

string

A file path.
 key_file
optional

 <oneOf>

Absolute path to a private key file used to identify this server.

The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
key_pass
optional

string,​null

Passphrase used to unlock the encrypted key file.

This has no effect unless `key_file` is set.
verify_certificate
optional

boolean,​null

Enables certificate verification.

If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.

Relevant for both incoming and outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the validity of certificates.
verify_hostname
optional

boolean,​null

Enables hostname verification.

If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.

Only relevant for outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the remote hostname.
{% /tab %}

{% tab title="sqssink-request-example" %}

```yaml
queue_url: string
type: aws_sqs
```

{% /tab %}

### Axiom{% #axiom %}

Configuration for the `axiom` sink.
OptionsSchema
{% tab title="axiom-request-model" %}
FieldrequiredTypeDescription acknowledgements
optional

object

Controls how acknowledgements are handled for this sink.

See [End-to-end Acknowledgements](https://vector.dev/docs/about/under-the-hood/architecture/end-to-end-acknowledgements/) for more information on how event acknowledgement is handled.
enabled
optional

boolean,​null

Whether or not end-to-end acknowledgements are enabled.

When enabled for a sink, any source connected to that sink, where the source supports end-to-end acknowledgements as well, waits for events to be acknowledged by the sink before acknowledging them at the source.

Enabling or disabling acknowledgements at the sink level takes precedence over any global [`acknowledgements`](https://vector.dev/docs/reference/configuration/global-options/#acknowledgements) configuration.
default: null
 compression
optional

 <oneOf>

Compression configuration.

All compression algorithms use the default compression level unless otherwise specified.
 Option 1
optional

 <oneOf>

Compression algorithm.
none
optional

none

No compression.
gzip
optional

gzip

[Gzip](https://www.gzip.org/) compression.
zlib
optional

zlib

[Zlib](https://zlib.net/) compression.
zstd
optional

zstd

[Zstandard](https://facebook.github.io/zstd/) compression.
 Option 2
optional

object

Compression algorithm and compression level.
 algorithm
required

 <oneOf>

Compression algorithm.
none
optional

none

No compression.
gzip
optional

gzip

[Gzip](https://www.gzip.org/) compression.
zlib
optional

zlib

[Zlib](https://zlib.net/) compression.
zstd
optional

zstd

[Zstandard](https://facebook.github.io/zstd/) compression.
level
optional

enum

Compression level. Allowed enum values: `none,fast,best,default,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21`
dataset
required

string

The Axiom dataset to write to.
org_id
optional

string,​null

The Axiom organization ID.

Only required when using personal tokens.
 request
optional



Outbound HTTP request settings.
headers
optional

object

Additional HTTP headers to add to every HTTP request.
 adaptive_concurrency
optional

object

Configuration of adaptive concurrency parameters.

These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or unstable performance and sink behavior. Proceed with caution.
decrease_ratio
optional

number

The fraction of the current value to set the new concurrency limit when decreasing the limit.

Valid values are greater than `0` and less than `1`. Smaller values cause the algorithm to scale back rapidly when latency increases.

Note that the new limit is rounded down after applying this ratio.
default: 0.9 ewma_alpha
optional

number

The weighting of new measurements compared to older measurements.

Valid values are greater than `0` and less than `1`.

ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4 initial_concurrency
optional

integer

The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).

It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the `adaptive_concurrency_limit` metric.
default: 1 rtt_deviation_scale
optional

number

Scale of RTT deviations which are not considered anomalous.

Valid values are greater than or equal to `0`, and we expect reasonable values to range from `1.0` to `3.0`.

When calculating the past RTT average, we also compute a secondary "deviation" value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5 concurrency
optional

 <oneOf>

Configuration for outbound request concurrency.
none
optional

none

A fixed concurrency of 1.

Only one request can be outstanding at any given time.
adaptive
optional

adaptive

Concurrency will be managed by Vector's [Adaptive Request Concurrency](https://vector.dev/docs/about/under-the-hood/networking/arc/) feature.
Fixed
optional

integer

A fixed amount of concurrency will be allowed.
rate_limit_duration_secs
optional

integer,​null

The time window used for the `rate_limit_num` option.
rate_limit_num
optional

integer,​null

The maximum number of requests allowed within the `rate_limit_duration_secs` time window.
retry_attempts
optional

integer,​null

The maximum number of retries to make for failed requests.

The default, for all intents and purposes, represents an infinite number of retries.
retry_initial_backoff_secs
optional

integer,​null

The amount of time to wait before attempting the first retry for a failed request.

After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
retry_max_duration_secs
optional

integer,​null

The maximum amount of time to wait between retries.
timeout_secs
optional

integer,​null

The time a request can take before being aborted.

Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
 tls
optional

 <oneOf>

TLS configuration.
 Option 1
optional

object

TLS configuration.
alpn_protocols
optional

array,​null

Sets the list of supported ALPN protocols.

Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order that they are defined.
 ca_file
optional

 <oneOf>

Absolute path to an additional CA certificate file.

The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
 crt_file
optional

 <oneOf>

Absolute path to a certificate file used to identify this server.

The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format.

If this is set, and is not a PKCS#12 archive, `key_file` must also be set.
Option 1
optional

string

A file path.
 key_file
optional

 <oneOf>

Absolute path to a private key file used to identify this server.

The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
key_pass
optional

string,​null

Passphrase used to unlock the encrypted key file.

This has no effect unless `key_file` is set.
verify_certificate
optional

boolean,​null

Enables certificate verification.

If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.

Relevant for both incoming and outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the validity of certificates.
verify_hostname
optional

boolean,​null

Enables hostname verification.

If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.

Only relevant for outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the remote hostname.
token
required

string

The Axiom API token.
url
optional

uri

URI of the Axiom endpoint to send data to.

Only required if not using Axiom Cloud.
{% /tab %}

{% tab title="axiom-request-example" %}

```yaml
acknowledgements:
  enabled: null
compression: none
dataset: string
org_id: string
request:
  adaptive_concurrency:
    decrease_ratio: 0.9
    ewma_alpha: 0.4
    initial_concurrency: 1
    rtt_deviation_scale: 2.5
  headers: {}
  rate_limit_duration_secs: 1
  rate_limit_num: 9223372036854776000
  retry_attempts: 9223372036854776000
  retry_initial_backoff_secs: 1
  retry_max_duration_secs: 3600
  timeout_secs: 60
tls: ''
token: string
url: string
type: axiom
```

{% /tab %}
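As a concrete sketch, a minimal `axiom` sink configuration could look like the following. The component IDs and dataset name are placeholders, and the token is read from an environment variable rather than stored in the file:

```yaml
sinks:
  axiom_out:
    type: axiom
    inputs:
      - my_source            # ID of an upstream source or transform (placeholder)
    dataset: vector-logs     # target Axiom dataset (placeholder)
    token: "${AXIOM_TOKEN}"  # keep secrets out of the config file
    compression:             # Option 2 form: algorithm plus explicit level
      algorithm: zstd
      level: default
```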

### Azure Blob{% #azureblobsink %}

Configuration for the `azure_blob` sink.
OptionsSchema
{% tab title="azureblobsink-request-model" %}
FieldrequiredTypeDescription acknowledgements
optional

object

Controls how acknowledgements are handled for this sink.

See [End-to-end Acknowledgements](https://vector.dev/docs/about/under-the-hood/architecture/end-to-end-acknowledgements/) for more information on how event acknowledgement is handled.
enabled
optional

boolean,​null

Whether or not end-to-end acknowledgements are enabled.

When enabled for a sink, any source connected to that sink, where the source supports end-to-end acknowledgements as well, waits for events to be acknowledged by the sink before acknowledging them at the source.

Enabling or disabling acknowledgements at the sink level takes precedence over any global [`acknowledgements`](https://vector.dev/docs/reference/configuration/global-options/#acknowledgements) configuration.
default: null batch
optional

object

Event batching behavior.
max_bytes
optional

integer,​null

The maximum size of a batch that is processed by a sink.

This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null max_events
optional

integer,​null

The maximum size of a batch before it is flushed.
default: null timeout_secs
optional

number,​null

The maximum age of a batch before it is flushed.
default: null blob_append_uuid
optional

boolean,​null

Whether or not to append a UUID v4 token to the end of the blob key.

The UUID is appended to the timestamp portion of the object key, such that if the blob key generated is `date=2022-07-18/1658176486`, setting this field to `true` results in a blob key that looks like `date=2022-07-18/1658176486-30f6652c-71da-4f9f-800d-a1189c47c547`.

This ensures there are no name collisions, and can be useful in high-volume workloads where blob keys must be unique.
blob_prefix
optional

string

A prefix to apply to all blob keys.

Prefixes are useful for partitioning objects, such as by creating a blob key that stores blobs under a particular directory. If using a prefix for this purpose, it must end in `/` to act as a directory path. A trailing `/` is **not** automatically added.
blob_time_format
optional

string,​null

The timestamp format for the time component of the blob key.

By default, blob keys are appended with a timestamp that reflects when the blob is sent to Azure Blob Storage, such that the resulting blob key is functionally equivalent to joining the blob prefix with the formatted timestamp, such as `date=2022-07-18/1658176486`.

This would represent a `blob_prefix` set to `date=%F/` and the timestamp of Mon Jul 18 2022 20:34:44 GMT+0000, with the `filename_time_format` being set to `%s`, which renders timestamps in seconds since the Unix epoch.

Supports the common [`strftime`](https://docs.rs/chrono/latest/chrono/format/strftime/index.html#specifiers) specifiers found in most languages.

When set to an empty string, no timestamp is appended to the blob prefix.
 compression
optional

 <oneOf>

Compression configuration.

All compression algorithms use the default compression level unless otherwise specified.
 Option 1
optional

 <oneOf>

Compression algorithm.
none
optional

none

No compression.
gzip
optional

gzip

[Gzip](https://www.gzip.org/) compression.
zlib
optional

zlib

[Zlib](https://zlib.net/) compression.
zstd
optional

zstd

[Zstandard](https://facebook.github.io/zstd/) compression.
 Option 2
optional

object

Compression algorithm and compression level.
 algorithm
required

 <oneOf>

Compression algorithm.
none
optional

none

No compression.
gzip
optional

gzip

[Gzip](https://www.gzip.org/) compression.
zlib
optional

zlib

[Zlib](https://zlib.net/) compression.
zstd
optional

zstd

[Zstandard](https://facebook.github.io/zstd/) compression.
level
optional

enum

Compression level. Allowed enum values: `none,fast,best,default,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21`
 connection_string
optional

 <oneOf>

The Azure Blob Storage Account connection string.

Authentication with access key is the only supported authentication method.

Either `storage_account`, or this field, must be specified.
Option 1
optional

string

Wrapper for sensitive strings containing credentials.
container_name
required

string

The Azure Blob Storage Account container name.
endpoint
optional

string,​null

The Azure Blob Storage Endpoint URL.

This is used to override the default blob storage endpoint URL in cases where you are using credentials read from the environment/managed identities or access tokens without using an explicit `connection_string` (which already supports overriding the blob endpoint URL).

This may only be used with `storage_account` and is ignored when used with `connection_string`.
 request
optional

object

Middleware settings for outbound requests.

Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
 adaptive_concurrency
optional

object

Configuration of adaptive concurrency parameters.

These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5} decrease_ratio
optional

number

The fraction of the current value to set the new concurrency limit when decreasing the limit.

Valid values are greater than `0` and less than `1`. Smaller values cause the algorithm to scale back rapidly when latency increases.

Note that the new limit is rounded down after applying this ratio.
default: 0.9 ewma_alpha
optional

number

The weighting of new measurements compared to older measurements.

Valid values are greater than `0` and less than `1`.

ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4 initial_concurrency
optional

integer

The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).

It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the `adaptive_concurrency_limit` metric.
default: 1 rtt_deviation_scale
optional

number

Scale of RTT deviations which are not considered anomalous.

Valid values are greater than or equal to `0`, and we expect reasonable values to range from `1.0` to `3.0`.

When calculating the past RTT average, we also compute a secondary "deviation" value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5 concurrency
optional

 <oneOf>

Configuration for outbound request concurrency.
none
optional

none

A fixed concurrency of 1.

Only one request can be outstanding at any given time.
adaptive
optional

adaptive

Concurrency will be managed by Vector's [Adaptive Request Concurrency](https://vector.dev/docs/about/under-the-hood/networking/arc/) feature.
Fixed
optional

integer

A fixed amount of concurrency will be allowed.
rate_limit_duration_secs
optional

integer,​null

The time window used for the `rate_limit_num` option.
default: 1 rate_limit_num
optional

integer,​null

The maximum number of requests allowed within the `rate_limit_duration_secs` time window.
default: 9223372036854776000 retry_attempts
optional

integer,​null

The maximum number of retries to make for failed requests.

The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000 retry_initial_backoff_secs
optional

integer,​null

The amount of time to wait before attempting the first retry for a failed request.

After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1 retry_max_duration_secs
optional

integer,​null

The maximum amount of time to wait between retries.
default: 3600 timeout_secs
optional

integer,​null

The time a request can take before being aborted.

Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60 storage_account
optional

string,​null

The Azure Blob Storage Account name.

Attempts to load credentials for the account in the following ways, in order:

- read from environment variables ([more information](https://docs.rs/azure_identity/latest/azure_identity/struct.EnvironmentCredential.html))
- looks for a [Managed Identity](https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview)
- uses the `az` CLI tool to get an access token ([more information](https://docs.microsoft.com/en-us/cli/azure/account?view=azure-cli-latest#az-account-get-access-token))

Either `connection_string`, or this field, must be specified.
 encoding
required



Configures how events are encoded into raw bytes.
 Avro
optional

object

Encodes an event as an [Apache Avro](https://avro.apache.org/) message.
 avro
required

object

Apache Avro-specific encoder options.
schema
required

string

The Avro schema.
codec
required

avro

Encodes an event as an [Apache Avro](https://avro.apache.org/) message.
 Csv
optional



Encodes an event as a CSV message.

This codec must be configured with fields to encode.
 csv
required

object

The CSV Serializer Options.
capacity
optional

integer

Set the capacity (in bytes) of the internal buffer used in the CSV writer. This defaults to a reasonable setting.
delimiter
optional

integer

The field delimiter to use when writing CSV.
double_quote
optional

boolean

Enable double quote escapes.

This is enabled by default, but it may be disabled. When disabled, quotes in field data are escaped instead of doubled.
escape
optional

integer

The escape character to use when writing CSV.

In some variants of CSV, quotes are escaped using a special escape character like \ (instead of escaping quotes by doubling them).

To use this, `double_quote` must be disabled as well; otherwise, this option is ignored.
fields
required

[string]

Configures the fields that will be encoded, as well as the order in which they appear in the output.

If a field is not present in the event, the output will be an empty string.

Values of type `Array`, `Object`, and `Regex` are not supported and the output will be an empty string.
quote
optional

integer

The quote character to use when writing CSV.
 quote_style
optional

 <oneOf>

The quoting style to use when writing CSV data.
always
optional

always

This puts quotes around every field. Always.
necessary
optional

necessary

This puts quotes around fields only when necessary. They are necessary when fields contain a quote, delimiter or record terminator. Quotes are also necessary when writing an empty record (which is indistinguishable from a record with one empty field).
non_numeric
optional

non_numeric

This puts quotes around all fields that are non-numeric. Namely, when writing a field that does not parse as a valid float or integer, quotes are used even if they aren't strictly necessary.
never
optional

never

This never writes quotes, even if it would produce invalid CSV data.
codec
required

csv

Encodes an event as a CSV message.

This codec must be configured with fields to encode.
 Gelf
optional

object

Encodes an event as a [GELF](https://docs.graylog.org/docs/gelf) message.
codec
required

gelf

Encodes an event as a [GELF](https://docs.graylog.org/docs/gelf) message.
 Json
optional



Encodes an event as [JSON](https://www.json.org/).
 metric_tag_values
optional

 <oneOf>

Controls how metric tag values are encoded.

When set to `single`, only the last non-bare value of tags is displayed with the metric. When set to `full`, all metric tags are exposed as separate assignments.
single
optional

single

Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
full
optional

full

All tags are exposed as arrays of either string or null values.
codec
required

json

Encodes an event as [JSON](https://www.json.org/).
 Logfmt
optional

object

Encodes an event as a [logfmt](https://brandur.org/logfmt) message.
codec
required

logfmt

Encodes an event as a [logfmt](https://brandur.org/logfmt) message.
 Native
optional

object

Encodes an event in the [native Protocol Buffers format](https://github.com/vectordotdev/vector/blob/master/lib/vector-core/proto/event.proto).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
codec
required

native

Encodes an event in the [native Protocol Buffers format](https://github.com/vectordotdev/vector/blob/master/lib/vector-core/proto/event.proto).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
 NativeJson
optional

object

Encodes an event in the [native JSON format](https://github.com/vectordotdev/vector/blob/master/lib/codecs/tests/data/native_encoding/schema.cue).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
codec
required

native_json

Encodes an event in the [native JSON format](https://github.com/vectordotdev/vector/blob/master/lib/codecs/tests/data/native_encoding/schema.cue).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
 RawMessage
optional

object

No encoding.

This encoding uses the `message` field of a log event.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
codec
required

raw_message

No encoding.

This encoding uses the `message` field of a log event.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
 Text
optional



Plain text encoding.

This encoding uses the `message` field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
 metric_tag_values
optional

 <oneOf>

Controls how metric tag values are encoded.

When set to `single`, only the last non-bare value of tags is displayed with the metric. When set to `full`, all metric tags are exposed as separate assignments.
single
optional

single

Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
full
optional

full

All tags are exposed as arrays of either string or null values.
codec
required

text

Plain text encoding.

This encoding uses the `message` field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
except_fields
optional

array,​null

List of fields that are excluded from the encoded event.
only_fields
optional

array,​null

List of fields that are included in the encoded event.
 timestamp_format
optional

 <oneOf>

Format used for timestamp fields.
 Option 1
optional

 <oneOf>

The format in which a timestamp should be represented.
unix
optional

unix

Represent the timestamp as a Unix timestamp.
rfc3339
optional

rfc3339

Represent the timestamp as an RFC 3339 timestamp.
 framing
optional

 <oneOf>

Framing configuration.
 Option 1
optional

 <oneOf>

Framing configuration.
 Bytes
optional

object

Event data is not delimited at all.
method
required

bytes

Event data is not delimited at all.
 CharacterDelimited
optional



Event data is delimited by a single ASCII (7-bit) character.
 character_delimited
required

object

Options for the character delimited encoder.
delimiter
required

integer

The ASCII (7-bit) character that delimits byte sequences.
method
required

character_delimited

Event data is delimited by a single ASCII (7-bit) character.
 LengthDelimited
optional

object

Event data is prefixed with its length in bytes.

The prefix is a 32-bit unsigned integer, little endian.
method
required

length_delimited

Event data is prefixed with its length in bytes.

The prefix is a 32-bit unsigned integer, little endian.
 NewlineDelimited
optional

object

Event data is delimited by a newline (LF) character.
method
required

newline_delimited

Event data is delimited by a newline (LF) character.
{% /tab %}

{% tab title="azureblobsink-request-example" %}

```yaml
acknowledgements:
  enabled: null
batch:
  max_bytes: null
  max_events: null
  timeout_secs: null
blob_append_uuid: boolean
blob_prefix: blob/%F/
blob_time_format: string
compression: gzip
connection_string: ''
container_name: string
endpoint: string
request:
  adaptive_concurrency:
    decrease_ratio: 0.9
    ewma_alpha: 0.4
    initial_concurrency: 1
    rtt_deviation_scale: 2.5
  rate_limit_duration_secs: 1
  rate_limit_num: 9223372036854776000
  retry_attempts: 9223372036854776000
  retry_initial_backoff_secs: 1
  retry_max_duration_secs: 3600
  timeout_secs: 60
storage_account: string
type: azure_blob
```

{% /tab %}
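Putting the required fields together, a minimal `azure_blob` sink sketch might look like this. The component IDs, container name, and batch values are illustrative placeholders, and the connection string is read from an environment variable:

```yaml
sinks:
  blob_out:
    type: azure_blob
    inputs:
      - my_source            # ID of an upstream source or transform (placeholder)
    connection_string: "${AZURE_STORAGE_CONNECTION_STRING}"
    container_name: logs
    blob_prefix: date=%F/    # trailing slash makes this act as a directory path
    compression: gzip
    encoding:
      codec: json
    batch:
      max_bytes: 10000000    # flush after ~10 MB of uncompressed events
      timeout_secs: 300      # or after five minutes, whichever comes first
```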

### Azure Monitor Logs{% #azuremonitorlogs %}

Configuration for the `azure_monitor_logs` sink.
OptionsSchema
{% tab title="azuremonitorlogs-request-model" %}
FieldrequiredTypeDescription acknowledgements
optional

object

Controls how acknowledgements are handled for this sink.

See [End-to-end Acknowledgements](https://vector.dev/docs/about/under-the-hood/architecture/end-to-end-acknowledgements/) for more information on how event acknowledgement is handled.
enabled
optional

boolean,​null

Whether or not end-to-end acknowledgements are enabled.

When enabled for a sink, any source connected to that sink, where the source supports end-to-end acknowledgements as well, waits for events to be acknowledged by the sink before acknowledging them at the source.

Enabling or disabling acknowledgements at the sink level takes precedence over any global [`acknowledgements`](https://vector.dev/docs/reference/configuration/global-options/#acknowledgements) configuration.
default: null azure_resource_id
optional

string,​null

The [Resource ID](https://docs.microsoft.com/en-us/azure/azure-monitor/platform/data-collector-api#request-headers) of the Azure resource the data should be associated with.
 batch
optional

object

Event batching behavior.
max_bytes
optional

integer,​null

The maximum size of a batch that is processed by a sink.

This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null max_events
optional

integer,​null

The maximum size of a batch before it is flushed.
default: null timeout_secs
optional

number,​null

The maximum age of a batch before it is flushed.
default: null customer_id
required

string

The [unique identifier](https://docs.microsoft.com/en-us/azure/azure-monitor/platform/data-collector-api#request-uri-parameters) for the Log Analytics workspace.
 encoding
optional

object

Transformations to prepare an event for serialization.
except_fields
optional

array,​null

List of fields that are excluded from the encoded event.
only_fields
optional

array,​null

List of fields that are included in the encoded event.
 timestamp_format
optional

 <oneOf>

Format used for timestamp fields.
 Option 1
optional

 <oneOf>

The format in which a timestamp should be represented.
unix
optional

unix

Represent the timestamp as a Unix timestamp.
rfc3339
optional

rfc3339

Represent the timestamp as an RFC 3339 timestamp.
host
optional

string

[Alternative host](https://docs.azure.cn/en-us/articles/guidance/developerdifferences#check-endpoints-in-azure) for dedicated Azure regions.
log_type
required

string

The [record type](https://docs.microsoft.com/en-us/azure/azure-monitor/platform/data-collector-api#request-headers) of the data that is being submitted.

Can only contain letters, numbers, and underscores (_), and may not exceed 100 characters.
 request
optional

object

Middleware settings for outbound requests.

Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
 adaptive_concurrency
optional

object

Configuration of adaptive concurrency parameters.

These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5} decrease_ratio
optional

number

The fraction of the current value to set the new concurrency limit when decreasing the limit.

Valid values are greater than `0` and less than `1`. Smaller values cause the algorithm to scale back rapidly when latency increases.

Note that the new limit is rounded down after applying this ratio.
default: 0.9 ewma_alpha
optional

number

The weighting of new measurements compared to older measurements.

Valid values are greater than `0` and less than `1`.

ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4 initial_concurrency
optional

integer

The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).

It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the `adaptive_concurrency_limit` metric.
default: 1 rtt_deviation_scale
optional

number

Scale of RTT deviations which are not considered anomalous.

Valid values are greater than or equal to `0`, and we expect reasonable values to range from `1.0` to `3.0`.

When calculating the past RTT average, we also compute a secondary "deviation" value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5 concurrency
optional

 <oneOf>

Configuration for outbound request concurrency.
none
optional

none

A fixed concurrency of 1.

Only one request can be outstanding at any given time.
adaptive
optional

adaptive

Concurrency will be managed by Vector's [Adaptive Request Concurrency](https://vector.dev/docs/about/under-the-hood/networking/arc/) feature.
Fixed
optional

integer

A fixed amount of concurrency will be allowed.
rate_limit_duration_secs
optional

integer,​null

The time window used for the `rate_limit_num` option.
default: 1 rate_limit_num
optional

integer,​null

The maximum number of requests allowed within the `rate_limit_duration_secs` time window.
default: 9223372036854776000 retry_attempts
optional

integer,​null

The maximum number of retries to make for failed requests.

The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000 retry_initial_backoff_secs
optional

integer,​null

The amount of time to wait before attempting the first retry for a failed request.

After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1 retry_max_duration_secs
optional

integer,​null

The maximum amount of time to wait between retries.
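Taken together, the retry options above yield a Fibonacci backoff schedule capped by `retry_max_duration_secs`. A minimal sketch (the values here are illustrative, not defaults):

```yaml
request:
  retry_attempts: 5              # stop retrying a failed request after five retries
  retry_initial_backoff_secs: 1  # wait 1 second before the first retry
  retry_max_duration_secs: 30    # never wait longer than 30 seconds between retries
  # Successive waits follow the Fibonacci sequence (1, 1, 2, 3, 5, 8, ...),
  # with each wait capped at retry_max_duration_secs.
```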
default: 3600

timeout_secs
optional

integer,​null

The time a request can take before being aborted.

Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60

shared_key
required

string

The [primary or the secondary key](https://docs.microsoft.com/en-us/azure/azure-monitor/platform/data-collector-api#authorization) for the Log Analytics workspace.
 time_generated_key
optional

 <oneOf>

Use this option to customize the log field used as [`TimeGenerated`](https://learn.microsoft.com/en-us/azure/azure-monitor/logs/log-standard-columns#timegenerated) in Azure.

The setting of `log_schema.timestamp_key`, usually `timestamp`, is used here by default. This field should be used in rare cases where `TimeGenerated` should point to a specific log field. For example, use this field to set the log field `source_timestamp` as holding the value that should be used as `TimeGenerated` on the Azure side.
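For example, to have Azure's `TimeGenerated` come from a log field named `source_timestamp` (an illustrative field name) rather than the default `timestamp`:

```yaml
type: azure_monitor_logs
# ...other required options (customer_id, shared_key, log_type)...
time_generated_key: source_timestamp  # TimeGenerated is taken from this log field
```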
Option 1
optional

string

An optional path that deserializes an empty string to `None`.
 tls
optional

 <oneOf>

TLS configuration.
 Option 1
optional

object

TLS configuration.
alpn_protocols
optional

array,​null

Sets the list of supported ALPN protocols.

Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order that they are defined.
 ca_file
optional

 <oneOf>

Absolute path to an additional CA certificate file.

The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
 crt_file
optional

 <oneOf>

Absolute path to a certificate file used to identify this server.

The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format.

If this is set, and is not a PKCS#12 archive, `key_file` must also be set.
Option 1
optional

string

A file path.
 key_file
optional

 <oneOf>

Absolute path to a private key file used to identify this server.

The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
key_pass
optional

string,​null

Passphrase used to unlock the encrypted key file.

This has no effect unless `key_file` is set.
verify_certificate
optional

boolean,​null

Enables certificate verification.

If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.

Relevant for both incoming and outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the validity of certificates.
verify_hostname
optional

boolean,​null

Enables hostname verification.

If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.

Only relevant for outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the remote hostname.
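The TLS options above combine as follows; this is a sketch, and the file paths and passphrase are placeholders:

```yaml
tls:
  ca_file: /etc/vector/ca.pem       # additional CA certificate (DER or PEM)
  crt_file: /etc/vector/client.pem  # identifies this client; not a PKCS#12 archive,
  key_file: /etc/vector/client.key  #   so key_file must also be set
  key_pass: example-passphrase      # only takes effect because key_file is set
  verify_certificate: true          # reject untrusted or expired certificates
  verify_hostname: true             # require a hostname match in the peer certificate
```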
{% /tab %}

{% tab title="azuremonitorlogs-request-example" %}

```yaml
acknowledgements:
  enabled: null
azure_resource_id: string
batch:
  max_bytes: null
  max_events: null
  timeout_secs: null
customer_id: string
encoding: {}
host: ods.opinsights.azure.com
log_type: string
request:
  adaptive_concurrency:
    decrease_ratio: 0.9
    ewma_alpha: 0.4
    initial_concurrency: 1
    rtt_deviation_scale: 2.5
  rate_limit_duration_secs: 1
  rate_limit_num: 9223372036854776000
  retry_attempts: 9223372036854776000
  retry_initial_backoff_secs: 1
  retry_max_duration_secs: 3600
  timeout_secs: 60
shared_key: string
time_generated_key: ''
tls: ''
type: azure_monitor_logs
```

{% /tab %}

### Blackhole{% #blackhole %}

Configuration for the `blackhole` sink.
{% tab title="blackhole-request-model" %}
acknowledgements
optional

object

Controls how acknowledgements are handled for this sink.

See [End-to-end Acknowledgements](https://vector.dev/docs/about/under-the-hood/architecture/end-to-end-acknowledgements/) for more information on how event acknowledgement is handled.
enabled
optional

boolean,​null

Whether or not end-to-end acknowledgements are enabled.

When enabled for a sink, any source connected to that sink, where the source supports end-to-end acknowledgements as well, waits for events to be acknowledged by the sink before acknowledging them at the source.

Enabling or disabling acknowledgements at the sink level takes precedence over any global [`acknowledgements`](https://vector.dev/docs/reference/configuration/global-options/#acknowledgements) configuration.
default: null

print_interval_secs
optional

integer

The interval between reporting a summary of activity.

Set to `0` to disable reporting.
rate
optional

integer,​null

The number of events, per second, that the sink is allowed to consume.

By default, there is no limit.
{% /tab %}

{% tab title="blackhole-request-example" %}

```yaml
acknowledgements:
  enabled: null
print_interval_secs: 1
rate: integer
type: blackhole
```

{% /tab %}

### ClickHouse{% #clickhouse %}

Configuration for the `clickhouse` sink.
{% tab title="clickhouse-request-model" %}
acknowledgements
optional

object

Controls how acknowledgements are handled for this sink.

See [End-to-end Acknowledgements](https://vector.dev/docs/about/under-the-hood/architecture/end-to-end-acknowledgements/) for more information on how event acknowledgement is handled.
enabled
optional

boolean,​null

Whether or not end-to-end acknowledgements are enabled.

When enabled for a sink, any source connected to that sink, where the source supports end-to-end acknowledgements as well, waits for events to be acknowledged by the sink before acknowledging them at the source.

Enabling or disabling acknowledgements at the sink level takes precedence over any global [`acknowledgements`](https://vector.dev/docs/reference/configuration/global-options/#acknowledgements) configuration.
default: null

 auth
optional

 <oneOf>

Configuration of the authentication strategy for HTTP requests.

HTTP authentication should be used with HTTPS only, as the authentication credentials are passed as an HTTP header without any additional encryption beyond what is provided by the transport itself.
 Option 1
optional

 <oneOf>

Configuration of the authentication strategy for HTTP requests.

HTTP authentication should be used with HTTPS only, as the authentication credentials are passed as an HTTP header without any additional encryption beyond what is provided by the transport itself.
 Basic
optional

object

Basic authentication.

The username and password are concatenated and encoded via [base64](https://en.wikipedia.org/wiki/Base64).
password
required

string

The basic authentication password.
strategy
required

basic

Basic authentication.

The username and password are concatenated and encoded via [base64](https://en.wikipedia.org/wiki/Base64).
user
required

string

The basic authentication username.
 Bearer
optional

object

Bearer authentication.

The bearer token value (OAuth2, JWT, etc.) is passed as-is.
strategy
required

bearer

Bearer authentication.

The bearer token value (OAuth2, JWT, etc.) is passed as-is.
token
required

string

The bearer authentication token.
 batch
optional

object

Event batching behavior.
max_bytes
optional

integer,​null

The maximum size of a batch that is processed by a sink.

This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null

max_events
optional

integer,​null

The maximum size of a batch before it is flushed.
default: null

timeout_secs
optional

number,​null

The maximum age of a batch before it is flushed.
default: null

 compression
optional

 <oneOf>

Compression configuration.

All compression algorithms use the default compression level unless otherwise specified.
 Option 1
optional

 <oneOf>

Compression algorithm.
none
optional

none

No compression.
gzip
optional

gzip

[Gzip](https://www.gzip.org/) compression.
zlib
optional

zlib

[Zlib](https://zlib.net/) compression.
zstd
optional

zstd

[Zstandard](https://facebook.github.io/zstd/) compression.
 Option 2
optional

object

Compression algorithm and compression level.
 algorithm
required

 <oneOf>

Compression algorithm.
none
optional

none

No compression.
gzip
optional

gzip

[Gzip](https://www.gzip.org/) compression.
zlib
optional

zlib

[Zlib](https://zlib.net/) compression.
zstd
optional

zstd

[Zstandard](https://facebook.github.io/zstd/) compression.
level
optional

enum

Compression level. Allowed enum values: `none,fast,best,default,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21`
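The two `compression` shapes above can be written as either a bare algorithm name (Option 1) or an object with an explicit level (Option 2):

```yaml
# Option 1: algorithm only; uses the default compression level
compression: gzip
```

```yaml
# Option 2: algorithm with an explicit level
compression:
  algorithm: zstd
  level: 3
```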
 database
optional

 <oneOf>

A templated field.

The database that contains the table that data is inserted into.
Option 1
optional

string

A templated field.

In many cases, components can be configured so that part of their functionality is customized on a per-event basis. For example, a sink that writes events to files can choose which file each event goes to by using an event field as part of the filename.

By using `Template`, users can specify either fixed strings or templated strings. Templated strings use a common syntax to refer to fields in an event that is used as the input data when rendering the template. An example of a fixed string is `my-file.log`. An example of a template string is `my-file-{{key}}.log`, where `{{key}}` is the key's value when the template is rendered into a string.
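As a sketch of the two forms, using an illustrative event field `tenant`:

```yaml
database: logs                   # fixed string
# or, rendered per event from the `tenant` field:
# database: "logs_{{ tenant }}"
```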
date_time_best_effort
optional

boolean

Sets `date_time_input_format` to `best_effort`, allowing ClickHouse to properly parse RFC3339/ISO 8601.
 encoding
optional

object

Transformations to prepare an event for serialization.
except_fields
optional

array,​null

List of fields that are excluded from the encoded event.
only_fields
optional

array,​null

List of fields that are included in the encoded event.
 timestamp_format
optional

 <oneOf>

Format used for timestamp fields.
 Option 1
optional

 <oneOf>

The format in which a timestamp should be represented.
unix
optional

unix

Represent the timestamp as a Unix timestamp.
rfc3339
optional

rfc3339

Represent the timestamp as an RFC 3339 timestamp.
endpoint
required

string

The URI component of a request.

The endpoint of the ClickHouse server.
 request
optional

object

Middleware settings for outbound requests.

Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
 adaptive_concurrency
optional

object

Configuration of adaptive concurrency parameters.

These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}

decrease_ratio
optional

number

The fraction of the current value to set the new concurrency limit when decreasing the limit.

Valid values are greater than `0` and less than `1`. Smaller values cause the algorithm to scale back rapidly when latency increases.

Note that the new limit is rounded down after applying this ratio.
default: 0.9

ewma_alpha
optional

number

The weighting of new measurements compared to older measurements.

Valid values are greater than `0` and less than `1`.

ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4

initial_concurrency
optional

integer

The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).

It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the `adaptive_concurrency_limit` metric.
default: 1

rtt_deviation_scale
optional

number

Scale of RTT deviations which are not considered anomalous.

Valid values are greater than or equal to `0`, and we expect reasonable values to range from `1.0` to `3.0`.

When calculating the past RTT average, we also compute a secondary "deviation" value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5

 concurrency
optional

 <oneOf>

Configuration for outbound request concurrency.
none
optional

none

A fixed concurrency of 1.

Only one request can be outstanding at any given time.
adaptive
optional

adaptive

Concurrency will be managed by Vector's [Adaptive Request Concurrency](https://vector.dev/docs/about/under-the-hood/networking/arc/) feature.
Fixed
optional

integer

A fixed amount of concurrency will be allowed.
rate_limit_duration_secs
optional

integer,​null

The time window used for the `rate_limit_num` option.
default: 1

rate_limit_num
optional

integer,​null

The maximum number of requests allowed within the `rate_limit_duration_secs` time window.
default: 9223372036854776000

retry_attempts
optional

integer,​null

The maximum number of retries to make for failed requests.

The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000

retry_initial_backoff_secs
optional

integer,​null

The amount of time to wait before attempting the first retry for a failed request.

After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1

retry_max_duration_secs
optional

integer,​null

The maximum amount of time to wait between retries.
default: 3600

timeout_secs
optional

integer,​null

The time a request can take before being aborted.

Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60

skip_unknown_fields
optional

boolean

Sets `input_format_skip_unknown_fields`, allowing ClickHouse to discard fields not present in the table schema.
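The two ClickHouse-specific toggles can be set together; for example:

```yaml
date_time_best_effort: true  # sets date_time_input_format=best_effort for RFC3339/ISO 8601 parsing
skip_unknown_fields: true    # sets input_format_skip_unknown_fields to drop fields absent from the table schema
```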
table
required

string

A templated field.

The table that data is inserted into.
 tls
optional

 <oneOf>

TLS configuration.
 Option 1
optional

object

TLS configuration.
alpn_protocols
optional

array,​null

Sets the list of supported ALPN protocols.

Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order that they are defined.
 ca_file
optional

 <oneOf>

Absolute path to an additional CA certificate file.

The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
 crt_file
optional

 <oneOf>

Absolute path to a certificate file used to identify this server.

The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format.

If this is set, and is not a PKCS#12 archive, `key_file` must also be set.
Option 1
optional

string

A file path.
 key_file
optional

 <oneOf>

Absolute path to a private key file used to identify this server.

The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
key_pass
optional

string,​null

Passphrase used to unlock the encrypted key file.

This has no effect unless `key_file` is set.
verify_certificate
optional

boolean,​null

Enables certificate verification.

If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.

Relevant for both incoming and outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the validity of certificates.
verify_hostname
optional

boolean,​null

Enables hostname verification.

If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.

Only relevant for outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the remote hostname.
{% /tab %}

{% tab title="clickhouse-request-example" %}

```yaml
acknowledgements:
  enabled: null
auth: ''
batch:
  max_bytes: null
  max_events: null
  timeout_secs: null
compression: gzip
database: ''
date_time_best_effort: boolean
encoding: {}
endpoint: string
request:
  adaptive_concurrency:
    decrease_ratio: 0.9
    ewma_alpha: 0.4
    initial_concurrency: 1
    rtt_deviation_scale: 2.5
  rate_limit_duration_secs: 1
  rate_limit_num: 9223372036854776000
  retry_attempts: 9223372036854776000
  retry_initial_backoff_secs: 1
  retry_max_duration_secs: 3600
  timeout_secs: 60
skip_unknown_fields: boolean
table: string
tls: ''
type: clickhouse
```

{% /tab %}

### Console{% #consolesink %}

Configuration for the `console` sink.
{% tab title="consolesink-request-model" %}
acknowledgements
optional

object

Controls how acknowledgements are handled for this sink.

See [End-to-end Acknowledgements](https://vector.dev/docs/about/under-the-hood/architecture/end-to-end-acknowledgements/) for more information on how event acknowledgement is handled.
enabled
optional

boolean,​null

Whether or not end-to-end acknowledgements are enabled.

When enabled for a sink, any source connected to that sink, where the source supports end-to-end acknowledgements as well, waits for events to be acknowledged by the sink before acknowledging them at the source.

Enabling or disabling acknowledgements at the sink level takes precedence over any global [`acknowledgements`](https://vector.dev/docs/reference/configuration/global-options/#acknowledgements) configuration.
default: null

 target
optional

 <oneOf>

The [standard stream](https://en.wikipedia.org/wiki/Standard_streams) to write to.
stdout
optional

stdout

Write output to [STDOUT](https://en.wikipedia.org/wiki/Standard_streams#Standard_output_\(stdout\)).
stderr
optional

stderr

Write output to [STDERR](https://en.wikipedia.org/wiki/Standard_streams#Standard_error_\(stderr\)).
 encoding
required



Configures how events are encoded into raw bytes.
 Avro
optional

object

Encodes an event as an [Apache Avro](https://avro.apache.org/) message.
 avro
required

object

Apache Avro-specific encoder options.
schema
required

string

The Avro schema.
codec
required

avro

Encodes an event as an [Apache Avro](https://avro.apache.org/) message.
 Csv
optional



Encodes an event as a CSV message.

This codec must be configured with fields to encode.
 csv
required

object

The CSV Serializer Options.
capacity
optional

integer

Set the capacity (in bytes) of the internal buffer used in the CSV writer. This defaults to a reasonable setting.
delimiter
optional

integer

The field delimiter to use when writing CSV.
double_quote
optional

boolean

Enable double quote escapes.

This is enabled by default, but it may be disabled. When disabled, quotes in field data are escaped instead of doubled.
escape
optional

integer

The escape character to use when writing CSV.

In some variants of CSV, quotes are escaped using a special escape character like \ (instead of escaping quotes by doubling them).

To use this, `double_quote` needs to be disabled as well; otherwise, it is ignored.
fields
required

[string]

Configures the fields that will be encoded, as well as the order in which they appear in the output.

If a field is not present in the event, the output will be an empty string.

Values of type `Array`, `Object`, and `Regex` are not supported and the output will be an empty string.
quote
optional

integer

The quote character to use when writing CSV.
 quote_style
optional

 <oneOf>

The quoting style to use when writing CSV data.
always
optional

always

This puts quotes around every field. Always.
necessary
optional

necessary

This puts quotes around fields only when necessary. They are necessary when fields contain a quote, delimiter or record terminator. Quotes are also necessary when writing an empty record (which is indistinguishable from a record with one empty field).
non_numeric
optional

non_numeric

This puts quotes around all fields that are non-numeric. Namely, when writing a field that does not parse as a valid float or integer, then quotes will be used even if they aren't strictly necessary.
never
optional

never

This never writes quotes, even if it would produce invalid CSV data.
codec
required

csv

Encodes an event as a CSV message.

This codec must be configured with fields to encode.
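A minimal sketch of the CSV codec using the options above (the field names are illustrative):

```yaml
encoding:
  codec: csv
  csv:
    fields:            # encoded in this order; missing fields become empty strings
      - timestamp
      - host
      - message
    delimiter: 44      # ASCII code for ','
    quote_style: necessary
```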
 Gelf
optional

object

Encodes an event as a [GELF](https://docs.graylog.org/docs/gelf) message.
codec
required

gelf

Encodes an event as a [GELF](https://docs.graylog.org/docs/gelf) message.
 Json
optional



Encodes an event as [JSON](https://www.json.org/).
 metric_tag_values
optional

 <oneOf>

Controls how metric tag values are encoded.

When set to `single`, only the last non-bare value of tags are displayed with the metric. When set to `full`, all metric tags are exposed as separate assignments.
single
optional

single

Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
full
optional

full

All tags are exposed as arrays of either string or null values.
codec
required

json

Encodes an event as [JSON](https://www.json.org/).
 Logfmt
optional

object

Encodes an event as a [logfmt](https://brandur.org/logfmt) message.
codec
required

logfmt

Encodes an event as a [logfmt](https://brandur.org/logfmt) message.
 Native
optional

object

Encodes an event in the [native Protocol Buffers format](https://github.com/vectordotdev/vector/blob/master/lib/vector-core/proto/event.proto).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
codec
required

native

Encodes an event in the [native Protocol Buffers format](https://github.com/vectordotdev/vector/blob/master/lib/vector-core/proto/event.proto).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
 NativeJson
optional

object

Encodes an event in the [native JSON format](https://github.com/vectordotdev/vector/blob/master/lib/codecs/tests/data/native_encoding/schema.cue).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
codec
required

native_json

Encodes an event in the [native JSON format](https://github.com/vectordotdev/vector/blob/master/lib/codecs/tests/data/native_encoding/schema.cue).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
 RawMessage
optional

object

No encoding.

This encoding uses the `message` field of a log event.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
codec
required

raw_message

No encoding.

This encoding uses the `message` field of a log event.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
 Text
optional



Plain text encoding.

This encoding uses the `message` field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
 metric_tag_values
optional

 <oneOf>

Controls how metric tag values are encoded.

When set to `single`, only the last non-bare value of tags are displayed with the metric. When set to `full`, all metric tags are exposed as separate assignments.
single
optional

single

Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
full
optional

full

All tags are exposed as arrays of either string or null values.
codec
required

text

Plain text encoding.

This encoding uses the `message` field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
except_fields
optional

array,​null

List of fields that are excluded from the encoded event.
only_fields
optional

array,​null

List of fields that are included in the encoded event.
 timestamp_format
optional

 <oneOf>

Format used for timestamp fields.
 Option 1
optional

 <oneOf>

The format in which a timestamp should be represented.
unix
optional

unix

Represent the timestamp as a Unix timestamp.
rfc3339
optional

rfc3339

Represent the timestamp as an RFC 3339 timestamp.
 framing
optional

 <oneOf>

Framing configuration.
 Option 1
optional

 <oneOf>

Framing configuration.
 Bytes
optional

object

Event data is not delimited at all.
method
required

bytes

Event data is not delimited at all.
 CharacterDelimited
optional



Event data is delimited by a single ASCII (7-bit) character.
 character_delimited
required

object

Options for the character delimited encoder.
delimiter
required

integer

The ASCII (7-bit) character that delimits byte sequences.
method
required

character_delimited

Event data is delimited by a single ASCII (7-bit) character.
 LengthDelimited
optional

object

Event data is prefixed with its length in bytes.

The prefix is a 32-bit unsigned integer, little endian.
method
required

length_delimited

Event data is prefixed with its length in bytes.

The prefix is a 32-bit unsigned integer, little endian.
 NewlineDelimited
optional

object

Event data is delimited by a newline (LF) character.
method
required

newline_delimited

Event data is delimited by a newline (LF) character.
{% /tab %}

{% tab title="consolesink-request-example" %}

```yaml
acknowledgements:
  enabled: null
encoding:
  codec: json
target: stdout
type: console
```

{% /tab %}
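Since `encoding` is required for the console sink, a fuller sketch combining the options above (the sink name is illustrative):

```yaml
sinks:
  my_console:
    type: console
    target: stderr               # write to STDERR instead of the default
    encoding:
      codec: json
    framing:
      method: newline_delimited  # one JSON event per line
```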

### Databend{% #databend %}

Configuration for the `databend` sink.
{% tab title="databend-request-model" %}
acknowledgements
optional

object

Controls how acknowledgements are handled for this sink.

See [End-to-end Acknowledgements](https://vector.dev/docs/about/under-the-hood/architecture/end-to-end-acknowledgements/) for more information on how event acknowledgement is handled.
enabled
optional

boolean,​null

Whether or not end-to-end acknowledgements are enabled.

When enabled for a sink, any source connected to that sink, where the source supports end-to-end acknowledgements as well, waits for events to be acknowledged by the sink before acknowledging them at the source.

Enabling or disabling acknowledgements at the sink level takes precedence over any global [`acknowledgements`](https://vector.dev/docs/reference/configuration/global-options/#acknowledgements) configuration.
default: null

 auth
optional

 <oneOf>

Configuration of the authentication strategy for HTTP requests.

HTTP authentication should be used with HTTPS only, as the authentication credentials are passed as an HTTP header without any additional encryption beyond what is provided by the transport itself.
 Option 1
optional

 <oneOf>

Configuration of the authentication strategy for HTTP requests.

HTTP authentication should be used with HTTPS only, as the authentication credentials are passed as an HTTP header without any additional encryption beyond what is provided by the transport itself.
 Basic
optional

object

Basic authentication.

The username and password are concatenated and encoded via [base64](https://en.wikipedia.org/wiki/Base64).
password
required

string

The basic authentication password.
strategy
required

basic

Basic authentication.

The username and password are concatenated and encoded via [base64](https://en.wikipedia.org/wiki/Base64).
user
required

string

The basic authentication username.
 Bearer
optional

object

Bearer authentication.

The bearer token value (OAuth2, JWT, etc.) is passed as-is.
strategy
required

bearer

Bearer authentication.

The bearer token value (OAuth2, JWT, etc.) is passed as-is.
token
required

string

The bearer authentication token.
 batch
optional

object

Event batching behavior.
max_bytes
optional

integer,​null

The maximum size of a batch that is processed by a sink.

This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null

max_events
optional

integer,​null

The maximum size of a batch before it is flushed.
default: null

timeout_secs
optional

number,​null

The maximum age of a batch before it is flushed.
default: null

 compression
optional

 <oneOf>

Compression configuration.
none
optional

none

No compression.
gzip
optional

gzip

[Gzip](https://www.gzip.org/) compression.
database
optional

string

The database that contains the table that data is inserted into.
 encoding
optional



Configures how events are encoded into raw bytes.
 Csv
optional



Encodes an event as a CSV message.

This codec must be configured with fields to encode.
 csv
required

object

The CSV Serializer Options.
capacity
optional

integer

Set the capacity (in bytes) of the internal buffer used in the CSV writer. This defaults to a reasonable setting.
delimiter
optional

integer

The field delimiter to use when writing CSV.
double_quote
optional

boolean

Enable double quote escapes.

This is enabled by default, but it may be disabled. When disabled, quotes in field data are escaped instead of doubled.
escape
optional

integer

The escape character to use when writing CSV.

In some variants of CSV, quotes are escaped using a special escape character like \ (instead of escaping quotes by doubling them).

To use this, `double_quote` needs to be disabled as well; otherwise, it is ignored.
fields
required

[string]

Configures the fields that will be encoded, as well as the order in which they appear in the output.

If a field is not present in the event, the output will be an empty string.

Values of type `Array`, `Object`, and `Regex` are not supported and the output will be an empty string.
quote
optional

integer

The quote character to use when writing CSV.
 quote_style
optional

 <oneOf>

The quoting style to use when writing CSV data.
always
optional

always

This puts quotes around every field. Always.
necessary
optional

necessary

This puts quotes around fields only when necessary. They are necessary when fields contain a quote, delimiter or record terminator. Quotes are also necessary when writing an empty record (which is indistinguishable from a record with one empty field).
non_numeric
optional

non_numeric

This puts quotes around all fields that are non-numeric. Namely, when writing a field that does not parse as a valid float or integer, then quotes will be used even if they aren't strictly necessary.
never
optional

never

This never writes quotes, even if it would produce invalid CSV data.
codec
required

csv

Encodes an event as a CSV message.

This codec must be configured with fields to encode.
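
A minimal sketch of the CSV encoder configuration described above (the field names are hypothetical):

```yaml
encoding:
  codec: csv
  csv:
    # Only these event fields are written, in this order; fields missing
    # from an event are emitted as empty strings.
    fields: ["timestamp", "host", "message"]
    delimiter: 44            # byte value of the delimiter; 44 is ","
    quote_style: necessary   # quote only when required
```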
 Json
optional



Encodes an event as [JSON](https://www.json.org/).
 metric_tag_values
optional

 <oneOf>

Controls how metric tag values are encoded.

When set to `single`, only the last non-bare value of tags are displayed with the metric. When set to `full`, all metric tags are exposed as separate assignments.
single
optional

single

Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
full
optional

full

All tags are exposed as arrays of either string or null values.
codec
required

json

Encodes an event as [JSON](https://www.json.org/).
except_fields
optional

array,​null

List of fields that are excluded from the encoded event.
only_fields
optional

array,​null

List of fields that are included in the encoded event.
 timestamp_format
optional

 <oneOf>

Format used for timestamp fields.
 Option 1
optional

 <oneOf>

The format in which a timestamp should be represented.
unix
optional

unix

Represent the timestamp as a Unix timestamp.
rfc3339
optional

rfc3339

Represent the timestamp as an RFC 3339 timestamp.
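
A sketch combining the JSON codec with the field filtering and timestamp options above (the excluded field name is hypothetical):

```yaml
encoding:
  codec: json
  except_fields: ["internal_debug"]  # drop this field from the encoded event
  timestamp_format: unix             # emit timestamps as Unix timestamps
```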
endpoint
required

string

The URI component of a request.

The endpoint of the Databend server.
 request
optional

object

Middleware settings for outbound requests.

Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
 adaptive_concurrency
optional

object

Configuration of adaptive concurrency parameters.

These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}decrease_ratio
optional

number

The fraction of the current value to set the new concurrency limit when decreasing the limit.

Valid values are greater than `0` and less than `1`. Smaller values cause the algorithm to scale back rapidly when latency increases.

Note that the new limit is rounded down after applying this ratio.
default: 0.9
ewma_alpha
optional

number

The weighting of new measurements compared to older measurements.

Valid values are greater than `0` and less than `1`.

ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4
initial_concurrency
optional

integer

The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).

It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the `adaptive_concurrency_limit` metric.
default: 1
rtt_deviation_scale
optional

number

Scale of RTT deviations which are not considered anomalous.

Valid values are greater than or equal to `0`, and we expect reasonable values to range from `1.0` to `3.0`.

When calculating the past RTT average, we also compute a secondary "deviation" value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
 concurrency
optional

 <oneOf>

Configuration for outbound request concurrency.
none
optional

none

A fixed concurrency of 1.

Only one request can be outstanding at any given time.
adaptive
optional

adaptive

Concurrency will be managed by Vector's [Adaptive Request Concurrency](https://vector.dev/docs/about/under-the-hood/networking/arc/) feature.
Fixed
optional

integer

A fixed amount of concurrency will be allowed.
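
The three concurrency modes above are mutually exclusive; a sketch of each (the fixed limit of 10 is illustrative):

```yaml
request:
  concurrency: none       # one request in flight at a time
# request:
#   concurrency: adaptive # managed by Adaptive Request Concurrency
# request:
#   concurrency: 10       # a fixed limit of concurrent requests
```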
rate_limit_duration_secs
optional

integer,​null

The time window used for the `rate_limit_num` option.
default: 1
rate_limit_num
optional

integer,​null

The maximum number of requests allowed within the `rate_limit_duration_secs` time window.
default: 9223372036854776000
retry_attempts
optional

integer,​null

The maximum number of retries to make for failed requests.

The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000
retry_initial_backoff_secs
optional

integer,​null

The amount of time to wait before attempting the first retry for a failed request.

After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1
retry_max_duration_secs
optional

integer,​null

The maximum amount of time to wait between retries.
default: 3600
timeout_secs
optional

integer,​null

The time a request can take before being aborted.

Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60
table
required

string

The table that data is inserted into.
 tls
optional

 <oneOf>

TLS configuration.
 Option 1
optional

object

TLS configuration.
alpn_protocols
optional

array,​null

Sets the list of supported ALPN protocols.

Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order in which they are defined.
 ca_file
optional

 <oneOf>

Absolute path to an additional CA certificate file.

The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
 crt_file
optional

 <oneOf>

Absolute path to a certificate file used to identify this server.

The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format.

If this is set, and is not a PKCS#12 archive, `key_file` must also be set.
Option 1
optional

string

A file path.
 key_file
optional

 <oneOf>

Absolute path to a private key file used to identify this server.

The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
key_pass
optional

string,​null

Passphrase used to unlock the encrypted key file.

This has no effect unless `key_file` is set.
verify_certificate
optional

boolean,​null

Enables certificate verification.

If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.

Relevant for both incoming and outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the validity of certificates.
verify_hostname
optional

boolean,​null

Enables hostname verification.

If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.

Only relevant for outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the remote hostname.
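
A sketch of the TLS options above, assuming certificates are provided as files (the paths and the environment variable are hypothetical):

```yaml
tls:
  ca_file: /etc/vector/tls/ca.pem        # additional trusted CA certificate
  crt_file: /etc/vector/tls/client.pem   # certificate identifying this endpoint
  key_file: /etc/vector/tls/client.key   # matching private key
  key_pass: ${TLS_KEY_PASSPHRASE}        # only needed if the key file is encrypted
  verify_certificate: true
  verify_hostname: true
```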
{% /tab %}

{% tab title="databend-request-example" %}

```yaml
acknowledgements:
  enabled: null
auth: ''
batch:
  max_bytes: null
  max_events: null
  timeout_secs: null
compression: none
database: default
encoding:
  codec: json
endpoint: string
request:
  adaptive_concurrency:
    decrease_ratio: 0.9
    ewma_alpha: 0.4
    initial_concurrency: 1
    rtt_deviation_scale: 2.5
  rate_limit_duration_secs: 1
  rate_limit_num: 9223372036854776000
  retry_attempts: 9223372036854776000
  retry_initial_backoff_secs: 1
  retry_max_duration_secs: 3600
  timeout_secs: 60
table: string
tls: ''
type: databend
```

{% /tab %}

### Datadog Archives{% #datadogarchivessink %}

Configuration for the `datadog_archives` sink.
OptionsSchema
{% tab title="datadogarchivessink-request-model" %}
FieldrequiredTypeDescription acknowledgements
optional

object

Controls how acknowledgements are handled for this sink.

See [End-to-end Acknowledgements](https://vector.dev/docs/about/under-the-hood/architecture/end-to-end-acknowledgements/) for more information on how event acknowledgement is handled.
enabled
optional

boolean,​null

Whether or not end-to-end acknowledgements are enabled.

When enabled for a sink, any source connected to that sink, where the source supports end-to-end acknowledgements as well, waits for events to be acknowledged by the sink before acknowledging them at the source.

Enabling or disabling acknowledgements at the sink level takes precedence over any global [`acknowledgements`](https://vector.dev/docs/reference/configuration/global-options/#acknowledgements) configuration.
default: null
 aws_s3
optional

 <oneOf>

S3-specific configuration options.
 Option 1
optional



S3-specific configuration options.
auth
optional



Configuration of the authentication strategy for interacting with AWS services.
 acl
optional

 <oneOf>

Canned ACL to apply to the created objects.

For more information, see [Canned ACL](https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl).
 Option 1
optional

 <oneOf>

S3 Canned ACLs.

For more information, see [Canned ACL](https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl).
private
optional

private

Bucket/object are private.

The bucket/object owner is granted the `FULL_CONTROL` permission, and no one else has access.

This is the default.
public-read
optional

public-read

Bucket/object can be read publicly.

The bucket/object owner is granted the `FULL_CONTROL` permission, and anyone in the `AllUsers` grantee group is granted the `READ` permission.
public-read-write
optional

public-read-write

Bucket/object can be read and written publicly.

The bucket/object owner is granted the `FULL_CONTROL` permission, and anyone in the `AllUsers` grantee group is granted the `READ` and `WRITE` permissions.

This is generally not recommended.
aws-exec-read
optional

aws-exec-read

Bucket/object are private, and readable by EC2.

The bucket/object owner is granted the `FULL_CONTROL` permission, and the AWS EC2 service is granted the `READ` permission for the purpose of reading Amazon Machine Image (AMI) bundles from the given bucket.
authenticated-read
optional

authenticated-read

Bucket/object can be read by authenticated users.

The bucket/object owner is granted the `FULL_CONTROL` permission, and anyone in the `AuthenticatedUsers` grantee group is granted the `READ` permission.
bucket-owner-read
optional

bucket-owner-read

Object is private, except to the bucket owner.

The object owner is granted the `FULL_CONTROL` permission, and the bucket owner is granted the `READ` permission.

Only relevant when specified for an object: this canned ACL is otherwise ignored when specified for a bucket.
bucket-owner-full-control
optional

bucket-owner-full-control

Object is semi-private.

Both the object owner and bucket owner are granted the `FULL_CONTROL` permission.

Only relevant when specified for an object: this canned ACL is otherwise ignored when specified for a bucket.
log-delivery-write
optional

log-delivery-write

Bucket can have logs written.

The `LogDelivery` grantee group is granted `WRITE` and `READ_ACP` permissions.

Only relevant when specified for a bucket: this canned ACL is otherwise ignored when specified for an object.

For more information about logs, see [Amazon S3 Server Access Logging](https://docs.aws.amazon.com/AmazonS3/latest/dev/ServerLogs.html).
grant_full_control
optional

string,​null

Grants `READ`, `READ_ACP`, and `WRITE_ACP` permissions on the created objects to the named [grantee](https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#specifying-grantee).

This allows the grantee to read the created objects and their metadata, as well as read and modify the ACL on the created objects.
grant_read
optional

string,​null

Grants `READ` permissions on the created objects to the named [grantee](https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#specifying-grantee).

This allows the grantee to read the created objects and their metadata.
grant_read_acp
optional

string,​null

Grants `READ_ACP` permissions on the created objects to the named [grantee](https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#specifying-grantee).

This allows the grantee to read the ACL on the created objects.
grant_write_acp
optional

string,​null

Grants `WRITE_ACP` permissions on the created objects to the named [grantee](https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#specifying-grantee).

This allows the grantee to modify the ACL on the created objects.
 server_side_encryption
optional

 <oneOf>

AWS S3 Server-Side Encryption algorithms.

The Server-side Encryption algorithm used when storing these objects.
 Option 1
optional

 <oneOf>

AWS S3 Server-Side Encryption algorithms.

More information on each algorithm can be found in the [AWS documentation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/serv-side-encryption.html).
AES256
optional

AES256

Each object is encrypted with AES-256 using a unique key.

This corresponds to the `SSE-S3` option.
aws:kms
optional

aws:kms

Each object is encrypted with AES-256 using keys managed by AWS KMS.

Depending on whether or not a KMS key ID is specified, this corresponds either to the `SSE-KMS` option (keys generated/managed by KMS) or the `SSE-C` option (keys generated by the customer, managed by KMS).
ssekms_key_id
optional

string,​null

Specifies the ID of the AWS Key Management Service (AWS KMS) symmetric customer managed customer master key (CMK) that is used for the created objects.

Only applies when `server_side_encryption` is configured to use KMS.

If not specified, Amazon S3 uses the AWS managed CMK in AWS to protect the data.
 storage_class
required

 <oneOf>

The storage class for the created objects.

For more information, see [Using Amazon S3 storage classes](https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html).
STANDARD
optional

STANDARD

Standard Redundancy.
REDUCED_REDUNDANCY
optional

REDUCED_REDUNDANCY

Reduced Redundancy.
INTELLIGENT_TIERING
optional

INTELLIGENT_TIERING

Intelligent Tiering.
STANDARD_IA
optional

STANDARD_IA

Infrequently Accessed.
ONEZONE_IA
optional

ONEZONE_IA

Infrequently Accessed (single Availability zone).
GLACIER
optional

GLACIER

Glacier Flexible Retrieval.
DEEP_ARCHIVE
optional

DEEP_ARCHIVE

Glacier Deep Archive.
tags
optional

object,​null

The tag-set for the object.
endpoint
optional

string,​null

Custom endpoint for use with AWS-compatible services.
default: null
region
optional

string,​null

The [AWS region](https://docs.aws.amazon.com/general/latest/gr/rande.html#regional-endpoints) of the target service.
default: null
 azure_blob
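
A sketch of the S3-specific options above for the `datadog_archives` sink (all values are illustrative; the KMS key alias is hypothetical):

```yaml
aws_s3:
  acl: private
  storage_class: STANDARD_IA
  server_side_encryption: aws:kms
  ssekms_key_id: alias/my-archive-key
  tags:
    team: observability
  region: us-east-1
```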
optional

 <oneOf>

ABS-specific configuration options.
 Option 1
optional

object

ABS-specific configuration options.
connection_string
required

string

The Azure Blob Storage Account connection string.

Authentication with access key is the only supported authentication method.
bucket
required

string

The name of the bucket to store the archives in.
 encoding
optional

object

Transformations to prepare an event for serialization.
except_fields
optional

array,​null

List of fields that are excluded from the encoded event.
only_fields
optional

array,​null

List of fields that are included in the encoded event.
 timestamp_format
optional

 <oneOf>

Format used for timestamp fields.
 Option 1
optional

 <oneOf>

The format in which a timestamp should be represented.
unix
optional

unix

Represent the timestamp as a Unix timestamp.
rfc3339
optional

rfc3339

Represent the timestamp as a RFC 3339 timestamp.
 gcp_cloud_storage
optional

 <oneOf>

GCS-specific configuration options.
 Option 1
optional



GCS-specific configuration options.
 acl
optional

 <oneOf>

GCS Predefined ACLs.

For more information, see [Predefined ACLs](https://cloud.google.com/storage/docs/access-control/lists#predefined-acl).
 Option 1
optional

 <oneOf>

GCS Predefined ACLs.

For more information, see [Predefined ACLs](https://cloud.google.com/storage/docs/access-control/lists#predefined-acl).
authenticated-read
optional

authenticated-read

Bucket/object can be read by authenticated users.

The bucket/object owner is granted the `OWNER` permission, and any authenticated Google account holder is granted the `READER` permission.
bucket-owner-full-control
optional

bucket-owner-full-control

Object is semi-private.

Both the object owner and bucket owner are granted the `OWNER` permission.

Only relevant when specified for an object: this predefined ACL is otherwise ignored when specified for a bucket.
bucket-owner-read
optional

bucket-owner-read

Object is private, except to the bucket owner.

The object owner is granted the `OWNER` permission, and the bucket owner is granted the `READER` permission.

Only relevant when specified for an object: this predefined ACL is otherwise ignored when specified for a bucket.
private
optional

private

Bucket/object are private.

The bucket/object owner is granted the `OWNER` permission, and no one else has access.
project-private
optional

project-private

Bucket/object are private within the project.

Project owners and project editors are granted the `OWNER` permission, and anyone who is part of the project team is granted the `READER` permission.

This is the default.
public-read
optional

public-read

Bucket/object can be read publicly.

The bucket/object owner is granted the `OWNER` permission, and all other users, whether authenticated or anonymous, are granted the `READER` permission.
metadata
optional

object,​null

The set of metadata `key:value` pairs for the created objects.

For more information, see [Custom metadata](https://cloud.google.com/storage/docs/metadata#custom-metadata).
 storage_class
optional

 <oneOf>

GCS storage classes.

For more information, see [Storage classes](https://cloud.google.com/storage/docs/storage-classes).
 Option 1
optional

 <oneOf>

GCS storage classes.

For more information, see [Storage classes](https://cloud.google.com/storage/docs/storage-classes).
STANDARD
optional

STANDARD

Standard storage.

This is the default.
NEARLINE
optional

NEARLINE

Nearline storage.
COLDLINE
optional

COLDLINE

Coldline storage.
ARCHIVE
optional

ARCHIVE

Archive storage.
 api_key
optional

 <oneOf>

An [API key](https://cloud.google.com/docs/authentication/api-keys).

Either an API key or a path to a service account credentials JSON file can be specified.

If both are unset, the `GOOGLE_APPLICATION_CREDENTIALS` environment variable is checked for a filename. If no filename is specified, an attempt is made to fetch an instance service account for the compute instance the program is running on. If this is not on a GCE instance, you must define it with an API key or a service account credentials JSON file.
Option 1
optional

string

A wrapper for sensitive strings containing credentials.
credentials_path
optional

string,​null

Path to a [service account](https://cloud.google.com/docs/authentication/production#manually) credentials JSON file.

Either an API key or a path to a service account credentials JSON file can be specified.

If both are unset, the `GOOGLE_APPLICATION_CREDENTIALS` environment variable is checked for a filename. If no filename is specified, an attempt is made to fetch an instance service account for the compute instance the program is running on. If this is not on a GCE instance, you must define it with an API key or a service account credentials JSON file.
skip_authentication
optional

boolean

Skip all authentication handling. For use with integration tests only.
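
A sketch of the GCS-specific options above, assuming authentication through a service account credentials file (the path and metadata values are hypothetical):

```yaml
gcp_cloud_storage:
  credentials_path: /etc/vector/gcp-credentials.json
  acl: project-private
  storage_class: NEARLINE
  metadata:
    retention: "90d"
```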
key_prefix
optional

string,​null

A prefix to apply to all object keys.

Prefixes are useful for partitioning objects, such as by creating an object key that stores objects under a particular directory. If using a prefix for this purpose, it must end in `/` to act as a directory path. A trailing `/` is **not** automatically added.
 request
optional

object

Middleware settings for outbound requests.

Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
 adaptive_concurrency
optional

object

Configuration of adaptive concurrency parameters.

These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}decrease_ratio
optional

number

The fraction of the current value to set the new concurrency limit when decreasing the limit.

Valid values are greater than `0` and less than `1`. Smaller values cause the algorithm to scale back rapidly when latency increases.

Note that the new limit is rounded down after applying this ratio.
default: 0.9
ewma_alpha
optional

number

The weighting of new measurements compared to older measurements.

Valid values are greater than `0` and less than `1`.

ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4
initial_concurrency
optional

integer

The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).

It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the `adaptive_concurrency_limit` metric.
default: 1
rtt_deviation_scale
optional

number

Scale of RTT deviations which are not considered anomalous.

Valid values are greater than or equal to `0`, and we expect reasonable values to range from `1.0` to `3.0`.

When calculating the past RTT average, we also compute a secondary "deviation" value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
 concurrency
optional

 <oneOf>

Configuration for outbound request concurrency.
none
optional

none

A fixed concurrency of 1.

Only one request can be outstanding at any given time.
adaptive
optional

adaptive

Concurrency will be managed by Vector's [Adaptive Request Concurrency](https://vector.dev/docs/about/under-the-hood/networking/arc/) feature.
Fixed
optional

integer

A fixed amount of concurrency will be allowed.
rate_limit_duration_secs
optional

integer,​null

The time window used for the `rate_limit_num` option.
default: 1
rate_limit_num
optional

integer,​null

The maximum number of requests allowed within the `rate_limit_duration_secs` time window.
default: 9223372036854776000
retry_attempts
optional

integer,​null

The maximum number of retries to make for failed requests.

The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000
retry_initial_backoff_secs
optional

integer,​null

The amount of time to wait before attempting the first retry for a failed request.

After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1
retry_max_duration_secs
optional

integer,​null

The maximum amount of time to wait between retries.
default: 3600
timeout_secs
optional

integer,​null

The time a request can take before being aborted.

Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60
service
required

string

The name of the object storage service to use.
 tls
optional

 <oneOf>

TLS configuration.
 Option 1
optional

object

TLS configuration.
alpn_protocols
optional

array,​null

Sets the list of supported ALPN protocols.

Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order in which they are defined.
 ca_file
optional

 <oneOf>

Absolute path to an additional CA certificate file.

The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
 crt_file
optional

 <oneOf>

Absolute path to a certificate file used to identify this server.

The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format.

If this is set, and is not a PKCS#12 archive, `key_file` must also be set.
Option 1
optional

string

A file path.
 key_file
optional

 <oneOf>

Absolute path to a private key file used to identify this server.

The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
key_pass
optional

string,​null

Passphrase used to unlock the encrypted key file.

This has no effect unless `key_file` is set.
verify_certificate
optional

boolean,​null

Enables certificate verification.

If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.

Relevant for both incoming and outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the validity of certificates.
verify_hostname
optional

boolean,​null

Enables hostname verification.

If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.

Only relevant for outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the remote hostname.
{% /tab %}

{% tab title="datadogarchivessink-request-example" %}

```yaml
acknowledgements:
  enabled: null
aws_s3: ''
azure_blob: ''
bucket: string
encoding: {}
gcp_cloud_storage: ''
key_prefix: string
request:
  adaptive_concurrency:
    decrease_ratio: 0.9
    ewma_alpha: 0.4
    initial_concurrency: 1
    rtt_deviation_scale: 2.5
  rate_limit_duration_secs: 1
  rate_limit_num: 9223372036854776000
  retry_attempts: 9223372036854776000
  retry_initial_backoff_secs: 1
  retry_max_duration_secs: 3600
  timeout_secs: 60
service: string
tls: ''
type: datadog_archives
```

{% /tab %}

### Datadog Events{% #datadogevents %}

Configuration for the `datadog_events` sink.
OptionsSchema
{% tab title="datadogevents-request-model" %}
FieldrequiredTypeDescription region
optional

 <oneOf>

**DEPRECATED**: The Datadog region to send events to.
 Option 1
optional

 <oneOf>

A Datadog region.
us
optional

us

US region.
eu
optional

eu

EU region.
 request
optional

object

Middleware settings for outbound requests.

Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
 adaptive_concurrency
optional

object

Configuration of adaptive concurrency parameters.

These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}decrease_ratio
optional

number

The fraction of the current value to set the new concurrency limit when decreasing the limit.

Valid values are greater than `0` and less than `1`. Smaller values cause the algorithm to scale back rapidly when latency increases.

Note that the new limit is rounded down after applying this ratio.
default: 0.9
ewma_alpha
optional

number

The weighting of new measurements compared to older measurements.

Valid values are greater than `0` and less than `1`.

ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4
initial_concurrency
optional

integer

The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).

It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the `adaptive_concurrency_limit` metric.
default: 1
rtt_deviation_scale
optional

number

Scale of RTT deviations which are not considered anomalous.

Valid values are greater than or equal to `0`, and we expect reasonable values to range from `1.0` to `3.0`.

When calculating the past RTT average, we also compute a secondary "deviation" value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
 concurrency
optional

 <oneOf>

Configuration for outbound request concurrency.
none
optional

none

A fixed concurrency of 1.

Only one request can be outstanding at any given time.
adaptive
optional

adaptive

Concurrency will be managed by Vector's [Adaptive Request Concurrency](https://vector.dev/docs/about/under-the-hood/networking/arc/) feature.
Fixed
optional

integer

A fixed amount of concurrency will be allowed.
rate_limit_duration_secs
optional

integer,​null

The time window used for the `rate_limit_num` option.
default: 1
rate_limit_num
optional

integer,​null

The maximum number of requests allowed within the `rate_limit_duration_secs` time window.
default: 9223372036854776000
retry_attempts
optional

integer,​null

The maximum number of retries to make for failed requests.

The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000
retry_initial_backoff_secs
optional

integer,​null

The amount of time to wait before attempting the first retry for a failed request.

After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1
retry_max_duration_secs
optional

integer,​null

The maximum amount of time to wait between retries.
default: 3600
timeout_secs
optional

integer,​null

The time a request can take before being aborted.

Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60
 acknowledgements
optional

object

Controls how acknowledgements are handled for this sink.

See [End-to-end Acknowledgements](https://vector.dev/docs/about/under-the-hood/architecture/end-to-end-acknowledgements/) for more information on how event acknowledgement is handled.
enabled
optional

boolean,​null

Whether or not end-to-end acknowledgements are enabled.

When enabled for a sink, any connected source that also supports end-to-end acknowledgements waits for the sink to acknowledge events before acknowledging them at the source.

Enabling or disabling acknowledgements at the sink level takes precedence over any global [`acknowledgements`](https://vector.dev/docs/reference/configuration/global-options/#acknowledgements) configuration.
default: null
default_api_key
required

string

The default Datadog [API key](https://docs.datadoghq.com/api/?lang=bash#authentication) to use in authentication of HTTP requests.

If an event has a Datadog [API key](https://docs.datadoghq.com/api/?lang=bash#authentication) set explicitly in its metadata, it takes precedence over this setting.
endpoint
optional

string,​null

The endpoint to send observability data to.

The endpoint must contain an HTTP scheme, and may specify a hostname or IP address and port.

If set, overrides the `site` option.
site
optional

string

The Datadog [site](https://docs.datadoghq.com/getting_started/site) to send observability data to.
 tls
optional

 <oneOf>

Configures the TLS options for incoming/outgoing connections.
 Option 1
optional



Configures the TLS options for incoming/outgoing connections.
enabled
optional

boolean,​null

Whether or not to require TLS for incoming or outgoing connections.

When enabled and used for incoming connections, an identity certificate is also required. See `tls.crt_file` for more information.
alpn_protocols
optional

array,​null

Sets the list of supported ALPN protocols.

Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order in which they are defined.
 ca_file
optional

 <oneOf>

Absolute path to an additional CA certificate file.

The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
 crt_file
optional

 <oneOf>

Absolute path to a certificate file used to identify this server.

The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format.

If this is set, and is not a PKCS#12 archive, `key_file` must also be set.
Option 1
optional

string

A file path.
 key_file
optional

 <oneOf>

Absolute path to a private key file used to identify this server.

The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
key_pass
optional

string,​null

Passphrase used to unlock the encrypted key file.

This has no effect unless `key_file` is set.
verify_certificate
optional

boolean,​null

Enables certificate verification.

If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.

Relevant for both incoming and outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the validity of certificates.
verify_hostname
optional

boolean,​null

Enables hostname verification.

If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.

Only relevant for outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the remote hostname.
{% /tab %}

{% tab title="datadogevents-request-example" %}

```yaml
region: ''
request:
  adaptive_concurrency:
    decrease_ratio: 0.9
    ewma_alpha: 0.4
    initial_concurrency: 1
    rtt_deviation_scale: 2.5
  rate_limit_duration_secs: 1
  rate_limit_num: 9223372036854776000
  retry_attempts: 9223372036854776000
  retry_initial_backoff_secs: 1
  retry_max_duration_secs: 3600
  timeout_secs: 60
type: datadog_events
```

{% /tab %}
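Putting these options together: a minimal `datadog_events` sink that sets the Datadog site and enables end-to-end acknowledgements might look like the sketch below. The component names and the environment variable are placeholders, not values from this reference.

```yaml
sinks:
  my_events_sink:            # placeholder sink name
    type: datadog_events
    inputs:
      - my_source            # placeholder upstream component
    default_api_key: ${DATADOG_API_KEY}
    site: datadoghq.com      # or datadoghq.eu, and so on
    acknowledgements:
      enabled: true          # takes precedence over the global setting
```

Because `endpoint` is unset here, the `site` option determines where events are sent.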

### Datadog Logs{% #datadoglogs %}

Configuration for the `datadog_logs` sink.
OptionsSchema
{% tab title="datadoglogs-request-model" %}
FieldrequiredTypeDescription batch
optional

object

Event batching behavior.
max_bytes
optional

integer,​null

The maximum size of a batch that is processed by a sink.

This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
max_events
optional

integer,​null

The maximum size of a batch before it is flushed.
default: null
timeout_secs
optional

number,​null

The maximum age of a batch before it is flushed.
default: null
 compression
optional

 <oneOf>

Compression configuration.

All compression algorithms use the default compression level unless otherwise specified.
 Option 1
optional

 <oneOf>

Compression configuration.

All compression algorithms use the default compression level unless otherwise specified.
 Option 1
optional

 <oneOf>

Compression algorithm.
none
optional

none

No compression.
gzip
optional

gzip

[Gzip](https://www.gzip.org/) compression.
zlib
optional

zlib

[Zlib](https://zlib.net/) compression.
zstd
optional

zstd

[Zstandard](https://facebook.github.io/zstd/) compression.
 Option 2
optional

object

Compression algorithm and compression level.
 algorithm
required

 <oneOf>

Compression algorithm.
none
optional

none

No compression.
gzip
optional

gzip

[Gzip](https://www.gzip.org/) compression.
zlib
optional

zlib

[Zlib](https://zlib.net/) compression.
zstd
optional

zstd

[Zstandard](https://facebook.github.io/zstd/) compression.
level
optional

enum

Compression level. Allowed enum values: `none,fast,best,default,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21`
 encoding
optional

object

Transformations to prepare an event for serialization.
except_fields
optional

array,​null

List of fields that are excluded from the encoded event.
only_fields
optional

array,​null

List of fields that are included in the encoded event.
 timestamp_format
optional

 <oneOf>

Format used for timestamp fields.
 Option 1
optional

 <oneOf>

The format in which a timestamp should be represented.
unix
optional

unix

Represent the timestamp as a Unix timestamp.
rfc3339
optional

rfc3339

Represent the timestamp as an RFC 3339 timestamp.
 region
optional

 <oneOf>

**DEPRECATED**: The Datadog region to send logs to.
 Option 1
optional

 <oneOf>

A Datadog region.
us
optional

us

US region.
eu
optional

eu

EU region.
 request
optional



Outbound HTTP request settings.
headers
optional

object

Additional HTTP headers to add to every HTTP request.
 adaptive_concurrency
optional

object

Configuration of adaptive concurrency parameters.

These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or unstable performance and sink behavior. Proceed with caution.
decrease_ratio
optional

number

The fraction of the current value to set the new concurrency limit when decreasing the limit.

Valid values are greater than `0` and less than `1`. Smaller values cause the algorithm to scale back rapidly when latency increases.

Note that the new limit is rounded down after applying this ratio.
default: 0.9
ewma_alpha
optional

number

The weighting of new measurements compared to older measurements.

Valid values are greater than `0` and less than `1`.

ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4
initial_concurrency
optional

integer

The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).

It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the `adaptive_concurrency_limit` metric.
default: 1
rtt_deviation_scale
optional

number

Scale of RTT deviations which are not considered anomalous.

Valid values are greater than or equal to `0`, and we expect reasonable values to range from `1.0` to `3.0`.

When calculating the past RTT average, we also compute a secondary "deviation" value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
 concurrency
optional

 <oneOf>

Configuration for outbound request concurrency.
none
optional

none

A fixed concurrency of 1.

Only one request can be outstanding at any given time.
adaptive
optional

adaptive

Concurrency will be managed by Vector's [Adaptive Request Concurrency](https://vector.dev/docs/about/under-the-hood/networking/arc/) feature.
Fixed
optional

integer

A fixed amount of concurrency will be allowed.
rate_limit_duration_secs
optional

integer,​null

The time window used for the `rate_limit_num` option.
rate_limit_num
optional

integer,​null

The maximum number of requests allowed within the `rate_limit_duration_secs` time window.
retry_attempts
optional

integer,​null

The maximum number of retries to make for failed requests.

The default, for all intents and purposes, represents an infinite number of retries.
retry_initial_backoff_secs
optional

integer,​null

The amount of time to wait before attempting the first retry for a failed request.

After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
retry_max_duration_secs
optional

integer,​null

The maximum amount of time to wait between retries.
timeout_secs
optional

integer,​null

The time a request can take before being aborted.

Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
 acknowledgements
optional

object

Controls how acknowledgements are handled for this sink.

See [End-to-end Acknowledgements](https://vector.dev/docs/about/under-the-hood/architecture/end-to-end-acknowledgements/) for more information on how event acknowledgement is handled.
enabled
optional

boolean,​null

Whether or not end-to-end acknowledgements are enabled.

When enabled for a sink, any connected source that also supports end-to-end acknowledgements waits for the sink to acknowledge events before acknowledging them at the source.

Enabling or disabling acknowledgements at the sink level takes precedence over any global [`acknowledgements`](https://vector.dev/docs/reference/configuration/global-options/#acknowledgements) configuration.
default: null
default_api_key
required

string

The default Datadog [API key](https://docs.datadoghq.com/api/?lang=bash#authentication) to use in authentication of HTTP requests.

If an event has a Datadog [API key](https://docs.datadoghq.com/api/?lang=bash#authentication) set explicitly in its metadata, it takes precedence over this setting.
endpoint
optional

string,​null

The endpoint to send observability data to.

The endpoint must contain an HTTP scheme, and may specify a hostname or IP address and port.

If set, overrides the `site` option.
site
optional

string

The Datadog [site](https://docs.datadoghq.com/getting_started/site) to send observability data to.
 tls
optional

 <oneOf>

Configures the TLS options for incoming/outgoing connections.
 Option 1
optional



Configures the TLS options for incoming/outgoing connections.
enabled
optional

boolean,​null

Whether or not to require TLS for incoming or outgoing connections.

When enabled and used for incoming connections, an identity certificate is also required. See `tls.crt_file` for more information.
alpn_protocols
optional

array,​null

Sets the list of supported ALPN protocols.

Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order in which they are defined.
 ca_file
optional

 <oneOf>

Absolute path to an additional CA certificate file.

The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
 crt_file
optional

 <oneOf>

Absolute path to a certificate file used to identify this server.

The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format.

If this is set, and is not a PKCS#12 archive, `key_file` must also be set.
Option 1
optional

string

A file path.
 key_file
optional

 <oneOf>

Absolute path to a private key file used to identify this server.

The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
key_pass
optional

string,​null

Passphrase used to unlock the encrypted key file.

This has no effect unless `key_file` is set.
verify_certificate
optional

boolean,​null

Enables certificate verification.

If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.

Relevant for both incoming and outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the validity of certificates.
verify_hostname
optional

boolean,​null

Enables hostname verification.

If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.

Only relevant for outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the remote hostname.
{% /tab %}

{% tab title="datadoglogs-request-example" %}

```yaml
batch:
  max_bytes: null
  max_events: null
  timeout_secs: null
compression: ''
encoding: {}
region: ''
request:
  adaptive_concurrency:
    decrease_ratio: 0.9
    ewma_alpha: 0.4
    initial_concurrency: 1
    rtt_deviation_scale: 2.5
  headers: {}
  rate_limit_duration_secs: 1
  rate_limit_num: 9223372036854776000
  retry_attempts: 9223372036854776000
  retry_initial_backoff_secs: 1
  retry_max_duration_secs: 3600
  timeout_secs: 60
type: datadog_logs
```

{% /tab %}
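The batching, compression, and encoding options above can be combined as in the following sketch; the component names and the specific byte and field values are illustrative only.

```yaml
sinks:
  my_logs_sink:              # placeholder sink name
    type: datadog_logs
    inputs:
      - my_source            # placeholder upstream component
    default_api_key: ${DATADOG_API_KEY}
    batch:
      max_bytes: 4000000     # flush once the uncompressed batch reaches ~4 MB
      timeout_secs: 5        # or after 5 seconds, whichever comes first
    compression: gzip        # shorthand form; use algorithm/level to pick a level
    encoding:
      except_fields:
        - debug_metadata     # hypothetical field dropped before serialization
```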

### Datadog Metrics{% #datadogmetrics %}

Configuration for the `datadog_metrics` sink.
OptionsSchema
{% tab title="datadogmetrics-request-model" %}
FieldrequiredTypeDescription batch
optional

object

Event batching behavior.
max_bytes
optional

integer,​null

The maximum size of a batch that is processed by a sink.

This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
max_events
optional

integer,​null

The maximum size of a batch before it is flushed.
default: null
timeout_secs
optional

number,​null

The maximum age of a batch before it is flushed.
default: null
default_namespace
optional

string,​null

Sets the default namespace for any metrics sent.

This namespace is only used if a metric has no existing namespace. When a namespace is present, it is used as a prefix to the metric name, and separated with a period (`.`).
 region
optional

 <oneOf>

**DEPRECATED**: The Datadog region to send metrics to.
 Option 1
optional

 <oneOf>

A Datadog region.
us
optional

us

US region.
eu
optional

eu

EU region.
 request
optional

object

Middleware settings for outbound requests.

Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
 adaptive_concurrency
optional

object

Configuration of adaptive concurrency parameters.

These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}
decrease_ratio
optional

number

The fraction of the current value to set the new concurrency limit when decreasing the limit.

Valid values are greater than `0` and less than `1`. Smaller values cause the algorithm to scale back rapidly when latency increases.

Note that the new limit is rounded down after applying this ratio.
default: 0.9
ewma_alpha
optional

number

The weighting of new measurements compared to older measurements.

Valid values are greater than `0` and less than `1`.

ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4
initial_concurrency
optional

integer

The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).

It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the `adaptive_concurrency_limit` metric.
default: 1
rtt_deviation_scale
optional

number

Scale of RTT deviations which are not considered anomalous.

Valid values are greater than or equal to `0`, and we expect reasonable values to range from `1.0` to `3.0`.

When calculating the past RTT average, we also compute a secondary "deviation" value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
 concurrency
optional

 <oneOf>

Configuration for outbound request concurrency.
none
optional

none

A fixed concurrency of 1.

Only one request can be outstanding at any given time.
adaptive
optional

adaptive

Concurrency will be managed by Vector's [Adaptive Request Concurrency](https://vector.dev/docs/about/under-the-hood/networking/arc/) feature.
Fixed
optional

integer

A fixed amount of concurrency will be allowed.
rate_limit_duration_secs
optional

integer,​null

The time window used for the `rate_limit_num` option.
default: 1
rate_limit_num
optional

integer,​null

The maximum number of requests allowed within the `rate_limit_duration_secs` time window.
default: 9223372036854776000
retry_attempts
optional

integer,​null

The maximum number of retries to make for failed requests.

The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000
retry_initial_backoff_secs
optional

integer,​null

The amount of time to wait before attempting the first retry for a failed request.

After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1
retry_max_duration_secs
optional

integer,​null

The maximum amount of time to wait between retries.
default: 3600
timeout_secs
optional

integer,​null

The time a request can take before being aborted.

Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60
 acknowledgements
optional

object

Controls how acknowledgements are handled for this sink.

See [End-to-end Acknowledgements](https://vector.dev/docs/about/under-the-hood/architecture/end-to-end-acknowledgements/) for more information on how event acknowledgement is handled.
enabled
optional

boolean,​null

Whether or not end-to-end acknowledgements are enabled.

When enabled for a sink, any connected source that also supports end-to-end acknowledgements waits for the sink to acknowledge events before acknowledging them at the source.

Enabling or disabling acknowledgements at the sink level takes precedence over any global [`acknowledgements`](https://vector.dev/docs/reference/configuration/global-options/#acknowledgements) configuration.
default: null
default_api_key
required

string

The default Datadog [API key](https://docs.datadoghq.com/api/?lang=bash#authentication) to use in authentication of HTTP requests.

If an event has a Datadog [API key](https://docs.datadoghq.com/api/?lang=bash#authentication) set explicitly in its metadata, it takes precedence over this setting.
endpoint
optional

string,​null

The endpoint to send observability data to.

The endpoint must contain an HTTP scheme, and may specify a hostname or IP address and port.

If set, overrides the `site` option.
site
optional

string

The Datadog [site](https://docs.datadoghq.com/getting_started/site) to send observability data to.
 tls
optional

 <oneOf>

Configures the TLS options for incoming/outgoing connections.
 Option 1
optional



Configures the TLS options for incoming/outgoing connections.
enabled
optional

boolean,​null

Whether or not to require TLS for incoming or outgoing connections.

When enabled and used for incoming connections, an identity certificate is also required. See `tls.crt_file` for more information.
alpn_protocols
optional

array,​null

Sets the list of supported ALPN protocols.

Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order in which they are defined.
 ca_file
optional

 <oneOf>

Absolute path to an additional CA certificate file.

The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
 crt_file
optional

 <oneOf>

Absolute path to a certificate file used to identify this server.

The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format.

If this is set, and is not a PKCS#12 archive, `key_file` must also be set.
Option 1
optional

string

A file path.
 key_file
optional

 <oneOf>

Absolute path to a private key file used to identify this server.

The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
key_pass
optional

string,​null

Passphrase used to unlock the encrypted key file.

This has no effect unless `key_file` is set.
verify_certificate
optional

boolean,​null

Enables certificate verification.

If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.

Relevant for both incoming and outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the validity of certificates.
verify_hostname
optional

boolean,​null

Enables hostname verification.

If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.

Only relevant for outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the remote hostname.
{% /tab %}

{% tab title="datadogmetrics-request-example" %}

```yaml
batch:
  max_bytes: null
  max_events: null
  timeout_secs: null
default_namespace: string
region: ''
request:
  adaptive_concurrency:
    decrease_ratio: 0.9
    ewma_alpha: 0.4
    initial_concurrency: 1
    rtt_deviation_scale: 2.5
  rate_limit_duration_secs: 1
  rate_limit_num: 9223372036854776000
  retry_attempts: 9223372036854776000
  retry_initial_backoff_secs: 1
  retry_max_duration_secs: 3600
  timeout_secs: 60
type: datadog_metrics
```

{% /tab %}
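As a sketch of how the `endpoint` and `tls` options interact, the following routes metrics through a private proxy signed by a custom CA; the endpoint, component names, and file paths are hypothetical.

```yaml
sinks:
  my_metrics_sink:           # placeholder sink name
    type: datadog_metrics
    inputs:
      - my_source            # placeholder upstream component
    default_api_key: ${DATADOG_API_KEY}
    endpoint: https://metrics-proxy.example.com:8443  # overrides `site`
    tls:
      ca_file: /etc/vector/certs/proxy-ca.pem  # additional CA to trust
      verify_certificate: true                 # keep enabled unless you accept the risk
      verify_hostname: true
```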

### Datadog Traces{% #datadogtraces %}

Configuration for the `datadog_traces` sink.
OptionsSchema
{% tab title="datadogtraces-request-model" %}
FieldrequiredTypeDescription batch
optional

object

Event batching behavior.
max_bytes
optional

integer,​null

The maximum size of a batch that is processed by a sink.

This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
max_events
optional

integer,​null

The maximum size of a batch before it is flushed.
default: null
timeout_secs
optional

number,​null

The maximum age of a batch before it is flushed.
default: null
 compression
optional

 <oneOf>

Compression configuration.

All compression algorithms use the default compression level unless otherwise specified.
 Option 1
optional

 <oneOf>

Compression configuration.

All compression algorithms use the default compression level unless otherwise specified.
 Option 1
optional

 <oneOf>

Compression algorithm.
none
optional

none

No compression.
gzip
optional

gzip

[Gzip](https://www.gzip.org/) compression.
zlib
optional

zlib

[Zlib](https://zlib.net/) compression.
zstd
optional

zstd

[Zstandard](https://facebook.github.io/zstd/) compression.
 Option 2
optional

object

Compression algorithm and compression level.
 algorithm
required

 <oneOf>

Compression algorithm.
none
optional

none

No compression.
gzip
optional

gzip

[Gzip](https://www.gzip.org/) compression.
zlib
optional

zlib

[Zlib](https://zlib.net/) compression.
zstd
optional

zstd

[Zstandard](https://facebook.github.io/zstd/) compression.
level
optional

enum

Compression level. Allowed enum values: `none,fast,best,default,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21`
 request
optional

object

Middleware settings for outbound requests.

Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
 adaptive_concurrency
optional

object

Configuration of adaptive concurrency parameters.

These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}
decrease_ratio
optional

number

The fraction of the current value to set the new concurrency limit when decreasing the limit.

Valid values are greater than `0` and less than `1`. Smaller values cause the algorithm to scale back rapidly when latency increases.

Note that the new limit is rounded down after applying this ratio.
default: 0.9
ewma_alpha
optional

number

The weighting of new measurements compared to older measurements.

Valid values are greater than `0` and less than `1`.

ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4
initial_concurrency
optional

integer

The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).

It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the `adaptive_concurrency_limit` metric.
default: 1
rtt_deviation_scale
optional

number

Scale of RTT deviations which are not considered anomalous.

Valid values are greater than or equal to `0`, and we expect reasonable values to range from `1.0` to `3.0`.

When calculating the past RTT average, we also compute a secondary "deviation" value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
 concurrency
optional

 <oneOf>

Configuration for outbound request concurrency.
none
optional

none

A fixed concurrency of 1.

Only one request can be outstanding at any given time.
adaptive
optional

adaptive

Concurrency will be managed by Vector's [Adaptive Request Concurrency](https://vector.dev/docs/about/under-the-hood/networking/arc/) feature.
Fixed
optional

integer

A fixed amount of concurrency will be allowed.
rate_limit_duration_secs
optional

integer,​null

The time window used for the `rate_limit_num` option.
default: 1
rate_limit_num
optional

integer,​null

The maximum number of requests allowed within the `rate_limit_duration_secs` time window.
default: 9223372036854776000
retry_attempts
optional

integer,​null

The maximum number of retries to make for failed requests.

The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000
retry_initial_backoff_secs
optional

integer,​null

The amount of time to wait before attempting the first retry for a failed request.

After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1
retry_max_duration_secs
optional

integer,​null

The maximum amount of time to wait between retries.
default: 3600
timeout_secs
optional

integer,​null

The time a request can take before being aborted.

Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60
 acknowledgements
optional

object

Controls how acknowledgements are handled for this sink.

See [End-to-end Acknowledgements](https://vector.dev/docs/about/under-the-hood/architecture/end-to-end-acknowledgements/) for more information on how event acknowledgement is handled.
enabled
optional

boolean,​null

Whether or not end-to-end acknowledgements are enabled.

When enabled for a sink, any connected source that also supports end-to-end acknowledgements waits for the sink to acknowledge events before acknowledging them at the source.

Enabling or disabling acknowledgements at the sink level takes precedence over any global [`acknowledgements`](https://vector.dev/docs/reference/configuration/global-options/#acknowledgements) configuration.
default: null
default_api_key
required

string

The default Datadog [API key](https://docs.datadoghq.com/api/?lang=bash#authentication) to use in authentication of HTTP requests.

If an event has a Datadog [API key](https://docs.datadoghq.com/api/?lang=bash#authentication) set explicitly in its metadata, it takes precedence over this setting.
endpoint
optional

string,​null

The endpoint to send observability data to.

The endpoint must contain an HTTP scheme, and may specify a hostname or IP address and port.

If set, overrides the `site` option.
site
optional

string

The Datadog [site](https://docs.datadoghq.com/getting_started/site) to send observability data to.
 tls
optional

 <oneOf>

Configures the TLS options for incoming/outgoing connections.
 Option 1
optional



Configures the TLS options for incoming/outgoing connections.
enabled
optional

boolean,​null

Whether or not to require TLS for incoming or outgoing connections.

When enabled and used for incoming connections, an identity certificate is also required. See `tls.crt_file` for more information.
alpn_protocols
optional

array,​null

Sets the list of supported ALPN protocols.

Declare the supported ALPN protocols, which are used during negotiation with peer. They are prioritized in the order that they are defined.
 ca_file
optional

 <oneOf>

Absolute path to an additional CA certificate file.

The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
 crt_file
optional

 <oneOf>

Absolute path to a certificate file used to identify this server.

The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format.

If this is set, and is not a PKCS#12 archive, `key_file` must also be set.
Option 1
optional

string

A file path.
 key_file
optional

 <oneOf>

Absolute path to a private key file used to identify this server.

The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
key_pass
optional

string,​null

Passphrase used to unlock the encrypted key file.

This has no effect unless `key_file` is set.
verify_certificate
optional

boolean,​null

Enables certificate verification.

If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.

Relevant for both incoming and outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the validity of certificates.
verify_hostname
optional

boolean,​null

Enables hostname verification.

If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.

Only relevant for outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the remote hostname.
{% /tab %}

{% tab title="datadogtraces-request-example" %}

```yaml
batch:
  max_bytes: null
  max_events: null
  timeout_secs: null
compression: ''
request:
  adaptive_concurrency:
    decrease_ratio: 0.9
    ewma_alpha: 0.4
    initial_concurrency: 1
    rtt_deviation_scale: 2.5
  rate_limit_duration_secs: 1
  rate_limit_num: 9223372036854776000
  retry_attempts: 9223372036854776000
  retry_initial_backoff_secs: 1
  retry_max_duration_secs: 3600
  timeout_secs: 60
type: datadog_traces
```

{% /tab %}
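The generated example above omits the required `default_api_key` field. A minimal working sketch, assuming a hypothetical source component ID (`my_trace_source`) and an API key supplied through an environment variable:

```yaml
sinks:
  apm_traces:
    type: datadog_traces
    inputs:
      - my_trace_source          # hypothetical upstream component ID
    # Required; here read from the environment rather than hardcoded.
    default_api_key: "${DATADOG_API_KEY}"
    # Optional; defaults to the main Datadog site.
    site: datadoghq.com
```

If `endpoint` is set, it overrides the `site` option, as noted in the field descriptions above.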

### Elasticsearch{% #elasticsearch %}

Configuration for the `elasticsearch` sink.
OptionsSchema
{% tab title="elasticsearch-request-model" %}
FieldrequiredTypeDescription acknowledgements
optional

object

Controls how acknowledgements are handled for this sink.

See [End-to-end Acknowledgements](https://vector.dev/docs/about/under-the-hood/architecture/end-to-end-acknowledgements/) for more information on how event acknowledgement is handled.
enabled
optional

boolean,​null

Whether or not end-to-end acknowledgements are enabled.

When enabled for a sink, any source connected to that sink, where the source supports end-to-end acknowledgements as well, waits for events to be acknowledged by the sink before acknowledging them at the source.

Enabling or disabling acknowledgements at the sink level takes precedence over any global [`acknowledgements`](https://vector.dev/docs/reference/configuration/global-options/#acknowledgements) configuration.
default: null
 api_version
optional

 <oneOf>

The API version of Elasticsearch.
auto
optional

auto

Auto-detect the API version.

If the [cluster state version endpoint](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-state.html#cluster-state-api-path-params) isn't reachable, a warning is logged to stdout, and the version is assumed to be V6 if the `suppress_type_name` option is set to `true`. Otherwise, the version is assumed to be V8. In the future, the sink instead returns an error during configuration parsing, since a wrongly assumed version could lead to incorrect API calls.
v6
optional

v6

Use the Elasticsearch 6.x API.
v7
optional

v7

Use the Elasticsearch 7.x API.
v8
optional

v8

Use the Elasticsearch 8.x API.
 auth
optional

 <oneOf>

Elasticsearch Authentication strategies.
 Option 1
optional

 <oneOf>

Elasticsearch Authentication strategies.
 Basic
optional

object

HTTP Basic Authentication.
password
required

string

Basic authentication password.
strategy
required

basic

HTTP Basic Authentication.
user
required

string

Basic authentication username.
 Aws
optional



Amazon OpenSearch Service-specific authentication.
strategy
required

aws

Amazon OpenSearch Service-specific authentication.
 aws
optional

 <oneOf>

Configuration of the region/endpoint to use when interacting with an AWS service.
 Option 1
optional

object

Configuration of the region/endpoint to use when interacting with an AWS service.
endpoint
optional

string,​null

Custom endpoint for use with AWS-compatible services.
default: null
region
optional

string,​null

The [AWS region](https://docs.aws.amazon.com/general/latest/gr/rande.html#regional-endpoints) of the target service.
default: null
 batch
optional

object

Event batching behavior.
max_bytes
optional

integer,​null

The maximum size of a batch that is processed by a sink.

This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
max_events
optional

integer,​null

The maximum size of a batch before it is flushed.
default: null
timeout_secs
optional

number,​null

The maximum age of a batch before it is flushed.
default: null
 bulk
optional

object

Elasticsearch bulk mode configuration.
action
optional

string

Action to use when making requests to the [Elasticsearch Bulk API](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html).

Only `index` and `create` actions are supported.
default: index
index
optional

string

A templated field.

The name of the index to write events to.
default: vector-%Y.%m.%d
 compression
optional

 <oneOf>

Compression configuration.

All compression algorithms use the default compression level unless otherwise specified.
 Option 1
optional

 <oneOf>

Compression algorithm.
none
optional

none

No compression.
gzip
optional

gzip

[Gzip](https://www.gzip.org/) compression.
zlib
optional

zlib

[Zlib](https://zlib.net/) compression.
zstd
optional

zstd

[Zstandard](https://facebook.github.io/zstd/) compression.
 Option 2
optional

object

Compression algorithm and compression level.
 algorithm
required

 <oneOf>

Compression algorithm.
none
optional

none

No compression.
gzip
optional

gzip

[Gzip](https://www.gzip.org/) compression.
zlib
optional

zlib

[Zlib](https://zlib.net/) compression.
zstd
optional

zstd

[Zstandard](https://facebook.github.io/zstd/) compression.
level
optional

enum

Compression level. Allowed enum values: `none,fast,best,default,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21`
 data_stream
optional

 <oneOf>

Elasticsearch data stream mode configuration.
 Option 1
optional

object

Elasticsearch data stream mode configuration.
auto_routing
optional

boolean

Automatically routes events by deriving the data stream name using specific event fields.

The format of the data stream name is `<type>-<dataset>-<namespace>`, where each value comes from the `data_stream` configuration field of the same name.

If enabled, the value of the `data_stream.type`, `data_stream.dataset`, and `data_stream.namespace` event fields are used if they are present. Otherwise, the values set in this configuration are used.
dataset
optional

string

A templated field.

The data stream dataset used to construct the data stream at index time.
namespace
optional

string

A templated field.

The data stream namespace used to construct the data stream at index time.
sync_fields
optional

boolean

Automatically adds and syncs the `data_stream.*` event fields if they are missing from the event.

This ensures that fields match the name of the data stream that is receiving events.
type
optional

string

A templated field.

The data stream type used to construct the data stream at index time.
 distribution
optional

 <oneOf>

Options for determining the health of an endpoint.
 Option 1
optional

object

Options for determining the health of an endpoint.
retry_initial_backoff_secs
optional

integer

Initial delay between attempts to reactivate endpoints once they become unhealthy.
retry_max_duration_secs
optional

integer

Maximum delay between attempts to reactivate endpoints once they become unhealthy.
doc_type
optional

string

The [`doc_type`](https://www.elastic.co/guide/en/elasticsearch/reference/6.8/actions-index.html) for your index data.

This is only relevant for Elasticsearch <= 6.x. If you are using >= 7.0, you do not need to set this option, since Elasticsearch has removed it.
 encoding
optional

object

Transformations to prepare an event for serialization.
except_fields
optional

array,​null

List of fields that are excluded from the encoded event.
only_fields
optional

array,​null

List of fields that are included in the encoded event.
 timestamp_format
optional

 <oneOf>

Format used for timestamp fields.
 Option 1
optional

 <oneOf>

The format in which a timestamp should be represented.
unix
optional

unix

Represent the timestamp as a Unix timestamp.
rfc3339
optional

rfc3339

Represent the timestamp as an RFC 3339 timestamp.
endpoint
optional

string,​null

The Elasticsearch endpoint to send logs to.

**DEPRECATED**: The endpoint must contain an HTTP scheme, and may specify a hostname or IP address and port.
endpoints
optional

[string]

A list of Elasticsearch endpoints to send logs to.

The endpoint must contain an HTTP scheme, and may specify a hostname or IP address and port.
 id_key
optional

 <oneOf>

The name of the event key that should map to Elasticsearch's [`_id` field](https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-id-field.html).

By default, the `_id` field is not set, which allows Elasticsearch to set this automatically. Setting your own Elasticsearch IDs can [hinder performance](https://www.elastic.co/guide/en/elasticsearch/reference/master/tune-for-indexing-speed.html#_use_auto_generated_ids).
Option 1
optional

string

A wrapper around `OwnedValuePath` that allows it to be used in Vector configuration. A valid path is required.
 metrics
optional

 <oneOf>

Configuration for the `metric_to_log` transform.
 Option 1
optional

object

Configuration for the `metric_to_log` transform.
host_tag
optional

string,​null

Name of the tag in the metric to use for the source host.

If present, the value of the tag is set on the generated log event in the `host` field, where the field key uses the [global `host_key` option](https://vector.dev/docs/reference/configuration//global-options#log_schema.host_key).
log_namespace
optional

boolean,​null

The namespace to use for logs. This overrides the global setting.
 metric_tag_values
optional

 <oneOf>

Controls how metric tag values are encoded.

When set to `single`, only the last non-bare value of each tag is displayed with the metric. When set to `full`, all metric tags are exposed as separate assignments as described by [the `native_json` codec](https://github.com/vectordotdev/vector/blob/master/lib/codecs/tests/data/native_encoding/schema.cue).
single
optional

single

Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
full
optional

full

All tags are exposed as arrays of either string or null values.
 timezone
optional

 <oneOf>

The name of the time zone to apply to timestamp conversions that do not contain an explicit time zone.

This overrides the [global `timezone`](https://vector.dev/docs/reference/configuration//global-options#timezone) option. The time zone name may be any name in the [TZ database](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) or `local` to indicate system local time.
 Option 1
optional

 <oneOf>

Timezone reference.

This can refer to any valid timezone as defined in the [TZ database](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones), or "local" which refers to the system local timezone.
Named
optional

string

A named timezone.

Must be a valid name in the [TZ database](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones).
 mode
optional

 <oneOf>

Elasticsearch Indexing mode.
bulk
optional

bulk

Ingests documents in bulk, using the bulk API `index` action.
data_stream
optional

data_stream

Ingests documents in bulk, using the bulk API `create` action.

Elasticsearch Data Streams only support the `create` action.
pipeline
optional

string,​null

The name of the pipeline to apply.
query
optional

object,​null

Custom parameters to add to the query string for each HTTP request sent to Elasticsearch.
 request
optional



Outbound HTTP request settings.
headers
optional

object

Additional HTTP headers to add to every HTTP request.
 adaptive_concurrency
optional

object

Configuration of adaptive concurrency parameters.

These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or unstable performance and sink behavior. Proceed with caution.
decrease_ratio
optional

number

The fraction of the current value to set the new concurrency limit when decreasing the limit.

Valid values are greater than `0` and less than `1`. Smaller values cause the algorithm to scale back rapidly when latency increases.

Note that the new limit is rounded down after applying this ratio.
default: 0.9
ewma_alpha
optional

number

The weighting of new measurements compared to older measurements.

Valid values are greater than `0` and less than `1`.

ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4
initial_concurrency
optional

integer

The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).

It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the `adaptive_concurrency_limit` metric.
default: 1
rtt_deviation_scale
optional

number

Scale of RTT deviations which are not considered anomalous.

Valid values are greater than or equal to `0`, and we expect reasonable values to range from `1.0` to `3.0`.

When calculating the past RTT average, we also compute a secondary "deviation" value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
 concurrency
optional

 <oneOf>

Configuration for outbound request concurrency.
none
optional

none

A fixed concurrency of 1.

Only one request can be outstanding at any given time.
adaptive
optional

adaptive

Concurrency will be managed by Vector's [Adaptive Request Concurrency](https://vector.dev/docs/about/under-the-hood/networking/arc/) feature.
Fixed
optional

integer

A fixed amount of concurrency will be allowed.
rate_limit_duration_secs
optional

integer,​null

The time window used for the `rate_limit_num` option.
rate_limit_num
optional

integer,​null

The maximum number of requests allowed within the `rate_limit_duration_secs` time window.
retry_attempts
optional

integer,​null

The maximum number of retries to make for failed requests.

The default, for all intents and purposes, represents an infinite number of retries.
retry_initial_backoff_secs
optional

integer,​null

The amount of time to wait before attempting the first retry for a failed request.

After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
retry_max_duration_secs
optional

integer,​null

The maximum amount of time to wait between retries.
timeout_secs
optional

integer,​null

The time a request can take before being aborted.

Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
request_retry_partial
optional

boolean

Whether or not to retry successful requests containing partial failures.

To avoid duplicates in Elasticsearch, use the `id_key` option.
suppress_type_name
optional

boolean

Whether or not to send the `type` field to Elasticsearch.

**DEPRECATED**: The `type` field was deprecated in Elasticsearch 7.x and removed in Elasticsearch 8.x.

If enabled, the `doc_type` option is ignored.
 tls
optional

 <oneOf>

TLS configuration.
 Option 1
optional

object

TLS configuration.
alpn_protocols
optional

array,​null

Sets the list of supported ALPN protocols.

Declare the supported ALPN protocols, which are used during negotiation with peer. They are prioritized in the order that they are defined.
 ca_file
optional

 <oneOf>

Absolute path to an additional CA certificate file.

The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
 crt_file
optional

 <oneOf>

Absolute path to a certificate file used to identify this server.

The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format.

If this is set, and is not a PKCS#12 archive, `key_file` must also be set.
Option 1
optional

string

A file path.
 key_file
optional

 <oneOf>

Absolute path to a private key file used to identify this server.

The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
key_pass
optional

string,​null

Passphrase used to unlock the encrypted key file.

This has no effect unless `key_file` is set.
verify_certificate
optional

boolean,​null

Enables certificate verification.

If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.

Relevant for both incoming and outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the validity of certificates.
verify_hostname
optional

boolean,​null

Enables hostname verification.

If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.

Only relevant for outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the remote hostname.
{% /tab %}

{% tab title="elasticsearch-request-example" %}

```yaml
acknowledgements:
  enabled: null
api_version: auto
auth: ''
aws: ''
batch:
  max_bytes: null
  max_events: null
  timeout_secs: null
bulk:
  action: index
  index: vector-%Y.%m.%d
compression: none
data_stream: ''
distribution: ''
doc_type: _doc
encoding: {}
endpoint: string
endpoints: []
id_key: ''
metrics: ''
mode: bulk
pipeline: string
query: object
request:
  adaptive_concurrency:
    decrease_ratio: 0.9
    ewma_alpha: 0.4
    initial_concurrency: 1
    rtt_deviation_scale: 2.5
  headers: {}
  rate_limit_duration_secs: 1
  rate_limit_num: 9223372036854776000
  retry_attempts: 9223372036854776000
  retry_initial_backoff_secs: 1
  retry_max_duration_secs: 3600
  timeout_secs: 60
request_retry_partial: boolean
suppress_type_name: boolean
tls: ''
type: elasticsearch
```

{% /tab %}
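The generated example above uses placeholder values and `mode: bulk`. For data stream mode, a hedged sketch (the endpoint, component IDs, and credentials are placeholders; the `data_stream` values shown mirror the conventional `logs-generic-default` naming):

```yaml
sinks:
  es_logs:
    type: elasticsearch
    inputs:
      - app_logs                      # hypothetical upstream component ID
    endpoints:
      - "https://es.example.com:9200"
    mode: data_stream                 # uses the bulk API `create` action
    data_stream:
      type: logs
      dataset: generic
      namespace: default
    auth:
      strategy: basic
      user: "${ES_USER}"
      password: "${ES_PASSWORD}"
```

With `auto_routing` enabled under `data_stream`, the `data_stream.type`, `data_stream.dataset`, and `data_stream.namespace` event fields, when present, override the values configured here.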

### GCP Chronicle Unstructured{% #chronicleunstructured %}

Configuration for the `gcp_chronicle_unstructured` sink.
OptionsSchema
{% tab title="chronicleunstructured-request-model" %}
FieldrequiredTypeDescription acknowledgements
optional

object

Controls how acknowledgements are handled for this sink.

See [End-to-end Acknowledgements](https://vector.dev/docs/about/under-the-hood/architecture/end-to-end-acknowledgements/) for more information on how event acknowledgement is handled.
enabled
optional

boolean,​null

Whether or not end-to-end acknowledgements are enabled.

When enabled for a sink, any source connected to that sink, where the source supports end-to-end acknowledgements as well, waits for events to be acknowledged by the sink before acknowledging them at the source.

Enabling or disabling acknowledgements at the sink level takes precedence over any global [`acknowledgements`](https://vector.dev/docs/reference/configuration/global-options/#acknowledgements) configuration.
default: null
 batch
optional

object

Event batching behavior.
max_bytes
optional

integer,​null

The maximum size of a batch that is processed by a sink.

This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
max_events
optional

integer,​null

The maximum size of a batch before it is flushed.
default: null
timeout_secs
optional

number,​null

The maximum age of a batch before it is flushed.
default: null
customer_id
required

uuid

The unique identifier (UUID) corresponding to the Chronicle instance.
 encoding
required



Configures how events are encoded into raw bytes.
 Avro
optional

object

Encodes an event as an [Apache Avro](https://avro.apache.org/) message.
 avro
required

object

Apache Avro-specific encoder options.
schema
required

string

The Avro schema.
codec
required

avro

Encodes an event as an [Apache Avro](https://avro.apache.org/) message.
 Csv
optional



Encodes an event as a CSV message.

This codec must be configured with fields to encode.
 csv
required

object

The CSV Serializer Options.
capacity
optional

integer

Set the capacity (in bytes) of the internal buffer used in the CSV writer. This defaults to a reasonable setting.
delimiter
optional

integer

The field delimiter to use when writing CSV.
double_quote
optional

boolean

Enable double quote escapes.

This is enabled by default, but it may be disabled. When disabled, quotes in field data are escaped instead of doubled.
escape
optional

integer

The escape character to use when writing CSV.

In some variants of CSV, quotes are escaped using a special escape character like \ (instead of escaping quotes by doubling them).

To use this, `double_quote` needs to be disabled as well, otherwise it is ignored.
fields
required

[string]

Configures the fields that will be encoded, as well as the order in which they appear in the output.

If a field is not present in the event, the output will be an empty string.

Values of type `Array`, `Object`, and `Regex` are not supported and the output will be an empty string.
quote
optional

integer

The quote character to use when writing CSV.
 quote_style
optional

 <oneOf>

The quoting style to use when writing CSV data.
always
optional

always

This puts quotes around every field. Always.
necessary
optional

necessary

This puts quotes around fields only when necessary. They are necessary when fields contain a quote, delimiter or record terminator. Quotes are also necessary when writing an empty record (which is indistinguishable from a record with one empty field).
non_numeric
optional

non_numeric

This puts quotes around all fields that are non-numeric. Namely, when writing a field that does not parse as a valid float or integer, then quotes will be used even if they aren't strictly necessary.
never
optional

never

This never writes quotes, even if it would produce invalid CSV data.
codec
required

csv

Encodes an event as a CSV message.

This codec must be configured with fields to encode.
 Gelf
optional

object

Encodes an event as a [GELF](https://docs.graylog.org/docs/gelf) message.
codec
required

gelf

Encodes an event as a [GELF](https://docs.graylog.org/docs/gelf) message.
 Json
optional



Encodes an event as [JSON](https://www.json.org/).
 metric_tag_values
optional

 <oneOf>

Controls how metric tag values are encoded.

When set to `single`, only the last non-bare value of each tag is displayed with the metric. When set to `full`, all metric tags are exposed as separate assignments.
single
optional

single

Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
full
optional

full

All tags are exposed as arrays of either string or null values.
codec
required

json

Encodes an event as [JSON](https://www.json.org/).
 Logfmt
optional

object

Encodes an event as a [logfmt](https://brandur.org/logfmt) message.
codec
required

logfmt

Encodes an event as a [logfmt](https://brandur.org/logfmt) message.
 Native
optional

object

Encodes an event in the [native Protocol Buffers format](https://github.com/vectordotdev/vector/blob/master/lib/vector-core/proto/event.proto).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
codec
required

native

Encodes an event in the [native Protocol Buffers format](https://github.com/vectordotdev/vector/blob/master/lib/vector-core/proto/event.proto).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
 NativeJson
optional

object

Encodes an event in the [native JSON format](https://github.com/vectordotdev/vector/blob/master/lib/codecs/tests/data/native_encoding/schema.cue).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
codec
required

native_json

Encodes an event in the [native JSON format](https://github.com/vectordotdev/vector/blob/master/lib/codecs/tests/data/native_encoding/schema.cue).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
 RawMessage
optional

object

No encoding.

This encoding uses the `message` field of a log event.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
codec
required

raw_message

No encoding.

This encoding uses the `message` field of a log event.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
 Text
optional



Plain text encoding.

This encoding uses the `message` field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
 metric_tag_values
optional

 <oneOf>

Controls how metric tag values are encoded.

When set to `single`, only the last non-bare value of each tag is displayed with the metric. When set to `full`, all metric tags are exposed as separate assignments.
single
optional

single

Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
full
optional

full

All tags are exposed as arrays of either string or null values.
codec
required

text

Plain text encoding.

This encoding uses the `message` field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
except_fields
optional

array,​null

List of fields that are excluded from the encoded event.
only_fields
optional

array,​null

List of fields that are included in the encoded event.
 timestamp_format
optional

 <oneOf>

Format used for timestamp fields.
 Option 1
optional

 <oneOf>

The format in which a timestamp should be represented.
unix
optional

unix

Represent the timestamp as a Unix timestamp.
rfc3339
optional

rfc3339

Represent the timestamp as an RFC 3339 timestamp.
endpoint
optional

string,​null

The endpoint to send data to.
log_type
required

string

The type of log entries in a request.

This must be one of the [supported log types](https://cloud.google.com/chronicle/docs/ingestion/parser-list/supported-default-parsers), otherwise Chronicle rejects the entry with an error.
 region
optional

 <oneOf>

The GCP region to use.
 Option 1
optional

 <oneOf>

Google Chronicle regions.
eu
optional

eu

EU region.
us
optional

us

US region.
asia
optional

asia

APAC region.
 request
optional

object

Middleware settings for outbound requests.

Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
 adaptive_concurrency
optional

object

Configuration of adaptive concurrency parameters.

These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}
decrease_ratio
optional

number

The fraction of the current value to set the new concurrency limit when decreasing the limit.

Valid values are greater than `0` and less than `1`. Smaller values cause the algorithm to scale back rapidly when latency increases.

Note that the new limit is rounded down after applying this ratio.
default: 0.9
ewma_alpha
optional

number

The weighting of new measurements compared to older measurements.

Valid values are greater than `0` and less than `1`.

ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4
initial_concurrency
optional

integer

The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).

It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the `adaptive_concurrency_limit` metric.
default: 1
rtt_deviation_scale
optional

number

Scale of RTT deviations which are not considered anomalous.

Valid values are greater than or equal to `0`, and we expect reasonable values to range from `1.0` to `3.0`.

When calculating the past RTT average, we also compute a secondary "deviation" value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
 concurrency
optional

 <oneOf>

Configuration for outbound request concurrency.
none
optional

none

A fixed concurrency of 1.

Only one request can be outstanding at any given time.
adaptive
optional

adaptive

Concurrency will be managed by Vector's [Adaptive Request Concurrency](https://vector.dev/docs/about/under-the-hood/networking/arc/) feature.
Fixed
optional

integer

A fixed amount of concurrency will be allowed.
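The three `concurrency` variants above can be sketched in YAML. This is an illustrative fragment, not a recommendation; the commented alternatives show the other two forms:

```yaml
request:
  concurrency: adaptive   # managed by Vector's Adaptive Request Concurrency
  # concurrency: none     # a fixed concurrency of 1 (one request in flight)
  # concurrency: 10       # a fixed limit of 10 concurrent requests
```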
rate_limit_duration_secs
optional

integer,​null

The time window used for the `rate_limit_num` option.
default: 1
rate_limit_num
optional

integer,​null

The maximum number of requests allowed within the `rate_limit_duration_secs` time window.
default: 9223372036854776000
retry_attempts
optional

integer,​null

The maximum number of retries to make for failed requests.

The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000
retry_initial_backoff_secs
optional

integer,​null

The amount of time to wait before attempting the first retry for a failed request.

After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1
retry_max_duration_secs
optional

integer,​null

The maximum amount of time to wait between retries.
default: 3600
timeout_secs
optional

integer,​null

The time a request can take before being aborted.

Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60
 tls
optional

 <oneOf>

TLS configuration.
 Option 1
optional

object

TLS configuration.
alpn_protocols
optional

array,​null

Sets the list of supported ALPN protocols.

Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order that they are defined.
 ca_file
optional

 <oneOf>

Absolute path to an additional CA certificate file.

The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
 crt_file
optional

 <oneOf>

Absolute path to a certificate file used to identify this server.

The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format.

If this is set, and is not a PKCS#12 archive, `key_file` must also be set.
Option 1
optional

string

A file path.
 key_file
optional

 <oneOf>

Absolute path to a private key file used to identify this server.

The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
key_pass
optional

string,​null

Passphrase used to unlock the encrypted key file.

This has no effect unless `key_file` is set.
verify_certificate
optional

boolean,​null

Enables certificate verification.

If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.

Relevant for both incoming and outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the validity of certificates.
verify_hostname
optional

boolean,​null

Enables hostname verification.

If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.

Only relevant for outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the remote hostname.
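Taken together, the TLS options above might look like the following sketch. The file paths are hypothetical placeholders:

```yaml
tls:
  ca_file: /etc/ssl/certs/internal-ca.pem   # hypothetical CA bundle path
  crt_file: /etc/vector/client.pem          # hypothetical client certificate
  key_file: /etc/vector/client.key          # hypothetical private key
  key_pass: "${TLS_KEY_PASS}"               # only used if the key file is encrypted
  verify_certificate: true
  verify_hostname: true
```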
 api_key
optional

 <oneOf>

An [API key](https://cloud.google.com/docs/authentication/api-keys).

Either an API key or a path to a service account credentials JSON file can be specified.

If both are unset, the `GOOGLE_APPLICATION_CREDENTIALS` environment variable is checked for a filename. If no filename is named, an attempt is made to fetch an instance service account for the compute instance the program is running on. If this is not on a GCE instance, then you must define it with an API key or service account credentials JSON file.
Option 1
optional

string

Wrapper for sensitive strings containing credentials.
credentials_path
optional

string,​null

Path to a [service account](https://cloud.google.com/docs/authentication/production#manually) credentials JSON file.

Either an API key or a path to a service account credentials JSON file can be specified.

If both are unset, the `GOOGLE_APPLICATION_CREDENTIALS` environment variable is checked for a filename. If no filename is named, an attempt is made to fetch an instance service account for the compute instance the program is running on. If this is not on a GCE instance, then you must define it with an API key or service account credentials JSON file.
skip_authentication
optional

boolean

Skip all authentication handling. For use with integration tests only.
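The credential-resolution order described above can be sketched as follows; exactly one of the explicit options should be set, and the values are illustrative:

```yaml
api_key: "${GCP_API_KEY}"                    # option A: an API key
# credentials_path: /etc/vector/gcp-sa.json  # option B: service account JSON file (hypothetical path)
# Option C: leave both unset and rely on GOOGLE_APPLICATION_CREDENTIALS,
# or on the instance service account when running on a GCE instance.
```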
{% /tab %}

{% tab title="chronicleunstructured-request-example" %}

```yaml
acknowledgements:
  enabled: null
batch:
  max_bytes: null
  max_events: null
  timeout_secs: null
customer_id: string
encoding: ''
endpoint: string
log_type: string
region: ''
request:
  adaptive_concurrency:
    decrease_ratio: 0.9
    ewma_alpha: 0.4
    initial_concurrency: 1
    rtt_deviation_scale: 2.5
  rate_limit_duration_secs: 1
  rate_limit_num: 9223372036854776000
  retry_attempts: 9223372036854776000
  retry_initial_backoff_secs: 1
  retry_max_duration_secs: 3600
  timeout_secs: 60
tls: ''
type: gcp_chronicle_unstructured
```

{% /tab %}

### GCP Cloud Storage{% #gcssink %}

Configuration for the `gcp_cloud_storage` sink.
OptionsSchema
{% tab title="gcssink-request-model" %}
FieldrequiredTypeDescription acknowledgements
optional

object

Controls how acknowledgements are handled for this sink.

See [End-to-end Acknowledgements](https://vector.dev/docs/about/under-the-hood/architecture/end-to-end-acknowledgements/) for more information on how event acknowledgement is handled.
enabled
optional

boolean,​null

Whether or not end-to-end acknowledgements are enabled.

When enabled for a sink, any source connected to that sink, where the source supports end-to-end acknowledgements as well, waits for events to be acknowledged by the sink before acknowledging them at the source.

Enabling or disabling acknowledgements at the sink level takes precedence over any global [`acknowledgements`](https://vector.dev/docs/reference/configuration/global-options/#acknowledgements) configuration.
default: null
 acl
optional

 <oneOf>

The Predefined ACL to apply to created objects.

For more information, see [Predefined ACLs](https://cloud.google.com/storage/docs/access-control/lists#predefined-acl).
 Option 1
optional

 <oneOf>

GCS Predefined ACLs.

For more information, see [Predefined ACLs](https://cloud.google.com/storage/docs/access-control/lists#predefined-acl).
authenticated-read
optional

authenticated-read

Bucket/object can be read by authenticated users.

The bucket/object owner is granted the `OWNER` permission, and any authenticated Google account holder is granted the `READER` permission.
bucket-owner-full-control
optional

bucket-owner-full-control

Object is semi-private.

Both the object owner and bucket owner are granted the `OWNER` permission.

Only relevant when specified for an object: this predefined ACL is otherwise ignored when specified for a bucket.
bucket-owner-read
optional

bucket-owner-read

Object is private, except to the bucket owner.

The object owner is granted the `OWNER` permission, and the bucket owner is granted the `READER` permission.

Only relevant when specified for an object: this predefined ACL is otherwise ignored when specified for a bucket.
private
optional

private

Bucket/object are private.

The bucket/object owner is granted the `OWNER` permission, and no one else has access.
project-private
optional

project-private

Bucket/object are private within the project.

Project owners and project editors are granted the `OWNER` permission, and anyone who is part of the project team is granted the `READER` permission.

This is the default.
public-read
optional

public-read

Bucket/object can be read publicly.

The bucket/object owner is granted the `OWNER` permission, and all other users, whether authenticated or anonymous, are granted the `READER` permission.
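A minimal sketch of applying one of the predefined ACLs above; the bucket name is a placeholder:

```yaml
sinks:
  gcs_out:
    type: gcp_cloud_storage
    bucket: my-logs-bucket    # hypothetical bucket name
    acl: bucket-owner-read    # any of the predefined ACLs listed above
```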
 batch
optional

object

Event batching behavior.
max_bytes
optional

integer,​null

The maximum size of a batch that is processed by a sink.

This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
max_events
optional

integer,​null

The maximum size of a batch before it is flushed.
default: null
timeout_secs
optional

number,​null

The maximum age of a batch before it is flushed.
default: null
bucket
required

string

The GCS bucket name.
 compression
optional

 <oneOf>

Compression configuration.

All compression algorithms use the default compression level unless otherwise specified.
 Option 1
optional

 <oneOf>

Compression algorithm.
none
optional

none

No compression.
gzip
optional

gzip

[Gzip](https://www.gzip.org/) compression.
zlib
optional

zlib

[Zlib](https://zlib.net/) compression.
zstd
optional

zstd

[Zstandard](https://facebook.github.io/zstd/) compression.
 Option 2
optional

object

Compression algorithm and compression level.
 algorithm
required

 <oneOf>

Compression algorithm.
none
optional

none

No compression.
gzip
optional

gzip

[Gzip](https://www.gzip.org/) compression.
zlib
optional

zlib

[Zlib](https://zlib.net/) compression.
zstd
optional

zstd

[Zstandard](https://facebook.github.io/zstd/) compression.
level
optional

enum

Compression level. Allowed enum values: `none,fast,best,default,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21`
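The two accepted shapes of `compression` can be sketched as follows; pick one or the other:

```yaml
# Option 1: an algorithm name, using the default compression level.
compression: gzip

# Option 2: an algorithm with an explicit level.
compression:
  algorithm: zstd
  level: best
```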
filename_append_uuid
optional

boolean

Whether or not to append a UUID v4 token to the end of the object key.

The UUID is appended to the timestamp portion of the object key, such that if the object key generated is `date=2022-07-18/1658176486`, setting this field to `true` results in an object key that looks like `date=2022-07-18/1658176486-30f6652c-71da-4f9f-800d-a1189c47c547`.

This ensures there are no name collisions, and can be useful in high-volume workloads where object keys must be unique.
filename_extension
optional

string,​null

The filename extension to use in the object key.

If not specified, the extension is determined by the compression scheme used.
filename_time_format
optional

string

The timestamp format for the time component of the object key.

By default, object keys are appended with a timestamp that reflects when the objects are sent to GCS, such that the resulting object key is functionally equivalent to joining the key prefix with the formatted timestamp, such as `date=2022-07-18/1658176486`.

This would represent a `key_prefix` set to `date=%F/` and the timestamp of Mon Jul 18 2022 20:34:44 GMT+0000, with the `filename_time_format` being set to `%s`, which renders timestamps in seconds since the Unix epoch.

Supports the common [`strftime`](https://docs.rs/chrono/latest/chrono/format/strftime/index.html#specifiers) specifiers found in most languages.

When set to an empty string, no timestamp is appended to the key prefix.
key_prefix
optional

string,​null

A prefix to apply to all object keys.

Prefixes are useful for partitioning objects, such as by creating an object key that stores objects under a particular directory. If using a prefix for this purpose, it must end in `/` in order to act as a directory path. A trailing `/` is **not** automatically added.
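The object-key settings above compose as in this illustrative sketch; the resulting key shape follows the `date=2022-07-18/1658176486` example in the text, with the extension appended:

```yaml
key_prefix: date=%F/          # must end in `/` to act as a directory path
filename_time_format: '%s'    # seconds since the Unix epoch
filename_append_uuid: true    # avoids key collisions in high-volume workloads
filename_extension: log.gz    # otherwise derived from the compression scheme
# Resulting key shape: date=2022-07-18/1658176486-<uuid>.log.gz
```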
metadata
optional

object,​null

The set of metadata `key:value` pairs for the created objects.

For more information, see the [custom metadata](https://cloud.google.com/storage/docs/metadata#custom-metadata) documentation.
 request
optional

object

Middleware settings for outbound requests.

Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
 adaptive_concurrency
optional

object

Configuration of adaptive concurrency parameters.

These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}decrease_ratio
optional

number

The fraction of the current value to set the new concurrency limit when decreasing the limit.

Valid values are greater than `0` and less than `1`. Smaller values cause the algorithm to scale back rapidly when latency increases.

Note that the new limit is rounded down after applying this ratio.
default: 0.9
ewma_alpha
optional

number

The weighting of new measurements compared to older measurements.

Valid values are greater than `0` and less than `1`.

ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4
initial_concurrency
optional

integer

The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).

It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the `adaptive_concurrency_limit` metric.
default: 1
rtt_deviation_scale
optional

number

Scale of RTT deviations which are not considered anomalous.

Valid values are greater than or equal to `0`, and we expect reasonable values to range from `1.0` to `3.0`.

When calculating the past RTT average, we also compute a secondary "deviation" value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
 concurrency
optional

 <oneOf>

Configuration for outbound request concurrency.
none
optional

none

A fixed concurrency of 1.

Only one request can be outstanding at any given time.
adaptive
optional

adaptive

Concurrency will be managed by Vector's [Adaptive Request Concurrency](https://vector.dev/docs/about/under-the-hood/networking/arc/) feature.
Fixed
optional

integer

A fixed amount of concurrency will be allowed.
rate_limit_duration_secs
optional

integer,​null

The time window used for the `rate_limit_num` option.
default: 1
rate_limit_num
optional

integer,​null

The maximum number of requests allowed within the `rate_limit_duration_secs` time window.
default: 9223372036854776000
retry_attempts
optional

integer,​null

The maximum number of retries to make for failed requests.

The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000
retry_initial_backoff_secs
optional

integer,​null

The amount of time to wait before attempting the first retry for a failed request.

After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1
retry_max_duration_secs
optional

integer,​null

The maximum amount of time to wait between retries.
default: 3600
timeout_secs
optional

integer,​null

The time a request can take before being aborted.

Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60
 storage_class
optional

 <oneOf>

The storage class for created objects.

For more information, see the [storage classes](https://cloud.google.com/storage/docs/storage-classes) documentation.
 Option 1
optional

 <oneOf>

GCS storage classes.

For more information, see [Storage classes](https://cloud.google.com/storage/docs/storage-classes).
STANDARD
optional

STANDARD

Standard storage.

This is the default.
NEARLINE
optional

NEARLINE

Nearline storage.
COLDLINE
optional

COLDLINE

Coldline storage.
ARCHIVE
optional

ARCHIVE

Archive storage.
 tls
optional

 <oneOf>

TLS configuration.
 Option 1
optional

object

TLS configuration.
alpn_protocols
optional

array,​null

Sets the list of supported ALPN protocols.

Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order that they are defined.
 ca_file
optional

 <oneOf>

Absolute path to an additional CA certificate file.

The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
 crt_file
optional

 <oneOf>

Absolute path to a certificate file used to identify this server.

The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format.

If this is set, and is not a PKCS#12 archive, `key_file` must also be set.
Option 1
optional

string

A file path.
 key_file
optional

 <oneOf>

Absolute path to a private key file used to identify this server.

The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
key_pass
optional

string,​null

Passphrase used to unlock the encrypted key file.

This has no effect unless `key_file` is set.
verify_certificate
optional

boolean,​null

Enables certificate verification.

If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.

Relevant for both incoming and outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the validity of certificates.
verify_hostname
optional

boolean,​null

Enables hostname verification.

If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.

Only relevant for outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the remote hostname.
 encoding
required



Configures how events are encoded into raw bytes.
 Avro
optional

object

Encodes an event as an [Apache Avro](https://avro.apache.org/) message.
 avro
required

object

Apache Avro-specific encoder options.
schema
required

string

The Avro schema.
codec
required

avro

Encodes an event as an [Apache Avro](https://avro.apache.org/) message.
 Csv
optional



Encodes an event as a CSV message.

This codec must be configured with fields to encode.
 csv
required

object

The CSV Serializer Options.
capacity
optional

integer

Set the capacity (in bytes) of the internal buffer used in the CSV writer. This defaults to a reasonable setting.
delimiter
optional

integer

The field delimiter to use when writing CSV.
double_quote
optional

boolean

Enable double quote escapes.

This is enabled by default, but it may be disabled. When disabled, quotes in field data are escaped instead of doubled.
escape
optional

integer

The escape character to use when writing CSV.

In some variants of CSV, quotes are escaped using a special escape character like \ (instead of escaping quotes by doubling them).

To use this, `double_quote` needs to be disabled as well; otherwise it is ignored.
fields
required

[string]

Configures the fields that will be encoded, as well as the order in which they appear in the output.

If a field is not present in the event, the output will be an empty string.

Values of type `Array`, `Object`, and `Regex` are not supported and the output will be an empty string.
quote
optional

integer

The quote character to use when writing CSV.
 quote_style
optional

 <oneOf>

The quoting style to use when writing CSV data.
always
optional

always

This puts quotes around every field. Always.
necessary
optional

necessary

This puts quotes around fields only when necessary. They are necessary when fields contain a quote, delimiter or record terminator. Quotes are also necessary when writing an empty record (which is indistinguishable from a record with one empty field).
non_numeric
optional

non_numeric

This puts quotes around all fields that are non-numeric. Namely, when writing a field that does not parse as a valid float or integer, quotes are used even if they aren't strictly necessary.
never
optional

never

This never writes quotes, even if it would produce invalid CSV data.
codec
required

csv

Encodes an event as a CSV message.

This codec must be configured with fields to encode.
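Per the schema above, a CSV encoding configuration might look like the following sketch. The field names are illustrative, and `delimiter` is given as an integer byte value, assuming it follows the integer type declared above:

```yaml
encoding:
  codec: csv
  csv:
    fields: [timestamp, host, message]  # required; order defines column order
    delimiter: 44                       # assumed byte value of ','
    quote_style: necessary
```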
 Gelf
optional

object

Encodes an event as a [GELF](https://docs.graylog.org/docs/gelf) message.
codec
required

gelf

Encodes an event as a [GELF](https://docs.graylog.org/docs/gelf) message.
 Json
optional



Encodes an event as [JSON](https://www.json.org/).
 metric_tag_values
optional

 <oneOf>

Controls how metric tag values are encoded.

When set to `single`, only the last non-bare value of each tag is displayed with the metric. When set to `full`, all metric tags are exposed as separate assignments.
single
optional

single

Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
full
optional

full

All tags are exposed as arrays of either string or null values.
codec
required

json

Encodes an event as [JSON](https://www.json.org/).
 Logfmt
optional

object

Encodes an event as a [logfmt](https://brandur.org/logfmt) message.
codec
required

logfmt

Encodes an event as a [logfmt](https://brandur.org/logfmt) message.
 Native
optional

object

Encodes an event in the [native Protocol Buffers format](https://github.com/vectordotdev/vector/blob/master/lib/vector-core/proto/event.proto).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
codec
required

native

Encodes an event in the [native Protocol Buffers format](https://github.com/vectordotdev/vector/blob/master/lib/vector-core/proto/event.proto).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
 NativeJson
optional

object

Encodes an event in the [native JSON format](https://github.com/vectordotdev/vector/blob/master/lib/codecs/tests/data/native_encoding/schema.cue).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
codec
required

native_json

Encodes an event in the [native JSON format](https://github.com/vectordotdev/vector/blob/master/lib/codecs/tests/data/native_encoding/schema.cue).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
 RawMessage
optional

object

No encoding.

This encoding uses the `message` field of a log event.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
codec
required

raw_message

No encoding.

This encoding uses the `message` field of a log event.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
 Text
optional



Plain text encoding.

This encoding uses the `message` field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
 metric_tag_values
optional

 <oneOf>

Controls how metric tag values are encoded.

When set to `single`, only the last non-bare value of each tag is displayed with the metric. When set to `full`, all metric tags are exposed as separate assignments.
single
optional

single

Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
full
optional

full

All tags are exposed as arrays of either string or null values.
codec
required

text

Plain text encoding.

This encoding uses the `message` field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
except_fields
optional

array,​null

List of fields that are excluded from the encoded event.
only_fields
optional

array,​null

List of fields that are included in the encoded event.
 timestamp_format
optional

 <oneOf>

Format used for timestamp fields.
 Option 1
optional

 <oneOf>

The format in which a timestamp should be represented.
unix
optional

unix

Represent the timestamp as a Unix timestamp.
rfc3339
optional

rfc3339

Represent the timestamp as an RFC 3339 timestamp.
 framing
optional

 <oneOf>

Framing configuration.
 Option 1
optional

 <oneOf>

Framing configuration.
 Bytes
optional

object

Event data is not delimited at all.
method
required

bytes

Event data is not delimited at all.
 CharacterDelimited
optional



Event data is delimited by a single ASCII (7-bit) character.
 character_delimited
required

object

Options for the character delimited encoder.
delimiter
required

integer

The ASCII (7-bit) character that delimits byte sequences.
method
required

character_delimited

Event data is delimited by a single ASCII (7-bit) character.
 LengthDelimited
optional

object

Event data is prefixed with its length in bytes.

The prefix is a 32-bit unsigned integer, little endian.
method
required

length_delimited

Event data is prefixed with its length in bytes.

The prefix is a 32-bit unsigned integer, little endian.
 NewlineDelimited
optional

object

Event data is delimited by a newline (LF) character.
method
required

newline_delimited

Event data is delimited by a newline (LF) character.
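The framing methods above can be sketched as follows; the commented variant shows character delimiting, where the delimiter is given as an ASCII byte value:

```yaml
framing:
  method: newline_delimited
# or, delimiting with a single ASCII (7-bit) character:
# framing:
#   method: character_delimited
#   character_delimited:
#     delimiter: 44   # assumed byte value of ','
```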
 api_key
optional

 <oneOf>

An [API key](https://cloud.google.com/docs/authentication/api-keys).

Either an API key or a path to a service account credentials JSON file can be specified.

If both are unset, the `GOOGLE_APPLICATION_CREDENTIALS` environment variable is checked for a filename. If no filename is named, an attempt is made to fetch an instance service account for the compute instance the program is running on. If this is not on a GCE instance, then you must define it with an API key or service account credentials JSON file.
Option 1
optional

string

Wrapper for sensitive strings containing credentials.
credentials_path
optional

string,​null

Path to a [service account](https://cloud.google.com/docs/authentication/production#manually) credentials JSON file.

Either an API key or a path to a service account credentials JSON file can be specified.

If both are unset, the `GOOGLE_APPLICATION_CREDENTIALS` environment variable is checked for a filename. If no filename is named, an attempt is made to fetch an instance service account for the compute instance the program is running on. If this is not on a GCE instance, then you must define it with an API key or service account credentials JSON file.
skip_authentication
optional

boolean

Skip all authentication handling. For use with integration tests only.
{% /tab %}

{% tab title="gcssink-request-example" %}

```yaml
acknowledgements:
  enabled: null
acl: ''
batch:
  max_bytes: null
  max_events: null
  timeout_secs: null
bucket: string
compression: none
filename_append_uuid: true
filename_extension: string
filename_time_format: '%s'
key_prefix: string
metadata: object
request:
  adaptive_concurrency:
    decrease_ratio: 0.9
    ewma_alpha: 0.4
    initial_concurrency: 1
    rtt_deviation_scale: 2.5
  rate_limit_duration_secs: 1
  rate_limit_num: 9223372036854776000
  retry_attempts: 9223372036854776000
  retry_initial_backoff_secs: 1
  retry_max_duration_secs: 3600
  timeout_secs: 60
storage_class: ''
tls: ''
type: gcp_cloud_storage
```

{% /tab %}

### GCP Pub/Sub{% #pubsub %}

Configuration for the `gcp_pubsub` sink.
OptionsSchema
{% tab title="pubsub-request-model" %}
FieldrequiredTypeDescription acknowledgements
optional

object

Controls how acknowledgements are handled for this sink.

See [End-to-end Acknowledgements](https://vector.dev/docs/about/under-the-hood/architecture/end-to-end-acknowledgements/) for more information on how event acknowledgement is handled.
enabled
optional

boolean,​null

Whether or not end-to-end acknowledgements are enabled.

When enabled for a sink, any source connected to that sink, where the source supports end-to-end acknowledgements as well, waits for events to be acknowledged by the sink before acknowledging them at the source.

Enabling or disabling acknowledgements at the sink level takes precedence over any global [`acknowledgements`](https://vector.dev/docs/reference/configuration/global-options/#acknowledgements) configuration.
default: null
 batch
optional

object

Event batching behavior.
max_bytes
optional

integer,​null

The maximum size of a batch that is processed by a sink.

This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
max_events
optional

integer,​null

The maximum size of a batch before it is flushed.
default: null
timeout_secs
optional

number,​null

The maximum age of a batch before it is flushed.
default: null
 encoding
required



Configures how events are encoded into raw bytes.
 Avro
optional

object

Encodes an event as an [Apache Avro](https://avro.apache.org/) message.
 avro
required

object

Apache Avro-specific encoder options.
schema
required

string

The Avro schema.
codec
required

avro

Encodes an event as an [Apache Avro](https://avro.apache.org/) message.
 Csv
optional



Encodes an event as a CSV message.

This codec must be configured with fields to encode.
 csv
required

object

The CSV Serializer Options.
capacity
optional

integer

Set the capacity (in bytes) of the internal buffer used in the CSV writer. This defaults to a reasonable setting.
delimiter
optional

integer

The field delimiter to use when writing CSV.
double_quote
optional

boolean

Enable double quote escapes.

This is enabled by default, but it may be disabled. When disabled, quotes in field data are escaped instead of doubled.
escape
optional

integer

The escape character to use when writing CSV.

In some variants of CSV, quotes are escaped using a special escape character like \ (instead of escaping quotes by doubling them).

To use this, `double_quote` must also be disabled; otherwise this option is ignored.
fields
required

[string]

Configures the fields that will be encoded, as well as the order in which they appear in the output.

If a field is not present in the event, the output will be an empty string.

Values of type `Array`, `Object`, and `Regex` are not supported and the output will be an empty string.
quote
optional

integer

The quote character to use when writing CSV.
 quote_style
optional

 <oneOf>

The quoting style to use when writing CSV data.
always
optional

always

This puts quotes around every field. Always.
necessary
optional

necessary

This puts quotes around fields only when necessary. They are necessary when fields contain a quote, delimiter or record terminator. Quotes are also necessary when writing an empty record (which is indistinguishable from a record with one empty field).
non_numeric
optional

non_numeric

This puts quotes around all fields that are non-numeric. Namely, when writing a field that does not parse as a valid float or integer, then quotes will be used even if they aren't strictly necessary.
never
optional

never

This never writes quotes, even if it would produce invalid CSV data.
codec
required

csv

Encodes an event as a CSV message.

This codec must be configured with fields to encode.
 Gelf
optional

object

Encodes an event as a [GELF](https://docs.graylog.org/docs/gelf) message.
codec
required

gelf

Encodes an event as a [GELF](https://docs.graylog.org/docs/gelf) message.
 Json
optional



Encodes an event as [JSON](https://www.json.org/).
 metric_tag_values
optional

 <oneOf>

Controls how metric tag values are encoded.

When set to `single`, only the last non-bare value of tags are displayed with the metric. When set to `full`, all metric tags are exposed as separate assignments.
single
optional

single

Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
full
optional

full

All tags are exposed as arrays of either string or null values.
codec
required

json

Encodes an event as [JSON](https://www.json.org/).
 Logfmt
optional

object

Encodes an event as a [logfmt](https://brandur.org/logfmt) message.
codec
required

logfmt

Encodes an event as a [logfmt](https://brandur.org/logfmt) message.
 Native
optional

object

Encodes an event in the [native Protocol Buffers format](https://github.com/vectordotdev/vector/blob/master/lib/vector-core/proto/event.proto).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
codec
required

native

Encodes an event in the [native Protocol Buffers format](https://github.com/vectordotdev/vector/blob/master/lib/vector-core/proto/event.proto).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
 NativeJson
optional

object

Encodes an event in the [native JSON format](https://github.com/vectordotdev/vector/blob/master/lib/codecs/tests/data/native_encoding/schema.cue).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
codec
required

native_json

Encodes an event in the [native JSON format](https://github.com/vectordotdev/vector/blob/master/lib/codecs/tests/data/native_encoding/schema.cue).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
 RawMessage
optional

object

No encoding.

This encoding uses the `message` field of a log event.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
codec
required

raw_message

No encoding.

This encoding uses the `message` field of a log event.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
 Text
optional



Plain text encoding.

This encoding uses the `message` field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
 metric_tag_values
optional

 <oneOf>

Controls how metric tag values are encoded.

When set to `single`, only the last non-bare value of tags are displayed with the metric. When set to `full`, all metric tags are exposed as separate assignments.
single
optional

single

Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
full
optional

full

All tags are exposed as arrays of either string or null values.
codec
required

text

Plain text encoding.

This encoding uses the `message` field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
except_fields
optional

array,​null

List of fields that are excluded from the encoded event.
only_fields
optional

array,​null

List of fields that are included in the encoded event.
 timestamp_format
optional

 <oneOf>

Format used for timestamp fields.
 Option 1
optional

 <oneOf>

The format in which a timestamp should be represented.
unix
optional

unix

Represent the timestamp as a Unix timestamp.
rfc3339
optional

rfc3339

Represent the timestamp as an RFC 3339 timestamp.
endpoint
optional

string

The endpoint to which to publish events.

The scheme (`http` or `https`) must be specified. No path should be included since the paths defined by the [`GCP Pub/Sub`](https://cloud.google.com/pubsub/docs/reference/rest) API are used.

The trailing slash `/` must not be included.
project
required

string

The project name to which to publish events.
 request
optional

object

Middleware settings for outbound requests.

Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
 adaptive_concurrency
optional

object

Configuration of adaptive concurrency parameters.

These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}
decrease_ratio
optional

number

The fraction of the current value to set the new concurrency limit when decreasing the limit.

Valid values are greater than `0` and less than `1`. Smaller values cause the algorithm to scale back rapidly when latency increases.

Note that the new limit is rounded down after applying this ratio.
default: 0.9
ewma_alpha
optional

number

The weighting of new measurements compared to older measurements.

Valid values are greater than `0` and less than `1`.

ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4
initial_concurrency
optional

integer

The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).

It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the `adaptive_concurrency_limit` metric.
default: 1
rtt_deviation_scale
optional

number

Scale of RTT deviations which are not considered anomalous.

Valid values are greater than or equal to `0`, and we expect reasonable values to range from `1.0` to `3.0`.

When calculating the past RTT average, we also compute a secondary "deviation" value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
 concurrency
optional

 <oneOf>

Configuration for outbound request concurrency.
none
optional

none

A fixed concurrency of 1.

Only one request can be outstanding at any given time.
adaptive
optional

adaptive

Concurrency will be managed by Vector's [Adaptive Request Concurrency](https://vector.dev/docs/about/under-the-hood/networking/arc/) feature.
Fixed
optional

integer

A fixed amount of concurrency will be allowed.
rate_limit_duration_secs
optional

integer,​null

The time window used for the `rate_limit_num` option.
default: 1
rate_limit_num
optional

integer,​null

The maximum number of requests allowed within the `rate_limit_duration_secs` time window.
default: 9223372036854776000
retry_attempts
optional

integer,​null

The maximum number of retries to make for failed requests.

The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000
retry_initial_backoff_secs
optional

integer,​null

The amount of time to wait before attempting the first retry for a failed request.

After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1
retry_max_duration_secs
optional

integer,​null

The maximum amount of time to wait between retries.
default: 3600
timeout_secs
optional

integer,​null

The time a request can take before being aborted.

Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60
 tls
optional

 <oneOf>

TLS configuration.
 Option 1
optional

object

TLS configuration.
alpn_protocols
optional

array,​null

Sets the list of supported ALPN protocols.

Declare the supported ALPN protocols, which are used during negotiation with peer. They are prioritized in the order that they are defined.
 ca_file
optional

 <oneOf>

Absolute path to an additional CA certificate file.

The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
 crt_file
optional

 <oneOf>

Absolute path to a certificate file used to identify this server.

The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format.

If this is set, and is not a PKCS#12 archive, `key_file` must also be set.
Option 1
optional

string

A file path.
 key_file
optional

 <oneOf>

Absolute path to a private key file used to identify this server.

The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
key_pass
optional

string,​null

Passphrase used to unlock the encrypted key file.

This has no effect unless `key_file` is set.
verify_certificate
optional

boolean,​null

Enables certificate verification.

If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.

Relevant for both incoming and outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the validity of certificates.
verify_hostname
optional

boolean,​null

Enables hostname verification.

If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.

Only relevant for outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the remote hostname.
topic
required

string

The topic within the project to which to publish events.
 api_key
optional

 <oneOf>

An [API key](https://cloud.google.com/docs/authentication/api-keys).

Either an API key or a path to a service account credentials JSON file can be specified.

If both are unset, the `GOOGLE_APPLICATION_CREDENTIALS` environment variable is checked for a filename. If no filename is named, an attempt is made to fetch an instance service account for the compute instance the program is running on. If this is not on a GCE instance, then you must define it with an API key or service account credentials JSON file.
Option 1
optional

string

Wrapper for sensitive strings containing credentials.
credentials_path
optional

string,​null

Path to a [service account](https://cloud.google.com/docs/authentication/production#manually) credentials JSON file.

Either an API key or a path to a service account credentials JSON file can be specified.

If both are unset, the `GOOGLE_APPLICATION_CREDENTIALS` environment variable is checked for a filename. If no filename is named, an attempt is made to fetch an instance service account for the compute instance the program is running on. If this is not on a GCE instance, then you must define it with an API key or service account credentials JSON file.
skip_authentication
optional

boolean

Skip all authentication handling. For use with integration tests only.
{% /tab %}

{% tab title="pubsub-request-example" %}

```yaml
acknowledgements:
  enabled: null
batch:
  max_bytes: null
  max_events: null
  timeout_secs: null
encoding: ''
endpoint: 'https://pubsub.googleapis.com'
project: string
request:
  adaptive_concurrency:
    decrease_ratio: 0.9
    ewma_alpha: 0.4
    initial_concurrency: 1
    rtt_deviation_scale: 2.5
  rate_limit_duration_secs: 1
  rate_limit_num: 9223372036854776000
  retry_attempts: 9223372036854776000
  retry_initial_backoff_secs: 1
  retry_max_duration_secs: 3600
  timeout_secs: 60
tls: ''
topic: string
type: gcp_pubsub
```

{% /tab %}
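Putting the required fields together, a minimal `gcp_pubsub` sink entry might look like the following sketch. The component names, project, topic, and credentials path are placeholders, not values taken from this reference.

```yaml
sinks:
  my_pubsub_sink:
    type: gcp_pubsub
    inputs:
      - my_source_id               # placeholder source/transform ID
    project: my-gcp-project        # placeholder project name
    topic: my-topic                # placeholder Pub/Sub topic
    credentials_path: /path/to/credentials.json
    encoding:
      codec: json
```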

### GCP Stackdriver Logs{% #stackdriver %}

Configuration for the `gcp_stackdriver_logs` sink.
{% tab title="stackdriver-request-model" %}
Field | required | Type | Description
acknowledgements
optional

object

Controls how acknowledgements are handled for this sink.

See [End-to-end Acknowledgements](https://vector.dev/docs/about/under-the-hood/architecture/end-to-end-acknowledgements/) for more information on how event acknowledgement is handled.
enabled
optional

boolean,​null

Whether or not end-to-end acknowledgements are enabled.

When enabled for a sink, any source connected to that sink, where the source supports end-to-end acknowledgements as well, waits for events to be acknowledged by the sink before acknowledging them at the source.

Enabling or disabling acknowledgements at the sink level takes precedence over any global [`acknowledgements`](https://vector.dev/docs/reference/configuration/global-options/#acknowledgements) configuration.
default: null
batch
optional

object

Event batching behavior.
max_bytes
optional

integer,​null

The maximum size of a batch that is processed by a sink.

This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
max_events
optional

integer,​null

The maximum size of a batch before it is flushed.
default: null
timeout_secs
optional

number,​null

The maximum age of a batch before it is flushed.
default: null
encoding
optional

object

Transformations to prepare an event for serialization.
except_fields
optional

array,​null

List of fields that are excluded from the encoded event.
only_fields
optional

array,​null

List of fields that are included in the encoded event.
 timestamp_format
optional

 <oneOf>

Format used for timestamp fields.
 Option 1
optional

 <oneOf>

The format in which a timestamp should be represented.
unix
optional

unix

Represent the timestamp as a Unix timestamp.
rfc3339
optional

rfc3339

Represent the timestamp as an RFC 3339 timestamp.
log_id
required

string

The log ID to which to publish logs.

This is a name you create to identify this log stream.
 request
optional

object

Middleware settings for outbound requests.

Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
 adaptive_concurrency
optional

object

Configuration of adaptive concurrency parameters.

These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}
decrease_ratio
optional

number

The fraction of the current value to set the new concurrency limit when decreasing the limit.

Valid values are greater than `0` and less than `1`. Smaller values cause the algorithm to scale back rapidly when latency increases.

Note that the new limit is rounded down after applying this ratio.
default: 0.9
ewma_alpha
optional

number

The weighting of new measurements compared to older measurements.

Valid values are greater than `0` and less than `1`.

ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4
initial_concurrency
optional

integer

The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).

It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the `adaptive_concurrency_limit` metric.
default: 1
rtt_deviation_scale
optional

number

Scale of RTT deviations which are not considered anomalous.

Valid values are greater than or equal to `0`, and we expect reasonable values to range from `1.0` to `3.0`.

When calculating the past RTT average, we also compute a secondary "deviation" value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
 concurrency
optional

 <oneOf>

Configuration for outbound request concurrency.
none
optional

none

A fixed concurrency of 1.

Only one request can be outstanding at any given time.
adaptive
optional

adaptive

Concurrency will be managed by Vector's [Adaptive Request Concurrency](https://vector.dev/docs/about/under-the-hood/networking/arc/) feature.
Fixed
optional

integer

A fixed amount of concurrency will be allowed.
rate_limit_duration_secs
optional

integer,​null

The time window used for the `rate_limit_num` option.
default: 1
rate_limit_num
optional

integer,​null

The maximum number of requests allowed within the `rate_limit_duration_secs` time window.
default: 9223372036854776000
retry_attempts
optional

integer,​null

The maximum number of retries to make for failed requests.

The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000
retry_initial_backoff_secs
optional

integer,​null

The amount of time to wait before attempting the first retry for a failed request.

After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1
retry_max_duration_secs
optional

integer,​null

The maximum amount of time to wait between retries.
default: 3600
timeout_secs
optional

integer,​null

The time a request can take before being aborted.

Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60
resource
required



A monitored resource.

The monitored resource to associate the logs with.
type
required

string

The monitored resource type.

For example, the type of a Compute Engine VM instance is `gce_instance`. See the [Google Cloud Platform monitored resource documentation](https://cloud.google.com/monitoring/api/resources) for more details.
 severity_key
optional

 <oneOf>

The field of the log event from which to take the outgoing log's `severity` field.

The named field is removed from the log event if present, and must be either an integer between 0 and 800 or a string containing one of the [severity level names](https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry#logseverity) (case is ignored) or a common prefix such as `err`.

If no severity key is specified, the severity of outgoing records is set to 0 (`DEFAULT`).

See the [GCP Stackdriver Logging LogSeverity description](https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry#logseverity) for more details on the value of the `severity` field.
Option 1
optional

string

A wrapper around `OwnedValuePath` that allows it to be used in Vector configuration. This requires a valid path. To allow optional paths, use `optional_path::OptionalValuePath`.
 tls
optional

 <oneOf>

TLS configuration.
 Option 1
optional

object

TLS configuration.
alpn_protocols
optional

array,​null

Sets the list of supported ALPN protocols.

Declare the supported ALPN protocols, which are used during negotiation with peer. They are prioritized in the order that they are defined.
 ca_file
optional

 <oneOf>

Absolute path to an additional CA certificate file.

The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
 crt_file
optional

 <oneOf>

Absolute path to a certificate file used to identify this server.

The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format.

If this is set, and is not a PKCS#12 archive, `key_file` must also be set.
Option 1
optional

string

A file path.
 key_file
optional

 <oneOf>

Absolute path to a private key file used to identify this server.

The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
key_pass
optional

string,​null

Passphrase used to unlock the encrypted key file.

This has no effect unless `key_file` is set.
verify_certificate
optional

boolean,​null

Enables certificate verification.

If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.

Relevant for both incoming and outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the validity of certificates.
verify_hostname
optional

boolean,​null

Enables hostname verification.

If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.

Only relevant for outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the remote hostname.
 api_key
optional

 <oneOf>

An [API key](https://cloud.google.com/docs/authentication/api-keys).

Either an API key or a path to a service account credentials JSON file can be specified.

If both are unset, the `GOOGLE_APPLICATION_CREDENTIALS` environment variable is checked for a filename. If no filename is named, an attempt is made to fetch an instance service account for the compute instance the program is running on. If this is not on a GCE instance, then you must define it with an API key or service account credentials JSON file.
Option 1
optional

string

Wrapper for sensitive strings containing credentials.
credentials_path
optional

string,​null

Path to a [service account](https://cloud.google.com/docs/authentication/production#manually) credentials JSON file.

Either an API key or a path to a service account credentials JSON file can be specified.

If both are unset, the `GOOGLE_APPLICATION_CREDENTIALS` environment variable is checked for a filename. If no filename is named, an attempt is made to fetch an instance service account for the compute instance the program is running on. If this is not on a GCE instance, then you must define it with an API key or service account credentials JSON file.
skip_authentication
optional

boolean

Skip all authentication handling. For use with integration tests only.
{% /tab %}

{% tab title="stackdriver-request-example" %}

```yaml
acknowledgements:
  enabled: null
batch:
  max_bytes: null
  max_events: null
  timeout_secs: null
encoding: {}
log_id: string
request:
  adaptive_concurrency:
    decrease_ratio: 0.9
    ewma_alpha: 0.4
    initial_concurrency: 1
    rtt_deviation_scale: 2.5
  rate_limit_duration_secs: 1
  rate_limit_num: 9223372036854776000
  retry_attempts: 9223372036854776000
  retry_initial_backoff_secs: 1
  retry_max_duration_secs: 3600
  timeout_secs: 60
resource: ''
severity_key: ''
tls: ''
type: gcp_stackdriver_logs
```

{% /tab %}
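As a sketch of how the required fields fit together, a `gcp_stackdriver_logs` sink entry might look like the following. The component names, log ID, credentials path, and the extra `resource` label (`zone`) are placeholders, not values taken from this reference.

```yaml
sinks:
  my_stackdriver_sink:
    type: gcp_stackdriver_logs
    inputs:
      - my_source_id               # placeholder source/transform ID
    log_id: vector-logs            # placeholder log stream name
    credentials_path: /path/to/credentials.json
    resource:
      type: gce_instance           # monitored resource type
      zone: us-central1-a          # placeholder resource label
    severity_key: severity         # optional: take severity from this event field
    encoding:
      timestamp_format: rfc3339
```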

### GCP Stackdriver Metrics{% #stackdriver-metrics %}

Configuration for the `gcp_stackdriver_metrics` sink.
{% tab title="stackdriver-metrics-request-model" %}
Field | required | Type | Description
acknowledgements
optional

object

Controls how acknowledgements are handled for this sink.

See [End-to-end Acknowledgements](https://vector.dev/docs/about/under-the-hood/architecture/end-to-end-acknowledgements/) for more information on how event acknowledgement is handled.
enabled
optional

boolean,​null

Whether or not end-to-end acknowledgements are enabled.

When enabled for a sink, any source connected to that sink, where the source supports end-to-end acknowledgements as well, waits for events to be acknowledged by the sink before acknowledging them at the source.

Enabling or disabling acknowledgements at the sink level takes precedence over any global [`acknowledgements`](https://vector.dev/docs/reference/configuration/global-options/#acknowledgements) configuration.
default: null
batch
optional

object

Event batching behavior.
max_bytes
optional

integer,​null

The maximum size of a batch that is processed by a sink.

This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
max_events
optional

integer,​null

The maximum size of a batch before it is flushed.
default: null
timeout_secs
optional

number,​null

The maximum age of a batch before it is flushed.
default: null
default_namespace
optional

string

The default namespace to use for metrics that do not have one.

Metrics with the same name can only be differentiated by their namespace, and not all metrics have their own namespace.
project_id
required

string

The project ID to which to publish metrics.

See the [Google Cloud Platform project management documentation](https://cloud.google.com/resource-manager/docs/creating-managing-projects) for more details.
 request
optional

object

Middleware settings for outbound requests.

Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
 adaptive_concurrency
optional

object

Configuration of adaptive concurrency parameters.

These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}
decrease_ratio
optional

number

The fraction of the current value to set the new concurrency limit when decreasing the limit.

Valid values are greater than `0` and less than `1`. Smaller values cause the algorithm to scale back rapidly when latency increases.

Note that the new limit is rounded down after applying this ratio.
default: 0.9
ewma_alpha
optional

number

The weighting of new measurements compared to older measurements.

Valid values are greater than `0` and less than `1`.

ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4
initial_concurrency
optional

integer

The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).

It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the `adaptive_concurrency_limit` metric.
default: 1
rtt_deviation_scale
optional

number

Scale of RTT deviations which are not considered anomalous.

Valid values are greater than or equal to `0`, and we expect reasonable values to range from `1.0` to `3.0`.

When calculating the past RTT average, we also compute a secondary "deviation" value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
 concurrency
optional

 <oneOf>

Configuration for outbound request concurrency.
none
optional

none

A fixed concurrency of 1.

Only one request can be outstanding at any given time.
adaptive
optional

adaptive

Concurrency will be managed by Vector's [Adaptive Request Concurrency](https://vector.dev/docs/about/under-the-hood/networking/arc/) feature.
Fixed
optional

integer

A fixed amount of concurrency will be allowed.
rate_limit_duration_secs
optional

integer,​null

The time window used for the `rate_limit_num` option.
default: 1
rate_limit_num
optional

integer,​null

The maximum number of requests allowed within the `rate_limit_duration_secs` time window.
default: 9223372036854776000
retry_attempts
optional

integer,​null

The maximum number of retries to make for failed requests.

The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000
retry_initial_backoff_secs
optional

integer,​null

The amount of time to wait before attempting the first retry for a failed request.

After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1
retry_max_duration_secs
optional

integer,​null

The maximum amount of time to wait between retries.
default: 3600
timeout_secs
optional

integer,​null

The time a request can take before being aborted.

Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60
resource
required



A monitored resource.

The monitored resource to associate the metrics with.
type
required

string

The monitored resource type.

For example, the type of a Compute Engine VM instance is `gce_instance`.
 tls
optional

 <oneOf>

TLS configuration.
 Option 1
optional

object

TLS configuration.
alpn_protocols
optional

array,​null

Sets the list of supported ALPN protocols.

Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order in which they are defined.
 ca_file
optional

 <oneOf>

Absolute path to an additional CA certificate file.

The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
 crt_file
optional

 <oneOf>

Absolute path to a certificate file used to identify this server.

The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format.

If this is set, and is not a PKCS#12 archive, `key_file` must also be set.
Option 1
optional

string

A file path.
 key_file
optional

 <oneOf>

Absolute path to a private key file used to identify this server.

The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
key_pass
optional

string,​null

Passphrase used to unlock the encrypted key file.

This has no effect unless `key_file` is set.
verify_certificate
optional

boolean,​null

Enables certificate verification.

If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.

Relevant for both incoming and outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the validity of certificates.
verify_hostname
optional

boolean,​null

Enables hostname verification.

If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.

Only relevant for outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the remote hostname.
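
As a sketch, the TLS options above combine like this in a sink configuration. The paths, sink name, and environment variable are hypothetical placeholders, not values from this reference:

```yaml
sinks:
  my_sink:
    # ... other sink options ...
    tls:
      ca_file: /etc/vector/certs/ca.pem       # hypothetical path to an additional CA
      crt_file: /etc/vector/certs/client.p12  # PKCS#12 archive, so key_file is not required
      key_pass: ${CERT_PASSPHRASE}            # hypothetical environment variable
      verify_certificate: true
      verify_hostname: true
```

Because `crt_file` here points at a PKCS#12 archive, `key_file` can be omitted; with a PEM or DER certificate, `key_file` must also be set.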
 api_key
optional

 <oneOf>

An [API key](https://cloud.google.com/docs/authentication/api-keys).

Either an API key or a path to a service account credentials JSON file can be specified.

If both are unset, the `GOOGLE_APPLICATION_CREDENTIALS` environment variable is checked for a filename. If no filename is found there, an attempt is made to fetch an instance service account for the compute instance the program is running on. If the program is not running on a GCE instance, you must provide either an API key or the path to a service account credentials JSON file.
Option 1
optional

string

Wrapper for sensitive strings containing credentials.
credentials_path
optional

string,​null

Path to a [service account](https://cloud.google.com/docs/authentication/production#manually) credentials JSON file.

Either an API key or a path to a service account credentials JSON file can be specified.

If both are unset, the `GOOGLE_APPLICATION_CREDENTIALS` environment variable is checked for a filename. If no filename is found there, an attempt is made to fetch an instance service account for the compute instance the program is running on. If the program is not running on a GCE instance, you must provide either an API key or the path to a service account credentials JSON file.
skip_authentication
optional

boolean

Skip all authentication handling. For use with integration tests only.
{% /tab %}

{% tab title="stackdriver-request-example" %}

```yaml
acknowledgements:
  enabled: null
batch:
  max_bytes: null
  max_events: null
  timeout_secs: null
default_namespace: namespace
project_id: string
request:
  adaptive_concurrency:
    decrease_ratio: 0.9
    ewma_alpha: 0.4
    initial_concurrency: 1
    rtt_deviation_scale: 2.5
  rate_limit_duration_secs: 1
  rate_limit_num: 9223372036854776000
  retry_attempts: 9223372036854776000
  retry_initial_backoff_secs: 1
  retry_max_duration_secs: 3600
  timeout_secs: 60
resource: ''
tls: ''
type: gcp_stackdriver_metrics
```

{% /tab %}
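
The adaptive concurrency parameters above form a feedback loop: an EWMA of past RTTs (`ewma_alpha`) plus a scaled deviation band (`rtt_deviation_scale`) decides whether the current RTT is anomalous, and `decrease_ratio` governs the multiplicative back-off. The following Python sketch illustrates that loop as the descriptions imply it; it is not Vector's actual implementation, and the class name is invented for illustration:

```python
import math

class ArcController:
    """Illustrative sketch of an ARC-style concurrency controller."""

    def __init__(self, decrease_ratio=0.9, ewma_alpha=0.4,
                 initial_concurrency=1, rtt_deviation_scale=2.5):
        self.decrease_ratio = decrease_ratio
        self.ewma_alpha = ewma_alpha
        self.limit = initial_concurrency
        self.rtt_deviation_scale = rtt_deviation_scale
        self.avg_rtt = None   # EWMA reference of past RTTs
        self.rtt_dev = 0.0    # EWMA of absolute deviations from the reference

    def observe(self, rtt):
        """Feed one RTT sample; return the new concurrency limit."""
        if self.avg_rtt is None:
            self.avg_rtt = rtt
        # RTT increases inside the deviation band are not anomalous.
        threshold = self.avg_rtt + self.rtt_deviation_scale * self.rtt_dev
        if rtt > threshold:
            # Latency looks anomalous: back off multiplicatively, rounding down.
            self.limit = max(1, math.floor(self.limit * self.decrease_ratio))
        else:
            # Latency within the expected range: probe upward.
            self.limit += 1
        # Update the EWMA reference and deviation with the new sample.
        self.rtt_dev = ((1 - self.ewma_alpha) * self.rtt_dev
                        + self.ewma_alpha * abs(rtt - self.avg_rtt))
        self.avg_rtt = ((1 - self.ewma_alpha) * self.avg_rtt
                        + self.ewma_alpha * rtt)
        return self.limit
```

A smaller `ewma_alpha` makes the reference adjust more slowly; a larger `rtt_deviation_scale` tolerates larger RTT spikes before backing off.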

### Honeycomb{% #honeycomb %}

Configuration for the `honeycomb` sink.
OptionsSchema
{% tab title="honeycomb-request-model" %}
FieldrequiredTypeDescription acknowledgements
optional

object

Controls how acknowledgements are handled for this sink.

See [End-to-end Acknowledgements](https://vector.dev/docs/about/under-the-hood/architecture/end-to-end-acknowledgements/) for more information on how event acknowledgement is handled.
enabled
optional

boolean,​null

Whether or not end-to-end acknowledgements are enabled.

When enabled for a sink, any source connected to that sink, where the source supports end-to-end acknowledgements as well, waits for events to be acknowledged by the sink before acknowledging them at the source.

Enabling or disabling acknowledgements at the sink level takes precedence over any global [`acknowledgements`](https://vector.dev/docs/reference/configuration/global-options/#acknowledgements) configuration.
default: null
api_key
required

string

The API key that is used to authenticate against Honeycomb.
 batch
optional

object

Event batching behavior.
max_bytes
optional

integer,​null

The maximum size of a batch that is processed by a sink.

This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
max_events
optional

integer,​null

The maximum size of a batch before it is flushed.
default: null
timeout_secs
optional

number,​null

The maximum age of a batch before it is flushed.
default: null
dataset
required

string

The dataset to which logs are sent.
 encoding
optional

object

Transformations to prepare an event for serialization.
except_fields
optional

array,​null

List of fields that are excluded from the encoded event.
only_fields
optional

array,​null

List of fields that are included in the encoded event.
 timestamp_format
optional

 <oneOf>

Format used for timestamp fields.
 Option 1
optional

 <oneOf>

The format in which a timestamp should be represented.
unix
optional

unix

Represent the timestamp as a Unix timestamp.
rfc3339
optional

rfc3339

Represent the timestamp as a RFC 3339 timestamp.
 request
optional

object

Middleware settings for outbound requests.

Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
 adaptive_concurrency
optional

object

Configuration of adaptive concurrency parameters.

These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}
decrease_ratio
optional

number

The fraction of the current value to set the new concurrency limit when decreasing the limit.

Valid values are greater than `0` and less than `1`. Smaller values cause the algorithm to scale back rapidly when latency increases.

Note that the new limit is rounded down after applying this ratio.
default: 0.9
ewma_alpha
optional

number

The weighting of new measurements compared to older measurements.

Valid values are greater than `0` and less than `1`.

ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4
initial_concurrency
optional

integer

The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).

It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the `adaptive_concurrency_limit` metric.
default: 1
rtt_deviation_scale
optional

number

Scale of RTT deviations which are not considered anomalous.

Valid values are greater than or equal to `0`, and we expect reasonable values to range from `1.0` to `3.0`.

When calculating the past RTT average, we also compute a secondary "deviation" value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
concurrency
optional

 <oneOf>

Configuration for outbound request concurrency.
none
optional

none

A fixed concurrency of 1.

Only one request can be outstanding at any given time.
adaptive
optional

adaptive

Concurrency will be managed by Vector's [Adaptive Request Concurrency](https://vector.dev/docs/about/under-the-hood/networking/arc/) feature.
Fixed
optional

integer

A fixed amount of concurrency will be allowed.
rate_limit_duration_secs
optional

integer,​null

The time window used for the `rate_limit_num` option.
default: 1
rate_limit_num
optional

integer,​null

The maximum number of requests allowed within the `rate_limit_duration_secs` time window.
default: 9223372036854776000
retry_attempts
optional

integer,​null

The maximum number of retries to make for failed requests.

The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000
retry_initial_backoff_secs
optional

integer,​null

The amount of time to wait before attempting the first retry for a failed request.

After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1
retry_max_duration_secs
optional

integer,​null

The maximum amount of time to wait between retries.
default: 3600
timeout_secs
optional

integer,​null

The time a request can take before being aborted.

Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60
{% /tab %}

{% tab title="honeycomb-request-example" %}

```yaml
acknowledgements:
  enabled: null
api_key: string
batch:
  max_bytes: null
  max_events: null
  timeout_secs: null
dataset: string
encoding: {}
request:
  adaptive_concurrency:
    decrease_ratio: 0.9
    ewma_alpha: 0.4
    initial_concurrency: 1
    rtt_deviation_scale: 2.5
  rate_limit_duration_secs: 1
  rate_limit_num: 9223372036854776000
  retry_attempts: 9223372036854776000
  retry_initial_backoff_secs: 1
  retry_max_duration_secs: 3600
  timeout_secs: 60
type: honeycomb
```

{% /tab %}
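
The retry options above describe a Fibonacci backoff that starts at `retry_initial_backoff_secs` and is capped by `retry_max_duration_secs`. A hypothetical sketch of that schedule (the exact internals of Vector's retry logic may differ, for example by adding jitter):

```python
def backoff_schedule(initial_secs=1, max_secs=3600, attempts=10):
    """Return the wait (in seconds) before each of the first `attempts` retries."""
    waits = []
    prev, curr = 0, initial_secs
    for _ in range(attempts):
        waits.append(min(curr, max_secs))  # cap at retry_max_duration_secs
        prev, curr = curr, prev + curr     # Fibonacci step
    return waits
```

With the defaults (`initial_secs=1`), the first waits grow as 1, 1, 2, 3, 5, 8, ... seconds until the cap is reached.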

### HTTP{% #httpsink %}

Configuration for the `http` sink.
OptionsSchema
{% tab title="httpsink-request-model" %}
FieldrequiredTypeDescription acknowledgements
optional

object

Controls how acknowledgements are handled for this sink.

See [End-to-end Acknowledgements](https://vector.dev/docs/about/under-the-hood/architecture/end-to-end-acknowledgements/) for more information on how event acknowledgement is handled.
enabled
optional

boolean,​null

Whether or not end-to-end acknowledgements are enabled.

When enabled for a sink, any source connected to that sink, where the source supports end-to-end acknowledgements as well, waits for events to be acknowledged by the sink before acknowledging them at the source.

Enabling or disabling acknowledgements at the sink level takes precedence over any global [`acknowledgements`](https://vector.dev/docs/reference/configuration/global-options/#acknowledgements) configuration.
default: null
auth
optional

 <oneOf>

Configuration of the authentication strategy for HTTP requests.

HTTP authentication should be used with HTTPS only, as the authentication credentials are passed as an HTTP header without any additional encryption beyond what is provided by the transport itself.
 Option 1
optional

 <oneOf>

Configuration of the authentication strategy for HTTP requests.

HTTP authentication should be used with HTTPS only, as the authentication credentials are passed as an HTTP header without any additional encryption beyond what is provided by the transport itself.
 Basic
optional

object

Basic authentication.

The username and password are concatenated and encoded via [base64](https://en.wikipedia.org/wiki/Base64).
password
required

string

The basic authentication password.
strategy
required

basic

Basic authentication.

The username and password are concatenated and encoded via [base64](https://en.wikipedia.org/wiki/Base64).
user
required

string

The basic authentication username.
 Bearer
optional

object

Bearer authentication.

The bearer token value (OAuth2, JWT, etc.) is passed as-is.
strategy
required

bearer

Bearer authentication.

The bearer token value (OAuth2, JWT, etc.) is passed as-is.
token
required

string

The bearer authentication token.
 batch
optional

object

Event batching behavior.
max_bytes
optional

integer,​null

The maximum size of a batch that is processed by a sink.

This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
max_events
optional

integer,​null

The maximum size of a batch before it is flushed.
default: null
timeout_secs
optional

number,​null

The maximum age of a batch before it is flushed.
default: null
compression
optional

 <oneOf>

Compression configuration.

All compression algorithms use the default compression level unless otherwise specified.
 Option 1
optional

 <oneOf>

Compression algorithm.
none
optional

none

No compression.
gzip
optional

gzip

[Gzip](https://www.gzip.org/) compression.
zlib
optional

zlib

[Zlib](https://zlib.net/) compression.
zstd
optional

zstd

[Zstandard](https://facebook.github.io/zstd/) compression.
 Option 2
optional

object

Compression algorithm and compression level.
 algorithm
required

 <oneOf>

Compression algorithm.
none
optional

none

No compression.
gzip
optional

gzip

[Gzip](https://www.gzip.org/) compression.
zlib
optional

zlib

[Zlib](https://zlib.net/) compression.
zstd
optional

zstd

[Zstandard](https://facebook.github.io/zstd/) compression.
level
optional

enum

Compression level. Allowed enum values: `none,fast,best,default,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21`
headers
optional

object,​null

**DEPRECATED**: A list of custom headers to add to each request.
 method
optional

 <oneOf>

HTTP method.

The HTTP method to use when making the request.
get
optional

get

GET.
head
optional

head

HEAD.
post
optional

post

POST.
put
optional

put

PUT.
delete
optional

delete

DELETE.
options
optional

options

OPTIONS.
trace
optional

trace

TRACE.
patch
optional

patch

PATCH.
payload_prefix
optional

string

A string to prefix the payload with.

This option is ignored if the encoding is not character delimited JSON.

If specified, the `payload_suffix` must also be specified and together they must produce a valid JSON object.
payload_suffix
optional

string

A string to suffix the payload with.

This option is ignored if the encoding is not character delimited JSON.

If specified, the `payload_prefix` must also be specified and together they must produce a valid JSON object.
 request
optional



Outbound HTTP request settings.
headers
optional

object

Additional HTTP headers to add to every HTTP request.
 adaptive_concurrency
optional

object

Configuration of adaptive concurrency parameters.

These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or unstable performance and sink behavior. Proceed with caution.
decrease_ratio
optional

number

The fraction of the current value to set the new concurrency limit when decreasing the limit.

Valid values are greater than `0` and less than `1`. Smaller values cause the algorithm to scale back rapidly when latency increases.

Note that the new limit is rounded down after applying this ratio.
default: 0.9
ewma_alpha
optional

number

The weighting of new measurements compared to older measurements.

Valid values are greater than `0` and less than `1`.

ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4
initial_concurrency
optional

integer

The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).

It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the `adaptive_concurrency_limit` metric.
default: 1
rtt_deviation_scale
optional

number

Scale of RTT deviations which are not considered anomalous.

Valid values are greater than or equal to `0`, and we expect reasonable values to range from `1.0` to `3.0`.

When calculating the past RTT average, we also compute a secondary "deviation" value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
concurrency
optional

 <oneOf>

Configuration for outbound request concurrency.
none
optional

none

A fixed concurrency of 1.

Only one request can be outstanding at any given time.
adaptive
optional

adaptive

Concurrency will be managed by Vector's [Adaptive Request Concurrency](https://vector.dev/docs/about/under-the-hood/networking/arc/) feature.
Fixed
optional

integer

A fixed amount of concurrency will be allowed.
rate_limit_duration_secs
optional

integer,​null

The time window used for the `rate_limit_num` option.
rate_limit_num
optional

integer,​null

The maximum number of requests allowed within the `rate_limit_duration_secs` time window.
retry_attempts
optional

integer,​null

The maximum number of retries to make for failed requests.

The default, for all intents and purposes, represents an infinite number of retries.
retry_initial_backoff_secs
optional

integer,​null

The amount of time to wait before attempting the first retry for a failed request.

After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
retry_max_duration_secs
optional

integer,​null

The maximum amount of time to wait between retries.
timeout_secs
optional

integer,​null

The time a request can take before being aborted.

Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
 tls
optional

 <oneOf>

TLS configuration.
 Option 1
optional

object

TLS configuration.
alpn_protocols
optional

array,​null

Sets the list of supported ALPN protocols.

Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order in which they are defined.
 ca_file
optional

 <oneOf>

Absolute path to an additional CA certificate file.

The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
 crt_file
optional

 <oneOf>

Absolute path to a certificate file used to identify this server.

The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format.

If this is set, and is not a PKCS#12 archive, `key_file` must also be set.
Option 1
optional

string

A file path.
 key_file
optional

 <oneOf>

Absolute path to a private key file used to identify this server.

The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
key_pass
optional

string,​null

Passphrase used to unlock the encrypted key file.

This has no effect unless `key_file` is set.
verify_certificate
optional

boolean,​null

Enables certificate verification.

If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.

Relevant for both incoming and outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the validity of certificates.
verify_hostname
optional

boolean,​null

Enables hostname verification.

If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.

Only relevant for outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the remote hostname.
uri
required

string

The full URI to make HTTP requests to.

This should include the protocol and host, but can also include the port, path, and any other valid part of a URI.
 encoding
required



Configures how events are encoded into raw bytes.
 Avro
optional

object

Encodes an event as an [Apache Avro](https://avro.apache.org/) message.
 avro
required

object

Apache Avro-specific encoder options.
schema
required

string

The Avro schema.
codec
required

avro

Encodes an event as an [Apache Avro](https://avro.apache.org/) message.
 Csv
optional



Encodes an event as a CSV message.

This codec must be configured with fields to encode.
 csv
required

object

The CSV Serializer Options.
capacity
optional

integer

Set the capacity (in bytes) of the internal buffer used in the CSV writer. This defaults to a reasonable setting.
delimiter
optional

integer

The field delimiter to use when writing CSV.
double_quote
optional

boolean

Enable double quote escapes.

This is enabled by default, but it may be disabled. When disabled, quotes in field data are escaped instead of doubled.
escape
optional

integer

The escape character to use when writing CSV.

In some variants of CSV, quotes are escaped using a special escape character like \ (instead of escaping quotes by doubling them).

To use this, `double_quote` must also be disabled; otherwise, this option is ignored.
fields
required

[string]

Configures the fields that will be encoded, as well as the order in which they appear in the output.

If a field is not present in the event, the output will be an empty string.

Values of type `Array`, `Object`, and `Regex` are not supported and the output will be an empty string.
quote
optional

integer

The quote character to use when writing CSV.
 quote_style
optional

 <oneOf>

The quoting style to use when writing CSV data.
always
optional

always

This puts quotes around every field. Always.
necessary
optional

necessary

This puts quotes around fields only when necessary. They are necessary when fields contain a quote, delimiter or record terminator. Quotes are also necessary when writing an empty record (which is indistinguishable from a record with one empty field).
non_numeric
optional

non_numeric

This puts quotes around all fields that are non-numeric. Namely, when writing a field that does not parse as a valid float or integer, then quotes will be used even if they aren't strictly necessary.
never
optional

never

This never writes quotes, even if it would produce invalid CSV data.
codec
required

csv

Encodes an event as a CSV message.

This codec must be configured with fields to encode.
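
Putting the CSV options above together, a sketch of an encoding block (the field names are hypothetical; per this schema, `delimiter` and `quote` take the ASCII code of the character as an integer):

```yaml
encoding:
  codec: csv
  csv:
    fields: [timestamp, host, message]  # hypothetical event fields, in output order
    delimiter: 44                       # ASCII 44 = ","
    quote_style: necessary
```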
 Gelf
optional

object

Encodes an event as a [GELF](https://docs.graylog.org/docs/gelf) message.
codec
required

gelf

Encodes an event as a [GELF](https://docs.graylog.org/docs/gelf) message.
 Json
optional



Encodes an event as [JSON](https://www.json.org/).
 metric_tag_values
optional

 <oneOf>

Controls how metric tag values are encoded.

When set to `single`, only the last non-bare value of each tag is displayed with the metric. When set to `full`, all metric tags are exposed as separate assignments.
single
optional

single

Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
full
optional

full

All tags are exposed as arrays of either string or null values.
codec
required

json

Encodes an event as [JSON](https://www.json.org/).
 Logfmt
optional

object

Encodes an event as a [logfmt](https://brandur.org/logfmt) message.
codec
required

logfmt

Encodes an event as a [logfmt](https://brandur.org/logfmt) message.
 Native
optional

object

Encodes an event in the [native Protocol Buffers format](https://github.com/vectordotdev/vector/blob/master/lib/vector-core/proto/event.proto).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
codec
required

native

Encodes an event in the [native Protocol Buffers format](https://github.com/vectordotdev/vector/blob/master/lib/vector-core/proto/event.proto).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
 NativeJson
optional

object

Encodes an event in the [native JSON format](https://github.com/vectordotdev/vector/blob/master/lib/codecs/tests/data/native_encoding/schema.cue).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
codec
required

native_json

Encodes an event in the [native JSON format](https://github.com/vectordotdev/vector/blob/master/lib/codecs/tests/data/native_encoding/schema.cue).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
 RawMessage
optional

object

No encoding.

This encoding uses the `message` field of a log event.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
codec
required

raw_message

No encoding.

This encoding uses the `message` field of a log event.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
 Text
optional



Plain text encoding.

This encoding uses the `message` field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
 metric_tag_values
optional

 <oneOf>

Controls how metric tag values are encoded.

When set to `single`, only the last non-bare value of each tag is displayed with the metric. When set to `full`, all metric tags are exposed as separate assignments.
single
optional

single

Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
full
optional

full

All tags are exposed as arrays of either string or null values.
codec
required

text

Plain text encoding.

This encoding uses the `message` field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
except_fields
optional

array,​null

List of fields that are excluded from the encoded event.
only_fields
optional

array,​null

List of fields that are included in the encoded event.
 timestamp_format
optional

 <oneOf>

Format used for timestamp fields.
 Option 1
optional

 <oneOf>

The format in which a timestamp should be represented.
unix
optional

unix

Represent the timestamp as a Unix timestamp.
rfc3339
optional

rfc3339

Represent the timestamp as a RFC 3339 timestamp.
 framing
optional

 <oneOf>

Framing configuration.
 Option 1
optional

 <oneOf>

Framing configuration.
 Bytes
optional

object

Event data is not delimited at all.
method
required

bytes

Event data is not delimited at all.
 CharacterDelimited
optional



Event data is delimited by a single ASCII (7-bit) character.
 character_delimited
required

object

Options for the character delimited encoder.
delimiter
required

integer

The ASCII (7-bit) character that delimits byte sequences.
method
required

character_delimited

Event data is delimited by a single ASCII (7-bit) character.
 LengthDelimited
optional

object

Event data is prefixed with its length in bytes.

The prefix is a 32-bit unsigned integer, little endian.
method
required

length_delimited

Event data is prefixed with its length in bytes.

The prefix is a 32-bit unsigned integer, little endian.
 NewlineDelimited
optional

object

Event data is delimited by a newline (LF) character.
method
required

newline_delimited

Event data is delimited by a newline (LF) character.
{% /tab %}

{% tab title="httpsink-request-example" %}

```yaml
acknowledgements:
  enabled: null
auth: ''
batch:
  max_bytes: null
  max_events: null
  timeout_secs: null
compression: none
headers: object
method: post
payload_prefix: string
payload_suffix: string
request:
  adaptive_concurrency:
    decrease_ratio: 0.9
    ewma_alpha: 0.4
    initial_concurrency: 1
    rtt_deviation_scale: 2.5
  headers: {}
  rate_limit_duration_secs: 1
  rate_limit_num: 9223372036854776000
  retry_attempts: 9223372036854776000
  retry_initial_backoff_secs: 1
  retry_max_duration_secs: 3600
  timeout_secs: 60
tls: ''
uri: string
type: http
```

{% /tab %}
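
As a sketch of how `payload_prefix`, `payload_suffix`, and character-delimited framing combine, the following hypothetical configuration wraps comma-delimited JSON events in a single JSON object. The sink name, input, and URI are placeholders:

```yaml
sinks:
  http_out:
    type: http
    inputs: [my_source]                # hypothetical input component
    uri: https://example.com/ingest    # placeholder endpoint
    encoding:
      codec: json
    framing:
      method: character_delimited
      character_delimited:
        delimiter: 44                  # ASCII 44 = ","
    payload_prefix: '{"events":['      # together with the suffix, this
    payload_suffix: ']}'               # yields a valid JSON object per request
    compression:
      algorithm: gzip
      level: best
```

The prefix and suffix are only honored with character-delimited JSON encoding, and both must be set so that the assembled payload parses as valid JSON.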

### Humio Logs{% #humiologs %}

Configuration for the `humio_logs` sink.
OptionsSchema
{% tab title="humiologs-request-model" %}
FieldrequiredTypeDescription acknowledgements
optional

object

Controls how acknowledgements are handled for this sink.

See [End-to-end Acknowledgements](https://vector.dev/docs/about/under-the-hood/architecture/end-to-end-acknowledgements/) for more information on how event acknowledgement is handled.
enabled
optional

boolean,​null

Whether or not end-to-end acknowledgements are enabled.

When enabled for a sink, any source connected to that sink, where the source supports end-to-end acknowledgements as well, waits for events to be acknowledged by the sink before acknowledging them at the source.

Enabling or disabling acknowledgements at the sink level takes precedence over any global [`acknowledgements`](https://vector.dev/docs/reference/configuration/global-options/#acknowledgements) configuration.
default: null batch
optional

object

Event batching behavior.
max_bytes
optional

integer,​null

The maximum size of a batch that is processed by a sink.

This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: nullmax_events
optional

integer,​null

The maximum size of a batch before it is flushed.
default: nulltimeout_secs
optional

number,​null

The maximum age of a batch before it is flushed.
default: null compression
optional

 <oneOf>

Compression configuration.

All compression algorithms use the default compression level unless otherwise specified.
 Option 1
optional

 <oneOf>

Compression algorithm.
none
optional

none

No compression.
gzip
optional

gzip

[Gzip](https://www.gzip.org/) compression.
zlib
optional

zlib

[Zlib](https://zlib.net/) compression.
zstd
optional

zstd

[Zstandard](https://facebook.github.io/zstd/) compression.
 Option 2
optional

object

Compression algorithm and compression level.
 algorithm
required

 <oneOf>

Compression algorithm.
none
optional

none

No compression.
gzip
optional

gzip

[Gzip](https://www.gzip.org/) compression.
zlib
optional

zlib

[Zlib](https://zlib.net/) compression.
zstd
optional

zstd

[Zstandard](https://facebook.github.io/zstd/) compression.
level
optional

enum

Compression level. Allowed enum values: `none,fast,best,default,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21`
 encoding
required



Configures how events are encoded into raw bytes.
 Avro
optional

object

Encodes an event as an [Apache Avro](https://avro.apache.org/) message.
 avro
required

object

Apache Avro-specific encoder options.
schema
required

string

The Avro schema.
codec
required

avro

Encodes an event as an [Apache Avro](https://avro.apache.org/) message.
 Csv
optional



Encodes an event as a CSV message.

This codec must be configured with fields to encode.
 csv
required

object

The CSV Serializer Options.
capacity
optional

integer

Set the capacity (in bytes) of the internal buffer used in the CSV writer. This defaults to a reasonable setting.
delimiter
optional

integer

The field delimiter to use when writing CSV.
double_quote
optional

boolean

Enable double quote escapes.

This is enabled by default, but it may be disabled. When disabled, quotes in field data are escaped instead of doubled.
escape
optional

integer

The escape character to use when writing CSV.

In some variants of CSV, quotes are escaped using a special escape character like \ (instead of escaping quotes by doubling them).

To use this, `double_quote` must be disabled as well; otherwise, this option is ignored.
fields
required

[string]

Configures the fields that will be encoded, as well as the order in which they appear in the output.

If a field is not present in the event, the output will be an empty string.

Values of type `Array`, `Object`, and `Regex` are not supported and the output will be an empty string.
quote
optional

integer

The quote character to use when writing CSV.
 quote_style
optional

 <oneOf>

The quoting style to use when writing CSV data.
always
optional

always

This puts quotes around every field. Always.
necessary
optional

necessary

This puts quotes around fields only when necessary. They are necessary when fields contain a quote, delimiter or record terminator. Quotes are also necessary when writing an empty record (which is indistinguishable from a record with one empty field).
non_numeric
optional

non_numeric

This puts quotes around all fields that are non-numeric. Namely, when writing a field that does not parse as a valid float or integer, then quotes will be used even if they aren't strictly necessary.
never
optional

never

This never writes quotes, even if it would produce invalid CSV data.
codec
required

csv

Encodes an event as a CSV message.

This codec must be configured with fields to encode.
 Gelf
optional

object

Encodes an event as a [GELF](https://docs.graylog.org/docs/gelf) message.
codec
required

gelf

Encodes an event as a [GELF](https://docs.graylog.org/docs/gelf) message.
 Json
optional



Encodes an event as [JSON](https://www.json.org/).
 metric_tag_values
optional

 <oneOf>

Controls how metric tag values are encoded.

When set to `single`, only the last non-bare value of each tag is displayed with the metric. When set to `full`, all metric tags are exposed as separate assignments.
single
optional

single

Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
full
optional

full

All tags are exposed as arrays of either string or null values.
codec
required

json

Encodes an event as [JSON](https://www.json.org/).
 Logfmt
optional

object

Encodes an event as a [logfmt](https://brandur.org/logfmt) message.
codec
required

logfmt

Encodes an event as a [logfmt](https://brandur.org/logfmt) message.
 Native
optional

object

Encodes an event in the [native Protocol Buffers format](https://github.com/vectordotdev/vector/blob/master/lib/vector-core/proto/event.proto).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
codec
required

native

Encodes an event in the [native Protocol Buffers format](https://github.com/vectordotdev/vector/blob/master/lib/vector-core/proto/event.proto).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
 NativeJson
optional

object

Encodes an event in the [native JSON format](https://github.com/vectordotdev/vector/blob/master/lib/codecs/tests/data/native_encoding/schema.cue).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
codec
required

native_json

Encodes an event in the [native JSON format](https://github.com/vectordotdev/vector/blob/master/lib/codecs/tests/data/native_encoding/schema.cue).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
 RawMessage
optional

object

No encoding.

This encoding uses the `message` field of a log event.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
codec
required

raw_message

No encoding.

This encoding uses the `message` field of a log event.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
 Text
optional



Plain text encoding.

This encoding uses the `message` field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
 metric_tag_values
optional

 <oneOf>

Controls how metric tag values are encoded.

When set to `single`, only the last non-bare value of each tag is displayed with the metric. When set to `full`, all metric tags are exposed as separate assignments.
single
optional

single

Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
full
optional

full

All tags are exposed as arrays of either string or null values.
codec
required

text

Plain text encoding.

This encoding uses the `message` field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
except_fields
optional

array,​null

List of fields that are excluded from the encoded event.
only_fields
optional

array,​null

List of fields that are included in the encoded event.
 timestamp_format
optional

 <oneOf>

Format used for timestamp fields.
 Option 1
optional

 <oneOf>

The format in which a timestamp should be represented.
unix
optional

unix

Represent the timestamp as a Unix timestamp.
rfc3339
optional

rfc3339

Represent the timestamp as a RFC 3339 timestamp.
endpoint
optional

string

The base URL of the Humio instance.

The scheme (`http` or `https`) must be specified. No path should be included since the paths defined by the [`Splunk`](https://docs.splunk.com/Documentation/Splunk/8.0.0/Data/HECRESTendpoints) API are used.
 event_type
optional

 <oneOf>

The type of events sent to this sink. Humio uses this as the name of the parser to use to ingest the data.

If unset, Humio defaults it to `none`.
Option 1
optional

string

A templated field.

In many cases, components can be configured so that part of the component's functionality can be customized on a per-event basis. For example, a sink that writes events to a file can use an event field to determine which file each event is written to.

By using `Template`, users can specify either fixed strings or templated strings. Templated strings use a common syntax to refer to fields in an event that is used as the input data when rendering the template. An example of a fixed string is `my-file.log`. An example of a template string is `my-file-{{key}}.log`, where `{{key}}` is the key's value when the template is rendered into a string.
host_key
optional

string

Overrides the name of the log field used to retrieve the hostname to send to Humio.

By default, the [global `log_schema.host_key` option](https://vector.dev/docs/reference/configuration/global-options/#log_schema.host_key) is used.
 index
optional

 <oneOf>

Optional name of the repository to ingest into.

In public-facing APIs, this must (if present) be equal to the repository used to create the ingest token used for authentication.

In private cluster setups, Humio can be configured to allow these to be different.

For more information, see [Humio's Format of Data](https://docs.humio.com/integrations/data-shippers/hec/#format-of-data).
Option 1
optional

string

A templated field.

In many cases, components can be configured so that part of the component's functionality can be customized on a per-event basis. For example, a sink that writes events to a file can use an event field to determine which file each event is written to.

By using `Template`, users can specify either fixed strings or templated strings. Templated strings use a common syntax to refer to fields in an event that is used as the input data when rendering the template. An example of a fixed string is `my-file.log`. An example of a template string is `my-file-{{key}}.log`, where `{{key}}` is the key's value when the template is rendered into a string.
indexed_fields
optional

[string]

Event fields to be added to Humio's extra fields.

Can be used to tag events by specifying fields starting with `#`.

For more information, see [Humio's Format of Data](https://docs.humio.com/integrations/data-shippers/hec/#format-of-data).
 request
optional

object

Middleware settings for outbound requests.

Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
 adaptive_concurrency
optional

object

Configuration of adaptive concurrency parameters.

These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}decrease_ratio
optional

number

The fraction of the current value to set the new concurrency limit when decreasing the limit.

Valid values are greater than `0` and less than `1`. Smaller values cause the algorithm to scale back rapidly when latency increases.

Note that the new limit is rounded down after applying this ratio.
default: 0.9ewma_alpha
optional

number

The weighting of new measurements compared to older measurements.

Valid values are greater than `0` and less than `1`.

ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4initial_concurrency
optional

integer

The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).

It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the `adaptive_concurrency_limit` metric.
default: 1rtt_deviation_scale
optional

number

Scale of RTT deviations which are not considered anomalous.

Valid values are greater than or equal to `0`, and we expect reasonable values to range from `1.0` to `3.0`.

When calculating the past RTT average, we also compute a secondary "deviation" value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5 concurrency
optional

 <oneOf>

Configuration for outbound request concurrency.
none
optional

none

A fixed concurrency of 1.

Only one request can be outstanding at any given time.
adaptive
optional

adaptive

Concurrency will be managed by Vector's [Adaptive Request Concurrency](https://vector.dev/docs/about/under-the-hood/networking/arc/) feature.
Fixed
optional

integer

A fixed amount of concurrency will be allowed.
rate_limit_duration_secs
optional

integer,​null

The time window used for the `rate_limit_num` option.
default: 1rate_limit_num
optional

integer,​null

The maximum number of requests allowed within the `rate_limit_duration_secs` time window.
default: 9223372036854776000retry_attempts
optional

integer,​null

The maximum number of retries to make for failed requests.

The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000retry_initial_backoff_secs
optional

integer,​null

The amount of time to wait before attempting the first retry for a failed request.

After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1retry_max_duration_secs
optional

integer,​null

The maximum amount of time to wait between retries.
default: 3600timeout_secs
optional

integer,​null

The time a request can take before being aborted.

Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60 source
optional

 <oneOf>

The source of events sent to this sink.

Typically the filename the logs originated from. Maps to `@source` in Humio.
Option 1
optional

string

A templated field.

In many cases, components can be configured so that part of the component's functionality can be customized on a per-event basis. For example, a sink that writes events to a file can use an event field to determine which file each event is written to.

By using `Template`, users can specify either fixed strings or templated strings. Templated strings use a common syntax to refer to fields in an event that is used as the input data when rendering the template. An example of a fixed string is `my-file.log`. An example of a template string is `my-file-{{key}}.log`, where `{{key}}` is the key's value when the template is rendered into a string.
timestamp_key
optional

string

Overrides the name of the log field used to retrieve the timestamp to send to Humio.

By default, the [global `log_schema.timestamp_key` option](https://vector.dev/docs/reference/configuration/global-options/#log_schema.timestamp_key) is used.
timestamp_nanos_key
optional

string,​null

Overrides the name of the log field used to retrieve the nanosecond-enabled timestamp to send to Humio.
 tls
optional

 <oneOf>

TLS configuration.
 Option 1
optional

object

TLS configuration.
alpn_protocols
optional

array,​null

Sets the list of supported ALPN protocols.

Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order that they are defined.
 ca_file
optional

 <oneOf>

Absolute path to an additional CA certificate file.

The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
 crt_file
optional

 <oneOf>

Absolute path to a certificate file used to identify this server.

The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format.

If this is set, and is not a PKCS#12 archive, `key_file` must also be set.
Option 1
optional

string

A file path.
 key_file
optional

 <oneOf>

Absolute path to a private key file used to identify this server.

The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
key_pass
optional

string,​null

Passphrase used to unlock the encrypted key file.

This has no effect unless `key_file` is set.
verify_certificate
optional

boolean,​null

Enables certificate verification.

If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.

Relevant for both incoming and outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the validity of certificates.
verify_hostname
optional

boolean,​null

Enables hostname verification.

If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.

Only relevant for outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the remote hostname.
token
required

string

The Humio ingestion token.
{% /tab %}

{% tab title="humiologs-request-example" %}

```yaml
acknowledgements:
  enabled: null
batch:
  max_bytes: null
  max_events: null
  timeout_secs: null
compression: none
encoding: ''
endpoint: 'https://cloud.humio.com'
event_type: ''
host_key: host
index: ''
indexed_fields: []
request:
  adaptive_concurrency:
    decrease_ratio: 0.9
    ewma_alpha: 0.4
    initial_concurrency: 1
    rtt_deviation_scale: 2.5
  rate_limit_duration_secs: 1
  rate_limit_num: 9223372036854776000
  retry_attempts: 9223372036854776000
  retry_initial_backoff_secs: 1
  retry_max_duration_secs: 3600
  timeout_secs: 60
source: ''
timestamp_key: timestamp
timestamp_nanos_key: '@timestamp.nanos'
tls: ''
token: string
type: humio_logs
```

{% /tab %}
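As a concrete illustration of the schema above, a `humio_logs` sink sketch might look like the following; the component names, environment variable, and templated values are illustrative, not defaults:

```yaml
sinks:
  my_humio_sink:
    type: humio_logs
    inputs:
      - my_source                    # illustrative upstream component ID
    endpoint: "https://cloud.humio.com"
    token: "${HUMIO_TOKEN}"          # ingest token read from the environment
    encoding:
      codec: json
    compression:                     # object form: algorithm with explicit level
      algorithm: zstd
      level: best
    index: "production-{{service}}"  # templated repository name (illustrative)
    indexed_fields:
      - "#env"                       # fields starting with '#' tag events in Humio
```

The templated `index` value is rendered per event; the referenced field (`service` here) is an assumption for the example.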

### Humio Metrics{% #humiometrics %}

Configuration for the `humio_metrics` sink.
OptionsSchema
{% tab title="humiometrics-request-model" %}
FieldrequiredTypeDescription acknowledgements
optional

object

Controls how acknowledgements are handled for this sink.

See [End-to-end Acknowledgements](https://vector.dev/docs/about/under-the-hood/architecture/end-to-end-acknowledgements/) for more information on how event acknowledgement is handled.
enabled
optional

boolean,​null

Whether or not end-to-end acknowledgements are enabled.

When enabled for a sink, any source connected to that sink, where the source supports end-to-end acknowledgements as well, waits for events to be acknowledged by the sink before acknowledging them at the source.

Enabling or disabling acknowledgements at the sink level takes precedence over any global [`acknowledgements`](https://vector.dev/docs/reference/configuration/global-options/#acknowledgements) configuration.
default: null batch
optional

object

Event batching behavior.
max_bytes
optional

integer,​null

The maximum size of a batch that is processed by a sink.

This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: nullmax_events
optional

integer,​null

The maximum size of a batch before it is flushed.
default: nulltimeout_secs
optional

number,​null

The maximum age of a batch before it is flushed.
default: null compression
optional

 <oneOf>

Compression configuration.

All compression algorithms use the default compression level unless otherwise specified.
 Option 1
optional

 <oneOf>

Compression algorithm.
none
optional

none

No compression.
gzip
optional

gzip

[Gzip](https://www.gzip.org/) compression.
zlib
optional

zlib

[Zlib](https://zlib.net/) compression.
zstd
optional

zstd

[Zstandard](https://facebook.github.io/zstd/) compression.
 Option 2
optional

object

Compression algorithm and compression level.
 algorithm
required

 <oneOf>

Compression algorithm.
none
optional

none

No compression.
gzip
optional

gzip

[Gzip](https://www.gzip.org/) compression.
zlib
optional

zlib

[Zlib](https://zlib.net/) compression.
zstd
optional

zstd

[Zstandard](https://facebook.github.io/zstd/) compression.
level
optional

enum

Compression level. Allowed enum values: `none,fast,best,default,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21`
endpoint
optional

string

The base URL of the Humio instance.

The scheme (`http` or `https`) must be specified. No path should be included since the paths defined by the [`Splunk`](https://docs.splunk.com/Documentation/Splunk/8.0.0/Data/HECRESTendpoints) API are used.
 event_type
optional

 <oneOf>

The type of events sent to this sink. Humio uses this as the name of the parser to use to ingest the data.

If unset, Humio defaults it to `none`.
Option 1
optional

string

A templated field.

In many cases, components can be configured so that part of the component's functionality can be customized on a per-event basis. For example, a sink that writes events to a file can use an event field to determine which file each event is written to.

By using `Template`, users can specify either fixed strings or templated strings. Templated strings use a common syntax to refer to fields in an event that is used as the input data when rendering the template. An example of a fixed string is `my-file.log`. An example of a template string is `my-file-{{key}}.log`, where `{{key}}` is the key's value when the template is rendered into a string.
host_key
optional

string

Overrides the name of the log field used to retrieve the hostname to send to Humio.

By default, the [global `log_schema.host_key` option](https://vector.dev/docs/reference/configuration/global-options/#log_schema.host_key) is used.
 index
optional

 <oneOf>

Optional name of the repository to ingest into.

In public-facing APIs, this must (if present) be equal to the repository used to create the ingest token used for authentication.

In private cluster setups, Humio can be configured to allow these to be different.

For more information, see [Humio's Format of Data](https://docs.humio.com/integrations/data-shippers/hec/#format-of-data).
Option 1
optional

string

A templated field.

In many cases, components can be configured so that part of the component's functionality can be customized on a per-event basis. For example, a sink that writes events to a file can use an event field to determine which file each event is written to.

By using `Template`, users can specify either fixed strings or templated strings. Templated strings use a common syntax to refer to fields in an event that is used as the input data when rendering the template. An example of a fixed string is `my-file.log`. An example of a template string is `my-file-{{key}}.log`, where `{{key}}` is the key's value when the template is rendered into a string.
indexed_fields
optional

[string]

Event fields to be added to Humio's extra fields.

Can be used to tag events by specifying fields starting with `#`.

For more information, see [Humio's Format of Data](https://docs.humio.com/integrations/data-shippers/hec/#format-of-data).
 request
optional

object

Middleware settings for outbound requests.

Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
 adaptive_concurrency
optional

object

Configuration of adaptive concurrency parameters.

These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}decrease_ratio
optional

number

The fraction of the current value to set the new concurrency limit when decreasing the limit.

Valid values are greater than `0` and less than `1`. Smaller values cause the algorithm to scale back rapidly when latency increases.

Note that the new limit is rounded down after applying this ratio.
default: 0.9ewma_alpha
optional

number

The weighting of new measurements compared to older measurements.

Valid values are greater than `0` and less than `1`.

ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4initial_concurrency
optional

integer

The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).

It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the `adaptive_concurrency_limit` metric.
default: 1rtt_deviation_scale
optional

number

Scale of RTT deviations which are not considered anomalous.

Valid values are greater than or equal to `0`, and we expect reasonable values to range from `1.0` to `3.0`.

When calculating the past RTT average, we also compute a secondary "deviation" value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5 concurrency
optional

 <oneOf>

Configuration for outbound request concurrency.
none
optional

none

A fixed concurrency of 1.

Only one request can be outstanding at any given time.
adaptive
optional

adaptive

Concurrency will be managed by Vector's [Adaptive Request Concurrency](https://vector.dev/docs/about/under-the-hood/networking/arc/) feature.
Fixed
optional

integer

A fixed amount of concurrency will be allowed.
rate_limit_duration_secs
optional

integer,​null

The time window used for the `rate_limit_num` option.
default: 1rate_limit_num
optional

integer,​null

The maximum number of requests allowed within the `rate_limit_duration_secs` time window.
default: 9223372036854776000retry_attempts
optional

integer,​null

The maximum number of retries to make for failed requests.

The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000retry_initial_backoff_secs
optional

integer,​null

The amount of time to wait before attempting the first retry for a failed request.

After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1retry_max_duration_secs
optional

integer,​null

The maximum amount of time to wait between retries.
default: 3600timeout_secs
optional

integer,​null

The time a request can take before being aborted.

Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60 source
optional

 <oneOf>

The source of events sent to this sink.

Typically the filename the metrics originated from. Maps to `@source` in Humio.
Option 1
optional

string

A templated field.

In many cases, components can be configured so that part of the component's functionality can be customized on a per-event basis. For example, a sink that writes events to a file can use an event field to determine which file each event is written to.

By using `Template`, users can specify either fixed strings or templated strings. Templated strings use a common syntax to refer to fields in an event that is used as the input data when rendering the template. An example of a fixed string is `my-file.log`. An example of a template string is `my-file-{{key}}.log`, where `{{key}}` is the key's value when the template is rendered into a string.
 tls
optional

 <oneOf>

TLS configuration.
 Option 1
optional

object

TLS configuration.
alpn_protocols
optional

array,​null

Sets the list of supported ALPN protocols.

Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order that they are defined.
 ca_file
optional

 <oneOf>

Absolute path to an additional CA certificate file.

The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
 crt_file
optional

 <oneOf>

Absolute path to a certificate file used to identify this server.

The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format.

If this is set, and is not a PKCS#12 archive, `key_file` must also be set.
Option 1
optional

string

A file path.
 key_file
optional

 <oneOf>

Absolute path to a private key file used to identify this server.

The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
key_pass
optional

string,​null

Passphrase used to unlock the encrypted key file.

This has no effect unless `key_file` is set.
verify_certificate
optional

boolean,​null

Enables certificate verification.

If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.

Relevant for both incoming and outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the validity of certificates.
verify_hostname
optional

boolean,​null

Enables hostname verification.

If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.

Only relevant for outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the remote hostname.
token
required

string

The Humio ingestion token.
host_tag
optional

string,​null

Name of the tag in the metric to use for the source host.

If present, the value of the tag is set on the generated log event in the `host` field, where the field key uses the [global `host_key` option](https://vector.dev/docs/reference/configuration//global-options#log_schema.host_key).
log_namespace
optional

boolean,​null

The namespace to use for logs. This overrides the global setting.
 metric_tag_values
optional

 <oneOf>

Controls how metric tag values are encoded.

When set to `single`, only the last non-bare value of tags are displayed with the metric. When set to `full`, all metric tags are exposed as separate assignments as described by [the `native_json` codec](https://github.com/vectordotdev/vector/blob/master/lib/codecs/tests/data/native_encoding/schema.cue).
single
optional

single

Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
full
optional

full

All tags are exposed as arrays of either string or null values.
 timezone
optional

 <oneOf>

The name of the time zone to apply to timestamp conversions that do not contain an explicit time zone.

This overrides the [global `timezone`](https://vector.dev/docs/reference/configuration//global-options#timezone) option. The time zone name may be any name in the [TZ database](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) or `local` to indicate system local time.
 Option 1
optional

 <oneOf>

Timezone reference.

This can refer to any valid timezone as defined in the [TZ database](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones), or "local" which refers to the system local timezone.
Named
optional

string

A named timezone.

Must be a valid name in the [TZ database](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones).
{% /tab %}

{% tab title="humiometrics-request-example" %}

```yaml
acknowledgements:
  enabled: null
batch:
  max_bytes: null
  max_events: null
  timeout_secs: null
compression: none
endpoint: 'https://cloud.humio.com'
event_type: ''
host_key: host
index: ''
indexed_fields: []
request:
  adaptive_concurrency:
    decrease_ratio: 0.9
    ewma_alpha: 0.4
    initial_concurrency: 1
    rtt_deviation_scale: 2.5
  rate_limit_duration_secs: 1
  rate_limit_num: 9223372036854776000
  retry_attempts: 9223372036854776000
  retry_initial_backoff_secs: 1
  retry_max_duration_secs: 3600
  timeout_secs: 60
source: ''
tls: ''
token: string
type: humio_metrics
```

{% /tab %}
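Several sink options, such as file paths and topic names, accept templated strings like the `my-file-{{key}}.log` example above. As a sketch (the sink, source, and field names here are illustrative), a template can route each event based on one of its fields:

```yaml
# Hypothetical sketch: write each event to a file named after its
# `application_id` field. An event with `application_id: billing`
# renders the path `/var/log/billing.log`.
sinks:
  per_app_files:
    type: file
    inputs:
      - my_source
    path: "/var/log/{{ application_id }}.log"
    encoding:
      codec: json
```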

### InfluxDB Logs{% #influxdblogs %}

Configuration for the `influxdb_logs` sink.
OptionsSchema
{% tab title="influxdblogs-request-model" %}
FieldrequiredTypeDescription acknowledgements
optional

object

Controls how acknowledgements are handled for this sink.

See [End-to-end Acknowledgements](https://vector.dev/docs/about/under-the-hood/architecture/end-to-end-acknowledgements/) for more information on how event acknowledgement is handled.
enabled
optional

boolean,​null

Whether or not end-to-end acknowledgements are enabled.

When enabled for a sink, any source connected to that sink, where the source supports end-to-end acknowledgements as well, waits for events to be acknowledged by the sink before acknowledging them at the source.

Enabling or disabling acknowledgements at the sink level takes precedence over any global [`acknowledgements`](https://vector.dev/docs/reference/configuration/global-options/#acknowledgements) configuration.
default: null batch
optional

object

Event batching behavior.
max_bytes
optional

integer,​null

The maximum size of a batch that is processed by a sink.

This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: nullmax_events
optional

integer,​null

The maximum size of a batch before it is flushed.
default: nulltimeout_secs
optional

number,​null

The maximum age of a batch before it is flushed.
default: null encoding
optional

object

Transformations to prepare an event for serialization.
except_fields
optional

array,​null

List of fields that are excluded from the encoded event.
only_fields
optional

array,​null

List of fields that are included in the encoded event.
 timestamp_format
optional

 <oneOf>

Format used for timestamp fields.
 Option 1
optional

 <oneOf>

The format in which a timestamp should be represented.
unix
optional

unix

Represent the timestamp as a Unix timestamp.
rfc3339
optional

rfc3339

Represent the timestamp as an RFC 3339 timestamp.
endpoint
required

string

The endpoint to send data to.

This should be a full HTTP URI, including the scheme, host, and port.
 host_key
optional

 <oneOf>

Use this option to customize the key containing the hostname.

The setting of `log_schema.host_key`, usually `host`, is used here by default.
Option 1
optional

string

An optional path that deserializes an empty string to `None`.
measurement
optional

string,​null

The name of the InfluxDB measurement that is written to.
 message_key
optional

 <oneOf>

Use this option to customize the key containing the message.

The setting of `log_schema.message_key`, usually `message`, is used here by default.
Option 1
optional

string

An optional path that deserializes an empty string to `None`.
namespace
optional

string,​null

The namespace of the measurement name to use.

**DEPRECATED**: When specified, the measurement name is `<namespace>.vector`.
 request
optional

object

Middleware settings for outbound requests.

Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
 adaptive_concurrency
optional

object

Configuration of adaptive concurrency parameters.

These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}decrease_ratio
optional

number

The fraction of the current value to set the new concurrency limit when decreasing the limit.

Valid values are greater than `0` and less than `1`. Smaller values cause the algorithm to scale back rapidly when latency increases.

Note that the new limit is rounded down after applying this ratio.
default: 0.9ewma_alpha
optional

number

The weighting of new measurements compared to older measurements.

Valid values are greater than `0` and less than `1`.

ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4initial_concurrency
optional

integer

The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).

It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the `adaptive_concurrency_limit` metric.
default: 1rtt_deviation_scale
optional

number

Scale of RTT deviations which are not considered anomalous.

Valid values are greater than or equal to `0`, and we expect reasonable values to range from `1.0` to `3.0`.

When calculating the past RTT average, we also compute a secondary "deviation" value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5 concurrency
optional

 <oneOf>

Configuration for outbound request concurrency.
none
optional

none

A fixed concurrency of 1.

Only one request can be outstanding at any given time.
adaptive
optional

adaptive

Concurrency will be managed by Vector's [Adaptive Request Concurrency](https://vector.dev/docs/about/under-the-hood/networking/arc/) feature.
Fixed
optional

integer

A fixed amount of concurrency will be allowed.
rate_limit_duration_secs
optional

integer,​null

The time window used for the `rate_limit_num` option.
default: 1rate_limit_num
optional

integer,​null

The maximum number of requests allowed within the `rate_limit_duration_secs` time window.
default: 9223372036854776000retry_attempts
optional

integer,​null

The maximum number of retries to make for failed requests.

The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000retry_initial_backoff_secs
optional

integer,​null

The amount of time to wait before attempting the first retry for a failed request.

After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1retry_max_duration_secs
optional

integer,​null

The maximum amount of time to wait between retries.
default: 3600timeout_secs
optional

integer,​null

The time a request can take before being aborted.

Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60 source_type_key
optional

 <oneOf>

Use this option to customize the key containing the source_type.

The setting of `log_schema.source_type_key`, usually `source_type`, is used here by default.
Option 1
optional

string

An optional path that deserializes an empty string to `None`.
tags
optional

[string]

The list of names of log fields that should be added as tags to each measurement.

By default Vector adds `metric_type` as well as the configured `log_schema.host_key` and `log_schema.source_type_key` options.
 tls
optional

 <oneOf>

TLS configuration.
 Option 1
optional

object

TLS configuration.
alpn_protocols
optional

array,​null

Sets the list of supported ALPN protocols.

Declare the supported ALPN protocols, which are used during negotiation with peer. They are prioritized in the order that they are defined.
 ca_file
optional

 <oneOf>

Absolute path to an additional CA certificate file.

The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
 crt_file
optional

 <oneOf>

Absolute path to a certificate file used to identify this server.

The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format.

If this is set, and is not a PKCS#12 archive, `key_file` must also be set.
Option 1
optional

string

A file path.
 key_file
optional

 <oneOf>

Absolute path to a private key file used to identify this server.

The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
key_pass
optional

string,​null

Passphrase used to unlock the encrypted key file.

This has no effect unless `key_file` is set.
verify_certificate
optional

boolean,​null

Enables certificate verification.

If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.

Relevant for both incoming and outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the validity of certificates.
verify_hostname
optional

boolean,​null

Enables hostname verification.

If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.

Only relevant for outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the remote hostname.
The remaining options are version-specific: one set of configuration settings applies to InfluxDB v0.x/v1.x, and another applies to InfluxDB v2.x.
{% /tab %}

{% tab title="influxdblogs-request-example" %}

```yaml
acknowledgements:
  enabled: null
batch:
  max_bytes: null
  max_events: null
  timeout_secs: null
encoding: {}
endpoint: string
host_key: ''
measurement: string
message_key: ''
namespace: string
request:
  adaptive_concurrency:
    decrease_ratio: 0.9
    ewma_alpha: 0.4
    initial_concurrency: 1
    rtt_deviation_scale: 2.5
  rate_limit_duration_secs: 1
  rate_limit_num: 9223372036854776000
  retry_attempts: 9223372036854776000
  retry_initial_backoff_secs: 1
  retry_max_duration_secs: 3600
  timeout_secs: 60
source_type_key: ''
tags: []
tls: ''
type: influxdb_logs
```

{% /tab %}
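The version-specific settings mentioned in the schema differ between InfluxDB releases. As a hedged sketch (endpoints, names, and credentials below are placeholders), a v1.x configuration identifies a `database` and optionally supplies basic credentials, while a v2.x configuration supplies `org`, `bucket`, and `token`:

```yaml
sinks:
  # InfluxDB v0.x/v1.x: write to a database, optionally with credentials.
  logs_to_influx_v1:
    type: influxdb_logs
    inputs:
      - my_source
    endpoint: "http://localhost:8086"
    database: vector-logs          # v1.x only
    # username: vector             # optional v1.x credentials
    # password: "${INFLUX_PASSWORD}"

  # InfluxDB v2.x: authenticate with an API token and write to a bucket.
  logs_to_influx_v2:
    type: influxdb_logs
    inputs:
      - my_source
    endpoint: "http://localhost:8086"
    org: my-org                    # v2.x only
    bucket: vector-logs            # v2.x only
    token: "${INFLUX_TOKEN}"
```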

### InfluxDB Metrics{% #influxdb %}

Configuration for the `influxdb_metrics` sink.
OptionsSchema
{% tab title="influxdb-request-model" %}
FieldrequiredTypeDescription acknowledgements
optional

object

Controls how acknowledgements are handled for this sink.

See [End-to-end Acknowledgements](https://vector.dev/docs/about/under-the-hood/architecture/end-to-end-acknowledgements/) for more information on how event acknowledgement is handled.
enabled
optional

boolean,​null

Whether or not end-to-end acknowledgements are enabled.

When enabled for a sink, any source connected to that sink, where the source supports end-to-end acknowledgements as well, waits for events to be acknowledged by the sink before acknowledging them at the source.

Enabling or disabling acknowledgements at the sink level takes precedence over any global [`acknowledgements`](https://vector.dev/docs/reference/configuration/global-options/#acknowledgements) configuration.
default: null batch
optional

object

Event batching behavior.
max_bytes
optional

integer,​null

The maximum size of a batch that is processed by a sink.

This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: nullmax_events
optional

integer,​null

The maximum size of a batch before it is flushed.
default: nulltimeout_secs
optional

number,​null

The maximum age of a batch before it is flushed.
default: nulldefault_namespace
optional

string,​null

Sets the default namespace for any metrics sent.

This namespace is only used if a metric has no existing namespace. When a namespace is present, it is used as a prefix to the metric name, and separated with a period (`.`).
endpoint
required

string

The endpoint to send data to.

This should be a full HTTP URI, including the scheme, host, and port.
quantiles
optional

[number]

The list of quantiles to calculate when sending distribution metrics.
 request
optional

object

Middleware settings for outbound requests.

Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
 adaptive_concurrency
optional

object

Configuration of adaptive concurrency parameters.

These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}decrease_ratio
optional

number

The fraction of the current value to set the new concurrency limit when decreasing the limit.

Valid values are greater than `0` and less than `1`. Smaller values cause the algorithm to scale back rapidly when latency increases.

Note that the new limit is rounded down after applying this ratio.
default: 0.9ewma_alpha
optional

number

The weighting of new measurements compared to older measurements.

Valid values are greater than `0` and less than `1`.

ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4initial_concurrency
optional

integer

The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).

It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the `adaptive_concurrency_limit` metric.
default: 1rtt_deviation_scale
optional

number

Scale of RTT deviations which are not considered anomalous.

Valid values are greater than or equal to `0`, and we expect reasonable values to range from `1.0` to `3.0`.

When calculating the past RTT average, we also compute a secondary "deviation" value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5 concurrency
optional

 <oneOf>

Configuration for outbound request concurrency.
none
optional

none

A fixed concurrency of 1.

Only one request can be outstanding at any given time.
adaptive
optional

adaptive

Concurrency will be managed by Vector's [Adaptive Request Concurrency](https://vector.dev/docs/about/under-the-hood/networking/arc/) feature.
Fixed
optional

integer

A fixed amount of concurrency will be allowed.
rate_limit_duration_secs
optional

integer,​null

The time window used for the `rate_limit_num` option.
default: 1rate_limit_num
optional

integer,​null

The maximum number of requests allowed within the `rate_limit_duration_secs` time window.
default: 9223372036854776000retry_attempts
optional

integer,​null

The maximum number of retries to make for failed requests.

The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000retry_initial_backoff_secs
optional

integer,​null

The amount of time to wait before attempting the first retry for a failed request.

After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1retry_max_duration_secs
optional

integer,​null

The maximum amount of time to wait between retries.
default: 3600timeout_secs
optional

integer,​null

The time a request can take before being aborted.

Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60tags
optional

object,​null

A map of additional tags, in the key/value pair format, to add to each measurement.
 tls
optional

 <oneOf>

TLS configuration.
 Option 1
optional

object

TLS configuration.
alpn_protocols
optional

array,​null

Sets the list of supported ALPN protocols.

Declare the supported ALPN protocols, which are used during negotiation with peer. They are prioritized in the order that they are defined.
 ca_file
optional

 <oneOf>

Absolute path to an additional CA certificate file.

The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
 crt_file
optional

 <oneOf>

Absolute path to a certificate file used to identify this server.

The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format.

If this is set, and is not a PKCS#12 archive, `key_file` must also be set.
Option 1
optional

string

A file path.
 key_file
optional

 <oneOf>

Absolute path to a private key file used to identify this server.

The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
key_pass
optional

string,​null

Passphrase used to unlock the encrypted key file.

This has no effect unless `key_file` is set.
verify_certificate
optional

boolean,​null

Enables certificate verification.

If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.

Relevant for both incoming and outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the validity of certificates.
verify_hostname
optional

boolean,​null

Enables hostname verification.

If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.

Only relevant for outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the remote hostname.
The remaining options are version-specific: one set of configuration settings applies to InfluxDB v0.x/v1.x, and another applies to InfluxDB v2.x.
{% /tab %}

{% tab title="influxdb-request-example" %}

```yaml
acknowledgements:
  enabled: null
batch:
  max_bytes: null
  max_events: null
  timeout_secs: null
default_namespace: string
endpoint: string
quantiles:
  - 0.5
  - 0.75
  - 0.9
  - 0.95
  - 0.99
request:
  adaptive_concurrency:
    decrease_ratio: 0.9
    ewma_alpha: 0.4
    initial_concurrency: 1
    rtt_deviation_scale: 2.5
  rate_limit_duration_secs: 1
  rate_limit_num: 9223372036854776000
  retry_attempts: 9223372036854776000
  retry_initial_backoff_secs: 1
  retry_max_duration_secs: 3600
  timeout_secs: 60
tags: object
tls: ''
type: influxdb_metrics
```

{% /tab %}
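The `default_namespace` option only applies to metrics that carry no namespace of their own; when it applies, the measurement name becomes `<namespace>.<metric name>`. A minimal sketch (endpoint, org, bucket, and token are placeholders, shown here with v2.x-style settings):

```yaml
sinks:
  metrics_to_influx:
    type: influxdb_metrics
    inputs:
      - my_metrics_source
    endpoint: "http://localhost:8086"
    # A metric named `requests_total` with no namespace is written
    # as the measurement `service.requests_total`.
    default_namespace: service
    org: my-org
    bucket: vector-metrics
    token: "${INFLUX_TOKEN}"
    quantiles: [0.5, 0.9, 0.99]  # used when encoding distribution metrics
```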

### Kafka{% #kafkasink %}

Configuration for the `kafka` sink.
OptionsSchema
{% tab title="kafkasink-request-model" %}
FieldrequiredTypeDescription acknowledgements
optional

object

Controls how acknowledgements are handled for this sink.

See [End-to-end Acknowledgements](https://vector.dev/docs/about/under-the-hood/architecture/end-to-end-acknowledgements/) for more information on how event acknowledgement is handled.
enabled
optional

boolean,​null

Whether or not end-to-end acknowledgements are enabled.

When enabled for a sink, any source connected to that sink, where the source supports end-to-end acknowledgements as well, waits for events to be acknowledged by the sink before acknowledging them at the source.

Enabling or disabling acknowledgements at the sink level takes precedence over any global [`acknowledgements`](https://vector.dev/docs/reference/configuration/global-options/#acknowledgements) configuration.
default: null batch
optional

object

Event batching behavior.
max_bytes
optional

integer,​null

The maximum size of a batch that is processed by a sink.

This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: nullmax_events
optional

integer,​null

The maximum size of a batch before it is flushed.
default: nulltimeout_secs
optional

number,​null

The maximum age of a batch before it is flushed.
default: nullbootstrap_servers
required

string

A comma-separated list of Kafka bootstrap servers.

These are the servers in a Kafka cluster that a client should use to bootstrap its connection to the cluster, allowing discovery of all the other hosts in the cluster.

Must be in the form of `host:port`, and comma-separated.
 compression
optional

 <oneOf>

Supported compression types for Kafka.
none
optional

none

No compression.
gzip
optional

gzip

Gzip.
snappy
optional

snappy

Snappy.
lz4
optional

lz4

LZ4.
zstd
optional

zstd

Zstandard.
 encoding
required



Configures how events are encoded into raw bytes.
 Avro
optional

object

Encodes an event as an [Apache Avro](https://avro.apache.org/) message.
 avro
required

object

Apache Avro-specific encoder options.
schema
required

string

The Avro schema.
codec
required

avro

Encodes an event as an [Apache Avro](https://avro.apache.org/) message.
 Csv
optional



Encodes an event as a CSV message.

This codec must be configured with fields to encode.
 csv
required

object

The CSV Serializer Options.
capacity
optional

integer

Set the capacity (in bytes) of the internal buffer used in the CSV writer. This defaults to a reasonable setting.
delimiter
optional

integer

The field delimiter to use when writing CSV.
double_quote
optional

boolean

Enable double quote escapes.

This is enabled by default, but it may be disabled. When disabled, quotes in field data are escaped instead of doubled.
escape
optional

integer

The escape character to use when writing CSV.

In some variants of CSV, quotes are escaped using a special escape character like \ (instead of escaping quotes by doubling them).

To use this, `double_quote` must also be disabled; otherwise this option is ignored.
fields
required

[string]

Configures the fields that will be encoded, as well as the order in which they appear in the output.

If a field is not present in the event, the output will be an empty string.

Values of type `Array`, `Object`, and `Regex` are not supported and the output will be an empty string.
quote
optional

integer

The quote character to use when writing CSV.
 quote_style
optional

 <oneOf>

The quoting style to use when writing CSV data.
always
optional

always

This puts quotes around every field. Always.
necessary
optional

necessary

This puts quotes around fields only when necessary. They are necessary when fields contain a quote, delimiter or record terminator. Quotes are also necessary when writing an empty record (which is indistinguishable from a record with one empty field).
non_numeric
optional

non_numeric

This puts quotes around all fields that are non-numeric. Namely, when writing a field that does not parse as a valid float or integer, then quotes will be used even if they aren't strictly necessary.
never
optional

never

This never writes quotes, even if it would produce invalid CSV data.
codec
required

csv

Encodes an event as a CSV message.

This codec must be configured with fields to encode.
 Gelf
optional

object

Encodes an event as a [GELF](https://docs.graylog.org/docs/gelf) message.
codec
required

gelf

Encodes an event as a [GELF](https://docs.graylog.org/docs/gelf) message.
 Json
optional



Encodes an event as [JSON](https://www.json.org/).
 metric_tag_values
optional

 <oneOf>

Controls how metric tag values are encoded.

When set to `single`, only the last non-bare value of tags are displayed with the metric. When set to `full`, all metric tags are exposed as separate assignments.
single
optional

single

Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
full
optional

full

All tags are exposed as arrays of either string or null values.
codec
required

json

Encodes an event as [JSON](https://www.json.org/).
 Logfmt
optional

object

Encodes an event as a [logfmt](https://brandur.org/logfmt) message.
codec
required

logfmt

Encodes an event as a [logfmt](https://brandur.org/logfmt) message.
 Native
optional

object

Encodes an event in the [native Protocol Buffers format](https://github.com/vectordotdev/vector/blob/master/lib/vector-core/proto/event.proto).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
codec
required

native

Encodes an event in the [native Protocol Buffers format](https://github.com/vectordotdev/vector/blob/master/lib/vector-core/proto/event.proto).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
 NativeJson
optional

object

Encodes an event in the [native JSON format](https://github.com/vectordotdev/vector/blob/master/lib/codecs/tests/data/native_encoding/schema.cue).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
codec
required

native_json

Encodes an event in the [native JSON format](https://github.com/vectordotdev/vector/blob/master/lib/codecs/tests/data/native_encoding/schema.cue).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
 RawMessage
optional

object

No encoding.

This encoding uses the `message` field of a log event.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
codec
required

raw_message

No encoding.

This encoding uses the `message` field of a log event.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
 Text
optional



Plain text encoding.

This encoding uses the `message` field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
 metric_tag_values
optional

 <oneOf>

Controls how metric tag values are encoded.

When set to `single`, only the last non-bare value of tags are displayed with the metric. When set to `full`, all metric tags are exposed as separate assignments.
single
optional

single

Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
full
optional

full

All tags are exposed as arrays of either string or null values.
codec
required

text

Plain text encoding.

This encoding uses the `message` field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
except_fields
optional

array,​null

List of fields that are excluded from the encoded event.
only_fields
optional

array,​null

List of fields that are included in the encoded event.
 timestamp_format
optional

 <oneOf>

Format used for timestamp fields.
 Option 1
optional

 <oneOf>

The format in which a timestamp should be represented.
unix
optional

unix

Represent the timestamp as a Unix timestamp.
rfc3339
optional

rfc3339

Represent the timestamp as an RFC 3339 timestamp.
 headers_key
optional

 <oneOf>

The log field name to use for the Kafka headers.

If omitted, no headers are written.
Option 1
optional

string

A wrapper around `OwnedTargetPath` that allows it to be used in Vector configuration, with the prefix defaulting to `PathPrefix::Event`.
 key_field
optional

 <oneOf>

The log field name or tag key to use for the topic key.

If the field does not exist in the log or in the tags, a blank value is used. If unspecified, the key is not sent.

Kafka uses a hash of the key to choose the partition or uses round-robin if the record has no key.
Option 1
optional

string

A wrapper around `OwnedTargetPath` that allows it to be used in Vector configuration, with the prefix defaulting to `PathPrefix::Event`.
librdkafka_options
optional

object

A map of advanced options to pass directly to the underlying `librdkafka` client.

For more information on configuration options, see [Configuration properties](https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md).
message_timeout_ms
optional

integer

Local message timeout, in milliseconds.
socket_timeout_ms
optional

integer

Default timeout, in milliseconds, for network requests.
topic
required

string

A templated field.

The Kafka topic name to write events to.
 sasl
optional

 <oneOf>

Configuration for SASL authentication when interacting with Kafka.
 Option 1
optional

object

Configuration for SASL authentication when interacting with Kafka.
enabled
optional

boolean,​null

Enables SASL authentication.

Only `PLAIN`- and `SCRAM`-based mechanisms are supported when configuring SASL authentication using `sasl.*`. For other mechanisms, the corresponding `librdkafka`-specific values must be set directly through `librdkafka_options.*`. For example, with `sasl.kerberos.*` settings, where `*` is `service.name`, `principal`, `kinit.cmd`, and so on, the resulting keys become `librdkafka_options.sasl.kerberos.service.name`, `librdkafka_options.sasl.kerberos.principal`, and so on.

See the [librdkafka documentation](https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md) for details.

SASL authentication is not supported on Windows.
mechanism
optional

string,​null

The SASL mechanism to use.
 password
optional

 <oneOf>

The SASL password.
Option 1
optional

string

A wrapper for sensitive strings containing credentials.
username
optional

string,​null

The SASL username.
 tls
optional

 <oneOf>

Configures the TLS options for incoming/outgoing connections.
 Option 1
optional



Configures the TLS options for incoming/outgoing connections.
enabled
optional

boolean,​null

Whether or not to require TLS for incoming or outgoing connections.

When enabled and used for incoming connections, an identity certificate is also required. See `tls.crt_file` for more information.
alpn_protocols
optional

array,​null

Sets the list of supported ALPN protocols.

Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order that they are defined.
 ca_file
optional

 <oneOf>

Absolute path to an additional CA certificate file.

The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
 crt_file
optional

 <oneOf>

Absolute path to a certificate file used to identify this server.

The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format.

If this is set, and is not a PKCS#12 archive, `key_file` must also be set.
Option 1
optional

string

A file path.
 key_file
optional

 <oneOf>

Absolute path to a private key file used to identify this server.

The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
key_pass
optional

string,​null

Passphrase used to unlock the encrypted key file.

This has no effect unless `key_file` is set.
verify_certificate
optional

boolean,​null

Enables certificate verification.

If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.

Relevant for both incoming and outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the validity of certificates.
verify_hostname
optional

boolean,​null

Enables hostname verification.

If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.

Only relevant for outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the remote hostname.
{% /tab %}

{% tab title="kafkasink-request-example" %}

```yaml
acknowledgements:
  enabled: null
batch:
  max_bytes: null
  max_events: null
  timeout_secs: null
bootstrap_servers: string
compression: none
encoding: ''
headers_key: ''
key_field: ''
librdkafka_options: {}
message_timeout_ms: 300000
socket_timeout_ms: 60000
topic: string
type: kafka
```

{% /tab %}
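
As an illustration of the SASL options described above, a minimal `kafka` sink using a SCRAM-based mechanism might look like the following sketch. All names, addresses, and topic values are placeholders; a GSSAPI/Kerberos setup would instead pass its settings through `librdkafka_options.*` as noted in the `sasl` description.

```yaml
# Hypothetical kafka sink configuration (illustrative values only).
sinks:
  my_kafka_sink:
    type: kafka
    inputs:
      - my_source
    bootstrap_servers: "kafka-1.example.com:9092"
    topic: "vector-events"
    encoding:
      codec: json
    sasl:
      enabled: true
      mechanism: "SCRAM-SHA-512"
      username: "vector"
      password: "${KAFKA_PASSWORD}"
```

Because only `PLAIN`- and `SCRAM`-based mechanisms are supported under `sasl.*`, a Kerberos deployment would omit the `sasl` block and set keys such as `librdkafka_options."sasl.kerberos.service.name"` directly.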

### LogDNA{% #logdna %}

Configuration for the `logdna` sink.
OptionsSchema
{% tab title="logdna-request-model" %}
FieldrequiredTypeDescription acknowledgements
optional

object

Controls how acknowledgements are handled for this sink.

See [End-to-end Acknowledgements](https://vector.dev/docs/about/under-the-hood/architecture/end-to-end-acknowledgements/) for more information on how event acknowledgement is handled.
enabled
optional

boolean,​null

Whether or not end-to-end acknowledgements are enabled.

When enabled for a sink, any source connected to that sink, where the source supports end-to-end acknowledgements as well, waits for events to be acknowledged by the sink before acknowledging them at the source.

Enabling or disabling acknowledgements at the sink level takes precedence over any global [`acknowledgements`](https://vector.dev/docs/reference/configuration/global-options/#acknowledgements) configuration.
default: null
api_key
required

string

The Ingestion API key.
 batch
optional

object

Event batching behavior.
max_bytes
optional

integer,​null

The maximum size of a batch that is processed by a sink.

This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
max_events
optional

integer,​null

The maximum size of a batch before it is flushed.
default: null
timeout_secs
optional

number,​null

The maximum age of a batch before it is flushed.
default: null
default_app
optional

string

The default app that is set for events that do not contain a `file` or `app` field.
default_env
optional

string

The default environment that is set for events that do not contain an `env` field.
 encoding
optional

object

Transformations to prepare an event for serialization.
except_fields
optional

array,​null

List of fields that are excluded from the encoded event.
only_fields
optional

array,​null

List of fields that are included in the encoded event.
 timestamp_format
optional

 <oneOf>

Format used for timestamp fields.
 Option 1
optional

 <oneOf>

The format in which a timestamp should be represented.
unix
optional

unix

Represent the timestamp as a Unix timestamp.
rfc3339
optional

rfc3339

Represent the timestamp as an RFC 3339 timestamp.
endpoint
optional

string

The HTTP endpoint to send logs to.

Both IP address and hostname are accepted formats.
hostname
required

string

A templated field.

The hostname that is attached to each batch of events.
ip
optional

string,​null

The IP address that is attached to each batch of events.
mac
optional

string,​null

The MAC address that is attached to each batch of events.
 request
optional

object

Middleware settings for outbound requests.

Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
 adaptive_concurrency
optional

object

Configuration of adaptive concurrency parameters.

These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}
decrease_ratio
optional

number

The fraction of the current value to set the new concurrency limit when decreasing the limit.

Valid values are greater than `0` and less than `1`. Smaller values cause the algorithm to scale back rapidly when latency increases.

Note that the new limit is rounded down after applying this ratio.
default: 0.9
ewma_alpha
optional

number

The weighting of new measurements compared to older measurements.

Valid values are greater than `0` and less than `1`.

ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4
initial_concurrency
optional

integer

The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).

It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the `adaptive_concurrency_limit` metric.
default: 1
rtt_deviation_scale
optional

number

Scale of RTT deviations which are not considered anomalous.

Valid values are greater than or equal to `0`, and we expect reasonable values to range from `1.0` to `3.0`.

When calculating the past RTT average, we also compute a secondary "deviation" value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
 concurrency
optional

 <oneOf>

Configuration for outbound request concurrency.
none
optional

none

A fixed concurrency of 1.

Only one request can be outstanding at any given time.
adaptive
optional

adaptive

Concurrency will be managed by Vector's [Adaptive Request Concurrency](https://vector.dev/docs/about/under-the-hood/networking/arc/) feature.
Fixed
optional

integer

A fixed amount of concurrency will be allowed.
rate_limit_duration_secs
optional

integer,​null

The time window used for the `rate_limit_num` option.
default: 1
rate_limit_num
optional

integer,​null

The maximum number of requests allowed within the `rate_limit_duration_secs` time window.
default: 9223372036854776000
retry_attempts
optional

integer,​null

The maximum number of retries to make for failed requests.

The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000
retry_initial_backoff_secs
optional

integer,​null

The amount of time to wait before attempting the first retry for a failed request.

After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1
retry_max_duration_secs
optional

integer,​null

The maximum amount of time to wait between retries.
default: 3600
timeout_secs
optional

integer,​null

The time a request can take before being aborted.

Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60
tags
optional

array,​null

The tags that are attached to each batch of events.
{% /tab %}

{% tab title="logdna-request-example" %}

```yaml
acknowledgements:
  enabled: null
api_key: string
batch:
  max_bytes: null
  max_events: null
  timeout_secs: null
default_app: vector
default_env: production
encoding: {}
endpoint: 'https://logs.mezmo.com/'
hostname: string
ip: string
mac: string
request:
  adaptive_concurrency:
    decrease_ratio: 0.9
    ewma_alpha: 0.4
    initial_concurrency: 1
    rtt_deviation_scale: 2.5
  rate_limit_duration_secs: 1
  rate_limit_num: 9223372036854776000
  retry_attempts: 9223372036854776000
  retry_initial_backoff_secs: 1
  retry_max_duration_secs: 3600
  timeout_secs: 60
tags: array
type: logdna
```

{% /tab %}
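
Since `hostname` is a templated field, it can be rendered per batch from an event field. The following sketch is a hypothetical minimal `logdna` sink; the API key, field names, and app/environment values are placeholders.

```yaml
# Hypothetical logdna sink configuration (illustrative values only).
sinks:
  my_logdna_sink:
    type: logdna
    inputs:
      - my_source
    api_key: "${LOGDNA_API_KEY}"
    # Templated field: rendered from the event's "host" field.
    hostname: "{{ host }}"
    default_app: my-app
    default_env: production
```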

### Loki{% #loki %}

Configuration for the `loki` sink.
OptionsSchema
{% tab title="loki-request-model" %}
FieldrequiredTypeDescription acknowledgements
optional

object

Controls how acknowledgements are handled for this sink.

See [End-to-end Acknowledgements](https://vector.dev/docs/about/under-the-hood/architecture/end-to-end-acknowledgements/) for more information on how event acknowledgement is handled.
enabled
optional

boolean,​null

Whether or not end-to-end acknowledgements are enabled.

When enabled for a sink, any source connected to that sink, where the source supports end-to-end acknowledgements as well, waits for events to be acknowledged by the sink before acknowledging them at the source.

Enabling or disabling acknowledgements at the sink level takes precedence over any global [`acknowledgements`](https://vector.dev/docs/reference/configuration/global-options/#acknowledgements) configuration.
default: null
 auth
optional

 <oneOf>

Configuration of the authentication strategy for HTTP requests.

HTTP authentication should be used with HTTPS only, as the authentication credentials are passed as an HTTP header without any additional encryption beyond what is provided by the transport itself.
 Option 1
optional

 <oneOf>

Configuration of the authentication strategy for HTTP requests.

HTTP authentication should be used with HTTPS only, as the authentication credentials are passed as an HTTP header without any additional encryption beyond what is provided by the transport itself.
 Basic
optional

object

Basic authentication.

The username and password are concatenated and encoded via [base64](https://en.wikipedia.org/wiki/Base64).
password
required

string

The basic authentication password.
strategy
required

basic

Basic authentication.

The username and password are concatenated and encoded via [base64](https://en.wikipedia.org/wiki/Base64).
user
required

string

The basic authentication username.
 Bearer
optional

object

Bearer authentication.

The bearer token value (OAuth2, JWT, etc.) is passed as-is.
strategy
required

bearer

Bearer authentication.

The bearer token value (OAuth2, JWT, etc.) is passed as-is.
token
required

string

The bearer authentication token.
 batch
optional

object

Event batching behavior.
max_bytes
optional

integer,​null

The maximum size of a batch that is processed by a sink.

This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
max_events
optional

integer,​null

The maximum size of a batch before it is flushed.
default: null
timeout_secs
optional

number,​null

The maximum age of a batch before it is flushed.
default: null
 compression
optional

 <oneOf>

Compression configuration.
 Original
optional

 <oneOf>

Compression configuration.

Basic compression.
 Option 1
optional

 <oneOf>

Compression algorithm.
none
optional

none

No compression.
gzip
optional

gzip

[Gzip](https://www.gzip.org/) compression.
zlib
optional

zlib

[Zlib](https://zlib.net/) compression.
zstd
optional

zstd

[Zstandard](https://facebook.github.io/zstd/) compression.
 Option 2
optional

object

Compression algorithm and compression level.
 algorithm
required

 <oneOf>

Compression algorithm.
none
optional

none

No compression.
gzip
optional

gzip

[Gzip](https://www.gzip.org/) compression.
zlib
optional

zlib

[Zlib](https://zlib.net/) compression.
zstd
optional

zstd

[Zstandard](https://facebook.github.io/zstd/) compression.
level
optional

enum

Compression level. Allowed enum values: `none,fast,best,default,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21`
 Extended
optional

 <oneOf>

Loki-specific compression.
snappy
optional

snappy

Snappy compression.

This implies sending push requests as Protocol Buffers.
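
The compression variants described above can be written either as a bare algorithm name or as an object that also carries a level. Both forms are sketched below; the chosen algorithm and level are illustrative.

```yaml
# Option 1: bare algorithm.
compression: gzip

# Option 2: algorithm with an explicit level.
compression:
  algorithm: zstd
  level: 3

# Loki-specific extension: snappy (implies Protocol Buffers push requests).
compression: snappy
```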
 encoding
required



Configures how events are encoded into raw bytes.
 Avro
optional

object

Encodes an event as an [Apache Avro](https://avro.apache.org/) message.
 avro
required

object

Apache Avro-specific encoder options.
schema
required

string

The Avro schema.
codec
required

avro

Encodes an event as an [Apache Avro](https://avro.apache.org/) message.
 Csv
optional



Encodes an event as a CSV message.

This codec must be configured with fields to encode.
 csv
required

object

The CSV Serializer Options.
capacity
optional

integer

Set the capacity (in bytes) of the internal buffer used in the CSV writer. This defaults to a reasonable setting.
delimiter
optional

integer

The field delimiter to use when writing CSV.
double_quote
optional

boolean

Enable double quote escapes.

This is enabled by default, but it may be disabled. When disabled, quotes in field data are escaped instead of doubled.
escape
optional

integer

The escape character to use when writing CSV.

In some variants of CSV, quotes are escaped using a special escape character like \ (instead of escaping quotes by doubling them).

To use this, `double_quote` must be disabled as well; otherwise, this option is ignored.
fields
required

[string]

Configures the fields that will be encoded, as well as the order in which they appear in the output.

If a field is not present in the event, the output will be an empty string.

Values of type `Array`, `Object`, and `Regex` are not supported and the output will be an empty string.
quote
optional

integer

The quote character to use when writing CSV.
 quote_style
optional

 <oneOf>

The quoting style to use when writing CSV data.
always
optional

always

This puts quotes around every field. Always.
necessary
optional

necessary

This puts quotes around fields only when necessary. They are necessary when fields contain a quote, delimiter or record terminator. Quotes are also necessary when writing an empty record (which is indistinguishable from a record with one empty field).
non_numeric
optional

non_numeric

This puts quotes around all fields that are non-numeric. Namely, when writing a field that does not parse as a valid float or integer, then quotes will be used even if they aren't strictly necessary.
never
optional

never

This never writes quotes, even if it would produce invalid CSV data.
codec
required

csv

Encodes an event as a CSV message.

This codec must be configured with fields to encode.
 Gelf
optional

object

Encodes an event as a [GELF](https://docs.graylog.org/docs/gelf) message.
codec
required

gelf

Encodes an event as a [GELF](https://docs.graylog.org/docs/gelf) message.
 Json
optional



Encodes an event as [JSON](https://www.json.org/).
 metric_tag_values
optional

 <oneOf>

Controls how metric tag values are encoded.

When set to `single`, only the last non-bare value of each tag is displayed with the metric. When set to `full`, all metric tags are exposed as separate assignments.
single
optional

single

Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
full
optional

full

All tags are exposed as arrays of either string or null values.
codec
required

json

Encodes an event as [JSON](https://www.json.org/).
 Logfmt
optional

object

Encodes an event as a [logfmt](https://brandur.org/logfmt) message.
codec
required

logfmt

Encodes an event as a [logfmt](https://brandur.org/logfmt) message.
 Native
optional

object

Encodes an event in the [native Protocol Buffers format](https://github.com/vectordotdev/vector/blob/master/lib/vector-core/proto/event.proto).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
codec
required

native

Encodes an event in the [native Protocol Buffers format](https://github.com/vectordotdev/vector/blob/master/lib/vector-core/proto/event.proto).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
 NativeJson
optional

object

Encodes an event in the [native JSON format](https://github.com/vectordotdev/vector/blob/master/lib/codecs/tests/data/native_encoding/schema.cue).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
codec
required

native_json

Encodes an event in the [native JSON format](https://github.com/vectordotdev/vector/blob/master/lib/codecs/tests/data/native_encoding/schema.cue).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
 RawMessage
optional

object

No encoding.

This encoding uses the `message` field of a log event.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
codec
required

raw_message

No encoding.

This encoding uses the `message` field of a log event.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
 Text
optional



Plain text encoding.

This encoding uses the `message` field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
 metric_tag_values
optional

 <oneOf>

Controls how metric tag values are encoded.

When set to `single`, only the last non-bare value of each tag is displayed with the metric. When set to `full`, all metric tags are exposed as separate assignments.
single
optional

single

Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
full
optional

full

All tags are exposed as arrays of either string or null values.
codec
required

text

Plain text encoding.

This encoding uses the `message` field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
except_fields
optional

array,​null

List of fields that are excluded from the encoded event.
only_fields
optional

array,​null

List of fields that are included in the encoded event.
 timestamp_format
optional

 <oneOf>

Format used for timestamp fields.
 Option 1
optional

 <oneOf>

The format in which a timestamp should be represented.
unix
optional

unix

Represent the timestamp as a Unix timestamp.
rfc3339
optional

rfc3339

Represent the timestamp as an RFC 3339 timestamp.
endpoint
required

string

The base URL of the Loki instance.

The `path` value is appended to this.
labels
optional

object

A set of labels that are attached to each batch of events.

Both keys and values are templateable, which enables you to attach dynamic labels to events.

Valid label keys include `*`, and prefixes ending with `*`, to allow for the expansion of objects into multiple labels. See [Label expansion](https://vector.dev/docs/reference/configuration/sinks/loki/#label-expansion) for more information.

Note: If the set of labels has high cardinality, this can cause drastic performance issues with Loki. To prevent this from happening, reduce the number of unique label keys and values.
 out_of_order_action
optional

 <oneOf>

Out-of-order event behavior.

Some sources may generate events with timestamps that aren't in chronological order. Even though the sink sorts the events before sending them to Loki, there is a chance that another event could come in that is out of order with the latest events sent to Loki. Prior to Loki 2.4.0, this was not supported and would result in an error during the push request.

If you're using Loki 2.4.0 or newer, `Accept` is the preferred action, which lets Loki handle any necessary sorting/reordering. If you're using an earlier version, then you must use `Drop` or `RewriteTimestamp` depending on which option makes the most sense for your use case.
drop
optional

drop

Drop the event.
rewrite_timestamp
optional

rewrite_timestamp

Rewrite the timestamp of the event to the timestamp of the latest event seen by the sink.
accept
optional

accept

Accept the event.

The event is not dropped and is sent without modification.

Requires Loki 2.4.0 or newer.
path
optional

string

The path to use in the URL of the Loki instance.
remove_label_fields
optional

boolean

Whether or not to delete fields from the event when they are used as labels.
remove_timestamp
optional

boolean

Whether or not to remove the timestamp from the event payload.

The timestamp is still sent as event metadata for Loki to use for indexing.
 request
optional

object

Middleware settings for outbound requests.

Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
 adaptive_concurrency
optional

object

Configuration of adaptive concurrency parameters.

These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}
decrease_ratio
optional

number

The fraction of the current value to set the new concurrency limit when decreasing the limit.

Valid values are greater than `0` and less than `1`. Smaller values cause the algorithm to scale back rapidly when latency increases.

Note that the new limit is rounded down after applying this ratio.
default: 0.9
ewma_alpha
optional

number

The weighting of new measurements compared to older measurements.

Valid values are greater than `0` and less than `1`.

ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4
initial_concurrency
optional

integer

The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).

It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the `adaptive_concurrency_limit` metric.
default: 1
rtt_deviation_scale
optional

number

Scale of RTT deviations which are not considered anomalous.

Valid values are greater than or equal to `0`, and we expect reasonable values to range from `1.0` to `3.0`.

When calculating the past RTT average, we also compute a secondary "deviation" value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
 concurrency
optional

 <oneOf>

Configuration for outbound request concurrency.
none
optional

none

A fixed concurrency of 1.

Only one request can be outstanding at any given time.
adaptive
optional

adaptive

Concurrency will be managed by Vector's [Adaptive Request Concurrency](https://vector.dev/docs/about/under-the-hood/networking/arc/) feature.
Fixed
optional

integer

A fixed amount of concurrency will be allowed.
rate_limit_duration_secs
optional

integer,​null

The time window used for the `rate_limit_num` option.
default: 1
rate_limit_num
optional

integer,​null

The maximum number of requests allowed within the `rate_limit_duration_secs` time window.
default: 9223372036854776000
retry_attempts
optional

integer,​null

The maximum number of retries to make for failed requests.

The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000
retry_initial_backoff_secs
optional

integer,​null

The amount of time to wait before attempting the first retry for a failed request.

After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1
retry_max_duration_secs
optional

integer,​null

The maximum amount of time to wait between retries.
default: 3600
timeout_secs
optional

integer,​null

The time a request can take before being aborted.

Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60
 tenant_id
optional

 <oneOf>

The [tenant ID](https://grafana.com/docs/loki/latest/operations/multi-tenancy/) to specify in requests to Loki.

When running Loki locally, a tenant ID is not required.
Option 1
optional

string

A templated field.

In many cases, components can be configured so that part of their functionality is customized on a per-event basis. For example, a sink that writes events to a file can choose which file each event goes to by using an event field as part of the filename.

By using `Template`, users can specify either fixed strings or templated strings. Templated strings use a common syntax to refer to fields in an event that is used as the input data when rendering the template. An example of a fixed string is `my-file.log`. An example of a template string is `my-file-{{key}}.log`, where `{{key}}` is the key's value when the template is rendered into a string.
 tls
optional

 <oneOf>

TLS configuration.
 Option 1
optional

object

TLS configuration.
alpn_protocols
optional

array,​null

Sets the list of supported ALPN protocols.

Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order that they are defined.
 ca_file
optional

 <oneOf>

Absolute path to an additional CA certificate file.

The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
 crt_file
optional

 <oneOf>

Absolute path to a certificate file used to identify this server.

The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format.

If this is set, and is not a PKCS#12 archive, `key_file` must also be set.
Option 1
optional

string

A file path.
 key_file
optional

 <oneOf>

Absolute path to a private key file used to identify this server.

The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
key_pass
optional

string,​null

Passphrase used to unlock the encrypted key file.

This has no effect unless `key_file` is set.
verify_certificate
optional

boolean,​null

Enables certificate verification.

If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.

Relevant for both incoming and outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the validity of certificates.
verify_hostname
optional

boolean,​null

Enables hostname verification.

If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.

Only relevant for outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the remote hostname.
{% /tab %}

{% tab title="loki-request-example" %}

```yaml
acknowledgements:
  enabled: null
auth: ''
batch:
  max_bytes: null
  max_events: null
  timeout_secs: null
compression: snappy
encoding: ''
endpoint: string
labels: object
out_of_order_action: drop
path: /loki/api/v1/push
remove_label_fields: boolean
remove_timestamp: true
request:
  adaptive_concurrency:
    decrease_ratio: 0.9
    ewma_alpha: 0.4
    initial_concurrency: 1
    rtt_deviation_scale: 2.5
  rate_limit_duration_secs: 1
  rate_limit_num: 9223372036854776000
  retry_attempts: 9223372036854776000
  retry_initial_backoff_secs: 1
  retry_max_duration_secs: 3600
  timeout_secs: 60
tenant_id: ''
tls: ''
type: loki
```

{% /tab %}
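The `tls` options above accept either file paths or inline PEM strings. As a hedged sketch (endpoint, paths, and passphrase are placeholders, not real values), a `loki` sink using mutual TLS might look like:

```yaml
sinks:
  loki_out:
    type: loki
    endpoint: https://loki.example.com:3100   # placeholder endpoint
    encoding:
      codec: json
    labels:
      app: vector
    tls:
      ca_file: /etc/vector/certs/ca.pem       # additional CA, DER or PEM (X.509)
      crt_file: /etc/vector/certs/client.pem  # identity certificate; requires key_file unless PKCS#12
      key_file: /etc/vector/certs/client.key  # DER or PEM (PKCS#8) private key
      key_pass: example-passphrase            # only takes effect when key_file is set
      verify_certificate: true                # do not set to false unless you accept the risk
      verify_hostname: true
```

Leaving `verify_certificate` and `verify_hostname` at their defaults keeps certificate and hostname checks enabled.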

### Mezmo{% #mezmo %}

Configuration for the `mezmo` (formerly `logdna`) sink.
Options schema
{% tab title="mezmo-request-model" %}
Field | required | Type | Description

 acknowledgements
optional

object

Controls how acknowledgements are handled for this sink.

See [End-to-end Acknowledgements](https://vector.dev/docs/about/under-the-hood/architecture/end-to-end-acknowledgements/) for more information on how event acknowledgement is handled.
enabled
optional

boolean,​null

Whether or not end-to-end acknowledgements are enabled.

When enabled for a sink, any source connected to that sink, where the source supports end-to-end acknowledgements as well, waits for events to be acknowledged by the sink before acknowledging them at the source.

Enabling or disabling acknowledgements at the sink level takes precedence over any global [`acknowledgements`](https://vector.dev/docs/reference/configuration/global-options/#acknowledgements) configuration.
default: null
api_key
required

string

The Ingestion API key.
 batch
optional

object

Event batching behavior.
max_bytes
optional

integer,​null

The maximum size of a batch that is processed by a sink.

This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
max_events
optional

integer,​null

The maximum size of a batch before it is flushed.
default: null
timeout_secs
optional

number,​null

The maximum age of a batch before it is flushed.
default: null
default_app
optional

string

The default app that is set for events that do not contain a `file` or `app` field.
default_env
optional

string

The default environment that is set for events that do not contain an `env` field.
 encoding
optional

object

Transformations to prepare an event for serialization.
except_fields
optional

array,​null

List of fields that are excluded from the encoded event.
only_fields
optional

array,​null

List of fields that are included in the encoded event.
 timestamp_format
optional

 <oneOf>

Format used for timestamp fields.
 Option 1
optional

 <oneOf>

The format in which a timestamp should be represented.
unix
optional

unix

Represent the timestamp as a Unix timestamp.
rfc3339
optional

rfc3339

Represent the timestamp as an RFC 3339 timestamp.
endpoint
optional

string

The HTTP endpoint to send logs to.

Both IP address and hostname are accepted formats.
hostname
required

string

A templated field.

The hostname that is attached to each batch of events.
ip
optional

string,​null

The IP address that is attached to each batch of events.
mac
optional

string,​null

The MAC address that is attached to each batch of events.
 request
optional

object

Middleware settings for outbound requests.

Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
 adaptive_concurrency
optional

object

Configuration of adaptive concurrency parameters.

These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}decrease_ratio
optional

number

The fraction of the current value to set the new concurrency limit when decreasing the limit.

Valid values are greater than `0` and less than `1`. Smaller values cause the algorithm to scale back rapidly when latency increases.

Note that the new limit is rounded down after applying this ratio.
default: 0.9
ewma_alpha
optional

number

The weighting of new measurements compared to older measurements.

Valid values are greater than `0` and less than `1`.

ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4
initial_concurrency
optional

integer

The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).

It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the `adaptive_concurrency_limit` metric.
default: 1
rtt_deviation_scale
optional

number

Scale of RTT deviations which are not considered anomalous.

Valid values are greater than or equal to `0`, and we expect reasonable values to range from `1.0` to `3.0`.

When calculating the past RTT average, we also compute a secondary "deviation" value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
 concurrency
optional

 <oneOf>

Configuration for outbound request concurrency.
none
optional

none

A fixed concurrency of 1.

Only one request can be outstanding at any given time.
adaptive
optional

adaptive

Concurrency will be managed by Vector's [Adaptive Request Concurrency](https://vector.dev/docs/about/under-the-hood/networking/arc/) feature.
Fixed
optional

integer

A fixed amount of concurrency will be allowed.
rate_limit_duration_secs
optional

integer,​null

The time window used for the `rate_limit_num` option.
default: 1
rate_limit_num
optional

integer,​null

The maximum number of requests allowed within the `rate_limit_duration_secs` time window.
default: 9223372036854776000
retry_attempts
optional

integer,​null

The maximum number of retries to make for failed requests.

The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000
retry_initial_backoff_secs
optional

integer,​null

The amount of time to wait before attempting the first retry for a failed request.

After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1
retry_max_duration_secs
optional

integer,​null

The maximum amount of time to wait between retries.
default: 3600
timeout_secs
optional

integer,​null

The time a request can take before being aborted.

Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60
tags
optional

array,​null

The tags that are attached to each batch of events.
{% /tab %}

{% tab title="mezmo-request-example" %}

```yaml
acknowledgements:
  enabled: null
api_key: string
batch:
  max_bytes: null
  max_events: null
  timeout_secs: null
default_app: vector
default_env: production
encoding: {}
endpoint: 'https://logs.mezmo.com/'
hostname: string
ip: string
mac: string
request:
  adaptive_concurrency:
    decrease_ratio: 0.9
    ewma_alpha: 0.4
    initial_concurrency: 1
    rtt_deviation_scale: 2.5
  rate_limit_duration_secs: 1
  rate_limit_num: 9223372036854776000
  retry_attempts: 9223372036854776000
  retry_initial_backoff_secs: 1
  retry_max_duration_secs: 3600
  timeout_secs: 60
tags: array
type: mezmo
```

{% /tab %}
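Because `hostname` is a templated field, it can be rendered from event data at runtime. As a minimal, hypothetical sketch (the API key reference and the `host` field are placeholders):

```yaml
sinks:
  mezmo_out:
    type: mezmo
    api_key: ${MEZMO_API_KEY}   # placeholder; resolved from the environment
    hostname: "{{ host }}"      # rendered from each event's `host` field
    default_app: vector         # applied when an event has no `file` or `app` field
    default_env: production     # applied when an event has no `env` field
```

All other options (`batch`, `request`, `encoding`) fall back to the defaults described above.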

### NATS{% #natssink %}

Configuration for the `nats` sink.
Options schema
{% tab title="natssink-request-model" %}
Field | required | Type | Description

 acknowledgements
optional

object

Controls how acknowledgements are handled for this sink.

See [End-to-end Acknowledgements](https://vector.dev/docs/about/under-the-hood/architecture/end-to-end-acknowledgements/) for more information on how event acknowledgement is handled.
enabled
optional

boolean,​null

Whether or not end-to-end acknowledgements are enabled.

When enabled for a sink, any source connected to that sink, where the source supports end-to-end acknowledgements as well, waits for events to be acknowledged by the sink before acknowledging them at the source.

Enabling or disabling acknowledgements at the sink level takes precedence over any global [`acknowledgements`](https://vector.dev/docs/reference/configuration/global-options/#acknowledgements) configuration.
default: null
 auth
optional

 <oneOf>

Configuration of the authentication strategy when interacting with NATS.
 Option 1
optional

 <oneOf>

Configuration of the authentication strategy when interacting with NATS.
 UserPassword
optional

object

Username/password authentication.
strategy
required

user_password

Username/password authentication.
 user_password
required

object

Username and password configuration.
password
required

string

Password.
user
required

string

Username.
 Token
optional

object

Token authentication.
strategy
required

token

Token authentication.
 token
required

object

Token configuration.
value
required

string

Token.
 CredentialsFile
optional

object

Credentials file authentication (JWT-based).
 credentials_file
required

object

Credentials file configuration.
path
required

string

Path to credentials file.
strategy
required

credentials_file

Credentials file authentication (JWT-based).
 Nkey
optional

object

NKey authentication.
 nkey
required

object

NKeys configuration.
nkey
required

string

User.

Conceptually, this is equivalent to a public key.
seed
required

string

Seed.

Conceptually, this is equivalent to a private key.
strategy
required

nkey

NKey authentication.
connection_name
optional

string

A NATS [name](https://docs.nats.io/using-nats/developer/connecting/name) assigned to the NATS connection.
 encoding
required



Configures how events are encoded into raw bytes.
 Avro
optional

object

Encodes an event as an [Apache Avro](https://avro.apache.org/) message.
 avro
required

object

Apache Avro-specific encoder options.
schema
required

string

The Avro schema.
codec
required

avro

Encodes an event as an [Apache Avro](https://avro.apache.org/) message.
 Csv
optional



Encodes an event as a CSV message.

This codec must be configured with fields to encode.
 csv
required

object

The CSV Serializer Options.
capacity
optional

integer

Set the capacity (in bytes) of the internal buffer used in the CSV writer. This defaults to a reasonable setting.
delimiter
optional

integer

The field delimiter to use when writing CSV.
double_quote
optional

boolean

Enable double quote escapes.

This is enabled by default, but it may be disabled. When disabled, quotes in field data are escaped instead of doubled.
escape
optional

integer

The escape character to use when writing CSV.

In some variants of CSV, quotes are escaped using a special escape character like \ (instead of escaping quotes by doubling them).

To use this, `double_quote` must also be disabled; otherwise, this option is ignored.
fields
required

[string]

Configures the fields that will be encoded, as well as the order in which they appear in the output.

If a field is not present in the event, the output will be an empty string.

Values of type `Array`, `Object`, and `Regex` are not supported and the output will be an empty string.
quote
optional

integer

The quote character to use when writing CSV.
 quote_style
optional

 <oneOf>

The quoting style to use when writing CSV data.
always
optional

always

This puts quotes around every field. Always.
necessary
optional

necessary

This puts quotes around fields only when necessary. They are necessary when fields contain a quote, delimiter or record terminator. Quotes are also necessary when writing an empty record (which is indistinguishable from a record with one empty field).
non_numeric
optional

non_numeric

This puts quotes around all fields that are non-numeric. Namely, when writing a field that does not parse as a valid float or integer, then quotes will be used even if they aren't strictly necessary.
never
optional

never

This never writes quotes, even if it would produce invalid CSV data.
codec
required

csv

Encodes an event as a CSV message.

This codec must be configured with fields to encode.
 Gelf
optional

object

Encodes an event as a [GELF](https://docs.graylog.org/docs/gelf) message.
codec
required

gelf

Encodes an event as a [GELF](https://docs.graylog.org/docs/gelf) message.
 Json
optional



Encodes an event as [JSON](https://www.json.org/).
 metric_tag_values
optional

 <oneOf>

Controls how metric tag values are encoded.

When set to `single`, only the last non-bare value of each tag is displayed with the metric. When set to `full`, all metric tags are exposed as separate assignments.
single
optional

single

Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
full
optional

full

All tags are exposed as arrays of either string or null values.
codec
required

json

Encodes an event as [JSON](https://www.json.org/).
 Logfmt
optional

object

Encodes an event as a [logfmt](https://brandur.org/logfmt) message.
codec
required

logfmt

Encodes an event as a [logfmt](https://brandur.org/logfmt) message.
 Native
optional

object

Encodes an event in the [native Protocol Buffers format](https://github.com/vectordotdev/vector/blob/master/lib/vector-core/proto/event.proto).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
codec
required

native

Encodes an event in the [native Protocol Buffers format](https://github.com/vectordotdev/vector/blob/master/lib/vector-core/proto/event.proto).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
 NativeJson
optional

object

Encodes an event in the [native JSON format](https://github.com/vectordotdev/vector/blob/master/lib/codecs/tests/data/native_encoding/schema.cue).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
codec
required

native_json

Encodes an event in the [native JSON format](https://github.com/vectordotdev/vector/blob/master/lib/codecs/tests/data/native_encoding/schema.cue).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
 RawMessage
optional

object

No encoding.

This encoding uses the `message` field of a log event.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
codec
required

raw_message

No encoding.

This encoding uses the `message` field of a log event.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
 Text
optional



Plain text encoding.

This encoding uses the `message` field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
 metric_tag_values
optional

 <oneOf>

Controls how metric tag values are encoded.

When set to `single`, only the last non-bare value of each tag is displayed with the metric. When set to `full`, all metric tags are exposed as separate assignments.
single
optional

single

Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
full
optional

full

All tags are exposed as arrays of either string or null values.
codec
required

text

Plain text encoding.

This encoding uses the `message` field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
except_fields
optional

array,​null

List of fields that are excluded from the encoded event.
only_fields
optional

array,​null

List of fields that are included in the encoded event.
 timestamp_format
optional

 <oneOf>

Format used for timestamp fields.
 Option 1
optional

 <oneOf>

The format in which a timestamp should be represented.
unix
optional

unix

Represent the timestamp as a Unix timestamp.
rfc3339
optional

rfc3339

Represent the timestamp as an RFC 3339 timestamp.
 request
optional

object

Middleware settings for outbound requests.

Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
 adaptive_concurrency
optional

object

Configuration of adaptive concurrency parameters.

These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}decrease_ratio
optional

number

The fraction of the current value to set the new concurrency limit when decreasing the limit.

Valid values are greater than `0` and less than `1`. Smaller values cause the algorithm to scale back rapidly when latency increases.

Note that the new limit is rounded down after applying this ratio.
default: 0.9
ewma_alpha
optional

number

The weighting of new measurements compared to older measurements.

Valid values are greater than `0` and less than `1`.

ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4
initial_concurrency
optional

integer

The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).

It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the `adaptive_concurrency_limit` metric.
default: 1
rtt_deviation_scale
optional

number

Scale of RTT deviations which are not considered anomalous.

Valid values are greater than or equal to `0`, and we expect reasonable values to range from `1.0` to `3.0`.

When calculating the past RTT average, we also compute a secondary "deviation" value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
 concurrency
optional

 <oneOf>

Configuration for outbound request concurrency.
none
optional

none

A fixed concurrency of 1.

Only one request can be outstanding at any given time.
adaptive
optional

adaptive

Concurrency will be managed by Vector's [Adaptive Request Concurrency](https://vector.dev/docs/about/under-the-hood/networking/arc/) feature.
Fixed
optional

integer

A fixed amount of concurrency will be allowed.
rate_limit_duration_secs
optional

integer,​null

The time window used for the `rate_limit_num` option.
default: 1
rate_limit_num
optional

integer,​null

The maximum number of requests allowed within the `rate_limit_duration_secs` time window.
default: 9223372036854776000
retry_attempts
optional

integer,​null

The maximum number of retries to make for failed requests.

The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000
retry_initial_backoff_secs
optional

integer,​null

The amount of time to wait before attempting the first retry for a failed request.

After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1
retry_max_duration_secs
optional

integer,​null

The maximum amount of time to wait between retries.
default: 3600
timeout_secs
optional

integer,​null

The time a request can take before being aborted.

Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60
subject
required

string

The NATS [subject](https://docs.nats.io/nats-concepts/subjects) to publish messages to.
 tls
optional

 <oneOf>

Configures the TLS options for incoming/outgoing connections.
 Option 1
optional



Configures the TLS options for incoming/outgoing connections.
enabled
optional

boolean,​null

Whether or not to require TLS for incoming or outgoing connections.

When enabled and used for incoming connections, an identity certificate is also required. See `tls.crt_file` for more information.
alpn_protocols
optional

array,​null

Sets the list of supported ALPN protocols.

Declare the supported ALPN protocols, which are used during negotiation with peer. They are prioritized in the order that they are defined.
 ca_file
optional

 <oneOf>

Absolute path to an additional CA certificate file.

The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
 crt_file
optional

 <oneOf>

Absolute path to a certificate file used to identify this server.

The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format.

If this is set, and is not a PKCS#12 archive, `key_file` must also be set.
Option 1
optional

string

A file path.
 key_file
optional

 <oneOf>

Absolute path to a private key file used to identify this server.

The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
key_pass
optional

string,​null

Passphrase used to unlock the encrypted key file.

This has no effect unless `key_file` is set.
verify_certificate
optional

boolean,​null

Enables certificate verification.

If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.

Relevant for both incoming and outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the validity of certificates.
verify_hostname
optional

boolean,​null

Enables hostname verification.

If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.

Only relevant for outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the remote hostname.
url
required

string

The NATS [URL](https://docs.nats.io/using-nats/developer/connecting#nats-url) to connect to.

The URL must take the form of `nats://server:port`. If the port is not specified, it defaults to `4222`.
{% /tab %}

{% tab title="natssink-request-example" %}

```yaml
acknowledgements:
  enabled: null
auth: ''
connection_name: vector
encoding: ''
request:
  adaptive_concurrency:
    decrease_ratio: 0.9
    ewma_alpha: 0.4
    initial_concurrency: 1
    rtt_deviation_scale: 2.5
  rate_limit_duration_secs: 1
  rate_limit_num: 9223372036854776000
  retry_attempts: 9223372036854776000
  retry_initial_backoff_secs: 1
  retry_max_duration_secs: 3600
  timeout_secs: 60
subject: string
tls: ''
url: string
type: nats
```

{% /tab %}
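The `auth` block selects one of the strategies above via its `strategy` key. As a sketch of username/password authentication (the URL, subject, and credentials are placeholders):

```yaml
sinks:
  nats_out:
    type: nats
    url: nats://localhost:4222   # port defaults to 4222 when omitted
    subject: vector.logs         # NATS subject to publish to
    encoding:
      codec: json
    auth:
      strategy: user_password
      user_password:
        user: example-user       # placeholder credentials
        password: example-password
```

The other strategies (`token`, `credentials_file`, `nkey`) follow the same shape: set `strategy`, then supply the matching sub-object.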

### New Relic{% #newrelic %}

Configuration for the `new_relic` sink.
Options schema
{% tab title="newrelic-request-model" %}
Field | required | Type | Description

account_id
required

string

The New Relic account ID.
 acknowledgements
optional

object

Controls how acknowledgements are handled for this sink.

See [End-to-end Acknowledgements](https://vector.dev/docs/about/under-the-hood/architecture/end-to-end-acknowledgements/) for more information on how event acknowledgement is handled.
enabled
optional

boolean,​null

Whether or not end-to-end acknowledgements are enabled.

When enabled for a sink, any source connected to that sink, where the source supports end-to-end acknowledgements as well, waits for events to be acknowledged by the sink before acknowledging them at the source.

Enabling or disabling acknowledgements at the sink level takes precedence over any global [`acknowledgements`](https://vector.dev/docs/reference/configuration/global-options/#acknowledgements) configuration.
default: null
 api
required

 <oneOf>

New Relic API endpoint.
events
optional

events

Events API.
metrics
optional

metrics

Metrics API.
logs
optional

logs

Logs API.
 batch
optional

object

Event batching behavior.
max_bytes
optional

integer,​null

The maximum size of a batch that is processed by a sink.

This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
max_events
optional

integer,​null

The maximum size of a batch before it is flushed.
default: null
timeout_secs
optional

number,​null

The maximum age of a batch before it is flushed.
default: null
 compression
optional

 <oneOf>

Compression configuration.

All compression algorithms use the default compression level unless otherwise specified.
 Option 1
optional

 <oneOf>

Compression algorithm.
none
optional

none

No compression.
gzip
optional

gzip

[Gzip](https://www.gzip.org/) compression.
zlib
optional

zlib

[Zlib](https://zlib.net/) compression.
zstd
optional

zstd

[Zstandard](https://facebook.github.io/zstd/) compression.
 Option 2
optional

object

Compression algorithm and compression level.
 algorithm
required

 <oneOf>

Compression algorithm.
none
optional

none

No compression.
gzip
optional

gzip

[Gzip](https://www.gzip.org/) compression.
zlib
optional

zlib

[Zlib](https://zlib.net/) compression.
zstd
optional

zstd

[Zstandard](https://facebook.github.io/zstd/) compression.
level
optional

enum

Compression level. Allowed enum values: `none,fast,best,default,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21`
 encoding
optional

object

Transformations to prepare an event for serialization.
except_fields
optional

array,​null

List of fields that are excluded from the encoded event.
only_fields
optional

array,​null

List of fields that are included in the encoded event.
 timestamp_format
optional

 <oneOf>

Format used for timestamp fields.
 Option 1
optional

 <oneOf>

The format in which a timestamp should be represented.
unix
optional

unix

Represent the timestamp as a Unix timestamp.
rfc3339
optional

rfc3339

Represent the timestamp as an RFC 3339 timestamp.
license_key
required

string

A valid New Relic license key.
 region
optional

 <oneOf>

New Relic region.
 Option 1
optional

 <oneOf>

New Relic region.
us
optional

us

US region.
eu
optional

eu

EU region.
 request
optional

object

Middleware settings for outbound requests.

Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
 adaptive_concurrency
optional

object

Configuration of adaptive concurrency parameters.

These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}decrease_ratio
optional

number

The fraction of the current value to set the new concurrency limit when decreasing the limit.

Valid values are greater than `0` and less than `1`. Smaller values cause the algorithm to scale back rapidly when latency increases.

Note that the new limit is rounded down after applying this ratio.
default: 0.9
ewma_alpha
optional

number

The weighting of new measurements compared to older measurements.

Valid values are greater than `0` and less than `1`.

ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4
initial_concurrency
optional

integer

The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).

It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the `adaptive_concurrency_limit` metric.
default: 1
rtt_deviation_scale
optional

number

Scale of RTT deviations which are not considered anomalous.

Valid values are greater than or equal to `0`, and we expect reasonable values to range from `1.0` to `3.0`.

When calculating the past RTT average, we also compute a secondary "deviation" value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
 concurrency
optional

 <oneOf>

Configuration for outbound request concurrency.
none
optional

none

A fixed concurrency of 1.

Only one request can be outstanding at any given time.
adaptive
optional

adaptive

Concurrency will be managed by Vector's [Adaptive Request Concurrency](https://vector.dev/docs/about/under-the-hood/networking/arc/) feature.
Fixed
optional

integer

A fixed amount of concurrency will be allowed.
rate_limit_duration_secs
optional

integer,​null

The time window used for the `rate_limit_num` option.
default: 1
rate_limit_num
optional

integer,​null

The maximum number of requests allowed within the `rate_limit_duration_secs` time window.
default: 9223372036854776000
retry_attempts
optional

integer,​null

The maximum number of retries to make for failed requests.

The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000
retry_initial_backoff_secs
optional

integer,​null

The amount of time to wait before attempting the first retry for a failed request.

After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1
retry_max_duration_secs
optional

integer,​null

The maximum amount of time to wait between retries.
default: 3600
timeout_secs
optional

integer,​null

The time a request can take before being aborted.

Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60
{% /tab %}

{% tab title="newrelic-request-example" %}

```yaml
account_id: string
acknowledgements:
  enabled: null
api: ''
batch:
  max_bytes: null
  max_events: null
  timeout_secs: null
compression: gzip
encoding: {}
license_key: string
region: ''
request:
  adaptive_concurrency:
    decrease_ratio: 0.9
    ewma_alpha: 0.4
    initial_concurrency: 1
    rtt_deviation_scale: 2.5
  rate_limit_duration_secs: 1
  rate_limit_num: 9223372036854776000
  retry_attempts: 9223372036854776000
  retry_initial_backoff_secs: 1
  retry_max_duration_secs: 3600
  timeout_secs: 60
type: new_relic
```

{% /tab %}
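The `request` options above can be combined; as an illustrative sketch (the account ID and license key are placeholders, and the `api: logs` value is an assumption about the accepted enum), a `new_relic` sink that replaces adaptive concurrency with a fixed limit and an explicit rate limit might look like:

```yaml
# Hypothetical tuning sketch -- all IDs and keys below are placeholders.
type: new_relic
account_id: "1234567"                    # placeholder account ID
license_key: "${NEW_RELIC_LICENSE_KEY}"  # placeholder; read from the environment
api: logs                                # assumed enum value
request:
  concurrency: 10            # fixed limit instead of adaptive concurrency
  rate_limit_duration_secs: 1
  rate_limit_num: 100        # at most 100 requests per 1-second window
  retry_attempts: 5          # bound retries rather than the effectively infinite default
  timeout_secs: 60
```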

### Papertrail{% #papertrail %}

Configuration for the `papertrail` sink.
OptionsSchema
{% tab title="papertrail-request-model" %}
FieldrequiredTypeDescription acknowledgements
optional

object

Controls how acknowledgements are handled for this sink.

See [End-to-end Acknowledgements](https://vector.dev/docs/about/under-the-hood/architecture/end-to-end-acknowledgements/) for more information on how event acknowledgement is handled.
enabled
optional

boolean,​null

Whether or not end-to-end acknowledgements are enabled.

When enabled for a sink, any source connected to that sink, where the source supports end-to-end acknowledgements as well, waits for events to be acknowledged by the sink before acknowledging them at the source.

Enabling or disabling acknowledgements at the sink level takes precedence over any global [`acknowledgements`](https://vector.dev/docs/reference/configuration/global-options/#acknowledgements) configuration.
default: null encoding
required



Configures how events are encoded into raw bytes.
 Avro
optional

object

Encodes an event as an [Apache Avro](https://avro.apache.org/) message.
 avro
required

object

Apache Avro-specific encoder options.
schema
required

string

The Avro schema.
codec
required

avro

Encodes an event as an [Apache Avro](https://avro.apache.org/) message.
 Csv
optional



Encodes an event as a CSV message.

This codec must be configured with fields to encode.
 csv
required

object

The CSV Serializer Options.
capacity
optional

integer

Set the capacity (in bytes) of the internal buffer used in the CSV writer. This defaults to a reasonable setting.
delimiter
optional

integer

The field delimiter to use when writing CSV.
double_quote
optional

boolean

Enable double quote escapes.

This is enabled by default, but it may be disabled. When disabled, quotes in field data are escaped instead of doubled.
escape
optional

integer

The escape character to use when writing CSV.

In some variants of CSV, quotes are escaped using a special escape character like \ (instead of escaping quotes by doubling them).

To use this, `double_quote` must also be disabled; otherwise, this setting is ignored.
fields
required

[string]

Configures the fields that will be encoded, as well as the order in which they appear in the output.

If a field is not present in the event, the output will be an empty string.

Values of type `Array`, `Object`, and `Regex` are not supported and the output will be an empty string.
quote
optional

integer

The quote character to use when writing CSV.
 quote_style
optional

 <oneOf>

The quoting style to use when writing CSV data.
always
optional

always

This puts quotes around every field. Always.
necessary
optional

necessary

This puts quotes around fields only when necessary. They are necessary when fields contain a quote, delimiter or record terminator. Quotes are also necessary when writing an empty record (which is indistinguishable from a record with one empty field).
non_numeric
optional

non_numeric

This puts quotes around all fields that are non-numeric. Namely, when writing a field that does not parse as a valid float or integer, then quotes will be used even if they aren't strictly necessary.
never
optional

never

This never writes quotes, even if it would produce invalid CSV data.
codec
required

csv

Encodes an event as a CSV message.

This codec must be configured with fields to encode.
 Gelf
optional

object

Encodes an event as a [GELF](https://docs.graylog.org/docs/gelf) message.
codec
required

gelf

Encodes an event as a [GELF](https://docs.graylog.org/docs/gelf) message.
 Json
optional



Encodes an event as [JSON](https://www.json.org/).
 metric_tag_values
optional

 <oneOf>

Controls how metric tag values are encoded.

When set to `single`, only the last non-bare value of each tag is displayed with the metric. When set to `full`, all metric tags are exposed as separate assignments.
single
optional

single

Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
full
optional

full

All tags are exposed as arrays of either string or null values.
codec
required

json

Encodes an event as [JSON](https://www.json.org/).
 Logfmt
optional

object

Encodes an event as a [logfmt](https://brandur.org/logfmt) message.
codec
required

logfmt

Encodes an event as a [logfmt](https://brandur.org/logfmt) message.
 Native
optional

object

Encodes an event in the [native Protocol Buffers format](https://github.com/vectordotdev/vector/blob/master/lib/vector-core/proto/event.proto).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
codec
required

native

Encodes an event in the [native Protocol Buffers format](https://github.com/vectordotdev/vector/blob/master/lib/vector-core/proto/event.proto).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
 NativeJson
optional

object

Encodes an event in the [native JSON format](https://github.com/vectordotdev/vector/blob/master/lib/codecs/tests/data/native_encoding/schema.cue).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
codec
required

native_json

Encodes an event in the [native JSON format](https://github.com/vectordotdev/vector/blob/master/lib/codecs/tests/data/native_encoding/schema.cue).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
 RawMessage
optional

object

No encoding.

This encoding uses the `message` field of a log event.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
codec
required

raw_message

No encoding.

This encoding uses the `message` field of a log event.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
 Text
optional



Plain text encoding.

This encoding uses the `message` field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
 metric_tag_values
optional

 <oneOf>

Controls how metric tag values are encoded.

When set to `single`, only the last non-bare value of each tag is displayed with the metric. When set to `full`, all metric tags are exposed as separate assignments.
single
optional

single

Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
full
optional

full

All tags are exposed as arrays of either string or null values.
codec
required

text

Plain text encoding.

This encoding uses the `message` field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
except_fields
optional

array,​null

List of fields that are excluded from the encoded event.
only_fields
optional

array,​null

List of fields that are included in the encoded event.
 timestamp_format
optional

 <oneOf>

Format used for timestamp fields.
 Option 1
optional

 <oneOf>

The format in which a timestamp should be represented.
unix
optional

unix

Represent the timestamp as a Unix timestamp.
rfc3339
optional

rfc3339

Represent the timestamp as an RFC 3339 timestamp.
endpoint
required

string

The URI component of a request.

The TCP endpoint to send logs to.
 keepalive
optional

 <oneOf>

TCP keepalive settings for socket-based components.
 Option 1
optional

object

TCP keepalive settings for socket-based components.
time_secs
optional

integer,​null

The time to wait before starting to send TCP keepalive probes on an idle connection.
process
optional

string

A templated field.

The value to use as the `process` in Papertrail.
send_buffer_bytes
optional

integer,​null

Configures the send buffer size using the `SO_SNDBUF` option on the socket.
 tls
optional

 <oneOf>

Configures the TLS options for incoming/outgoing connections.
 Option 1
optional



Configures the TLS options for incoming/outgoing connections.
enabled
optional

boolean,​null

Whether or not to require TLS for incoming or outgoing connections.

When enabled and used for incoming connections, an identity certificate is also required. See `tls.crt_file` for more information.
alpn_protocols
optional

array,​null

Sets the list of supported ALPN protocols.

Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order in which they are defined.
 ca_file
optional

 <oneOf>

Absolute path to an additional CA certificate file.

The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
 crt_file
optional

 <oneOf>

Absolute path to a certificate file used to identify this server.

The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format.

If this is set, and is not a PKCS#12 archive, `key_file` must also be set.
Option 1
optional

string

A file path.
 key_file
optional

 <oneOf>

Absolute path to a private key file used to identify this server.

The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
key_pass
optional

string,​null

Passphrase used to unlock the encrypted key file.

This has no effect unless `key_file` is set.
verify_certificate
optional

boolean,​null

Enables certificate verification.

If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.

Relevant for both incoming and outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the validity of certificates.
verify_hostname
optional

boolean,​null

Enables hostname verification.

If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.

Only relevant for outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the remote hostname.
{% /tab %}

{% tab title="papertrail-request-example" %}

```yaml
acknowledgements:
  enabled: null
encoding: ''
endpoint: string
keepalive: ''
process: vector
send_buffer_bytes: integer
tls: ''
type: papertrail
```

{% /tab %}
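Tying the encoding options above to this sink, a sketch of a `papertrail` sink that ships CSV-encoded events over TLS might look like the following (the endpoint host and port are placeholders, not a real Papertrail destination):

```yaml
# Illustrative sketch -- the endpoint is a placeholder.
type: papertrail
endpoint: logs.example.papertrailapp.com:12345
encoding:
  codec: csv
  csv:
    fields: [timestamp, host, message]  # encoded in this order; missing fields become ""
    quote_style: necessary              # quote only when a field contains a quote or delimiter
keepalive:
  time_secs: 60
tls:
  enabled: true
  verify_certificate: true
  verify_hostname: true
process: vector
```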

### Prometheus Exporter{% #prometheusexporter %}

Configuration for the `prometheus_exporter` sink.
OptionsSchema
{% tab title="prometheusexporter-request-model" %}
FieldrequiredTypeDescription acknowledgements
optional

object

Controls how acknowledgements are handled for this sink.

See [End-to-end Acknowledgements](https://vector.dev/docs/about/under-the-hood/architecture/end-to-end-acknowledgements/) for more information on how event acknowledgement is handled.
enabled
optional

boolean,​null

Whether or not end-to-end acknowledgements are enabled.

When enabled for a sink, any source connected to that sink, where the source supports end-to-end acknowledgements as well, waits for events to be acknowledged by the sink before acknowledging them at the source.

Enabling or disabling acknowledgements at the sink level takes precedence over any global [`acknowledgements`](https://vector.dev/docs/reference/configuration/global-options/#acknowledgements) configuration.
default: null
address
optional

string

The address to expose for scraping.

The metrics are exposed at the typical Prometheus exporter path, `/metrics`.
 auth
optional

 <oneOf>

Configuration of the authentication strategy for HTTP requests.

HTTP authentication should be used with HTTPS only, as the authentication credentials are passed as an HTTP header without any additional encryption beyond what is provided by the transport itself.
 Option 1
optional

 <oneOf>

Configuration of the authentication strategy for HTTP requests.

HTTP authentication should be used with HTTPS only, as the authentication credentials are passed as an HTTP header without any additional encryption beyond what is provided by the transport itself.
 Basic
optional

object

Basic authentication.

The username and password are concatenated and encoded via [base64](https://en.wikipedia.org/wiki/Base64).
password
required

string

The basic authentication password.
strategy
required

basic

Basic authentication.

The username and password are concatenated and encoded via [base64](https://en.wikipedia.org/wiki/Base64).
user
required

string

The basic authentication username.
 Bearer
optional

object

Bearer authentication.

The bearer token value (OAuth2, JWT, etc.) is passed as-is.
strategy
required

bearer

Bearer authentication.

The bearer token value (OAuth2, JWT, etc.) is passed as-is.
token
required

string

The bearer authentication token.
buckets
optional

[number]

Default buckets to use for aggregating [distribution](https://vector.dev/docs/about/under-the-hood/architecture/data-model/metric/#distribution) metrics into histograms.
default_namespace
optional

string,​null

The default namespace for any metrics sent.

This namespace is only used if a metric has no existing namespace. When a namespace is present, it is used as a prefix to the metric name, and separated with an underscore (`_`).

It should follow the Prometheus [naming conventions](https://prometheus.io/docs/practices/naming/#metric-names).
distributions_as_summaries
optional

boolean

Whether or not to render [distributions](https://vector.dev/docs/about/under-the-hood/architecture/data-model/metric/#distribution) as an [aggregated histogram](https://prometheus.io/docs/concepts/metric_types/#histogram) or [aggregated summary](https://prometheus.io/docs/concepts/metric_types/#summary).

While distributions are supported as a lossless way to represent a set of samples for a metric, Prometheus clients (the application being scraped, which in this case is this sink) must aggregate samples locally into either an aggregated histogram or an aggregated summary.
flush_period_secs
optional

integer

The interval, in seconds, on which metrics are flushed.

On the flush interval, if a metric has not been seen since the last flush interval, it is considered expired and is removed.

Be sure to configure this value higher than your client's scrape interval.
quantiles
optional

[number]

Quantiles to use for aggregating [distribution](https://vector.dev/docs/about/under-the-hood/architecture/data-model/metric/#distribution) metrics into a summary.
suppress_timestamp
optional

boolean

Suppresses timestamps on the Prometheus output.

This can sometimes be useful when the source of metrics leads to their timestamps being too far in the past for Prometheus to allow them, such as when aggregating metrics over long time periods, or when replaying old metrics from a disk buffer.
 tls
optional

 <oneOf>

Configures the TLS options for incoming/outgoing connections.
 Option 1
optional



Configures the TLS options for incoming/outgoing connections.
enabled
optional

boolean,​null

Whether or not to require TLS for incoming or outgoing connections.

When enabled and used for incoming connections, an identity certificate is also required. See `tls.crt_file` for more information.
alpn_protocols
optional

array,​null

Sets the list of supported ALPN protocols.

Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order in which they are defined.
 ca_file
optional

 <oneOf>

Absolute path to an additional CA certificate file.

The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
 crt_file
optional

 <oneOf>

Absolute path to a certificate file used to identify this server.

The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format.

If this is set, and is not a PKCS#12 archive, `key_file` must also be set.
Option 1
optional

string

A file path.
 key_file
optional

 <oneOf>

Absolute path to a private key file used to identify this server.

The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
key_pass
optional

string,​null

Passphrase used to unlock the encrypted key file.

This has no effect unless `key_file` is set.
verify_certificate
optional

boolean,​null

Enables certificate verification.

If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.

Relevant for both incoming and outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the validity of certificates.
verify_hostname
optional

boolean,​null

Enables hostname verification.

If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.

Only relevant for outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the remote hostname.
{% /tab %}

{% tab title="prometheusexporter-request-example" %}

```yaml
acknowledgements:
  enabled: null
address: '0.0.0.0:9598'
auth: ''
buckets:
  - 0.005
  - 0.01
  - 0.025
  - 0.05
  - 0.1
  - 0.25
  - 0.5
  - 1
  - 2.5
  - 5
  - 10
default_namespace: string
distributions_as_summaries: boolean
flush_period_secs: 60
quantiles:
  - 0.5
  - 0.75
  - 0.9
  - 0.95
  - 0.99
suppress_timestamp: boolean
tls: ''
type: prometheus_exporter
```

{% /tab %}
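As a sketch of the options above (the credentials are placeholders), an exporter that protects `/metrics` with basic authentication and renders distributions as summaries could be configured as:

```yaml
# Illustrative sketch -- the username and password values are placeholders.
type: prometheus_exporter
address: 0.0.0.0:9598
auth:
  strategy: basic
  user: scraper
  password: "${PROM_SCRAPE_PASSWORD}"
distributions_as_summaries: true  # render distributions as aggregated summaries
quantiles: [0.5, 0.9, 0.99]
flush_period_secs: 120            # keep this above the Prometheus scrape interval
suppress_timestamp: true
```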

### Prometheus Remote Write{% #remotewrite %}

Configuration for the `prometheus_remote_write` sink.
OptionsSchema
{% tab title="remotewrite-request-model" %}
FieldrequiredTypeDescription acknowledgements
optional

object

Controls how acknowledgements are handled for this sink.

See [End-to-end Acknowledgements](https://vector.dev/docs/about/under-the-hood/architecture/end-to-end-acknowledgements/) for more information on how event acknowledgement is handled.
enabled
optional

boolean,​null

Whether or not end-to-end acknowledgements are enabled.

When enabled for a sink, any source connected to that sink, where the source supports end-to-end acknowledgements as well, waits for events to be acknowledged by the sink before acknowledging them at the source.

Enabling or disabling acknowledgements at the sink level takes precedence over any global [`acknowledgements`](https://vector.dev/docs/reference/configuration/global-options/#acknowledgements) configuration.
default: null auth
optional

 <oneOf>

Authentication strategies.
 Option 1
optional

 <oneOf>

Authentication strategies.
 Basic
optional

object

HTTP Basic Authentication.
password
required

string

Basic authentication password.
strategy
required

basic

HTTP Basic Authentication.
user
required

string

Basic authentication username.
 Bearer
optional

object

Bearer authentication.

A bearer token (OAuth2, JWT, etc) is passed as-is.
strategy
required

bearer

Bearer authentication.

A bearer token (OAuth2, JWT, etc) is passed as-is.
token
required

string

The bearer token to send.
 Aws
optional



Amazon Prometheus Service-specific authentication.
strategy
required

aws

Amazon Prometheus Service-specific authentication.
 aws
optional

 <oneOf>

Configuration of the region/endpoint to use when interacting with an AWS service.
 Option 1
optional

object

Configuration of the region/endpoint to use when interacting with an AWS service.
endpoint
optional

string,​null

Custom endpoint for use with AWS-compatible services.
default: null
region
optional

string,​null

The [AWS region](https://docs.aws.amazon.com/general/latest/gr/rande.html#regional-endpoints) of the target service.
default: null batch
optional

object

Event batching behavior.
max_bytes
optional

integer,​null

The maximum size of a batch that is processed by a sink.

This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
max_events
optional

integer,​null

The maximum size of a batch before it is flushed.
default: null
timeout_secs
optional

number,​null

The maximum age of a batch before it is flushed.
default: null
buckets
optional

[number]

Default buckets to use for aggregating [distribution](https://vector.dev/docs/about/under-the-hood/architecture/data-model/metric/#distribution) metrics into histograms.
 compression
optional

 <oneOf>

Supported compression types for Prometheus Remote Write.
snappy
optional

snappy

Snappy.
gzip
optional

gzip

Gzip.
zstd
optional

zstd

Zstandard.
default_namespace
optional

string,​null

The default namespace for any metrics sent.

This namespace is only used if a metric has no existing namespace. When a namespace is present, it is used as a prefix to the metric name, and separated with an underscore (`_`).

It should follow the Prometheus [naming conventions](https://prometheus.io/docs/practices/naming/#metric-names).
endpoint
required

string

The endpoint to send data to.

The endpoint should include the scheme and the path to write to.
quantiles
optional

[number]

Quantiles to use for aggregating [distribution](https://vector.dev/docs/about/under-the-hood/architecture/data-model/metric/#distribution) metrics into a summary.
 request
optional

object

Middleware settings for outbound requests.

Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
 adaptive_concurrency
optional

object

Configuration of adaptive concurrency parameters.

These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}
decrease_ratio
optional

number

The fraction of the current value to set the new concurrency limit when decreasing the limit.

Valid values are greater than `0` and less than `1`. Smaller values cause the algorithm to scale back rapidly when latency increases.

Note that the new limit is rounded down after applying this ratio.
default: 0.9
ewma_alpha
optional

number

The weighting of new measurements compared to older measurements.

Valid values are greater than `0` and less than `1`.

ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4
initial_concurrency
optional

integer

The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).

It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the `adaptive_concurrency_limit` metric.
default: 1
rtt_deviation_scale
optional

number

Scale of RTT deviations which are not considered anomalous.

Valid values are greater than or equal to `0`, and we expect reasonable values to range from `1.0` to `3.0`.

When calculating the past RTT average, we also compute a secondary "deviation" value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5 concurrency
optional

 <oneOf>

Configuration for outbound request concurrency.
none
optional

none

A fixed concurrency of 1.

Only one request can be outstanding at any given time.
adaptive
optional

adaptive

Concurrency will be managed by Vector's [Adaptive Request Concurrency](https://vector.dev/docs/about/under-the-hood/networking/arc/) feature.
Fixed
optional

integer

A fixed amount of concurrency will be allowed.
rate_limit_duration_secs
optional

integer,​null

The time window used for the `rate_limit_num` option.
default: 1
rate_limit_num
optional

integer,​null

The maximum number of requests allowed within the `rate_limit_duration_secs` time window.
default: 9223372036854776000
retry_attempts
optional

integer,​null

The maximum number of retries to make for failed requests.

The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000
retry_initial_backoff_secs
optional

integer,​null

The amount of time to wait before attempting the first retry for a failed request.

After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1
retry_max_duration_secs
optional

integer,​null

The maximum amount of time to wait between retries.
default: 3600
timeout_secs
optional

integer,​null

The time a request can take before being aborted.

Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60 tenant_id
optional

 <oneOf>

The tenant ID to send.

If set, a header named `X-Scope-OrgID` is added to outgoing requests with the value of this setting.

This may be used by Cortex or other remote services to identify the tenant making the request.
Option 1
optional

string

A templated field.

In many cases, components can be configured so that part of their functionality is customized on a per-event basis. For example, a sink that writes events to files can use an event field to determine the filename each event is written to.

By using `Template`, users can specify either fixed strings or templated strings. Templated strings use a common syntax to refer to fields in an event that is used as the input data when rendering the template. An example of a fixed string is `my-file.log`. An example of a template string is `my-file-{{key}}.log`, where `{{key}}` is the key's value when the template is rendered into a string.
 tls
optional

 <oneOf>

TLS configuration.
 Option 1
optional

object

TLS configuration.
alpn_protocols
optional

array,​null

Sets the list of supported ALPN protocols.

Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order in which they are defined.
 ca_file
optional

 <oneOf>

Absolute path to an additional CA certificate file.

The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
 crt_file
optional

 <oneOf>

Absolute path to a certificate file used to identify this server.

The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format.

If this is set, and is not a PKCS#12 archive, `key_file` must also be set.
Option 1
optional

string

A file path.
 key_file
optional

 <oneOf>

Absolute path to a private key file used to identify this server.

The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
key_pass
optional

string,​null

Passphrase used to unlock the encrypted key file.

This has no effect unless `key_file` is set.
verify_certificate
optional

boolean,​null

Enables certificate verification.

If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.

Relevant for both incoming and outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the validity of certificates.
verify_hostname
optional

boolean,​null

Enables hostname verification.

If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.

Only relevant for outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the remote hostname.
{% /tab %}

{% tab title="remotewrite-request-example" %}

```yaml
acknowledgements:
  enabled: null
auth: ''
aws: ''
batch:
  max_bytes: null
  max_events: null
  timeout_secs: null
buckets:
  - 0.005
  - 0.01
  - 0.025
  - 0.05
  - 0.1
  - 0.25
  - 0.5
  - 1
  - 2.5
  - 5
  - 10
compression: snappy
default_namespace: string
endpoint: string
quantiles:
  - 0.5
  - 0.75
  - 0.9
  - 0.95
  - 0.99
request:
  adaptive_concurrency:
    decrease_ratio: 0.9
    ewma_alpha: 0.4
    initial_concurrency: 1
    rtt_deviation_scale: 2.5
  rate_limit_duration_secs: 1
  rate_limit_num: 9223372036854776000
  retry_attempts: 9223372036854776000
  retry_initial_backoff_secs: 1
  retry_max_duration_secs: 3600
  timeout_secs: 60
tenant_id: ''
tls: ''
type: prometheus_remote_write
```

{% /tab %}
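The generated example above uses placeholder values (`string`, `''`). As a rough sketch, a working pipeline entry for this sink might look like the following — the component names, endpoint URL, and file paths are hypothetical, not values taken from this reference:

```yaml
sinks:
  prometheus_backend:                  # hypothetical component name
    type: prometheus_remote_write
    inputs:
      - my_metrics_source              # hypothetical upstream component
    endpoint: https://prometheus.example.com/api/v1/write
    compression: snappy
    tls:
      crt_file: /etc/vector/tls/client.crt   # placeholder paths for the
      key_file: /etc/vector/tls/client.key   # TLS options described above
```

Only `endpoint` is required; the remaining fields show how the optional compression and TLS settings documented above slot into a configuration.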

### Pulsar{% #pulsarsink %}

Configuration for the `pulsar` sink.
OptionsSchema
{% tab title="pulsarsink-request-model" %}
FieldrequiredTypeDescription acknowledgements
optional

object

Controls how acknowledgements are handled for this sink.

See [End-to-end Acknowledgements](https://vector.dev/docs/about/under-the-hood/architecture/end-to-end-acknowledgements/) for more information on how event acknowledgement is handled.
enabled
optional

boolean,​null

Whether or not end-to-end acknowledgements are enabled.

When enabled for a sink, any source connected to that sink, where the source supports end-to-end acknowledgements as well, waits for events to be acknowledged by the sink before acknowledging them at the source.

Enabling or disabling acknowledgements at the sink level takes precedence over any global [`acknowledgements`](https://vector.dev/docs/reference/configuration/global-options/#acknowledgements) configuration.
default: null auth
optional

 <oneOf>

Authentication configuration.
 Option 1
optional

object

Authentication configuration.
name
optional

string,​null

Basic authentication name/username.

This can be used either for basic authentication (username/password) or JWT authentication. When used for JWT, the value should be `token`.
 oauth2
optional

 <oneOf>

OAuth2-specific authentication configuration.
 Option 1
optional

object

OAuth2-specific authentication configuration.
audience
optional

string,​null

The OAuth2 audience.
credentials_url
required

string

The credentials URL.

A data URL is also supported.
issuer_url
required

string

The issuer URL.
scope
optional

string,​null

The OAuth2 scope.
 token
optional

 <oneOf>

Basic authentication password/token.

This can be used either for basic authentication (username/password) or JWT authentication. When used for JWT, the value should be the signed JWT, in the compact representation.
Option 1
optional

string

Wrapper for sensitive strings containing credentials.
 batch
optional

object

Event batching behavior.
max_bytes
optional

integer,​null

The maximum size of a batch before it is flushed.
default: nullmax_events
optional

integer,​null

The maximum number of events in a batch before it is flushed.

Note that this is an unsigned 32-bit integer, which is a smaller capacity than many of the other sink batch settings.
default: null compression
optional

 <oneOf>

Supported compression types for Pulsar.
none
optional

none

No compression.
lz4
optional

lz4

LZ4.
zlib
optional

zlib

Zlib.
zstd
optional

zstd

Zstandard.
snappy
optional

snappy

Snappy.
 encoding
required



Configures how events are encoded into raw bytes.
 Avro
optional

object

Encodes an event as an [Apache Avro](https://avro.apache.org/) message.
 avro
required

object

Apache Avro-specific encoder options.
schema
required

string

The Avro schema.
codec
required

avro

Encodes an event as an [Apache Avro](https://avro.apache.org/) message.
 Csv
optional



Encodes an event as a CSV message.

This codec must be configured with fields to encode.
 csv
required

object

The CSV Serializer Options.
capacity
optional

integer

Set the capacity (in bytes) of the internal buffer used in the CSV writer. This defaults to a reasonable setting.
delimiter
optional

integer

The field delimiter to use when writing CSV.
double_quote
optional

boolean

Enable double quote escapes.

This is enabled by default, but it may be disabled. When disabled, quotes in field data are escaped instead of doubled.
escape
optional

integer

The escape character to use when writing CSV.

In some variants of CSV, quotes are escaped using a special escape character like \ (instead of escaping quotes by doubling them).

To use this, `double_quote` must also be disabled; otherwise, this setting is ignored.
fields
required

[string]

Configures the fields that will be encoded, as well as the order in which they appear in the output.

If a field is not present in the event, the output will be an empty string.

Values of type `Array`, `Object`, and `Regex` are not supported and the output will be an empty string.
quote
optional

integer

The quote character to use when writing CSV.
 quote_style
optional

 <oneOf>

The quoting style to use when writing CSV data.
always
optional

always

This puts quotes around every field. Always.
necessary
optional

necessary

This puts quotes around fields only when necessary. They are necessary when fields contain a quote, delimiter or record terminator. Quotes are also necessary when writing an empty record (which is indistinguishable from a record with one empty field).
non_numeric
optional

non_numeric

This puts quotes around all fields that are non-numeric. Namely, when writing a field that does not parse as a valid float or integer, then quotes will be used even if they aren't strictly necessary.
never
optional

never

This never writes quotes, even if it would produce invalid CSV data.
codec
required

csv

Encodes an event as a CSV message.

This codec must be configured with fields to encode.
 Gelf
optional

object

Encodes an event as a [GELF](https://docs.graylog.org/docs/gelf) message.
codec
required

gelf

Encodes an event as a [GELF](https://docs.graylog.org/docs/gelf) message.
 Json
optional



Encodes an event as [JSON](https://www.json.org/).
 metric_tag_values
optional

 <oneOf>

Controls how metric tag values are encoded.

When set to `single`, only the last non-bare value of tags are displayed with the metric. When set to `full`, all metric tags are exposed as separate assignments.
single
optional

single

Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
full
optional

full

All tags are exposed as arrays of either string or null values.
codec
required

json

Encodes an event as [JSON](https://www.json.org/).
 Logfmt
optional

object

Encodes an event as a [logfmt](https://brandur.org/logfmt) message.
codec
required

logfmt

Encodes an event as a [logfmt](https://brandur.org/logfmt) message.
 Native
optional

object

Encodes an event in the [native Protocol Buffers format](https://github.com/vectordotdev/vector/blob/master/lib/vector-core/proto/event.proto).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
codec
required

native

Encodes an event in the [native Protocol Buffers format](https://github.com/vectordotdev/vector/blob/master/lib/vector-core/proto/event.proto).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
 NativeJson
optional

object

Encodes an event in the [native JSON format](https://github.com/vectordotdev/vector/blob/master/lib/codecs/tests/data/native_encoding/schema.cue).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
codec
required

native_json

Encodes an event in the [native JSON format](https://github.com/vectordotdev/vector/blob/master/lib/codecs/tests/data/native_encoding/schema.cue).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
 RawMessage
optional

object

No encoding.

This encoding uses the `message` field of a log event.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
codec
required

raw_message

No encoding.

This encoding uses the `message` field of a log event.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
 Text
optional



Plain text encoding.

This encoding uses the `message` field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
 metric_tag_values
optional

 <oneOf>

Controls how metric tag values are encoded.

When set to `single`, only the last non-bare value of tags are displayed with the metric. When set to `full`, all metric tags are exposed as separate assignments.
single
optional

single

Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
full
optional

full

All tags are exposed as arrays of either string or null values.
codec
required

text

Plain text encoding.

This encoding uses the `message` field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
except_fields
optional

array,​null

List of fields that are excluded from the encoded event.
only_fields
optional

array,​null

List of fields that are included in the encoded event.
 timestamp_format
optional

 <oneOf>

Format used for timestamp fields.
 Option 1
optional

 <oneOf>

The format in which a timestamp should be represented.
unix
optional

unix

Represent the timestamp as a Unix timestamp.
rfc3339
optional

rfc3339

Represent the timestamp as an RFC 3339 timestamp.
endpoint
required

string

The endpoint to which the Pulsar client should connect.

The endpoint should specify the Pulsar protocol (`pulsar://`) and port.
 partition_key_field
optional

 <oneOf>

The log field name or tags key to use for the partition key.

If the field does not exist in the log event or metric tags, a blank value will be used.

If omitted, the key is not sent.

Pulsar uses a hash of the key to choose the topic-partition or uses round-robin if the record has no key.
Option 1
optional

string

An optional path that deserializes an empty string to `None`.
producer_name
optional

string,​null

The name of the producer. If not specified, the default name assigned by Pulsar is used.
 properties_key
optional

 <oneOf>

The log field name to use for the Pulsar properties key.

If omitted, no properties will be written.
Option 1
optional

string

An optional path that deserializes an empty string to `None`.
topic
required

string

A templated field.

The Pulsar topic name to write events to.
{% /tab %}

{% tab title="pulsarsink-request-example" %}

```yaml
acknowledgements:
  enabled: null
auth: ''
batch:
  max_bytes: null
  max_events: null
compression: none
encoding: ''
endpoint: string
partition_key_field: ''
producer_name: string
properties_key: ''
topic: string
type: pulsar
```

{% /tab %}
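To make the required fields concrete, here is a minimal sketch of a `pulsar` sink configuration; the component names, endpoint, and topic are illustrative placeholders:

```yaml
sinks:
  pulsar_out:                          # hypothetical component name
    type: pulsar
    inputs:
      - my_log_source                  # hypothetical upstream component
    endpoint: pulsar://pulsar.example.com:6650   # pulsar protocol and port
    topic: vector-events               # templated topic name
    encoding:
      codec: json                      # one of the codecs listed above
```

The `endpoint`, `topic`, and `encoding` fields are the required ones; authentication, batching, and compression are optional and default as described above.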

### Redis{% #redissink %}

Configuration for the `redis` sink.
OptionsSchema
{% tab title="redissink-request-model" %}
FieldrequiredTypeDescription acknowledgements
optional

object

Controls how acknowledgements are handled for this sink.

See [End-to-end Acknowledgements](https://vector.dev/docs/about/under-the-hood/architecture/end-to-end-acknowledgements/) for more information on how event acknowledgement is handled.
enabled
optional

boolean,​null

Whether or not end-to-end acknowledgements are enabled.

When enabled for a sink, any source connected to that sink, where the source supports end-to-end acknowledgements as well, waits for events to be acknowledged by the sink before acknowledging them at the source.

Enabling or disabling acknowledgements at the sink level takes precedence over any global [`acknowledgements`](https://vector.dev/docs/reference/configuration/global-options/#acknowledgements) configuration.
default: null batch
optional

object

Event batching behavior.
max_bytes
optional

integer,​null

The maximum size of a batch that is processed by a sink.

This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: nullmax_events
optional

integer,​null

The maximum size of a batch before it is flushed.
default: nulltimeout_secs
optional

number,​null

The maximum age of a batch before it is flushed.
default: null data_type
optional

 <oneOf>

Redis data type to store messages in.
list
optional

list

The Redis `list` type.

This resembles a deque, where messages can be popped and pushed from either end.

This is the default.
channel
optional

channel

The Redis `channel` type.

Redis channels function in a pub/sub fashion, allowing many-to-many broadcasting and receiving.
 encoding
required



Configures how events are encoded into raw bytes.
 Avro
optional

object

Encodes an event as an [Apache Avro](https://avro.apache.org/) message.
 avro
required

object

Apache Avro-specific encoder options.
schema
required

string

The Avro schema.
codec
required

avro

Encodes an event as an [Apache Avro](https://avro.apache.org/) message.
 Csv
optional



Encodes an event as a CSV message.

This codec must be configured with fields to encode.
 csv
required

object

The CSV Serializer Options.
capacity
optional

integer

Set the capacity (in bytes) of the internal buffer used in the CSV writer. This defaults to a reasonable setting.
delimiter
optional

integer

The field delimiter to use when writing CSV.
double_quote
optional

boolean

Enable double quote escapes.

This is enabled by default, but it may be disabled. When disabled, quotes in field data are escaped instead of doubled.
escape
optional

integer

The escape character to use when writing CSV.

In some variants of CSV, quotes are escaped using a special escape character like \ (instead of escaping quotes by doubling them).

To use this, `double_quote` must also be disabled; otherwise, this setting is ignored.
fields
required

[string]

Configures the fields that will be encoded, as well as the order in which they appear in the output.

If a field is not present in the event, the output will be an empty string.

Values of type `Array`, `Object`, and `Regex` are not supported and the output will be an empty string.
quote
optional

integer

The quote character to use when writing CSV.
 quote_style
optional

 <oneOf>

The quoting style to use when writing CSV data.
always
optional

always

This puts quotes around every field. Always.
necessary
optional

necessary

This puts quotes around fields only when necessary. They are necessary when fields contain a quote, delimiter or record terminator. Quotes are also necessary when writing an empty record (which is indistinguishable from a record with one empty field).
non_numeric
optional

non_numeric

This puts quotes around all fields that are non-numeric. Namely, when writing a field that does not parse as a valid float or integer, then quotes will be used even if they aren't strictly necessary.
never
optional

never

This never writes quotes, even if it would produce invalid CSV data.
codec
required

csv

Encodes an event as a CSV message.

This codec must be configured with fields to encode.
 Gelf
optional

object

Encodes an event as a [GELF](https://docs.graylog.org/docs/gelf) message.
codec
required

gelf

Encodes an event as a [GELF](https://docs.graylog.org/docs/gelf) message.
 Json
optional



Encodes an event as [JSON](https://www.json.org/).
 metric_tag_values
optional

 <oneOf>

Controls how metric tag values are encoded.

When set to `single`, only the last non-bare value of tags are displayed with the metric. When set to `full`, all metric tags are exposed as separate assignments.
single
optional

single

Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
full
optional

full

All tags are exposed as arrays of either string or null values.
codec
required

json

Encodes an event as [JSON](https://www.json.org/).
 Logfmt
optional

object

Encodes an event as a [logfmt](https://brandur.org/logfmt) message.
codec
required

logfmt

Encodes an event as a [logfmt](https://brandur.org/logfmt) message.
 Native
optional

object

Encodes an event in the [native Protocol Buffers format](https://github.com/vectordotdev/vector/blob/master/lib/vector-core/proto/event.proto).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
codec
required

native

Encodes an event in the [native Protocol Buffers format](https://github.com/vectordotdev/vector/blob/master/lib/vector-core/proto/event.proto).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
 NativeJson
optional

object

Encodes an event in the [native JSON format](https://github.com/vectordotdev/vector/blob/master/lib/codecs/tests/data/native_encoding/schema.cue).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
codec
required

native_json

Encodes an event in the [native JSON format](https://github.com/vectordotdev/vector/blob/master/lib/codecs/tests/data/native_encoding/schema.cue).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
 RawMessage
optional

object

No encoding.

This encoding uses the `message` field of a log event.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
codec
required

raw_message

No encoding.

This encoding uses the `message` field of a log event.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
 Text
optional



Plain text encoding.

This encoding uses the `message` field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
 metric_tag_values
optional

 <oneOf>

Controls how metric tag values are encoded.

When set to `single`, only the last non-bare value of tags are displayed with the metric. When set to `full`, all metric tags are exposed as separate assignments.
single
optional

single

Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
full
optional

full

All tags are exposed as arrays of either string or null values.
codec
required

text

Plain text encoding.

This encoding uses the `message` field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
except_fields
optional

array,​null

List of fields that are excluded from the encoded event.
only_fields
optional

array,​null

List of fields that are included in the encoded event.
 timestamp_format
optional

 <oneOf>

Format used for timestamp fields.
 Option 1
optional

 <oneOf>

The format in which a timestamp should be represented.
unix
optional

unix

Represent the timestamp as a Unix timestamp.
rfc3339
optional

rfc3339

Represent the timestamp as an RFC 3339 timestamp.
endpoint
required

string

The URL of the Redis endpoint to connect to.

The URL *must* take the form of `protocol://server:port/db`, where the protocol can be either `redis`, or `rediss` for connections secured via TLS.
key
required

string

A templated field.

The Redis key to publish messages to.
 list_option
optional

 <oneOf>

List-specific options.
 Option 1
optional

object

List-specific options.
 method
required

 <oneOf>

The method to use for pushing messages into a `list`.
rpush
optional

rpush

Use the `rpush` method.

This pushes messages onto the tail of the list.

This is the default.
lpush
optional

lpush

Use the `lpush` method.

This pushes messages onto the head of the list.
 request
optional

object

Middleware settings for outbound requests.

Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
 adaptive_concurrency
optional

object

Configuration of adaptive concurrency parameters.

These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}decrease_ratio
optional

number

The fraction of the current value to set the new concurrency limit when decreasing the limit.

Valid values are greater than `0` and less than `1`. Smaller values cause the algorithm to scale back rapidly when latency increases.

Note that the new limit is rounded down after applying this ratio.
default: 0.9ewma_alpha
optional

number

The weighting of new measurements compared to older measurements.

Valid values are greater than `0` and less than `1`.

ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4initial_concurrency
optional

integer

The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).

It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the `adaptive_concurrency_limit` metric.
default: 1rtt_deviation_scale
optional

number

Scale of RTT deviations which are not considered anomalous.

Valid values are greater than or equal to `0`, and we expect reasonable values to range from `1.0` to `3.0`.

When calculating the past RTT average, we also compute a secondary "deviation" value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5 concurrency
optional

 <oneOf>

Configuration for outbound request concurrency.
none
optional

none

A fixed concurrency of 1.

Only one request can be outstanding at any given time.
adaptive
optional

adaptive

Concurrency will be managed by Vector's [Adaptive Request Concurrency](https://vector.dev/docs/about/under-the-hood/networking/arc/) feature.
Fixed
optional

integer

A fixed amount of concurrency will be allowed.
rate_limit_duration_secs
optional

integer,​null

The time window used for the `rate_limit_num` option.
default: 1rate_limit_num
optional

integer,​null

The maximum number of requests allowed within the `rate_limit_duration_secs` time window.
default: 9223372036854776000retry_attempts
optional

integer,​null

The maximum number of retries to make for failed requests.

The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000retry_initial_backoff_secs
optional

integer,​null

The amount of time to wait before attempting the first retry for a failed request.

After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1retry_max_duration_secs
optional

integer,​null

The maximum amount of time to wait between retries.
default: 3600timeout_secs
optional

integer,​null

The time a request can take before being aborted.

Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60
{% /tab %}

{% tab title="redissink-request-example" %}

```yaml
acknowledgements:
  enabled: null
batch:
  max_bytes: null
  max_events: null
  timeout_secs: null
data_type: list
encoding: ''
endpoint: string
key: string
list_option: ''
request:
  adaptive_concurrency:
    decrease_ratio: 0.9
    ewma_alpha: 0.4
    initial_concurrency: 1
    rtt_deviation_scale: 2.5
  rate_limit_duration_secs: 1
  rate_limit_num: 9223372036854776000
  retry_attempts: 9223372036854776000
  retry_initial_backoff_secs: 1
  retry_max_duration_secs: 3600
  timeout_secs: 60
type: redis
```

{% /tab %}
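As a sketch of the options described above, a minimal `redis` sink writing to a list might look like this; the component names, endpoint, and key are illustrative placeholders:

```yaml
sinks:
  redis_out:                           # hypothetical component name
    type: redis
    inputs:
      - my_log_source                  # hypothetical upstream component
    endpoint: redis://127.0.0.1:6379/0 # protocol://server:port/db
    key: vector-logs                   # templated Redis key
    data_type: list                    # or `channel` for pub/sub
    list_option:
      method: rpush                    # push onto the tail (the default)
    encoding:
      codec: json
```

With `data_type: channel`, the `list_option` block is not applicable and can be omitted.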

### Sematext Logs{% #sematextlogs %}

Configuration for the `sematext_logs` sink.
OptionsSchema
{% tab title="sematextlogs-request-model" %}
FieldrequiredTypeDescription acknowledgements
optional

object

Controls how acknowledgements are handled for this sink.

See [End-to-end Acknowledgements](https://vector.dev/docs/about/under-the-hood/architecture/end-to-end-acknowledgements/) for more information on how event acknowledgement is handled.
enabled
optional

boolean,​null

Whether or not end-to-end acknowledgements are enabled.

When enabled for a sink, any source connected to that sink, where the source supports end-to-end acknowledgements as well, waits for events to be acknowledged by the sink before acknowledging them at the source.

Enabling or disabling acknowledgements at the sink level takes precedence over any global [`acknowledgements`](https://vector.dev/docs/reference/configuration/global-options/#acknowledgements) configuration.
default: null batch
optional

object

Event batching behavior.
max_bytes
optional

integer,​null

The maximum size of a batch that is processed by a sink.

This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: nullmax_events
optional

integer,​null

The maximum size of a batch before it is flushed.
default: nulltimeout_secs
optional

number,​null

The maximum age of a batch before it is flushed.
default: null encoding
optional

object

Transformations to prepare an event for serialization.
except_fields
optional

array,​null

List of fields that are excluded from the encoded event.
only_fields
optional

array,​null

List of fields that are included in the encoded event.
 timestamp_format
optional

 <oneOf>

Format used for timestamp fields.
 Option 1
optional

 <oneOf>

The format in which a timestamp should be represented.
unix
optional

unix

Represent the timestamp as a Unix timestamp.
rfc3339
optional

rfc3339

Represent the timestamp as an RFC 3339 timestamp.
endpoint
optional

string,​null

The endpoint to send data to.

Setting this option overrides the `region` option.
 region
optional

 <oneOf>

The Sematext region to send data to.
us
optional

us

United States
eu
optional

eu

Europe
 request
optional

object

Middleware settings for outbound requests.

Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
 adaptive_concurrency
optional

object

Configuration of adaptive concurrency parameters.

These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}decrease_ratio
optional

number

The fraction of the current value to set the new concurrency limit when decreasing the limit.

Valid values are greater than `0` and less than `1`. Smaller values cause the algorithm to scale back rapidly when latency increases.

Note that the new limit is rounded down after applying this ratio.
default: 0.9ewma_alpha
optional

number

The weighting of new measurements compared to older measurements.

Valid values are greater than `0` and less than `1`.

ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4initial_concurrency
optional

integer

The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).

It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the `adaptive_concurrency_limit` metric.
default: 1rtt_deviation_scale
optional

number

Scale of RTT deviations which are not considered anomalous.

Valid values are greater than or equal to `0`, and we expect reasonable values to range from `1.0` to `3.0`.

When calculating the past RTT average, we also compute a secondary "deviation" value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5 concurrency
optional

 <oneOf>

Configuration for outbound request concurrency.
none
optional

none

A fixed concurrency of 1.

Only one request can be outstanding at any given time.
adaptive
optional

adaptive

Concurrency will be managed by Vector's [Adaptive Request Concurrency](https://vector.dev/docs/about/under-the-hood/networking/arc/) feature.
Fixed
optional

integer

A fixed amount of concurrency will be allowed.
rate_limit_duration_secs
optional

integer,​null

The time window used for the `rate_limit_num` option.
default: 1rate_limit_num
optional

integer,​null

The maximum number of requests allowed within the `rate_limit_duration_secs` time window.
default: 9223372036854776000retry_attempts
optional

integer,​null

The maximum number of retries to make for failed requests.

The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000retry_initial_backoff_secs
optional

integer,​null

The amount of time to wait before attempting the first retry for a failed request.

After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1retry_max_duration_secs
optional

integer,​null

The maximum amount of time to wait between retries.
default: 3600timeout_secs
optional

integer,​null

The time a request can take before being aborted.

Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60token
required

string

The token that is used to write to Sematext.
{% /tab %}

{% tab title="sematextlogs-request-example" %}

```yaml
acknowledgements:
  enabled: null
batch:
  max_bytes: null
  max_events: null
  timeout_secs: null
encoding: {}
endpoint: string
region: us
request:
  adaptive_concurrency:
    decrease_ratio: 0.9
    ewma_alpha: 0.4
    initial_concurrency: 1
    rtt_deviation_scale: 2.5
  rate_limit_duration_secs: 1
  rate_limit_num: 9223372036854776000
  retry_attempts: 9223372036854776000
  retry_initial_backoff_secs: 1
  retry_max_duration_secs: 3600
  timeout_secs: 60
token: string
type: sematext_logs
```

{% /tab %}
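For reference, a minimal `sematext_logs` sink could be sketched as follows; the component names and the environment variable used for the token are assumptions for illustration:

```yaml
sinks:
  sematext_out:                        # hypothetical component name
    type: sematext_logs
    inputs:
      - my_log_source                  # hypothetical upstream component
    region: us                         # or `eu`; ignored if `endpoint` is set
    token: "${SEMATEXT_TOKEN}"         # placeholder; supply your write token
```

Only `token` is required; `region` defaults can be overridden with an explicit `endpoint` as documented above.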

### Sematext Metrics{% #sematextmetrics %}

Configuration for the `sematext_metrics` sink.
OptionsSchema
{% tab title="sematextmetrics-request-model" %}
FieldrequiredTypeDescription acknowledgements
optional

object

Controls how acknowledgements are handled for this sink.

See [End-to-end Acknowledgements](https://vector.dev/docs/about/under-the-hood/architecture/end-to-end-acknowledgements/) for more information on how event acknowledgement is handled.
enabled
optional

boolean,​null

Whether or not end-to-end acknowledgements are enabled.

When enabled for a sink, any source connected to that sink, where the source supports end-to-end acknowledgements as well, waits for events to be acknowledged by the sink before acknowledging them at the source.

Enabling or disabling acknowledgements at the sink level takes precedence over any global [`acknowledgements`](https://vector.dev/docs/reference/configuration/global-options/#acknowledgements) configuration.
default: null
 batch
optional

object

Event batching behavior.
max_bytes
optional

integer,​null

The maximum size of a batch that is processed by a sink.

This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
max_events
optional

integer,​null

The maximum size of a batch before it is flushed.
default: null
timeout_secs
optional

number,​null

The maximum age of a batch before it is flushed.
default: null
default_namespace
required

string

Sets the default namespace for any metrics sent.

This namespace is only used if a metric has no existing namespace. When a namespace is present, it is used as a prefix to the metric name, and separated with a period (`.`).
endpoint
optional

string,​null

The endpoint to send data to.

Setting this option overrides the `region` option.
 region
optional

 <oneOf>

The Sematext region to send data to.
us
optional

us

United States
eu
optional

eu

Europe
 request
optional

object

Middleware settings for outbound requests.

Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
 adaptive_concurrency
optional

object

Configuration of adaptive concurrency parameters.

These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}
decrease_ratio
optional

number

The fraction of the current value to set the new concurrency limit when decreasing the limit.

Valid values are greater than `0` and less than `1`. Smaller values cause the algorithm to scale back rapidly when latency increases.

Note that the new limit is rounded down after applying this ratio.
default: 0.9
ewma_alpha
optional

number

The weighting of new measurements compared to older measurements.

Valid values are greater than `0` and less than `1`.

ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4
initial_concurrency
optional

integer

The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).

It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the `adaptive_concurrency_limit` metric.
default: 1
rtt_deviation_scale
optional

number

Scale of RTT deviations which are not considered anomalous.

Valid values are greater than or equal to `0`, and we expect reasonable values to range from `1.0` to `3.0`.

When calculating the past RTT average, we also compute a secondary "deviation" value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
 concurrency
optional

 <oneOf>

Configuration for outbound request concurrency.
none
optional

none

A fixed concurrency of 1.

Only one request can be outstanding at any given time.
adaptive
optional

adaptive

Concurrency will be managed by Vector's [Adaptive Request Concurrency](https://vector.dev/docs/about/under-the-hood/networking/arc/) feature.
Fixed
optional

integer

A fixed amount of concurrency will be allowed.
rate_limit_duration_secs
optional

integer,​null

The time window used for the `rate_limit_num` option.
default: 1
rate_limit_num
optional

integer,​null

The maximum number of requests allowed within the `rate_limit_duration_secs` time window.
default: 9223372036854776000
retry_attempts
optional

integer,​null

The maximum number of retries to make for failed requests.

The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000
retry_initial_backoff_secs
optional

integer,​null

The amount of time to wait before attempting the first retry for a failed request.

After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1
retry_max_duration_secs
optional

integer,​null

The maximum amount of time to wait between retries.
default: 3600
timeout_secs
optional

integer,​null

The time a request can take before being aborted.

Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60
token
required

string

The token that is used to write to Sematext.
{% /tab %}

{% tab title="sematextmetrics-request-example" %}

```yaml
acknowledgements:
  enabled: null
batch:
  max_bytes: null
  max_events: null
  timeout_secs: null
default_namespace: string
endpoint: string
region: us
request:
  adaptive_concurrency:
    decrease_ratio: 0.9
    ewma_alpha: 0.4
    initial_concurrency: 1
    rtt_deviation_scale: 2.5
  rate_limit_duration_secs: 1
  rate_limit_num: 9223372036854776000
  retry_attempts: 9223372036854776000
  retry_initial_backoff_secs: 1
  retry_max_duration_secs: 3600
  timeout_secs: 60
token: string
type: sematext_metrics
```

{% /tab %}
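
As with the logs sink, a minimal `sematext_metrics` configuration can be much shorter than the generated example. The sketch below assumes placeholder names for the sink ID and upstream component; only the required options (`default_namespace`, `token`) plus a region are set, and everything else uses the defaults listed above.

```yaml
sinks:
  my_sematext_metrics:          # placeholder sink ID
    type: sematext_metrics
    inputs:
      - my_metrics_source       # placeholder upstream component
    region: us
    default_namespace: service  # prefix applied to metrics without a namespace
    token: "${SEMATEXT_TOKEN}"  # write token, e.g. injected via an environment variable
```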

### Socket{% #socketsink %}

Configuration for the `socket` sink.
OptionsSchema
{% tab title="socketsink-request-model" %}
FieldrequiredTypeDescription acknowledgements
optional

object

Controls how acknowledgements are handled for this sink.

See [End-to-end Acknowledgements](https://vector.dev/docs/about/under-the-hood/architecture/end-to-end-acknowledgements/) for more information on how event acknowledgement is handled.
enabled
optional

boolean,​null

Whether or not end-to-end acknowledgements are enabled.

When enabled for a sink, any source connected to that sink, where the source supports end-to-end acknowledgements as well, waits for events to be acknowledged by the sink before acknowledging them at the source.

Enabling or disabling acknowledgements at the sink level takes precedence over any global [`acknowledgements`](https://vector.dev/docs/reference/configuration/global-options/#acknowledgements) configuration.
default: null
 mode
required

 <oneOf>

Socket mode.
{% /tab %}

{% tab title="socketsink-request-example" %}

```yaml
acknowledgements:
  enabled: null
type: socket
```

{% /tab %}
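
The generated example above omits the socket mode options. Based on the upstream `socket` sink, a TCP configuration typically looks like the following sketch; the sink ID, upstream component name, and address are placeholders.

```yaml
sinks:
  my_socket_sink:             # placeholder sink ID
    type: socket
    inputs:
      - my_source             # placeholder upstream component
    mode: tcp                 # socket mode (for example tcp, udp, or a unix variant)
    address: 127.0.0.1:9000   # placeholder destination host:port
    encoding:
      codec: json             # encode each event as a JSON message
```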

### Splunk HEC Logs{% #heclogssink %}

Configuration for the `splunk_hec_logs` sink.
OptionsSchema
{% tab title="heclogssink-request-model" %}
FieldrequiredTypeDescription acknowledgements
optional



Splunk HEC acknowledgement configuration.
indexer_acknowledgements_enabled
optional

boolean

Controls if the sink integrates with [Splunk HEC indexer acknowledgements](https://docs.splunk.com/Documentation/Splunk/8.2.3/Data/AboutHECIDXAck) for end-to-end acknowledgements.
max_pending_acks
optional

integer

The maximum number of pending acknowledgements from events sent to the Splunk HEC collector.

Once reached, the sink begins applying backpressure.
query_interval
optional

integer

The amount of time to wait between queries to the Splunk HEC indexer acknowledgement endpoint.
retry_limit
optional

integer

The maximum number of times an acknowledgement ID is queried for its status.
enabled
optional

boolean,​null

Whether or not end-to-end acknowledgements are enabled.

When enabled for a sink, any source connected to that sink, where the source supports end-to-end acknowledgements as well, waits for events to be acknowledged by the sink before acknowledging them at the source.

Enabling or disabling acknowledgements at the sink level takes precedence over any global [`acknowledgements`](https://vector.dev/docs/reference/configuration/global-options/#acknowledgements) configuration.
default: null
auto_extract_timestamp
optional

boolean,​null

Passes the `auto_extract_timestamp` option to Splunk.

This option is only relevant to Splunk v8.x and above, and is only applied when `endpoint_target` is set to `event`.

Setting this to `true` causes Splunk to extract the timestamp from the message text rather than use the timestamp embedded in the event. The timestamp must be in the format `yyyy-mm-dd hh:mm:ss`.
 batch
optional

object

Event batching behavior.
max_bytes
optional

integer,​null

The maximum size of a batch that is processed by a sink.

This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
max_events
optional

integer,​null

The maximum size of a batch before it is flushed.
default: null
timeout_secs
optional

number,​null

The maximum age of a batch before it is flushed.
default: null
 compression
optional

 <oneOf>

Compression configuration.

All compression algorithms use the default compression level unless otherwise specified.
 Option 1
optional

 <oneOf>

Compression algorithm.
none
optional

none

No compression.
gzip
optional

gzip

[Gzip](https://www.gzip.org/) compression.
zlib
optional

zlib

[Zlib](https://zlib.net/) compression.
zstd
optional

zstd

[Zstandard](https://facebook.github.io/zstd/) compression.
 Option 2
optional

object

Compression algorithm and compression level.
 algorithm
required

 <oneOf>

Compression algorithm.
none
optional

none

No compression.
gzip
optional

gzip

[Gzip](https://www.gzip.org/) compression.
zlib
optional

zlib

[Zlib](https://zlib.net/) compression.
zstd
optional

zstd

[Zstandard](https://facebook.github.io/zstd/) compression.
level
optional

enum

Compression level. Allowed enum values: `none,fast,best,default,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21`
default_token
required

string

Default Splunk HEC token.

If an event has a token set in its secrets (`splunk_hec_token`), it prevails over the one set here.
 encoding
required



Configures how events are encoded into raw bytes.
 Avro
optional

object

Encodes an event as an [Apache Avro](https://avro.apache.org/) message.
 avro
required

object

Apache Avro-specific encoder options.
schema
required

string

The Avro schema.
codec
required

avro

Encodes an event as an [Apache Avro](https://avro.apache.org/) message.
 Csv
optional



Encodes an event as a CSV message.

This codec must be configured with fields to encode.
 csv
required

object

The CSV Serializer Options.
capacity
optional

integer

Set the capacity (in bytes) of the internal buffer used in the CSV writer. This defaults to a reasonable setting.
delimiter
optional

integer

The field delimiter to use when writing CSV.
double_quote
optional

boolean

Enable double quote escapes.

This is enabled by default, but it may be disabled. When disabled, quotes in field data are escaped instead of doubled.
escape
optional

integer

The escape character to use when writing CSV.

In some variants of CSV, quotes are escaped using a special escape character like `\` (instead of escaping quotes by doubling them).

To use this, `double_quote` must also be disabled; otherwise, this option is ignored.
fields
required

[string]

Configures the fields that will be encoded, as well as the order in which they appear in the output.

If a field is not present in the event, the output will be an empty string.

Values of type `Array`, `Object`, and `Regex` are not supported and the output will be an empty string.
quote
optional

integer

The quote character to use when writing CSV.
 quote_style
optional

 <oneOf>

The quoting style to use when writing CSV data.
always
optional

always

This puts quotes around every field. Always.
necessary
optional

necessary

This puts quotes around fields only when necessary. They are necessary when fields contain a quote, delimiter or record terminator. Quotes are also necessary when writing an empty record (which is indistinguishable from a record with one empty field).
non_numeric
optional

non_numeric

This puts quotes around all fields that are non-numeric. Namely, when writing a field that does not parse as a valid float or integer, then quotes will be used even if they aren't strictly necessary.
never
optional

never

This never writes quotes, even if it would produce invalid CSV data.
codec
required

csv

Encodes an event as a CSV message.

This codec must be configured with fields to encode.
 Gelf
optional

object

Encodes an event as a [GELF](https://docs.graylog.org/docs/gelf) message.
codec
required

gelf

Encodes an event as a [GELF](https://docs.graylog.org/docs/gelf) message.
 Json
optional



Encodes an event as [JSON](https://www.json.org/).
 metric_tag_values
optional

 <oneOf>

Controls how metric tag values are encoded.

When set to `single`, only the last non-bare value of tags are displayed with the metric. When set to `full`, all metric tags are exposed as separate assignments.
single
optional

single

Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
full
optional

full

All tags are exposed as arrays of either string or null values.
codec
required

json

Encodes an event as [JSON](https://www.json.org/).
 Logfmt
optional

object

Encodes an event as a [logfmt](https://brandur.org/logfmt) message.
codec
required

logfmt

Encodes an event as a [logfmt](https://brandur.org/logfmt) message.
 Native
optional

object

Encodes an event in the [native Protocol Buffers format](https://github.com/vectordotdev/vector/blob/master/lib/vector-core/proto/event.proto).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
codec
required

native

Encodes an event in the [native Protocol Buffers format](https://github.com/vectordotdev/vector/blob/master/lib/vector-core/proto/event.proto).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
 NativeJson
optional

object

Encodes an event in the [native JSON format](https://github.com/vectordotdev/vector/blob/master/lib/codecs/tests/data/native_encoding/schema.cue).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
codec
required

native_json

Encodes an event in the [native JSON format](https://github.com/vectordotdev/vector/blob/master/lib/codecs/tests/data/native_encoding/schema.cue).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
 RawMessage
optional

object

No encoding.

This encoding uses the `message` field of a log event.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
codec
required

raw_message

No encoding.

This encoding uses the `message` field of a log event.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
 Text
optional



Plain text encoding.

This encoding uses the `message` field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
 metric_tag_values
optional

 <oneOf>

Controls how metric tag values are encoded.

When set to `single`, only the last non-bare value of tags are displayed with the metric. When set to `full`, all metric tags are exposed as separate assignments.
single
optional

single

Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
full
optional

full

All tags are exposed as arrays of either string or null values.
codec
required

text

Plain text encoding.

This encoding uses the `message` field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
except_fields
optional

array,​null

List of fields that are excluded from the encoded event.
only_fields
optional

array,​null

List of fields that are included in the encoded event.
 timestamp_format
optional

 <oneOf>

Format used for timestamp fields.
 Option 1
optional

 <oneOf>

The format in which a timestamp should be represented.
unix
optional

unix

Represent the timestamp as a Unix timestamp.
rfc3339
optional

rfc3339

Represent the timestamp as a RFC 3339 timestamp.
endpoint
required

uri

The base URL of the Splunk instance.

The scheme (`http` or `https`) must be specified. No path should be included since the paths defined by the [`Splunk`](https://docs.splunk.com/Documentation/Splunk/8.0.0/Data/HECRESTendpoints) API are used.
 endpoint_target
optional

 <oneOf>

Splunk HEC endpoint configuration.
raw
optional

raw

Events are sent to the [raw endpoint](https://docs.splunk.com/Documentation/Splunk/8.0.0/RESTREF/RESTinput#services.2Fcollector.2Fraw).

When the raw endpoint is used, configured [event metadata](https://docs.splunk.com/Documentation/Splunk/latest/Data/FormateventsforHTTPEventCollector#Event_metadata) is sent as query parameters on the request, except for the `timestamp` field.
event
optional

event

Events are sent to the [event endpoint](https://docs.splunk.com/Documentation/Splunk/8.0.0/RESTREF/RESTinput#services.2Fcollector.2Fevent).

When the event endpoint is used, configured [event metadata](https://docs.splunk.com/Documentation/Splunk/latest/Data/FormateventsforHTTPEventCollector#Event_metadata) is sent directly with each event.
host_key
optional

string

Overrides the name of the log field used to retrieve the hostname to send to Splunk HEC.

By default, the [global `log_schema.host_key` option](https://vector.dev/docs/reference/configuration/global-options/#log_schema.host_key) is used.
 index
optional

 <oneOf>

The name of the index to send events to.

If not specified, the default index defined within Splunk is used.
Option 1
optional

string

A templated field.

In many cases, components can be configured so that part of the component's functionality can be customized on a per-event basis. For example, you might have a sink that writes events to a file and want to specify which file an event goes to by using an event field as part of the filename.

By using `Template`, users can specify either fixed strings or templated strings. Templated strings use a common syntax to refer to fields in an event that is used as the input data when rendering the template. An example of a fixed string is `my-file.log`. An example of a template string is `my-file-{{key}}.log`, where `{{key}}` is the key's value when the template is rendered into a string.
indexed_fields
optional

[string]

Fields to be [added to Splunk index](https://docs.splunk.com/Documentation/Splunk/8.0.0/Data/IFXandHEC).
 request
optional

object

Middleware settings for outbound requests.

Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
 adaptive_concurrency
optional

object

Configuration of adaptive concurrency parameters.

These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}
decrease_ratio
optional

number

The fraction of the current value to set the new concurrency limit when decreasing the limit.

Valid values are greater than `0` and less than `1`. Smaller values cause the algorithm to scale back rapidly when latency increases.

Note that the new limit is rounded down after applying this ratio.
default: 0.9
ewma_alpha
optional

number

The weighting of new measurements compared to older measurements.

Valid values are greater than `0` and less than `1`.

ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4
initial_concurrency
optional

integer

The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).

It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the `adaptive_concurrency_limit` metric.
default: 1
rtt_deviation_scale
optional

number

Scale of RTT deviations which are not considered anomalous.

Valid values are greater than or equal to `0`, and we expect reasonable values to range from `1.0` to `3.0`.

When calculating the past RTT average, we also compute a secondary "deviation" value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
 concurrency
optional

 <oneOf>

Configuration for outbound request concurrency.
none
optional

none

A fixed concurrency of 1.

Only one request can be outstanding at any given time.
adaptive
optional

adaptive

Concurrency will be managed by Vector's [Adaptive Request Concurrency](https://vector.dev/docs/about/under-the-hood/networking/arc/) feature.
Fixed
optional

integer

A fixed amount of concurrency will be allowed.
rate_limit_duration_secs
optional

integer,​null

The time window used for the `rate_limit_num` option.
default: 1
rate_limit_num
optional

integer,​null

The maximum number of requests allowed within the `rate_limit_duration_secs` time window.
default: 9223372036854776000
retry_attempts
optional

integer,​null

The maximum number of retries to make for failed requests.

The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000
retry_initial_backoff_secs
optional

integer,​null

The amount of time to wait before attempting the first retry for a failed request.

After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1
retry_max_duration_secs
optional

integer,​null

The maximum amount of time to wait between retries.
default: 3600
timeout_secs
optional

integer,​null

The time a request can take before being aborted.

Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60
 source
optional

 <oneOf>

The source of events sent to this sink.

This is typically the filename the logs originated from.

If unset, the Splunk collector sets it.
Option 1
optional

string

A templated field.

In many cases, components can be configured so that part of the component's functionality can be customized on a per-event basis. For example, you might have a sink that writes events to a file and want to specify which file an event goes to by using an event field as part of the filename.

By using `Template`, users can specify either fixed strings or templated strings. Templated strings use a common syntax to refer to fields in an event that is used as the input data when rendering the template. An example of a fixed string is `my-file.log`. An example of a template string is `my-file-{{key}}.log`, where `{{key}}` is the key's value when the template is rendered into a string.
 sourcetype
optional

 <oneOf>

The sourcetype of events sent to this sink.

If unset, Splunk defaults to `httpevent`.
Option 1
optional

string

A templated field.

In many cases, components can be configured so that part of the component's functionality can be customized on a per-event basis. For example, you might have a sink that writes events to a file and want to specify which file an event goes to by using an event field as part of the filename.

By using `Template`, users can specify either fixed strings or templated strings. Templated strings use a common syntax to refer to fields in an event that is used as the input data when rendering the template. An example of a fixed string is `my-file.log`. An example of a template string is `my-file-{{key}}.log`, where `{{key}}` is the key's value when the template is rendered into a string.
timestamp_key
optional

string

Overrides the name of the log field used to retrieve the timestamp to send to Splunk HEC. When set to `""`, a timestamp is not set in the events sent to Splunk HEC.

By default, the [global `log_schema.timestamp_key` option](https://vector.dev/docs/reference/configuration/global-options/#log_schema.timestamp_key) is used.
 tls
optional

 <oneOf>

TLS configuration.
 Option 1
optional

object

TLS configuration.
alpn_protocols
optional

array,​null

Sets the list of supported ALPN protocols.

Declare the supported ALPN protocols, which are used during negotiation with peer. They are prioritized in the order that they are defined.
 ca_file
optional

 <oneOf>

Absolute path to an additional CA certificate file.

The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
 crt_file
optional

 <oneOf>

Absolute path to a certificate file used to identify this server.

The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format.

If this is set, and is not a PKCS#12 archive, `key_file` must also be set.
Option 1
optional

string

A file path.
 key_file
optional

 <oneOf>

Absolute path to a private key file used to identify this server.

The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
key_pass
optional

string,​null

Passphrase used to unlock the encrypted key file.

This has no effect unless `key_file` is set.
verify_certificate
optional

boolean,​null

Enables certificate verification.

If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.

Relevant for both incoming and outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the validity of certificates.
verify_hostname
optional

boolean,​null

Enables hostname verification.

If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.

Only relevant for outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the remote hostname.
{% /tab %}

{% tab title="heclogssink-request-example" %}

```yaml
acknowledgements:
  indexer_acknowledgements_enabled: true
  max_pending_acks: 1000000
  query_interval: 10
  retry_limit: 30
auto_extract_timestamp: boolean
batch:
  max_bytes: null
  max_events: null
  timeout_secs: null
compression: none
default_token: string
encoding: ''
endpoint: string
endpoint_target: event
host_key: host
index: ''
indexed_fields: []
request:
  adaptive_concurrency:
    decrease_ratio: 0.9
    ewma_alpha: 0.4
    initial_concurrency: 1
    rtt_deviation_scale: 2.5
  rate_limit_duration_secs: 1
  rate_limit_num: 9223372036854776000
  retry_attempts: 9223372036854776000
  retry_initial_backoff_secs: 1
  retry_max_duration_secs: 3600
  timeout_secs: 60
source: ''
sourcetype: ''
timestamp_key: timestamp
tls: ''
type: splunk_hec_logs
```

{% /tab %}
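
For orientation, a minimal `splunk_hec_logs` configuration might look like the following sketch; the sink ID, upstream component name, endpoint, and token are placeholders, and the remaining options fall back to the defaults listed above.

```yaml
sinks:
  my_splunk_logs:                  # placeholder sink ID
    type: splunk_hec_logs
    inputs:
      - my_source                  # placeholder upstream component
    endpoint: https://splunk.example.com:8088  # scheme required; no path
    default_token: "${SPLUNK_HEC_TOKEN}"       # HEC token, e.g. from an environment variable
    encoding:
      codec: json                  # encode each event as a JSON message
    compression: gzip              # optional; default is no compression
```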

### Splunk HEC Metrics{% #hecmetricssink %}

Configuration of the `splunk_hec_metrics` sink.
OptionsSchema
{% tab title="hecmetricssink-request-model" %}
FieldrequiredTypeDescription acknowledgements
optional



Splunk HEC acknowledgement configuration.
indexer_acknowledgements_enabled
optional

boolean

Controls if the sink integrates with [Splunk HEC indexer acknowledgements](https://docs.splunk.com/Documentation/Splunk/8.2.3/Data/AboutHECIDXAck) for end-to-end acknowledgements.
max_pending_acks
optional

integer

The maximum number of pending acknowledgements from events sent to the Splunk HEC collector.

Once reached, the sink begins applying backpressure.
query_interval
optional

integer

The amount of time to wait between queries to the Splunk HEC indexer acknowledgement endpoint.
retry_limit
optional

integer

The maximum number of times an acknowledgement ID is queried for its status.
enabled
optional

boolean,​null

Whether or not end-to-end acknowledgements are enabled.

When enabled for a sink, any source connected to that sink, where the source supports end-to-end acknowledgements as well, waits for events to be acknowledged by the sink before acknowledging them at the source.

Enabling or disabling acknowledgements at the sink level takes precedence over any global [`acknowledgements`](https://vector.dev/docs/reference/configuration/global-options/#acknowledgements) configuration.
default: null
 batch
optional

object

Event batching behavior.
max_bytes
optional

integer,​null

The maximum size of a batch that is processed by a sink.

This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
max_events
optional

integer,​null

The maximum size of a batch before it is flushed.
default: null
timeout_secs
optional

number,​null

The maximum age of a batch before it is flushed.
default: null
 compression
optional

 <oneOf>

Compression configuration.

All compression algorithms use the default compression level unless otherwise specified.
 Option 1
optional

 <oneOf>

Compression algorithm.
none
optional

none

No compression.
gzip
optional

gzip

[Gzip](https://www.gzip.org/) compression.
zlib
optional

zlib

[Zlib](https://zlib.net/) compression.
zstd
optional

zstd

[Zstandard](https://facebook.github.io/zstd/) compression.
 Option 2
optional

object

Compression algorithm and compression level.
 algorithm
required

 <oneOf>

Compression algorithm.
none
optional

none

No compression.
gzip
optional

gzip

[Gzip](https://www.gzip.org/) compression.
zlib
optional

zlib

[Zlib](https://zlib.net/) compression.
zstd
optional

zstd

[Zstandard](https://facebook.github.io/zstd/) compression.
level
optional

enum

Compression level. Allowed enum values: `none,fast,best,default,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21`
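For instance, the object form of `compression` (Option 2 above) pairs an algorithm with an explicit level. A minimal sketch, assuming Zstandard at level 3:

```yaml
compression:
  algorithm: zstd
  level: 3
```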
default_namespace
optional

string,​null

Sets the default namespace for any metrics sent.

This namespace is only used if a metric has no existing namespace. When a namespace is present, it is used as a prefix to the metric name, and separated with a period (`.`).
default_token
required

string

Default Splunk HEC token.

If an event has a token set in its metadata, it prevails over the one set here.
endpoint
required

uri

The base URL of the Splunk instance.

The scheme (`http` or `https`) must be specified. No path should be included, since the paths defined by the [Splunk HEC](https://docs.splunk.com/Documentation/Splunk/8.0.0/Data/HECRESTendpoints) API are used.
host_key
optional

string

Overrides the name of the log field used to retrieve the hostname to send to Splunk HEC.

By default, the [global `log_schema.host_key` option](https://vector.dev/docs/reference/configuration/global-options/#log_schema.host_key) is used.
 index
optional

 <oneOf>

The name of the index to send events to.

If not specified, the default index defined within Splunk is used.
Option 1
optional

string

A templated field.

In many cases, part of a component's functionality can be customized on a per-event basis. For example, a sink that writes events to a file can choose which file each event goes to by using an event field as part of the filename.

By using `Template`, users can specify either fixed strings or templated strings. Templated strings use a common syntax to refer to fields of the event that is used as input when rendering the template. An example of a fixed string is `my-file.log`. An example of a template string is `my-file-{{key}}.log`, where `{{key}}` is replaced by the key's value when the template is rendered.
 request
optional

object

Middleware settings for outbound requests.

Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
 adaptive_concurrency
optional

object

Configuration of adaptive concurrency parameters.

These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}
decrease_ratio
optional

number

The fraction of the current value to set the new concurrency limit when decreasing the limit.

Valid values are greater than `0` and less than `1`. Smaller values cause the algorithm to scale back rapidly when latency increases.

Note that the new limit is rounded down after applying this ratio.
default: 0.9
ewma_alpha
optional

number

The weighting of new measurements compared to older measurements.

Valid values are greater than `0` and less than `1`.

ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4
initial_concurrency
optional

integer

The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).

It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the `adaptive_concurrency_limit` metric.
default: 1
rtt_deviation_scale
optional

number

Scale of RTT deviations which are not considered anomalous.

Valid values are greater than or equal to `0`, and we expect reasonable values to range from `1.0` to `3.0`.

When calculating the past RTT average, we also compute a secondary "deviation" value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
 concurrency
optional

 <oneOf>

Configuration for outbound request concurrency.
none
optional

none

A fixed concurrency of 1.

Only one request can be outstanding at any given time.
adaptive
optional

adaptive

Concurrency will be managed by Vector's [Adaptive Request Concurrency](https://vector.dev/docs/about/under-the-hood/networking/arc/) feature.
Fixed
optional

integer

A fixed amount of concurrency will be allowed.
rate_limit_duration_secs
optional

integer,​null

The time window used for the `rate_limit_num` option.
default: 1
rate_limit_num
optional

integer,​null

The maximum number of requests allowed within the `rate_limit_duration_secs` time window.
default: 9223372036854776000
retry_attempts
optional

integer,​null

The maximum number of retries to make for failed requests.

The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000
retry_initial_backoff_secs
optional

integer,​null

The amount of time to wait before attempting the first retry for a failed request.

After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1
retry_max_duration_secs
optional

integer,​null

The maximum amount of time to wait between retries.
default: 3600
timeout_secs
optional

integer,​null

The time a request can take before being aborted.

Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60
 source
optional

 <oneOf>

The source of events sent to this sink.

This is typically the filename the logs originated from.

If unset, the Splunk collector sets it.
Option 1
optional

string

A templated field.

In many cases, part of a component's functionality can be customized on a per-event basis. For example, a sink that writes events to a file can choose which file each event goes to by using an event field as part of the filename.

By using `Template`, users can specify either fixed strings or templated strings. Templated strings use a common syntax to refer to fields of the event that is used as input when rendering the template. An example of a fixed string is `my-file.log`. An example of a template string is `my-file-{{key}}.log`, where `{{key}}` is replaced by the key's value when the template is rendered.
 sourcetype
optional

 <oneOf>

The sourcetype of events sent to this sink.

If unset, Splunk defaults to `httpevent`.
Option 1
optional

string

A templated field.

In many cases, part of a component's functionality can be customized on a per-event basis. For example, a sink that writes events to a file can choose which file each event goes to by using an event field as part of the filename.

By using `Template`, users can specify either fixed strings or templated strings. Templated strings use a common syntax to refer to fields of the event that is used as input when rendering the template. An example of a fixed string is `my-file.log`. An example of a template string is `my-file-{{key}}.log`, where `{{key}}` is replaced by the key's value when the template is rendered.
 tls
optional

 <oneOf>

TLS configuration.
 Option 1
optional

object

TLS configuration.
alpn_protocols
optional

array,​null

Sets the list of supported ALPN protocols.

Declare the supported ALPN protocols, which are used during negotiation with peer. They are prioritized in the order that they are defined.
 ca_file
optional

 <oneOf>

Absolute path to an additional CA certificate file.

The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
 crt_file
optional

 <oneOf>

Absolute path to a certificate file used to identify this server.

The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format.

If this is set, and is not a PKCS#12 archive, `key_file` must also be set.
Option 1
optional

string

A file path.
 key_file
optional

 <oneOf>

Absolute path to a private key file used to identify this server.

The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
key_pass
optional

string,​null

Passphrase used to unlock the encrypted key file.

This has no effect unless `key_file` is set.
verify_certificate
optional

boolean,​null

Enables certificate verification.

If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.

Relevant for both incoming and outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the validity of certificates.
verify_hostname
optional

boolean,​null

Enables hostname verification.

If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.

Only relevant for outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the remote hostname.
{% /tab %}

{% tab title="hecmetricssink-request-example" %}

```yaml
acknowledgements:
  indexer_acknowledgements_enabled: true
  max_pending_acks: 1000000
  query_interval: 10
  retry_limit: 30
batch:
  max_bytes: null
  max_events: null
  timeout_secs: null
compression: none
default_namespace: string
default_token: string
endpoint: string
host_key: host
index: ''
request:
  adaptive_concurrency:
    decrease_ratio: 0.9
    ewma_alpha: 0.4
    initial_concurrency: 1
    rtt_deviation_scale: 2.5
  rate_limit_duration_secs: 1
  rate_limit_num: 9223372036854776000
  retry_attempts: 9223372036854776000
  retry_initial_backoff_secs: 1
  retry_max_duration_secs: 3600
  timeout_secs: 60
source: ''
sourcetype: ''
tls: ''
type: splunk_hec_metrics
```

{% /tab %}
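To make the placeholder example above concrete, a minimal `splunk_hec_metrics` sink might look like the following sketch. The component names, endpoint, token, and index values are illustrative placeholders, `inputs` is the standard component wiring key assumed from general Vector configuration, and `index` shows the templated-field syntax described above:

```yaml
sinks:
  my_splunk_metrics:
    type: splunk_hec_metrics
    inputs:
      - my_metrics_source
    endpoint: https://splunk.example.com:8088
    default_token: ${SPLUNK_HEC_TOKEN}
    default_namespace: vector
    index: "metrics-{{ host }}"
    compression: gzip
    acknowledgements:
      indexer_acknowledgements_enabled: true
```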

### StatsD{% #statsdsink %}

Configuration for the `statsd` sink.
OptionsSchema
{% tab title="statsdsink-request-model" %}
FieldrequiredTypeDescription acknowledgements
optional

object

Controls how acknowledgements are handled for this sink.

See [End-to-end Acknowledgements](https://vector.dev/docs/about/under-the-hood/architecture/end-to-end-acknowledgements/) for more information on how event acknowledgement is handled.
enabled
optional

boolean,​null

Whether or not end-to-end acknowledgements are enabled.

When enabled for a sink, any source connected to that sink, where the source supports end-to-end acknowledgements as well, waits for events to be acknowledged by the sink before acknowledging them at the source.

Enabling or disabling acknowledgements at the sink level takes precedence over any global [`acknowledgements`](https://vector.dev/docs/reference/configuration/global-options/#acknowledgements) configuration.
default: null
batch
optional

object

Event batching behavior.
max_bytes
optional

integer,​null

The maximum size of a batch that is processed by a sink.

This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
max_events
optional

integer,​null

The maximum size of a batch before it is flushed.
default: null
timeout_secs
optional

number,​null

The maximum age of a batch before it is flushed.
default: null
default_namespace
optional

string,​null

Sets the default namespace for any metrics sent.

This namespace is only used if a metric has no existing namespace. When a namespace is present, it is used as a prefix to the metric name, and separated with a period (`.`).
 mode
optional

 <oneOf>

Socket mode.
{% /tab %}

{% tab title="statsdsink-request-example" %}

```yaml
acknowledgements:
  enabled: null
batch:
  max_bytes: null
  max_events: null
  timeout_secs: null
default_namespace: string
type: statsd
```

{% /tab %}
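As a sketch, a `statsd` sink using only the fields documented above might be configured as follows. The component names and values are illustrative placeholders, `inputs` is the standard component wiring key assumed from general Vector configuration, and socket-level options are left at their defaults:

```yaml
sinks:
  my_statsd:
    type: statsd
    inputs:
      - my_metrics_source
    default_namespace: service
    batch:
      max_events: 1000
      timeout_secs: 1
```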

### Vector{% #vector %}

Configuration for the `vector` sink.
OptionsSchema
{% tab title="vector-request-model" %}
FieldrequiredTypeDescription acknowledgements
optional

object

Controls how acknowledgements are handled for this sink.

See [End-to-end Acknowledgements](https://vector.dev/docs/about/under-the-hood/architecture/end-to-end-acknowledgements/) for more information on how event acknowledgement is handled.
enabled
optional

boolean,​null

Whether or not end-to-end acknowledgements are enabled.

When enabled for a sink, any source connected to that sink, where the source supports end-to-end acknowledgements as well, waits for events to be acknowledged by the sink before acknowledging them at the source.

Enabling or disabling acknowledgements at the sink level takes precedence over any global [`acknowledgements`](https://vector.dev/docs/reference/configuration/global-options/#acknowledgements) configuration.
default: null
address
required

uri

The downstream Vector address to which to connect.

Both IP address and hostname are accepted formats.

The address *must* include a port.
 batch
optional

object

Event batching behavior.
max_bytes
optional

integer,​null

The maximum size of a batch that is processed by a sink.

This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
max_events
optional

integer,​null

The maximum size of a batch before it is flushed.
default: null
timeout_secs
optional

number,​null

The maximum age of a batch before it is flushed.
default: null
compression
optional

boolean

Whether or not to compress requests.

If set to `true`, requests are compressed with [`gzip`](https://www.gzip.org/).
 request
optional

object

Middleware settings for outbound requests.

Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
 adaptive_concurrency
optional

object

Configuration of adaptive concurrency parameters.

These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}
decrease_ratio
optional

number

The fraction of the current value to set the new concurrency limit when decreasing the limit.

Valid values are greater than `0` and less than `1`. Smaller values cause the algorithm to scale back rapidly when latency increases.

Note that the new limit is rounded down after applying this ratio.
default: 0.9
ewma_alpha
optional

number

The weighting of new measurements compared to older measurements.

Valid values are greater than `0` and less than `1`.

ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4
initial_concurrency
optional

integer

The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).

It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the `adaptive_concurrency_limit` metric.
default: 1
rtt_deviation_scale
optional

number

Scale of RTT deviations which are not considered anomalous.

Valid values are greater than or equal to `0`, and we expect reasonable values to range from `1.0` to `3.0`.

When calculating the past RTT average, we also compute a secondary "deviation" value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
 concurrency
optional

 <oneOf>

Configuration for outbound request concurrency.
none
optional

none

A fixed concurrency of 1.

Only one request can be outstanding at any given time.
adaptive
optional

adaptive

Concurrency will be managed by Vector's [Adaptive Request Concurrency](https://vector.dev/docs/about/under-the-hood/networking/arc/) feature.
Fixed
optional

integer

A fixed amount of concurrency will be allowed.
rate_limit_duration_secs
optional

integer,​null

The time window used for the `rate_limit_num` option.
default: 1
rate_limit_num
optional

integer,​null

The maximum number of requests allowed within the `rate_limit_duration_secs` time window.
default: 9223372036854776000
retry_attempts
optional

integer,​null

The maximum number of retries to make for failed requests.

The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000
retry_initial_backoff_secs
optional

integer,​null

The amount of time to wait before attempting the first retry for a failed request.

After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1
retry_max_duration_secs
optional

integer,​null

The maximum amount of time to wait between retries.
default: 3600
timeout_secs
optional

integer,​null

The time a request can take before being aborted.

Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60
 tls
optional

 <oneOf>

Configures the TLS options for incoming/outgoing connections.
 Option 1
optional



Configures the TLS options for incoming/outgoing connections.
enabled
optional

boolean,​null

Whether or not to require TLS for incoming or outgoing connections.

When enabled and used for incoming connections, an identity certificate is also required. See `tls.crt_file` for more information.
alpn_protocols
optional

array,​null

Sets the list of supported ALPN protocols.

Declare the supported ALPN protocols, which are used during negotiation with peer. They are prioritized in the order that they are defined.
 ca_file
optional

 <oneOf>

Absolute path to an additional CA certificate file.

The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
 crt_file
optional

 <oneOf>

Absolute path to a certificate file used to identify this server.

The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format.

If this is set, and is not a PKCS#12 archive, `key_file` must also be set.
Option 1
optional

string

A file path.
 key_file
optional

 <oneOf>

Absolute path to a private key file used to identify this server.

The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Option 1
optional

string

A file path.
key_pass
optional

string,​null

Passphrase used to unlock the encrypted key file.

This has no effect unless `key_file` is set.
verify_certificate
optional

boolean,​null

Enables certificate verification.

If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.

Relevant for both incoming and outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the validity of certificates.
verify_hostname
optional

boolean,​null

Enables hostname verification.

If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.

Only relevant for outgoing connections.

Do NOT set this to `false` unless you understand the risks of not verifying the remote hostname.
 version
optional

 <oneOf>

Version of the configuration.
 Option 1
optional

 <oneOf>

Marker type for version two of the configuration for the `vector` sink.
2
optional

2

Marker value for version two.
{% /tab %}

{% tab title="vector-request-example" %}

```yaml
acknowledgements:
  enabled: null
address: string
batch:
  max_bytes: null
  max_events: null
  timeout_secs: null
compression: boolean
request:
  adaptive_concurrency:
    decrease_ratio: 0.9
    ewma_alpha: 0.4
    initial_concurrency: 1
    rtt_deviation_scale: 2.5
  rate_limit_duration_secs: 1
  rate_limit_num: 9223372036854776000
  retry_attempts: 9223372036854776000
  retry_initial_backoff_secs: 1
  retry_max_duration_secs: 3600
  timeout_secs: 60
tls: ''
version: ''
type: vector
```

{% /tab %}
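Putting the fields above together, a sketch of a `vector` sink forwarding events to a downstream aggregator. The component names, address, and certificate path are illustrative placeholders, and `inputs` is the standard component wiring key assumed from general Vector configuration:

```yaml
sinks:
  to_aggregator:
    type: vector
    inputs:
      - my_source
    address: vector-aggregator.example.com:6000
    compression: true
    version: "2"
    tls:
      enabled: true
      ca_file: /etc/vector/ca.pem
```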

### WebHDFS{% #webhdfs %}

Configuration for the `webhdfs` sink.
OptionsSchema
{% tab title="webhdfs-request-model" %}
FieldrequiredTypeDescription acknowledgements
optional

object

Controls how acknowledgements are handled for this sink.

See [End-to-end Acknowledgements](https://vector.dev/docs/about/under-the-hood/architecture/end-to-end-acknowledgements/) for more information on how event acknowledgement is handled.
enabled
optional

boolean,​null

Whether or not end-to-end acknowledgements are enabled.

When enabled for a sink, any source connected to that sink, where the source supports end-to-end acknowledgements as well, waits for events to be acknowledged by the sink before acknowledging them at the source.

Enabling or disabling acknowledgements at the sink level takes precedence over any global [`acknowledgements`](https://vector.dev/docs/reference/configuration/global-options/#acknowledgements) configuration.
default: null
batch
optional

object

Event batching behavior.
max_bytes
optional

integer,​null

The maximum size of a batch that is processed by a sink.

This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
max_events
optional

integer,​null

The maximum size of a batch before it is flushed.
default: null
timeout_secs
optional

number,​null

The maximum age of a batch before it is flushed.
default: null
 compression
optional

 <oneOf>

Compression configuration.

All compression algorithms use the default compression level unless otherwise specified.
 Option 1
optional

 <oneOf>

Compression algorithm.
none
optional

none

No compression.
gzip
optional

gzip

[Gzip](https://www.gzip.org/) compression.
zlib
optional

zlib

[Zlib](https://zlib.net/) compression.
zstd
optional

zstd

[Zstandard](https://facebook.github.io/zstd/) compression.
 Option 2
optional

object

Compression algorithm and compression level.
 algorithm
required

 <oneOf>

Compression algorithm.
none
optional

none

No compression.
gzip
optional

gzip

[Gzip](https://www.gzip.org/) compression.
zlib
optional

zlib

[Zlib](https://zlib.net/) compression.
zstd
optional

zstd

[Zstandard](https://facebook.github.io/zstd/) compression.
level
optional

enum

Compression level. Allowed enum values: `none,fast,best,default,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21`
endpoint
optional

string

An HDFS cluster consists of a single NameNode, a master server that manages the file system namespace and regulates access to files by clients.

The endpoint is the WebHDFS RESTful HTTP API endpoint.

For more information, see the [HDFS Architecture](https://hadoop.apache.org/docs/r3.3.4/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html#NameNode_and_DataNodes) documentation.
prefix
optional

string

A prefix to apply to all keys.

Prefixes are useful for partitioning objects, such as by creating a blob key that stores blobs under a particular directory. If using a prefix for this purpose, it must end in `/` to act as a directory path. A trailing `/` is **not** automatically added.

The final file path is in the format of `{root}/{prefix}{suffix}`.
root
optional

string

The root path for WebHDFS.

Must be a valid directory.

The final file path is in the format of `{root}/{prefix}{suffix}`.
 encoding
required



Configures how events are encoded into raw bytes.
 Avro
optional

object

Encodes an event as an [Apache Avro](https://avro.apache.org/) message.
 avro
required

object

Apache Avro-specific encoder options.
schema
required

string

The Avro schema.
codec
required

avro

Encodes an event as an [Apache Avro](https://avro.apache.org/) message.
 Csv
optional



Encodes an event as a CSV message.

This codec must be configured with fields to encode.
 csv
required

object

The CSV Serializer Options.
capacity
optional

integer

Set the capacity (in bytes) of the internal buffer used in the CSV writer. This defaults to a reasonable setting.
delimiter
optional

integer

The field delimiter to use when writing CSV.
double_quote
optional

boolean

Enable double quote escapes.

This is enabled by default, but it may be disabled. When disabled, quotes in field data are escaped instead of doubled.
escape
optional

integer

The escape character to use when writing CSV.

In some variants of CSV, quotes are escaped using a special escape character like \ (instead of escaping quotes by doubling them).

To use this, `double_quote` needs to be disabled as well; otherwise this option is ignored.
fields
required

[string]

Configures the fields that will be encoded, as well as the order in which they appear in the output.

If a field is not present in the event, the output will be an empty string.

Values of type `Array`, `Object`, and `Regex` are not supported and the output will be an empty string.
quote
optional

integer

The quote character to use when writing CSV.
 quote_style
optional

 <oneOf>

The quoting style to use when writing CSV data.
always
optional

always

This puts quotes around every field. Always.
necessary
optional

necessary

This puts quotes around fields only when necessary. They are necessary when fields contain a quote, delimiter or record terminator. Quotes are also necessary when writing an empty record (which is indistinguishable from a record with one empty field).
non_numeric
optional

non_numeric

This puts quotes around all fields that are non-numeric. Namely, when writing a field that does not parse as a valid float or integer, quotes are used even if they aren't strictly necessary.
never
optional

never

This never writes quotes, even if it would produce invalid CSV data.
codec
required

csv

Encodes an event as a CSV message.

This codec must be configured with fields to encode.
 Gelf
optional

object

Encodes an event as a [GELF](https://docs.graylog.org/docs/gelf) message.
codec
required

gelf

Encodes an event as a [GELF](https://docs.graylog.org/docs/gelf) message.
 Json
optional



Encodes an event as [JSON](https://www.json.org/).
 metric_tag_values
optional

 <oneOf>

Controls how metric tag values are encoded.

When set to `single`, only the last non-bare value of tags are displayed with the metric. When set to `full`, all metric tags are exposed as separate assignments.
single
optional

single

Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
full
optional

full

All tags are exposed as arrays of either string or null values.
codec
required

json

Encodes an event as [JSON](https://www.json.org/).
 Logfmt
optional

object

Encodes an event as a [logfmt](https://brandur.org/logfmt) message.
codec
required

logfmt

Encodes an event as a [logfmt](https://brandur.org/logfmt) message.
 Native
optional

object

Encodes an event in the [native Protocol Buffers format](https://github.com/vectordotdev/vector/blob/master/lib/vector-core/proto/event.proto).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
codec
required

native

Encodes an event in the [native Protocol Buffers format](https://github.com/vectordotdev/vector/blob/master/lib/vector-core/proto/event.proto).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
 NativeJson
optional

object

Encodes an event in the [native JSON format](https://github.com/vectordotdev/vector/blob/master/lib/codecs/tests/data/native_encoding/schema.cue).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
codec
required

native_json

Encodes an event in the [native JSON format](https://github.com/vectordotdev/vector/blob/master/lib/codecs/tests/data/native_encoding/schema.cue).

This codec is **[experimental](https://vector.dev/highlights/2022-03-31-native-event-codecs)**.
 RawMessage
optional

object

No encoding.

This encoding uses the `message` field of a log event.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
codec
required

raw_message

No encoding.

This encoding uses the `message` field of a log event.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
 Text
optional



Plain text encoding.

This encoding uses the `message` field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
 metric_tag_values
optional

 <oneOf>

Controls how metric tag values are encoded.

When set to `single`, only the last non-bare value of tags are displayed with the metric. When set to `full`, all metric tags are exposed as separate assignments.
single
optional

single

Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
full
optional

full

All tags are exposed as arrays of either string or null values.
codec
required

text

Plain text encoding.

This encoding uses the `message` field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.

Be careful if you are modifying your log events (for example, by using a `remap` transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
except_fields
optional

array,​null

List of fields that are excluded from the encoded event.
only_fields
optional

array,​null

List of fields that are included in the encoded event.
 timestamp_format
optional

 <oneOf>

Format used for timestamp fields.
 Option 1
optional

 <oneOf>

The format in which a timestamp should be represented.
unix
optional

unix

Represent the timestamp as a Unix timestamp.
rfc3339
optional

rfc3339

Represent the timestamp as an RFC 3339 timestamp.
 framing
optional

 <oneOf>

Framing configuration.
 Option 1
optional

 <oneOf>

Framing configuration.
 Bytes
optional

object

Event data is not delimited at all.
method
required

bytes

Event data is not delimited at all.
 CharacterDelimited
optional



Event data is delimited by a single ASCII (7-bit) character.
 character_delimited
required

object

Options for the character delimited encoder.
delimiter
required

integer

The ASCII (7-bit) character that delimits byte sequences.
method
required

character_delimited

Event data is delimited by a single ASCII (7-bit) character.
 LengthDelimited
optional

object

Event data is prefixed with its length in bytes.

The prefix is a 32-bit unsigned integer, little endian.
method
required

length_delimited

Event data is prefixed with its length in bytes.

The prefix is a 32-bit unsigned integer, little endian.
 NewlineDelimited
optional

object

Event data is delimited by a newline (LF) character.
method
required

newline_delimited

Event data is delimited by a newline (LF) character.
{% /tab %}

{% tab title="webhdfs-request-example" %}

```yaml
acknowledgements:
  enabled: null
batch:
  max_bytes: null
  max_events: null
  timeout_secs: null
compression: gzip
endpoint: string
prefix: string
root: string
type: webhdfs
```

{% /tab %}
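As an illustration of the path layout described above (`{root}/{prefix}{suffix}`), a sketch of a `webhdfs` sink writing gzipped, newline-delimited JSON. The component names, endpoint, and paths are illustrative placeholders, and `inputs` is the standard component wiring key assumed from general Vector configuration:

```yaml
sinks:
  my_webhdfs:
    type: webhdfs
    inputs:
      - my_logs
    endpoint: http://namenode.example.com:9870
    root: /logs/vector/
    prefix: "app/"
    compression: gzip
    encoding:
      codec: json
    framing:
      method: newline_delimited
```

Note that `root` and `prefix` both end in `/`, since a trailing `/` is not added automatically.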
