Observability Pipelines is not available on the US1-FED Datadog site.
A sink is a destination for events. Each sink’s design and transmission method is determined by the downstream service with which it interacts. For example, the socket sink streams individual events, while the aws_s3 sink buffers and flushes data.
Supports AMQP version 0.9.1.
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any connected source that also supports end-to-end acknowledgements waits for events to be acknowledged by the sink before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global acknowledgements configuration.
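For example, a minimal sketch of enabling end-to-end acknowledgements on a single sink; the component IDs are illustrative and connection details are omitted:
sinks:
  my_sink_id:
    type: amqp
    inputs:
      - my_source_id        # the source must also support end-to-end acknowledgements
    exchange: logs
    acknowledgements:
      enabled: true         # overrides any global acknowledgements setting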
default: null
Configures how events are encoded into raw bytes.
Apache Avro-specific encoder options.
Encodes an event as a CSV message.
This codec must be configured with fields to encode.
The CSV Serializer Options.
Set the capacity (in bytes) of the internal buffer used in the CSV writer.
This defaults to a reasonable setting.
The field delimiter to use when writing CSV.
Enable double quote escapes.
This is enabled by default, but it may be disabled. When disabled, quotes in
field data are escaped instead of doubled.
The escape character to use when writing CSV.
In some variants of CSV, quotes are escaped using a special escape character
like \ (instead of escaping quotes by doubling them).
To use this, double_quotes must also be disabled; otherwise, this option is ignored.
Configures the fields that will be encoded, as well as the order in which they
appear in the output.
If a field is not present in the event, the output will be an empty string.
Values of type Array, Object, and Regex are not supported, and the output will be an empty string.
The quote character to use when writing CSV.
The quoting style to use when writing CSV data.
This puts quotes around every field. Always.
This puts quotes around fields only when necessary.
They are necessary when fields contain a quote, delimiter or record terminator.
Quotes are also necessary when writing an empty record
(which is indistinguishable from a record with one empty field).
This puts quotes around all fields that are non-numeric.
Namely, when writing a field that does not parse as a valid float or integer, quotes are used even if they aren’t strictly necessary.
This never writes quotes, even if it would produce invalid CSV data.
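As an illustration, a minimal sketch of a CSV encoding block; the field names are hypothetical, and it assumes the "only when necessary" quoting style maps to the value necessary:
encoding:
  codec: csv
  csv:
    fields:                  # encoded in this order; missing fields become empty strings
      - timestamp
      - host
      - message
    delimiter: ','
    quote_style: necessary   # quote only fields containing a quote, delimiter, or record terminator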
Encodes an event as a GELF message.
Encodes an event as JSON.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of tags is displayed with the metric. When set to full, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
All tags are exposed as arrays of either string or null values.
Encodes an event as a logfmt message.
No encoding.
This encoding uses the message field of a log event.
Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
Plain text encoding.
This encoding uses the message field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.
Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of tags is displayed with the metric. When set to full, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
All tags are exposed as arrays of either string or null values.
List of fields that are excluded from the encoded event.
List of fields that are included in the encoded event.
Format used for timestamp fields.
The format in which a timestamp should be represented.
Represent the timestamp as a Unix timestamp.
Represent the timestamp as an RFC 3339 timestamp.
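For example, a hedged sketch that drops a field from the encoded event and uses RFC 3339 timestamps, assuming the list option is exposed as except_fields as in Vector; the _internal_id field is hypothetical:
encoding:
  codec: json
  except_fields:
    - _internal_id           # excluded from the encoded event
  timestamp_format: rfc3339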
A templated field.
The exchange to publish messages to.
Configure the AMQP message properties.
AMQP message properties.
AMQP properties configuration.
Content-Encoding for the AMQP messages.
Content-Type for the AMQP messages.
A templated field.
Template used to generate a routing key which corresponds to a queue binding.
A templated field.
In many cases, components can be configured so that part of their functionality can be customized on a per-event basis. For example, if a sink writes events to a file, you may want to choose which file each event goes to by using an event field as part of the filename.
By using Template, users can specify either fixed strings or templated strings. Templated strings use a common syntax to refer to fields in an event that is used as the input data when rendering the template. An example of a fixed string is my-file.log. An example of a template string is my-file-{{key}}.log, where {{key}} is the key's value when the template is rendered into a string.
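For instance, a sketch of a templated routing_key for this sink; the application_id field is hypothetical:
routing_key: 'app-{{ application_id }}'   # rendered per event, for example app-1234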
URI for the AMQP server.
The URI has the format of amqp://<user>:<password>@<host>:<port>/<vhost>?timeout=<seconds>.
The default vhost can be specified by using a value of %2f.
To connect over TLS, a scheme of amqps can be specified instead (for example, amqps://...). Additional TLS settings, such as client certificate verification, can be configured under the tls section.
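For example, a plain connection and a TLS connection to the default vhost; host and credentials are illustrative:
amqp://user:password@rabbitmq.example.com:5672/%2f?timeout=10
amqps://user:password@rabbitmq.example.com:5671/%2f?timeout=10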
Sets the list of supported ALPN protocols.
Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order that they are defined.
Absolute path to an additional CA certificate file.
The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Absolute path to a certificate file used to identify this server.
The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format.
If this is set, and is not a PKCS#12 archive, key_file must also be set.
Absolute path to a private key file used to identify this server.
The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Passphrase used to unlock the encrypted key file.
This has no effect unless key_file is set.
Enables certificate verification.
If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.
Relevant for both incoming and outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the validity of certificates.
Enables hostname verification.
If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.
Only relevant for outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the remote hostname.
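Putting these options together, a hedged sketch of a tls section using mutual TLS; the paths are illustrative:
tls:
  ca_file: /etc/ssl/certs/private_ca.crt
  crt_file: /etc/ssl/certs/client.crt     # PEM, not a PKCS#12 archive, so key_file is also set
  key_file: /etc/ssl/private/client.key
  verify_certificate: true
  verify_hostname: true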
acknowledgements:
enabled: null
encoding: ''
exchange: string
properties: ''
routing_key: ''
type: amqp
Configuration for the appsignal sink.
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any connected source that also supports end-to-end acknowledgements waits for events to be acknowledged by the sink before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global acknowledgements configuration.
default: null
The maximum size of a batch that is processed by a sink.
This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
The maximum size of a batch before it is flushed.
default: null
The maximum age of a batch before it is flushed.
default: null
Compression configuration.
All compression algorithms use the default compression level unless otherwise specified.
Compression algorithm and compression level.
Compression level.
Allowed enum values: none,fast,best,default,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21
Transformations to prepare an event for serialization.
List of fields that are excluded from the encoded event.
List of fields that are included in the encoded event.
Format used for timestamp fields.
The format in which a timestamp should be represented.
Represent the timestamp as a Unix timestamp.
Represent the timestamp as a RFC 3339 timestamp.
The URI for the AppSignal API to send data to.
A valid app-level AppSignal Push API key.
Middleware settings for outbound requests.
Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
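For orientation, a hedged sketch of a request block; the values are illustrative, not recommendations:
request:
  adaptive_concurrency:
    initial_concurrency: 8     # start near the service's typical concurrency to ramp up faster
    decrease_ratio: 0.9        # scale back to 90% of the current limit when latency rises
  rate_limit_duration_secs: 1
  rate_limit_num: 1000         # at most 1,000 requests per one-second window
  retry_initial_backoff_secs: 1
  retry_max_duration_secs: 3600
  timeout_secs: 60             # keep at or above the downstream service's own timeout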
Configuration of adaptive concurrency parameters.
These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or
unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}
The fraction of the current value to set the new concurrency limit when decreasing the limit.
Valid values are greater than 0 and less than 1. Smaller values cause the algorithm to scale back rapidly when latency increases.
Note that the new limit is rounded down after applying this ratio.
default: 0.9
The weighting of new measurements compared to older measurements.
Valid values are greater than 0 and less than 1.
ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4
The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).
It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the adaptive_concurrency_limit metric.
default: 1
Scale of RTT deviations which are not considered anomalous.
Valid values are greater than or equal to 0, and we expect reasonable values to range from 1.0 to 3.0.
When calculating the past RTT average, we also compute a secondary “deviation” value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
Configuration for outbound request concurrency.
A fixed concurrency of 1.
Only one request can be outstanding at any given time.
A fixed amount of concurrency will be allowed.
The time window used for the rate_limit_num option.
default: 1
The maximum number of requests allowed within the rate_limit_duration_secs time window.
default: 9223372036854776000
The maximum number of retries to make for failed requests.
The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000
retry_initial_backoff_secs
The amount of time to wait before attempting the first retry for a failed request.
After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1
The maximum amount of time to wait between retries.
default: 3600
The time a request can take before being aborted.
Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60
Configures the TLS options for incoming/outgoing connections.
Whether or not to require TLS for incoming or outgoing connections.
When enabled and used for incoming connections, an identity certificate is also required. See tls.crt_file for more information.
Sets the list of supported ALPN protocols.
Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order that they are defined.
Absolute path to an additional CA certificate file.
The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Absolute path to a certificate file used to identify this server.
The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format.
If this is set, and is not a PKCS#12 archive, key_file must also be set.
Absolute path to a private key file used to identify this server.
The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Passphrase used to unlock the encrypted key file.
This has no effect unless key_file is set.
Enables certificate verification.
If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.
Relevant for both incoming and outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the validity of certificates.
Enables hostname verification.
If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.
Only relevant for outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the remote hostname.
acknowledgements:
enabled: null
batch:
max_bytes: null
max_events: null
timeout_secs: null
compression: gzip
encoding: {}
endpoint: 'https://appsignal-endpoint.net'
push_api_key: string
request:
adaptive_concurrency:
decrease_ratio: 0.9
ewma_alpha: 0.4
initial_concurrency: 1
rtt_deviation_scale: 2.5
rate_limit_duration_secs: 1
rate_limit_num: 9223372036854776000
retry_attempts: 9223372036854776000
retry_initial_backoff_secs: 1
retry_max_duration_secs: 3600
timeout_secs: 60
tls: ''
type: appsignal
Configuration for the aws_cloudwatch_logs sink.
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any connected source that also supports end-to-end acknowledgements waits for events to be acknowledged by the sink before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global acknowledgements configuration.
default: null
Configuration of the authentication strategy for interacting with AWS services.
The maximum size of a batch that is processed by a sink.
This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
The maximum size of a batch before it is flushed.
default: null
The maximum age of a batch before it is flushed.
default: null
Compression configuration.
All compression algorithms use the default compression level unless otherwise specified.
Compression algorithm and compression level.
Compression level.
Allowed enum values: none,fast,best,default,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21
Dynamically create a log group if it does not already exist.
This ignores create_missing_stream directly after creating the group and creates the first stream.
Dynamically create a log stream if it does not already exist.
Configures how events are encoded into raw bytes.
Apache Avro-specific encoder options.
Encodes an event as a CSV message.
This codec must be configured with fields to encode.
The CSV Serializer Options.
Set the capacity (in bytes) of the internal buffer used in the CSV writer.
This defaults to a reasonable setting.
The field delimiter to use when writing CSV.
Enable double quote escapes.
This is enabled by default, but it may be disabled. When disabled, quotes in
field data are escaped instead of doubled.
The escape character to use when writing CSV.
In some variants of CSV, quotes are escaped using a special escape character
like \ (instead of escaping quotes by doubling them).
To use this, double_quotes must also be disabled; otherwise, this option is ignored.
Configures the fields that will be encoded, as well as the order in which they
appear in the output.
If a field is not present in the event, the output will be an empty string.
Values of type Array, Object, and Regex are not supported, and the output will be an empty string.
The quote character to use when writing CSV.
The quoting style to use when writing CSV data.
This puts quotes around every field. Always.
This puts quotes around fields only when necessary.
They are necessary when fields contain a quote, delimiter or record terminator.
Quotes are also necessary when writing an empty record
(which is indistinguishable from a record with one empty field).
This puts quotes around all fields that are non-numeric.
Namely, when writing a field that does not parse as a valid float or integer, quotes are used even if they aren’t strictly necessary.
This never writes quotes, even if it would produce invalid CSV data.
Encodes an event as a GELF message.
Encodes an event as JSON.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of tags is displayed with the metric. When set to full, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
All tags are exposed as arrays of either string or null values.
Encodes an event as a logfmt message.
No encoding.
This encoding uses the message field of a log event.
Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
Plain text encoding.
This encoding uses the message field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.
Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of tags is displayed with the metric. When set to full, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
All tags are exposed as arrays of either string or null values.
List of fields that are excluded from the encoded event.
List of fields that are included in the encoded event.
Format used for timestamp fields.
The format in which a timestamp should be represented.
Represent the timestamp as a Unix timestamp.
Represent the timestamp as an RFC 3339 timestamp.
The group name of the target CloudWatch Logs stream.
Outbound HTTP request settings.
Additional HTTP headers to add to every HTTP request.
Configuration of adaptive concurrency parameters.
These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or
unstable performance and sink behavior. Proceed with caution.
The fraction of the current value to set the new concurrency limit when decreasing the limit.
Valid values are greater than 0 and less than 1. Smaller values cause the algorithm to scale back rapidly when latency increases.
Note that the new limit is rounded down after applying this ratio.
default: 0.9
The weighting of new measurements compared to older measurements.
Valid values are greater than 0 and less than 1.
ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4
The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).
It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the adaptive_concurrency_limit metric.
default: 1
Scale of RTT deviations which are not considered anomalous.
Valid values are greater than or equal to 0, and we expect reasonable values to range from 1.0 to 3.0.
When calculating the past RTT average, we also compute a secondary “deviation” value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
Configuration for outbound request concurrency.
A fixed concurrency of 1.
Only one request can be outstanding at any given time.
A fixed amount of concurrency will be allowed.
The time window used for the rate_limit_num option.
The maximum number of requests allowed within the rate_limit_duration_secs time window.
The maximum number of retries to make for failed requests.
The default, for all intents and purposes, represents an infinite number of retries.
retry_initial_backoff_secs
The amount of time to wait before attempting the first retry for a failed request.
After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
The maximum amount of time to wait between retries.
The time a request can take before being aborted.
Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could
create orphaned requests, pile on retries, and result in duplicate data downstream.
The stream name of the target CloudWatch Logs stream.
There can only be one writer to a log stream at a time. If multiple instances are writing to
the same log group, the stream name must include an identifier that is guaranteed to be
unique per instance.
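To keep writers unique, a sketch that derives the stream name from an event field, assuming stream_name accepts the same template syntax as other templated fields; the group name and region are illustrative:
sinks:
  cloudwatch:
    type: aws_cloudwatch_logs
    inputs:
      - my_source_id
    region: us-east-1
    group_name: /my-app/logs
    stream_name: 'instance-{{ host }}'   # one stream per host avoids concurrent writers
    encoding:
      codec: json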
Sets the list of supported ALPN protocols.
Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order that they are defined.
Absolute path to an additional CA certificate file.
The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Absolute path to a certificate file used to identify this server.
The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format.
If this is set, and is not a PKCS#12 archive, key_file must also be set.
Absolute path to a private key file used to identify this server.
The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Passphrase used to unlock the encrypted key file.
This has no effect unless key_file is set.
Enables certificate verification.
If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.
Relevant for both incoming and outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the validity of certificates.
Enables hostname verification.
If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.
Only relevant for outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the remote hostname.
Custom endpoint for use with AWS-compatible services.
acknowledgements:
enabled: null
assume_role: string
auth:
imds:
connect_timeout_seconds: 1
max_attempts: 4
read_timeout_seconds: 1
load_timeout_secs: null
region: null
batch:
max_bytes: null
max_events: null
timeout_secs: null
compression: none
create_missing_group: true
create_missing_stream: true
encoding: ''
group_name: string
request:
adaptive_concurrency:
decrease_ratio: 0.9
ewma_alpha: 0.4
initial_concurrency: 1
rtt_deviation_scale: 2.5
headers: {}
rate_limit_duration_secs: 1
rate_limit_num: 9223372036854776000
retry_attempts: 9223372036854776000
retry_initial_backoff_secs: 1
retry_max_duration_secs: 3600
timeout_secs: 60
stream_name: string
tls: ''
type: aws_cloudwatch_logs
Configuration for the aws_cloudwatch_metrics sink.
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any connected source that also supports end-to-end acknowledgements waits for events to be acknowledged by the sink before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global acknowledgements configuration.
default: null
Configuration of the authentication strategy for interacting with AWS services.
The maximum size of a batch that is processed by a sink.
This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
The maximum size of a batch before it is flushed.
default: null
The maximum age of a batch before it is flushed.
default: null
Compression configuration.
All compression algorithms use the default compression level unless otherwise specified.
Compression algorithm and compression level.
Compression level.
Allowed enum values: none,fast,best,default,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21
The default namespace to use for metrics that do not have one.
Metrics with the same name can only be differentiated by their namespace, and not all
metrics have their own namespace.
Middleware settings for outbound requests.
Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
Configuration of adaptive concurrency parameters.
These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or
unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}
The fraction of the current value to set the new concurrency limit when decreasing the limit.
Valid values are greater than 0 and less than 1. Smaller values cause the algorithm to scale back rapidly when latency increases.
Note that the new limit is rounded down after applying this ratio.
default: 0.9
The weighting of new measurements compared to older measurements.
Valid values are greater than 0 and less than 1.
ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4
The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).
It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the adaptive_concurrency_limit metric.
default: 1
Scale of RTT deviations which are not considered anomalous.
Valid values are greater than or equal to 0, and we expect reasonable values to range from 1.0 to 3.0.
When calculating the past RTT average, we also compute a secondary “deviation” value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
Configuration for outbound request concurrency.
A fixed concurrency of 1.
Only one request can be outstanding at any given time.
A fixed amount of concurrency will be allowed.
The time window used for the rate_limit_num option.
default: 1
The maximum number of requests allowed within the rate_limit_duration_secs time window.
default: 9223372036854776000
The maximum number of retries to make for failed requests.
The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000
retry_initial_backoff_secs
The amount of time to wait before attempting the first retry for a failed request.
After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1
The maximum amount of time to wait between retries.
default: 3600
The time a request can take before being aborted.
Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60
Sets the list of supported ALPN protocols.
Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order that they are defined.
Absolute path to an additional CA certificate file.
The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Absolute path to a certificate file used to identify this server.
The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format.
If this is set, and is not a PKCS#12 archive, key_file must also be set.
Absolute path to a private key file used to identify this server.
The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Passphrase used to unlock the encrypted key file.
This has no effect unless key_file is set.
Enables certificate verification.
If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.
Relevant for both incoming and outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the validity of certificates.
Enables hostname verification.
If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.
Only relevant for outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the remote hostname.
Custom endpoint for use with AWS-compatible services.
acknowledgements:
enabled: null
assume_role: string
auth:
imds:
connect_timeout_seconds: 1
max_attempts: 4
read_timeout_seconds: 1
load_timeout_secs: null
region: null
batch:
max_bytes: null
max_events: null
timeout_secs: null
compression: none
default_namespace: string
request:
adaptive_concurrency:
decrease_ratio: 0.9
ewma_alpha: 0.4
initial_concurrency: 1
rtt_deviation_scale: 2.5
rate_limit_duration_secs: 1
rate_limit_num: 9223372036854776000
retry_attempts: 9223372036854776000
retry_initial_backoff_secs: 1
retry_max_duration_secs: 3600
timeout_secs: 60
tls: ''
type: aws_cloudwatch_metrics
Configuration for the aws_kinesis_firehose sink.
The maximum size of a batch that is processed by a sink.
This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
The maximum size of a batch before it is flushed.
default: null
The maximum age of a batch before it is flushed.
default: null
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any connected source that also supports end-to-end acknowledgements waits for events to be acknowledged by the sink before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global acknowledgements configuration.
default: null
Configuration of the authentication strategy for interacting with AWS services.
Compression configuration.
All compression algorithms use the default compression level unless otherwise specified.
Compression algorithm and compression level.
Compression level.
Allowed enum values: none,fast,best,default,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21
Configures how events are encoded into raw bytes.
Apache Avro-specific encoder options.
Encodes an event as a CSV message.
This codec must be configured with fields to encode.
The CSV Serializer Options.
Set the capacity (in bytes) of the internal buffer used in the CSV writer.
This defaults to a reasonable setting.
The field delimiter to use when writing CSV.
Enable double quote escapes.
This is enabled by default, but it may be disabled. When disabled, quotes in
field data are escaped instead of doubled.
The escape character to use when writing CSV.
In some variants of CSV, quotes are escaped using a special escape character
like \ (instead of escaping quotes by doubling them).
To use this, double_quotes must also be disabled; otherwise, this option is ignored.
Configures the fields that will be encoded, as well as the order in which they
appear in the output.
If a field is not present in the event, the output will be an empty string.
Values of type Array, Object, and Regex are not supported, and the output will be an empty string.
The quote character to use when writing CSV.
The quoting style to use when writing CSV data.
This puts quotes around every field. Always.
This puts quotes around fields only when necessary.
They are necessary when fields contain a quote, delimiter or record terminator.
Quotes are also necessary when writing an empty record
(which is indistinguishable from a record with one empty field).
This puts quotes around all fields that are non-numeric.
Namely, when writing a field that does not parse as a valid float or integer, quotes are used even if they aren’t strictly necessary.
This never writes quotes, even if it would produce invalid CSV data.
Encodes an event as a GELF message.
Encodes an event as JSON.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of tags is displayed with the metric. When set to full, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
All tags are exposed as arrays of either string or null values.
Encodes an event as a logfmt message.
No encoding.
This encoding uses the message field of a log event.
Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
Plain text encoding.
This encoding uses the message field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.
Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of tags is displayed with the metric. When set to full, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
All tags are exposed as arrays of either string or null values.
List of fields that are excluded from the encoded event.
List of fields that are included in the encoded event.
Format used for timestamp fields.
The format in which a timestamp should be represented.
Represent the timestamp as a Unix timestamp.
Represent the timestamp as an RFC 3339 timestamp.
Middleware settings for outbound requests.
Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
Configuration of adaptive concurrency parameters.
These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or
unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}
The fraction of the current value to set the new concurrency limit when decreasing the limit.
Valid values are greater than 0 and less than 1. Smaller values cause the algorithm to scale back rapidly when latency increases.
Note that the new limit is rounded down after applying this ratio.
default: 0.9
The weighting of new measurements compared to older measurements.
Valid values are greater than 0 and less than 1.
ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4
The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).
It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the adaptive_concurrency_limit metric.
default: 1
Scale of RTT deviations which are not considered anomalous.
Valid values are greater than or equal to 0, and we expect reasonable values to range from 1.0 to 3.0.
When calculating the past RTT average, we also compute a secondary “deviation” value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
Configuration for outbound request concurrency.
A fixed concurrency of 1.
Only one request can be outstanding at any given time.
A fixed amount of concurrency will be allowed.
The time window used for the rate_limit_num option.
default: 1
The maximum number of requests allowed within the rate_limit_duration_secs time window.
default: 9223372036854776000
The maximum number of retries to make for failed requests.
The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000
retry_initial_backoff_secs
The amount of time to wait before attempting the first retry for a failed request.
After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1
The maximum amount of time to wait between retries.
default: 3600
The time a request can take before being aborted.
Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60
Whether or not to retry successful requests containing partial failures.
The stream name of the target Kinesis Firehose delivery stream.
Sets the list of supported ALPN protocols.
Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order that they are defined.
Absolute path to an additional CA certificate file.
The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Absolute path to a certificate file used to identify this server.
The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format.
If this is set, and is not a PKCS#12 archive, key_file must also be set.
Absolute path to a private key file used to identify this server.
The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Passphrase used to unlock the encrypted key file.
This has no effect unless key_file is set.
Enables certificate verification.
If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.
Relevant for both incoming and outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the validity of certificates.
Enables hostname verification.
If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.
Only relevant for outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the remote hostname.
Custom endpoint for use with AWS-compatible services.
batch:
max_bytes: null
max_events: null
timeout_secs: null
type: aws_kinesis_firehose
Configuration for the aws_kinesis_streams sink.
The maximum size of a batch that is processed by a sink.
This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
The maximum size of a batch before it is flushed.
default: null
The maximum age of a batch before it is flushed.
default: null
The log field used as the Kinesis record’s partition key value.
If not specified, a unique partition key is generated for each Kinesis record.
A wrapper around OwnedValuePath that allows it to be used in Vector config. This requires a valid path to be used. If you want to allow optional paths, use [optional_path::OptionalValuePath].
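For example, a hedged sketch of a partition key configuration; request_id is a hypothetical log field:
sinks:
  kinesis:
    type: aws_kinesis_streams
    inputs:
      - my_source_id
    region: us-east-1
    stream_name: my-stream
    partition_key_field: request_id   # omit to generate a unique partition key per record
    encoding:
      codec: json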
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any connected source that also supports end-to-end acknowledgements waits for events to be acknowledged by the sink before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global acknowledgements configuration.
default: null
Configuration of the authentication strategy for interacting with AWS services.
Compression configuration.
All compression algorithms use the default compression level unless otherwise specified.
Compression algorithm and compression level.
Compression level.
Allowed enum values: none,fast,best,default,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21
Configures how events are encoded into raw bytes.
Apache Avro-specific encoder options.
Encodes an event as a CSV message.
This codec must be configured with fields to encode.
The CSV Serializer Options.
Set the capacity (in bytes) of the internal buffer used in the CSV writer.
This defaults to a reasonable setting.
The field delimiter to use when writing CSV.
Enable double quote escapes.
This is enabled by default, but it may be disabled. When disabled, quotes in
field data are escaped instead of doubled.
The escape character to use when writing CSV.
In some variants of CSV, quotes are escaped using a special escape character
like \ (instead of escaping quotes by doubling them).
To use this, double_quotes must also be disabled; otherwise, this option is ignored.
Configures the fields that will be encoded, as well as the order in which they
appear in the output.
If a field is not present in the event, the output will be an empty string.
Values of type Array, Object, and Regex are not supported, and the output will be an empty string.
The quote character to use when writing CSV.
The quoting style to use when writing CSV data.
This puts quotes around every field. Always.
This puts quotes around fields only when necessary.
They are necessary when fields contain a quote, delimiter or record terminator.
Quotes are also necessary when writing an empty record
(which is indistinguishable from a record with one empty field).
This puts quotes around all fields that are non-numeric.
Namely, when writing a field that does not parse as a valid float or integer, quotes are used even if they aren’t strictly necessary.
This never writes quotes, even if it would produce invalid CSV data.
Encodes an event as a GELF message.
Encodes an event as JSON.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of tags is displayed with the metric. When set to full, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
All tags are exposed as arrays of either string or null values.
Encodes an event as a logfmt message.
No encoding.
This encoding uses the message field of a log event.
Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
Plain text encoding.
This encoding uses the message field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.
Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of tags is displayed with the metric. When set to full, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
All tags are exposed as arrays of either string or null values.
List of fields that are excluded from the encoded event.
List of fields that are included in the encoded event.
Format used for timestamp fields.
The format in which a timestamp should be represented.
Represent the timestamp as a Unix timestamp.
Represent the timestamp as an RFC 3339 timestamp.
Middleware settings for outbound requests.
Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
Configuration of adaptive concurrency parameters.
These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or
unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}
The fraction of the current value to set the new concurrency limit when decreasing the limit.
Valid values are greater than 0 and less than 1. Smaller values cause the algorithm to scale back rapidly when latency increases.
Note that the new limit is rounded down after applying this ratio.
default: 0.9
The weighting of new measurements compared to older measurements.
Valid values are greater than 0 and less than 1.
ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4
The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).
It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the adaptive_concurrency_limit metric.
default: 1
Scale of RTT deviations which are not considered anomalous.
Valid values are greater than or equal to 0, and we expect reasonable values to range from 1.0 to 3.0.
When calculating the past RTT average, we also compute a secondary “deviation” value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
Configuration for outbound request concurrency.
A fixed concurrency of 1.
Only one request can be outstanding at any given time.
A fixed amount of concurrency will be allowed.
The time window used for the rate_limit_num option.
default: 1
The maximum number of requests allowed within the rate_limit_duration_secs time window.
default: 9223372036854776000
The maximum number of retries to make for failed requests.
The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000
retry_initial_backoff_secs
The amount of time to wait before attempting the first retry for a failed request.
After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1
The maximum amount of time to wait between retries.
default: 3600
The time a request can take before being aborted.
Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60
Whether or not to retry successful requests containing partial failures.
The stream name of the target Kinesis stream.
Sets the list of supported ALPN protocols.
Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order that they are defined.
Absolute path to an additional CA certificate file.
The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Absolute path to a certificate file used to identify this server.
The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format.
If this is set, and is not a PKCS#12 archive, key_file must also be set.
Absolute path to a private key file used to identify this server.
The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Passphrase used to unlock the encrypted key file.
This has no effect unless key_file is set.
Enables certificate verification.
If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.
Relevant for both incoming and outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the validity of certificates.
Enables hostname verification.
If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.
Only relevant for outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the remote hostname.
Custom endpoint for use with AWS-compatible services.
batch:
  max_bytes: null
  max_events: null
  timeout_secs: null
partition_key_field: ''
type: aws_kinesis_streams
Configuration for the aws_s3
sink.
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global
acknowledgements
configuration.
default: null
Configuration of the authentication strategy for interacting with AWS services.
The maximum size of a batch that is processed by a sink.
This is based on the uncompressed size of the batched events, before they are
serialized/compressed.
default: null
The maximum size of a batch before it is flushed.
default: null
The maximum age of a batch before it is flushed.
default: null
The S3 bucket name.
This must not include a leading s3:// or a trailing /.
Compression configuration.
All compression algorithms use the default compression level unless otherwise specified.
Some cloud storage API clients and browsers handle decompression transparently, so
depending on how they are accessed, files may not always appear to be compressed.
Compression algorithm and compression level.
Compression level.
Allowed enum values: none,fast,best,default,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21
Whether or not to append a UUID v4 token to the end of the object key.
The UUID is appended to the timestamp portion of the object key, such that if the object key generated is date=2022-07-18/1658176486, setting this field to true results in an object key that looks like date=2022-07-18/1658176486-30f6652c-71da-4f9f-800d-a1189c47c547.
This ensures there are no name collisions, and can be useful in high-volume workloads where object keys must be unique.
The filename extension to use in the object key.
This overrides setting the extension based on the configured compression.
The timestamp format for the time component of the object key.
By default, object keys are appended with a timestamp that reflects when the objects are sent to S3, such that the resulting object key is functionally equivalent to joining the key prefix with the formatted timestamp, such as date=2022-07-18/1658176486.
This would represent a key_prefix set to date=%F/ and the timestamp of Mon Jul 18 2022 20:34:44 GMT+0000, with the filename_time_format being set to %s, which renders timestamps in seconds since the Unix epoch.
Supports the common strftime specifiers found in most languages.
When set to an empty string, no timestamp is appended to the key prefix.
A prefix to apply to all object keys.
Prefixes are useful for partitioning objects, such as by creating an object key that stores objects under a particular directory. If using a prefix for this purpose, it must end in / to act as a directory path. A trailing / is not automatically added.
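Putting the key options above together, a sketch of a date-partitioned layout (the bucket name is a placeholder) might be:
bucket: my-example-bucket
key_prefix: date=%F/
filename_time_format: '%s'
filename_append_uuid: true
With these settings, an object written on 2022-07-18 would receive a key such as date=2022-07-18/1658176486-30f6652c-71da-4f9f-800d-a1189c47c547, matching the example above.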
Middleware settings for outbound requests.
Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
Configuration of adaptive concurrency parameters.
These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or
unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}
The fraction of the current value to set the new concurrency limit when decreasing the limit.
Valid values are greater than 0 and less than 1. Smaller values cause the algorithm to scale back rapidly when latency increases.
Note that the new limit is rounded down after applying this ratio.
default: 0.9
The weighting of new measurements compared to older measurements.
Valid values are greater than 0 and less than 1.
ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4
The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).
It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the adaptive_concurrency_limit metric.
default: 1
Scale of RTT deviations which are not considered anomalous.
Valid values are greater than or equal to 0, and we expect reasonable values to range from 1.0 to 3.0.
When calculating the past RTT average, we also compute a secondary “deviation” value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
Configuration for outbound request concurrency.
A fixed concurrency of 1.
Only one request can be outstanding at any given time.
A fixed amount of concurrency will be allowed.
The time window used for the rate_limit_num option.
default: 1
The maximum number of requests allowed within the rate_limit_duration_secs time window.
default: 9223372036854776000
The maximum number of retries to make for failed requests.
The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000
retry_initial_backoff_secs
The amount of time to wait before attempting the first retry for a failed request.
After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1
The maximum amount of time to wait between retries.
default: 3600
The time a request can take before being aborted.
Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60
Sets the list of supported ALPN protocols.
Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order that they are defined.
Absolute path to an additional CA certificate file.
The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Absolute path to a certificate file used to identify this server.
The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format.
If this is set, and is not a PKCS#12 archive, key_file must also be set.
Absolute path to a private key file used to identify this server.
The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Passphrase used to unlock the encrypted key file.
This has no effect unless key_file is set.
Enables certificate verification.
If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.
Relevant for both incoming and outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the validity of certificates.
Enables hostname verification.
If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.
Only relevant for outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the remote hostname.
Canned ACL to apply to the created objects.
For more information, see Canned ACL.
S3 Canned ACLs.
For more information, see Canned ACL.
Bucket/object are private.
The bucket/object owner is granted the FULL_CONTROL permission, and no one else has access.
This is the default.
Bucket/object can be read publicly.
The bucket/object owner is granted the FULL_CONTROL permission, and anyone in the AllUsers grantee group is granted the READ permission.
Bucket/object can be read and written publicly.
The bucket/object owner is granted the FULL_CONTROL permission, and anyone in the AllUsers grantee group is granted the READ and WRITE permissions.
This is generally not recommended.
Bucket/object are private, and readable by EC2.
The bucket/object owner is granted the FULL_CONTROL permission, and the AWS EC2 service is granted the READ permission for the purpose of reading Amazon Machine Image (AMI) bundles from the given bucket.
Bucket/object can be read by authenticated users.
The bucket/object owner is granted the FULL_CONTROL permission, and anyone in the AuthenticatedUsers grantee group is granted the READ permission.
Object is private, except to the bucket owner.
The object owner is granted the FULL_CONTROL permission, and the bucket owner is granted the READ permission.
Only relevant when specified for an object: this canned ACL is otherwise ignored when
specified for a bucket.
bucket-owner-full-control
Object is semi-private.
Both the object owner and bucket owner are granted the FULL_CONTROL permission.
Only relevant when specified for an object: this canned ACL is otherwise ignored when
specified for a bucket.
Bucket can have logs written.
The LogDelivery grantee group is granted WRITE and READ_ACP permissions.
Only relevant when specified for a bucket: this canned ACL is otherwise ignored when
specified for an object.
For more information about logs, see Amazon S3 Server Access Logging.
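As a sketch, and assuming the canned ACL option is exposed as acl, selecting one of the values described above might look like:
acl: private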
Overrides what content encoding has been applied to the object.
Directly comparable to the Content-Encoding HTTP header.
If not specified, the compression scheme used dictates this value.
Overrides the MIME type of the object.
Directly comparable to the Content-Type HTTP header.
If not specified, the compression scheme used dictates this value.
When compression is set to none, the value text/x-log is used.
Grants READ, READ_ACP, and WRITE_ACP permissions on the created objects to the named grantee.
This allows the grantee to read the created objects and their metadata, as well as read and
modify the ACL on the created objects.
Grants READ permissions on the created objects to the named grantee.
This allows the grantee to read the created objects and their metadata.
Grants READ_ACP permissions on the created objects to the named grantee.
This allows the grantee to read the ACL on the created objects.
Grants WRITE_ACP permissions on the created objects to the named grantee.
This allows the grantee to modify the ACL on the created objects.
AWS S3 Server-Side Encryption algorithms.
The Server-side Encryption algorithm used when storing these objects.
AWS S3 Server-Side Encryption algorithms.
More information on each algorithm can be found in the AWS documentation.
Each object is encrypted with AES-256 using a unique key.
This corresponds to the SSE-S3 option.
Each object is encrypted with AES-256 using keys managed by AWS KMS.
Depending on whether or not a KMS key ID is specified, this corresponds either to the SSE-KMS option (keys generated/managed by KMS) or the SSE-C option (keys generated by the customer, managed by KMS).
Specifies the ID of the AWS Key Management Service (AWS KMS) symmetrical customer managed
customer master key (CMK) that is used for the created objects.
Only applies when server_side_encryption is configured to use KMS.
If not specified, Amazon S3 uses the AWS managed CMK in AWS to protect the data.
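A minimal sketch of enabling KMS-managed encryption, assuming the options are exposed as server_side_encryption and ssekms_key_id (the key ID below is a placeholder):
server_side_encryption: 'aws:kms'
ssekms_key_id: 11111111-2222-3333-4444-555555555555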
Infrequently Accessed (single Availability Zone).
Glacier Flexible Retrieval.
The tag-set for the object.
Custom endpoint for use with AWS-compatible services.
Configures how events are encoded into raw bytes.
Apache Avro-specific encoder options.
Encodes an event as a CSV message.
This codec must be configured with fields to encode.
The CSV Serializer Options.
Set the capacity (in bytes) of the internal buffer used in the CSV writer.
This defaults to a reasonable setting.
The field delimiter to use when writing CSV.
Enable double quote escapes.
This is enabled by default, but it may be disabled. When disabled, quotes in
field data are escaped instead of doubled.
The escape character to use when writing CSV.
In some variants of CSV, quotes are escaped using a special escape character
like \ (instead of escaping quotes by doubling them).
To use this, double_quotes needs to be disabled as well; otherwise it is ignored.
Configures the fields that will be encoded, as well as the order in which they
appear in the output.
If a field is not present in the event, the output will be an empty string.
Values of type Array, Object, and Regex are not supported and the output will be an empty string.
The quote character to use when writing CSV.
The quoting style to use when writing CSV data.
This puts quotes around every field. Always.
This puts quotes around fields only when necessary.
They are necessary when fields contain a quote, delimiter or record terminator.
Quotes are also necessary when writing an empty record
(which is indistinguishable from a record with one empty field).
This puts quotes around all fields that are non-numeric.
Namely, when writing a field that does not parse as a valid float or integer,
then quotes will be used even if they aren’t strictly necessary.
This never writes quotes, even if it would produce invalid CSV data.
Encodes an event as a CSV message.
This codec must be configured with fields to encode.
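For illustration, a CSV encoding that emits two fields in a fixed order might be sketched as follows; the field names are hypothetical, and the nesting assumes the CSV options live under encoding.csv:
encoding:
  codec: csv
  csv:
    fields:
      - timestamp
      - message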
Encodes an event as a GELF message.
Encodes an event as a GELF message.
Encodes an event as JSON.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of tags is displayed with the metric. When set to full, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
All tags are exposed as arrays of either string or null values.
Encodes an event as JSON.
Encodes an event as a logfmt message.
Encodes an event as a logfmt message.
No encoding.
This encoding uses the message field of a log event.
Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
No encoding.
This encoding uses the message field of a log event.
Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
Plain text encoding.
This encoding uses the message field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.
Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of tags is displayed with the metric. When set to full, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
All tags are exposed as arrays of either string or null values.
Plain text encoding.
This encoding uses the message field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.
Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
List of fields that are excluded from the encoded event.
List of fields that are included in the encoded event.
Format used for timestamp fields.
The format in which a timestamp should be represented.
Represent the timestamp as a Unix timestamp.
Represent the timestamp as an RFC 3339 timestamp.
Event data is not delimited at all.
Event data is not delimited at all.
Event data is delimited by a single ASCII (7-bit) character.
Options for the character delimited encoder.
The ASCII (7-bit) character that delimits byte sequences.
Event data is delimited by a single ASCII (7-bit) character.
Event data is prefixed with its length in bytes.
The prefix is a 32-bit unsigned integer, little endian.
Event data is prefixed with its length in bytes.
The prefix is a 32-bit unsigned integer, little endian.
Event data is delimited by a newline (LF) character.
Event data is delimited by a newline (LF) character.
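As a sketch of the framing methods above, and assuming the method is selected with framing.method, newline-delimited framing might be configured as:
framing:
  method: newline_delimited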
acknowledgements:
  enabled: null
auth:
  imds:
    connect_timeout_seconds: 1
    max_attempts: 4
    read_timeout_seconds: 1
  load_timeout_secs: null
  region: null
batch:
  max_bytes: null
  max_events: null
  timeout_secs: null
bucket: string
compression: gzip
filename_append_uuid: true
filename_extension: string
filename_time_format: '%s'
key_prefix: date=%F
request:
  adaptive_concurrency:
    decrease_ratio: 0.9
    ewma_alpha: 0.4
    initial_concurrency: 1
    rtt_deviation_scale: 2.5
  rate_limit_duration_secs: 1
  rate_limit_num: 9223372036854776000
  retry_attempts: 9223372036854776000
  retry_initial_backoff_secs: 1
  retry_max_duration_secs: 3600
  timeout_secs: 60
tls: ''
type: aws_s3
Configuration for the aws_sqs
sink.
The URL of the Amazon SQS queue to which messages are sent.
Custom endpoint for use with AWS-compatible services.
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global
acknowledgements
configuration.
default: null
Configuration of the authentication strategy for interacting with AWS services.
Configures how events are encoded into raw bytes.
Apache Avro-specific encoder options.
Encodes an event as a CSV message.
This codec must be configured with fields to encode.
The CSV Serializer Options.
Set the capacity (in bytes) of the internal buffer used in the CSV writer.
This defaults to a reasonable setting.
The field delimiter to use when writing CSV.
Enable double quote escapes.
This is enabled by default, but it may be disabled. When disabled, quotes in
field data are escaped instead of doubled.
The escape character to use when writing CSV.
In some variants of CSV, quotes are escaped using a special escape character
like \ (instead of escaping quotes by doubling them).
To use this, double_quotes needs to be disabled as well; otherwise it is ignored.
Configures the fields that will be encoded, as well as the order in which they
appear in the output.
If a field is not present in the event, the output will be an empty string.
Values of type Array, Object, and Regex are not supported and the output will be an empty string.
The quote character to use when writing CSV.
The quoting style to use when writing CSV data.
This puts quotes around every field. Always.
This puts quotes around fields only when necessary.
They are necessary when fields contain a quote, delimiter or record terminator.
Quotes are also necessary when writing an empty record
(which is indistinguishable from a record with one empty field).
This puts quotes around all fields that are non-numeric.
Namely, when writing a field that does not parse as a valid float or integer,
then quotes will be used even if they aren’t strictly necessary.
This never writes quotes, even if it would produce invalid CSV data.
Encodes an event as a CSV message.
This codec must be configured with fields to encode.
Encodes an event as a GELF message.
Encodes an event as a GELF message.
Encodes an event as JSON.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of tags is displayed with the metric. When set to full, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
All tags are exposed as arrays of either string or null values.
Encodes an event as JSON.
Encodes an event as a logfmt message.
Encodes an event as a logfmt message.
No encoding.
This encoding uses the message field of a log event.
Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
No encoding.
This encoding uses the message field of a log event.
Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
Plain text encoding.
This encoding uses the message field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.
Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of tags is displayed with the metric. When set to full, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
All tags are exposed as arrays of either string or null values.
Plain text encoding.
This encoding uses the message field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.
Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
List of fields that are excluded from the encoded event.
List of fields that are included in the encoded event.
Format used for timestamp fields.
The format in which a timestamp should be represented.
Represent the timestamp as a Unix timestamp.
Represent the timestamp as an RFC 3339 timestamp.
The message deduplication ID value to allow AWS to identify duplicate messages.
This value is a template which should result in a unique string for each event. See the AWS
documentation for more about how AWS does message deduplication.
The tag that specifies that a message belongs to a specific message group.
Can be applied only to FIFO queues.
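For a FIFO queue, a sketch of the two options above might look like the following; the queue URL and the transaction_id field are placeholders:
queue_url: https://sqs.us-east-1.amazonaws.com/123456789012/example-queue.fifo
message_deduplication_id: '{{ transaction_id }}'
message_group_id: example-group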
Middleware settings for outbound requests.
Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
Configuration of adaptive concurrency parameters.
These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or
unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}
The fraction of the current value to set the new concurrency limit when decreasing the limit.
Valid values are greater than 0 and less than 1. Smaller values cause the algorithm to scale back rapidly when latency increases.
Note that the new limit is rounded down after applying this ratio.
default: 0.9
The weighting of new measurements compared to older measurements.
Valid values are greater than 0 and less than 1.
ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4
The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).
It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the adaptive_concurrency_limit metric.
default: 1
Scale of RTT deviations which are not considered anomalous.
Valid values are greater than or equal to 0, and we expect reasonable values to range from 1.0 to 3.0.
When calculating the past RTT average, we also compute a secondary “deviation” value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
Configuration for outbound request concurrency.
A fixed concurrency of 1.
Only one request can be outstanding at any given time.
A fixed amount of concurrency will be allowed.
The time window used for the rate_limit_num option.
default: 1
The maximum number of requests allowed within the rate_limit_duration_secs time window.
default: 9223372036854776000
The maximum number of retries to make for failed requests.
The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000
retry_initial_backoff_secs
The amount of time to wait before attempting the first retry for a failed request.
After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1
The maximum amount of time to wait between retries.
default: 3600
The time a request can take before being aborted.
Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60
Sets the list of supported ALPN protocols.
Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order that they are defined.
Absolute path to an additional CA certificate file.
The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Absolute path to a certificate file used to identify this server.
The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format.
If this is set, and is not a PKCS#12 archive, key_file must also be set.
Absolute path to a private key file used to identify this server.
The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Passphrase used to unlock the encrypted key file.
This has no effect unless key_file is set.
Enables certificate verification.
If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.
Relevant for both incoming and outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the validity of certificates.
Enables hostname verification.
If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.
Only relevant for outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the remote hostname.
queue_url: string
type: aws_sqs
Configuration for the axiom
sink.
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global
acknowledgements
configuration.
default: null
Compression configuration.
All compression algorithms use the default compression level unless otherwise specified.
Compression algorithm and compression level.
Compression level.
Allowed enum values: none,fast,best,default,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21
The Axiom dataset to write to.
The Axiom organization ID.
Only required when using personal tokens.
Outbound HTTP request settings.
Additional HTTP headers to add to every HTTP request.
Configuration of adaptive concurrency parameters.
These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or
unstable performance and sink behavior. Proceed with caution.
The fraction of the current value to set the new concurrency limit when decreasing the limit.
Valid values are greater than 0 and less than 1. Smaller values cause the algorithm to scale back rapidly when latency increases.
Note that the new limit is rounded down after applying this ratio.
default: 0.9
The weighting of new measurements compared to older measurements.
Valid values are greater than 0 and less than 1.
ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4
The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).
It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the adaptive_concurrency_limit metric.
default: 1
Scale of RTT deviations which are not considered anomalous.
Valid values are greater than or equal to 0, and we expect reasonable values to range from 1.0 to 3.0.
When calculating the past RTT average, we also compute a secondary “deviation” value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
Configuration for outbound request concurrency.
A fixed concurrency of 1.
Only one request can be outstanding at any given time.
A fixed amount of concurrency will be allowed.
The time window used for the rate_limit_num option.
The maximum number of requests allowed within the rate_limit_duration_secs time window.
The maximum number of retries to make for failed requests.
The default, for all intents and purposes, represents an infinite number of retries.
retry_initial_backoff_secs
The amount of time to wait before attempting the first retry for a failed request.
After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
The maximum amount of time to wait between retries.
The time a request can take before being aborted.
Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
Sets the list of supported ALPN protocols.
Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order that they are defined.
Absolute path to an additional CA certificate file.
The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Absolute path to a certificate file used to identify this server.
The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format.
If this is set, and is not a PKCS#12 archive, key_file must also be set.
Absolute path to a private key file used to identify this server.
The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Passphrase used to unlock the encrypted key file.
This has no effect unless key_file is set.
Enables certificate verification.
If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.
Relevant for both incoming and outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the validity of certificates.
Enables hostname verification.
If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.
Only relevant for outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the remote hostname.
URI of the Axiom endpoint to send data to.
Only required if not using Axiom Cloud.
acknowledgements:
  enabled: null
compression: none
dataset: string
org_id: string
request:
  adaptive_concurrency:
    decrease_ratio: 0.9
    ewma_alpha: 0.4
    initial_concurrency: 1
    rtt_deviation_scale: 2.5
  headers: {}
  rate_limit_duration_secs: 1
  rate_limit_num: 9223372036854776000
  retry_attempts: 9223372036854776000
  retry_initial_backoff_secs: 1
  retry_max_duration_secs: 3600
  timeout_secs: 60
tls: ''
token: string
url: string
type: axiom
Configuration for the azure_blob
sink.
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global
acknowledgements
configuration.
default: null
The maximum size of a batch that is processed by a sink.
This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
The maximum size of a batch before it is flushed.
default: null
The maximum age of a batch before it is flushed.
default: null
Whether or not to append a UUID v4 token to the end of the blob key.
The UUID is appended to the timestamp portion of the object key, such that if the blob key generated is date=2022-07-18/1658176486, setting this field to true results in a blob key that looks like date=2022-07-18/1658176486-30f6652c-71da-4f9f-800d-a1189c47c547.
This ensures there are no name collisions, and can be useful in high-volume workloads where blob keys must be unique.
A prefix to apply to all blob keys.
Prefixes are useful for partitioning objects, such as by creating a blob key that stores blobs under a particular directory. If using a prefix for this purpose, it must end in / to act as a directory path. A trailing / is not automatically added.
The timestamp format for the time component of the blob key.
By default, blob keys are appended with a timestamp that reflects when the blobs are sent to Azure Blob Storage, such that the resulting blob key is functionally equivalent to joining the blob prefix with the formatted timestamp, such as date=2022-07-18/1658176486.
This would represent a blob_prefix set to date=%F/ and the timestamp of Mon Jul 18 2022 20:34:44 GMT+0000, with the filename_time_format being set to %s, which renders timestamps in seconds since the Unix epoch.
Supports the common strftime specifiers found in most languages.
When set to an empty string, no timestamp is appended to the blob prefix.
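Mirroring the object key example for the aws_s3 sink, a hypothetical date-partitioned blob layout might be sketched as:
blob_prefix: date=%F/
blob_time_format: '%s'
blob_append_uuid: true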
Compression configuration.
All compression algorithms use the default compression level unless otherwise specified.
Compression algorithm and compression level.
Compression level.
Allowed enum values: none,fast,best,default,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21
The Azure Blob Storage Account connection string.
Authentication with access key is the only supported authentication method.
Either storage_account, or this field, must be specified.
Wrapper for sensitive strings containing credentials
The Azure Blob Storage Account container name.
The Azure Blob Storage Endpoint URL.
This is used to override the default blob storage endpoint URL in cases where you are using
credentials read from the environment/managed identities or access tokens without using an
explicit connection_string (which already explicitly supports overriding the blob endpoint
URL).
This may only be used with storage_account and is ignored when used with connection_string.
Middleware settings for outbound requests.
Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
Configuration of adaptive concurrency parameters.
These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or
unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}
The fraction of the current value to set the new concurrency limit when decreasing the limit.
Valid values are greater than 0 and less than 1. Smaller values cause the algorithm to scale back rapidly when latency increases.
Note that the new limit is rounded down after applying this ratio.
default: 0.9
The weighting of new measurements compared to older measurements.
Valid values are greater than 0 and less than 1.
ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4
The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).
It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the adaptive_concurrency_limit metric.
default: 1
Scale of RTT deviations which are not considered anomalous.
Valid values are greater than or equal to 0, and we expect reasonable values to range from 1.0 to 3.0.
When calculating the past RTT average, we also compute a secondary “deviation” value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
Configuration for outbound request concurrency.
A fixed concurrency of 1.
Only one request can be outstanding at any given time.
A fixed amount of concurrency will be allowed.
The time window used for the rate_limit_num option.
default: 1
The maximum number of requests allowed within the rate_limit_duration_secs time window.
default: 9223372036854776000
The maximum number of retries to make for failed requests.
The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000
retry_initial_backoff_secs
The amount of time to wait before attempting the first retry for a failed request.
After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1
The maximum amount of time to wait between retries.
default: 3600
The time a request can take before being aborted.
Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60
The Azure Blob Storage Account name.
Attempts to load credentials for the account in the following ways, in order:
Either connection_string, or this field, must be specified.
Configures how events are encoded into raw bytes.
Apache Avro-specific encoder options.
Encodes an event as a CSV message.
This codec must be configured with fields to encode.
The CSV Serializer Options.
Set the capacity (in bytes) of the internal buffer used in the CSV writer.
This defaults to a reasonable setting.
The field delimiter to use when writing CSV.
Enable double quote escapes.
This is enabled by default, but it may be disabled. When disabled, quotes in
field data are escaped instead of doubled.
The escape character to use when writing CSV.
In some variants of CSV, quotes are escaped using a special escape character
like \ (instead of escaping quotes by doubling them).
To use this, double_quotes needs to be disabled as well; otherwise it is ignored.
Configures the fields that will be encoded, as well as the order in which they
appear in the output.
If a field is not present in the event, the output will be an empty string.
Values of type Array, Object, and Regex are not supported and the output will be an empty string.
The quote character to use when writing CSV.
The quoting style to use when writing CSV data.
This puts quotes around every field. Always.
This puts quotes around fields only when necessary.
They are necessary when fields contain a quote, delimiter or record terminator.
Quotes are also necessary when writing an empty record
(which is indistinguishable from a record with one empty field).
This puts quotes around all fields that are non-numeric.
Namely, when writing a field that does not parse as a valid float or integer,
then quotes will be used even if they aren’t strictly necessary.
This never writes quotes, even if it would produce invalid CSV data.
Encodes an event as a CSV message.
This codec must be configured with fields to encode.
Encodes an event as a GELF message.
Encodes an event as a GELF message.
Encodes an event as JSON.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of tags is displayed with the metric. When set to full, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
All tags are exposed as arrays of either string or null values.
Encodes an event as JSON.
Encodes an event as a logfmt message.
Encodes an event as a logfmt message.
No encoding.
This encoding uses the message field of a log event.
Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
No encoding.
This encoding uses the message field of a log event.
Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
Plain text encoding.
This encoding uses the message field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.
Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of tags is displayed with the metric. When set to full, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
All tags are exposed as arrays of either string or null values.
Plain text encoding.
This encoding uses the message field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.
Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
List of fields that are excluded from the encoded event.
List of fields that are included in the encoded event.
Format used for timestamp fields.
The format in which a timestamp should be represented.
Represent the timestamp as a Unix timestamp.
Represent the timestamp as an RFC 3339 timestamp.
Event data is not delimited at all.
Event data is not delimited at all.
Event data is delimited by a single ASCII (7-bit) character.
Options for the character delimited encoder.
The ASCII (7-bit) character that delimits byte sequences.
Event data is delimited by a single ASCII (7-bit) character.
Event data is prefixed with its length in bytes.
The prefix is a 32-bit unsigned integer, little endian.
Event data is prefixed with its length in bytes.
The prefix is a 32-bit unsigned integer, little endian.
Event data is delimited by a newline (LF) character.
Event data is delimited by a newline (LF) character.
acknowledgements:
  enabled: null
batch:
  max_bytes: null
  max_events: null
  timeout_secs: null
blob_append_uuid: boolean
blob_prefix: blob/%F/
blob_time_format: string
compression: gzip
connection_string: ''
container_name: string
endpoint: string
request:
  adaptive_concurrency:
    decrease_ratio: 0.9
    ewma_alpha: 0.4
    initial_concurrency: 1
    rtt_deviation_scale: 2.5
  rate_limit_duration_secs: 1
  rate_limit_num: 9223372036854776000
  retry_attempts: 9223372036854776000
  retry_initial_backoff_secs: 1
  retry_max_duration_secs: 3600
  timeout_secs: 60
storage_account: string
type: azure_blob
Configuration for the azure_monitor_logs
sink.
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global
acknowledgements
configuration.
default: null
The Resource ID of the Azure resource the data should be associated with.
The maximum size of a batch that is processed by a sink.
This is based on the uncompressed size of the batched events, before they are
serialized/compressed.
default: null
The maximum size of a batch before it is flushed.
default: null
The maximum age of a batch before it is flushed.
default: null
Transformations to prepare an event for serialization.
List of fields that are excluded from the encoded event.
List of fields that are included in the encoded event.
Format used for timestamp fields.
The format in which a timestamp should be represented.
Represent the timestamp as a Unix timestamp.
Represent the timestamp as an RFC 3339 timestamp.
The record type of the data that is being submitted.
Can only contain letters, numbers, and underscores (_), and may not exceed 100 characters.
Middleware settings for outbound requests.
Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
Configuration of adaptive concurrency parameters.
These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or
unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}
The fraction of the current value to set the new concurrency limit when decreasing the limit.
Valid values are greater than 0 and less than 1. Smaller values cause the algorithm to scale back rapidly when latency increases.
Note that the new limit is rounded down after applying this ratio.
default: 0.9
The weighting of new measurements compared to older measurements.
Valid values are greater than 0 and less than 1.
ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4
The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).
It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the adaptive_concurrency_limit metric.
default: 1
Scale of RTT deviations which are not considered anomalous.
Valid values are greater than or equal to 0, and we expect reasonable values to range from 1.0 to 3.0.
When calculating the past RTT average, we also compute a secondary “deviation” value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
Configuration for outbound request concurrency.
A fixed concurrency of 1.
Only one request can be outstanding at any given time.
A fixed amount of concurrency will be allowed.
The time window used for the rate_limit_num option.
default: 1
The maximum number of requests allowed within the rate_limit_duration_secs time window.
default: 9223372036854776000
The maximum number of retries to make for failed requests.
The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000
retry_initial_backoff_secs
The amount of time to wait before attempting the first retry for a failed request.
After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1
The maximum amount of time to wait between retries.
default: 3600
The time a request can take before being aborted.
Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60
Use this option to customize the log field used as TimeGenerated in Azure.
The setting of log_schema.timestamp_key, usually timestamp, is used here by default.
This field should be used in rare cases where TimeGenerated should point to a specific log field. For example, use this field to set the log field source_timestamp as holding the value that should be used as TimeGenerated on the Azure side.
An optional path that deserializes an empty string to None.
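As a sketch of the rare case described above, pointing TimeGenerated at a log field named source_timestamp would look like:
time_generated_key: source_timestamp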
Sets the list of supported ALPN protocols.
Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order that they are defined.
Absolute path to an additional CA certificate file.
The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Absolute path to a certificate file used to identify this server.
The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format.
If this is set, and is not a PKCS#12 archive, key_file must also be set.
Absolute path to a private key file used to identify this server.
The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Passphrase used to unlock the encrypted key file.
This has no effect unless key_file is set.
Enables certificate verification.
If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.
Relevant for both incoming and outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the validity of certificates.
Enables hostname verification.
If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.
Only relevant for outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the remote hostname.
acknowledgements:
  enabled: null
azure_resource_id: string
batch:
  max_bytes: null
  max_events: null
  timeout_secs: null
customer_id: string
encoding: {}
host: ods.opinsights.azure.com
log_type: string
request:
  adaptive_concurrency:
    decrease_ratio: 0.9
    ewma_alpha: 0.4
    initial_concurrency: 1
    rtt_deviation_scale: 2.5
  rate_limit_duration_secs: 1
  rate_limit_num: 9223372036854776000
  retry_attempts: 9223372036854776000
  retry_initial_backoff_secs: 1
  retry_max_duration_secs: 3600
  timeout_secs: 60
shared_key: string
time_generated_key: ''
tls: ''
type: azure_monitor_logs
Configuration for the blackhole
sink.
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global
acknowledgements
configuration.
default: null
The interval between reporting a summary of activity.
Set to 0 to disable reporting.
The number of events, per second, that the sink is allowed to consume.
By default, there is no limit.
acknowledgements:
  enabled: null
print_interval_secs: 1
rate: integer
type: blackhole
Configuration for the clickhouse
sink.
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global
acknowledgements
configuration.
default: null
Configuration of the authentication strategy for HTTP requests.
HTTP authentication should be used with HTTPS only, as the authentication credentials are passed as an HTTP header without any additional encryption beyond what is provided by the transport itself.
Configuration of the authentication strategy for HTTP requests.
HTTP authentication should be used with HTTPS only, as the authentication credentials are passed as an
HTTP header without any additional encryption beyond what is provided by the transport itself.
Basic authentication.
The username and password are concatenated and encoded via base64.
The basic authentication password.
Basic authentication.
The username and password are concatenated and encoded via base64.
The basic authentication username.
Bearer authentication.
The bearer token value (OAuth2, JWT, etc.) is passed as-is.
Bearer authentication.
The bearer token value (OAuth2, JWT, etc.) is passed as-is.
The bearer authentication token.
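As an illustrative sketch of the basic strategy (the strategy/user/password spellings are the conventional option names and should be treated as assumptions; the environment variables are hypothetical):
auth:
  strategy: basic
  user: "${CLICKHOUSE_USER}"          # hypothetical environment variable
  password: "${CLICKHOUSE_PASSWORD}"  # hypothetical environment variable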
The maximum size of a batch that is processed by a sink.
This is based on the uncompressed size of the batched events, before they are
serialized/compressed.
default: null
The maximum size of a batch before it is flushed.
default: null
The maximum age of a batch before it is flushed.
default: null
Compression configuration.
All compression algorithms use the default compression level unless otherwise specified.
Compression algorithm and compression level.
Compression level.
Allowed enum values: none,fast,best,default,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21
A templated field.
The database that contains the table that data is inserted into.
A templated field.
In many cases, components can be configured so that part of the component's functionality can be customized on a per-event basis. For example, you may have a sink that writes events to a file, where the file an event goes to is determined by an event field used as part of the filename.
By using Template, users can specify either fixed strings or templated strings. Templated strings use a common syntax to refer to fields in an event, and the event is used as the input data when rendering the template. An example of a fixed string is my-file.log. An example of a template string is my-file-{{key}}.log, where {{key}} is replaced by the value of the event's key field when the template is rendered into a string.
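For instance, a templated database for this sink might be sketched as (the application field name is hypothetical):
database: logs_{{application}}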
Sets date_time_input_format to best_effort, allowing ClickHouse to properly parse RFC 3339/ISO 8601 timestamps.
Transformations to prepare an event for serialization.
List of fields that are excluded from the encoded event.
List of fields that are included in the encoded event.
Format used for timestamp fields.
The format in which a timestamp should be represented.
Represent the timestamp as a Unix timestamp.
Represent the timestamp as an RFC 3339 timestamp.
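For example, the same instant could be represented as the Unix timestamp 1704067200 or as the RFC 3339 timestamp 2024-01-01T00:00:00Z.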
The URI component of a request.
The endpoint of the ClickHouse server.
Middleware settings for outbound requests.
Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
Configuration of adaptive concurrency parameters.
These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or
unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}The fraction of the current value to set the new concurrency limit when decreasing the limit.
Valid values are greater than 0
and less than 1
. Smaller values cause the algorithm to scale back rapidly
when latency increases.
Note that the new limit is rounded down after applying this ratio.
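For example, with a current concurrency limit of 10 and the default decrease_ratio of 0.9, a decrease sets the new limit to floor(10 × 0.9) = 9.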
default: 0.9
The weighting of new measurements compared to older measurements.
Valid values are greater than 0 and less than 1.
ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with
the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has
unusually high response variability.
default: 0.4
The initial concurrency limit to use. If not specified, the initial limit is 1 (no concurrency).
It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the adaptive_concurrency_limit metric.
default: 1
Scale of RTT deviations which are not considered anomalous.
Valid values are greater than or equal to 0, and we expect reasonable values to range from 1.0 to 3.0.
When calculating the past RTT average, we also compute a secondary “deviation” value that indicates how variable
those values are. We use that deviation when comparing the past RTT average to the current measurements, so we
can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to
an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
Configuration for outbound request concurrency.
A fixed concurrency of 1.
Only one request can be outstanding at any given time.
A fixed amount of concurrency will be allowed.
The time window used for the rate_limit_num option.
default: 1
The maximum number of requests allowed within the rate_limit_duration_secs time window.
default: 9223372036854776000
The maximum number of retries to make for failed requests.
The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000
retry_initial_backoff_secs
The amount of time to wait before attempting the first retry for a failed request.
After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
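For example, with retry_initial_backoff_secs left at 1, the waits between successive retries roughly follow the sequence 1, 1, 2, 3, 5, 8, … seconds, bounded above by retry_max_duration_secs.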
default: 1
The maximum amount of time to wait between retries.
default: 3600
The time a request can take before being aborted.
Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60
Sets input_format_skip_unknown_fields, allowing ClickHouse to discard fields not present in the table schema.
A templated field.
The table that data is inserted into.
Sets the list of supported ALPN protocols.
Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order in which they are defined.
Absolute path to an additional CA certificate file.
The certificate must be in DER or PEM (X.509) format. The certificate can also be provided as an inline string in PEM format.
Absolute path to a certificate file used to identify this server.
The certificate must be in DER, PEM (X.509), or PKCS#12 format. The certificate can also be provided as an inline string in PEM format.
If this is set and is not a PKCS#12 archive, key_file must also be set.
Absolute path to a private key file used to identify this server.
The key must be in DER or PEM (PKCS#8) format. The key can also be provided as an inline string in PEM format.
Passphrase used to unlock the encrypted key file.
This has no effect unless key_file is set.
Enables certificate verification.
If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner: it checks that the leaf certificate (the certificate presented by the client or server) is not only valid, but that the issuer of that certificate is also valid, and so on, until the verification process reaches a root certificate.
Relevant for both incoming and outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the validity of certificates.
Enables hostname verification.
If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.
Only relevant for outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the remote hostname.
acknowledgements:
enabled: null
auth: ''
batch:
max_bytes: null
max_events: null
timeout_secs: null
compression: gzip
database: ''
date_time_best_effort: boolean
encoding: {}
endpoint: string
request:
adaptive_concurrency:
decrease_ratio: 0.9
ewma_alpha: 0.4
initial_concurrency: 1
rtt_deviation_scale: 2.5
rate_limit_duration_secs: 1
rate_limit_num: 9223372036854776000
retry_attempts: 9223372036854776000
retry_initial_backoff_secs: 1
retry_max_duration_secs: 3600
timeout_secs: 60
skip_unknown_fields: boolean
table: string
tls: ''
type: clickhouse
Configuration for the console
sink.
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global
acknowledgements
configuration.
default: null
Configures how events are encoded into raw bytes.
Apache Avro-specific encoder options.
Encodes an event as a CSV message.
This codec must be configured with fields to encode.
The CSV Serializer Options.
Set the capacity (in bytes) of the internal buffer used in the CSV writer.
This defaults to a reasonable setting.
The field delimiter to use when writing CSV.
Enable double quote escapes.
This is enabled by default, but it may be disabled. When disabled, quotes in
field data are escaped instead of doubled.
The escape character to use when writing CSV.
In some variants of CSV, quotes are escaped using a special escape character like \ (instead of escaping quotes by doubling them).
To use this, double_quotes must be disabled; otherwise, this option is ignored.
Configures the fields that will be encoded, as well as the order in which they appear in the output.
If a field is not present in the event, the output will be an empty string.
Values of type Array, Object, and Regex are not supported, and the output will be an empty string.
The quote character to use when writing CSV.
The quoting style to use when writing CSV data.
This puts quotes around every field. Always.
This puts quotes around fields only when necessary.
They are necessary when fields contain a quote, delimiter, or record terminator.
Quotes are also necessary when writing an empty record
(which is indistinguishable from a record with one empty field).
This puts quotes around all fields that are non-numeric.
That is, when writing a field that does not parse as a valid float or integer, quotes are used even if they aren't strictly necessary.
This never writes quotes, even if it would produce invalid CSV data.
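As an illustration, consider writing the three fields a, b,c, and 7 (the second field contains the delimiter):
always quoting:      "a","b,c","7"
necessary quoting:   a,"b,c",7
non-numeric quoting: "a","b,c",7
never quoting:       a,b,c,7   (no longer valid CSV, since the delimiter inside b,c is unquoted)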
Encodes an event as a CSV message.
This codec must be configured with fields to encode.
Encodes an event as a GELF message.
Encodes an event as a GELF message.
Encodes an event as JSON.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of each tag is displayed with the metric. When set to full, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config
option. Tags with multiple values show the last assigned value, and null values
are ignored.
All tags are exposed as arrays of either string or null values.
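For example, a tag assigned the values a and then b would be displayed as tag: b under single, and as tag: [a, b] under full.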
Encodes an event as JSON.
Encodes an event as a logfmt message.
Encodes an event as a logfmt message.
No encoding.
This encoding uses the message field of a log event.
Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
No encoding.
This encoding uses the message field of a log event.
Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.