Use Observability Pipelines’ Kafka source to receive logs from your Kafka topics. Select and set up this source when you set up a pipeline. The Kafka source uses librdkafka.
You can also send Azure Event Hub logs to Observability Pipelines using the Kafka source.
Prerequisites
To use Observability Pipelines’ Kafka source, you need the following information available:
- The hosts and ports of the Kafka bootstrap servers, which clients should use to connect to the Kafka cluster and discover all the other hosts in the cluster.
- The appropriate TLS certificates and the password you used to create your private key, if your forwarders are globally configured to enable SSL.
Set up the source in the pipeline UI
Select and set up this source when you set up a pipeline. The information below is for the source settings in the pipeline UI.
Only enter the identifiers for the Kafka servers, username, password, and, if applicable, the TLS key pass. Do not enter the actual values.
- Enter the identifier for your Kafka servers. If you leave it blank, the default is used.
- Enter the identifier for your Kafka username. If you leave it blank, the default is used.
- Enter the identifier for your Kafka password. If you leave it blank, the default is used.
- Enter the group ID.
- Enter the topic name. If there is more than one, click Add Field to add additional topics.
Optional settings
Enable SASL Authentication
- Toggle the switch to enable SASL Authentication.
- Select the mechanism (PLAIN, SCRAM-SHA-256, or SCRAM-SHA-512) in the dropdown menu.
Enable TLS
Toggle the switch to Enable TLS. If you enable TLS, the following certificate and key files are required.
Note: All file paths are resolved relative to the configuration data directory, which is /var/lib/observability-pipelines-worker/config/ by default. See Advanced Worker Configurations for more information. The files must be owned by the observability-pipelines-worker group and observability-pipelines-worker user, or at least be readable by that group or user.
- Enter the identifier for your Kafka key pass. If you leave it blank, the default is used.
- Server Certificate Path: The path to the certificate file that has been signed by your Certificate Authority (CA) root file in DER or PEM (X.509) format.
- CA Certificate Path: The path to the certificate file that is your Certificate Authority (CA) root file in DER or PEM (X.509) format.
- Private Key Path: The path to the .key private key file that belongs to your Server Certificate Path in DER or PEM (PKCS#8) format.
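As a hedged illustration of how relative paths resolve against the configuration data directory described above (this is not the Worker's actual code, and the file name is a placeholder):

```python
# Illustrative sketch: TLS file paths entered in the UI are resolved
# relative to the configuration data directory. The file name below is
# a placeholder, not a required name.
from pathlib import Path

CONFIG_DATA_DIR = Path("/var/lib/observability-pipelines-worker/config")

def resolve_config_path(ui_value: str) -> Path:
    """Join a relative UI path value onto the config data directory."""
    return CONFIG_DATA_DIR / ui_value

print(resolve_config_path("tls/server.crt"))
# /var/lib/observability-pipelines-worker/config/tls/server.crt
```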
Add additional librdkafka options
- Click Advanced and then Add Option.
- Select an option in the dropdown menu.
- Enter a value for that option.
- Check your values against the librdkafka documentation to make sure they have the correct type and are within the set range.
- Click Add Option to add another librdkafka option.
Set secrets
These are the defaults used for secret identifiers and environment variables.
Note: If you enter identifiers for your secrets and then choose to use environment variables, the environment variable is the identifier entered and prepended with DD_OP. For example, if you entered PASSWORD_1 for a password identifier, the environment variable for that password is DD_OP_PASSWORD_1.
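The identifier-to-environment-variable rule above can be sketched as:

```python
# Sketch of the rule above: an entered secret identifier maps to an
# environment variable name by prepending DD_OP_.
def env_var_for(identifier: str) -> str:
    return "DD_OP_" + identifier

print(env_var_for("PASSWORD_1"))  # DD_OP_PASSWORD_1
```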
- Kafka bootstrap servers identifier:
  - References the bootstrap server that the client uses to connect to the Kafka cluster and discover all the other hosts in the cluster.
  - In your secrets manager, the host and port must be entered in the format host:port, such as 10.14.22.123:9092. If there is more than one server, use commas to separate them.
  - The default identifier is SOURCE_KAFKA_BOOTSTRAP_SERVERS.
- Kafka SASL username identifier:
  - The default identifier is SOURCE_KAFKA_SASL_USERNAME.
- Kafka SASL password identifier:
  - The default identifier is SOURCE_KAFKA_SASL_PASSWORD.
- Kafka TLS passphrase identifier (when TLS is enabled):
  - The default identifier is SOURCE_KAFKA_KEY_PASS.
- The host and port of the Kafka bootstrap servers:
  - The bootstrap server that the client uses to connect to the Kafka cluster and discover all the other hosts in the cluster. The host and port must be entered in the format host:port, such as 10.14.22.123:9092. If there is more than one server, use commas to separate them.
  - The default environment variable is DD_OP_SOURCE_KAFKA_BOOTSTRAP_SERVERS.
- SASL (when enabled):
  - Kafka SASL username:
    - The default environment variable is DD_OP_SOURCE_KAFKA_SASL_USERNAME.
  - Kafka SASL password:
    - The default environment variable is DD_OP_SOURCE_KAFKA_SASL_PASSWORD.
- Kafka TLS passphrase (when TLS is enabled):
  - The default environment variable is DD_OP_SOURCE_KAFKA_KEY_PASS.
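A hedged sketch of validating the comma-separated host:port format before storing the bootstrap servers value in a secrets manager or environment variable (the addresses are placeholders):

```python
# Illustrative check of the expected bootstrap servers format:
# comma-separated host:port pairs, e.g. "10.14.22.123:9092".
def parse_bootstrap_servers(value: str) -> list[tuple[str, int]]:
    servers = []
    for entry in value.split(","):
        host, sep, port = entry.strip().rpartition(":")
        if not sep or not host or not port.isdigit():
            raise ValueError(f"expected host:port, got {entry!r}")
        servers.append((host, int(port)))
    return servers

print(parse_bootstrap_servers("10.14.22.123:9092,10.14.22.124:9092"))
# [('10.14.22.123', 9092), ('10.14.22.124', 9092)]
```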
librdkafka options
These are the available librdkafka options:
- auto.offset.reset
- auto.commit.interval.ms
- client.id
- coordinator.query.interval.ms
- enable.auto.commit
- enable.auto.offset.store
- fetch.max.bytes
- fetch.message.max.bytes
- fetch.min.bytes
- fetch.wait.max.ms
- group.instance.id
- heartbeat.interval.ms
- queued.min.messages
- session.timeout.ms
- socket.timeout.ms
See the librdkafka documentation for more information and to ensure your values have the correct type and are within range.
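A minimal sketch of the kind of type check the note above suggests, assuming typical librdkafka value types (the mapping below is an assumption for illustration; confirm each option's actual type and range in the librdkafka documentation):

```python
# Illustrative sketch (assumed types, not the Worker's validation):
# check that an option value has a plausible Python type before entering
# it in the UI. Integer options here take millisecond or byte counts.
OPTION_TYPES = {
    "auto.offset.reset": str,       # for example "earliest" or "latest"
    "auto.commit.interval.ms": int,
    "client.id": str,
    "fetch.max.bytes": int,
    "session.timeout.ms": int,
}

def has_expected_type(name: str, value) -> bool:
    expected = OPTION_TYPES.get(name)
    return expected is not None and isinstance(value, expected)

print(has_expected_type("session.timeout.ms", 45000))  # True
print(has_expected_type("session.timeout.ms", "45s"))  # False
```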