Overview
Configure your rsyslog or syslog-ng source so that the Observability Pipelines Worker formats the logs collected into a Datadog-rehydratable format before routing them to Datadog Log Archives.
This document walks you through the following steps:
The prerequisites needed to set up Observability Pipelines
Configuring Datadog Log Archives
Setting up Observability Pipelines
Sending logs from rsyslog or syslog-ng to the Observability Pipelines Worker
To use Observability Pipelines’ Syslog source, your applications must send data in one of the following formats: RFC 6587, RFC 5424, or RFC 3164. You also need the following information available:
The bind address that your Observability Pipelines Worker (OPW) will listen on to receive logs from your applications. For example, 0.0.0.0:8088. Later on, you configure your applications to send logs to this address.
The appropriate TLS certificates and the password you used to create your private key if your forwarders are globally configured to enable SSL.
Configure Log Archives
If you already have a Datadog Log Archive configured for Observability Pipelines, skip to Set up Observability Pipelines.
Copy the below policy and paste it into the Policy editor. Replace <MY_BUCKET_NAME> and <MY_BUCKET_NAME_1_/_MY_OPTIONAL_BUCKET_PATH_1> with the information for the S3 bucket you created earlier.
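A sketch of the policy, consistent with Datadog's documented Log Archive permissions (s3:PutObject, s3:GetObject, and s3:ListBucket), is shown below; verify it against the current Datadog Log Archives documentation before pasting:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DatadogUploadAndRehydrateLogArchives",
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject"],
      "Resource": "arn:aws:s3:::<MY_BUCKET_NAME_1_/_MY_OPTIONAL_BUCKET_PATH_1>/*"
    },
    {
      "Sid": "DatadogRehydrateLogArchivesListBucket",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::<MY_BUCKET_NAME>"
    }
  ]
}
```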
Create an IAM user
Create an IAM user and attach the IAM policy you created earlier to it.
Choose the IAM policy you created earlier to attach to the new IAM user.
Click Next.
Optionally, add tags.
Click Create user.
Create access credentials for the new IAM user. The AWS access key and AWS secret access key are added as environment variables in the Install the Observability Pipelines Worker step.
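If you prefer to create the access key from the command line, here is a sketch using the AWS CLI (the user name is a placeholder for the IAM user you just created):

```shell
# Create an access key pair for the new IAM user; note the AccessKeyId and SecretAccessKey in the output
aws iam create-access-key --user-name <OBSERVABILITY_PIPELINES_WORKER_USER>
```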
Add a query that filters out all logs going through log pipelines so that none of those logs go into this archive. For example, add the query observability_pipelines_read_only_archive, assuming no logs going through the pipeline have that tag added.
Select AWS S3.
Select the AWS account that your bucket is in.
Enter the name of the S3 bucket.
Optionally, enter a path.
Check the confirmation statement.
Optionally, add tags and define the maximum scan size for rehydration. See Advanced settings for more information.
On the Buckets page, click Create to create a bucket for your archives.
Enter a name for the bucket and choose where to store your data.
Select Fine-grained in the Choose how to control access to objects section.
Do not add a retention policy because the most recent data needs to be rewritten in some rare cases (typically a timeout case).
Click Create.
Allow the Observability Pipelines Worker to write to the bucket
To authenticate the Observability Pipelines Worker for Google Cloud Storage, contact your Google Security Operations representative for a Google Developer Service Account Credential. This credential is a JSON file and must be placed under DD_OP_DATA_DIR/config. See Getting API authentication credential for more information.
Note: If you are installing the Worker in Kubernetes, see Referencing files in Kubernetes for information on how to reference the credentials file.
Connect the storage bucket to Datadog Log Archives
Add a query that filters out all logs going through log pipelines so that none of those logs go into this archive. For example, add the query observability_pipelines_read_only_archive, assuming no logs going through the pipeline have that tag added.
Select Google Cloud Storage.
Select the service account your storage bucket is in.
Select the project.
Enter the name of the storage bucket you created earlier.
Optionally, enter a path.
Optionally, set permissions, add tags, and define the maximum scan size for rehydration. See Advanced settings for more information.
Add a query that filters out all logs going through log pipelines so that none of those logs go into this archive. For example, add the query observability_pipelines_read_only_archive, assuming no logs going through the pipeline have that tag added.
Select Azure Storage.
Select the Azure tenant and client your storage account is in.
Enter the name of the storage account.
Enter the name of the container you created earlier.
Optionally, enter a path.
Optionally, set permissions, add tags, and define the maximum scan size for rehydration. See Advanced settings for more information.
Select the Archive Logs template to create a new pipeline.
Select rsyslog or syslog-ng as the source.
Set up the source
To configure your Syslog source:
In the Socket Type dropdown menu, select the communication protocol you want to use: TCP or UDP.
Optionally, toggle the switch to enable TLS. If you enable TLS, the following certificate and key files are required:
Server Certificate Path: The path to the certificate file that has been signed by your Certificate Authority (CA) Root File in DER or PEM (X.509) format.
CA Certificate Path: The path to the certificate file that is your Certificate Authority (CA) Root File in DER or PEM (X.509) format.
Private Key Path: The path to the .key private key file that belongs to your Server Certificate Path in DER or PEM (PKCS#8) format.
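If you need certificates for local testing, a minimal self-signed pair can be generated with OpenSSL (a sketch for testing only; for production, use certificates signed by your CA as described above):

```shell
# Generate a private key and a self-signed certificate valid for one year (testing only)
openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
  -keyout worker.key -out worker.crt \
  -subj "/CN=<OPW_HOST>"
```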
Set up the destinations
Enter the following information based on your selected logs destination.
If the Worker is ingesting logs that are not coming from the Datadog Agent and are shipped to an archive using the Observability Pipelines Datadog Archives destination, those logs are not tagged with reserved attributes. In addition, logs rehydrated into Datadog will not have standard attributes mapped. This means that when you rehydrate your logs into Log Management, you may lose Datadog telemetry, the ability to search logs easily, and the benefits of unified service tagging if you do not structure and remap your logs in Observability Pipelines before routing your logs to an archive.
For example, say your syslogs are sent to Datadog Archives and those logs have the status tagged as severity instead of the reserved attribute of status and the host tagged as host-name instead of the reserved attribute hostname. When these logs are rehydrated in Datadog, the status for each log is set to info and none of the logs have a hostname tag.
Follow the instructions for the cloud provider you are using to archive your logs.
Amazon S3
Enter the S3 bucket name for the S3 bucket you created earlier.
Enter the AWS region the S3 bucket is in.
Enter the key prefix. Prefixes are useful for partitioning objects. For example, you can use a prefix as an object key to store objects under a particular directory. If using a prefix for this purpose, it must end in / to act as a directory path; a trailing / is not automatically added.
Select the storage class for your S3 bucket in the Storage Class dropdown menu.
Your AWS access key ID and AWS secret access key are set as environment variables when you install the Worker later.
Google Cloud Storage
Enter the name of the Google Cloud storage bucket you created earlier.
Enter the path to the credentials JSON file you downloaded earlier.
Select the storage class for the created objects.
Select the access level of the created objects.
Optionally, enter a prefix. Prefixes are useful for partitioning objects. For example, you can use a prefix as an object key to store objects under a particular directory. If using a prefix for this purpose, it must end in / to act as a directory path; a trailing / is not automatically added.
Optionally, click Add Header to add metadata.
Azure Storage
Enter the name of the Azure container you created earlier.
Optionally, enter a prefix. Prefixes are useful for partitioning objects. For example, you can use a prefix as an object key to store objects under a particular directory. If using a prefix for this purpose, it must end in / to act as a directory path; a trailing / is not automatically added.
There are no configuration steps for your Datadog destination.
The following fields are optional:
Enter the name of the Splunk index you want your data in. This has to be an allowed index for your HEC.
Select whether the timestamp should be auto-extracted. If set to true, Splunk extracts the timestamp from the message with the expected format of yyyy-mm-dd hh:mm:ss.
Set the sourcetype to override Splunk’s default value, which is httpevent for HEC data.
The following fields are optional:
In the Encoding dropdown menu, select whether you want to encode your pipeline’s output in JSON, Logfmt, or Raw text. If no encoding is selected, it defaults to JSON.
Enter a source name to override the default name value configured for your Sumo Logic collector’s source.
Enter a host name to override the default host value configured for your Sumo Logic collector’s source.
Enter a category name to override the default category value configured for your Sumo Logic collector’s source.
Click Add Header to add any custom header fields and values.
The rsyslog and syslog-ng destinations support the RFC5424 format.
The rsyslog and syslog-ng destinations match these log fields to the following Syslog fields:
| Log Event | Syslog field | Default |
|---|---|---|
| log["message"] | MESSAGE | NIL |
| log["procid"] | PROCID | The running Worker's process ID. |
| log["appname"] | APP-NAME | observability_pipelines |
| log["facility"] | FACILITY | 8 (log_user) |
| log["msgid"] | MSGID | NIL |
| log["severity"] | SEVERITY | info |
| log["host"] | HOSTNAME | NIL |
| log["timestamp"] | TIMESTAMP | Current UTC time. |
The following destination settings are optional:
Toggle the switch to enable TLS. If you enable TLS, the following certificate and key files are required:
Server Certificate Path: The path to the certificate file that has been signed by your Certificate Authority (CA) Root File in DER or PEM (X.509).
CA Certificate Path: The path to the certificate file that is your Certificate Authority (CA) Root File in DER or PEM (X.509).
Private Key Path: The path to the .key private key file that belongs to your Server Certificate Path in DER or PEM (PKCS#8) format.
Enter the number of seconds to wait before sending TCP keepalive probes on an idle connection.
To authenticate the Observability Pipelines Worker for Google Chronicle, contact your Google Security Operations representative for a Google Developer Service Account Credential. This credential is a JSON file and must be placed under DD_OP_DATA_DIR/config. See Getting API authentication credential for more information.
Note: If you are installing the Worker in Kubernetes, see Referencing files in Kubernetes for information on how to reference the credentials file.
To set up the Worker’s Google Chronicle destination:
Enter the customer ID for your Google Chronicle instance.
Enter the path to the credentials JSON file you downloaded earlier.
Select JSON or Raw encoding in the dropdown menu.
Select the appropriate Log Type in the dropdown menu.
Note: Logs sent to the Google Chronicle destination must have ingestion labels. For example, if the logs are from an A10 load balancer, they must have the ingestion label A10_LOAD_BALANCER. See Google Cloud’s Support log types with a default parser for a list of available log types and their respective ingestion labels.
The following fields are optional:
Enter the name for the Elasticsearch index.
Enter the Elasticsearch version.
Optionally, enter the name of the OpenSearch index.
Optionally, enter the name of the Amazon OpenSearch index.
Select an authentication strategy, Basic or AWS. For AWS, enter the AWS region.
Set up processors
There are pre-selected processors added to your processor group out of the box. You can add additional processors or delete any existing ones based on your processing needs.
Processor groups are executed from top to bottom. The order of the processors is important because logs are checked by each processor, but only logs that match the processor’s filters are processed. To modify the order of the processors, use the drag handle on the top left corner of the processor you want to move.
Filter query syntax
Each processor has a corresponding filter query in its fields. Processors only process logs that match their filter query. For all processors except the filter processor, logs that do not match the query are sent to the next step of the pipeline. For the filter processor, logs that do not match the query are dropped.
For any attribute, tag, or key:value pair that is not a reserved attribute, your query must start with @. Conversely, to filter reserved attributes, you do not need to append @ in front of your filter query.
For example, to filter out and drop status:info logs, your filter can be set as NOT (status:info). To filter out and drop system-status:info, your filter must be set as NOT (@system-status:info).
Filter query examples:
NOT (status:debug): This filters for only logs that do not have the status DEBUG.
status:ok service:flask-web-app: This filters for all logs with the status OK from your flask-web-app service.
This query can also be written as: status:ok AND service:flask-web-app.
host:COMP-A9JNGYK OR host:COMP-J58KAS: This filter query only matches logs from the labeled hosts.
@user.status:inactive: This filters for logs with the status inactive nested under the user attribute.
Enter the information for the processors you want to use. Click the Add button to add additional processors. To delete a processor, click the kebab on the right side of the processor and select Delete.
This processor filters for logs that match the specified filter query and drops all non-matching logs. If a log is dropped at this processor, then none of the processors below this one receives that log. This processor can filter out unnecessary logs, such as debug or warning logs.
To set up the filter processor:
Define a filter query. The query you specify filters for and passes on only logs that match it, dropping all other logs.
The remap processor can add, drop, or rename fields within your individual log data. Use this processor to enrich your logs with additional context, remove low-value fields to reduce volume, and standardize naming across important attributes. Select add field, drop field, or rename field in the dropdown menu to get started.
Add field
Use add field to append a new key-value field to your log.
To set up the add field processor:
Define a filter query. Only logs that match the specified filter query are processed. All logs, regardless of whether they do or do not match the filter query, are sent to the next step in the pipeline.
Enter the field and value you want to add. To specify a nested field for your key, use the path notation: <OUTER_FIELD>.<INNER_FIELD>. All values are stored as strings.
Note: If the field you want to add already exists, the Worker throws an error and the existing field remains unchanged.
Drop field
Use drop field to drop a field from logging data that matches the filter you specify below. It can delete objects, so you can use the processor to drop nested keys.
To set up the drop field processor:
Define a filter query. Only logs that match the specified filter query are processed. All logs, regardless of whether they do or do not match the filter query, are sent to the next step in the pipeline.
Enter the key of the field you want to drop. To specify a nested field for your specified key, use the path notation: <OUTER_FIELD>.<INNER_FIELD>.
Note: If your specified key does not exist, your log will be unimpacted.
Rename field
Use rename field to rename a field within your log.
To set up the rename field processor:
Define a filter query. Only logs that match the specified filter query are processed. All logs, regardless of whether they do or do not match the filter query, are sent to the next step in the pipeline.
Enter the name of the field you want to rename in the Source field. To specify a nested field for your key, use the path notation: <OUTER_FIELD>.<INNER_FIELD>. Once renamed, your original field is deleted unless you enable the Preserve source tag checkbox described below. Note: If the source key you specify doesn’t exist, a default null value is applied to your target.
In the Target field, enter the name you want the source field to be renamed to. To specify a nested field for your specified key, use the path notation: <OUTER_FIELD>.<INNER_FIELD>. Note: If the target field you specify already exists, the Worker throws an error and does not overwrite the existing target field.
Optionally, check the Preserve source tag box if you want to retain the original source field and duplicate the information from your source key to your specified target key. If this box is not checked, the source key is dropped after it is renamed.
Path notation example
For the following message structure, use outer_key.inner_key.double_inner_key to refer to the key with the value double_inner_value.
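A minimal message structure consistent with that path (illustrative only; the surrounding keys are placeholders):

```json
{
  "outer_key": {
    "inner_key": {
      "double_inner_key": "double_inner_value",
      "other_key": "other_value"
    },
    "sibling_key": "sibling_value"
  }
}
```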
This processor samples your logging traffic for a representative subset at the rate that you define, dropping the remaining logs. As an example, you can use this processor to sample 20% of logs from a noisy non-critical service.
The sampling only applies to logs that match your filter query and does not impact other logs. If a log is dropped at this processor, none of the processors below receives that log.
To set up the sample processor:
Define a filter query. Only logs that match the specified filter query are sampled at the specified retention rate below. The sampled logs and the logs that do not match the filter query are sent to the next step in the pipeline.
Set the retain field with your desired sampling rate expressed as a percentage. For example, entering 2 means 2% of logs are retained out of all the logs that match the filter query.
This processor parses logs using the grok parsing rules that are available for a set of sources. The rules are automatically applied to logs based on the log source. Therefore, logs must have a source field with the source name. If this field is not added when the log is sent to the Observability Pipelines Worker, you can use the Add field processor to add it.
If the source field of a log matches one of the grok parsing rule sets, the log’s message field is checked against those rules. If a rule matches, the resulting parsed data is added in the message field as a JSON object, overwriting the original message.
If there isn’t a source field on the log, or no rule matches the log message, then no changes are made to the log and it is sent to the next step in the pipeline.
To set up the grok parser:
Define a filter query. Only logs that match the specified filter query are processed. All logs, regardless of whether they do or do not match the filter query, are sent to the next step in the pipeline.
Click the Preview Rules button.
Search or select a source in the dropdown menu to see the grok parsing rules for that source.
The quota processor measures the logging traffic for logs that match the filter you specify. When the configured daily quota is met inside the 24-hour rolling window, the processor can either drop additional logs or send an alert using a Datadog monitor. You can configure the processor to track the total volume or the total number of events.
As an example, you can configure this processor to drop new logs or trigger an alert without dropping logs after the processor has received 10 million events from a certain service in the last 24 hours.
To set up the quota processor:
Enter a name for the quota processor. The pipeline uses the name to identify the quota across multiple Remote Configuration deployments of the Worker.
Define a filter query. Only logs that match the specified filter query are counted towards the daily limit.
Logs that match the quota filter and are within the daily quota are sent to the next step in the pipeline.
Logs that do not match the quota filter are sent to the next step of the pipeline.
In the Unit for quota dropdown menu, select if you want to measure the quota by the number of Events or by the Volume in bytes.
Set the daily quota limit and select the unit of magnitude for your desired quota.
Check the Drop events checkbox if you want to drop all events when your quota is met. Leave it unchecked if you plan to set up a monitor that sends an alert when the quota is met.
If logs that match the quota filter are received after the daily quota has been met and the Drop events option is selected, then those logs are dropped. In this case, only logs that did not match the filter query are sent to the next step in the pipeline.
If logs that match the quota filter are received after the daily quota has been met and the Drop events option is not selected, then those logs and the logs that did not match the filter query are sent to the next step in the pipeline.
The reduce processor groups multiple log events into a single log, based on the fields specified and the merge strategies selected. Logs are grouped at 10-second intervals. After the interval has elapsed for the group, the reduced log for that group is sent to the next step in the pipeline.
To set up the reduce processor:
Define a filter query. Only logs that match the specified filter query are processed. Reduced logs and logs that do not match the filter query are sent to the next step in the pipeline.
In the Group By section, enter the field you want to group the logs by.
Click Add Group by Field to add additional fields.
In the Merge Strategy section:
In On Field, enter the name of the field you want to merge the logs on.
Select the merge strategy in the Apply dropdown menu. This is the strategy used to combine events. See the following Merge strategies section for descriptions of the available strategies.
Click Add Merge Strategy to add additional strategies.
Merge strategies
These are the available merge strategies for combining log events.
| Name | Description |
|---|---|
| Array | Appends each value to an array. |
| Concat | Concatenates each string value, delimited with a space. |
| Concat newline | Concatenates each string value, delimited with a newline. |
| Concat raw | Concatenates each string value, without a delimiter. |
| Discard | Discards all values except the first value that was received. |
| Flat unique | Creates a flattened array of all unique values that were received. |
| Longest array | Keeps the longest array that was received. |
| Max | Keeps the maximum numeric value that was received. |
| Min | Keeps the minimum numeric value that was received. |
| Retain | Discards all values except the last value that was received. Works as a way to coalesce by not retaining `null`. |
| Shortest array | Keeps the shortest array that was received. |
| Sum | Sums all numeric values that were received. |
The deduplicate processor removes copies of data to reduce volume and noise. It caches 5,000 messages at a time and compares your incoming logs traffic against the cached messages. For example, this processor can be used to keep only unique warning logs in the case where multiple identical warning logs are sent in succession.
To set up the deduplicate processor:
Define a filter query. Only logs that match the specified filter query are processed. Deduped logs and logs that do not match the filter query are sent to the next step in the pipeline.
In the Type of deduplication dropdown menu, select whether you want to Match on or Ignore the fields specified below.
If Match is selected, then after a log passes through, future logs that have the same values for all of the fields you specify below are removed.
If Ignore is selected, then after a log passes through, future logs that have the same values for all of their fields, except the ones you specify below, are removed.
Enter the fields you want to match on, or ignore. At least one field is required, and you can specify a maximum of three fields.
Use the path notation <OUTER_FIELD>.<INNER_FIELD> to match subfields. See the Path notation example below.
Click Add field to add additional fields you want to filter on.
Path notation example
For the following message structure, use outer_key.inner_key.double_inner_key to refer to the key with the value double_inner_value.
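A minimal message structure consistent with that path (illustrative only; the surrounding keys are placeholders):

```json
{
  "outer_key": {
    "inner_key": {
      "double_inner_key": "double_inner_value",
      "other_key": "other_value"
    },
    "sibling_key": "sibling_value"
  }
}
```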
The Sensitive Data Scanner processor scans logs to detect and redact or hash sensitive information such as PII, PCI, and custom sensitive data. You can pick from our library of predefined rules, or input custom Regex rules to scan for sensitive data.
To set up the sensitive data scanner processor:
Define a filter query. Only logs that match the specified filter query are scanned and processed. All logs, regardless of whether they do or do not match the filter query, are sent to the next step in the pipeline.
Click Add Scanning Rule.
Name your scanning rule.
In the Select scanning rule type field, select whether you want to create a rule from the library or create a custom rule.
If you are creating a rule from the library, select the library pattern you want to use.
If you are creating a custom rule, enter the regex pattern to check against the data.
In the Scan entire or part of event section, select if you want to scan the Entire Event, Specific Attributes, or Exclude Attributes in the dropdown menu.
If you selected Specific Attributes, click Add Field and enter the specific attributes you want to scan. You can add up to three fields. Use path notation (outer_key.inner_key) to access nested keys. For specified attributes with nested data, all nested data is scanned.
If you selected Exclude Attributes, click Add Field and enter the specific attributes you want to exclude from scanning. You can add up to three fields. Use path notation (outer_key.inner_key) to access nested keys. For specified attributes with nested data, all nested data is excluded.
In the Define action on match section, select the action you want to take for the matched information. Redaction, partial redaction, and hashing are all irreversible actions.
If you are redacting the information, specify the text to replace the matched data.
If you are partially redacting the information, specify the number of characters you want to redact and whether to apply the partial redaction to the start or the end of your matched data.
Note: If you select hashing, the UTF-8 bytes of the match are hashed with the 64-bit fingerprint of FarmHash.
Optionally, add tags to all events that match the regex, so that you can filter, analyze, and alert on the events.
This processor adds a field with the name of the host that sent the log. For example, hostname: 613e197f3526. Note: If the hostname already exists, the Worker throws an error and does not overwrite the existing hostname.
To set up this processor:
Define a filter query. Only logs that match the specified filter query are processed. All logs, regardless of whether they do or do not match the filter query, are sent to the next step in the pipeline.
This processor converts the specified field into JSON objects.
To set up this processor:
Define a filter query. Only logs that match the specified filter query are processed. All logs, regardless of whether they do or do not match the filter query, are sent to the next step in the pipeline.
Enter the name of the field you want to parse JSON on. Note: The parsed JSON overwrites what was originally contained in the field.
Use this processor to enrich your logs with information from a reference table, which could be a local file or database.
To set up the enrichment table processor:
Define a filter query. Only logs that match the specified filter query are processed. All logs, regardless of whether they do or do not match the filter query, are sent to the next step in the pipeline.
Enter the source attribute of the log. The source attribute’s value is what you want to find in the reference table.
Enter the target attribute. The target attribute’s value stores, as a JSON object, the information found in the reference table.
Select the type of reference table you want to use, File or GeoIP.
For the File type:
Enter the file path.
Enter the column name. The column name in the enrichment table is used for matching the source attribute value. See the Enrichment file example. Note: If you are installing the Worker in Kubernetes, see Referencing files in Kubernetes for information on how to reference the file.
For the GeoIP type, enter the GeoIP path.
Enrichment file example
For this example, merchant_id is used as the source attribute and merchant_info as the target attribute.
This is the example reference table that the enrichment processor uses:
| merch_id | merchant_name | city | state |
|---|---|---|---|
| 803 | Andy’s Ottomans | Boise | Idaho |
| 536 | Cindy’s Couches | Boulder | Colorado |
| 235 | Debra’s Benches | Las Vegas | Nevada |
merch_id is set as the column name the processor uses to find the source attribute’s value. Note: The source attribute’s value does not have to match the column name.
If the enrichment processor receives a log with "merchant_id":"536":
The processor looks for the value 536 in the reference table’s merch_id column.
After it finds the value, it adds the entire row of information from the reference table to the merchant_info attribute as a JSON object:
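Based on the example reference table above, the merchant_info attribute would look roughly like this (a sketch of the resulting JSON object):

```json
"merchant_info": {
  "merchant_name": "Cindy's Couches",
  "city": "Boulder",
  "state": "Colorado"
}
```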
Select your platform in the Choose your installation platform dropdown menu.
Enter the Syslog address. This is a Syslog-compatible endpoint, exposed by the Worker, that your applications send logs to. The Observability Pipelines Worker listens on this address for incoming logs.
Provide the environment variables for each of your selected destinations. See prerequisites for more information.
Amazon S3
Enter the AWS access key ID and AWS secret access key for the S3 archive bucket you created earlier.
Google Cloud Storage
There are no environment variables to configure.
Azure Storage
Enter the Azure connection string you created earlier.
The connection string gives the Worker access to your Azure Storage bucket.
Click Access keys under Security and networking in the left navigation menu.
Copy the connection string for the storage account and paste it into the Azure connection string field on the Observability Pipelines Worker installation page.
There are no environment variables to configure for Datadog Log Management.
Enter your Splunk HEC token and the base URL of the Splunk instance. See prerequisites for more information.
The Worker passes the HEC token to the Splunk collection endpoint. After the Observability Pipelines Worker processes the logs, it sends the logs to the specified Splunk instance URL.
Note: The Splunk HEC destination forwards all logs to the /services/collector/event endpoint regardless of whether you configure your Splunk HEC destination to encode your output in JSON or raw.
Enter the Sumo Logic HTTP collector URL. See prerequisites for more information.
Enter the rsyslog or syslog-ng endpoint URL. For example, 127.0.0.1:9997. The Observability Pipelines Worker sends logs to this address and port.
Enter the Google Chronicle endpoint URL. For example, https://chronicle.googleapis.com.
Enter the Elasticsearch authentication username.
Enter the Elasticsearch authentication password.
Enter the Elasticsearch endpoint URL. For example, http://CLUSTER_ID.LOCAL_HOST_IP.ip.es.io:9200.
Enter the OpenSearch authentication username.
Enter the OpenSearch authentication password.
Enter the OpenSearch endpoint URL. For example, http://<hostname.IP>:9200.
Enter the Amazon OpenSearch authentication username.
Enter the Amazon OpenSearch authentication password.
Enter the Amazon OpenSearch endpoint URL. For example, http://<hostname.IP>:9200.
Follow the instructions for your environment to install the Worker.
Click Select API key to choose the Datadog API key you want to use.
Run the command provided in the UI to install the Worker. The command is automatically populated with the environment variables you entered earlier.
Note: By default, the docker run command exposes the same port the Worker is listening on. If you want to map the Worker’s container port to a different port on the Docker host, use the -p | --publish option in the command:
-p 8282:8088 datadog/observability-pipelines-worker run
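For illustration, a full invocation with the port mapping might look like the following sketch (the environment variables shown are placeholders for the values pre-filled in the UI command):

```shell
# Map host port 8282 to the Worker's listen port 8088
docker run -i \
  -e DD_API_KEY=<DATADOG_API_KEY> \
  -e DD_OP_PIPELINE_ID=<PIPELINE_ID> \
  -e DD_SITE=<DATADOG_SITE> \
  -p 8282:8088 \
  datadog/observability-pipelines-worker run
```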
Navigate back to the Observability Pipelines installation page and click Deploy.
Note: By default, the Kubernetes Service maps incoming port <SERVICE_PORT> to the port the Worker is listening on (<TARGET_PORT>). If you want to map the Worker’s pod port to a different incoming port of the Kubernetes Service, use the following service.ports[0].port and service.ports[0].targetPort values in the command:
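For example, appending a --set flag along these lines to the Helm command provided in the UI maps incoming port 8282 to target port 8088 (example values; adjust to your setup):

```shell
--set "service.ports[0].port=8282,service.ports[0].targetPort=8088"
```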
Click Select API key to choose the Datadog API key you want to use.
Run the one-step command provided in the UI to install the Worker.
Note: The environment variables used by the Worker in /etc/default/observability-pipelines-worker are not updated on subsequent runs of the install script. If changes are needed, update the file manually and restart the Worker.
If you prefer not to use the one-line installation script, follow these step-by-step instructions:
Set up the Datadog rpm repo on your system with the below command. Note: If you are running RHEL 8.1 or CentOS 8.1, use repo_gpgcheck=0 instead of repo_gpgcheck=1 in the configuration below.
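A sketch of the resulting yum repo file is shown below (the file name, baseurl, and GPG key locations are placeholders; use the values from the command provided in the UI):

```conf
# /etc/yum.repos.d/observability-pipelines-worker.repo (sketch)
[observability-pipelines-worker]
name=Observability Pipelines Worker
baseurl=<DATADOG_RPM_REPO_URL>
enabled=1
gpgcheck=1
# Use repo_gpgcheck=0 on RHEL 8.1 or CentOS 8.1, per the note above
repo_gpgcheck=1
gpgkey=<DATADOG_RPM_GPG_KEY_URL>
```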
Select one of the options in the dropdown to provide the expected log volume for the pipeline:
| Option | Description |
|---|---|
| Unsure | Use this option if you are not able to project the log volume or you want to test the Worker. This option provisions the EC2 Auto Scaling group with a maximum of 2 general purpose t4g.large instances. |
| 1-5 TB/day | This option provisions the EC2 Auto Scaling group with a maximum of 2 compute optimized c6g.large instances. |
| 5-10 TB/day | This option provisions the EC2 Auto Scaling group with a minimum of 2 and a maximum of 5 compute optimized c6g.large instances. |
| >10 TB/day | Datadog recommends this option for large-scale production deployments. It provisions the EC2 Auto Scaling group with a minimum of 2 and a maximum of 10 compute optimized c6g.xlarge instances. |
Note: All other parameters are set to reasonable defaults for a Worker deployment, but you can adjust them for your use case as needed in the AWS Console before creating the stack.
Select the AWS region you want to use to install the Worker.
Click Select API key to choose the Datadog API key you want to use.
Click Launch CloudFormation Template to navigate to the AWS Console to review the stack configuration and then launch it. Make sure the CloudFormation parameters are as expected.
Select the VPC and subnet you want to use to install the Worker.
Review and check the necessary permissions checkboxes for IAM. Click Submit to create the stack. CloudFormation handles the installation at this point; the Worker instances are launched, the necessary software is downloaded, and the Worker starts automatically.
Navigate back to the Observability Pipelines installation page and click Deploy.
rsyslog
To send rsyslog logs to the Observability Pipelines Worker, update your rsyslog config file:
<OPW_HOST> is the IP/URL of the host (or load balancer) associated with the Observability Pipelines Worker.
For CloudFormation installs, the LoadBalancerDNS CloudFormation output has the correct URL to use.
For Kubernetes installs, the internal DNS record of the Observability Pipelines Worker service can be used, for example opw-observability-pipelines-worker.default.svc.cluster.local.
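For reference, a minimal rsyslog forwarding rule might look like the following sketch (assumes TCP and the example listen port 8088 from the prerequisites; use a single @ to forward over UDP instead):

```conf
# /etc/rsyslog.d/observability-pipelines.conf (sketch)
# Forward all logs to the Observability Pipelines Worker over TCP
*.* @@<OPW_HOST>:8088
```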
syslog-ng
To send syslog-ng logs to the Observability Pipelines Worker, update your syslog-ng config file:
<OPW_HOST> is the IP/URL of the host (or load balancer) associated with the Observability Pipelines Worker.
For CloudFormation installs, the LoadBalancerDNS CloudFormation output has the correct URL to use.
For Kubernetes installs, the internal DNS record of the Observability Pipelines Worker service can be used, for example opw-observability-pipelines-worker.default.svc.cluster.local.
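A minimal syslog-ng destination might look like the following sketch (s_src is a placeholder for your existing log source; assumes TCP and the example listen port 8088):

```conf
# Sketch: forward logs to the Observability Pipelines Worker
destination d_observability_pipelines {
  network("<OPW_HOST>" port(8088) transport("tcp"));
};

log {
  source(s_src);
  destination(d_observability_pipelines);
};
```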