To use Observability Pipelines’ HTTP/S Client source, you need the following information available:
The full path of the HTTP Server endpoint that the Observability Pipelines Worker collects log events from. For example, https://127.0.0.8/logs.
The HTTP authentication token or password.
The HTTP/S Client source pulls data from your upstream HTTP server. Your HTTP server must support GET requests for the HTTP Client endpoint URL that you set as an environment variable when you install the Worker.
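For reference, the following is a minimal sketch of an upstream server that answers GET requests with newline-delimited JSON log events. The /logs path, the port, and the payload shape are illustrative assumptions only; match them to your own endpoint URL and the decoder you select later.

```python
# Hypothetical upstream log endpoint the Worker could poll with GET requests.
# Path, port, and payload format are assumptions for illustration only.
from http.server import BaseHTTPRequestHandler, HTTPServer
import json
import time

class LogsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/logs":
            self.send_response(404)
            self.end_headers()
            return
        # Return a small batch of newline-delimited JSON log events.
        body = "\n".join(
            json.dumps({"timestamp": time.time(), "message": f"event {i}"})
            for i in range(3)
        ).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/x-ndjson")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), LogsHandler).serve_forever()
```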
Select the Sensitive Data Redactions template to create a new pipeline.
Select HTTP Client as the source.
Set up the source
To configure your HTTP/S Client source:
Select your authorization strategy.
Select the decoder you want to use on the HTTP messages. Logs pulled from the HTTP source must be in this format.
Optionally, toggle the switch to enable TLS. If you enable TLS, the following certificate and key files are required. Note: All file paths are made relative to the configuration data directory, which is /var/lib/observability-pipelines-worker/config/ by default. See Advanced Configurations for more information. The file must be owned by the observability-pipelines-worker group and observability-pipelines-worker user, or at least readable by the group or user.
Server Certificate Path: The path to the certificate file that has been signed by your Certificate Authority (CA) Root File in DER or PEM (X.509) format.
CA Certificate Path: The path to the certificate file that is your Certificate Authority (CA) Root File in DER or PEM (X.509) format.
Private Key Path: The path to the .key private key file that belongs to your Server Certificate Path in DER or PEM (PKCS#8) format.
Enter the interval between scrapes.
Your HTTP Server must be able to handle GET requests at this interval.
Since requests run concurrently, if a scrape takes longer than the interval given, a new scrape is started, which can consume extra resources. Set the timeout to a value lower than the scrape interval to prevent this from happening.
In the Encoding dropdown menu, select whether you want to encode your pipeline's output in JSON, Logfmt, or Raw text. If no encoding is selected, the output is encoded as JSON by default.
Enter a source name to override the default name value configured for your Sumo Logic collector's source.
Enter a host name to override the default host value configured for your Sumo Logic collector's source.
Enter a category name to override the default category value configured for your Sumo Logic collector's source.
Click Add Header to add custom header fields and values.
The rsyslog and syslog-ng destinations support the RFC5424 format.
The rsyslog and syslog-ng destinations match these log fields to the following Syslog fields:
| Log Event | Syslog Field | Default |
|---|---|---|
| log["message"] | MESSAGE | NIL |
| log["procid"] | PROCID | The running Worker's process ID. |
| log["appname"] | APP-NAME | observability_pipelines |
| log["facility"] | FACILITY | 8 (log_user) |
| log["msgid"] | MSGID | NIL |
| log["severity"] | SEVERITY | info |
| log["host"] | HOSTNAME | NIL |
| log["timestamp"] | TIMESTAMP | Current UTC time. |
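As a rough illustration of how these mapped fields land in an RFC 5424 message (the process ID, timestamp, and message below are placeholders, and NIL fields are written as -):

```python
# Illustrative only: assembling an RFC 5424 line from the defaults above.
# PRI = facility * 8 + severity; severity "info" is numeric code 6.
facility, severity = 8, 6
pri = facility * 8 + severity  # 70
timestamp = "2024-01-01T00:00:00.000Z"
hostname, appname, procid, msgid = "-", "observability_pipelines", "12345", "-"
message = "disk usage at 81%"

syslog_line = f"<{pri}>1 {timestamp} {hostname} {appname} {procid} {msgid} - {message}"
print(syslog_line)
# <70>1 2024-01-01T00:00:00.000Z - observability_pipelines 12345 - - disk usage at 81%
```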
The following destination settings are optional:
Toggle the switch to enable TLS. If you enable TLS, the following certificate and key files are required:
Server Certificate Path: The path to the certificate file that has been signed by your Certificate Authority (CA) Root File in DER or PEM (X.509).
CA Certificate Path: The path to the certificate file that is your Certificate Authority (CA) Root File in DER or PEM (X.509).
Private Key Path: The path to the .key private key file that belongs to your Server Certificate Path in DER or PEM (PKCS#8) format.
Enter the number of seconds to wait before sending TCP keepalive probes on an idle connection.
Optionally, toggle the switch to enable Buffering Options. Note: Buffering options is in Preview. Contact your account manager to request access.
If left disabled, the maximum size for buffering is 500 events.
If enabled:
Select the buffer type you want to set (Memory or Disk).
Enter the buffer size and select the unit.
To authenticate the Observability Pipelines Worker for Google Chronicle, contact your Google Security Operations representative to get a Google Developer Service Account Credential. This credential is a JSON file and must be placed under DD_OP_DATA_DIR/config. See Getting API authentication credentials for more information.
Note: Logs sent to the Google Chronicle destination must have ingestion labels. For example, logs from an A10 load balancer must have the ingestion label A10_LOAD_BALANCER. See Google Cloud's Support log types with a default parser for a list of available log types and their corresponding ingestion labels.
The following fields are optional:
Enter the name for the Elasticsearch index. See template syntax if you want to route logs to different indexes based on specific fields in your logs.
Enter the Elasticsearch version.
Optionally, toggle the switch to enable Buffering Options. Note: Buffering options is in Preview. Contact your account manager to request access.
If left disabled, the maximum size for buffering is 500 events.
If enabled:
Select the buffer type you want to set (Memory or Disk).
Enter the buffer size and select the unit.
Optionally, enter the name of the OpenSearch index. See template syntax if you want to route logs to different indexes based on specific fields in your logs.
Optionally, toggle the switch to enable Buffering Options. Note: Buffering options is in Preview. Contact your account manager to request access.
If left disabled, the maximum size for buffering is 500 events.
If enabled:
Select the buffer type you want to set (Memory or Disk).
Enter the buffer size and select the unit.
Optionally, enter the name of the Amazon OpenSearch index. See template syntax if you want to route logs to different indexes based on specific fields in your logs.
Select an authentication strategy, Basic or AWS. For AWS, enter the AWS region.
Optionally, toggle the switch to enable Buffering Options. Note: Buffering options is in Preview. Contact your account manager to request access.
If left disabled, the maximum size for buffering is 500 events.
If enabled:
Select the buffer type you want to set (Memory or Disk).
Enter the buffer size and select the unit.
Set up processors
There are pre-selected processors added to your processor group out of the box. You can add additional processors or delete any existing ones based on your processing needs.
Processor groups are executed from top to bottom. The order of the processors is important because logs are checked by each processor, but only logs that match the processor’s filters are processed. To modify the order of the processors, use the drag handle on the top left corner of the processor you want to move.
Filter query syntax
Each processor has a corresponding filter query in its fields. Processors only process logs that match their filter query. For all processors except the filter processor, logs that do not match the query are sent to the next step of the pipeline. For the filter processor, logs that do not match the query are dropped.
For any attribute, tag, or key:value pair that is not a reserved attribute, your query must start with @. Conversely, to filter reserved attributes, you do not need to append @ in front of your filter query.
For example, to filter out and drop status:info logs, your filter can be set as NOT (status:info). To filter out and drop system-status:info, your filter must be set as NOT (@system-status:info).
Filter query examples:
NOT (status:debug): This filters for only logs that do not have the status DEBUG.
status:ok service:flask-web-app: This filters for all logs with the status OK from your flask-web-app service.
This query can also be written as: status:ok AND service:flask-web-app.
host:COMP-A9JNGYK OR host:COMP-J58KAS: This filter query only matches logs from the labeled hosts.
@user.status:inactive: This filters for logs with the status inactive nested under the user attribute.
Queries run in the Observability Pipelines Worker are case sensitive. Learn more about writing filter queries in Datadog’s Log Search Syntax.
This processor samples your logging traffic for a representative subset at the rate that you define, dropping the remaining logs. As an example, you can use this processor to sample 20% of logs from a noisy non-critical service.
The sampling only applies to logs that match your filter query and does not impact other logs. If a log is dropped at this processor, none of the processors below receives that log.
To set up the sample processor:
Define a filter query. Only logs that match the specified filter query are sampled at the specified retention rate below. The sampled logs and the logs that do not match the filter query are sent to the next step in the pipeline.
Enter your desired sampling rate in the Retain field. For example, entering 2 means 2% of logs are retained out of all the logs that match the filter query.
Optionally, enter a Group By field to create separate sampling groups for each unique value for that field. For example, status:error and status:info are two unique field values. Each bucket of events with the same field is sampled independently. Click Add Field if you want to add more fields to partition by. See the group-by example.
Group-by example
If you have the following setup for the sample processor:
Filter query: env:staging
Retain: 40% of matching logs
Group by: status and service
Then, 40% of logs from env:staging are retained for each unique combination of status and service. For example:
40% of logs with status:info and service:networks are retained.
40% of logs with status:info and service:core-web are retained.
40% of logs with status:error and service:networks are retained.
40% of logs with status:error and service:core-web are retained.
This processor parses logs using the grok parsing rules that are available for a set of sources. The rules are automatically applied to logs based on the log source. Therefore, logs must have a source field with the source name. If this field is not added when the log is sent to the Observability Pipelines Worker, you can use the Add field processor to add it.
If the source field of a log matches one of the grok parsing rule sets, the log’s message field is checked against those rules. If a rule matches, the resulting parsed data is added in the message field as a JSON object, overwriting the original message.
If there isn’t a source field on the log, or no rule matches the log message, then no changes are made to the log and it is sent to the next step in the pipeline.
Datadog's Grok patterns differ from standard Grok patterns in that Datadog's implementation provides:
Matchers that include options for how you define parsing rules
Filters for post-processing of extracted data
A set of built-in patterns tailored to common log formats
See Parsing for more information on Datadog’s Grok patterns.
To set up the grok parser, define a filter query. Only logs that match the specified filter query are processed. All logs, regardless of whether they match the filter query, are sent to the next step in the pipeline.
To test log samples for out-of-the-box rules:
Click the Preview Library Rules button.
Search or select a source in the dropdown menu.
Enter a log sample to test the parsing rules for that source.
To add a custom parsing rule:
Click Add Custom Rule.
If you want to clone a library rule, select Clone library rule and then the library source from the dropdown menu.
If you want to create a custom rule, select Custom and then enter the source. The parsing rules are applied to logs with that source.
Enter log samples to test the parsing rules.
Enter the rules for parsing the logs. See Parsing for more information on writing parsing rules with Datadog Grok patterns. Note: The url, useragent, and csv filters are not available.
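For example, given a hypothetical log sample such as john connected on 11/08/2017, a custom rule along the following lines extracts a user and a date; the rule name and attribute names are illustrative:

```
MyParsingRule %{word:user} connected on %{date("MM/dd/yyyy"):connect_date}
```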
If the Drop events option is not selected, both logs that match the quota filter and logs that do not match it are sent to the next step in the pipeline, even after the daily quota has been reached.
Optional: Click Add Field if you want to set the quota on a specific service or region field.
a. Enter the name of the field you want to partition by. See the Partition example for more information.
i. Select Ignore when missing if you want the quota to apply only to events that match the partition. See the example of the "Ignore when missing" option for more information.
ii. Optional: Click Overrides if you want to set different quotas for the partitioned fields.
Click Download as CSV for an example of the CSV structure.
Drag and drop your overrides CSV to upload it, or click Browse to select the file to upload. See the Overrides example for more information.
b. Click Add Field if you want to add another partition.
Examples
Partition example
Use Partition by if you want to set a quota on a specific service or region. For example, if you want to set a quota of 10 events per day and group events by the service field, enter service into the Partition by field.
Example of the "Ignore when missing" option
Select Ignore when missing if you want the quota to apply only to events that match the partition. For example, if the Worker receives the following set of events:
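(The set below is an illustrative reconstruction: three events carry the service partition field and two do not.)

```
{"service": "a", "message": "login succeeded"}
{"service": "b", "message": "login succeeded"}
{"service": "b", "message": "login failed"}
{"message": "cron job started"}
{"message": "cron job finished"}
```

With Ignore when missing selected, only the three events that have the service field count toward the quota.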
If Ignore when missing is not selected, the quota is applied to all five events.
Overrides example
If you are partitioning by service and have two services, a and b, you can use overrides to apply different quotas to each. For example, if you want service:a to have a quota limit of 5,000 bytes and service:b to have a limit of 50 events, your override rules look like this:
| Service | Type | Limit |
|---|---|---|
| a | Bytes | 5,000 |
| b | Events | 50 |
The reduce processor groups multiple log events into a single log, based on the fields specified and the merge strategies selected. Logs are grouped at 10-second intervals. After the interval has elapsed for the group, the reduced log for that group is sent to the next step in the pipeline.
To set up the reduce processor:
Define a filter query. Only logs that match the specified filter query are processed. Reduced logs and logs that do not match the filter query are sent to the next step in the pipeline.
In the Group By section, enter the field you want to group the logs by.
Click Add Group by Field to add additional fields.
In the Merge Strategy section:
In On Field, enter the name of the field you want to merge the logs on.
Select the merge strategy in the Apply dropdown menu. This is the strategy used to combine events. See the following Merge strategies section for descriptions of the available strategies.
Click Add Merge Strategy to add additional strategies.
Merge strategies
These are the available merge strategies for combining log events.
| Name | Description |
|---|---|
| Array | Appends each value to an array. |
| Concat | Concatenates each string value, delimited with a space. |
| Concat newline | Concatenates each string value, delimited with a newline. |
| Concat raw | Concatenates each string value, without a delimiter. |
| Discard | Discards all values except the first value that was received. |
| Flat unique | Creates a flattened array of all unique values that were received. |
| Longest array | Keeps the longest array that was received. |
| Max | Keeps the maximum numeric value that was received. |
| Min | Keeps the minimum numeric value that was received. |
| Retain | Discards all values except the last value that was received. Works as a way to coalesce by not retaining `null`. |
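As a sketch of this behavior, suppose logs are grouped by host and the Array strategy is applied to the message field (the field names and values are illustrative):

```
Input events received within the same 10-second window:
  {"host": "web-1", "message": "disk usage at 81%"}
  {"host": "web-1", "message": "disk usage at 83%"}

Reduced event sent to the next step:
  {"host": "web-1", "message": ["disk usage at 81%", "disk usage at 83%"]}
```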
In the Select scanning rule type field, select whether you want to create a rule from the library or create a custom rule.
If you are creating a rule from the library, select the library pattern you want to use.
If you are creating a custom rule, enter the regex pattern to check against the data.
In the Scan entire or part of event section, select whether you want to scan the Entire Event, Specific Attributes, or Exclude Attributes in the dropdown menu.
If you selected Specific Attributes, click Add Field and enter the specific attributes you want to scan. You can add up to three fields. Use path notation (outer_key.inner_key) to access nested keys. For specified attributes with nested data, all of the nested data is scanned.
If you selected Exclude Attributes, click Add Field and enter the specific attributes you want to exclude from scanning. You can add up to three fields. Use path notation (outer_key.inner_key) to access nested keys. For specified attributes with nested data, all of the nested data is excluded.
In the Define action on match section, select the action you want to take on the matched information. Note: Redaction, partial redaction, and hashing are all irreversible actions.
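For example, a hypothetical custom pattern for 16-digit card-like numbers might look like the following; treat it as a starting point and tune it against your own data:

```
\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b
```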
This processor adds a field with the name of the host that sent the log. For example, hostname: 613e197f3526. Note: If the hostname already exists, the Worker throws an error and does not overwrite the existing hostname.
To set up this processor:
Define a filter query. Only logs that match the specified filter query are processed. All logs, regardless of whether they do or do not match the filter query, are sent to the next step in the pipeline.
This processor parses the specified JSON field into objects. For example, if you have a message field that contains stringified JSON:
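(The message value below is an illustrative example, not a required shape.)

```
{"message": "{\"status\":\"ok\",\"user\":{\"id\":123,\"name\":\"buddy\"}}"}
```

After the field is parsed, the log carries the equivalent structured object instead of the string:

```
{"message": {"status": "ok", "user": {"id": 123, "name": "buddy"}}}
```

To set up this processor: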
Define a filter query. Only logs that match the specified filter query are processed. All logs, regardless of whether they do or do not match the filter query, are sent to the next step in the pipeline.
Enter the name of the field you want to parse JSON on. Note: The parsed JSON overwrites what was originally contained in the field.
Use this processor to enrich your logs with information from a reference table, which could be a local file or database.
To set up the enrichment table processor:
Define a filter query. Only logs that match the specified filter query are processed. All logs, regardless of whether they do or do not match the filter query, are sent to the next step in the pipeline.
Enter the source attribute of the log. The source attribute’s value is what you want to find in the reference table.
Enter the target attribute. The target attribute’s value stores, as a JSON object, the information found in the reference table.
Select the type of reference table you want to use, File or GeoIP.
For the File type:
Enter the file path. Note: All file paths are made relative to the configuration data directory, which is /var/lib/observability-pipelines-worker/config/ by default. See Advanced Configurations for more information. The file must be owned by the observability-pipelines-worker group and observability-pipelines-worker user, or at least readable by the group or user.
Enter the column name. The column name in the enrichment table is used for matching the source attribute value. See the Enrichment file example.
For the GeoIP type, enter the GeoIP path.
Enrichment file example
For this example, merchant_id is used as the source attribute and merchant_info as the target attribute.
This is the example reference table that the enrichment processor uses:
| merch_id | merchant_name | city | state |
|---|---|---|---|
| 803 | Andy's Ottomans | Boise | Idaho |
| 536 | Cindy's Couches | Boulder | Colorado |
| 235 | Debra's Benches | Las Vegas | Nevada |
merch_id is set as the column name the processor uses to find the source attribute’s value. Note: The source attribute’s value does not have to match the column name.
If the enrichment processor receives a log with "merchant_id":"536":
The processor looks for the value 536 in the reference table’s merch_id column.
After it finds the value, it adds the entire row of information from the reference table to the merchant_info attribute as a JSON object:
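A sketch of the resulting attribute, assuming every column of the matched row is copied into the JSON object:

```
"merchant_info": {
  "merch_id": "536",
  "merchant_name": "Cindy's Couches",
  "city": "Boulder",
  "state": "Colorado"
}
```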
Click Select API key to choose the Datadog API key you want to use.
Run the one-step command provided in the UI to install the Worker.
Note: The environment variables used by the Worker in /etc/default/observability-pipelines-worker are not updated on subsequent runs of the install script. If changes are needed, update the file manually and restart the Worker.
If you prefer not to use the one-line installation script, follow these step-by-step instructions:
For RHEL and CentOS, the Observability Pipelines Worker supports versions 8.0 or later.
Click Select API key to choose the Datadog API key you want to use.
Run the one-step command provided in the UI to install the Worker.
Note: The environment variables used by the Worker in /etc/default/observability-pipelines-worker are not updated on subsequent runs of the install script. If changes are needed, update the file manually and restart the Worker.
If you prefer not to use the one-line installation script, follow these step-by-step instructions:
Set up the Datadog rpm repo on your system with the below command. Note: If you are running RHEL 8.1 or CentOS 8.1, use repo_gpgcheck=0 instead of repo_gpgcheck=1 in the configuration below.