Amazon OpenSearch Destination

Available for:

Logs

Use Observability Pipelines’ Amazon OpenSearch destination to send logs to Amazon OpenSearch.

Setup

Set up the Amazon OpenSearch destination and its environment variables when you set up a pipeline. The information below is configured in the pipelines UI.

Set up the destination

Enter only the identifiers for the Amazon OpenSearch endpoint URL and, if applicable, the username and password. Do not enter the actual values.
  1. Enter the identifier for your Amazon OpenSearch endpoint URL. If you leave it blank, the default is used.
  2. In the Mode dropdown menu, select Bulk or Data streams.
    • Bulk mode
      • Uses Amazon OpenSearch’s Bulk API to send batched events directly into a standard index.
      • Choose this mode when you want direct control over index naming and lifecycle management. Data is appended to the index you specify, and you are responsible for handling rollovers, deletions, and mappings.
      • To configure Bulk mode:
        • In the Index field, optionally enter the name of the Amazon OpenSearch index. You can use template syntax to dynamically route logs to different indexes based on specific fields in your logs, for example logs-{{service}} (see the Bulk API sketch after this list).
    • Data streams mode
      • Uses Amazon OpenSearch Data Streams for log storage. Data streams automatically manage backing indexes and rollovers, making them ideal for time-series log data.
      • Choose this mode when you want Amazon OpenSearch to manage the index lifecycle for you. Data streams ensure smooth rollovers, Index Lifecycle Management (ILM) compatibility, and optimized handling of time-based data.
      • To configure Data streams mode, optionally define the data stream name (default is logs-generic-default) by entering the following information:
        • In the Type field, enter the category of data being ingested, for example logs.
        • In the Dataset field, specify the format or data source that describes the structure, for example apache.
        • In the Namespace field, enter the grouping for organizing your data streams, for example production.
        • The UI shows a preview of the data stream name you configured. With the example inputs above, the data stream name that the Worker writes to is logs-apache-production (see the naming sketch after this list).
  3. Select an authentication strategy, Basic or AWS. If you selected:
    • Basic:
      • Enter the identifier for your Amazon OpenSearch username. If you leave it blank, the default is used.
      • Enter the identifier for your Amazon OpenSearch password. If you leave it blank, the default is used.
    • AWS:
      1. Enter the AWS region.
      2. (Optional) Select an AWS authentication option. Use the Assume role option only if the user or role you created earlier must assume a different role to access the specific AWS resource, and that permission is explicitly defined.
        If you select Assume role:
        1. Enter the ARN of the IAM role you want to assume.
        2. Optionally, enter the assumed role session name and external ID (see the AWS STS sketch after this list).
  4. Optionally, toggle the switch to enable Buffering Options. A configurable buffer on your destination ensures that intermittent latency or an outage at the destination doesn’t create immediate backpressure, and allows events to continue to be ingested from your source. Disk buffers can also increase pipeline durability by writing logs to disk, so buffered logs persist through a Worker restart. See Configurable buffers for destinations for more information.
    • If left unconfigured, your destination uses a memory buffer with a capacity of 500 events.
    • To configure a buffer on your destination:
      1. Select the buffer type you want to set (Memory or Disk).
      2. Enter the buffer size and select the unit.
        • Maximum memory buffer size is 128 GB.
        • Maximum disk buffer size is 500 GB.
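
In Bulk mode, the Worker sends batched events through the documented OpenSearch Bulk API (_bulk). The sketch below illustrates the shape of such a request, with the index name resolved from a log field in the spirit of the logs-{{service}} template above. The endpoint URL, credentials, sample logs, and the resolve_index helper are hypothetical; this shows the payload format, not the Worker’s implementation.

```python
# Illustrative sketch only: the shape of an OpenSearch Bulk API (_bulk) request.
# The endpoint URL, credentials, and helper names are hypothetical placeholders.
import json
import re

import requests

OPENSEARCH_URL = "https://my-domain.us-east-1.es.amazonaws.com"  # hypothetical

def resolve_index(template: str, log: dict) -> str:
    """Resolve a {{field}} template such as 'logs-{{service}}' against a log event."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(log.get(m.group(1), "unknown")), template)

logs = [
    {"service": "web", "message": "GET /health 200"},
    {"service": "api", "message": "POST /login 401"},
]

# The Bulk API expects newline-delimited JSON: an action line, then the document.
lines = []
for log in logs:
    index = resolve_index("logs-{{service}}", log)  # e.g. "logs-web"
    lines.append(json.dumps({"index": {"_index": index}}))
    lines.append(json.dumps(log))
body = "\n".join(lines) + "\n"  # the Bulk API requires a trailing newline

resp = requests.post(
    f"{OPENSEARCH_URL}/_bulk",
    data=body,
    headers={"Content-Type": "application/x-ndjson"},
    auth=("username", "password"),  # Basic authentication; placeholder values
)
resp.raise_for_status()
```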
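
The data stream name preview is composed from the Type, Dataset, and Namespace fields as type-dataset-namespace. A minimal sketch of that composition, using the default and the example values from the steps above (the function name is hypothetical):

```python
def data_stream_name(type_: str = "logs", dataset: str = "generic", namespace: str = "default") -> str:
    """Compose a data stream name as <type>-<dataset>-<namespace>."""
    return f"{type_}-{dataset}-{namespace}"

print(data_stream_name())                                # logs-generic-default (the default)
print(data_stream_name("logs", "apache", "production"))  # logs-apache-production
```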
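
In AWS terms, the Assume role fields correspond to an STS AssumeRole call. The boto3 sketch below only illustrates what those fields map to; the Worker performs the equivalent call for you, and the ARN, session name, and external ID shown are hypothetical placeholders.

```python
# Illustration of what the Assume role fields correspond to in AWS terms.
# All values below are hypothetical placeholders.
import boto3

sts = boto3.client("sts", region_name="us-east-1")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/opensearch-writer",  # ARN of the role to assume
    RoleSessionName="op-worker-session",                         # optional session name in the UI
    ExternalId="my-external-id",                                 # optional external ID in the UI
)["Credentials"]
# creds contains AccessKeyId, SecretAccessKey, and SessionToken used to sign requests.
```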

Set secrets

These are the defaults used for secret identifiers and environment variables.

Note: If you enter identifiers for your secrets and then choose to use environment variables, the environment variable name is the identifier you entered, prefixed with DD_OP_. For example, if you entered PASSWORD_1 as a password identifier, the environment variable for that password is DD_OP_PASSWORD_1.
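
As a concrete illustration of this naming rule (the identifier value is hypothetical):

```python
import os

identifier = "PASSWORD_1"           # identifier entered in the UI (hypothetical)
env_var = f"DD_OP_{identifier}"     # -> "DD_OP_PASSWORD_1"
password = os.environ.get(env_var)  # the Worker reads the secret from this variable
```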

  • Amazon OpenSearch endpoint URL:
    • Default identifier: DESTINATION_AMAZON_OPENSEARCH_ENDPOINT_URL
    • Default environment variable: DD_OP_DESTINATION_AMAZON_OPENSEARCH_ENDPOINT_URL
  • Amazon OpenSearch authentication username:
    • Default identifier: DESTINATION_AMAZON_OPENSEARCH_USERNAME
    • Default environment variable: DD_OP_DESTINATION_AMAZON_OPENSEARCH_USERNAME
  • Amazon OpenSearch authentication password:
    • Default identifier: DESTINATION_AMAZON_OPENSEARCH_PASSWORD
    • Default environment variable: DD_OP_DESTINATION_AMAZON_OPENSEARCH_PASSWORD

How the destination works

Event batching

A batch of events is flushed when one of these parameters is met. See event batching for more information.

  • Max Events: None
  • Max Bytes: 10,000,000
  • Timeout (seconds): 1
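
For illustration, a minimal sketch of flush-on-threshold batching with these defaults; this is a simplified assumption-based sketch, not the Worker’s implementation:

```python
import json
import time

MAX_BYTES = 10_000_000  # flush when the batch reaches 10,000,000 bytes
TIMEOUT_S = 1.0         # or when 1 second has elapsed; Max Events is unbounded (None)

class Batcher:
    """Minimal sketch: accumulate encoded events, flush on byte or time thresholds."""

    def __init__(self, flush):
        self.flush = flush
        self.events, self.size, self.started = [], 0, time.monotonic()

    def add(self, event: dict) -> None:
        encoded = json.dumps(event).encode()
        self.events.append(encoded)
        self.size += len(encoded)
        if self.size >= MAX_BYTES or time.monotonic() - self.started >= TIMEOUT_S:
            self.flush(self.events)
            self.events, self.size, self.started = [], 0, time.monotonic()

batcher = Batcher(flush=lambda events: print(f"flushed {len(events)} events"))
batcher.add({"message": "example log line"})
```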