The Azure Storage destination is available for the Archive Logs template. Use this destination to send your logs in Datadog-rehydratable format to an Azure Storage bucket for archiving. You need to set up Datadog Log Archives if you haven't already, and then set up the destination in the pipeline UI.

Configure Log Archives

If you already have a Datadog Log Archive configured for Observability Pipelines, skip ahead to Set up the destination for your pipeline.

To configure Datadog Log Archives, you need to have Datadog's Azure integration installed.

Create a storage account

Create an Azure storage account if you don’t already have one.

  1. Navigate to Storage accounts.
  2. Click Create.
  3. Select the subscription name and resource name you want to use.
  4. Enter a name for your storage account.
  5. Select a region in the dropdown menu.
  6. Select Standard performance or Premium account type.
  7. Click Next.
  8. In the Blob storage section, select Hot or Cool storage.
  9. Click Review + create.
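The portal steps above can also be sketched with the Azure CLI. The account name, resource group, and region below are placeholders, not values from this guide:

```shell
# Sketch: create a general-purpose v2 storage account with the Azure CLI.
# "examplestorageacct" and "example-rg" are placeholder names.
az storage account create \
  --name examplestorageacct \
  --resource-group example-rg \
  --location eastus \
  --sku Standard_LRS \
  --kind StorageV2
```

This requires an authenticated Azure CLI session (`az login`); choose the SKU and region that match your needs.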

Create a storage bucket

  1. In your storage account, click Containers under Data storage in the left navigation menu.
  2. Click + Container at the top to create a new container.
  3. Enter a name for the new container. This name is used later when you set up the Observability Pipelines Azure Storage destination.

Note: Do not set immutability policies because the most recent data might need to be rewritten in rare cases (typically when there is a timeout).
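If you prefer the CLI, a container can be created in the same way. The container and account names below are placeholders:

```shell
# Sketch: create the container with the Azure CLI instead of the portal.
# "op-archive" and "examplestorageacct" are placeholder names.
az storage container create \
  --name op-archive \
  --account-name examplestorageacct
```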

Connect the Azure container to Datadog Log Archives

  1. Navigate to Datadog Log Forwarding.
  2. Click New archive.
  3. Enter a descriptive archive name.
  4. Add a query that filters out all logs going through log pipelines so that none of those logs go into this archive. For example, add the query observability_pipelines_read_only_archive, assuming no logs going through the pipeline have that tag added.
  5. Select Azure Storage.
  6. Select the Azure tenant and client your storage account is in.
  7. Enter the name of the storage account.
  8. Enter the name of the container you created earlier.
  9. Optionally, enter a path.
  10. Optionally, set permissions, add tags, and define the maximum scan size for rehydration. See Advanced settings for more information.
  11. Click Save.

See the Log Archives documentation for additional information.

Set up the destination for your pipeline

Set up the Azure Storage destination and its environment variables when you set up an Archive Logs pipeline. The information below is configured in the pipeline UI.

  1. Enter the name of the Azure container you created earlier.
  2. Optionally, enter a prefix. Prefixes are useful for partitioning objects. For example, you can use a prefix as an object key to store objects under a particular directory. If using a prefix for this purpose, it must end in / to act as a directory path; a trailing / is not automatically added.
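To illustrate why the trailing / matters, here is a minimal sketch (not the destination's actual key-building code) showing that the prefix is prepended to the object name as-is:

```python
# Illustrative only: the prefix is joined to the object name verbatim,
# with no separator inserted, so a directory-style prefix must end in "/".
def object_key(prefix: str, name: str) -> str:
    """Build an object key from a user-supplied prefix and object name."""
    return f"{prefix}{name}"

# With a trailing slash, the prefix acts as a directory path:
print(object_key("logs/archive/", "2024-05-01.json.gz"))
# -> logs/archive/2024-05-01.json.gz

# Without it, the prefix is fused onto the object name:
print(object_key("logs/archive", "2024-05-01.json.gz"))
# -> logs/archive2024-05-01.json.gz
```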

Set the environment variables

Enter the Azure connection string you created earlier. The connection string gives the Worker access to your Azure Storage bucket.

To get the connection string:

  1. Navigate to Azure Storage accounts.
  2. Click Access keys under Security and networking in the left navigation menu.
  3. Copy the connection string for the storage account and paste it into the Azure connection string field on the Observability Pipelines Worker installation page.
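The connection string can also be retrieved with the Azure CLI. The account and resource group names below are placeholders:

```shell
# Sketch: print the storage account's connection string with the Azure CLI.
# "examplestorageacct" and "example-rg" are placeholder names.
az storage account show-connection-string \
  --name examplestorageacct \
  --resource-group example-rg \
  --output tsv
```

Treat the connection string as a secret: it grants full access to the storage account.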

How the destination works

Event batching

A batch of events is flushed when one of these parameters is met. For more information, see event batching.

Max Events | Max Bytes   | Timeout (seconds)
None       | 100,000,000 | 900
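The flush rule above can be sketched as follows. This is a minimal illustration of "flush when any limit is reached", not Datadog's implementation:

```python
import time

# Minimal sketch of the batching rule described above: a batch is flushed
# when either the byte limit or the timeout is reached (max events is
# unlimited for this destination). Not the actual Worker implementation.
class Batch:
    MAX_BYTES = 100_000_000   # from the table above
    TIMEOUT_SECS = 900        # from the table above

    def __init__(self) -> None:
        self.bytes = 0
        self.started = time.monotonic()

    def add(self, event: bytes) -> None:
        """Account for an event's size in the current batch."""
        self.bytes += len(event)

    def should_flush(self) -> bool:
        """True once either configured limit has been reached."""
        return (self.bytes >= self.MAX_BYTES
                or time.monotonic() - self.started >= self.TIMEOUT_SECS)

batch = Batch()
batch.add(b"x" * 1024)
print(batch.should_flush())  # False: neither limit has been hit yet
```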