Overview
The Observability Pipelines Worker is software that runs in your environment to centrally aggregate, process, and route your logs. You install and configure the Worker as part of the pipeline setup process. These are the general steps for setting up a pipeline in the UI:
- Select a log source.
- Select destinations to which you want to send your logs.
- Select and configure processors to transform your logs.
- Install the Worker.
- Deploy the pipeline.
Note: If you are using a proxy, see the proxy option in Bootstrap options.
Install the Worker
After you set up your source, destinations, and processors on the Build page of the pipeline UI, follow the steps on the Install page.
- Select the platform on which you want to install the Worker.
- Enter the environment variables for your sources and destinations, if applicable.
- Follow the instructions on installing the Worker for your platform. The command provided in the UI to install the Worker has the relevant environment variables populated.
- Click Select API key to choose the Datadog API key you want to use.
- Run the command provided in the UI to install the Worker. The command is automatically populated with the environment variables you entered earlier.
docker run -i -e DD_API_KEY=<DATADOG_API_KEY> \
-e DD_OP_PIPELINE_ID=<PIPELINE_ID> \
-e DD_SITE=<DATADOG_SITE> \
-e <SOURCE_ENV_VARIABLE> \
-e <DESTINATION_ENV_VARIABLE> \
-p 8088:8088 \
datadog/observability-pipelines-worker run
Note: By default, the docker run command exposes the same port the Worker is listening on. If you want to map the Worker's container port to a different port on the Docker host, use the -p | --publish option in the command:
-p 8282:8088 datadog/observability-pipelines-worker run
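Optionally, you can confirm the container came up before deploying; these are standard Docker commands and not part of the UI instructions:
# List running containers started from the Worker image
docker ps --filter ancestor=datadog/observability-pipelines-worker
# Tail the Worker logs (replace <CONTAINER_ID> with the ID shown by docker ps)
docker logs --tail 50 <CONTAINER_ID>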
- Navigate back to the Observability Pipelines installation page and click Deploy.
See Update Existing Pipelines if you want to make changes to your pipeline’s configuration.
- Download the Helm chart values file. If you are not using a managed service such as Amazon EKS, Google GKE, or Azure AKS, see Self-hosted and self-managed Kubernetes clusters before continuing to the next step.
- Click Select API key to choose the Datadog API key you want to use.
- Add the Datadog chart repository to Helm:
helm repo add datadog https://helm.datadoghq.com
If you already have the Datadog chart repository, run the following command to make sure it is up to date:
helm repo update
- Run the command provided in the UI to install the Worker. The command is automatically populated with the environment variables you entered earlier.
helm upgrade --install opw \
-f values.yaml \
--set datadog.apiKey=<DATADOG_API_KEY> \
--set datadog.pipelineId=<PIPELINE_ID> \
--set <SOURCE_ENV_VARIABLES> \
--set <DESTINATION_ENV_VARIABLES> \
--set service.ports[0].protocol=TCP,service.ports[0].port=<SERVICE_PORT>,service.ports[0].targetPort=<TARGET_PORT> \
datadog/observability-pipelines-worker
Note: By default, the Kubernetes Service maps incoming port <SERVICE_PORT> to the port the Worker is listening on (<TARGET_PORT>). If you want to map the Worker's pod port to a different incoming port of the Kubernetes Service, use the following service.ports[0].port and service.ports[0].targetPort values in the command:
--set service.ports[0].protocol=TCP,service.ports[0].port=8088,service.ports[0].targetPort=8282
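Optionally, you can check on the release after the install command finishes; helm status uses the release name opw from the command above, and the pod label selector shown here is only an assumption about how the chart labels its pods:
# Confirm the Helm release deployed
helm status opw
# List the Worker pods (adjust the label selector if the chart uses different labels)
kubectl get pods -l app.kubernetes.io/name=observability-pipelines-worker
# Confirm the Service port mapping described in the note above
kubectl get svc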
- Navigate back to the Observability Pipelines installation page and click Deploy.
See Update Existing Pipelines if you want to make changes to your pipeline’s configuration.
Self-hosted and self-managed Kubernetes clusters
If you are running a self-hosted and self-managed Kubernetes cluster, and defined zones with node labels using topology.kubernetes.io/zone, then you can use the Helm chart values file as is. However, if you are not using the label topology.kubernetes.io/zone, you need to update the topologyKey in the values.yaml file to match the key you are using. Or, if you run your Kubernetes install without zones, remove the entire topology.kubernetes.io/zone section.
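For reference, a pod topology spread constraint keyed on a custom zone label looks like the following. This is a generic Kubernetes snippet, not the exact layout of the downloaded values.yaml, so adapt the field path and selector to the file you downloaded:
topologySpreadConstraints:
  - maxSkew: 1
    # Replace topology.kubernetes.io/zone with the node label key you actually use
    topologyKey: example.com/my-zone-label
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app: observability-pipelines-worker  # illustrative selector only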
For RHEL and CentOS, the Observability Pipelines Worker supports versions 8.0 or later.
Follow the steps below if you want to use the one-line installation script to install the Worker. Otherwise, see Manually install the Worker on Linux.
Click Select API key to choose the Datadog API key you want to use.
Run the one-step command provided in the UI to install the Worker.
Note: The environment variables used by the Worker in /etc/default/observability-pipelines-worker are not updated on subsequent runs of the install script. If changes are needed, update the file manually and restart the Worker.
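If you do need to change those variables later, edit the file by hand and restart the service; the service name comes from this page, and the editor choice is up to you:
# Open the Worker's environment file for editing
sudoedit /etc/default/observability-pipelines-worker
# Restart the Worker so it picks up the new environment variables
sudo systemctl restart observability-pipelines-worker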
Navigate back to the Observability Pipelines installation page and click Deploy.
See Update Existing Pipelines if you want to make changes to your pipeline’s configuration.
Select one of the options in the dropdown menu to indicate the expected log volume for the pipeline:
| Option | Description |
|---|---|
| Unsure | Use this option if you are not able to project your log volume, or if you want to test the Worker. This option provisions the EC2 Auto Scaling group with a maximum of 2 general purpose t4g.large instances. |
| 1-5 TB/day | This option provisions the EC2 Auto Scaling group with a maximum of 2 compute-optimized c6g.large instances. |
| 5-10 TB/day | This option provisions the EC2 Auto Scaling group with a minimum of 2 and a maximum of 5 compute-optimized c6g.large instances. |
| >10 TB/day | Datadog recommends this option for large-scale production deployments. It provisions the EC2 Auto Scaling group with a minimum of 2 and a maximum of 10 compute-optimized c6g.xlarge instances. |
Note: All other parameters are set to reasonable defaults for a Worker deployment, but you can adjust them for your use case in the AWS console before creating the stack, if needed.
Select the AWS region you want to use to install the Worker.
Click Select API key to choose the Datadog API key you want to use.
Click Launch CloudFormation Template to go to the AWS console to review the stack configuration and then launch it. Make sure the CloudFormation parameters are set as expected.
Select the VPC and subnet you want to use to install the Worker.
Review and check the boxes for the necessary IAM permissions. Click Submit to create the stack. CloudFormation handles the installation at this point; the Worker instances are launched, the necessary software is downloaded, and the Worker starts automatically.
Navigate back to the Observability Pipelines installation page and click Deploy.
See Update Existing Pipelines if you want to make changes to your pipeline's configuration.
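Optionally, you can confirm the stack finished creating with the AWS CLI; replace <STACK_NAME> with the name you chose when launching the template:
# Show the stack status (expect CREATE_COMPLETE once the Worker instances are up)
aws cloudformation describe-stacks --stack-name <STACK_NAME> --query "Stacks[0].StackStatus"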
Manually install the Worker on Linux
If you prefer not to use the one-line installation script for Linux, follow these step-by-step instructions:
- Set up APT transport for downloading using HTTPS:
sudo apt-get update
sudo apt-get install apt-transport-https curl gnupg
- Run the following commands to set up the Datadog deb repo on your system and create a Datadog archive keyring:
sudo sh -c "echo 'deb [signed-by=/usr/share/keyrings/datadog-archive-keyring.gpg] https://apt.datadoghq.com/ stable observability-pipelines-worker-2' > /etc/apt/sources.list.d/datadog-observability-pipelines-worker.list"
sudo touch /usr/share/keyrings/datadog-archive-keyring.gpg
sudo chmod a+r /usr/share/keyrings/datadog-archive-keyring.gpg
curl https://keys.datadoghq.com/DATADOG_APT_KEY_CURRENT.public | sudo gpg --no-default-keyring --keyring /usr/share/keyrings/datadog-archive-keyring.gpg --import --batch
curl https://keys.datadoghq.com/DATADOG_APT_KEY_06462314.public | sudo gpg --no-default-keyring --keyring /usr/share/keyrings/datadog-archive-keyring.gpg --import --batch
curl https://keys.datadoghq.com/DATADOG_APT_KEY_F14F620E.public | sudo gpg --no-default-keyring --keyring /usr/share/keyrings/datadog-archive-keyring.gpg --import --batch
curl https://keys.datadoghq.com/DATADOG_APT_KEY_C0962C7D.public | sudo gpg --no-default-keyring --keyring /usr/share/keyrings/datadog-archive-keyring.gpg --import --batch
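Optionally, you can confirm the keys were imported with a standard gpg listing against the keyring created above:
# List the keys now present in the Datadog archive keyring
gpg --no-default-keyring --keyring /usr/share/keyrings/datadog-archive-keyring.gpg --list-keys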
- Run the following commands to update your local apt repo and install the Worker:
sudo apt-get update
sudo apt-get install observability-pipelines-worker datadog-signing-keys
- Add your keys, site (for example, datadoghq.com for US1), source, and destination environment variables to the Worker's environment file:
sudo cat <<EOF > /etc/default/observability-pipelines-worker
DD_API_KEY=<DATADOG_API_KEY>
DD_OP_PIPELINE_ID=<PIPELINE_ID>
DD_SITE=<DATADOG_SITE>
<SOURCE_ENV_VARIABLES>
<DESTINATION_ENV_VARIABLES>
EOF
- Start the Worker:
sudo systemctl restart observability-pipelines-worker
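To verify the Worker is running, you can use standard systemd tooling with the service name from the package above:
# Check the service state
sudo systemctl status observability-pipelines-worker
# Follow the Worker's logs
sudo journalctl -u observability-pipelines-worker -f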
Note: The environment variables used by the Worker in /etc/default/observability-pipelines-worker are not updated on subsequent runs of the install script. If changes are needed, update the file manually and restart the Worker.
See Update Existing Pipelines if you want to make changes to your pipeline’s configuration.
For RHEL and CentOS, the Observability Pipelines Worker supports versions 8.0 or later.
- Set up the Datadog rpm repo on your system with the below command.
Note: If you are running RHEL 8.1 or CentOS 8.1, use repo_gpgcheck=0 instead of repo_gpgcheck=1 in the configuration below.
cat <<EOF > /etc/yum.repos.d/datadog-observability-pipelines-worker.repo
[observability-pipelines-worker]
name = Observability Pipelines Worker
baseurl = https://yum.datadoghq.com/stable/observability-pipelines-worker-2/\$basearch/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://keys.datadoghq.com/DATADOG_RPM_KEY_CURRENT.public
https://keys.datadoghq.com/DATADOG_RPM_KEY_B01082D3.public
EOF
- Update your packages and install the Worker:
sudo yum makecache
sudo yum install observability-pipelines-worker
- Add your keys, site (for example, datadoghq.com for US1), source, and destination environment variables to the Worker's environment file:
sudo cat <<-EOF > /etc/default/observability-pipelines-worker
DD_API_KEY=<API_KEY>
DD_OP_PIPELINE_ID=<PIPELINE_ID>
DD_SITE=<SITE>
<SOURCE_ENV_VARIABLES>
<DESTINATION_ENV_VARIABLES>
EOF
- Start the Worker:
sudo systemctl restart observability-pipelines-worker
- Navigate back to the Observability Pipelines installation page and click Deploy.
Note: The environment variables used by the Worker in /etc/default/observability-pipelines-worker are not updated on subsequent runs of the install script. If changes are needed, update the file manually and restart the Worker.
See Update Existing Pipelines if you want to make changes to your pipeline’s configuration.
Further reading
Additional helpful documentation, links, and articles: