",t};e.buildCustomizationMenuUi=t;function n(e){let t='
",t}function s(e){let n=e.filter.currentValue||e.filter.defaultValue,t='${e.filter.label}
`,e.filter.options.forEach(s=>{let o=s.id===n;t+=``}),t+="${e.filter.label}
If you want to run multiple pipelines on a single host to send logs from different sources, you need to manually add the Worker files for any additional Workers. This document explains which files you need to add and modify to run those Workers.
Set up the first pipeline and install the Worker on your host.
Set up another pipeline for the additional Worker that you want to run on the same host. When you reach the Install page, follow the steps below to run the Worker for this pipeline.
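At a high level, each additional Worker on the host shares the installed binary but gets its own data directory, environment file, and systemd service entry. Here is a sketch of that layout, using the example names from this document:

```
# Shared by all Workers on the host:
#   /usr/bin/observability-pipelines-worker       # the Worker binary
# Added per additional Worker (example names from this document):
#   /var/lib/op-fluent                            # data directory
#   /etc/default/op-fluent                        # environment file
#   /lib/systemd/system/op-fluent.service         # systemd service entry
```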
After you install the first Worker, you have the following files and directories by default:

- The Worker binary: /usr/bin/observability-pipelines-worker
- The systemd service file, /lib/systemd/system/observability-pipelines-worker.service:

```
[Unit]
Description="Observability Pipelines Worker"
Documentation=https://docs.datadoghq.com/observability_pipelines/
After=network-online.target
Wants=network-online.target

[Service]
User=observability-pipelines-worker
Group=observability-pipelines-worker
ExecStart=/usr/bin/observability-pipelines-worker run
Restart=always
AmbientCapabilities=CAP_NET_BIND_SERVICE
EnvironmentFile=-/etc/default/observability-pipelines-worker

[Install]
WantedBy=multi-user.target
```

- The environment file, /etc/default/observability-pipelines-worker:

```
DD_API_KEY=<datadog_api_key>
DD_SITE=<dd_site>
DD_OP_PIPELINE_ID=<pipeline_id>
```

- The data directory: /var/lib/observability-pipelines-worker
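If you want to confirm what the first install put in place before adding more Workers, one way is to check each of the paths listed above:

```
# Confirm the artifacts from the first Worker install are in place.
ls -l /usr/bin/observability-pipelines-worker
systemctl status observability-pipelines-worker
ls -ld /var/lib/observability-pipelines-worker
```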
For this example, another pipeline was created with the Fluent source. To configure a Worker for this pipeline:
Run the following command to create a new data directory, replacing op-fluent with a directory name that fits your use case:

```
sudo mkdir /var/lib/op-fluent
```
Run the following command to change the owner of the data directory to observability-pipelines-worker:observability-pipelines-worker. Make sure to update op-fluent to your data directory's name:

```
sudo chown -R observability-pipelines-worker:observability-pipelines-worker /var/lib/op-fluent/
```
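You can confirm the ownership change took effect before moving on, for example:

```
# The new data directory should be owned by observability-pipelines-worker.
ls -ld /var/lib/op-fluent
```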
Create an environment file for the new systemd service, such as /etc/default/op-fluent, where op-fluent is replaced with your specific filename. Example of the file content:

/etc/default/op-fluent:

```
DD_API_KEY=<datadog_api_key>
DD_OP_PIPELINE_ID=<pipeline_id>
DD_SITE=<dd_site>
<destination_environment_variables>
DD_OP_SOURCE_FLUENT_ADDRESS=0.0.0.0:9091
DD_OP_DATA_DIR=/var/lib/op-fluent
```
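Because this file contains your Datadog API key, you may also want to restrict who can read it; an optional hardening step, assuming the example filename above:

```
# Optional: limit the environment file (it contains DD_API_KEY) to root.
sudo chmod 600 /etc/default/op-fluent
```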
In this example:

- DD_OP_DATA_DIR is set to /var/lib/op-fluent. Replace /var/lib/op-fluent with the path to your data directory.
- DD_OP_SOURCE_FLUENT_ADDRESS=0.0.0.0:9091 is the environment variable required for the Fluent source in this example. Replace it with the environment variable for your source.

Also, make sure to replace:

- <datadog_api_key> with your Datadog API key.
- <pipeline_id> with the ID of the pipeline for this Worker.
- <dd_site> with your Datadog site.
- <destination_environment_variables> with the environment variables for your destinations.

Create a new systemd service entry, such as /lib/systemd/system/op-fluent.service. Example content for the entry:
/lib/systemd/system/op-fluent.service:

```
[Unit]
Description="OPW for Fluent Pipeline"
Documentation=https://docs.datadoghq.com/observability_pipelines/
After=network-online.target
Wants=network-online.target

[Service]
User=observability-pipelines-worker
Group=observability-pipelines-worker
ExecStart=/usr/bin/observability-pipelines-worker run
Restart=always
AmbientCapabilities=CAP_NET_BIND_SERVICE
EnvironmentFile=-/etc/default/op-fluent

[Install]
WantedBy=multi-user.target
```
In this example:

- The service file is named op-fluent.service because the pipeline is using the Fluent source. Replace op-fluent.service with a service name for your use case.
- Description is OPW for Fluent Pipeline. Replace OPW for Fluent Pipeline with a description for your use case.
- EnvironmentFile is set to -/etc/default/op-fluent. Replace -/etc/default/op-fluent with the systemd service environment variables file you created for your Worker.

Run this command to reload systemd:
```
sudo systemctl daemon-reload
```
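Optionally, you can also ask systemd to check the new unit file for errors before starting it; a quick check, assuming the example unit path above:

```
# Optional: lint the new unit file for syntax problems.
systemd-analyze verify /lib/systemd/system/op-fluent.service
```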
Run this command to enable and start the new service:

```
sudo systemctl enable --now op-fluent
```
Run this command to verify the service is running:

```
sudo systemctl status op-fluent
```
Additionally, you can use the command sudo journalctl -u op-fluent.service to help you debug any issues.
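To confirm the new Worker's source is actually accepting traffic, you can also check that it is listening on the configured address; a quick check, assuming the example Fluent address 0.0.0.0:9091 used in this document:

```
# The new Worker should be listening on the Fluent source port.
sudo ss -tlnp | grep 9091
```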