The Observability Pipelines Worker is software that runs in your environment to centrally aggregate, process, and route your logs. You install and configure the Worker as part of the pipeline setup process, following the general steps described on this page.
Note: If you are using a proxy, see the proxy option in Bootstrap options.
After you set up your source, destinations, and processors on the Build page of the pipeline UI, follow the steps on the Install page.
docker run -i -e DD_API_KEY=<DATADOG_API_KEY> \
-e DD_OP_PIPELINE_ID=<PIPELINE_ID> \
-e DD_SITE=<DATADOG_SITE> \
-e <SOURCE_ENV_VARIABLE> \
-e <DESTINATION_ENV_VARIABLE> \
-p 8088:8088 \
datadog/observability-pipelines-worker run
The docker run command exposes the same port the Worker is listening on. To map the Worker's container port to a different port on the Docker host, use the -p | --publish option in the command:
-p 8282:8088 datadog/observability-pipelines-worker run
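Putting it together, a complete invocation that publishes host port 8282 to the Worker's container port 8088 uses the same placeholders as the command above:
docker run -i -e DD_API_KEY=<DATADOG_API_KEY> \
-e DD_OP_PIPELINE_ID=<PIPELINE_ID> \
-e DD_SITE=<DATADOG_SITE> \
-e <SOURCE_ENV_VARIABLE> \
-e <DESTINATION_ENV_VARIABLE> \
-p 8282:8088 \
datadog/observability-pipelines-worker run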
See Update Existing Pipelines if you want to make changes to your pipeline's configuration.
helm repo add datadog https://helm.datadoghq.com
helm repo update
helm upgrade --install opw \
-f values.yaml \
--set datadog.apiKey=<DATADOG_API_KEY> \
--set datadog.pipelineId=<PIPELINE_ID> \
--set <SOURCE_ENV_VARIABLES> \
--set <DESTINATION_ENV_VARIABLES> \
--set service.ports[0].protocol=TCP,service.ports[0].port=<SERVICE_PORT>,service.ports[0].targetPort=<TARGET_PORT> \
datadog/observability-pipelines-worker
This command maps the Kubernetes Service's port (<SERVICE_PORT>) to the port the Worker is listening on (<TARGET_PORT>). If you want to map the Worker's pod port to a different incoming port of the Kubernetes Service, use the following service.ports[0].port and service.ports[0].targetPort values in the command:
--set service.ports[0].protocol=TCP,service.ports[0].port=8088,service.ports[0].targetPort=8282
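For example, the full Helm command with the Service listening on port 8088 and the Worker's pod port mapped to 8282 (placeholders as above):
helm upgrade --install opw \
-f values.yaml \
--set datadog.apiKey=<DATADOG_API_KEY> \
--set datadog.pipelineId=<PIPELINE_ID> \
--set <SOURCE_ENV_VARIABLES> \
--set <DESTINATION_ENV_VARIABLES> \
--set service.ports[0].protocol=TCP,service.ports[0].port=8088,service.ports[0].targetPort=8282 \
datadog/observability-pipelines-worker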
See Update Existing Pipelines if you want to make changes to your pipeline’s configuration.
If you are running a self-hosted and self-managed Kubernetes cluster, and defined zones with node labels using topology.kubernetes.io/zone, then you can use the Helm chart values file as is. However, if you are not using the label topology.kubernetes.io/zone, you need to update the topologyKey in the values.yaml file to match the key you are using. Or, if you run your Kubernetes install without zones, remove the entire topology.kubernetes.io/zone section.
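As a rough sketch of what that edit might look like, assuming the chart spreads Worker pods with a pod anti-affinity rule in values.yaml (the surrounding keys here are illustrative, not the chart's exact layout):
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app.kubernetes.io/name: observability-pipelines-worker
        # Replace the key below with your own zone label if you do not use
        # topology.kubernetes.io/zone; remove the section entirely if your
        # cluster has no zones.
        topologyKey: topology.kubernetes.io/zone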
Follow the steps below if you want to use the one-line installation script to install the Worker. Otherwise, see Manually install the Worker on Linux.
Click Select API key to choose the Datadog API key you want to use.
Run the one-step command provided in the UI to install the Worker.
Note: The environment variables used by the Worker in /etc/default/observability-pipelines-worker are not updated on subsequent runs of the install script. If changes are needed, update the file manually and restart the Worker.
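For example, to apply a change by hand (any editor works; nano is only an example):
sudo nano /etc/default/observability-pipelines-worker
sudo systemctl restart observability-pipelines-worker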
Navigate back to the Observability Pipelines installation page and click Deploy.
See Update Existing Pipelines if you want to make changes to your pipeline’s configuration.
Select one of the dropdown menu options to enter your pipeline's estimated log volume.
Option | Description
---|---
Unsure | Use this option if you cannot estimate your log volume or you want to test the Worker. This option provisions the EC2 Auto Scaling group with a maximum of 2 general-purpose t4g.large instances.
1-5 TB/day | This option provisions the EC2 Auto Scaling group with a maximum of 2 compute-optimized c6g.large instances.
5-10 TB/day | This option provisions the EC2 Auto Scaling group with a minimum of 2 and a maximum of 5 compute-optimized c6g.large instances.
>10 TB/day | Datadog recommends this option for large-scale production deployments. It provisions the EC2 Auto Scaling group with a minimum of 2 and a maximum of 10 compute-optimized c6g.xlarge instances.
Note: All other parameters are set to defaults appropriate for a Worker deployment, but you can adjust their values for your use case in the AWS Console before creating the stack.
Select the AWS region you want to use to install the Worker.
Click Select API key to choose the Datadog API key you want to use.
Click Launch CloudFormation Template to navigate to the AWS Console, review the stack configuration, and then launch it. Double-check that the CloudFormation parameters are correct.
Select the VPC and subnet you want to use to install the Worker.
Review and make sure the required permission checkboxes for IAM are checked. Click Submit to create the stack. CloudFormation handles the installation from this point; the Worker instances are launched, the necessary software is installed, and the Worker starts automatically.
Navigate back to the Observability Pipelines installation page and click Deploy.
See Update Existing Pipelines if you want to make changes to your pipeline's configuration.
If you prefer not to use the one-line installation script for Linux, follow these step-by-step instructions:
sudo apt-get update
sudo apt-get install apt-transport-https curl gnupg
Set up the Datadog deb repo on your system and create a Datadog archive keyring:
sudo sh -c "echo 'deb [signed-by=/usr/share/keyrings/datadog-archive-keyring.gpg] https://apt.datadoghq.com/ stable observability-pipelines-worker-2' > /etc/apt/sources.list.d/datadog-observability-pipelines-worker.list"
sudo touch /usr/share/keyrings/datadog-archive-keyring.gpg
sudo chmod a+r /usr/share/keyrings/datadog-archive-keyring.gpg
curl https://keys.datadoghq.com/DATADOG_APT_KEY_CURRENT.public | sudo gpg --no-default-keyring --keyring /usr/share/keyrings/datadog-archive-keyring.gpg --import --batch
curl https://keys.datadoghq.com/DATADOG_APT_KEY_06462314.public | sudo gpg --no-default-keyring --keyring /usr/share/keyrings/datadog-archive-keyring.gpg --import --batch
curl https://keys.datadoghq.com/DATADOG_APT_KEY_F14F620E.public | sudo gpg --no-default-keyring --keyring /usr/share/keyrings/datadog-archive-keyring.gpg --import --batch
curl https://keys.datadoghq.com/DATADOG_APT_KEY_C0962C7D.public | sudo gpg --no-default-keyring --keyring /usr/share/keyrings/datadog-archive-keyring.gpg --import --batch
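Optionally, you can confirm the keys landed in the keyring with standard gpg tooling (this check is not part of the official steps):
gpg --no-default-keyring --keyring /usr/share/keyrings/datadog-archive-keyring.gpg --list-keys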
Update your local apt repo and install the Worker:
sudo apt-get update
sudo apt-get install observability-pipelines-worker datadog-signing-keys
Add your keys, site (for example, datadoghq.com for US1), source, and destination environment variables to the Worker's environment file:
sudo tee /etc/default/observability-pipelines-worker <<EOF
DD_API_KEY=<DATADOG_API_KEY>
DD_OP_PIPELINE_ID=<PIPELINE_ID>
DD_SITE=<DATADOG_SITE>
<SOURCE_ENV_VARIABLES>
<DESTINATION_ENV_VARIABLES>
EOF
sudo systemctl restart observability-pipelines-worker
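To confirm the Worker came back up after the restart, you can check the unit with standard systemd tooling (not an official step):
sudo systemctl status observability-pipelines-worker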
Note: The environment variables used by the Worker in /etc/default/observability-pipelines-worker are not updated on subsequent runs of the install script. If changes are needed, update the file manually and restart the Worker.
See Update Existing Pipelines if you want to make changes to your pipeline’s configuration.
Set up the Datadog rpm repo on your system with the below command. Note: If you are running RHEL 8.1 or CentOS 8.1, use repo_gpgcheck=0 instead of repo_gpgcheck=1 in the configuration below.
sudo tee /etc/yum.repos.d/datadog-observability-pipelines-worker.repo <<EOF
[observability-pipelines-worker]
name = Observability Pipelines Worker
baseurl = https://yum.datadoghq.com/stable/observability-pipelines-worker-2/\$basearch/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://keys.datadoghq.com/DATADOG_RPM_KEY_CURRENT.public
https://keys.datadoghq.com/DATADOG_RPM_KEY_B01082D3.public
EOF
sudo yum makecache
sudo yum install observability-pipelines-worker
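To verify the package installed, you can query the rpm database (standard rpm tooling, not an official step):
rpm -q observability-pipelines-worker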
Add your keys, site (for example, datadoghq.com for US1), source, and destination environment variables to the Worker's environment file:
sudo tee /etc/default/observability-pipelines-worker <<-EOF
DD_API_KEY=<DATADOG_API_KEY>
DD_OP_PIPELINE_ID=<PIPELINE_ID>
DD_SITE=<DATADOG_SITE>
<SOURCE_ENV_VARIABLES>
<DESTINATION_ENV_VARIABLES>
EOF
sudo systemctl restart observability-pipelines-worker
Note: The environment variables used by the Worker in /etc/default/observability-pipelines-worker are not updated on subsequent runs of the install script. If changes are needed, update the file manually and restart the Worker.
See Update Existing Pipelines if you want to make changes to your pipeline’s configuration.