
Overview

The Observability Pipelines Worker is software that runs in your environment to centrally aggregate, process, and route your logs. You install and configure the Worker as part of the pipeline setup process. These are the general steps for setting up a pipeline in the UI:

  1. Select a log source.
  2. Select destinations to which you want to send your logs.
  3. Select and configure processors to transform your logs.
  4. Install the Worker.
  5. Deploy the pipeline.

Note: If you are using a proxy, see the proxy option in Bootstrap options.

Install the Worker

After you set up your source, destinations, and processors on the Build page of the pipeline UI, follow the steps on the Install page.

[Image: The Install page in the UI, with a dropdown menu to choose your installation platform and fields to enter environment variables]
  1. Select the platform on which you want to install the Worker.
  2. Enter the environment variables for your sources and destinations, if applicable.
  3. Follow the instructions on installing the Worker for your platform. The command provided in the UI to install the Worker has the relevant environment variables populated.
  1. Click Select API key to choose the Datadog API key you want to use.
  2. Run the command provided in the UI to install the Worker. The command is automatically populated with the environment variables you entered earlier.
    docker run -i -e DD_API_KEY=<DATADOG_API_KEY> \
        -e DD_OP_PIPELINE_ID=<PIPELINE_ID> \
        -e DD_SITE=<DATADOG_SITE> \
        -e <SOURCE_ENV_VARIABLE> \
        -e <DESTINATION_ENV_VARIABLE> \
        -p 8088:8088 \
        datadog/observability-pipelines-worker run
    
    Note: By default, the docker run command exposes the same port the Worker is listening on. If you want to map the Worker's container port to a different port on the Docker host, use the -p | --publish option in the command, as in this example, which maps the Worker's port 8088 to host port 8282 (a verification sketch follows this section):
    -p 8282:8088 datadog/observability-pipelines-worker run
    
  3. Navigate back to the Observability Pipelines installation page and click Deploy.

See Update Existing Pipelines if you want to make changes to your pipeline's configuration.
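
You can confirm that the container is running and that the port mapping took effect with standard Docker commands. This is a minimal sketch; replace <CONTAINER_ID> with the ID reported by docker ps:

    # List the running Worker container and its port mappings
    docker ps --filter "ancestor=datadog/observability-pipelines-worker"
    # Print the mappings for a specific container, for example 8088/tcp -> 0.0.0.0:8282
    docker port <CONTAINER_ID>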

  1. Download the Helm chart values file. If you are not using a managed service such as Amazon EKS, Google GKE, or Azure AKS, see Self-hosted and self-managed Kubernetes clusters before continuing to the next step.
  2. Click Select API key to choose the Datadog API key you want to use.
  3. Add the Datadog chart repository to Helm:
    helm repo add datadog https://helm.datadoghq.com
    
    If you already have the Datadog chart repository, run the following command to make sure it is up to date:
    helm repo update
    
  4. Run the command provided in the UI to install the Worker. The command is automatically populated with the environment variables you entered earlier.
    helm upgrade --install opw \
        -f values.yaml \
        --set datadog.apiKey=<DATADOG_API_KEY> \
        --set datadog.pipelineId=<PIPELINE_ID> \
        --set <SOURCE_ENV_VARIABLES> \
        --set <DESTINATION_ENV_VARIABLES> \
        --set service.ports[0].protocol=TCP,service.ports[0].port=<SERVICE_PORT>,service.ports[0].targetPort=<TARGET_PORT> \
        datadog/observability-pipelines-worker
    
    Note: By default, the Kubernetes Service maps incoming port <SERVICE_PORT> to the port the Worker is listening on (<TARGET_PORT>). If you want to map the Worker’s pod port to a different incoming port of the Kubernetes Service, use the following service.ports[0].port and service.ports[0].targetPort values in the command:
    --set service.ports[0].protocol=TCP,service.ports[0].port=8088,service.ports[0].targetPort=8282
    
  5. Navigate back to the Observability Pipelines installation page and click Deploy.

See Update Existing Pipelines if you want to make changes to your pipeline’s configuration.
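
After deploying, you can check the release from the command line. helm status is standard; the kubectl selector below assumes the chart applies the usual Helm-generated app.kubernetes.io labels, so adjust it if your chart's labels differ:

    # Show the status and notes of the opw release
    helm status opw
    # List the Worker pods and Service (assumes standard Helm labels)
    kubectl get pods,svc -l app.kubernetes.io/instance=opw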

Self-hosted and self-managed Kubernetes clusters

If you are running a self-hosted and self-managed Kubernetes cluster and have defined zones with node labels using topology.kubernetes.io/zone, you can use the Helm chart values file as is. However, if you are not using the label topology.kubernetes.io/zone, you need to update the topologyKey in the values.yaml file to match the key you are using. If you run your Kubernetes installation without zones, remove the topology.kubernetes.io/zone section entirely.
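
Before editing values.yaml, you can check which zone label, if any, your nodes carry (a minimal sketch using standard kubectl):

    # Print any standard zone labels on your nodes; no output means no zone labels are set
    kubectl get nodes --show-labels | grep -o 'topology.kubernetes.io/zone=[^,]*'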

For RHEL and CentOS, the Observability Pipelines Worker supports versions 8.0 or later.

Follow the steps below if you want to use the one-line installation script to install the Worker. Otherwise, see Manually install the Worker on Linux.

  1. Click Select API key to choose the Datadog API key you want to use.

  2. Run the one-step command provided in the UI to install the Worker.

    Note: The environment variables used by the Worker in /etc/default/observability-pipelines-worker are not updated on subsequent runs of the install script. If changes are needed, update the file manually and restart the Worker.

  3. Navigate back to the Observability Pipelines installation page and click Deploy.

See Update Existing Pipelines if you want to make changes to your pipeline’s configuration.
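
If you later edit /etc/default/observability-pipelines-worker by hand, as the note above describes, restart the service and confirm it came back up (standard systemd commands):

    # Apply environment file changes and verify the Worker is running
    sudo systemctl restart observability-pipelines-worker
    sudo systemctl status observability-pipelines-worker --no-pager
    # Tail recent Worker logs if the service fails to start
    sudo journalctl -u observability-pipelines-worker -n 50 --no-pager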

  1. Select one of the dropdown menu options to set the expected log volume for the pipeline.

    Option | Description
    Unsure | Select this option if you cannot estimate your log volume or you want to test the Worker. This option provisions the EC2 Auto Scaling group with a maximum of 2 general purpose t4g.large instances.
    1-5 TB/day | This option provisions the EC2 Auto Scaling group with a maximum of 2 compute optimized c6g.large instances.
    5-10 TB/day | This option provisions the EC2 Auto Scaling group with a minimum of 2 and a maximum of 5 compute optimized c6g.large instances.
    >10 TB/day | Datadog recommends this option for large-scale production deployments. It provisions the EC2 Auto Scaling group with a minimum of 2 and a maximum of 10 compute optimized c6g.xlarge instances.

    Note: The other parameters are set to defaults appropriate for a Worker deployment, but you can adjust the values for your use case in the AWS console before you create the stack.

  2. Select the AWS region you want to use to install the Worker.

  3. Click Select API key to choose the Datadog API key you want to use.

  4. Click Launch CloudFormation Template to go to the AWS console, review the stack configuration, and then launch it. Double-check that the CloudFormation parameters are set as expected.

  5. Select the VPC and subnet you want to use to install the Worker.

  6. Review and confirm that the required IAM permission checkboxes are selected, then click Submit to create the stack. CloudFormation handles the installation from this point on: the Worker instances are launched, the necessary software is installed, and the Worker starts automatically. (A CLI sketch for tracking stack progress follows this section.)

  7. Navigate back to the Observability Pipelines installation page and click Deploy.

See Update Existing Pipelines if you want to make changes to your pipeline's configuration.
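
As mentioned in step 6, you can also track stack creation from the AWS CLI instead of the console. A minimal sketch; <STACK_NAME> and <AWS_REGION> are the values you chose above:

    # Poll the stack status until it reaches CREATE_COMPLETE
    aws cloudformation describe-stacks --stack-name <STACK_NAME> \
        --region <AWS_REGION> --query "Stacks[0].StackStatus" --output text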

Manually install the Worker on Linux

If you prefer not to use the one-line installation script for Linux, follow these step-by-step instructions:

  1. Set up APT transport for downloading using HTTPS:
    sudo apt-get update
    sudo apt-get install apt-transport-https curl gnupg
    
  2. Run the following commands to set up the Datadog deb repo on your system and create a Datadog archive keyring:
    sudo sh -c "echo 'deb [signed-by=/usr/share/keyrings/datadog-archive-keyring.gpg] https://apt.datadoghq.com/ stable observability-pipelines-worker-2' > /etc/apt/sources.list.d/datadog-observability-pipelines-worker.list"
    sudo touch /usr/share/keyrings/datadog-archive-keyring.gpg
    sudo chmod a+r /usr/share/keyrings/datadog-archive-keyring.gpg
    curl https://keys.datadoghq.com/DATADOG_APT_KEY_CURRENT.public | sudo gpg --no-default-keyring --keyring /usr/share/keyrings/datadog-archive-keyring.gpg --import --batch
    curl https://keys.datadoghq.com/DATADOG_APT_KEY_06462314.public | sudo gpg --no-default-keyring --keyring /usr/share/keyrings/datadog-archive-keyring.gpg --import --batch
    curl https://keys.datadoghq.com/DATADOG_APT_KEY_F14F620E.public | sudo gpg --no-default-keyring --keyring /usr/share/keyrings/datadog-archive-keyring.gpg --import --batch
    curl https://keys.datadoghq.com/DATADOG_APT_KEY_C0962C7D.public | sudo gpg --no-default-keyring --keyring /usr/share/keyrings/datadog-archive-keyring.gpg --import --batch
    
  3. Run the following commands to update your local apt repo and install the Worker:
    sudo apt-get update
    sudo apt-get install observability-pipelines-worker datadog-signing-keys
    
  4. Add your keys, site (for example, datadoghq.com for US1), source, and destination environment variables to the Worker’s environment file:
    sudo tee /etc/default/observability-pipelines-worker > /dev/null <<EOF
    DD_API_KEY=<DATADOG_API_KEY>
    DD_OP_PIPELINE_ID=<PIPELINE_ID>
    DD_SITE=<DATADOG_SITE>
    <SOURCE_ENV_VARIABLES>
    <DESTINATION_ENV_VARIABLES>
    EOF
    
  5. Start the Worker:
    sudo systemctl restart observability-pipelines-worker
    

Note: The environment variables used by the Worker in /etc/default/observability-pipelines-worker are not updated on subsequent runs of the install script. If changes are needed, update the file manually and restart the Worker.

See Update Existing Pipelines if you want to make changes to your pipeline’s configuration.
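
To confirm that the repo and keyring set up in step 2 are working, you can query apt and the keyring directly:

    # The candidate package should resolve to apt.datadoghq.com
    apt-cache policy observability-pipelines-worker
    # List the Datadog keys imported into the archive keyring
    gpg --no-default-keyring --keyring /usr/share/keyrings/datadog-archive-keyring.gpg --list-keys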

For RHEL and CentOS, the Observability Pipelines Worker supports versions 8.0 or later.
  1. Set up the Datadog rpm repo on your system with the command below.
    Note: If you are running RHEL 8.1 or CentOS 8.1, use repo_gpgcheck=0 instead of repo_gpgcheck=1 in the configuration below.
    sudo tee /etc/yum.repos.d/datadog-observability-pipelines-worker.repo > /dev/null <<EOF
    [observability-pipelines-worker]
    name = Observability Pipelines Worker
    baseurl = https://yum.datadoghq.com/stable/observability-pipelines-worker-2/\$basearch/
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://keys.datadoghq.com/DATADOG_RPM_KEY_CURRENT.public
        https://keys.datadoghq.com/DATADOG_RPM_KEY_B01082D3.public
    EOF
    
  2. Update your packages and install the Worker:
    sudo yum makecache
    sudo yum install observability-pipelines-worker
    
  3. Add your keys, site (for example, datadoghq.com for US1), source, and destination environment variables to the Worker’s environment file:
    sudo tee /etc/default/observability-pipelines-worker > /dev/null <<EOF
    DD_API_KEY=<DATADOG_API_KEY>
    DD_OP_PIPELINE_ID=<PIPELINE_ID>
    DD_SITE=<DATADOG_SITE>
    <SOURCE_ENV_VARIABLES>
    <DESTINATION_ENV_VARIABLES>
    EOF
    
  4. Start the Worker:
    sudo systemctl restart observability-pipelines-worker
    
  5. Navigate back to the Observability Pipelines installation page and click Deploy.

Note: The environment variables used by the Worker in /etc/default/observability-pipelines-worker are not updated on subsequent runs of the install script. If changes are needed, update the file manually and restart the Worker.

See Update Existing Pipelines if you want to make changes to your pipeline’s configuration.
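
As with the APT-based install, you can verify the repo registration and package availability with standard yum commands:

    # Confirm the Datadog repo is registered and enabled
    sudo yum repolist observability-pipelines-worker
    # Show details for the package that will be installed
    sudo yum info observability-pipelines-worker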

Further reading

Additional helpful documentation, links, and articles: