For existing pipelines in Observability Pipelines, you can update and deploy changes to source settings, destination settings, and processors in the Observability Pipelines UI. However, to update source or destination environment variables, you must manually update the Worker with the new values.
On the Worker installation page:
Select your platform in the Choose your installation platform dropdown menu.
If you want to update source environment variables, update the information for your log source.
- Amazon S3: The SQS URL is stored in the environment variable DD_OP_SOURCE_AWS_S3_SQS_URL. The AWS credentials file and profile used for authentication are referenced with AWS_CONFIG_FILE and AWS_PROFILE.
- Datadog Agent: The Datadog Agent address is stored in the environment variable DD_OP_SOURCE_DATADOG_AGENT_ADDRESS.
- Fluent: The Fluent address is stored in the environment variable DD_OP_SOURCE_FLUENT_ADDRESS.
- Google Pub/Sub: There are no environment variables for the Google Pub/Sub source.
- HTTP Client: The HTTP client endpoint URL, such as https://127.0.0.8/logs, is stored in the environment variable DD_OP_SOURCE_HTTP_CLIENT_ENDPOINT_URL. The username and password are stored in DD_OP_SOURCE_HTTP_CLIENT_USERNAME and DD_OP_SOURCE_HTTP_CLIENT_PASSWORD, and a bearer token is stored in DD_OP_SOURCE_HTTP_CLIENT_BEARER_TOKEN.
- HTTP Server: The address the Worker listens on, such as 0.0.0.0:9997, for your HTTP client logs is stored in the environment variable DD_OP_SOURCE_HTTP_SERVER_ADDRESS.
- Kafka: The bootstrap servers, in host:port format, such as 10.14.22.123:9092, are stored in the environment variable DD_OP_SOURCE_KAFKA_BOOTSTRAP_SERVERS. If there is more than one server, use commas to separate them. The SASL username and password are stored in DD_OP_SOURCE_KAFKA_SASL_USERNAME and DD_OP_SOURCE_KAFKA_SASL_PASSWORD.
- Logstash: The address the Worker listens on, such as 0.0.0.0:9997, for incoming log messages is stored in the environment variable DD_OP_SOURCE_LOGSTASH_ADDRESS.
- Splunk HEC: The address, such as 0.0.0.0:8088, is stored in the environment variable DD_OP_SOURCE_SPLUNK_HEC_ADDRESS. Note: /services/collector/event is automatically appended to the endpoint.
- Splunk TCP: The address, such as 0.0.0.0:9997, is stored in the environment variable DD_OP_SOURCE_SPLUNK_TCP_ADDRESS.
- Sumo Logic: The address, such as 0.0.0.0:80, is stored in the environment variable DD_OP_SOURCE_SUMO_LOGIC_ADDRESS. The /receiver/v1/http/ path is automatically appended to the endpoint.
- Syslog: The address, such as 0.0.0.0:9997, is stored in the environment variable DD_OP_SOURCE_SYSLOG_ADDRESS.
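For example, if your pipeline uses the Splunk TCP source, the value you pass to the Worker is a plain environment variable assignment like the following (the address here is only an illustration; use the address your Worker should listen on):
DD_OP_SOURCE_SPLUNK_TCP_ADDRESS=0.0.0.0:9997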
If you want to update destination environment variables, update the information for your log destination.
- Amazon OpenSearch: The username, password, and endpoint URL are stored in the environment variables DD_OP_DESTINATION_AMAZON_OPENSEARCH_USERNAME, DD_OP_DESTINATION_AMAZON_OPENSEARCH_PASSWORD, and DD_OP_DESTINATION_AMAZON_OPENSEARCH_ENDPOINT_URL.
- Google Chronicle: The endpoint URL is stored in the environment variable DD_OP_DESTINATION_GOOGLE_CHRONICLE_UNSTRUCTURED_ENDPOINT_URL.
- Datadog Logs: No environment variables are required.
- Datadog Archives (Amazon S3): The AWS access key ID of your S3 archive is stored in DD_OP_DESTINATION_DATADOG_ARCHIVES_AWS_ACCESS_KEY_ID, and the AWS secret access key is stored in DD_OP_DESTINATION_DATADOG_ARCHIVES_AWS_SECRET_KEY.
- Datadog Archives (Google Cloud Storage): There are no environment variables to configure.
- Datadog Archives (Azure Storage): The connection string is stored in the environment variable DD_OP_DESTINATION_DATADOG_ARCHIVES_AZURE_BLOB_CONNECTION_STRING.
- Elasticsearch: The username, password, and endpoint URL are stored in the environment variables DD_OP_DESTINATION_ELASTICSEARCH_USERNAME, DD_OP_DESTINATION_ELASTICSEARCH_PASSWORD, and DD_OP_DESTINATION_ELASTICSEARCH_ENDPOINT_URL.
- Microsoft Sentinel: The DCE URI is stored in DD_OP_DESTINATION_MICROSOFT_SENTINEL_DCE_URI, and the client secret is stored in DD_OP_DESTINATION_MICROSOFT_SENTINEL_CLIENT_SECRET.
- New Relic: The account ID is stored in DD_OP_DESTINATION_NEW_RELIC_ACCOUNT_ID, and the license key is stored in DD_OP_DESTINATION_NEW_RELIC_LICENSE_KEY.
- OpenSearch: The username, password, and endpoint URL are stored in the environment variables DD_OP_DESTINATION_OPENSEARCH_USERNAME, DD_OP_DESTINATION_OPENSEARCH_PASSWORD, and DD_OP_DESTINATION_OPENSEARCH_ENDPOINT_URL.
- SentinelOne: The token is stored in the environment variable DD_OP_DESTINATION_SENTINEL_ONE_TOKEN.
- Splunk HEC: The HEC token is stored in DD_OP_DESTINATION_SPLUNK_HEC_TOKEN. The endpoint URL, such as https://hec.splunkcloud.com:8088, is stored in DD_OP_DESTINATION_SPLUNK_HEC_ENDPOINT_URL. The /services/collector/event path is automatically appended to the endpoint.
- Sumo Logic: The HTTP collector URL, https://<ENDPOINT>.collection.sumologic.com/receiver/v1/http/<UNIQUE_HTTP_COLLECTOR_CODE>, is stored in the environment variable DD_OP_DESTINATION_SUMO_LOGIC_HTTP_COLLECTOR_URL, where <ENDPOINT> is your Sumo collection endpoint and <UNIQUE_HTTP_COLLECTOR_CODE> is the string that follows the last forward slash (/) in the upload URL for the HTTP source.
- Syslog: The endpoint URL, such as 127.0.0.1:9997, is stored in the environment variable DD_OP_DESTINATION_SYSLOG_ENDPOINT_URL.
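For example, if your pipeline sends logs to the Splunk HEC destination, the values you would update look like the following (the URL matches the example above; the token is a placeholder):
DD_OP_DESTINATION_SPLUNK_HEC_ENDPOINT_URL=https://hec.splunkcloud.com:8088
DD_OP_DESTINATION_SPLUNK_HEC_TOKEN=<SPLUNK_HEC_TOKEN>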
Follow the instructions for your environment to update the Worker.
If you are running the Worker with Docker, run the docker run command again with your updated environment variables:
docker run -i -e DD_API_KEY=<DATADOG_API_KEY> \
-e DD_OP_PIPELINE_ID=<PIPELINE_ID> \
-e DD_SITE=<DATADOG_SITE> \
-e <SOURCE_ENV_VARIABLE> \
-e <DESTINATION_ENV_VARIABLE> \
-p 8088:8088 \
datadog/observability-pipelines-worker run
The docker run command exposes the same port the Worker is listening on. If you want to map the Worker's container port to a different port on the Docker host, use the -p | --publish option:
-p 8282:8088 datadog/observability-pipelines-worker run
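As a concrete sketch, assuming a pipeline with a Splunk TCP source listening on 0.0.0.0:9997 and a Splunk HEC destination (all values here are illustrative placeholders, not required settings), the full command could look like this:
docker run -i -e DD_API_KEY=<DATADOG_API_KEY> \
-e DD_OP_PIPELINE_ID=<PIPELINE_ID> \
-e DD_SITE=datadoghq.com \
-e DD_OP_SOURCE_SPLUNK_TCP_ADDRESS=0.0.0.0:9997 \
-e DD_OP_DESTINATION_SPLUNK_HEC_ENDPOINT_URL=https://hec.splunkcloud.com:8088 \
-e DD_OP_DESTINATION_SPLUNK_HEC_TOKEN=<SPLUNK_HEC_TOKEN> \
-p 9997:9997 \
datadog/observability-pipelines-worker run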
If you are running the Worker on Kubernetes with Helm, update the chart repository and upgrade the release with your updated environment variables:
helm repo update
helm upgrade --install opw \
-f values.yaml \
--set datadog.apiKey=<DATADOG_API_KEY> \
--set datadog.pipelineId=<PIPELINE_ID> \
--set <SOURCE_ENV_VARIABLES> \
--set <DESTINATION_ENV_VARIABLES> \
--set service.ports[0].protocol=TCP,service.ports[0].port=<SERVICE_PORT>,service.ports[0].targetPort=<TARGET_PORT> \
datadog/observability-pipelines-worker
By default, set <SERVICE_PORT> to the port the Worker is listening on (<TARGET_PORT>). If you want to map the Worker's pod port to a different incoming port of the Kubernetes Service, use the following service.ports[0].port and service.ports[0].targetPort values:
--set service.ports[0].protocol=TCP,service.ports[0].port=8088,service.ports[0].targetPort=8282
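After the upgrade, you can sanity-check that the Kubernetes Service picked up the new port mapping with standard kubectl commands. The Service name below is a placeholder; list your Services first to find the one created by the release:
kubectl get services
kubectl describe service <RELEASE_SERVICE_NAME>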
If you installed the Worker on an APT-based Linux host:
Click Select API key to choose the Datadog API key you want to use.
Run the one-step command provided in the UI to reinstall the Worker.
Note: The environment variables the Worker uses in /etc/default/observability-pipelines-worker are not updated when you run the install script. If changes are needed, update the file manually and restart the Worker.
If you do not want to use the one-line installation script, follow these step-by-step instructions:
Update your apt repo and install the latest version of the Worker:
sudo apt-get update
sudo apt-get install observability-pipelines-worker datadog-signing-keys
Update the Worker's environment file with your Datadog API key, your Datadog site (for example, datadoghq.com), and your source and destination environment variables:
sudo cat <<EOF > /etc/default/observability-pipelines-worker
DD_API_KEY=<DATADOG_API_KEY>
DD_OP_PIPELINE_ID=<PIPELINE_ID>
DD_SITE=<DATADOG_SITE>
<SOURCE_ENV_VARIABLES>
<DESTINATION_ENV_VARIABLES>
EOF
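For instance, with a Splunk TCP source and a Splunk HEC destination, the resulting /etc/default/observability-pipelines-worker file might look like the following sketch (all values are illustrative placeholders):
DD_API_KEY=<DATADOG_API_KEY>
DD_OP_PIPELINE_ID=<PIPELINE_ID>
DD_SITE=datadoghq.com
DD_OP_SOURCE_SPLUNK_TCP_ADDRESS=0.0.0.0:9997
DD_OP_DESTINATION_SPLUNK_HEC_ENDPOINT_URL=https://hec.splunkcloud.com:8088
DD_OP_DESTINATION_SPLUNK_HEC_TOKEN=<SPLUNK_HEC_TOKEN>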
Restart the Worker:
sudo systemctl restart observability-pipelines-worker
If you installed the Worker on an RPM-based Linux host:
Click Select API key to choose the Datadog API key you want to use.
Run the one-step command provided in the UI to reinstall the Worker.
Note: The environment variables the Worker uses in /etc/default/observability-pipelines-worker are not updated when you run the install script. If changes are needed, update the file manually and restart the Worker.
If you do not want to use the one-line installation script, follow these step-by-step instructions:
Update your packages and install the latest version of the Worker:
sudo yum makecache
sudo yum install observability-pipelines-worker
Update the Worker's environment file with your Datadog API key, your Datadog site (for example, datadoghq.com), and your updated source and destination environment variables:
sudo cat <<-EOF > /etc/default/observability-pipelines-worker
DD_API_KEY=<DATADOG_API_KEY>
DD_OP_PIPELINE_ID=<PIPELINE_ID>
DD_SITE=<DATADOG_SITE>
<SOURCE_ENV_VARIABLES>
<DESTINATION_ENV_VARIABLES>
EOF
Restart the Worker:
sudo systemctl restart observability-pipelines-worker
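On either package manager, you can confirm the Worker came back up with the new environment using standard systemd tooling, for example:
sudo systemctl status observability-pipelines-worker
sudo journalctl -u observability-pipelines-worker -f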