For existing pipelines in Observability Pipelines, you can update and deploy changes to source settings, destination settings, and processors directly in the Observability Pipelines UI. However, to update source and destination environment variables, you need to manually update the Worker with the new values.

On the Worker installation page:
Source environment variables:
- DD_OP_SOURCE_DATADOG_AGENT_ADDRESS: the address the Worker listens on for logs from the Datadog Agent.
- DD_OP_SOURCE_SPLUNK_HEC_ADDRESS: the address the Worker listens on for logs sent to the Splunk HTTP Event Collector, for example 0.0.0.0:8088. The /services/collector/event path is automatically appended to the endpoint.
- DD_OP_SOURCE_SPLUNK_TCP_ADDRESS: the address the Worker listens on for logs from Splunk forwarders, for example 0.0.0.0:9997.
- DD_OP_SOURCE_SUMO_LOGIC_ADDRESS: the address the Worker listens on for logs sent to the Sumo Logic HTTP source, for example 0.0.0.0:80. The /receiver/v1/http/ path is automatically appended to the endpoint.

Destination environment variables:
- Datadog Log Archives (Amazon S3): DD_OP_DESTINATION_DATADOG_ARCHIVES_AWS_ACCESS_KEY_ID is the AWS access key ID of your S3 archive, and DD_OP_DESTINATION_DATADOG_ARCHIVES_AWS_SECRET_KEY is the AWS secret access key of your S3 archive.
- Datadog Logs: no environment variables are required.
- Splunk HEC: DD_OP_DESTINATION_SPLUNK_HEC_TOKEN is the HEC token for your Splunk indexer, and DD_OP_DESTINATION_SPLUNK_HEC_ENDPOINT_URL is the Splunk HEC endpoint, for example https://hec.splunkcloud.com:8088. The /services/collector/event path is automatically appended to the endpoint.
- Sumo Logic: DD_OP_DESTINATION_SUMO_LOGIC_HTTP_COLLECTOR_URL is the Sumo Logic HTTP collector URL, for example https://<ENDPOINT>.collection.sumologic.com/receiver/v1/http/<UNIQUE_HTTP_COLLECTOR_CODE>, where <ENDPOINT> is your Sumo collection endpoint and <UNIQUE_HTTP_COLLECTOR_CODE> is the string that follows the last forward slash (/) in the upload URL for the HTTP source.
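For example, if your pipeline reads from a Splunk HEC source and writes to a Splunk HEC destination, the source and destination environment variable placeholders in the commands below might be filled in like this (the address, endpoint, and token values are illustrative, not required defaults):

# Illustrative values for a Splunk HEC source and Splunk HEC destination
DD_OP_SOURCE_SPLUNK_HEC_ADDRESS=0.0.0.0:8088
DD_OP_DESTINATION_SPLUNK_HEC_ENDPOINT_URL=https://hec.splunkcloud.com:8088
DD_OP_DESTINATION_SPLUNK_HEC_TOKEN=<SPLUNK_HEC_TOKEN>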
To update a Worker running in Docker, run the docker run command with the updated environment variables:

docker run -i -e DD_API_KEY=<DATADOG_API_KEY> \
-e DD_OP_PIPELINE_ID=<PIPELINE_ID> \
-e DD_SITE=<DATADOG_SITE> \
-e <SOURCE_ENV_VARIABLE> \
-e <DESTINATION_ENV_VARIABLE> \
-p 8088:8088 \
datadog/observability-pipelines-worker run
The docker run command above exposes the same port the Worker is listening on. If you want to map the Worker’s container port to a different port on the Docker host, use the -p | --publish option:

-p 8282:8088 datadog/observability-pipelines-worker run
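Putting those pieces together, a complete command for a Worker with a Splunk HEC source and destination, published on host port 8282, might look like the following sketch (the key, pipeline ID, site, and token values are placeholders you supply):

docker run -i -e DD_API_KEY=<DATADOG_API_KEY> \
  -e DD_OP_PIPELINE_ID=<PIPELINE_ID> \
  -e DD_SITE=<DATADOG_SITE> \
  -e DD_OP_SOURCE_SPLUNK_HEC_ADDRESS=0.0.0.0:8088 \
  -e DD_OP_DESTINATION_SPLUNK_HEC_ENDPOINT_URL=https://hec.splunkcloud.com:8088 \
  -e DD_OP_DESTINATION_SPLUNK_HEC_TOKEN=<SPLUNK_HEC_TOKEN> \
  -p 8282:8088 \
  datadog/observability-pipelines-worker run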
To update a Worker running on AWS EKS, update the Datadog Helm repo and upgrade the release with the updated environment variables:

helm repo add datadog https://helm.datadoghq.com
helm repo update
helm upgrade --install opw \
-f aws_eks.yaml \
--set datadog.apiKey=<DATADOG_API_KEY> \
--set datadog.pipelineId=<PIPELINE_ID> \
--set <SOURCE_ENV_VARIABLES> \
--set <DESTINATION_ENV_VARIABLES> \
--set service.ports[0].protocol=TCP,service.ports[0].port=<SERVICE_PORT>,service.ports[0].targetPort=<TARGET_PORT> \
datadog/observability-pipelines-worker
The command above sets the port the Kubernetes Service exposes (<SERVICE_PORT>) to the port the Worker is listening on (<TARGET_PORT>). If you want to map the Worker’s pod port to a different incoming port of the Kubernetes Service, use the following service.ports[0].port and service.ports[0].targetPort values:

--set service.ports[0].protocol=TCP,service.ports[0].port=8088,service.ports[0].targetPort=8282
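If you prefer to keep this override in the values file instead of passing it with --set, the equivalent entry in aws_eks.yaml would be the following sketch (it assumes the chart reads the same service.ports list shown above; merge it with your existing values):

service:
  ports:
    - protocol: TCP
      port: 8088        # incoming port on the Kubernetes Service
      targetPort: 8282  # port the Worker pod listens on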
To update a Worker running on Azure AKS, run the same upgrade with the AKS values file:

helm repo add datadog https://helm.datadoghq.com
helm repo update
helm upgrade --install opw \
-f azure_aks.yaml \
--set datadog.apiKey=<DATADOG_API_KEY> \
--set datadog.pipelineId=<PIPELINE_ID> \
--set <SOURCE_ENV_VARIABLES> \
--set <DESTINATION_ENV_VARIABLES> \
--set service.ports[0].protocol=TCP,service.ports[0].port=<SERVICE_PORT>,service.ports[0].targetPort=<TARGET_PORT> \
datadog/observability-pipelines-worker
The command above sets the port the Kubernetes Service exposes (<SERVICE_PORT>) to the port the Worker is listening on (<TARGET_PORT>). If you want to map the Worker’s pod port to a different incoming port of the Kubernetes Service, use the following service.ports[0].port and service.ports[0].targetPort values:

--set service.ports[0].protocol=TCP,service.ports[0].port=8088,service.ports[0].targetPort=8282
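To double-check what was applied after the upgrade, helm get values shows the user-supplied values for the release (opw is the release name used in the command above):

helm get values opw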
To update a Worker running on Google GKE, run the same upgrade with the GKE values file:

helm repo add datadog https://helm.datadoghq.com
helm repo update
helm upgrade --install opw \
-f google_gke.yaml \
--set datadog.apiKey=<DATADOG_API_KEY> \
--set datadog.pipelineId=<PIPELINE_ID> \
--set <SOURCE_ENV_VARIABLES> \
--set <DESTINATION_ENV_VARIABLES> \
--set service.ports[0].protocol=TCP,service.ports[0].port=<SERVICE_PORT>,service.ports[0].targetPort=<TARGET_PORT> \
datadog/observability-pipelines-worker
The command above sets the port the Kubernetes Service exposes (<SERVICE_PORT>) to the port the Worker is listening on (<TARGET_PORT>). If you want to map the Worker’s pod port to a different incoming port of the Kubernetes Service, use the following service.ports[0].port and service.ports[0].targetPort values:

--set service.ports[0].protocol=TCP,service.ports[0].port=8088,service.ports[0].targetPort=8282
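After the upgrade, you can confirm that the Service exposes the port you set. The exact Service name depends on the chart, so filtering on the opw release name is a simple check:

kubectl get services | grep opw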
To update a Worker installed with APT, run the one-line installation script with the updated environment variables:

DD_API_KEY=<DATADOG_API_KEY> DD_OP_PIPELINE_ID=<PIPELINE_ID> DD_SITE=<DATADOG_SITE> <SOURCE_ENV_VARIABLES> <DESTINATION_ENV_VARIABLES> bash -c "$(curl -L https://s3.amazonaws.com/dd-agent/scripts/install_script_op_worker2.sh)"
If you prefer not to use the one-line installation script, follow these step-by-step instructions:
Update your local apt repo and install the latest Worker version:

sudo apt-get update
sudo apt-get install observability-pipelines-worker datadog-signing-keys
Add your Datadog API key, pipeline ID, site (for example, datadoghq.com for US1), and updated source and destination environment variables to the Worker’s environment file:

sudo cat <<EOF > /etc/default/observability-pipelines-worker
DD_API_KEY=<DATADOG_API_KEY>
DD_OP_PIPELINE_ID=<PIPELINE_ID>
DD_SITE=<DATADOG_SITE>
<SOURCE_ENV_VARIABLES>
<DESTINATION_ENV_VARIABLES>
EOF
sudo systemctl restart observability-pipelines-worker
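To confirm the Worker restarted cleanly with the new environment variables, check the service status and its recent logs:

sudo systemctl status observability-pipelines-worker
sudo journalctl -u observability-pipelines-worker -n 50 --no-pager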
To update a Worker installed on an RPM-based host with yum, run the one-line installation script with the updated environment variables:

DD_API_KEY=<DATADOG_API_KEY> DD_OP_PIPELINE_ID=<PIPELINE_ID> DD_SITE=<DATADOG_SITE> <SOURCE_ENV_VARIABLES> <DESTINATION_ENV_VARIABLES> bash -c "$(curl -L https://s3.amazonaws.com/dd-agent/scripts/install_script_op_worker2.sh)"
If you prefer not to use the one-line installation script, follow these step-by-step instructions:
Update your packages and install the latest Worker version:

sudo yum makecache
sudo yum install observability-pipelines-worker
Add your Datadog API key, pipeline ID, site (for example, datadoghq.com for US1), and updated source and destination environment variables to the Worker’s environment file:

sudo cat <<-EOF > /etc/default/observability-pipelines-worker
DD_API_KEY=<DATADOG_API_KEY>
DD_OP_PIPELINE_ID=<PIPELINE_ID>
DD_SITE=<DATADOG_SITE>
<SOURCE_ENV_VARIABLES>
<DESTINATION_ENV_VARIABLES>
EOF
sudo systemctl restart observability-pipelines-worker
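As with the APT-based install, you can verify that the environment file contains the updated variables and that the service restarted with them:

sudo cat /etc/default/observability-pipelines-worker
sudo systemctl status observability-pipelines-worker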