Use the Observability Pipelines Worker to process and route your HTTP client logs to different destinations based on your use case.
This document walks you through the following steps:
To use Observability Pipelines' HTTP/S Server source, you need the following information available:

- The HTTP/S server address, for example 0.0.0.0:9997. The Observability Pipelines Worker listens to this socket address for your HTTP client logs.
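As an illustration of the client side, here is a minimal sketch of posting a JSON-encoded log to the Worker's listen address. The path, headers, and payload fields are assumptions for this example, not a documented contract; match them to how your HTTP clients are actually configured, and use HTTPS if you enable the TLS settings below.

# Hypothetical client: POST a JSON log to the Worker's HTTP/S Server source.
# The host and port come from the listen address above; the path and payload
# shape are illustrative assumptions.
import json
import urllib.request

log_event = {"message": "user login failed", "service": "auth", "status": "error"}

request = urllib.request.Request(
    "http://127.0.0.1:9997/",
    data=json.dumps(log_event).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(request) as response:
    print(response.status)  # expect a success status if the Worker accepted the log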
To configure your HTTP/S Server source, enter the following:

- Server Certificate Path: The path to the certificate file that has been signed by your Certificate Authority (CA) Root File in DER or PEM (X.509) format.
- CA Certificate Path: The path to the certificate file that is your Certificate Authority (CA) Root File in DER or PEM (X.509) format.
- Private Key Path: The path to the .key private key file that belongs to your Server Certificate Path in DER or PEM (PKCS #8) format.

Enter the following information based on your selected logs destination.
There are no setup steps for the Datadog destination.
For example, 0.0.0.0:8088. Note: /services/collector/event is automatically appended to the endpoint. The address is stored in DD_OP_SOURCE_SPLUNK_HEC_ADDRESS.

The following fields are optional:
- Select whether to encode your output as JSON, Logfmt, or Raw text. If no encoding is selected, it defaults to JSON.
- Override the default name value configured for your Sumo Logic Collector source.
- Override the default host value configured for your Sumo Logic Collector source.
- Override the default category value configured for your Sumo Logic Collector source.

The rsyslog and syslog-ng destinations match these log fields to the following Syslog fields:
Log event | Syslog field | Default |
---|---|---|
log["message"] | MESSAGE | NIL |
log["procid"] | PROCID | The running Worker's process ID. |
log["appname"] | APP-NAME | observability_pipelines |
log["facility"] | FACILITY | 8 (log_user) |
log["msgid"] | MSGID | NIL |
log["severity"] | SEVERITY | info |
log["host"] | HOSTNAME | NIL |
log["timestamp"] | TIMESTAMP | Current UTC time. |
The following destination settings are optional:

- Server Certificate Path: The path to the certificate file that has been signed by your Certificate Authority (CA) Root File in DER or PEM (X.509) format.
- CA Certificate Path: The path to the certificate file that is your Certificate Authority (CA) Root File in DER or PEM (X.509) format.
- Private Key Path: The path to the .key private key file that belongs to your Server Certificate Path in DER or PEM (PKCS #8) format.

To authenticate the Observability Pipelines Worker for Google Chronicle, contact your Google Security Operations representative for a Google Developer Service Account Credential. This credential is a JSON file and must be placed under DD_OP_DATA_DIR/config. See Getting API authentication credential for more information.
To set up the Worker’s Google Chronicle destination:
Note: Logs sent to the Google Chronicle destination must have ingestion labels. For example, if the logs are from an A10 load balancer, they must have the ingestion label A10_LOAD_BALANCER. See Google Cloud's Support log types with a default parser for a list of available log types and their respective ingestion labels.
The following fields are optional:
Optionally, enter the name of the OpenSearch index.
Select the data center region (US or EU) of your New Relic account.
There are pre-selected processors added to your processor group out of the box. You can add additional processors or delete any existing ones based on your processing needs.
Processor groups are executed from top to bottom. The order of the processors is important because logs are checked by each processor, but only logs that match the processor’s filters are processed. To modify the order of the processors, use the drag handle on the top left corner of the processor you want to move.
Each processor has a corresponding filter query in its fields. Processors only process logs that match their filter query. For all processors except the filter processor, logs that do not match the query are sent to the next step of the pipeline. For the filter processor, logs that do not match the query are dropped.
For any attribute, tag, or key:value pair that is not a reserved attribute, your query must start with @. Conversely, to filter reserved attributes, you do not need to append @ in front of your filter query.

For example, to filter out and drop status:info logs, your filter can be set as NOT (status:info). To filter out and drop system-status:info, your filter must be set as NOT (@system-status:info).
Filter query examples:
- NOT (status:debug): This filters for only logs that do not have the status DEBUG.
- status:ok service:flask-web-app: This filters for all logs with the status OK from your flask-web-app service. This query is equivalent to status:ok AND service:flask-web-app.
- host:COMP-A9JNGYK OR host:COMP-J58KAS: This filter query only matches logs from the labeled hosts.
- @user.status:inactive: This filters for logs with the status inactive nested under the user attribute.

Learn more about writing filter queries in Datadog's Log Search Syntax.
Enter the information for the processors you want to use. Click the Add button to add additional processors. To delete a processor, click the kebab on the right side of the processor and select Delete.
This processor filters for logs that match the specified filter query and drops all logs that do not match. If a log is dropped at this processor, none of the processors below it receives that log. This processor can filter out unnecessary logs, such as debug or warning logs.

To set up the filter processor:
The remap processor can add, drop, or rename fields within your individual log data. Use this processor to enrich your logs with additional context, remove low-value fields to reduce volume, and standardize naming across important attributes. Select add field, drop field, or rename field in the dropdown menu to get started.
Use add field to append a new key-value field to your log.

To set up the add field processor:

To specify a nested field, use the path notation <OUTER_FIELD>.<INNER_FIELD>. All values are stored as strings. Note: If the field you want to add already exists, the Worker throws an error and the existing field remains unchanged.
Use drop field to drop a field from logging data that matches the filter you specify below. It can delete objects, so you can use the processor to drop nested keys.

To set up the drop field processor:

To specify a nested field, use the path notation <OUTER_FIELD>.<INNER_FIELD>. Note: If your specified key does not exist, your log is unaffected.
Use rename field to rename a field within your log.

To set up the rename field processor:

To specify a nested source field, use the path notation <OUTER_FIELD>.<INNER_FIELD>. Once renamed, your original field is deleted unless you enable the Preserve source tag checkbox described below. If the source field does not exist, a null value is applied to your target. To specify a nested target field, use the path notation <OUTER_FIELD>.<INNER_FIELD>.
For the following message structure, use outer_key.a.double_inner_key to refer to the key with the value double_inner_value.
{
"outer_key": {
"inner_key": "inner_value",
"a": {
"double_inner_key": "double_inner_value",
"b": "b value"
},
"c": "c value"
},
"d": "d value"
}
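To make the three remap operations concrete, here is a rough sketch of their effect on a single event, using the structure above. The dictionary operations are a stand-in for the Worker's behavior, not its implementation.

# Illustrative sketch of add field, drop field, and rename field on one log event.
log = {"outer_key": {"inner_key": "inner_value"}, "d": "d value"}

# Add field: append a new key-value pair; all values are stored as strings.
log["outer_key"]["new_field"] = "new_value"

# Drop field: remove a field; dropping an object also removes its nested keys.
log.pop("d", None)  # if the key does not exist, the log is unaffected

# Rename field: move the value to the target key and delete the source key
# (unless Preserve source tag is enabled); a missing source yields a null target.
log["outer_key"]["renamed_key"] = log["outer_key"].pop("inner_key", None)

print(log)  # {'outer_key': {'new_field': 'new_value', 'renamed_key': 'inner_value'}}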
This processor samples your logging traffic for a representative subset at the rate that you define, dropping the remaining logs. As an example, you can use this processor to sample 20% of logs from a noisy non-critical service.
The sampling only applies to logs that match your filter query and does not impact other logs. If a log is dropped at this processor, none of the processors below receives that log.
To set up the sample processor:
For example, entering 2 means 2% of logs are retained out of all the logs that match the filter query.

This processor parses logs using the grok parsing rules that are available for a set of sources. The rules are automatically applied to logs based on the log source. Therefore, logs must have a source field with the source name. If this field is not added when the log is sent to the Observability Pipelines Worker, you can use the Add field processor to add it.
If the source field of a log matches one of the grok parsing rule sets, the log's message field is checked against those rules. If a rule matches, the resulting parsed data is added in the message field as a JSON object, overwriting the original message.

If there isn't a source field on the log, or no rule matches the log message, then no changes are made to the log and it is sent to the next step in the pipeline.
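For intuition, here is a sketch of that behavior with a plain regular expression standing in for a grok rule. The rule, source name, and log line are made up for this example; real parsing is driven by the out-of-the-box rule sets and any custom rules you define.

# Illustrative only: a regex stands in for a grok rule to show how a matching
# message is replaced by a JSON object of parsed fields.
import re

log = {"source": "nginx", "message": "127.0.0.1 GET /index.html 200"}

rule = re.compile(r"(?P<client_ip>\S+) (?P<method>\S+) (?P<path>\S+) (?P<status_code>\d+)")

match = rule.match(log["message"])
if match:
    log["message"] = match.groupdict()  # parsed data overwrites the original message
# With no source field, or no matching rule, the log is left unchanged.

print(log)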
To set up the grok parser, define a filter query. Only logs that match the specified filter query are processed. All logs, regardless of whether they match the filter query, are sent to the next step in the pipeline.
To test log samples for out-of-the-box rules:
To add a custom parsing rule:
Enter the name of the source. The parsing rules are applied to logs with that source. Note: The url, useragent, and csv filters are not available.

The quota processor measures the logging traffic for logs that match the filter you specify. When the configured daily quota is met inside the 24-hour rolling window, the processor can either drop additional logs or send an alert using a Datadog monitor. You can configure the processor to track the total volume or the total number of events. The pipeline uses the name of the quota to identify the quota across multiple Remote Configuration deployments of the Worker.
As an example, you can configure this processor to drop new logs or trigger an alert without dropping logs after the processor has received 10 million events from a certain service in the last 24 hours.
To set up the quota processor:
Choose whether to track the quota by the number of Events or by the Volume in bytes.

Use Partition by if you want to set a quota on a specific service or region. For example, if you want to set a quota for 10 events per day and group the events by the service field, enter service into the Partition by field.
Select Ignore when missing if you want the quota applied only to events that match the partition. For example, if the Worker receives the following set of events:
{"service":"a", "source":"foo", "message": "..."}
{"service":"b", "source":"bar", "message": "..."}
{"service":"b", "message": "..."}
{"source":"redis", "message": "..."}
{"message": "..."}
If Ignore when missing is selected, the Worker applies the quota only to:

- events with service:a and source:foo
- events with service:b and source:bar

The quota is applied to the two sets of logs and not to the last three events.

If Ignore when missing is not selected, the quota is applied to all five events.
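To make the partitioning behavior concrete, here is a sketch that groups the example events by service and source and counts each group against a per-partition limit. The grouping logic and the limit value are assumptions for illustration only, not the Worker's implementation.

# Illustrative sketch: partition events and count them against a quota.
from collections import Counter

events = [
    {"service": "a", "source": "foo", "message": "..."},
    {"service": "b", "source": "bar", "message": "..."},
    {"service": "b", "message": "..."},
    {"source": "redis", "message": "..."},
    {"message": "..."},
]

PARTITION_FIELDS = ["service", "source"]
IGNORE_WHEN_MISSING = True  # only count events that carry every partition field
DAILY_EVENT_LIMIT = 10      # hypothetical per-partition quota

counts = Counter()
for event in events:
    if IGNORE_WHEN_MISSING and not all(field in event for field in PARTITION_FIELDS):
        continue  # the last three events are not counted against the quota
    counts[tuple(event[field] for field in PARTITION_FIELDS)] += 1

over_quota = [key for key, count in counts.items() if count > DAILY_EVENT_LIMIT]
print(counts)      # Counter({('a', 'foo'): 1, ('b', 'bar'): 1})
print(over_quota)  # [] until a partition exceeds its limit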
If you are partitioning by service and have two services, a and b, you can use overrides to apply different quotas to them. For example, if you want service:a to have a quota limit of 5,000 bytes and service:b to have a limit of 50 events, the override rules look like this:
Service | Type | Limit |
---|---|---|
a | Bytes | 5,000 |
b | Events | 50 |
The reduce processor groups multiple log events into a single log, based on the fields specified and the merge strategies selected. Logs are grouped at 10-second intervals. After the interval has elapsed for the group, the reduced log for that group is sent to the next step in the pipeline.
To set up the reduce processor:
These are the available merge strategies for combining log events.
Name | Description |
---|---|
Array | Appends each value to an array. |
Concat | Concatenates each string value, delimited with a space. |
Concat newline | Concatenates each string value, delimited with a newline. |
Concat raw | Concatenates each string value, without a delimiter. |
Discard | Discards all values except the first value that was received. |
Flat unique | Creates a flattened array of all unique values that were received. |
Longest array | Keeps the longest array that was received. |
Max | Keeps the maximum numeric value that was received. |
Min | Keeps the minimum numeric value that was received. |
Retain | Discards all values except the last value that was received. Works as a way to coalesce by not retaining `null`. |
Shortest array | Keeps the shortest array that was received. |
Sum | Sums all numeric values that were received. |
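As an illustration of how a group of events collapses into a single reduced log, here is a sketch that applies a few of the strategies above to two events. The field names and the per-field strategy assignments are made up for this example.

# Illustrative sketch: merge two grouped events into one reduced log.
events = [
    {"host": "web-1", "message": "error: timeout", "duration_ms": 120, "tags": ["env:prod"]},
    {"host": "web-1", "message": "error: retrying", "duration_ms": 80, "tags": ["env:prod", "retry"]},
]

reduced = {
    "host": events[0]["host"],                                   # field the group shares
    "message": " ".join(e["message"] for e in events),           # Concat strategy
    "duration_ms": sum(e["duration_ms"] for e in events),        # Sum strategy
    "tags": sorted({tag for e in events for tag in e["tags"]}),  # Flat unique strategy
}

print(reduced)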
The deduplicate processor removes copies of data to reduce volume and noise. It caches 5,000 messages at a time and compares your incoming logs traffic against the cached messages. For example, this processor can be used to keep only unique warning logs in the case where multiple identical warning logs are sent in succession.
To set up the deduplicate processor:
Select whether to Match on or Ignore the fields specified below.

- If Match is selected, then after a log passes through, future logs that have the same values for all of the fields you specify below are removed.
- If Ignore is selected, then after a log passes through, future logs that have the same values for all of their fields, except the ones you specify below, are removed.

Use the path notation <OUTER_FIELD>.<INNER_FIELD> to match subfields. See the Path notation example below.

For the following message structure, use outer_key.a.double_inner_key to refer to the key with the value double_inner_value.
{
"outer_key": {
"inner_key": "inner_value",
"a": {
"double_inner_key": "double_inner_value",
"b": "b value"
},
"c": "c value"
},
"d": "d value"
}
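Here is a sketch of the Match mode described above: once a log has been seen, later logs with the same values for the selected fields are removed. The cache handling and the chosen fields are simplified assumptions, not the Worker's implementation.

# Illustrative sketch of Match-mode deduplication on selected fields.
from collections import OrderedDict

MATCH_FIELDS = ["status", "message"]  # hypothetical fields to match on
CACHE_SIZE = 5000                     # the processor caches 5,000 messages at a time

seen = OrderedDict()

def is_duplicate(log):
    key = tuple(str(log.get(field)) for field in MATCH_FIELDS)
    if key in seen:
        return True                   # same values for every matched field: remove
    seen[key] = True
    if len(seen) > CACHE_SIZE:
        seen.popitem(last=False)      # evict the oldest cached entry
    return False

logs = [
    {"status": "warn", "message": "disk almost full"},
    {"status": "warn", "message": "disk almost full"},  # identical warning, removed
    {"status": "warn", "message": "disk recovered"},
]
print([log for log in logs if not is_duplicate(log)])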
The sensitive data scanner processor scans logs to detect, redact, or hash sensitive information such as PII, PCI, and custom sensitive data. You can select from a library of predefined rules, or enter custom regex rules to scan for sensitive data.

To set up the sensitive data scanner processor:

- For attributes to include in the scan, you can use path notation (outer_key.inner_key) to access nested keys. For attributes specified with nested data, all of the nested data is scanned.
- For attributes to exclude from the scan, you can use path notation (outer_key.inner_key) to access nested keys. For attributes specified with nested data, all of the nested data is excluded.
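As a rough illustration of a custom regex rule, here is a sketch that redacts a credit-card-like pattern from a log message. The pattern and the [REDACTED] replacement text are assumptions for this example; real rules are configured in the processor.

# Illustrative sketch: redact values that match a custom regex rule.
import re

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")  # hypothetical card-number pattern

log = {"message": "payment failed for card 4111 1111 1111 1111, retrying"}
log["message"] = CARD_PATTERN.sub("[REDACTED]", log["message"])

print(log)  # {'message': 'payment failed for card [REDACTED], retrying'}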
This processor adds a field with the name of the host that sent the log. For example, hostname: 613e197f3526. Note: If the hostname already exists, the Worker throws an error and does not overwrite the existing hostname.
To set up this processor:
This processor converts the specified field into JSON objects.
To set up this processor:
Use this processor to enrich your logs with information from a reference table, which could be a local file or database.
To set up the enrichment table processor:
For this example, merchant_id is used as the source attribute and merchant_info as the target attribute.
This is the example reference table that the enrichment processor uses:
merch_id | merchant_name | city | state |
---|---|---|---|
803 | Andy’s Ottomans | Boise | Idaho |
536 | Cindy’s Couches | Boulder | Colorado |
235 | Debra’s Benches | Las Vegas | Nevada |
merch_id is set as the column name the processor uses to find the source attribute's value. Note: The source attribute's value does not have to match the column name.

If the enrichment processor receives a log with "merchant_id":"536":

- The processor looks for the value 536 in the reference table's merch_id column.
- It adds the merchant_info attribute to the log as a JSON object:

merchant_info {
"merchant_name":"Cindy's Couches",
"city":"Boulder",
"state":"Colorado"
}
Many types of logs are meant to be used for telemetry to track trends, such as KPIs, over long periods of time. Generating metrics from your logs is a cost-effective way to summarize log data from high-volume logs, such as CDN logs, VPC flow logs, firewall logs, and network logs. Use the generate metrics processor to generate either a count metric of logs that match a query or a distribution metric of a numeric value contained in the logs, such as a request duration.
Note: The metrics generated are custom metrics and billed accordingly. See Custom Metrics Billing for more information.
To set up the processor:
Click Manage Metrics to create new metrics or edit existing metrics. This opens a side panel.
You can generate these types of metrics for your logs. See the Metrics Types and Distributions documentation for more details.
Metric type | Description | Example |
---|---|---|
COUNT | Represents the total number of event occurrences in one time interval. This value can be reset to zero, but cannot be decreased. | You want to count the number of logs with status:error . |
GAUGE | Represents a snapshot of events in one time interval. | You want to measure the latest CPU utilization per host for all logs in the production environment. |
DISTRIBUTION | Represents the global statistical distribution of a set of values calculated across your entire distributed infrastructure in one time interval. | You want to measure the average time it takes for an API call to be made. |
For this status:error log example:
{"status": "error", "env": "prod", "host": "ip-172-25-222-111.ec2.internal"}
To create a count metric that counts the number of logs that contain "status":"error" and groups them by env and host, enter the following information:
Input parameters | Value |
---|---|
Filter query | @status:error |
Metric name | status_error_total |
Metric type | Count |
Group by | env, host |
For this example of an API response log:
{
"timestamp": "2018-10-15T17:01:33Z",
"method": "GET",
"status": 200,
"request_body": "{"information"}",
"response_time_seconds: 10
}
To create a distribution metric that measures the average time it takes for an API call to be made, enter the following information:
Input parameters | Value |
---|---|
Filter query | @method |
Metric name | status_200_response |
Metric type | Distribution |
Select a log attribute | response_time_seconds |
Group by | method |
Use this processor to add a field name and value of an environment variable to the log message.
To set up this processor:
Environment variables that match any of the following patterns are blocked from being added to log messages because the environment variable could contain sensitive data.
- CONNECTIONSTRING / CONNECTION-STRING / CONNECTION_STRING
- AUTH
- CERT
- CLIENTID / CLIENT-ID / CLIENT_ID
- CREDENTIALS
- DATABASEURL / DATABASE-URL / DATABASE_URL
- DBURL / DB-URL / DB_URL
- KEY
- OAUTH
- PASSWORD
- PWD
- ROOT
- SECRET
- TOKEN
- USER
The environment variable is matched to the pattern and not the literal word. For example, PASSWORD
blocks environment variables like USER_PASSWORD
and PASSWORD_SECRET
from getting added to the log messages.
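For illustration, here is a sketch of that pattern-based blocking: the check runs against the variable name, not a literal word, so any name containing a blocked pattern is rejected. This is an approximation of the behavior, not the Worker's implementation.

# Illustrative sketch: block environment variables whose names contain a sensitive pattern.
BLOCKED_PATTERNS = [
    "CONNECTIONSTRING", "CONNECTION-STRING", "CONNECTION_STRING", "AUTH", "CERT",
    "CLIENTID", "CLIENT-ID", "CLIENT_ID", "CREDENTIALS", "DATABASEURL", "DATABASE-URL",
    "DATABASE_URL", "DBURL", "DB-URL", "DB_URL", "KEY", "OAUTH", "PASSWORD", "PWD",
    "ROOT", "SECRET", "TOKEN", "USER",
]

def is_blocked(variable_name):
    name = variable_name.upper()
    return any(pattern in name for pattern in BLOCKED_PATTERNS)

print(is_blocked("USER_PASSWORD"))      # True: contains the PASSWORD (and USER) pattern
print(is_blocked("DD_OP_PIPELINE_ID"))  # False: safe to add to the log message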
For example, 0.0.0.0:9997. The Observability Pipelines Worker listens to this socket address for your HTTP client logs.

There are no environment variables to configure for Datadog Log Management.
Enter the Splunk HEC token and the base URL of the Splunk instance. See the prerequisites for more information.

The Worker passes the HEC token to the Splunk collector endpoint. After the Observability Pipelines Worker processes the logs, it sends them to the specified Splunk instance URL.

Note: The Splunk HEC destination forwards all logs to the /services/collector/event endpoint, regardless of whether the destination is configured to encode its output as JSON or raw.

Enter the Sumo Logic HTTP Collector URL. See the prerequisites for more information.
Enter the rsyslog or syslog-ng endpoint URL. For example, 127.0.0.1:9997. The Observability Pipelines Worker sends logs to this address and port.
Enter the Google Chronicle endpoint URL. For example, https://chronicle.googleapis.com.
Enter the endpoint URL for your destination. For example:

- http://CLUSTER_ID.LOCAL_HOST_IP.ip.es.io:9200
- http://<hostname.IP>:9200
- http://<hostname.IP>:9200
.docker run -i -e DD_API_KEY=<DATADOG_API_KEY> \
-e DD_OP_PIPELINE_ID=<PIPELINE_ID> \
-e DD_SITE=<DATADOG_SITE> \
-e <SOURCE_ENV_VARIABLE> \
-e <DESTINATION_ENV_VARIABLE> \
-p 8088:8088 \
datadog/observability-pipelines-worker run
The docker run command exposes the same port that the Worker is listening on. If you want to map the Worker's container port to a different port on the Docker host, use the -p | --publish option in the command:

-p 8282:8088 datadog/observability-pipelines-worker run

See Update Existing Pipelines if you want to make changes to your pipeline's configuration.
helm repo add datadog https://helm.datadoghq.com
helm repo update
helm upgrade --install opw \
-f values.yaml \
--set datadog.apiKey=<DATADOG_API_KEY> \
--set datadog.pipelineId=<PIPELINE_ID> \
--set <SOURCE_ENV_VARIABLES> \
--set <DESTINATION_ENV_VARIABLES> \
--set service.ports[0].protocol=TCP,service.ports[0].port=<SERVICE_PORT>,service.ports[0].targetPort=<TARGET_PORT> \
datadog/observability-pipelines-worker
Set <SERVICE_PORT> to the port the Worker is listening on (<TARGET_PORT>). If you want to map the Worker's pod port to a different incoming port of the Kubernetes Service, use the following service.ports[0].port and service.ports[0].targetPort values in the command:

--set service.ports[0].protocol=TCP,service.ports[0].port=8088,service.ports[0].targetPort=8282
See Update Existing Pipelines if you want to make changes to your pipeline’s configuration.
If you are running a self-hosted and self-managed Kubernetes cluster, and have defined zones with node labels using topology.kubernetes.io/zone
, then you can use the Helm chart values file as is. However, if you are not using the label topology.kubernetes.io/zone
, you need to update the topologyKey
in the values.yaml
file to match the key you are using. Or if you run your Kubernetes install without zones, remove the entire topology.kubernetes.io/zone
section.
Click Select API key to choose the Datadog API key you want to use.

Run the one-step command provided in the UI to install the Worker.

Note: The environment variables used by the Worker in /etc/default/observability-pipelines-worker are not updated when the installation script runs. If you need to make changes, update the file manually and restart the Worker.

If you prefer not to use the one-line installation script, follow these step-by-step instructions instead.
sudo apt-get update
sudo apt-get install apt-transport-https curl gnupg
Set up the deb repository and create the Datadog archive keyring:

sudo sh -c "echo 'deb [signed-by=/usr/share/keyrings/datadog-archive-keyring.gpg] https://apt.datadoghq.com/ stable observability-pipelines-worker-2' > /etc/apt/sources.list.d/datadog-observability-pipelines-worker.list"
sudo touch /usr/share/keyrings/datadog-archive-keyring.gpg
sudo chmod a+r /usr/share/keyrings/datadog-archive-keyring.gpg
curl https://keys.datadoghq.com/DATADOG_APT_KEY_CURRENT.public | sudo gpg --no-default-keyring --keyring /usr/share/keyrings/datadog-archive-keyring.gpg --import --batch
curl https://keys.datadoghq.com/DATADOG_APT_KEY_06462314.public | sudo gpg --no-default-keyring --keyring /usr/share/keyrings/datadog-archive-keyring.gpg --import --batch
curl https://keys.datadoghq.com/DATADOG_APT_KEY_F14F620E.public | sudo gpg --no-default-keyring --keyring /usr/share/keyrings/datadog-archive-keyring.gpg --import --batch
curl https://keys.datadoghq.com/DATADOG_APT_KEY_C0962C7D.public | sudo gpg --no-default-keyring --keyring /usr/share/keyrings/datadog-archive-keyring.gpg --import --batch
Update your apt repository and install the Worker:

sudo apt-get update
sudo apt-get install observability-pipelines-worker datadog-signing-keys
Add the Worker's environment variables, including your Datadog site (for example, datadoghq.com) and the source and destination environment variables:

sudo cat <<EOF > /etc/default/observability-pipelines-worker
DD_API_KEY=<DATADOG_API_KEY>
DD_OP_PIPELINE_ID=<PIPELINE_ID>
DD_SITE=<DATADOG_SITE>
<SOURCE_ENV_VARIABLES>
<DESTINATION_ENV_VARIABLES>
EOF
sudo systemctl restart observability-pipelines-worker
See Update Existing Pipelines if you want to make changes to your pipeline's configuration.

Click Select API key to choose the Datadog API key you want to use.

Run the one-step command provided in the UI to install the Worker.

Note: The environment variables used by the Worker in /etc/default/observability-pipelines-worker are not updated when the installation script runs. If you need to make changes, update the file manually and restart the Worker.

If you prefer not to use the one-line installation script, follow these step-by-step instructions instead.
Set up the rpm repository. Note: If you are running RHEL 8.1 or CentOS 8.1, use repo_gpgcheck=0 instead of repo_gpgcheck=1 in the configuration below.

cat <<EOF > /etc/yum.repos.d/datadog-observability-pipelines-worker.repo
[observability-pipelines-worker]
name = Observability Pipelines Worker
baseurl = https://yum.datadoghq.com/stable/observability-pipelines-worker-2/\$basearch/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://keys.datadoghq.com/DATADOG_RPM_KEY_CURRENT.public
https://keys.datadoghq.com/DATADOG_RPM_KEY_B01082D3.public
EOF
sudo yum makecache
sudo yum install observability-pipelines-worker
Add the Worker's environment variables, including your Datadog site (for example, datadoghq.com) and the source and destination environment variables:

sudo cat <<-EOF > /etc/default/observability-pipelines-worker
DD_API_KEY=<API_KEY>
DD_OP_PIPELINE_ID=<PIPELINE_ID>
DD_SITE=<SITE>
<SOURCE_ENV_VARIABLES>
<DESTINATION_ENV_VARIABLES>
EOF
sudo systemctl restart observability-pipelines-worker
See Update Existing Pipelines if you want to make changes to your pipeline's configuration.

Select one of the options in the dropdown menu to provide the expected log volume for the pipeline.

Option | Description |
---|---|
Unsure | Use this option if you cannot project the log volume or you want to test the Worker. This option provisions the EC2 Auto Scaling group with a maximum of 2 general purpose t4g.large instances. |
1-5 TB/day | This option provisions the EC2 Auto Scaling group with a maximum of 2 compute optimized c6g.large instances. |
5-10 TB/day | This option provisions the EC2 Auto Scaling group with a minimum of 2 and a maximum of 5 compute optimized c6g.large instances. |
>10 TB/day | Datadog recommends this option for high-volume production deployments. It provisions the EC2 Auto Scaling group with a minimum of 2 and a maximum of 10 compute optimized c6g.xlarge instances. |

Note: All other parameters are set to reasonable defaults for a Worker deployment, but you can adjust them for your use case in the AWS Console before creating the stack.

Select the AWS region you want to use to install the Worker.

Click Select API key to choose the Datadog API key you want to use.

Click Launch CloudFormation Template to go to the AWS Console, review the stack configuration, and then launch it. Double-check that the CloudFormation parameters are correct.

Select the VPC and subnet you want to use to install the Worker.

Review and check the boxes for the required IAM permissions, then click Submit to create the stack. CloudFormation handles the installation from this point: the Worker instances are launched, the necessary software is installed, and the Worker starts automatically.

Go back to the Observability Pipelines installation page and click Deploy.

See Update Existing Pipelines if you want to make changes to your pipeline's configuration.