Log Collection for AWS Lambda
Collect logs from non-Lambda resources
Logs generated by managed resources besides AWS Lambda functions can be valuable in helping identify the root cause of issues in your serverless applications. Datadog recommends you collect logs from the following AWS managed resources in your environment:
- APIs: API Gateway, AppSync, ALB
- Queues & Streams: SQS, SNS, Kinesis
- Data Stores: DynamoDB, S3, RDS
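Logs from these resources reach Datadog by subscribing their CloudWatch log groups to the Datadog Forwarder Lambda function. A minimal sketch with the AWS CLI, assuming the Forwarder is already deployed; the log group name, filter name, and Forwarder ARN are illustrative placeholders:

```shell
# Subscribe an API Gateway access-log group to the Datadog Forwarder.
# The Forwarder must already allow CloudWatch Logs to invoke it
# (lambda add-permission), which the Forwarder CloudFormation stack sets up.
aws logs put-subscription-filter \
  --log-group-name "/aws/apigateway/my-api-access-logs" \
  --filter-name "datadog-forwarder" \
  --filter-pattern "" \
  --destination-arn "arn:aws:lambda:us-east-1:123456789012:function:datadog-forwarder"
```

An empty `--filter-pattern` forwards every log event in the group.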
Configuration
Enable log collection
Log collection through the Datadog Lambda extension is enabled by default. To enable it explicitly, use the configuration for your installation method:
Serverless Framework (`serverless.yml`):

```yaml
custom:
  datadog:
    # ... other required parameters, such as the Datadog site and API key
    enableDDLogs: true
```
AWS SAM (`template.yaml`):

```yaml
Transform:
  - AWS::Serverless-2016-10-31
  - Name: DatadogServerless
    Parameters:
      # ... other required parameters, such as the Datadog site and API key
      enableDDLogs: true
```
AWS CDK:

```typescript
const datadog = new Datadog(this, "Datadog", {
  // ... other required parameters, such as the Datadog site and API key
  enableDatadogLogs: true,
});
datadog.addLambdaFunctions([<LAMBDA_FUNCTIONS>]);
```
Otherwise, set the environment variable `DD_SERVERLESS_LOGS_ENABLED` to `true` on your Lambda functions.
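For a single function, the environment variable can be set with the AWS CLI; a sketch with an illustrative function name:

```shell
# Enable extension log collection on one function.
# Note: --environment replaces the function's entire environment map,
# so include any existing variables alongside this one.
aws lambda update-function-configuration \
  --function-name "my-function" \
  --environment "Variables={DD_SERVERLESS_LOGS_ENABLED=true}"
```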
Disable log collection
If you want to stop collecting logs using the Datadog Forwarder Lambda function, remove the subscription filter from your own Lambda function’s CloudWatch log group.
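A sketch of that removal with the AWS CLI, assuming illustrative log group and filter names (use `describe-subscription-filters` first to find the actual filter name):

```shell
# List the subscription filters on the function's log group, then remove
# the one that targets the Datadog Forwarder.
aws logs describe-subscription-filters --log-group-name "/aws/lambda/my-function"
aws logs delete-subscription-filter \
  --log-group-name "/aws/lambda/my-function" \
  --filter-name "datadog-forwarder"
```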
If you want to stop collecting logs using the Datadog Lambda extension, follow the instructions below for the installation method you use:
Serverless Framework (`serverless.yml`):

```yaml
custom:
  datadog:
    # ... other required parameters, such as the Datadog site and API key
    enableDDLogs: false
```
AWS SAM (`template.yaml`):

```yaml
Transform:
  - AWS::Serverless-2016-10-31
  - Name: DatadogServerless
    Parameters:
      # ... other required parameters, such as the Datadog site and API key
      enableDDLogs: false
```
AWS CDK:

```typescript
const datadog = new Datadog(this, "Datadog", {
  // ... other required parameters, such as the Datadog site and API key
  enableDatadogLogs: false,
});
datadog.addLambdaFunctions([<LAMBDA_FUNCTIONS>]);
```
Otherwise, set the environment variable `DD_SERVERLESS_LOGS_ENABLED` to `false` on your Lambda functions.
For more information, see Log Management.
To exclude the `START` and `END` logs, set the environment variable `DD_LOGS_CONFIG_PROCESSING_RULES` to `[{"type": "exclude_at_match", "name": "exclude_start_and_end_logs", "pattern": "(START|END) RequestId"}]`. Alternatively, you can add a `datadog.yaml` file in your project root directory with the following content:
```yaml
logs_config:
  processing_rules:
    - type: exclude_at_match
      name: exclude_start_and_end_logs
      pattern: (START|END) RequestId
```
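The exclusion pattern can be sanity-checked locally before deploying; a quick sketch with `grep` (the log lines are illustrative):

```shell
# A line the rule excludes (it matches the pattern):
echo 'START RequestId: 8286a188 Version: $LATEST' | grep -Ec '(START|END) RequestId'
# A REPORT line does not match, so it is still forwarded:
echo 'REPORT RequestId: 8286a188 Duration: 3.10 ms' | grep -Ec '(START|END) RequestId' || true
```

The first command prints a match count of 1, the second a count of 0.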
Datadog recommends keeping the `REPORT` logs, as they are used to populate the invocations list in the serverless function views.
To scrub or filter other logs before sending them to Datadog, see Advanced Log Collection.
To parse and transform your logs in Datadog, see documentation for Datadog log pipelines.
Connect logs and traces
If you are using the Lambda extension to collect traces and logs, Datadog automatically adds the AWS Lambda request ID to the `aws.lambda` span under the `request_id` tag. Additionally, Lambda logs for the same request are added under the `lambda.request_id` attribute. The Datadog trace and log views are connected using the AWS Lambda request ID.
If you are using the Forwarder Lambda function to collect traces and logs, `dd.trace_id` is automatically injected into logs (enabled by the environment variable `DD_LOGS_INJECTION`). The Datadog trace and log views are connected using the Datadog trace ID. This feature is supported for most applications using a popular runtime and logger (see the support by runtime).
If you are using a runtime or custom logger that isn’t supported, follow these steps:
- When logging in JSON, you need to obtain the Datadog trace ID using `dd-trace` and add it to your logs under the `dd.trace_id` field:

  ```json
  {
    "message": "This is a log",
    "dd": {
      "trace_id": "4887065908816661012"
    }
    // ... the rest of your log
  }
  ```
- When logging in plaintext, you need to:
  1. Obtain the Datadog trace ID using `dd-trace` and add it to your log.
  2. Clone the default Lambda log pipeline, which is read-only.
  3. Enable the cloned pipeline and disable the default one.
  4. Update the Grok parser rules of the cloned pipeline to parse the Datadog trace ID into the `dd.trace_id` attribute. For example, use the rule `my_rule \[%{word:level}\]\s+dd.trace_id=%{word:dd.trace_id}.*` for logs that look like `[INFO] dd.trace_id=4887065908816661012 My log message`.
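The Grok rule above can be prototyped outside Datadog; a rough shell equivalent of the same match, using a POSIX extended regex in place of the `%{word}` matchers (rule and log line taken from the example above):

```shell
# Extract the Datadog trace ID from a plaintext log line, approximating the
# Grok rule \[%{word:level}\]\s+dd.trace_id=%{word:dd.trace_id}.*
echo '[INFO] dd.trace_id=4887065908816661012 My log message' \
  | sed -E 's/^\[[A-Za-z]+\][[:space:]]+dd\.trace_id=([0-9A-Za-z]+).*/\1/'
```

This prints `4887065908816661012`, the value the pipeline would place in the `dd.trace_id` attribute.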