---
title: Log Collection for AWS Lambda
description: Datadog, the leading service for cloud-scale monitoring.
---

# Log Collection for AWS Lambda

{% alert level="info" %}
If you are using the [Datadog Lambda extension](https://docs.datadoghq.com/serverless/libraries_integrations/extension/), log collection is **enabled by default**.
{% /alert %}

## Collect logs from non-Lambda resources{% #collect-logs-from-non-lambda-resources %}

Logs generated by AWS managed resources other than Lambda functions can help you identify the root cause of issues in your serverless applications. Datadog recommends that you [collect logs](https://docs.datadoghq.com/integrations/amazon_web_services/#log-collection) from the following AWS managed resources in your environment:

- APIs: API Gateway, AppSync, ALB
- Queues & Streams: SQS, SNS, Kinesis
- Data Stores: DynamoDB, S3, RDS

## Configuration{% #configuration %}

### Enable log collection{% #enable-log-collection %}

Log collection through the Datadog Lambda extension is enabled by default. To enable it explicitly, follow the instructions below for your installation method:

{% tab title="Serverless Framework" %}

```yaml
custom:
  datadog:
    # ... other required parameters, such as the Datadog site and API key
    enableDDLogs: true
```

{% /tab %}

{% tab title="AWS SAM" %}

```yaml
Transform:
  - AWS::Serverless-2016-10-31
  - Name: DatadogServerless
    Parameters:
      # ... other required parameters, such as the Datadog site and API key
      enableDDLogs: true
```

{% /tab %}

{% tab title="AWS CDK" %}

```typescript
const datadog = new Datadog(this, "Datadog", {
    // ... other required parameters, such as the Datadog site and API key
    enableDatadogLogs: true
});
datadog.addLambdaFunctions([<LAMBDA_FUNCTIONS>]);
```

{% /tab %}

{% tab title="Others" %}
Set the environment variable `DD_SERVERLESS_LOGS_ENABLED` to `true` on your Lambda functions.
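For example, with the AWS CLI (a sketch; `my-function` is a placeholder function name):

```shell
# Placeholder function name. Note: --environment replaces the function's
# entire set of environment variables, so include any others you already use.
aws lambda update-function-configuration \
  --function-name my-function \
  --environment "Variables={DD_SERVERLESS_LOGS_ENABLED=true}"
```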
{% /tab %}

### Disable log collection{% #disable-log-collection %}

To stop collecting logs with the Datadog Forwarder Lambda function, remove the subscription filter from your Lambda function's CloudWatch log group.

To stop collecting logs with the Datadog Lambda extension, follow the instructions below for your installation method:

{% tab title="Serverless Framework" %}

```yaml
custom:
  datadog:
    # ... other required parameters, such as the Datadog site and API key
    enableDDLogs: false
```

{% /tab %}

{% tab title="AWS SAM" %}

```yaml
Transform:
  - AWS::Serverless-2016-10-31
  - Name: DatadogServerless
    Parameters:
      # ... other required parameters, such as the Datadog site and API key
      enableDDLogs: false
```

{% /tab %}

{% tab title="AWS CDK" %}

```typescript
const datadog = new Datadog(this, "Datadog", {
    // ... other required parameters, such as the Datadog site and API key
    enableDatadogLogs: false
});
datadog.addLambdaFunctions([<LAMBDA_FUNCTIONS>]);
```

{% /tab %}

{% tab title="Others" %}
Set the environment variable `DD_SERVERLESS_LOGS_ENABLED` to `false` on your Lambda functions.
{% /tab %}

For more information, see [Log Management](https://docs.datadoghq.com/logs/).

### Filter or scrub information from logs{% #filter-or-scrub-information-from-logs %}

To exclude the `START` and `END` logs, set the environment variable `DD_LOGS_CONFIG_PROCESSING_RULES` to `[{"type": "exclude_at_match", "name": "exclude_start_and_end_logs", "pattern": "(START|END) RequestId"}]`. Alternatively, you can add a `datadog.yaml` file in your project root directory with the following content:

```yaml
logs_config:
  processing_rules:
    - type: exclude_at_match
      name: exclude_start_and_end_logs
      pattern: (START|END) RequestId
```
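As a quick sanity check, you can test the exclusion pattern locally against sample log lines (a sketch in Node.js; the sample lines are illustrative):

```javascript
// The same pattern used in the processing rule above.
const pattern = /(START|END) RequestId/;

const samples = [
  "START RequestId: 8b7c Version: $LATEST",   // excluded
  "END RequestId: 8b7c",                      // excluded
  "REPORT RequestId: 8b7c Duration: 3.2 ms",  // kept
];

for (const line of samples) {
  console.log(`${pattern.test(line) ? "exclude" : "keep"}: ${line}`);
}
```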

Datadog recommends keeping the `REPORT` logs, as they are used to populate the invocations list in the serverless function views.

To scrub or filter other logs before sending them to Datadog, see [Advanced Log Collection](https://docs.datadoghq.com/agent/logs/advanced_log_collection/).

### Parse and transform logs{% #parse-and-transform-logs %}

To parse and transform your logs in Datadog, see documentation for [Datadog log pipelines](https://docs.datadoghq.com/logs/log_configuration/pipelines/).

### Connect logs and traces{% #connect-logs-and-traces %}

If you are using the [Lambda extension](https://docs.datadoghq.com/serverless/libraries_integrations/extension/) to collect traces and logs, Datadog automatically adds the AWS Lambda request ID to the `aws.lambda` span under the `request_id` tag. Additionally, Lambda logs for the same request are added under the `lambda.request_id` attribute. The Datadog trace and log views are connected using the AWS Lambda request ID.

If you are using the [Forwarder Lambda function](https://docs.datadoghq.com/serverless/libraries_integrations/forwarder/) to collect traces and logs, `dd.trace_id` is automatically injected into logs (enabled by the environment variable `DD_LOGS_INJECTION`). The Datadog trace and log views are connected using the Datadog trace ID. This feature is supported for most applications using a popular runtime and logger (see the [support by runtime](https://docs.datadoghq.com/tracing/other_telemetry/connect_logs_and_traces/)).

If you are using a runtime or custom logger that isn't supported, follow these steps:

- When logging in JSON, you need to obtain the Datadog trace ID using `dd-trace` and add it to your logs under the `dd.trace_id` field:
  ```javascript
  {
    "message": "This is a log",
    "dd": {
      "trace_id": "4887065908816661012"
    }
    // ... the rest of your log
  }
  ```
- When logging in plaintext, you need to:
  1. Obtain the Datadog trace ID using `dd-trace` and add it to your log.
  1. Clone the default Lambda log pipeline, which is read-only.
  1. Enable the cloned pipeline and disable the default one.
  1. Update the [Grok parser](https://docs.datadoghq.com/logs/log_configuration/parsing/) rules of the cloned pipeline to parse the Datadog trace ID into the `dd.trace_id` attribute. For example, use rule `my_rule \[%{word:level}\]\s+dd.trace_id=%{word:dd.trace_id}.*` for logs that look like `[INFO] dd.trace_id=4887065908816661012 My log message`.
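For the JSON case above, building the log entry can be sketched in Node.js. Here `buildLog` is a hypothetical helper; at runtime the trace ID would come from `dd-trace`, for example via `tracer.scope().active().context().toTraceId()`:

```javascript
// Hypothetical helper: attach a Datadog trace ID to a structured log entry
// under the `dd.trace_id` field, as described above.
function buildLog(message, traceId) {
  return {
    message,
    dd: { trace_id: traceId },
    // ... the rest of your log fields
  };
}

// In a real handler, replace the literal trace ID with the one
// obtained from dd-trace for the current span.
console.log(JSON.stringify(buildLog("This is a log", "4887065908816661012")));
```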
