---
title: Configure Serverless Monitoring for AWS Lambda
description: Datadog, the leading service for cloud-scale monitoring.
breadcrumbs: >-
  Docs > Serverless > Serverless Monitoring for AWS Lambda > Configure
  Serverless Monitoring for AWS Lambda
---

# Configure Serverless Monitoring for AWS Lambda

First, [install](https://docs.datadoghq.com/serverless/installation/) Datadog Serverless Monitoring to begin collecting metrics, traces, and logs. After installation is complete, refer to the following topics to configure your installation to suit your monitoring needs.

- Connect telemetry using tags
- Collect the request and response payloads
- Collect traces from non-Lambda resources
- Configure the Datadog tracer
- Select sampling rates for ingesting APM spans
- Filter or scrub sensitive information from traces
- Enable/disable trace collection
- Connect logs and traces
- Link errors to your source code
- [Submit custom metrics](https://docs.datadoghq.com/serverless/aws_lambda/metrics/#submit-custom-metrics)
- Collect profiling data
- Send telemetry over PrivateLink or proxy
- Send telemetry to multiple Datadog organizations
- Enable FIPS compliance
- Propagate trace context over AWS resources
- Merge X-Ray and Datadog traces
- Enable AWS Lambda code signing
- Migrate to the Datadog Lambda extension
- Migrating from x86 to arm64 with the Datadog Lambda Extension
- Configure the Datadog Lambda extension for local testing
- Instrument AWS Lambda with the OpenTelemetry API
- Using Datadog Lambda Extension v67+
- Configure Auto-linking for DynamoDB PutItem
- Visualize and model AWS services correctly
- Send logs to Observability Pipelines
- Reload API key secret periodically
- Troubleshoot
- Further Reading

## Enable Threat Detection to observe attack attempts{% #enable-threat-detection-to-observe-attack-attempts %}

Get alerted on attackers targeting your serverless applications and respond quickly.

To get started, first ensure that you have [tracing enabled](https://docs.datadoghq.com/serverless/installation#installation-instructions) for your functions.

To enable threat detection, add the following environment variables to your deployment:

```yaml
environment:
  DD_SERVERLESS_APPSEC_ENABLED: true
  AWS_LAMBDA_EXEC_WRAPPER: /opt/datadog_wrapper
```

Redeploy the function and invoke it. After a few minutes, it appears in [AAP views](https://app.datadoghq.com/security/appsec?column=time&order=desc).

To see App and API Protection threat detection in action, send known attack patterns to your application. For example, send an HTTP header with value `acunetix-product` to trigger a [security scanner attack](https://docs.datadoghq.com/security/default_rules/security-scan-detected/) attempt:

```sh
curl -H 'My-AAP-Test-Header: acunetix-product' https://<YOUR_FUNCTION_URL>/<EXISTING_ROUTE>
```

A few minutes after you enable your application and send the attack patterns, **threat information appears in the [Application Signals Explorer](https://app.datadoghq.com/security/appsec?column=time&order=desc)**.

## Connect telemetry using tags{% #connect-telemetry-using-tags %}

Connect Datadog telemetry together through the use of reserved (`env`, `service`, and `version`) and custom tags. You can use these tags to navigate seamlessly across metrics, traces, and logs. Add the extra parameters below for the installation method you use.

{% tab title="Datadog CLI" %}
Ensure you are using the latest version of the [Datadog CLI](https://docs.datadoghq.com/serverless/serverless_integrations/cli) and run the `datadog-ci lambda instrument` command with appropriate extra arguments. For example:

```sh
datadog-ci lambda instrument \
    --env dev \
    --service web \
    --version v1.2.3 \
    --extra-tags "team:avengers,project:marvel"
    # ... other required arguments, such as function names
```

{% /tab %}

{% tab title="Serverless Framework" %}
Ensure you are using the latest version of the [Datadog serverless plugin](https://docs.datadoghq.com/serverless/serverless_integrations/plugin) and apply the tags using the `env`, `service`, `version` and `tags` parameters. For example:

```yaml
custom:
  datadog:
    # ... other required parameters, such as the Datadog site and API key
    env: dev
    service: web
    version: v1.2.3
    tags: "team:avengers,project:marvel"
```

By default, if you don't define `env` and `service`, the plugin automatically uses the `stage` and `service` values from the serverless application definition. To disable this feature, set `enableTags` to `false`.
{% /tab %}

{% tab title="AWS SAM" %}
Ensure you are using the latest version of the [Datadog serverless macro](https://docs.datadoghq.com/serverless/serverless_integrations/macro) and apply the tags using the `env`, `service`, `version` and `tags` parameters. For example:

```yaml
Transform:
  - AWS::Serverless-2016-10-31
  - Name: DatadogServerless
    Parameters:
      # ... other required parameters, such as the Datadog site and API key
      env: dev
      service: web
      version: v1.2.3
      tags: "team:avengers,project:marvel"
```

{% /tab %}

{% tab title="AWS CDK" %}
Ensure you are using the latest version of the [Datadog serverless cdk construct](https://github.com/DataDog/datadog-cdk-constructs) and apply the tags using the `env`, `service`, `version` and `tags` parameters. For example:

```typescript
const datadog = new DatadogLambda(this, "Datadog", {
    // ... other required parameters, such as the Datadog site and API key
    env: "dev",
    service: "web",
    version: "v1.2.3",
    tags: "team:avengers,project:marvel"
});
datadog.addLambdaFunctions([<LAMBDA_FUNCTIONS>]);
```

{% /tab %}

{% tab title="Others" %}
If you are collecting telemetry from your Lambda functions using the [Datadog Lambda extension](https://docs.datadoghq.com/serverless/libraries_integrations/extension/), set the following environment variables on your Lambda functions. For example:

```
DD_ENV: dev
DD_SERVICE: web
DD_VERSION: v1.2.3
DD_TAGS: team:avengers,project:marvel
```

If you are collecting telemetry from your Lambda functions using the [Datadog Forwarder Lambda function](https://docs.datadoghq.com/serverless/libraries_integrations/forwarder/), set the `env`, `service`, `version`, and additional tags as AWS resource tags on your Lambda functions. Ensure the `DdFetchLambdaTags` option is set to `true` on the CloudFormation stack for your Datadog Forwarder. This option defaults to true since version 3.19.0.
{% /tab %}

Datadog can also enrich the collected telemetry with existing AWS resource tags defined on your Lambda functions with a delay of a few minutes.

- If you are collecting telemetry from your Lambda functions using the [Datadog Lambda extension](https://docs.datadoghq.com/serverless/libraries_integrations/extension/), enable the [Datadog AWS integration](https://docs.datadoghq.com/integrations/amazon_web_services/). This feature is meant to enrich your telemetry with **custom** tags. Datadog reserved tags (`env`, `service`, and `version`) must be set through the corresponding environment variables (`DD_ENV`, `DD_SERVICE`, and `DD_VERSION` respectively). Reserved tags can also be set with the parameters provided by the Datadog integrations with the serverless developer tools. This feature does not work for Lambda functions deployed with container images.

- If you are collecting telemetry from your Lambda functions using the [Datadog Forwarder Lambda function](https://docs.datadoghq.com/serverless/libraries_integrations/forwarder/), set the `DdFetchLambdaTags` option to `true` on the CloudFormation stack for your Datadog Forwarder. This option defaults to true since version 3.19.0.
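The AWS resource tags themselves can be applied with the AWS CLI; a minimal sketch, assuming a placeholder function ARN and example tag values:

```sh
# Apply reserved and custom tags as AWS resource tags on a Lambda
# function (the ARN and tag values below are placeholders).
aws lambda tag-resource \
    --resource arn:aws:lambda:us-east-1:123456789012:function:my-function \
    --tags env=dev,service=web,version=v1.2.3,team=avengers
```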

## Collect the request and response payloads{% #collect-the-request-and-response-payloads %}

{% alert level="info" %}
This feature is supported for Python, Node.js, Go, Java, and .NET.
{% /alert %}

Datadog can [collect and visualize the JSON request and response payloads of AWS Lambda functions](https://www.datadoghq.com/blog/troubleshoot-lambda-function-request-response-payloads/), giving you deeper insight into your serverless applications and helping troubleshoot Lambda function failures.

This feature is disabled by default. Follow the instructions below for the installation method you use.

{% tab title="Datadog CLI" %}
Ensure you are using the latest version of the [Datadog CLI](https://docs.datadoghq.com/serverless/serverless_integrations/cli) and run the `datadog-ci lambda instrument` command with the extra `--capture-lambda-payload` argument. For example:

```sh
datadog-ci lambda instrument \
    --capture-lambda-payload true
    # ... other required arguments, such as function names
```

{% /tab %}

{% tab title="Serverless Framework" %}
Ensure you are using the latest version of the [Datadog serverless plugin](https://docs.datadoghq.com/serverless/serverless_integrations/plugin) and set `captureLambdaPayload` to `true`. For example:

```yaml
custom:
  datadog:
    # ... other required parameters, such as the Datadog site and API key
    captureLambdaPayload: true
```

{% /tab %}

{% tab title="AWS SAM" %}
Ensure you are using the latest version of the [Datadog serverless macro](https://docs.datadoghq.com/serverless/serverless_integrations/macro) and set the `captureLambdaPayload` parameter to `true`. For example:

```yaml
Transform:
  - AWS::Serverless-2016-10-31
  - Name: DatadogServerless
    Parameters:
      # ... other required parameters, such as the Datadog site and API key
      captureLambdaPayload: true
```

{% /tab %}

{% tab title="AWS CDK" %}
Ensure you are using the latest version of the [Datadog serverless cdk construct](https://github.com/DataDog/datadog-cdk-constructs) and set the `captureLambdaPayload` parameter to `true`. For example:

```typescript
const datadog = new DatadogLambda(this, "Datadog", {
    // ... other required parameters, such as the Datadog site and API key
    captureLambdaPayload: true
});
datadog.addLambdaFunctions([<LAMBDA_FUNCTIONS>]);
```

{% /tab %}

{% tab title="Others" %}
Set the environment variable `DD_CAPTURE_LAMBDA_PAYLOAD` to `true` on your Lambda functions.
{% /tab %}

To prevent any sensitive data within request or response JSON objects from being sent to Datadog, you can scrub specific parameters.

To do this, add a new file `datadog.yaml` in the same folder as your Lambda function code. Obfuscation of fields in the Lambda payload is then available through [the replace_tags block](https://docs.datadoghq.com/tracing/configure_data_security/#scrub-sensitive-data-from-your-spans) within `apm_config` settings in `datadog.yaml`:

```yaml
apm_config:
  replace_tags:
    # Replace all the occurrences of "foobar" in any tag with "REDACTED":
    - name: "*"
      pattern: "foobar"
      repl: "REDACTED"
    # Replace "auth" from request headers with an empty string
    - name: "function.request.headers.auth"
      pattern: "(?s).*"
      repl: ""
    # Replace "apiToken" from response payload with "****"
    - name: "function.response.apiToken"
      pattern: "(?s).*"
      repl: "****"
```
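Each rule is an ordinary regex substitution applied to matching span tags; a rough Python sketch of the same transformations (illustrative only, not the Agent's implementation):

```python
import re

def replace_tag(value: str, pattern: str, repl: str) -> str:
    """Apply one replace_tags-style regex substitution to a tag value."""
    return re.sub(pattern, repl, value)

# Rule 1: replace every occurrence of "foobar" with "REDACTED"
print(replace_tag("user foobar logged in", "foobar", "REDACTED"))
# → user REDACTED logged in

# Rule 2: blank out the whole auth header value with "(?s).*"
print(replace_tag("Bearer abc123", r"(?s).*", ""))
```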

As an alternative, you can also populate the `DD_APM_REPLACE_TAGS` environment variable on your Lambda function to obfuscate specific fields:

```yaml
DD_APM_REPLACE_TAGS=[
      {
        "name": "*",
        "pattern": "foobar",
        "repl": "REDACTED"
      },
      {
        "name": "function.request.headers.auth",
        "pattern": "(?s).*",
        "repl": ""
      },
      {
        "name": "function.response.apiToken",
        "pattern": "(?s).*"
        "repl": "****"
      }
]
```
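Because the variable holds raw JSON, a single syntax slip (such as a missing comma between keys) silently breaks the configuration; a quick pre-deployment sanity check (hypothetical helper, not part of Datadog tooling):

```python
import json

def validate_replace_tags(raw: str) -> list:
    """Parse a DD_APM_REPLACE_TAGS value and verify each rule's keys."""
    rules = json.loads(raw)  # raises ValueError if the JSON is malformed
    for rule in rules:
        missing = {"name", "pattern", "repl"} - rule.keys()
        if missing:
            raise ValueError(f"rule {rule} is missing keys: {missing}")
    return rules

good = '[{"name": "*", "pattern": "foobar", "repl": "REDACTED"}]'
print(len(validate_replace_tags(good)))  # → 1
```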

To collect payloads from AWS services, see [Capture Requests and Responses from AWS Services](https://github.com/DataDog/datadog-lambda-extension/issues).

## Collect traces from non-Lambda resources{% #collect-traces-from-non-lambda-resources %}

Datadog can infer APM spans based on the incoming Lambda events for the AWS managed resources that trigger the Lambda function. This helps you visualize the relationship between AWS managed resources and identify performance issues in your serverless applications. See [additional product details](https://www.datadoghq.com/blog/monitor-aws-fully-managed-services-datadog-serverless-monitoring/).

The following resources are currently supported:

- API Gateway (REST API, HTTP API, and WebSocket)
- Function URLs
- SQS
- SNS (SNS messages delivered through SQS are also supported)
- Kinesis Streams (if data is a JSON string or base64 encoded JSON string)
- EventBridge (custom events, where `Details` is a JSON string)
- S3
- DynamoDB

To disable this feature, set `DD_TRACE_MANAGED_SERVICES` to `false`.

### DD_SERVICE_MAPPING{% #dd_service_mapping %}

`DD_SERVICE_MAPPING` is an environment variable that renames upstream non-Lambda [service names](https://docs.datadoghq.com/tracing/glossary/#services). It operates on `old-service:new-service` pairs.

#### Syntax{% #syntax %}

`DD_SERVICE_MAPPING=key1:value1,key2:value2`…

There are two ways to interact with this variable:

#### Rename all services of a type{% #rename-all-services-of-a-type %}

To rename all upstream services associated with an AWS Lambda integration, use these identifiers:

| AWS Lambda Integration | DD_SERVICE_MAPPING Value              |
| ---------------------- | ------------------------------------- |
| `lambda_api_gateway`   | `"lambda_api_gateway:newServiceName"` |
| `lambda_sns`           | `"lambda_sns:newServiceName"`         |
| `lambda_sqs`           | `"lambda_sqs:newServiceName"`         |
| `lambda_s3`            | `"lambda_s3:newServiceName"`          |
| `lambda_eventbridge`   | `"lambda_eventbridge:newServiceName"` |
| `lambda_kinesis`       | `"lambda_kinesis:newServiceName"`     |
| `lambda_dynamodb`      | `"lambda_dynamodb:newServiceName"`    |
| `lambda_url`           | `"lambda_url:newServiceName"`         |
| `lambda_msk`           | `"lambda_msk:newServiceName"`         |

#### Rename specific services{% #rename-specific-services %}

For a more granular approach, use these service-specific identifiers:

| Service     | Identifier   | DD_SERVICE_MAPPING Value                           |
| ----------- | ------------ | -------------------------------------------------- |
| API Gateway | API ID       | `"r3pmxmplak:newServiceName"`                      |
| SNS         | Topic name   | `"ExampleTopic:newServiceName"`                    |
| SQS         | Queue name   | `"MyQueue:newServiceName"`                         |
| S3          | Bucket name  | `"example-bucket:newServiceName"`                  |
| EventBridge | Event source | `"eventbridge.custom.event.sender:newServiceName"` |
| Kinesis     | Stream name  | `"MyStream:newServiceName"`                        |
| DynamoDB    | Table name   | `"ExampleTableWithStream:newServiceName"`          |
| Lambda URLs | API ID       | `"a8hyhsshac:newServiceName"`                      |
| MSK         | Cluster name | `"ExampleCluster:newServiceName"`                  |

#### Examples with description{% #examples-with-description %}

| Command                                                    | Description                                                                                              |
| ---------------------------------------------------------- | -------------------------------------------------------------------------------------------------------- |
| `DD_SERVICE_MAPPING="lambda_api_gateway:new-service-name"` | Renames all `lambda_api_gateway` upstream services to `new-service-name`                                 |
| `DD_SERVICE_MAPPING="08se3mvh28:new-service-name"`         | Renames specific upstream service `08se3mvh28.execute-api.eu-west-1.amazonaws.com` to `new-service-name` |
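The variable's value is just a comma-separated list of `old:new` pairs; a rough sketch of how such a string could be parsed (illustrative helper, not the tracer's implementation):

```python
def parse_service_mapping(raw: str) -> dict:
    """Parse 'key1:value1,key2:value2' into a dict, skipping blanks."""
    mapping = {}
    for pair in raw.split(","):
        pair = pair.strip()
        if not pair:
            continue
        old, _, new = pair.partition(":")
        if old and new:
            mapping[old.strip()] = new.strip()
    return mapping

print(parse_service_mapping("lambda_api_gateway:api,MyQueue:orders"))
# → {'lambda_api_gateway': 'api', 'MyQueue': 'orders'}
```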

For renaming downstream services, see `DD_SERVICE_MAPPING` in the [tracer's config documentation](https://docs.datadoghq.com/tracing/trace_collection/library_config/).

## Configure the Datadog tracer{% #configure-the-datadog-tracer %}

To see what libraries and frameworks are automatically instrumented by the Datadog APM client, see [Compatibility Requirements for APM](https://docs.datadoghq.com/tracing/trace_collection/compatibility/). To instrument custom applications, see Datadog's APM guide for [custom instrumentation](https://docs.datadoghq.com/tracing/trace_collection/custom_instrumentation/).

## Select sampling rates for ingesting APM spans{% #select-sampling-rates-for-ingesting-apm-spans %}

To manage the [APM traced invocation sampling rate](https://docs.datadoghq.com/tracing/trace_pipeline/ingestion_controls/#configure-the-service-ingestion-rate) for serverless functions, set the `DD_TRACE_SAMPLING_RULES` environment variable on the function to `[{"sample_rate": <RATE>}]`, where `<RATE>` is a value between 0.000 (trace no Lambda function invocations) and 1.000 (trace all Lambda function invocations).

**Notes**:

- The use of `DD_TRACE_SAMPLE_RATE` is deprecated. Use `DD_TRACE_SAMPLING_RULES` instead. For instance, if you already set `DD_TRACE_SAMPLE_RATE` to `0.1`, set `DD_TRACE_SAMPLING_RULES` to `[{"sample_rate":0.1}]` instead.
- Overall traffic metrics such as `trace.<OPERATION_NAME>.hits` are calculated based on sampled invocations *only* in Lambda.

For high-throughput services, you usually don't need to collect every single request, because trace data is highly repetitive: a problem important enough to act on shows symptoms in multiple traces. [Ingestion controls](https://docs.datadoghq.com/tracing/guide/trace_ingestion_volume_control#effects-of-reducing-trace-ingestion-volume) give you the visibility you need to troubleshoot problems while remaining within budget.

The default sampling mechanism for ingestion is called [head-based sampling](https://docs.datadoghq.com/tracing/trace_pipeline/ingestion_mechanisms/?tabs=environmentvariables#head-based-sampling). The decision of whether to keep or drop a trace is made at the very beginning of the trace, at the start of the root span. This decision is then propagated to other services as part of their request context, for example as an HTTP request header. Because the decision is made at the beginning of the trace and then conveyed to all parts of the trace, you must configure the sampling rate on the root service to take effect.
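The head-based decision can be sketched as a deterministic function of the trace ID, so every service that sees the same ID reaches the same verdict; a toy Python sketch (illustrative only, not Datadog's actual sampling algorithm):

```python
# Toy multiplicative-hash sampler; the constant is arbitrary and the
# logic is only meant to show a deterministic head-based decision.
KNUTH = 1111111111111111111
MAX_ID = 2**64

def keep_trace(trace_id: int, sample_rate: float) -> bool:
    """Decide once, from the trace ID, whether to keep the whole trace."""
    return (trace_id * KNUTH) % MAX_ID < sample_rate * MAX_ID

# The root service makes this decision and propagates it to downstream
# services in the request context, so they never re-decide.
print(keep_trace(4887065908816661012, 1.0))  # → True
```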

After spans have been ingested by Datadog, the Datadog Intelligent Retention Filter indexes a proportion of traces to help you monitor the health of your applications. You can also define custom [retention filters](https://docs.datadoghq.com/tracing/trace_pipeline/trace_retention/) to index trace data you want to keep for longer to support your organization's goals.

Learn more about the [Datadog Trace Pipeline](https://docs.datadoghq.com/tracing/trace_pipeline/).

## Filter or scrub sensitive information from traces{% #filter-or-scrub-sensitive-information-from-traces %}

To filter traces before sending them to Datadog, see [Ignoring Unwanted Resources in APM](https://docs.datadoghq.com/tracing/guide/ignoring_apm_resources/).

To scrub trace attributes for data security, see [Configure the Datadog Agent or Tracer for Data Security](https://docs.datadoghq.com/tracing/configure_data_security/).

## Enable/disable trace collection{% #enabledisable-trace-collection %}

Trace collection through the Datadog Lambda extension is enabled by default.

If you want to start collecting traces from your Lambda functions, apply the configurations below:

{% tab title="Datadog CLI" %}

```sh
datadog-ci lambda instrument \
    --tracing true
    # ... other required arguments, such as function names
```

{% /tab %}

{% tab title="Serverless Framework" %}

```yaml
custom:
  datadog:
    # ... other required parameters, such as the Datadog site and API key
    enableDDTracing: true
```

{% /tab %}

{% tab title="AWS SAM" %}

```yaml
Transform:
  - AWS::Serverless-2016-10-31
  - Name: DatadogServerless
    Parameters:
      # ... other required parameters, such as the Datadog site and API key
      enableDDTracing: true
```

{% /tab %}

{% tab title="AWS CDK" %}

```typescript
const datadog = new DatadogLambda(this, "Datadog", {
    // ... other required parameters, such as the Datadog site and API key
    enableDatadogTracing: true
});
datadog.addLambdaFunctions([<LAMBDA_FUNCTIONS>]);
```

{% /tab %}

{% tab title="Others" %}
Set the environment variable `DD_TRACE_ENABLED` to `true` on your Lambda functions.
{% /tab %}

#### Disable trace collection{% #disable-trace-collection %}

If you want to stop collecting traces from your Lambda functions, apply the configurations below:

{% tab title="Datadog CLI" %}

```sh
datadog-ci lambda instrument \
    --tracing false
    # ... other required arguments, such as function names
```

{% /tab %}

{% tab title="Serverless Framework" %}

```yaml
custom:
  datadog:
    # ... other required parameters, such as the Datadog site and API key
    enableDDTracing: false
```

{% /tab %}

{% tab title="AWS SAM" %}

```yaml
Transform:
  - AWS::Serverless-2016-10-31
  - Name: DatadogServerless
    Parameters:
      # ... other required parameters, such as the Datadog site and API key
      enableDDTracing: false
```

{% /tab %}

{% tab title="AWS CDK" %}

```typescript
const datadog = new DatadogLambda(this, "Datadog", {
    // ... other required parameters, such as the Datadog site and API key
    enableDatadogTracing: false
});
datadog.addLambdaFunctions([<LAMBDA_FUNCTIONS>]);
```

{% /tab %}

{% tab title="Others" %}
Set the environment variable `DD_TRACE_ENABLED` to `false` on your Lambda functions.
{% /tab %}

## Connect logs and traces{% #connect-logs-and-traces %}

If you are using the [Lambda extension](https://docs.datadoghq.com/serverless/libraries_integrations/extension/) to collect traces and logs, Datadog automatically adds the AWS Lambda request ID to the `aws.lambda` span under the `request_id` tag. Additionally, Lambda logs for the same request are added under the `lambda.request_id` attribute. The Datadog trace and log views are connected using the AWS Lambda request ID.

If you are using the [Forwarder Lambda function](https://docs.datadoghq.com/serverless/libraries_integrations/forwarder/) to collect traces and logs, `dd.trace_id` is automatically injected into logs (enabled by default with the environment variable `DD_LOGS_INJECTION`). The Datadog trace and log views are connected using the Datadog trace ID. This feature is supported for most applications using a popular runtime and logger (see the [support by runtime](https://docs.datadoghq.com/tracing/other_telemetry/connect_logs_and_traces/)).

If you are using a runtime or custom logger that isn't supported, follow these steps:

- When logging in JSON, you need to obtain the Datadog trace ID using `dd-trace` and add it to your logs under the `dd.trace_id` field:
  ```javascript
  {
    "message": "This is a log",
    "dd": {
      "trace_id": "4887065908816661012"
    }
    // ... the rest of your log
  }
  ```
- When logging in plaintext, you need to:
  1. Obtain the Datadog trace ID using `dd-trace` and add it to your log.
  1. Clone the default Lambda log pipeline, which is read-only.
  1. Enable the cloned pipeline and disable the default one.
  1. Update the [Grok parser](https://docs.datadoghq.com/logs/log_configuration/parsing/) rules of the cloned pipeline to parse the Datadog trace ID into the `dd.trace_id` attribute. For example, use rule `my_rule \[%{word:level}\]\s+dd.trace_id=%{word:dd.trace_id}.*` for logs that look like `[INFO] dd.trace_id=4887065908816661012 My log message`.
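The Grok rule above corresponds to an ordinary regular expression; a quick Python check of the equivalent pattern against the sample log line (illustrative only):

```python
import re

# Mirrors the Grok rule: \[%{word:level}\]\s+dd.trace_id=%{word:dd.trace_id}.*
LOG_PATTERN = re.compile(r"\[(?P<level>\w+)\]\s+dd\.trace_id=(?P<trace_id>\w+).*")

match = LOG_PATTERN.match("[INFO] dd.trace_id=4887065908816661012 My log message")
print(match.group("level"), match.group("trace_id"))
# → INFO 4887065908816661012
```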

## Link errors to your source code{% #link-errors-to-your-source-code %}

[Datadog source code integration](https://docs.datadoghq.com/integrations/guide/source-code-integration) allows you to link your telemetry (such as stack traces) to the source code of your Lambda functions in your Git repositories.

For instructions on setting up the source code integration on your serverless applications, see the [Embed Git information in your build artifacts section](https://docs.datadoghq.com/integrations/guide/source-code-integration/?tab=go#serverless).

## Collect profiling data{% #collect-profiling-data %}

Datadog's [Continuous Profiler](https://docs.datadoghq.com/profiler/) is available in Preview for Python tracer version 4.62.0 and layer version 62 and later. This optional feature is enabled by setting the `DD_PROFILING_ENABLED` environment variable to `true`.

The Continuous Profiler works by spawning a thread that periodically takes a snapshot of the CPU and heap of all running Python code. This can include the profiler itself. If you want the profiler to ignore itself, set `DD_PROFILING_IGNORE_PROFILER` to `true`.

## Send telemetry over PrivateLink or proxy{% #send-telemetry-over-privatelink-or-proxy %}

The Datadog Lambda Extension needs access to the public internet to send data to Datadog. If your Lambda functions are deployed in a VPC without access to the public internet, you can [send data over AWS PrivateLink](https://docs.datadoghq.com/agent/guide/private-link/) to the `datadoghq.com` [Datadog site](https://docs.datadoghq.com/getting_started/site/), or [send data over a proxy](https://docs.datadoghq.com/agent/proxy/) for all other sites.

If you are using the Datadog Forwarder, follow these [instructions](https://github.com/DataDog/datadog-serverless-functions/tree/master/aws/logs_monitoring#aws-privatelink-support).

## Send telemetry to multiple Datadog organizations{% #send-telemetry-to-multiple-datadog-organizations %}

If you wish to send data to multiple organizations, you can enable dual shipping using a plaintext API key, AWS Secrets Manager, or AWS KMS.

{% tab title="Plaintext API Key" %}
You can enable dual shipping using a plaintext API key by setting the following environment variables on your Lambda function.

```bash
# Enable dual shipping for metrics
DD_ADDITIONAL_ENDPOINTS={"https://app.datadoghq.com": ["<your_api_key_2>", "<your_api_key_3>"], "https://app.datadoghq.eu": ["<your_api_key_4>"]}
# Enable dual shipping for APM (traces)
DD_APM_ADDITIONAL_ENDPOINTS={"https://trace.agent.datadoghq.com": ["<your_api_key_2>", "<your_api_key_3>"], "https://trace.agent.datadoghq.eu": ["<your_api_key_4>"]}
# Enable dual shipping for APM (profiling)
DD_APM_PROFILING_ADDITIONAL_ENDPOINTS={"https://trace.agent.datadoghq.com": ["<your_api_key_2>", "<your_api_key_3>"], "https://trace.agent.datadoghq.eu": ["<your_api_key_4>"]}
# Enable dual shipping for logs
DD_LOGS_CONFIG_FORCE_USE_HTTP=true
DD_LOGS_CONFIG_ADDITIONAL_ENDPOINTS=[{"api_key": "<your_api_key_2>", "Host": "agent-http-intake.logs.datadoghq.com", "Port": 443, "is_reliable": true}]
```

{% /tab %}

{% tab title="AWS Secrets Manager" %}
The Datadog Extension supports retrieving [AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html) values automatically for any environment variables suffixed with `_SECRET_ARN`. You can use this to securely store your environment variables in Secrets Manager and dual ship with Datadog.

1. Set the environment variable `DD_LOGS_CONFIG_FORCE_USE_HTTP` on your Lambda function.
1. Add the `secretsmanager:GetSecretValue` permission to your Lambda function IAM role permissions.
1. Create a new secret on Secrets Manager to store the dual shipping metrics environment variable. The contents should be similar to `{"https://app.datadoghq.com": ["<your_api_key_2>", "<your_api_key_3>"], "https://app.datadoghq.eu": ["<your_api_key_4>"]}`.
1. Set the environment variable `DD_ADDITIONAL_ENDPOINTS_SECRET_ARN` on your Lambda function to the ARN from the aforementioned secret.
1. Create a new secret on Secrets Manager to store the dual shipping APM (traces) environment variable. The contents should be **similar** to `{"https://trace.agent.datadoghq.com": ["<your_api_key_2>", "<your_api_key_3>"], "https://trace.agent.datadoghq.eu": ["<your_api_key_4>"]}`.
1. Set the environment variable `DD_APM_ADDITIONAL_ENDPOINTS_SECRET_ARN` on your Lambda function equal to the ARN from the aforementioned secret.
1. Create a new secret on Secrets Manager to store the dual shipping APM (profiling) environment variable. The contents should be **similar** to `{"https://trace.agent.datadoghq.com": ["<your_api_key_2>", "<your_api_key_3>"], "https://trace.agent.datadoghq.eu": ["<your_api_key_4>"]}`.
1. Set the environment variable `DD_APM_PROFILING_ADDITIONAL_ENDPOINTS_SECRET_ARN` on your Lambda function equal to the ARN from the aforementioned secret.
1. Create a new secret on Secrets Manager to store the dual shipping logs environment variable. The contents should be **similar** to `[{"api_key": "<your_api_key_2>", "Host": "agent-http-intake.logs.datadoghq.com", "Port": 443, "is_reliable": true}]`.
1. Set the environment variable `DD_LOGS_CONFIG_ADDITIONAL_ENDPOINTS_SECRET_ARN` on your Lambda function equal to the ARN from the aforementioned secret.

{% /tab %}

{% tab title="AWS KMS" %}
The Datadog Extension supports decrypting [AWS KMS](https://docs.aws.amazon.com/kms/) values automatically for any environment variables suffixed with `_KMS_ENCRYPTED`. You can use this to securely store your environment variables in KMS and dual ship with Datadog.

1. Set the environment variable `DD_LOGS_CONFIG_FORCE_USE_HTTP=true` on your Lambda function.
1. Add the `kms:GenerateDataKey` and `kms:Decrypt` permissions to your Lambda function IAM role permissions.
1. For dual shipping metrics, encrypt `{"https://app.datadoghq.com": ["<your_api_key_2>", "<your_api_key_3>"], "https://app.datadoghq.eu": ["<your_api_key_4>"]}` using KMS and set the `DD_ADDITIONAL_ENDPOINTS_KMS_ENCRYPTED` environment variable equal to its value.
1. For dual shipping traces, encrypt `{"https://trace.agent.datadoghq.com": ["<your_api_key_2>", "<your_api_key_3>"], "https://trace.agent.datadoghq.eu": ["<your_api_key_4>"]}` using KMS and set the `DD_APM_ADDITIONAL_ENDPOINTS_KMS_ENCRYPTED` environment variable equal to its value.
1. For dual shipping profiling, encrypt `{"https://trace.agent.datadoghq.com": ["<your_api_key_2>", "<your_api_key_3>"], "https://trace.agent.datadoghq.eu": ["<your_api_key_4>"]}` using KMS and set the `DD_APM_PROFILING_ADDITIONAL_ENDPOINTS_KMS_ENCRYPTED` environment variable equal to its value.
1. For dual shipping logs, encrypt `[{"api_key": "<your_api_key_2>", "Host": "agent-http-intake.logs.datadoghq.com", "Port": 443, "is_reliable": true}]` using KMS and set the `DD_LOGS_CONFIG_ADDITIONAL_ENDPOINTS_KMS_ENCRYPTED` environment variable equal to its value.

{% /tab %}

For more advanced usage, see the [Dual Shipping guide](https://docs.datadoghq.com/agent/guide/dual-shipping/).

## Enable FIPS compliance{% #enable-fips-compliance %}

{% alert level="info" %}
For a complete overview of FIPS compliance for AWS Lambda functions, refer to the dedicated [AWS Lambda FIPS Compliance](https://docs.datadoghq.com/serverless/aws_lambda/fips-compliance) page.
{% /alert %}

To enable FIPS compliance for AWS Lambda functions, follow these steps:

1. Use a FIPS-compliant extension layer by referencing the appropriate ARN:

{% tab title="AWS GovCLoud" %}

```sh
arn:aws-us-gov:lambda:<AWS_REGION>:002406178527:layer:Datadog-Extension-FIPS:94
arn:aws-us-gov:lambda:<AWS_REGION>:002406178527:layer:Datadog-Extension-ARM-FIPS:94
```

{% /tab %}

{% tab title="AWS Commercial" %}

```sh
arn:aws:lambda:<AWS_REGION>:464622532012:layer:Datadog-Extension-FIPS:94
arn:aws:lambda:<AWS_REGION>:464622532012:layer:Datadog-Extension-ARM-FIPS:94
```

{% /tab %}

For Lambda functions using Python, JavaScript, or Go, set the environment variable `DD_LAMBDA_FIPS_MODE` to `true`. This environment variable:

- Makes the Lambda metric helper functions require the FIPS-compliant extension for metric submission
- Makes API key lookups use AWS FIPS endpoints
- Is enabled by default in GovCloud environments

For Lambda functions using Ruby, .NET, or Java, no additional environment variable configuration is needed.

For complete end-to-end FIPS compliance, configure your Lambda function to use the US1-FED Datadog site by setting `DD_SITE` to `ddog-gov.com`.

**Note**: While the FIPS-compliant Lambda components work with any Datadog site, only the US1-FED site has FIPS-compliant intake endpoints.
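Putting these steps together, a minimal sketch for a Python function in a commercial region might look like the following. The function name, region, and secret ARN are placeholders:

```shell
# Attach the FIPS-compliant extension layer and set the FIPS-related
# environment variables. Note: --layers replaces the function's layer
# list, so include your runtime's Datadog library layer as well.
aws lambda update-function-configuration \
  --function-name my-function \
  --layers "arn:aws:lambda:us-east-1:464622532012:layer:Datadog-Extension-FIPS:94" \
  --environment "Variables={DD_LAMBDA_FIPS_MODE=true,DD_SITE=ddog-gov.com,DD_API_KEY_SECRET_ARN=<YOUR_SECRET_ARN>}"
```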

## Propagate trace context over AWS resources{% #propagate-trace-context-over-aws-resources %}

Datadog automatically injects the trace context into outgoing AWS SDK requests and extracts the trace context from the Lambda event. This enables Datadog to trace a request or transaction over distributed services. See [Serverless Trace Propagation](https://docs.datadoghq.com/serverless/distributed_tracing/#trace-propagation).

## Merge X-Ray and Datadog traces{% #merge-x-ray-and-datadog-traces %}

AWS X-Ray supports tracing through certain AWS managed services such as AppSync and Step Functions, which is not supported by Datadog APM natively. You can enable the [Datadog X-Ray integration](https://docs.datadoghq.com/integrations/amazon_xray/) and merge the X-Ray traces with the Datadog native traces. See [additional details](https://docs.datadoghq.com/serverless/distributed_tracing/#trace-merging).

## Enable AWS Lambda code signing{% #enable-aws-lambda-code-signing %}

[Code signing for AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/configuration-codesigning.html) helps to ensure that only trusted code is deployed from your Lambda functions to AWS. When you enable code signing on your functions, AWS validates that all of the code in your deployments is signed by a trusted source, which you define from your code signing configuration.

If your Lambda functions are configured to use code signing, you must add Datadog's Signing Profile ARN to your function's code signing configuration before you can deploy Lambda functions using Lambda Layers published by Datadog.

Datadog's Signing Profile ARN:

```
arn:aws:signer:us-east-1:464622532012:/signing-profiles/DatadogLambdaSigningProfile/9vMI9ZAGLc
```
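As a sketch, allowing Datadog's signing profile with the AWS CLI could look like the following. If you already have a code signing configuration attached to your function, use `update-code-signing-config` instead and keep your existing publisher ARNs:

```shell
# Create a code signing configuration that trusts Datadog's signing
# profile, warning (rather than blocking) on untrusted artifacts.
aws lambda create-code-signing-config \
  --description "Allow code signed by Datadog" \
  --allowed-publishers SigningProfileVersionArns="arn:aws:signer:us-east-1:464622532012:/signing-profiles/DatadogLambdaSigningProfile/9vMI9ZAGLc" \
  --code-signing-policies UntrustedArtifactOnDeployment=Warn
```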

## Migrate to the Datadog Lambda extension{% #migrate-to-the-datadog-lambda-extension %}

Datadog can collect the monitoring data from your Lambda functions either using the [Forwarder Lambda function](https://docs.datadoghq.com/serverless/libraries_integrations/forwarder/) or the [Lambda extension](https://docs.datadoghq.com/serverless/libraries_integrations/extension/). Datadog recommends the Lambda extension for new installations. If you are unsure, see [Deciding to migrate to the Datadog Lambda extension](https://docs.datadoghq.com/serverless/guide/extension_motivation/).

To migrate, compare the [installation instructions using the Datadog Lambda Extension](https://docs.datadoghq.com/serverless/installation/) against the [instructions using the Datadog Forwarder](https://docs.datadoghq.com/serverless/guide#install-using-the-datadog-forwarder). For your convenience, the key differences are summarized below.

**Note**: Datadog recommends migrating your dev and staging applications first and migrating production applications one by one.

{% alert level="info" %}
The Datadog Lambda extension enables log collection by default. If you are migrating from the Forwarder to the extension, ensure that you remove your log subscription. Otherwise, you may see duplicate logs.
{% /alert %}

{% tab title="Datadog CLI" %}

1. Upgrade `@datadog/datadog-ci` to the latest version
1. Update the `--layer-version` argument and set it to the latest version for your runtime.
1. Set the `--extension-version` argument to the latest extension version. The latest extension version is `94`.
1. Set the required environment variables `DATADOG_SITE` and `DATADOG_API_KEY_SECRET_ARN`.
1. Remove the `--forwarder` argument.
1. If you configured the Datadog AWS integration to automatically subscribe the Forwarder to Lambda log groups, disable that after you migrate *all* the Lambda functions in that region.
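For example, the migration command for a single function might look like this sketch. The function name, region, secret ARN, and runtime layer version are placeholders:

```shell
export DATADOG_SITE="datadoghq.com"
export DATADOG_API_KEY_SECRET_ARN="arn:aws:secretsmanager:us-east-1:123456789012:secret:DdApiKeySecret-abc123"

# Re-instrument with the extension; note the --forwarder argument
# is no longer passed.
datadog-ci lambda instrument \
  --function my-function \
  --region us-east-1 \
  --layer-version <LATEST_RUNTIME_LAYER_VERSION> \
  --extension-version 94
```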

{% /tab %}

{% tab title="Serverless Framework" %}

1. Upgrade `serverless-plugin-datadog` to the latest version, which installs the Datadog Lambda Extension by default, unless you set `addExtension` to `false`.
1. Set the required parameters `site` and `apiKeySecretArn`.
1. Set the `env`, `service`, and `version` parameters if you previously set them as Lambda resource tags. When using the extension, the plugin automatically sets them through the Datadog reserved environment variables instead, such as `DD_ENV`.
1. Remove the `forwarderArn` parameter, unless you want to keep the Forwarder for collecting logs from non-Lambda resources and you have `subscribeToApiGatewayLogs`, `subscribeToHttpApiLogs`, or `subscribeToWebsocketLogs` set to `true`.
1. If you configured the Datadog AWS integration to automatically subscribe the Forwarder to Lambda log groups, disable that after you migrate *all* the Lambda functions in that region.

{% /tab %}

{% tab title="AWS SAM" %}

1. Update the `datadog-serverless-macro` CloudFormation stack to pick up the latest version.
1. Set the `extensionLayerVersion` parameter to the latest extension version. The latest extension version is `94`.
1. Set the required parameters `site` and `apiKeySecretArn`.
1. Remove the `forwarderArn` parameter.
1. If you configured the Datadog AWS integration to automatically subscribe the Forwarder to Lambda log groups, disable that after you migrate *all* the Lambda functions in that region.

{% /tab %}

{% tab title="AWS CDK" %}

1. Upgrade `datadog-cdk-constructs` or `datadog-cdk-constructs-v2` to the latest version.
1. Set the `extensionLayerVersion` parameter to the latest extension version. The latest extension version is `94`.
1. Set the required parameters `site` and `apiKeySecretArn`.
1. Set the `env`, `service`, and `version` parameters if you previously set them as Lambda resource tags. When using the extension, the construct automatically sets them through the Datadog reserved environment variables instead, such as `DD_ENV`.
1. Remove the `forwarderArn` parameter.
1. If you configured the Datadog AWS integration to automatically subscribe the Forwarder to Lambda log groups, disable that after you migrate *all* the Lambda functions in that region.

{% /tab %}

{% tab title="Others" %}

1. Upgrade the Datadog Lambda library layer for your runtime to the latest version.
1. Install the latest version of the Datadog Lambda extension.
1. Set the required environment variables `DD_SITE` and `DD_API_KEY_SECRET_ARN`.
1. Set the `DD_ENV`, `DD_SERVICE`, and `DD_VERSION` environment variables if you previously set them as Lambda resource tags.
1. Remove the subscription filter that streams logs from your Lambda function's log group to the Datadog Forwarder.
1. If you configured the Datadog AWS integration to automatically subscribe the Forwarder to Lambda log groups, disable that after you migrate *all* the Lambda functions in that region.

{% /tab %}

## Migrating between x86 and arm64 with the Datadog Lambda Extension{% #migrating-between-x86-to-arm64-with-the-datadog-lambda-extension %}

The Datadog Extension is a compiled binary, available in both x86 and arm64 variants. If you are migrating a Lambda function from x86 to arm64 (or from arm64 to x86) with a deployment tool such as CDK, Serverless Framework, or SAM, ensure that your service integrations (such as API Gateway, SNS, or Kinesis) are configured to use Lambda function versions or aliases. Otherwise, the function may be unavailable for about ten seconds during deployment.

This happens because migrating a Lambda function between architectures consists of two parallel API calls, `updateFunctionCode` and `updateFunctionConfiguration`. During these calls, there is a brief window where the `updateFunctionCode` call has completed and the code is updated to the new architecture, while the `updateFunctionConfiguration` call has not yet completed, so the Extension layer for the old architecture is still configured.

If you cannot use function versions or aliases, Datadog recommends configuring the [Datadog Forwarder](https://docs.datadoghq.com/serverless/guide#install-using-the-datadog-forwarder) during the architecture migration process.

## Configure the Datadog Lambda extension for local testing{% #configure-the-datadog-lambda-extension-for-local-testing %}

Not all Lambda emulators support the AWS Lambda Telemetry API. To test your Lambda function's container image locally with the Datadog Lambda extension installed, you need to set `DD_SERVERLESS_FLUSH_STRATEGY` to `periodically,1` in your local testing environment. Otherwise, the extension waits for responses from the AWS Lambda Telemetry API and blocks the invocation.
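For example, when invoking a container image locally through the AWS Lambda Runtime Interface Emulator, the flush strategy can be passed as an environment variable. The image name, port mapping, and API key are placeholders:

```shell
# Run the function image locally with an aggressive flush strategy so
# the extension does not block waiting on the Telemetry API.
docker run --rm -p 9000:8080 \
  -e DD_SERVERLESS_FLUSH_STRATEGY="periodically,1" \
  -e DD_API_KEY="<YOUR_API_KEY>" \
  my-lambda-image:latest
```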

## Instrument AWS Lambda with the OpenTelemetry API{% #instrument-aws-lambda-with-the-opentelemetry-api %}

The Datadog tracing library, which is included in the Datadog Lambda Extension upon installation, accepts the spans and traces generated by OpenTelemetry-instrumented code, processes the telemetry, and sends it to Datadog.

You can use this approach if, for example, your code has already been instrumented with the OpenTelemetry API. You may also use this approach if you want to instrument using vendor-agnostic code with the OpenTelemetry API while still gaining the benefits of using the Datadog tracing libraries.

To instrument AWS Lambda with the OpenTelemetry API, set the environment variable `DD_TRACE_OTEL_ENABLED` to `true`. See [Custom instrumentation with the OpenTelemetry API](https://docs.datadoghq.com/tracing/trace_collection/otel_instrumentation/) for more details.

## Using Datadog Lambda Extension v67+{% #using-datadog-lambda-extension-v67 %}

Version 67+ of [the Datadog Extension](https://github.com/DataDog/datadog-lambda-extension) is optimized to significantly reduce cold start duration. To use the optimized extension, set the `DD_SERVERLESS_APPSEC_ENABLED` environment variable to `false`. When the `DD_SERVERLESS_APPSEC_ENABLED` environment variable is set to `true`, the Datadog Extension defaults to the fully compatible older version. You can also force your extension to use the older version by setting `DD_EXTENSION_VERSION` to `compatibility`. Datadog encourages you to report any feedback or bugs by adding an [issue on GitHub](https://github.com/DataDog/datadog-lambda-extension/issues) and tagging your issue with `version/next`.

## Configure Auto-linking for DynamoDB PutItem{% #configure-auto-linking-for-dynamodb-putitem %}

*Available for Python and Node.js runtimes*. When segments of your asynchronous requests cannot propagate trace context, Datadog's [Span Auto-linking](https://docs.datadoghq.com/serverless/aws_lambda/distributed_tracing/#span-auto-linking) feature automatically detects linked spans. To enable Span Auto-linking for [DynamoDB Change Streams](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html)' `PutItem` operation, configure primary key names for your tables.

{% tab title="Python" %}

```python
ddtrace.config.botocore['dynamodb_primary_key_names_for_tables'] = {
    'table_name': {'key1', 'key2'},
    'other_table': {'other_key'},
}
```

{% /tab %}

{% tab title="Node.js" %}

```js
// Initialize the tracer with the configuration
const tracer = require('dd-trace').init({
  dynamoDb: {
    tablePrimaryKeys: {
      'table_name': ['key1', 'key2'],
      'other_table': ['other_key']
    }
  }
})
```

{% /tab %}

{% tab title="Environment variable" %}

```sh
export DD_BOTOCORE_DYNAMODB_TABLE_PRIMARY_KEYS='{
    "table_name": ["key1", "key2"],
    "other_table": ["other_key"]
}'
```

{% /tab %}

This enables DynamoDB `PutItem` calls to be instrumented with span pointers. Many DynamoDB API calls do not include the item's primary key fields as separate values, so you must provide them to the tracer separately. The configuration is a dictionary or object keyed by table name (as a string); each value is the set of primary key field names (as strings) for that table. Each set has exactly one or two elements, depending on the table's primary key schema.

## Visualize and model AWS services by resource name{% #visualize-and-model-aws-services-by-resource-name %}

Starting with these versions of the [Node.js](https://github.com/DataDog/datadog-lambda-js/releases/tag/v12.127.0), [Python](https://github.com/DataDog/datadog-lambda-python/releases/tag/v8.113.0), and [Java](https://github.com/DataDog/datadog-lambda-java/releases/tag/v24) Lambda layers, AWS managed services are correctly named, modeled, and visualized.

Service names reflect the actual AWS resource name rather than only the AWS service:

- `aws.lambda` → `[function_name]`
- `aws.dynamodb` → `[table_name]`
- `aws.sns` → `[topic_name]`
- `aws.sqs` → `[queue_name]`
- `aws.kinesis` → `[stream_name]`
- `aws.s3` → `[bucket_name]`
- `aws.eventbridge` → `[event_name]`

You may prefer the older service representation model if your dashboards and monitors rely on the legacy naming convention. To restore the previous behavior, set the environment variable `DD_TRACE_AWS_SERVICE_REPRESENTATION_ENABLED` to `false`.

Datadog recommends the updated service modeling configuration.

## Send logs to Observability Pipelines{% #send-logs-to-observability-pipelines %}

Version 87+ of the Datadog Lambda Extension allows users to send logs to Observability Pipelines.

To enable this feature, set these environment variables:

- `DD_OBSERVABILITY_PIPELINES_WORKER_LOGS_ENABLED`: `true`
- `DD_OBSERVABILITY_PIPELINES_WORKER_LOGS_URL`: `<YOUR_OBSERVABILITY_PIPELINE_URL>`

See [Send Datadog Lambda Extension Forwarder Logs to Observability Pipelines](https://docs.datadoghq.com/observability_pipelines/sources/lambda_extension/) for more information.

## Reload API key secret periodically{% #reload-api-key-secret-periodically %}

If you specify the Datadog API key using `DD_API_KEY_SECRET_ARN`, you can also set `DD_API_KEY_SECRET_RELOAD_INTERVAL` to periodically reload the secret. For example, if you set `DD_API_KEY_SECRET_RELOAD_INTERVAL` to `43200`, then the secret is reloaded when the API key is needed to send data, and it has been more than 43200 seconds since the last load.

Example use case: For security, every day (86400 seconds), the API key is rotated and the secret is updated to the new key, and the old API key is kept valid for another day as a grace period. In this case, you can set `DD_API_KEY_SECRET_RELOAD_INTERVAL` to `43200`, so the API key is reloaded during the grace period of the old key.

This is available for version 88+ of the Datadog Lambda Extension.
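The rotation example above translates to a function configuration along these lines. The function name and secret ARN are placeholders, and note that `--environment` replaces the function's existing environment variables:

```shell
# Rotation every 86400s with a one-day grace period means a 43200s
# (12-hour) reload interval picks up the new key well before the old
# one expires.
aws lambda update-function-configuration \
  --function-name my-function \
  --environment "Variables={DD_API_KEY_SECRET_ARN=<YOUR_SECRET_ARN>,DD_API_KEY_SECRET_RELOAD_INTERVAL=43200}"
```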

## Troubleshoot{% #troubleshoot %}

If you have trouble configuring your installations, set the environment variable `DD_LOG_LEVEL` to `debug` for debugging logs. For additional troubleshooting tips, see the [serverless monitoring troubleshooting guide](https://docs.datadoghq.com/serverless/guide/troubleshoot_serverless_monitoring/).

## Further Reading{% #further-reading %}

- [Install Serverless Monitoring for AWS Lambda](https://docs.datadoghq.com/serverless/installation/)
- [Troubleshoot Serverless Monitoring for AWS Lambda](https://docs.datadoghq.com/serverless/troubleshooting/)
- [Datadog GitHub integration](https://docs.datadoghq.com/integrations/github)
