Amazon Lambda

Overview

Amazon Lambda is a compute service that runs code in response to events and automatically manages the compute resources required by that code.

Enable this integration to begin collecting CloudWatch metrics. This page also describes how to set up custom metrics, logging, and tracing for your Lambda functions.

Setup

Installation

If you haven’t already, set up the Amazon Web Services integration first.

Metric collection

AWS Lambda metrics

  1. In the AWS integration tile, ensure that Lambda is checked under metric collection.
  2. Add the following permissions to your Datadog IAM policy to collect Amazon Lambda metrics. For more information on Lambda policies, review the documentation on the AWS website.

    AWS Permission   | Description
    lambda:List*     | List Lambda functions, metadata, and tags.
    tag:GetResources | Get custom tags applied to Lambda functions.
  3. Install the Datadog - AWS Lambda integration.

Once this is completed, view all of your Lambda Functions in the Datadog Serverless view. This page brings together metrics, traces, and logs from your AWS Lambda functions running serverless applications into one view. Detailed documentation on this feature can be found in the Datadog Serverless documentation.

Real-time enhanced Lambda metrics

Datadog generates real-time Lambda runtime metrics out-of-the-box for Node.js, Python, and Ruby runtimes.

Using the Datadog Lambda Layers and Datadog Forwarder, Datadog can generate metrics with low latency, several second granularity, and detailed metadata for cold starts and custom tags.

Metric | Description
aws.lambda.enhanced.invocations | Measures the number of times a function is invoked in response to an event or invocation API call.
aws.lambda.enhanced.errors | Measures the number of invocations that failed due to errors in the function (response code 4XX).
aws.lambda.enhanced.max_memory_used | Measures the amount of memory used by the function.
aws.lambda.enhanced.duration | Measures the average elapsed wall clock time from when the function code starts executing as a result of an invocation to when it stops executing.
aws.lambda.enhanced.billed_duration | Measures the billed amount of time the function ran for (100ms increments).
aws.lambda.enhanced.init_duration | Measures the initialization time of a function during a cold start.
aws.lambda.enhanced.estimated_cost | Measures the total estimated cost of the function invocation (US dollars).
aws.lambda.enhanced.timeouts | Measures the number of times a function times out.

These metrics are tagged with functionname, cold_start, memorysize, region, account_id, allocated_memory, and runtime. They are DISTRIBUTION type metrics, so you can display their count, min, max, sum, and avg.
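
For example, you could graph a query such as avg:aws.lambda.enhanced.duration{cold_start:true} by {functionname} to compare average cold-start duration across functions (the query is shown for illustration; adjust the metric and tags to your needs).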

Enabling enhanced real-time Lambda metrics
  1. Set up or update the Datadog Forwarder to at least version 3.0.0.
  2. Install the Datadog Lambda Layer on the functions for which you’d like these metrics (Layer version 9+ for Python, Layer version 6+ for Node.js, Layer version 5+ for Ruby, package version 0.5.0+ for Go, package version 0.0.2+ for Java).
  3. To automatically tag these metrics with custom tags applied to your Lambda function, ensure that you’re running at least version 3.0.0 of the Datadog Forwarder. Then set the parameter DdFetchLambdaTags to true on the Datadog Forwarder CloudFormation Stack.
  4. Wrap your AWS Lambda function handlers with the Datadog library, as shown in the sample sketch after this list. The Datadog Serverless plugin can automatically wrap all of your Python and Node.js function handlers for you.
  5. Browse to the Enhanced Lambda Metrics Default Dashboard.
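
For reference, a minimal Python handler wrapped with the Datadog library might look like the following sketch (the handler body is illustrative):

from datadog_lambda.wrapper import datadog_lambda_wrapper

# Wrapping the handler lets the Datadog Lambda Layer generate the enhanced
# invocation, error, and duration metrics for this function.
@datadog_lambda_wrapper
def lambda_handler(event, context):
    return {"statusCode": 200, "body": "Hello from serverless!"}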

Note: These metrics are enabled by default, but only get sent asynchronously. They are sent to the Datadog Forwarder via CloudWatch Logs, meaning you’ll see an increased volume of logs in CloudWatch. This may affect your AWS bill. To opt out, set the DD_ENHANCED_METRICS environment variable to false on your AWS Lambda functions.

The invocation and error metrics get generated by the Datadog Lambda Layer, and the others are generated by the Datadog Forwarder.

Note: Any tag applied to your Lambda function automatically becomes a new dimension for analyzing your metrics. To surface enhanced metrics tags in Datadog, set the parameter DdFetchLambdaTags to true on the Datadog forwarder CloudFormation stack.

Log collection

  1. If you haven’t already, set up the Datadog Forwarder in your AWS account by following the instructions in the DataDog/datadog-serverless-functions GitHub repository.
  2. Configure the triggers that cause the Lambda to execute. There are two ways to configure the triggers:

    • Automatically: We manage the log collection Lambda triggers for you if you grant us a set of permissions.
    • Manually: Set up each trigger yourself via the AWS console.
  3. Once done, go to your Datadog Log section to start exploring your logs.

Note: Set the parameter DdFetchLambdaTags to true on the forwarder CloudFormation stack to ensure your logs are tagged with the resource tags on the originating Lambda function.

Note: If you are in the AWS us-east-1 region, you can leverage Datadog-AWS PrivateLink to forward your logs to Datadog. If you do so, your forwarder function must have the VPCLambdaExecutionRole permission.

Trace collection

Datadog supports distributed tracing for your AWS Lambda functions using either Datadog APM or AWS X-Ray. You can use either set of client libraries to generate traces, and Datadog APM will connect traces from applications running on hosts, containers, and serverless functions automatically.

Datadog APM sends trace data to Datadog in real time, allowing you to monitor traces with little to no latency in the Live Tail view. Datadog APM also uses tail-based sampling to make better sampling decisions.

Visualize your traces on the Serverless page, in App Analytics, and on the Service Map.

Note: Enabling Datadog APM or the AWS X-Ray integration increases the number of analyzed spans you consume, which can impact your bill.

Tracing with Datadog APM

The Datadog Node.js, Python, and Ruby tracing libraries support distributed tracing for AWS Lambda, with more runtimes coming soon. The easiest way to add tracing to your application is with the Datadog Lambda Layer, which includes the Datadog tracing library as a dependency. Set up APM for your runtime using the following steps:

Step 1: Install (or update to) the latest version of the Datadog Forwarder.

Step 2: Install the Datadog Lambda Layer on your function. Alternatively, install the Datadog tracing library for your runtime.

For Node.js, install with yarn:

yarn add datadog-lambda-js
yarn add dd-trace

Or with npm:

npm install datadog-lambda-js
npm install dd-trace

Step 3 (Node.js): Instrument your code.

const { datadog } = require('datadog-lambda-js');
const tracer = require('dd-trace').init(); // Any manual tracer config goes here.

// This function will be wrapped in a span
const longCalculation = tracer.wrap('calculation-long-number', () => {
    // An expensive calculation goes here
});

// This function will also be wrapped in a span
module.exports.hello = datadog((event, context, callback) => {
    longCalculation();

    callback(null, {
        statusCode: 200,
        body: 'Hello from serverless!'
    });
});

To instrument your Node.js libraries and customize your traces, consult the Datadog Node.js APM documentation.

For Python, install with pip:

pip install datadog-lambda

Or, add datadog-lambda to your project’s requirements.txt.

Step 3 (Python): Instrument your code.

from datadog_lambda.metric import lambda_metric
from datadog_lambda.wrapper import datadog_lambda_wrapper

from ddtrace import tracer

@datadog_lambda_wrapper
def hello(event, context):
  return {
    "statusCode": 200,
    "body": get_message()
  }

@tracer.wrap()
def get_message():
  return "Hello from serverless!"

To instrument your Python libraries and customize your traces, consult the Datadog Python APM documentation.

For Ruby, add the following to your Gemfile, or install the gems using your package manager of choice:

gem 'datadog-lambda'
gem 'ddtrace'

Note: ddtrace uses native extensions, which must be compiled for Amazon Linux before being packaged and uploaded to AWS Lambda. For this reason, Datadog recommends using the Datadog Lambda Layer.

Step 3 (Ruby): Instrument your code.

require 'ddtrace'
require 'datadog/lambda'

Datadog::Lambda.configure_apm do |c|
# Enable instrumentation here
end

def handler(event:, context:)
  Datadog::Lambda::wrap(event, context) do
    # Your function code here
    some_operation()
  end
end

# Instrument the rest of your code using ddtrace

def some_operation()
    Datadog.tracer.trace('some_operation') do |span|
        # Do something here
    end
end

To instrument your Ruby libraries and customize your traces, consult the Datadog Ruby APM documentation.

If your use case requires it, you can merge traces generated by the AWS X-Ray integration and the Datadog tracing library. In most cases, however, you only need one tracing library; using a single library lets Datadog make better sampling decisions for high-volume use cases.

If you do want to merge both your AWS X-Ray and Datadog APM traces together, use the following configuration for your runtime:

Node.js:

module.exports.hello = datadog(
    (event, context, callback) => {
        longCalculation();

        callback(null, {
            statusCode: 200,
            body: 'Hello from serverless!'
        });
    },
    { mergeDatadogXrayTraces: true }
);

Python: Set the DD_MERGE_XRAY_TRACES environment variable to True on your Lambda function.

Ruby: Set the DD_MERGE_DATADOG_XRAY_TRACES environment variable to True on your Lambda function.

Tracing across AWS Lambda and hosts

When applicable, Datadog merges AWS X-Ray traces with native Datadog APM traces. This means that your traces will show the complete picture of requests that cross infrastructure boundaries, whether it be AWS Lambda, containers, on-prem hosts, or managed services.

  1. Enable the AWS X-Ray integration for tracing your Lambda functions.
  2. Add the Datadog Lambda Layer to your Lambda functions.
  3. Set up Datadog APM on your hosts and container-based infrastructure.

Note: For X-Ray and Datadog APM traces to appear in the same flame graph, all services must have the same env tag.

Organizing your infrastructure with tags

Any tag applied to your Lambda function automatically becomes a new dimension on which you can slice and dice your traces.

Tags are especially powerful in Datadog APM, the Service Map, and the Services List, which have first-class support for the env and service tags.

Note: If you are tracing with Datadog APM, set the parameter DdFetchLambdaTags to true on the forwarder CloudFormation stack to ensure your traces are tagged with the resource tags on the originating Lambda function. Lambda function resource tags are automatically surfaced to X-Ray traces in Datadog without any additional configuration.

The env tag

Use env to separate out your staging, development, and production environments. This works for any kind of infrastructure, not just for your serverless functions. As an example, you could tag your production EU Lambda functions with env:prod-eu.

By default, Lambda functions are tagged with env:none in Datadog. Add your own tag to override this.

The service tag

Add the service tag in order to group related Lambda functions into a service. The Service Map and Services List use this tag to show relationships between services and the health of their monitors. Services are represented as individual nodes on the Service Map.

By default, each Lambda function is treated as its own service. Add your own tag to override this.

Note: The default behavior for new Datadog customers is for all Lambda functions to be grouped under the aws.lambda service, and represented as a single node on the Service map. Tag your functions by service to override this.

animated service map of Lambda functions

Serverless Integrations

The following Lambda function integrations provide additional functionality for monitoring serverless applications:

AWS Step Functions

Enable the AWS Step Functions integration to automatically get additional tags on your Lambda metrics to identify which state machines a particular function belongs to. Use these tags to get an aggregated view of your Lambda metrics and logs per Step Function on the Serverless view.

  1. Install the AWS Step Functions integration.
  2. Add the following permissions to your Datadog IAM policy to add additional tags to your Lambda metrics.

    AWS Permission              | Description
    states:ListStateMachines    | List active Step Functions.
    states:DescribeStateMachine | Get Step Function metadata and tags.

Amazon EFS for Lambda

Enable Amazon EFS for Lambda to automatically get additional tags on your Lambda metrics that identify which EFS file system a particular function is connected to. Use these tags to get an aggregated view of your Lambda metrics and logs per EFS file system on the Serverless view.

  1. Install the Amazon EFS integration.
  2. Add the following permissions to your Datadog IAM policy to collect EFS metrics from Lambda.

    AWS Permission                         | Description
    elasticfilesystem:DescribeAccessPoints | Lists active EFS file systems connected to Lambda functions.
  3. Once done, go to the Serverless view to use the new filesystemid tag on your Lambda functions.


Lambda@Edge

Use the at_edge, edge_master_name, and edge_master_arn tags to get an aggregated view of your Lambda function metrics and logs as they run in Edge locations.

Custom metrics

Install the Datadog Lambda Layer to collect and send custom metrics. Metrics sent from the Datadog Lambda Layer are automatically aggregated into distributions, so you can graph the avg, sum, max, min, and count. You can also calculate aggregations over a set of tags for the 50th, 75th, 95th, and 99th percentile values on the Distribution Metrics page.

Distribution metrics are designed to instrument logical objects, like services, independent of the underlying hosts. So, they are well-suited for serverless infrastructure because they aggregate metrics server-side instead of locally with an Agent.

Upgrading to distribution metrics

With distribution metrics, you select the aggregation when graphing or querying it instead of specifying it at submission time.
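
For example, assuming percentile aggregations are enabled for the metric, the same submitted distribution could be graphed as avg:coffee_house.order_value{*} in one widget and as p95:coffee_house.order_value{*} in another, without changing the instrumentation.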

If you previously submitted custom metrics from Lambda without using one of the Datadog Lambda Layers, you’ll need to start instrumenting your custom metrics under new metric names when submitting them to Datadog. A metric name cannot exist as both a distribution and a non-distribution metric type at the same time.

To enable percentile aggregations for your distribution metrics, consult the Distribution Metrics page.

Tagging custom metrics

You should tag your custom metrics when submitting them with the Datadog Lambda Layer. Use the Distribution Metrics page to customize the set of tags applied to your custom metrics.

To add Lambda resource tags to your custom metrics, set the parameter DdFetchLambdaTags to true on the Datadog forwarder CloudFormation stack.

Synchronous vs. asynchronous custom metrics

The Datadog Lambda Layer supports submitting custom metrics in Lambda both synchronously and asynchronously.

Synchronous: The default behavior. This method submits your custom metrics to Datadog via HTTP periodically (every 10 seconds) and at the end of your Lambda invocation. So if the invocation lasts for less than 10 seconds, your custom metrics will be submitted at the end of the invocation.

Asynchronous (recommended): It’s possible to submit your custom metrics with zero latency overhead and have them appear in Datadog in near-real time. To accomplish this, the Lambda Layer emits your custom metrics as a specially-formatted log line, which the Datadog Forwarder parses and submits to Datadog. Logging in AWS Lambda is 100% asynchronous, so this method ensures there is zero latency overhead to your function.

Enabling asynchronous custom metrics
  1. Set the environment variable DD_FLUSH_TO_LOG to True on your Lambda function.
  2. Update your Datadog Forwarder to at least version 1.4.0.

If you are not using Datadog Logs, you can still use asynchronous custom metric submission. Set the environment variable DD_FORWARD_LOG to False on the Datadog log collection AWS Lambda function. This intelligently forwards only custom metrics to Datadog, and not regular logs.

Custom metrics sample code

In your function code, you must import the necessary methods from the Lambda Layer and add a wrapper around your function handler. You do not need to wrap your helper functions.

Note: The arguments to the custom metrics reporting methods have the following requirements:

  • <METRIC_NAME> uniquely identifies your metric and adheres to the metric naming policy.
  • <METRIC_VALUE> MUST be a number (i.e. integer or float).
  • <TAG_LIST> is optional and formatted, for example: ['owner:Datadog', 'env:demo', 'cooltag'].

Python:

from datadog_lambda.metric import lambda_metric
from datadog_lambda.wrapper import datadog_lambda_wrapper

# You only need to wrap your function handler (Not helper functions). 
@datadog_lambda_wrapper
def lambda_handler(event, context):
    lambda_metric(
        "coffee_house.order_value",             # Metric name
        12.45,                                  # Metric value
        tags=['product:latte', 'order:online']  # Associated tags
    )

Node.js:

const { datadog, sendDistributionMetric } = require('datadog-lambda-js');

async function myHandler(event, context) {
    sendDistributionMetric(
        'coffee_house.order_value', // Metric name
        12.45, // Metric value
        'product:latte',
        'order:online' // Associated tags
    );
    return {
        statusCode: 200,
        body: 'hello, dog!'
    };
}
// You only need to wrap your function handler (Not helper functions).
module.exports.myHandler = datadog(myHandler);

/* OR with manual configuration options
module.exports.myHandler = datadog(myHandler, {
    apiKey: "my-api-key"
});
*/

Go:

package main

import (
  "context"

  "github.com/aws/aws-lambda-go/lambda"
  "github.com/DataDog/datadog-lambda-go"
)

func main() {
  // You only need to wrap your function handler (Not helper functions). 
  lambda.Start(ddlambda.WrapHandler(myHandler, nil))
  /* OR with manual configuration options
  lambda.Start(ddlambda.WrapHandler(myHandler, &ddlambda.Config{
    BatchInterval: time.Second * 15
    APIKey: "my-api-key",
  }))
  */
}

func myHandler(ctx context.Context, event MyEvent) (string, error) {
  ddlambda.Distribution(
    "coffee_house.order_value",     // Metric name
    12.45,                          // Metric value
    "product:latte", "order:online" // Associated tags
  )
  // ...
}

Ruby:

require 'datadog/lambda'

def handler(event:, context:)
    # You only need to wrap your function handler (Not helper functions).
    Datadog::Lambda.wrap(event, context) do
        Datadog::Lambda.metric(
          'coffee_house.order_value',         # Metric name
          12.45,                              # Metric value
          "product":"latte", "order":"online" # Associated tags
        )
        return { statusCode: 200, body: 'Hello World' }
    end
end

Java:

public class Handler implements RequestHandler<APIGatewayV2ProxyRequestEvent, APIGatewayV2ProxyResponseEvent> {
    public APIGatewayV2ProxyResponseEvent handleRequest(APIGatewayV2ProxyRequestEvent request, Context context) {
        DDLambda dd = new DDLambda(request, context);

        Map<String, String> myTags = new HashMap<String, String>();
        myTags.put("product", "latte");
        myTags.put("order", "online");

        dd.metric(
            "coffee_house.order_value", // Metric name
            12.45,                      // Metric value
            myTags);                    // Associated tags

        // Return a minimal response (populate as needed)
        return new APIGatewayV2ProxyResponseEvent();
    }
}

Emitting asynchronous custom metrics is possible for any language or custom runtime. It works by printing a special JSON-formatted string in your Lambda function that the Datadog Forwarder identifies and submits to Datadog. To use this:

  1. Enable asynchronous custom metrics
  2. Write a reusable function that logs your custom metrics in the following format:

    {
        "m": "Metric name",
        "v": "Metric value",
        "e": "Unix timestamp (seconds)",
        "t": "Array of tags"
    }

For example:

{
    "m": "coffee_house.order_value",
    "v": 12.45,
    "e": 1572273854,
    "t": ["product:latte", "order:online"]
}
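
For instance, a minimal Python helper (the function name is hypothetical and shown only as a sketch) that emits this structure could look like:

import json
import time

def log_custom_metric(metric_name, value, tags=None):
    # Print a JSON log line in the format above; the Datadog Forwarder
    # parses it and submits the value as a distribution metric.
    print(json.dumps({
        "m": metric_name,        # Metric name
        "v": value,              # Metric value (must be a number)
        "e": int(time.time()),   # Unix timestamp in seconds
        "t": tags or [],         # Array of tags
    }))

# Example call from within a Lambda handler:
# log_custom_metric("coffee_house.order_value", 12.45, ["product:latte", "order:online"])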

Note: These custom metrics are submitted as distributions. If you were previously submitting custom metrics another way, consult the documentation on the implications of upgrading to distributions.

Running in a VPC

The Datadog Lambda Layer requires access to the public internet in order to submit custom metrics synchronously. If your Lambda function is associated with a VPC, ensure that it is instead submitting custom metrics asynchronously or that your function can reach the public internet.

Using Third-Party Libraries

There are a number of open source libraries that make it easy to submit custom metrics to Datadog. However, many have not been updated to use Distribution metrics, which are optimized for Lambda. Distribution metrics allow for server-side aggregations independent of a host or locally-running agent. In a serverless environment where there is no agent, Distribution metrics give you flexible aggregations and tagging.

When evaluating third-party metrics libraries for AWS Lambda, ensure they support Distribution metrics.

[DEPRECATED] Using CloudWatch Logs

This method of submitting custom metrics is no longer supported, and is disabled for all new customers. The recommended way to submit custom metrics from Lambda is with a Datadog Lambda Layer.

This requires the following AWS permissions in your Datadog IAM policy.

AWS Permission | Description
logs:DescribeLogGroups | List available log groups.
logs:DescribeLogStreams | List available log streams for a group.
logs:FilterLogEvents | Fetch specific log events for a stream to generate metrics.

[DEPRECATED] To send custom metrics to Datadog from your Lambda logs, print a log line using the following format:

MONITORING|<UNIX_EPOCH_TIMESTAMP>|<METRIC_VALUE>|<METRIC_TYPE>|<METRIC_NAME>|#<TAG_LIST>

Where:

  • MONITORING signals to the Datadog integration that it should collect this log entry.
  • <UNIX_EPOCH_TIMESTAMP> is in seconds, not milliseconds.
  • <METRIC_VALUE> MUST be a number (i.e. integer or float).
  • <METRIC_TYPE> is count, gauge, histogram, or check.
  • <METRIC_NAME> uniquely identifies your metric and adheres to the metric naming policy.
  • <TAG_LIST> is optional, comma separated, and must be preceded by #. The tag function_name:<name_of_the_function> is automatically applied to custom metrics.

Note: The sum for each timestamp is used for counts and the last value for a given timestamp is used for gauges. It is not recommended to print a log statement every time you increment a metric, as this increases the time it takes to parse your logs. Continually update the value of the metric in your code, and print one log statement for that metric before the function finishes.
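
For example, a hypothetical order counter could be reported with a log line such as:

MONITORING|1572273854|1|count|coffee_house.orders_created|#product:latte,order:online

Here 1572273854 is the Unix timestamp in seconds, 1 is the metric value, count is the metric type, coffee_house.orders_created is an illustrative metric name, and the comma-separated tag list follows the #.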

Datadog Lambda Layer

The Datadog Lambda Layer is responsible for:

  • Generating real-time enhanced Lambda metrics for invocations, errors, cold starts, etc.
  • Submitting custom metrics (synchronously and asynchronously)
  • Automatically propagating tracing headers from upstream requests to downstream services. This enables full distributed tracing across Lambda functions, hosts, containers, and other infrastructure running the Datadog Agent.
  • Packaging the dd-trace library, letting you trace across your Lambda functions with Datadog’s tracing libraries, currently available for Node.js, Python and Ruby with more runtimes coming soon.

Installing and using the Datadog Lambda Layer

See the setup instructions to install and use the Datadog Lambda Layer.

Data Collected

Metrics

aws.lambda.duration (gauge, shown as millisecond): Measures the average elapsed wall clock time from when the function code starts executing as a result of an invocation to when it stops executing.
aws.lambda.duration.maximum (gauge, shown as millisecond): Measures the maximum elapsed wall clock time from when the function code starts executing as a result of an invocation to when it stops executing.
aws.lambda.duration.minimum (gauge, shown as millisecond): Measures the minimum elapsed wall clock time from when the function code starts executing as a result of an invocation to when it stops executing.
aws.lambda.duration.sum (gauge, shown as millisecond): Measures the total execution time of the Lambda function.
aws.lambda.duration.p80 (gauge, shown as millisecond): Measures the p80 elapsed wall clock time from when the function code starts executing as a result of an invocation to when it stops executing.
aws.lambda.duration.p95 (gauge, shown as millisecond): Measures the p95 elapsed wall clock time from when the function code starts executing as a result of an invocation to when it stops executing.
aws.lambda.duration.p99 (gauge, shown as millisecond): Measures the p99 elapsed wall clock time from when the function code starts executing as a result of an invocation to when it stops executing.
aws.lambda.duration.p99.9 (gauge, shown as millisecond): Measures the p99.9 elapsed wall clock time from when the function code starts executing as a result of an invocation to when it stops executing.
aws.lambda.timeout (gauge, shown as second): Measures the amount of allowed execution time for the function before the Lambda runtime stops it.
aws.lambda.errors (count, shown as error): Measures the number of invocations that failed due to errors in the function (response code 4XX).
aws.lambda.invocations (count, shown as invocation): Measures the number of times a function is invoked in response to an event or invocation API call.
aws.lambda.throttles (count, shown as throttle): Measures the number of Lambda function invocation attempts that were throttled because invocation rates exceeded the customer's concurrent limits (error code 429). Failed invocations may trigger a retry attempt that succeeds.
aws.lambda.iterator_age (gauge, shown as millisecond): Measures the age of the last record for each batch of records processed.
aws.lambda.iterator_age.minimum (gauge, shown as millisecond): Measures the minimum age of the last record for each batch of records processed.
aws.lambda.iterator_age.maximum (gauge, shown as millisecond): Measures the maximum age of the last record for each batch of records processed.
aws.lambda.iterator_age.sum (gauge, shown as millisecond): Measures the sum of the ages of the last record for each batch of records processed.
aws.lambda.dead_letter_errors (count, shown as error): Measures the number of times Lambda is unable to write the failed event payload to your configured Dead Letter Queues.
aws.lambda.concurrent_executions (gauge, shown as execution): Measures the average number of concurrent executions for a given function at a given point in time.
aws.lambda.concurrent_executions.minimum (gauge, shown as execution): Measures the minimum number of concurrent executions for a given function at a given point in time.
aws.lambda.concurrent_executions.maximum (gauge, shown as execution): Measures the maximum number of concurrent executions for a given function at a given point in time.
aws.lambda.concurrent_executions.sum (gauge, shown as execution): Measures the sum of concurrent executions for a given function at a given point in time.
aws.lambda.unreserved_concurrent_executions (gauge, shown as execution): Measures the sum of the concurrency of the functions that don't have a custom concurrency limit specified.
aws.lambda.provisioned_concurrent_executions (gauge, shown as execution): Measures the average number of events that are being processed on provisioned concurrency.
aws.lambda.provisioned_concurrent_executions.minimum (gauge, shown as execution): Measures the minimum number of events that are being processed on provisioned concurrency.
aws.lambda.provisioned_concurrent_executions.maximum (gauge, shown as execution): Measures the maximum number of events that are being processed on provisioned concurrency.
aws.lambda.provisioned_concurrency_invocations (count, shown as invocation): Measures the number of invocations that are run on provisioned concurrency.
aws.lambda.provisioned_concurrency_spillover_invocations (count, shown as invocation): Measures the number of invocations that are run on non-provisioned concurrency when all provisioned concurrency is in use.
aws.lambda.provisioned_concurrency_utilization (gauge, shown as percent): Measures the average fraction of provisioned concurrency in use for a given function at a given point in time.
aws.lambda.provisioned_concurrency_utilization.minimum (gauge, shown as percent): Measures the minimum fraction of provisioned concurrency in use for a given function at a given point in time.
aws.lambda.provisioned_concurrency_utilization.maximum (gauge, shown as percent): Measures the maximum fraction of provisioned concurrency in use for a given function at a given point in time.

Each of the metrics retrieved from AWS is assigned the same tags that appear in the AWS console, including, but not limited to, function name and security-groups.

Custom metrics are only tagged with function name.

Events

The AWS Lambda integration does not include any events.

Service Checks

The AWS Lambda integration does not include any service checks.

Troubleshooting

Need help? Contact Datadog support.

Further Reading