OpenTelemetry is an open source observability framework that provides IT teams with standardized protocols and tools for collecting and routing telemetry data.

This page discusses using OpenTelemetry with Datadog Serverless Monitoring for AWS Lambda. For more information, including how to use OpenTelemetry in non-serverless environments, see OpenTelemetry in Datadog.

Instrument AWS Lambda with OpenTelemetry

There are multiple ways to instrument AWS Lambda functions with OpenTelemetry and send the data to Datadog:

OpenTelemetry API support within Datadog tracers

The Datadog tracing library, which is included in the Datadog Lambda Extension upon installation, accepts custom spans and traces created with OpenTelemetry-instrumented code, processes the telemetry, and sends it to Datadog.

You can use this approach if, for example, your code has already been instrumented with the OpenTelemetry API. This means you can maintain vendor-neutral instrumentation of all your services, while still taking advantage of Datadog’s native implementation, tagging, and features.

To instrument AWS Lambda with the OpenTelemetry API, set the environment variable DD_TRACE_OTEL_ENABLED to true in your Lambda function, and see Custom instrumentation with the OpenTelemetry API for runtime-specific instructions.
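For example, once the Datadog Lambda Extension and the Datadog tracing layer for your runtime are installed and DD_TRACE_OTEL_ENABLED is set to true, custom spans created through the OpenTelemetry API are picked up by the Datadog tracer. The following is a minimal Python sketch; the span and attribute names are illustrative.

    # A minimal sketch, assuming DD_TRACE_OTEL_ENABLED=true and the Datadog
    # Lambda instrumentation is installed. Span and attribute names are illustrative.
    from opentelemetry import trace

    # With OpenTelemetry API support enabled, this tracer is backed by the
    # Datadog tracer, so spans created here are processed and sent to Datadog.
    tracer = trace.get_tracer(__name__)

    def handler(event, context):
        with tracer.start_as_current_span("process_event") as span:
            span.set_attribute("order.id", str(event.get("order_id", "unknown")))
            # Your normal handler logic here
            return {"statusCode": 200}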

Send OpenTelemetry traces from any OpenTelemetry SDK through the Datadog Lambda Extension

This approach is analogous to OTLP Ingest in the Datadog Agent. It is recommended in situations where tracing support may not be available for your runtime (for example, Rust or PHP).

Note: Sending custom metrics from the OTLP endpoint in the extension is not supported.

  1. Tell OpenTelemetry to export spans to the Datadog Lambda Extension. Then, add OpenTelemetry’s instrumentation for AWS Lambda.

    # Python
    from opentelemetry.instrumentation.botocore import BotocoreInstrumentor
    from opentelemetry.instrumentation.aws_lambda import AwsLambdaInstrumentor
    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
    from opentelemetry.sdk.trace.export import SimpleSpanProcessor
    from opentelemetry.sdk.resources import Resource, SERVICE_NAME
    
    # Create a TracerProvider
    tracer_provider = TracerProvider(resource=Resource.create({SERVICE_NAME: "<YOUR_SERVICE_NAME>"}))
    
    # Add a span processor with an OTLP exporter
    tracer_provider.add_span_processor(
        SimpleSpanProcessor(
            OTLPSpanExporter(endpoint="http://localhost:4318/v1/traces")
        )
    )
    
    # Register the provider
    trace.set_tracer_provider(tracer_provider)
    
    # Instrument AWS SDK and AWS Lambda
    BotocoreInstrumentor().instrument(tracer_provider=tracer_provider)
    AwsLambdaInstrumentor().instrument(tracer_provider=tracer_provider)
    
    // instrument.js
    
    const { NodeTracerProvider } = require("@opentelemetry/sdk-trace-node");
    const { OTLPTraceExporter } = require('@opentelemetry/exporter-trace-otlp-http');
    const { Resource } = require('@opentelemetry/resources');
    const { SemanticResourceAttributes } = require('@opentelemetry/semantic-conventions');
    const { SimpleSpanProcessor } = require('@opentelemetry/sdk-trace-base');
    const provider = new NodeTracerProvider({
        resource: new Resource({
            [SemanticResourceAttributes.SERVICE_NAME]: '<YOUR_SERVICE_NAME>',
        })
    });
    provider.addSpanProcessor(
        new SimpleSpanProcessor(
            new OTLPTraceExporter(
                { url: 'http://localhost:4318/v1/traces' },
            ),
        ),
    );
    provider.register();
    
    const { AwsInstrumentation } = require('@opentelemetry/instrumentation-aws-sdk');
    const { AwsLambdaInstrumentation } = require('@opentelemetry/instrumentation-aws-lambda');
    const { registerInstrumentations } = require('@opentelemetry/instrumentation');
    
    registerInstrumentations({
        instrumentations: [
            new AwsInstrumentation({
                suppressInternalInstrumentation: true,
            }),
            new AwsLambdaInstrumentation({
                disableAwsContextPropagation: true,
            }),
        ],
    });
    
  2. Modify serverless.yml to apply instrumentation at runtime, add the Datadog Extension v53+, and enable OpenTelemetry in the Datadog Extension by setting the environment variable DD_OTLP_CONFIG_RECEIVER_PROTOCOLS_HTTP_ENDPOINT to localhost:4318 (for HTTP) or DD_OTLP_CONFIG_RECEIVER_PROTOCOLS_GRPC_ENDPOINT to localhost:4317 (for gRPC). The examples below use the HTTP endpoint; a gRPC exporter variant is sketched after this list. Do not add the Datadog tracing layer.

    # serverless.yml (Python)

    service: <YOUR_SERVICE_NAME>
    
    provider:
      name: aws
      region: <YOUR_REGION>
      runtime: python3.8  # or the Python version you are using
      environment:
        DD_API_KEY: ${env:DD_API_KEY}
        DD_OTLP_CONFIG_RECEIVER_PROTOCOLS_HTTP_ENDPOINT: localhost:4318
      layers:
        - arn:aws:lambda:<YOUR_REGION>:464622532012:layer:Datadog-Extension:53
    
    functions:
      python:
        handler: handler.handler
        environment:
          INSTRUMENTATION_FLAG: "true"
    

    Then, update your Python code accordingly. For example, in handler.py:

    import os
    
    def handler(event, context):
        if os.environ.get('INSTRUMENTATION_FLAG') == 'true':
            # Perform instrumentation logic here (for example, run the OpenTelemetry setup from step 1)
            print("Instrumentation is enabled")
        
        # Your normal handler logic here
        print("Handling the event")
    
    # serverless.yml (Node.js)
    
    service: <YOUR_SERVICE_NAME>
    
    provider:
      name: aws
      region: <YOUR_REGION>
      runtime: nodejs18.x # or the Node.js version you are using
      environment:
        DD_API_KEY: ${env:DD_API_KEY}
        DD_OTLP_CONFIG_RECEIVER_PROTOCOLS_HTTP_ENDPOINT: localhost:4318
      layers:
        - arn:aws:lambda:<YOUR_REGION>:464622532012:layer:Datadog-Extension:53
    
    functions:
      node:
        handler: handler.handler
        environment:
          NODE_OPTIONS: --require instrument
    
  3. Deploy (for example, with serverless deploy or your usual deployment workflow).
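If you use the gRPC endpoint mentioned in step 2 instead of HTTP, swap the span exporter in step 1 for the gRPC one. The following is a minimal Python sketch, assuming the opentelemetry-exporter-otlp-proto-grpc package is installed and DD_OTLP_CONFIG_RECEIVER_PROTOCOLS_GRPC_ENDPOINT is set to localhost:4317; the rest of the setup is unchanged.

    # gRPC variant of the exporter from step 1 (assumes the
    # opentelemetry-exporter-otlp-proto-grpc package is installed).
    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import SimpleSpanProcessor
    from opentelemetry.sdk.resources import Resource, SERVICE_NAME
    from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

    tracer_provider = TracerProvider(resource=Resource.create({SERVICE_NAME: "<YOUR_SERVICE_NAME>"}))

    # Export over gRPC to the extension's local OTLP receiver; insecure is fine
    # because the endpoint is local to the Lambda execution environment.
    tracer_provider.add_span_processor(
        SimpleSpanProcessor(
            OTLPSpanExporter(endpoint="localhost:4317", insecure=True)
        )
    )
    trace.set_tracer_provider(tracer_provider)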
