Datadog APM supports NGINX in two configurations:

  • NGINX operated as a proxy with tracing provided by the Datadog module.
  • NGINX as an Ingress Controller for Kubernetes.

NGINX with Datadog module

Datadog provides an NGINX module for distributed tracing.

Module installation

To install the Datadog NGINX module, follow these instructions:

  1. Download the appropriate tarball from the latest nginx-datadog GitHub release.
  2. Choose the tarball corresponding to your NGINX version and CPU architecture.

Each release includes two tarballs per combination of NGINX version and CPU architecture. The main tarball contains a single file, ngx_http_datadog_module.so, which is the Datadog NGINX module. The second tarball contains the module's debug symbols and is optional.

For simplicity, the following script downloads the module tarball from the latest release for your CPU architecture and a pinned NGINX version:

get_latest_release() {
  curl --silent "https://api.github.com/repos/$1/releases/latest" | jq --raw-output .tag_name
}

get_architecture() {
  case "$(uname -m)" in
    aarch64|arm64)
      echo "arm64"
      ;;
    x86_64|amd64)
      echo "amd64"
      ;;
    *)
      echo ""
      ;;
  esac
}

ARCH=$(get_architecture)

if [ -z "$ARCH" ]; then
    echo 1>&2 "ERROR: Architecture $(uname -m) is not supported."
    exit 1
fi

NGINX_VERSION="1.26.0"
RELEASE_TAG=$(get_latest_release DataDog/nginx-datadog)
TARBALL="ngx_http_datadog_module-${ARCH}-${NGINX_VERSION}.so.tgz"

curl -Lo "${TARBALL}" "https://github.com/DataDog/nginx-datadog/releases/download/${RELEASE_TAG}/${TARBALL}"

Extract the ngx_http_datadog_module.so file from the downloaded tarball using tar and place it in the NGINX modules directory, typically located at /usr/lib/nginx/modules.
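The extraction step can be sketched as a small helper. This is only a sketch: install_datadog_module is a hypothetical name, and the default path /usr/lib/nginx/modules may differ on your distribution.

```shell
# Sketch: extract the module from the tarball downloaded above into the
# NGINX modules directory. install_datadog_module is a hypothetical helper,
# not part of the Datadog tooling.
install_datadog_module() {
  tarball="$1"
  modules_dir="${2:-/usr/lib/nginx/modules}"
  tar -xzf "$tarball" -C "$modules_dir" ngx_http_datadog_module.so
}

# Typical invocation (may require sudo for the system modules directory):
#   install_datadog_module "${TARBALL}"
```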

NGINX configuration with Datadog module

In the topmost section of the NGINX configuration, load the Datadog module.

load_module modules/ngx_http_datadog_module.so;

The default configuration connects to a local Datadog Agent and produces traces for all NGINX locations. Specify custom configuration using the dedicated datadog_* directives described in the Datadog module’s API documentation.

For example, the following NGINX configuration sets the service name to usage-internal-nginx and the sampling rate to 10%.

load_module modules/ngx_http_datadog_module.so;

http {
  datadog_service_name usage-internal-nginx;
  datadog_sample_rate 0.1;

  # servers, locations...
}

Ingress-NGINX Controller for Kubernetes

Datadog offers support for monitoring the Ingress-NGINX controller in Kubernetes. Choose from the following instrumentation methods based on your controller version and requirements:

Controller v1.10.0+ using Datadog’s features

This instrumentation method uses nginx-datadog and leverages the Kubernetes init container mechanism to install the module in the Ingress-NGINX Controller instance.

To instrument Ingress-NGINX v1.10.0+ using Datadog’s module, follow these steps:

1. Verify your Ingress-NGINX version
Check your Ingress-NGINX Controller version and ensure you have the matching Datadog init-container available. The init-container version (datadog/ingress-nginx-injection) must exactly match your controller version to prevent startup issues. For example, if you’re running Ingress-NGINX v1.11.3, you need datadog/ingress-nginx-injection:v1.11.3.
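To find the tag to use, you can read it off the controller's image reference. The helper below is a sketch: controller_tag is a hypothetical name, and the kubectl query in the comment assumes the standard ingress-nginx deployment name and namespace.

```shell
# Hypothetical helper: extract the version tag from a controller image
# reference so you can pick the matching datadog/ingress-nginx-injection tag.
# Get the image reference with something like:
#   kubectl -n ingress-nginx get deploy ingress-nginx-controller \
#     -o jsonpath='{.spec.template.spec.containers[0].image}'
controller_tag() {
  ref="${1%%@*}"      # drop any @sha256:... digest suffix
  echo "${ref##*:}"   # the tag is everything after the last colon
}
```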

2. Modify your controller’s pod specification
Update the controller pod specification to include the init-container and configure the Datadog Agent host environment variable:

    spec:
      template:
        spec:
          initContainers:
            - name: init-datadog
              image: datadog/ingress-nginx-injection:<MY_INGRESS_NGINX_VERSION>
              command: ['/datadog/init_module.sh', '/opt/datadog-modules']
              volumeMounts:
                - name: nginx-module
                  mountPath: /opt/datadog-modules
          containers:
            - name: controller
              image: registry.k8s.io/ingress-nginx/controller:<MY_INGRESS_NGINX_VERSION>
              env:
                - ...
                - name: DD_AGENT_HOST
                  valueFrom:
                    fieldRef:
                      fieldPath: status.hostIP

Note: For an alternative way to access the Datadog Agent, see the Kubernetes installation guide.

3. Configure Ingress-NGINX
Create or modify the ConfigMap to load the Datadog module:

    kind: ConfigMap
    apiVersion: v1
    ...
    data:
      enable-opentelemetry: "false"
      error-log-level: notice
      main-snippet: |
        load_module /opt/datadog-modules/ngx_http_datadog_module.so;        

4. Apply the ConfigMap
Apply the updated ConfigMap to ensure the Datadog module is correctly loaded.
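Assuming the ConfigMap is saved as nginx-datadog-configmap.yaml (a hypothetical filename) and the controller runs with the standard resource names in the ingress-nginx namespace, applying it might look like:

```shell
# Hypothetical filename and resource names; adjust to your deployment.
kubectl apply -n ingress-nginx -f nginx-datadog-configmap.yaml

# Restarting the controller pods ensures the new configuration, including
# the module load, takes effect.
kubectl rollout restart -n ingress-nginx deployment/ingress-nginx-controller
```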

This configuration ensures that the Datadog module is loaded and ready to trace incoming requests.

If you deploy the controller with the Ingress-NGINX Helm chart, follow these steps instead:

1. Verify your Ingress-NGINX version
As in the previous method, the init-container version (datadog/ingress-nginx-injection) must exactly match your controller version to prevent startup issues.

2. Overriding Helm chart values
To customize the Ingress-NGINX Helm chart and load the required Datadog module, create a YAML file or modify an existing one with the following configuration:

values.yaml

controller:
  config:
    main-snippet: "load_module /modules_mount/ngx_http_datadog_module.so;"
  opentelemetry:
    enabled: false
  extraModules:
    - name: nginx-datadog
      image:
        registry: docker.io
        image: datadog/ingress-nginx-injection
        # The tag should match the version of the ingress-nginx controller
        # For example, this will inject the Datadog module for ingress v1.10.0
        # Check <https://hub.docker.com/repository/docker/datadog/ingress-nginx-injection/tags>
        # for the list of all versions supported.
        tag: "v1.10.0"
        distroless: false
  extraEnvs:
    - name: DD_AGENT_HOST
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP

3. Deploy
Install or upgrade the Helm release using the -f flag to apply the custom values created in the previous step.

helm install my-release ingress-nginx/ingress-nginx -f values.yaml

Controller v1.10.0+ using OpenTelemetry

1. Prepare the Datadog Agent
Ensure that your Datadog Agent has gRPC OTLP Ingestion enabled to act as an OpenTelemetry Collector.
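For a host-installed Agent, OTLP gRPC ingestion is enabled in datadog.yaml with the otlp_config section; the sketch below assumes the conventional OTLP gRPC port 4317.

```yaml
# datadog.yaml (Agent) - enable the OTLP gRPC receiver so the Agent can
# accept traces from the Ingress-NGINX OpenTelemetry module.
otlp_config:
  receiver:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
```

If you run the Agent through the Datadog Helm chart or Operator, enable OTLP ingestion through the corresponding chart or Operator configuration instead.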

2. Configure the Ingress controller
To begin, verify that your Ingress controller’s pod spec has the HOST_IP environment variable set. If not, add the following entry to the env block within the pod’s specification:

- name: HOST_IP
  valueFrom:
    fieldRef:
      fieldPath: status.hostIP
- name: OTEL_EXPORTER_OTLP_ENDPOINT
  value: "http://$(HOST_IP):4317"

Next, enable OpenTelemetry instrumentation for the controller. Create or edit a ConfigMap with the following details:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  enable-opentelemetry: "true"
  otel-sampler: AlwaysOn
  # Defaults
  # otel-service-name: "nginx"
  # otel-sampler-ratio: 0.01

If you deploy the controller with the Ingress-NGINX Helm chart, follow these steps instead:

1. Prepare the Datadog Agent
As in the previous method, ensure that your Datadog Agent has gRPC OTLP Ingestion enabled to act as an OpenTelemetry Collector.

2. Overriding Helm chart values
To customize the Ingress-NGINX Helm chart and load the required Datadog module, create a YAML file or modify an existing one with the following configuration:

values.yaml

controller:
  opentelemetry:
    enabled: true
  config:
    otel-service-name: "nginx"
    otel-sampler: AlwaysOn
    otel-sampler-ratio: 0.01
  extraEnvs:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
    - name: OTEL_EXPORTER_OTLP_ENDPOINT
      value: "http://$(HOST_IP):4317"

3. Deploy
Install or upgrade the Helm release using the -f flag to apply the custom values created in the previous step.

helm install my-release ingress-nginx/ingress-nginx -f values.yaml

Controller v1.9.0 and older

To enable Datadog tracing, create or edit a ConfigMap to set enable-opentracing: "true" and the datadog-collector-host to which traces should be sent. The ConfigMap's name is passed explicitly to the Ingress-NGINX Controller container as a command-line argument, defaulting to --configmap=<POD_NAMESPACE>/nginx-configuration. If ingress-nginx was installed via the Helm chart, the ConfigMap's name follows the pattern <RELEASE_NAME>-nginx-ingress-controller.

The Ingress controller manages both the nginx.conf and /etc/nginx/opentracing.json files. Tracing is enabled for all location blocks.

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
data:
  enable-opentracing: "true"
  datadog-collector-host: $HOST_IP
  # Defaults
  # datadog-service-name: "nginx"
  # datadog-collector-port: "8126"
  # datadog-operation-name-override: "nginx.handle"
  # datadog-sample-rate: "1.0"

Additionally, ensure that your controller’s pod spec has the HOST_IP environment variable set. Add this entry to the env: block that contains the environment variables POD_NAME and POD_NAMESPACE.

- name: HOST_IP
  valueFrom:
    fieldRef:
      fieldPath: status.hostIP

To set a different service name per Ingress using annotations:

  nginx.ingress.kubernetes.io/configuration-snippet: |
      opentracing_tag "service.name" "custom-service-name";      

The above overrides the default nginx-ingress-controller.ingress-nginx service name.
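In context, the annotation sits in an Ingress manifest like the following; the resource name, namespace, host, and backend service are placeholders.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app                # placeholder
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      opentracing_tag "service.name" "custom-service-name";
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com   # placeholder
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
```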

Correlating traces to logs

After you’ve enabled APM tracing, you can connect your traces to the corresponding NGINX logs. This correlation links each trace and span to the specific log events generated during that request, allowing you to pivot between them to troubleshoot issues.

Prerequisites

Before you begin, ensure that you have:

  • Enabled APM tracing for NGINX by following the steps earlier in this guide.
  • Configured Datadog log collection for NGINX.

Step 1: Inject trace and span IDs into NGINX logs

Modify your log_format directive to include the trace ID and the span ID. The variables you use depend on your instrumentation method. Select the appropriate tab below for the instructions that match your setup.

If you installed the Datadog NGINX module, use the $datadog_trace_id and $datadog_span_id variables. Their value is - for requests that are not traced.

Update your NGINX configuration file (for example, /etc/nginx/nginx.conf):

http {
  log_format main_datadog '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for" '
                          'dd.trace_id="$datadog_trace_id" dd.span_id="$datadog_span_id"';

  access_log /var/log/nginx/access.log main_datadog;
}

If you are using the NGINX OpenTelemetry module, use the $otel_trace_id and $otel_span_id variables. Their value is an empty string for requests that are not traced.

Update your NGINX configuration file (for example, /etc/nginx/nginx.conf):

http {
  log_format main_opentelemetry '$remote_addr - $remote_user [$time_local] "$request" '
                                '$status $body_bytes_sent "$http_referer" '
                                '"$http_user_agent" "$http_x_forwarded_for" '
                                'dd.trace_id="$otel_trace_id" dd.span_id="$otel_span_id"';

  access_log /var/log/nginx/access.log main_opentelemetry;
}

After saving your changes, reload the NGINX configuration. For example:

sudo nginx -s reload
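Before shipping logs, you can check that a sample access-log line carries both correlation fields. The helper below is a rough local sanity check only; has_correlation_ids is a hypothetical name and does not replicate Datadog's grok engine.

```shell
# Rough local check that a log line contains the dd.trace_id / dd.span_id
# fields the pipeline in Step 2 expects. Sanity check only; it does not
# replicate Datadog's grok parsing.
has_correlation_ids() {
  printf '%s' "$1" | grep -Eq 'dd\.trace_id="[^"]*" dd\.span_id="[^"]*"'
}
```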

Step 2: Configure the log pipeline to parse the trace and span IDs

Next, create a pipeline to parse the trace and span IDs from your logs. These steps are the same for both instrumentation methods.

  1. In Datadog, navigate to the Log Configuration page.
  2. Hover over your active NGINX pipeline and click the Clone icon to create an editable version.
  3. Click the cloned pipeline.
  4. Click Add Processor.
  5. Select Grok Parser as the processor type.
  6. Define the following parsing rule to extract the trace ID attribute from a log event. This rule works for both the Datadog module and OpenTelemetry outputs:
    extract_correlation_ids %{data} dd.trace_id="%{notSpace:dd.trace_id:nullIf("-")}" dd.span_id="%{notSpace:dd.span_id:nullIf("-")}"
    
  7. Click Create.
  8. Click Add Processor again.
  9. Select Trace ID Remapper as the processor type. This processor associates the parsed ID with its corresponding APM trace.
  10. In the Set trace id attribute(s) field, enter dd.trace_id.
  11. Click Create.
  12. Click Add Processor again.
  13. Select Span ID Remapper as the processor type. This processor associates the parsed ID with its corresponding APM span.
  14. In the Set span id attribute(s) field, enter dd.span_id.
  15. Click Create.
  16. Save and enable your new pipeline.

Once the pipeline is active, new NGINX logs are automatically correlated with their traces and spans.

Further reading