Run Synthetic Tests from Private Locations

Access to this feature is restricted. If you don't have access, reach out to the Datadog support team.

Overview

Private locations allow you to monitor internal-facing applications or any private URLs that aren’t accessible from the public internet. They can also be used to:

  • Create custom Synthetic locations in areas that are mission-critical to your business.
  • Verify application performance in your internal CI environment before you release new features to production with Synthetic CI/CD testing.
  • Compare application performance from both inside and outside your internal network.

Private locations come as Docker containers that you can install wherever makes sense inside of your private network. Once created and installed, you can assign Synthetic tests to your private location just like you would with any regular managed location.

Your private location worker pulls your test configurations from Datadog's servers over HTTPS, executes the tests on a schedule or on demand, and returns the test results to Datadog's servers. You can then visualize your private location test results exactly as you would for tests running from managed locations.

Prerequisites

Docker

The private location worker is shipped as a Docker container. The official Docker image is available on Docker Hub. It can run on a Linux-based OS or on Windows, provided the Docker engine is available on your host and can run in Linux containers mode.

Datadog Private Locations Endpoints

To pull test configurations and push test results, the private location worker needs access to the Datadog API endpoints listed below.

For the Datadog US site (datadoghq.com):

Port | Endpoint | Description
443 | intake.synthetics.datadoghq.com for versions 0.1.6+, api.datadoghq.com/api/ for versions <0.1.5 | Used by the private location to pull test configurations and push test results to Datadog using an in-house protocol based on the AWS Signature Version 4 protocol.
443 | intake-v2.synthetics.datadoghq.com for versions >0.2.0 | Used by the private location to push browser test artifacts (screenshots, errors, resources).

Note: Check that the endpoint corresponding to your Datadog site is reachable from the host running the worker: curl https://intake.synthetics.datadoghq.com for versions 0.1.6+ (curl https://api.datadoghq.com for versions <0.1.5).
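As a sketch, the connectivity check above can be scripted for both US-site intake endpoints (hostnames taken from the table above; adjust them for other Datadog sites):

```shell
# Verify that the Datadog Synthetics intake endpoints are reachable from the
# host that will run the private location worker (US site, worker >= 0.1.6).
check_endpoint() {
  # Prints "reachable" or "NOT reachable" for the given hostname.
  if curl --silent --output /dev/null --max-time 10 "https://$1"; then
    echo "$1: reachable"
  else
    echo "$1: NOT reachable (check firewall/proxy rules)"
  fi
}

check_endpoint intake.synthetics.datadoghq.com
check_endpoint intake-v2.synthetics.datadoghq.com
```

If an endpoint is reported as not reachable, review your firewall and proxy rules before installing the worker.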

For the Datadog EU site (datadoghq.eu):

Port | Endpoint | Description
443 | api.datadoghq.eu/api/ | Used by the private location to pull test configurations and push test results to Datadog using an in-house protocol based on the AWS Signature Version 4 protocol.
443 | intake-v2.synthetics.datadoghq.eu for versions >0.2.0 | Used by the private location to push browser test artifacts (screenshots, errors, resources).

Note: Check if the endpoint corresponding to your Datadog site is available from the host running the worker using curl https://api.datadoghq.eu.

Set up your private location

Create your private location

Go to Synthetic Monitoring -> Settings -> Private Locations and click Add Private Location.

Note: Only Admin users can create private locations.

Fill out your private location details: specify your private location’s Name and Description, add any Tags you would like to associate with your private location, and choose one of your existing API Keys. Selecting an API key allows communication between your private location and Datadog. If you don’t have an existing API key, you can click Generate API key and create one on the dedicated page.

Note: Only Name and API key fields are mandatory.

Then click Save Location and Generate Configuration File to create your private location and generate the associated configuration file (visible in Step 3).

Configure your private location

Depending on your internal network setup, you can add initial configuration parameters (proxy and reserved IPs configuration) to your private location configuration file. The parameters added in Step 2 are automatically reflected in the Step 3 configuration file.

Proxy Configuration

If the traffic between your private location and Datadog has to go through a proxy, specify your proxy URL with the following format: http://<YOUR_USER>:<YOUR_PWD>@<YOUR_IP>:<YOUR_PORT> to add the associated proxyDatadog parameter to your generated configuration file.
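For illustration, this is roughly what the resulting entry looks like in the configuration file. The proxy credentials and address below are placeholders; only the proxyDatadog key name comes from the generated file:

```shell
# Sketch: the proxyDatadog entry in the worker configuration file.
# User, password, IP, and port below are placeholder values.
cat <<'EOF' > worker-config-fragment.json
{
    "proxyDatadog": "http://my-user:my-password@10.0.0.10:3128"
}
EOF
```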

Advanced proxy configuration options are available.

Blocking Reserved IPs

By default, Synthetic users can create Synthetic tests on endpoints using any IP. If you want to prevent users from creating tests on sensitive internal IPs in your network, toggle the Block reserved IPs button to block a default set of reserved IP ranges (IPv4 address registry and IPv6 address registry) and set the associated enableDefaultBlockedIpRanges parameter to true in your generated configuration file.

If some of the endpoints you want to test are located within one or several of the blocked reserved IP ranges, add their IPs and/or CIDRs to the allowed lists; this adds the associated allowedIPRanges parameters to your generated configuration file.
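As a sketch, a configuration fragment combining both options might look like the following (key names come from the generated file; the IP and CIDR values are placeholders):

```shell
# Sketch: block the default reserved ranges but still allow an internal
# subnet and a single host. The addresses below are placeholder values.
cat <<'EOF' > blocked-ips-fragment.json
{
    "enableDefaultBlockedIpRanges": true,
    "allowedIPRanges": ["10.0.10.0/24", "172.16.5.12"]
}
EOF
```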

Advanced reserved IPs configuration options are available.

Advanced Configuration

Advanced configuration options are available; list them by running the following help command:

docker run --rm datadog/synthetics-private-location-worker --help

View your configuration file

After adding the appropriate options to your private location configuration file, copy the file to your working directory.

Note: The configuration file contains secrets for private location authentication, test configuration decryption, and test result encryption. Datadog does not store the secrets, so store them locally before leaving the Private Locations screen. You need to be able to reference these secrets again if you decide to add more workers, or to install workers on another host.

Install your private location

Launch your private location with one of the following: Docker, docker-compose, Kubernetes, Amazon ECS (EC2 or Fargate), or Amazon EKS.

Run this command to boot your private location worker by mounting your configuration file to the container. Ensure that your <MY_WORKER_CONFIG_FILE_NAME>.json file is in your current working directory, since the command mounts it from $PWD:

docker run --rm -v $PWD/<MY_WORKER_CONFIG_FILE_NAME>.json:/etc/datadog/synthetics-check-runner.json datadog/synthetics-private-location-worker:latest

Note: If you blocked reserved IPs, make sure to add the NET_ADMIN Linux capabilities to your private location container.

This command starts a Docker container and makes your private location ready to run tests. We recommend running the container in detached mode with proper restart policy.
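For example, a detached run with a restart policy might look like the sketch below; --cap-add=NET_ADMIN is only required if you enabled the reserved IP blocking option, and the configuration file name is the same placeholder used above:

```shell
# Sketch: run the worker detached, restarting automatically unless stopped.
# --cap-add=NET_ADMIN is only needed when reserved IPs are blocked.
docker run -d \
  --restart unless-stopped \
  --cap-add=NET_ADMIN \
  -v $PWD/<MY_WORKER_CONFIG_FILE_NAME>.json:/etc/datadog/synthetics-check-runner.json \
  datadog/synthetics-private-location-worker:latest
```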

With docker-compose:

  1. Create a docker-compose.yml file with:

    version: "3"
    services:
        synthetics-private-location-worker:
            image: datadog/synthetics-private-location-worker:latest
            volumes:
                - PATH_TO_PRIVATE_LOCATION_CONFIG_FILE:/etc/datadog/synthetics-check-runner.json

    Note: If you blocked reserved IPs, make sure to add the NET_ADMIN Linux capabilities to your private location container.

  2. Start your container with:

    docker-compose -f docker-compose.yml up
With Kubernetes:

  1. Create a Kubernetes ConfigMap with the previously created JSON file by executing the following:

    kubectl create configmap private-location-worker-config --from-file=<MY_WORKER_CONFIG_FILE_NAME>.json
  2. Take advantage of deployments to describe the desired state associated with your private locations. Create the following private-location-worker-deployment.yaml file:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: datadog-private-location-worker
      namespace: default
    spec:
      selector:
        matchLabels:
          app: private-location
      template:
        metadata:
          name: datadog-private-location-worker
          labels:
            app: private-location
        spec:
          containers:
          - name: datadog-private-location-worker
            image: datadog/synthetics-private-location-worker
            volumeMounts:
            - mountPath: /etc/datadog/synthetics-check-runner.json
              name: worker-config
              subPath: <MY_WORKER_CONFIG_FILE_NAME>
          volumes:
          - name: worker-config
            configMap:
              name: private-location-worker-config

    Note: If you blocked reserved IPs, make sure to add the NET_ADMIN Linux capabilities to your private location container.

  3. Apply the configuration:

    kubectl apply -f private-location-worker-deployment.yaml

Create a new ECS (EC2 launch type) task definition like the one below. Make sure to replace each parameter with the corresponding value found in your previously generated private location configuration file:

{
    ...
    "containerDefinitions": [
        {
            "command": [
                "--site='...'",
                "--locationID='...'",
                "--accessKey='...'",
                "--datadogApiKey='...'",
                "--secretAccessKey='...'",
                "--privateKey='-----BEGIN RSA PRIVATE KEY-----XXXXXXXX-----END RSA PRIVATE KEY-----'",
                "--publicKey.pem='-----BEGIN PUBLIC KEY-----XXXXXXXX-----END PUBLIC KEY-----'",
                "--publicKey.fingerprint='...'"
            ],
            ...
            "image": "datadog/synthetics-private-location-worker:latest",
            ...
        }
    ],
    ...
    "compatibilities": [
        "EC2"
    ],
    ...
}

Note: If you blocked reserved IPs, make sure to configure a linuxParameters to grant NET_ADMIN capabilities to your private location containers.

Create a new Fargate task definition like the one below. Make sure to replace each parameter with the corresponding value found in your previously generated private location configuration file:

{
    ...
    "containerDefinitions": [
        {
            "command": [
                "--site='...'",
                "--locationID='...'",
                "--accessKey='...'",
                "--datadogApiKey='...'",
                "--secretAccessKey='...'",
                "--privateKey='-----BEGIN RSA PRIVATE KEY-----XXXXXXXX-----END RSA PRIVATE KEY-----'",
                "--publicKey.pem='-----BEGIN PUBLIC KEY-----XXXXXXXX-----END PUBLIC KEY-----'",
                "--publicKey.fingerprint='...'"
            ],
            ...
            "image": "datadog/synthetics-private-location-worker:latest",
            ...
        }
    ],
    ...
    "compatibilities": [
        "EC2",
        "FARGATE"
    ],
    ...
}

Note: The private location firewall option is not supported on AWS Fargate, so the enableDefaultBlockedIpRanges parameter cannot be set to true.

Because Datadog already integrates with Kubernetes and AWS, it is ready-made to monitor EKS.

  1. Create a Kubernetes ConfigMap with the previously created JSON file by executing the following:

    kubectl create configmap private-location-worker-config --from-file=<MY_WORKER_CONFIG_FILE_NAME>.json
  2. Take advantage of deployments to describe the desired state associated with your private locations. Create the following private-location-worker-deployment.yaml file:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: datadog-private-location-worker
      namespace: default
    spec:
      selector:
        matchLabels:
          app: private-location
      template:
        metadata:
          name: datadog-private-location-worker
          labels:
            app: private-location
        spec:
          containers:
          - name: datadog-private-location-worker
            image: datadog/synthetics-private-location-worker
            volumeMounts:
            - mountPath: /etc/datadog/
              name: worker-config
          volumes:
          - name: worker-config
            configMap:
              name: private-location-worker-config

    Note: If you blocked reserved IPs, make sure to configure a security context to grant NET_ADMIN Linux capabilities to your private location containers.

  3. Apply the configuration:

    kubectl apply -f private-location-worker-deployment.yaml

Set up healthchecks

Add a healthcheck mechanism so your orchestrator can ensure the workers are running correctly.

The /tmp/liveness.date file of private location containers is updated after every successful poll from Datadog (every 500ms by default). The container is considered unhealthy if no poll has been performed recently, for example: no successful poll in the last five minutes.
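All of the configurations below apply the same rule: read the last poll timestamp (epoch milliseconds) from /tmp/liveness.date and fail if it is more than 300000 ms (five minutes) old. A standalone sketch of that logic, using a temporary file in place of the container path:

```shell
# Healthy when the recorded poll timestamp (epoch ms) is < 300000 ms old.
is_alive() {
  [ "$(expr "$(cat "$1")" + 300000)" -gt "$(date +%s%3N)" ]
}

liveness_file=$(mktemp)
date +%s%3N > "$liveness_file"   # pretend the worker just polled
is_alive "$liveness_file" && echo healthy || echo unhealthy
# prints "healthy"
```

Note that date +%s%3N (milliseconds) requires GNU date, which is what the container images ship with.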

Use the following configurations to set up healthchecks on your containers.

With docker-compose:

healthcheck:
  retries: 3
  test: [
    "CMD", "/bin/sh", "-c", "'[ $$(expr $$(cat /tmp/liveness.date) + 300000) -gt $$(date +%s%3N) ]'"
  ]
  timeout: 2s
  interval: 10s
  start_period: 30s
With Kubernetes:

livenessProbe:
  exec:
    command:
      - /bin/sh
      - -c
      - '[ $(expr $(cat /tmp/liveness.date) + 300000) -gt $(date +%s%3N) ]'
  initialDelaySeconds: 30
  periodSeconds: 10
  timeoutSeconds: 2
  failureThreshold: 3
With ECS (EC2):

"healthCheck": {
  "retries": 3,
  "command": [
    "/bin/sh", "-c", "'[ $(expr $(cat /tmp/liveness.date) + 300000) -gt $(date +%s%3N) ]'"
  ],
  "timeout": 2,
  "interval": 10,
  "startPeriod": 30
}
With Fargate:

"healthCheck": {
  "retries": 3,
  "command": [
    "/bin/sh", "-c", "'[ $(expr $(cat /tmp/liveness.date) + 300000) -gt $(date +%s%3N) ]'"
  ],
  "timeout": 2,
  "interval": 10,
  "startPeriod": 30
}
With EKS:

livenessProbe:
  exec:
    command:
      - /bin/sh
      - -c
      - '[ $(expr $(cat /tmp/liveness.date) + 300000) -gt $(date +%s%3N) ]'
  initialDelaySeconds: 30
  periodSeconds: 10
  timeoutSeconds: 2
  failureThreshold: 3

Test your internal endpoint

Once at least one private location container starts reporting to Datadog, the private location status is set to green.

You can then start testing your internal endpoints by launching a fast test on one of them and checking that you get the expected response.

Launch Synthetic tests from your private locations

If your private location reports correctly to Datadog, you should also see an OK health status displayed in the private locations list on the Settings page.

You can then go to any API or Browser test creation form and tick the private locations of interest to have them run your Synthetic test on schedule.

Your private locations can be used just like any other Datadog managed location: assign Synthetic tests to them, visualize test results, collect Synthetic metrics, and so on.

Scale your private locations

You can horizontally scale your private locations by adding or removing workers. Several containers can run for a single private location using one configuration file. Each worker then requests N tests to run depending on its number of free slots: while worker 1 is processing tests, worker 2 requests the following tests, and so on.
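As a sketch, starting two workers from the same configuration file only requires repeating the docker run command (the container names are arbitrary; the file name is the usual placeholder):

```shell
# Two workers sharing the same private location configuration file.
# Each container polls Datadog independently and picks up free test slots.
for i in 1 2; do
  docker run -d --restart unless-stopped \
    --name synthetics-worker-$i \
    -v $PWD/<MY_WORKER_CONFIG_FILE_NAME>.json:/etc/datadog/synthetics-check-runner.json \
    datadog/synthetics-private-location-worker:latest
done
```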

You can also leverage the concurrency parameter value to adjust the number of tests your private location workers can run in parallel.
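For illustration, a configuration fragment raising the parallel slot count might look like this; concurrency is the parameter named above, and 20 is an arbitrary example value:

```shell
# Sketch: raise the number of tests this worker can run in parallel.
# The value 20 is an arbitrary example, not a recommendation.
cat <<'EOF' > concurrency-fragment.json
{
    "concurrency": 20
}
EOF
```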

Hardware Requirements

CPU/Memory

  • Base requirement: 150mCores/150MiB

  • Additional requirement per slot:

Private location test type | Recommended concurrency range | CPU/Memory recommendation
Private location running both API and Browser tests | From 1 to 50 | 150mCores/1GiB per slot
Private location running API tests only | From 1 to 200 | 20mCores/5MiB per slot
Private location running Browser tests only | From 1 to 50 | 150mCores/1GiB per slot

Example: For a private location running both API and Browser tests, with concurrency set to the default 10, the recommendation for safe usage is ~1.65 cores (150mCores + 150mCores × 10 slots) and ~10GiB of memory (150MiB + 1GiB × 10 slots).
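The arithmetic in this example can be reproduced for any concurrency value; a small sketch using the per-slot figures for a mixed API and Browser location:

```shell
# CPU/memory estimate for a mixed API + Browser private location:
# base 150mCores / 150MiB, plus 150mCores / 1GiB (1024MiB) per slot.
concurrency=10
cpu_mcores=$((150 + 150 * concurrency))
mem_mib=$((150 + 1024 * concurrency))
echo "CPU: ${cpu_mcores}mCores (~$((cpu_mcores / 1000)).$(( (cpu_mcores % 1000) / 10 )) cores)"
echo "Memory: ${mem_mib}MiB (~$((mem_mib / 1024))GiB)"
```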

Disk

The recommended disk size is ~10MiB per slot (1MiB per slot for API-only private locations).

Further Reading