A sample application is available on GitHub.

Setup

  1. Install the Datadog Python tracer in your Dockerfile.

    Dockerfile

    RUN pip install --target /dd_tracer/python/ ddtrace

    For more information, see Tracing Python applications.
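A minimal Dockerfile sketch tying these pieces together might look like the following (assuming a requirements.txt and an app.py entrypoint; adapt the names to your project):

```dockerfile
FROM python:3.12-slim

WORKDIR /app

# Application dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Install the Datadog Python tracer into its own directory
RUN pip install --target /dd_tracer/python/ ddtrace

COPY . .

# Make the tracer importable and launch the app through ddtrace-run
ENV PYTHONPATH="/dd_tracer/python/"
ENTRYPOINT ["/dd_tracer/python/bin/ddtrace-run", "python", "app.py"]
```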

  2. Install serverless-init as a sidecar.

    Setup

    Install the Datadog CLI client

    npm install -g @datadog/datadog-ci
    

    Install the gcloud CLI and authenticate with gcloud auth login.

    Configuration

    Configure the Datadog site and Datadog API key, and define the service name to use in Datadog.

    export DATADOG_SITE="<DATADOG_SITE>"
    export DD_API_KEY="<DD_API_KEY>"
    export DD_SERVICE="<SERVICE_NAME>"
    

    Instrument

    If you are new to Datadog serverless monitoring, launch the Datadog CLI in interactive mode for a guided first installation.

    datadog-ci cloud-run instrument -i
    

    To permanently install Datadog for your production applications, run the instrument command in your CI/CD pipelines after your normal deployment. You can specify multiple services to instrument by passing multiple --service flags.

    datadog-ci cloud-run instrument --project <GCP-PROJECT-ID> --service <CLOUD-RUN-SERVICE-NAME> --region <GCP-REGION>
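For example, to instrument two services in one invocation (the project, service, and region names below are hypothetical):

```shell
datadog-ci cloud-run instrument \
  --project my-gcp-project \
  --service payment-service \
  --service checkout-service \
  --region us-central1
```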
    

    Additional parameters can be found in the CLI documentation.

    Create a YAML file that contains your configuration. You can use the following example and adapt it to your needs:

    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: '<SERVICE_NAME>'
      labels:
        cloud.googleapis.com/location: '<LOCATION>'
        service: '<SERVICE_NAME>'
    spec:
      template:
        metadata:
          labels:
            service: '<SERVICE_NAME>'
          annotations:
            # The maximum number of instances that can be created for this service.
            # https://cloud.google.com/run/docs/reference/rest/v1/RevisionTemplate
            autoscaling.knative.dev/maxScale: '100'
            # The startup CPU boost feature for revisions provides additional CPU during
            # instance startup time and for 10 seconds after the instance has started.
            # https://cloud.google.com/run/docs/configuring/services/cpu#startup-boost
            run.googleapis.com/startup-cpu-boost: 'true'
        spec:
          containers:
            - env:
                - name: DD_SERVICE
                  value: '<SERVICE_NAME>'
              image: '<CONTAINER_IMAGE>'
              name: run-sidecar-1
              ports:
                - containerPort: 8080
                  name: http1
              resources:
                limits:
                  cpu: 1000m
                  memory: 512Mi
              startupProbe:
                failureThreshold: 1
                periodSeconds: 240
                tcpSocket:
                  port: 8080
                timeoutSeconds: 240
              volumeMounts:
                - mountPath: /shared-volume
                  name: shared-volume
            - env:
                - name: DD_SERVERLESS_LOG_PATH
                  value: /shared-volume/logs/*.log
                - name: DD_SITE
                  value: '<DATADOG_SITE>'
                - name: DD_ENV
                  value: '<ENV>'
                - name: DD_API_KEY
                  value: '<API_KEY>'
                - name: DD_SERVICE
                  value: '<SERVICE_NAME>'
                - name: DD_VERSION
                  value: '<VERSION>'
                - name: DD_LOG_LEVEL
                  value: debug
                - name: DD_LOGS_INJECTION
                  value: 'true'
                - name: DD_SOURCE
                  value: 'python'
                - name: DD_HEALTH_PORT
                  value: '12345'
              image: gcr.io/datadoghq/serverless-init:latest
              name: serverless-init-1
              resources:
                limits:
                  cpu: 1000m
                  memory: 512Mi
              startupProbe:
                failureThreshold: 3
                periodSeconds: 10
                tcpSocket:
                  port: 12345
                timeoutSeconds: 1
              volumeMounts:
                - mountPath: /shared-volume
                  name: shared-volume
          volumes:
            - emptyDir:
                medium: Memory
                sizeLimit: 512Mi
              name: shared-volume
      traffic:
        - latestRevision: true
          percent: 100
    

    See the Environment variables section for more information.

    In this example, the environment variables, startup health check, and volume mount are already added. If you don’t want to enable logs, remove the shared volume.

    Ensure the container port for the main container is the same as the one exposed in your Dockerfile/service.

    To deploy your container, run:

    gcloud run services replace <FILENAME>.yaml
    

    After deploying your Cloud Run app, you can manually modify your app’s settings to enable Datadog monitoring.

    1. Create a Volume with In-Memory volume type.

    2. Add a new container with image URL: gcr.io/datadoghq/serverless-init:latest.

    3. Add the volume mount to every container in your application. Choose a path such as /shared-volume, and remember it for the next step. For example:

      (Screenshot: the Volume Mounts tab, showing Volume Mount 1 with Name 1 set to shared-logs (In-Memory) and Mount path 1 set to /shared-volume.)

    4. Add the following environment variables to your serverless-init sidecar container:

      • DD_SERVICE: A name for your service. For example, gcr-sidecar-test.
      • DD_ENV: A name for your environment. For example, dev.
      • DD_SERVERLESS_LOG_PATH: Your log path. For example, /shared-volume/logs/*.log. The path must begin with the mount path you defined in the previous step.
      • DD_API_KEY: Your Datadog API key.

      For a list of all environment variables, including additional tags, see Environment variables.

    Add a service label in Google Cloud

    In your Cloud Run service’s info panel, add a label with the following key and value:

    Key: service
    Value: the name of your service, matching the value provided as the DD_SERVICE environment variable.

    See Configure labels for services in the Cloud Run documentation for instructions.
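If you prefer the command line, the same label can be applied with gcloud (the service and region names below are hypothetical):

```shell
gcloud run services update my-service \
  --region us-central1 \
  --update-labels service=my-service
```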

  3. Set up logs.

    In the previous step, you created a shared volume and set the DD_SERVERLESS_LOG_PATH environment variable (or left it at its default, /shared-volume/logs/app.log).

    Next, configure your logging library to write logs to that file. You can also set a custom format for log/trace correlation and other features. Datadog recommends setting the following environment variables:

    • ENV PYTHONUNBUFFERED=1: Ensures Python output appears immediately in container logs instead of being buffered.
    • ENV DD_SOURCE=python: Enables advanced Datadog log parsing.

    Then, update your logging library. For example, you can use Python’s native logging library:

    import logging
    import os
    import sys

    LOG_FILE = "/shared-volume/logs/app.log"
    os.makedirs(os.path.dirname(LOG_FILE), exist_ok=True)

    FORMAT = ('%(asctime)s %(levelname)s [%(name)s] [%(filename)s:%(lineno)d] '
              '[dd.service=%(dd.service)s dd.env=%(dd.env)s dd.version=%(dd.version)s dd.trace_id=%(dd.trace_id)s dd.span_id=%(dd.span_id)s] '
              '- %(message)s')

    logging.basicConfig(
        level=logging.INFO,
        format=FORMAT,
        handlers=[
            logging.FileHandler(LOG_FILE),
            logging.StreamHandler(sys.stdout)
        ]
    )
    logger = logging.getLogger(__name__)

    logger.info('Hello world!')

    For more information, see Correlating Python Logs and Traces.

  4. Send custom metrics.

    To send custom metrics, install the DogStatsD client and view code examples.
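As an illustration of what the DogStatsD client does under the hood, here is a minimal stdlib-only sketch of the plain-text datagram format (metric:value|type|#tags) that serverless-init listens for on localhost:8125 by default. The metric name and tag below are hypothetical:

```python
import socket

def send_metric(name, value, metric_type="c", tags=None):
    """Send one metric in DogStatsD's plain-text datagram format:
    <name>:<value>|<type>[|#tag1:v1,tag2:v2]. Returns the payload sent."""
    payload = f"{name}:{value}|{metric_type}"
    if tags:
        payload += "|#" + ",".join(tags)
    # DogStatsD is fire-and-forget UDP, so this succeeds even with no listener
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload.encode("utf-8"), ("127.0.0.1", 8125))
    return payload

# "checkout.completed" is a hypothetical metric name
send_metric("checkout.completed", 1, tags=["env:dev"])
```

In production code, prefer the official DogStatsD client from the datadog Python package, which handles batching and sampling for you.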

Environment variables

Unless specified otherwise, all environment variables below should be set in the sidecar container. Only environment variables used to configure the tracer should be set in your main application container.

DD_API_KEY
  Datadog API key. Required.
DD_SITE
  Datadog site. Required.
DD_LOGS_INJECTION
  When true, enrich all logs with trace data for supported loggers. See Correlate Logs and Traces for more information. Set in your main application container, not the sidecar container.
DD_SERVICE
  See Unified Service Tagging. Set in all containers. Recommended.
DD_VERSION
  See Unified Service Tagging. Recommended.
DD_ENV
  See Unified Service Tagging. Recommended.
DD_SOURCE
  Set the log source to enable a Log Pipeline for advanced parsing. To automatically apply language-specific parsing rules, set to python, or use your custom pipeline. Defaults to cloudrun.
DD_TAGS
  Add custom tags to your logs, metrics, and traces. Tags should be comma-separated in key:value format (for example: key1:value1,key2:value2).

Troubleshooting

This integration depends on your runtime having a full SSL implementation. If you are using a slim image, you may need to add the following command to your Dockerfile to include certificates:

RUN apt-get update && apt-get install -y ca-certificates

To have your Cloud Run services appear in the software catalog, you must set the DD_SERVICE, DD_VERSION, and DD_ENV environment variables.

Further reading