Google Cloud Run

Overview

Google Cloud Run is a fully managed serverless platform for deploying and scaling container-based applications. Datadog provides monitoring and log collection for Cloud Run through the Google Cloud integration.

To instrument your Google Cloud Run applications with serverless-init, see Instrument Google Cloud Run with serverless-init.

Setup

Application

Node.js

Tracing

In your main application, add the dd-trace-js library. See Tracing Node.js applications for instructions.

In your Dockerfile, set ENV NODE_OPTIONS="--require dd-trace/init". This tells Node.js to load the dd-trace/init module when the process starts.

Metrics

The tracing library also collects custom metrics. See the code examples.

Logs

The Datadog sidecar collects logs through a shared volume. To forward logs from your main container to the sidecar, configure your application to write all logs to a location such as /shared-volume/logs/*.log. If you deploy through the GCP UI, follow the steps below to add the DD_SERVERLESS_LOG_PATH environment variable and a shared volume mount to both the main and sidecar containers. If you deploy with YAML or Terraform, the environment variables, health check, and volume mount are already included in the examples.

To set up logging in your application, see Node.js Log Collection. To set up trace log correlation, see Correlating Node.js Logs and Traces.

Python

Tracing

In your main application, add the dd-trace-py library. See Tracing Python Applications for instructions. You can also use Tutorial - Enabling Tracing for a Python Application and Datadog Agent in Containers.

Metrics

The tracing library also collects custom metrics. See the code examples.

Logs

The Datadog sidecar collects logs through a shared volume. To forward logs from your main container to the sidecar, configure your application to write all logs to a location such as /shared-volume/logs/*.log. If you deploy through the GCP UI, follow the steps below to add the DD_SERVERLESS_LOG_PATH environment variable and a shared volume mount to both the main and sidecar containers. If you deploy with YAML or Terraform, the environment variables, health check, and volume mount are already included in the examples.

To set up logging in your application, see Python Log Collection. Python Logging Best Practices can also be helpful. To set up trace log correlation, see Correlating Python Logs and Traces.
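As a minimal sketch of writing logs to the shared volume with only the standard library, the application can point a FileHandler at the mounted directory. The relative shared-volume/logs path here stands in for the /shared-volume/logs mount in Cloud Run; the JSON format is illustrative.

```python
import json
import logging
import os

# In Cloud Run this would be the mounted shared volume, /shared-volume/logs.
# A relative path is used here so the sketch runs anywhere.
LOG_DIR = "shared-volume/logs"
os.makedirs(LOG_DIR, exist_ok=True)

# Emit one JSON log line per record so the sidecar can tail the file.
handler = logging.FileHandler(os.path.join(LOG_DIR, "app.log"))
handler.setFormatter(logging.Formatter(json.dumps({
    "timestamp": "%(asctime)s",
    "level": "%(levelname)s",
    "message": "%(message)s",
})))

logger = logging.getLogger("app")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

logger.info("Hello from Cloud Run")
```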

Java

Tracing

In your main application, add the dd-trace-java library. Follow the instructions in Tracing Java Applications or use the following example Dockerfile to add and start the tracing library with automatic instrumentation:

FROM eclipse-temurin:17-jre-jammy
WORKDIR /app
COPY target/cloudrun-java-1.jar cloudrun-java-1.jar

# Add the Datadog tracer
ADD 'https://dtdg.co/latest-java-tracer' dd-java-agent.jar

EXPOSE 8080

# Start the Datadog tracer with the javaagent argument
ENTRYPOINT [ "java", "-javaagent:dd-java-agent.jar", "-jar", "cloudrun-java-1.jar" ]

Metrics

To collect custom metrics, install the Java DogStatsD client.

Logs

The Datadog sidecar collects logs through a shared volume. To forward logs from your main container to the sidecar, configure your application to write all logs to a location such as /shared-volume/logs/*.log. If you deploy through the GCP UI, follow the steps below to add the DD_SERVERLESS_LOG_PATH environment variable and a shared volume mount to both the main and sidecar containers. If you deploy with YAML or Terraform, the environment variables, health check, and volume mount are already included in the examples.

To set up logging in your application, see Java Log Collection. To set up trace log correlation, see Correlating Java Logs and Traces.

Go

Tracing

In your main application, add the dd-trace-go library. See Tracing Go Applications for instructions.

Metrics

The tracing library also collects custom metrics. See the code examples.

Logs

The Datadog sidecar collects logs through a shared volume. To forward logs from your main container to the sidecar, configure your application to write all logs to a location such as /shared-volume/logs/*.log. If you deploy through the GCP UI, follow the steps below to add the DD_SERVERLESS_LOG_PATH environment variable and a shared volume mount to both the main and sidecar containers. If you deploy with YAML or Terraform, the environment variables, health check, and volume mount are already included in the examples.

To set up logging in your application, see Go Log Collection. To set up trace log correlation, see Correlating Go Logs and Traces.

.NET

Tracing

In your main application, add the .NET tracing library. See Tracing .NET Applications for instructions.

Example Dockerfile:

FROM mcr.microsoft.com/dotnet/aspnet:8.0-jammy
WORKDIR /app
COPY ./bin/Release/net8.0/publish /app

ADD https://github.com/DataDog/dd-trace-dotnet/releases/download/v2.56.0/datadog-dotnet-apm_2.56.0_amd64.deb /opt/datadog/datadog-dotnet-apm_2.56.0_amd64.deb
RUN dpkg -i /opt/datadog/datadog-dotnet-apm_2.56.0_amd64.deb
RUN mkdir -p /shared-volume/logs/

ENV CORECLR_ENABLE_PROFILING=1
ENV CORECLR_PROFILER={846F5F1C-F9AE-4B07-969E-05C26BC060D8}
ENV CORECLR_PROFILER_PATH=/opt/datadog/Datadog.Trace.ClrProfiler.Native.so
ENV DD_DOTNET_TRACER_HOME=/opt/datadog/

ENV DD_TRACE_DEBUG=true

ENTRYPOINT ["dotnet", "dotnet.dll"]

Metrics

The tracing library also collects custom metrics. See the code examples.

Logs

The Datadog sidecar collects logs through a shared volume. To forward logs from your main container to the sidecar, configure your application to write all logs to a location such as /shared-volume/logs/*.log. If you deploy through the GCP UI, follow the steps below to add the DD_SERVERLESS_LOG_PATH environment variable and a shared volume mount to both the main and sidecar containers. If you deploy with YAML or Terraform, the environment variables, health check, and volume mount are already included in the examples.

To set up logging in your application, see C# Log Collection. To set up trace log correlation, see Correlating .NET Logs and Traces.

PHP

Tracing

In your main application, add the dd-trace-php library. See Tracing PHP Applications for instructions.

Metrics

The tracing library also collects custom metrics. See the code examples.

Logs

The Datadog sidecar collects logs through a shared volume. To forward logs from your main container to the sidecar, configure your application to write all logs to a location such as /shared-volume/logs/*.log. If you deploy through the GCP UI, follow the steps below to add the DD_SERVERLESS_LOG_PATH environment variable and a shared volume mount to both the main and sidecar containers. If you deploy with YAML or Terraform, the environment variables, health check, and volume mount are already included in the examples.

To set up logging in your application, see PHP Log Collection. To set up trace log correlation, see Correlating PHP Logs and Traces.

Containers

Sidecar container

  1. In Cloud Run, select Edit & Deploy New Revision.

  2. At the bottom of the page, select Add Container.

  3. For Container image URL, select gcr.io/datadoghq/serverless-init:latest.

  4. Go to Volume Mounts and set up a volume mount for logs. Ensure that the mount path matches your application’s write location. For example, under Mounted volumes, set Name to shared-logs (In-Memory) and Mount path to /shared-volume.

  5. Go to Settings and add a startup check.

    • Select health check type: Startup check
    • Select probe type: TCP
    • Port: Enter a port number. Make note of this, as it is used in the next step.
  6. Go to Variables & Secrets and add the following environment variables as name-value pairs:

    • DD_SERVICE: A name for your service. For example, gcr-sidecar-test.
    • DD_ENV: A name for your environment. For example, dev.
    • DD_SERVERLESS_LOG_PATH: Your log path. For example, /shared-volume/logs/*.log.
    • DD_API_KEY: Your Datadog API key.
    • DD_HEALTH_PORT: The port you selected for the startup check in the previous step.

    For a list of all environment variables, including additional tags, see Environment variables.

Main container

  1. Go to Volume Mounts and add the same shared volume as you did for the sidecar container. Note: Save your changes by selecting Done. Do not deploy changes until the final step.
  2. Go to Variables & Secrets and add the same environment variables that you set for the sidecar container. Omit DD_HEALTH_PORT.
  3. Go to Settings. In the Container start up order drop-down menu, select your sidecar.
  4. Deploy your main application.

To deploy your Cloud Run service with a YAML service specification:

  1. Create a YAML file that contains the following:

    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: '<SERVICE_NAME>'
      labels:
        cloud.googleapis.com/location: '<LOCATION>'
    spec:
      template:
        metadata:
          annotations:
            autoscaling.knative.dev/maxScale: '100' # The maximum number of instances that can be created for this service. https://cloud.google.com/run/docs/reference/rest/v1/RevisionTemplate
            run.googleapis.com/container-dependencies: '{"run-sidecar-1":["serverless-init-1"]}' # Configure container start order for sidecar deployments https://cloud.google.com/run/docs/configuring/services/containers#container-ordering
            run.googleapis.com/startup-cpu-boost: 'true' # The startup CPU boost feature for revisions provides additional CPU during instance startup time and for 10 seconds after the instance has started. https://cloud.google.com/run/docs/configuring/services/cpu#startup-boost
        spec:
          containers:
            - env:
                - name: DD_SERVERLESS_LOG_PATH
                  value: /shared-volume/logs/*.log
                - name: DD_SITE
                  value: '<DATADOG_SITE>'
                - name: DD_ENV
                  value: serverless
                - name: DD_API_KEY
                  value: '<API_KEY>'
                - name: DD_SERVICE
                  value: '<SERVICE_NAME>'
                - name: DD_VERSION
                  value: '<VERSION>'
                - name: DD_LOG_LEVEL
                  value: debug
                - name: DD_LOGS_INJECTION
                  value: 'true'
              image: '<CONTAINER_IMAGE>'
              name: run-sidecar-1
              ports:
                - containerPort: 8080
                  name: http1
              resources:
                limits:
                  cpu: 1000m
                  memory: 512Mi
              startupProbe:
                failureThreshold: 1
                periodSeconds: 240
                tcpSocket:
                  port: 8080
                timeoutSeconds: 240
              volumeMounts:
                - mountPath: /shared-volume
                  name: shared-volume
            - env:
                - name: DD_SERVERLESS_LOG_PATH
                  value: /shared-volume/logs/*.log
                - name: DD_SITE
                  value: datadoghq.com
                - name: DD_ENV
                  value: serverless
                - name: DD_API_KEY
                  value: '<API_KEY>'
                - name: DD_SERVICE
                  value: '<SERVICE_NAME>'
                - name: DD_VERSION
                  value: '<VERSION>'
                - name: DD_LOG_LEVEL
                  value: debug
                - name: DD_LOGS_INJECTION
                  value: 'true'
                - name: DD_HEALTH_PORT
                  value: '12345'
              image: gcr.io/datadoghq/serverless-init:latest
              name: serverless-init-1
              resources:
                limits:
                  cpu: 1000m
                  memory: 512Mi # Can be updated to a higher memory if needed
              startupProbe:
                failureThreshold: 3
                periodSeconds: 10
                tcpSocket:
                  port: 12345
                timeoutSeconds: 1
              volumeMounts:
                - mountPath: /shared-volume
                  name: shared-volume
          volumes:
            - emptyDir:
                medium: Memory
                sizeLimit: 512Mi
              name: shared-volume
      traffic: # make this revision and all future ones serve 100% of the traffic as soon as possible, overriding any established traffic split
        - latestRevision: true
          percent: 100
    

    In this example, the environment variables, startup health check, and volume mount are already added. If you don’t want to enable logs, remove the shared volume. Ensure the container port for the main container is the same as the one exposed in your Dockerfile/service.

  2. Supply placeholder values:

    • <SERVICE_NAME>: A name for your service. For example, gcr-sidecar-test. See Unified Service Tagging.
    • <LOCATION>: The region you are deploying your service in. For example, us-central.
    • <DATADOG_SITE>: Your Datadog site. For example, datadoghq.com.
    • <API_KEY>: Your Datadog API key.
    • <VERSION>: The version number of your deployment. See Unified Service Tagging.
    • <CONTAINER_IMAGE>: The image of the code you are deploying to Cloud Run. For example, us-docker.pkg.dev/cloudrun/container/hello.
  3. Run:

    gcloud run services replace <FILENAME>.yaml
    

To deploy your Cloud Run service with Terraform, use the following example configuration file. In this example, the environment variables, startup health check, and volume mount are already added. If you don’t want to enable logs, remove the shared volume. Ensure the container port for the main container is the same as the one exposed in your Dockerfile/service. If you do not want to allow public access, remove the IAM policy section.

provider "google" {
  project = "<PROJECT_ID>"
  region  = "<LOCATION>"  # example: us-central1
}

resource "google_cloud_run_service" "terraform_with_sidecar" {
  name     = "<SERVICE_NAME>"
  location = "<LOCATION>"

  template {
    metadata {
      annotations = {
        # Correctly formatted container-dependencies annotation
        "run.googleapis.com/container-dependencies" = jsonencode({main-app = ["sidecar-container"]})
      }
    }
    spec {
      # Define shared volume
      volumes {
        name = "shared-volume"
        empty_dir {
          medium = "Memory"
        }
      }

      # Main application container
      containers {
        name  = "main-app"
        image = "<CONTAINER_IMAGE>"

        # Expose a port for the main container
        ports {
          container_port = 8080
        }
        # Mount the shared volume
        volume_mounts {
          name      = "shared-volume"
          mount_path = "/shared-volume"
        }

        # Startup Probe for TCP Health Check
        startup_probe {
          tcp_socket {
            port = 8080
          }
          initial_delay_seconds = 0  # Delay before the probe starts
          period_seconds        = 10   # Time between probes
          failure_threshold     = 3   # Number of failures before marking as unhealthy
          timeout_seconds       = 1  # Seconds before the probe times out
        }

        # Environment variables for the main container
        env {
          name  = "DD_SITE"
          value = "<DATADOG_SITE>"
        }
        env {
          name  = "DD_SERVERLESS_LOG_PATH"
          value = "/shared-volume/logs/*.log"
        }
        env {
          name  = "DD_ENV"
          value = "serverless"
        }
        env {
          name  = "DD_API_KEY"
          value = "<API_KEY>"
        }
        env {
          name  = "DD_SERVICE"
          value = "<SERVICE_NAME>"
        }
        env {
          name  = "DD_VERSION"
          value = "<VERSION>"
        }
        env {
          name  = "DD_LOG_LEVEL"
          value = "debug"
        }
        env {
          name  = "DD_LOGS_INJECTION"
          value = "true"
        }
        env {
          name  = "FUNCTION_TARGET"
          value = "<FUNCTION_NAME>" # only needed for cloud run functions
        }

        # Resource limits for the main container
        resources {
          limits = {
            memory = "512Mi"
            cpu    = "1"
          }
        }
      }

      # Sidecar container
      containers {
        name  = "sidecar-container"
        image = "gcr.io/datadoghq/serverless-init:latest"

        # Mount the shared volume
        volume_mounts {
          name      = "shared-volume"
          mount_path = "/shared-volume"
        }

        # Startup Probe for TCP Health Check
        startup_probe {
          tcp_socket {
            port = 12345
          }
          initial_delay_seconds = 0  # Delay before the probe starts
          period_seconds        = 10   # Time between probes
          failure_threshold     = 3   # Number of failures before marking as unhealthy
          timeout_seconds       = 1
        }

        # Environment variables for the sidecar container
        env {
          name  = "DD_SITE"
          value = "<DATADOG_SITE>"
        }
        env {
          name  = "DD_SERVERLESS_LOG_PATH"
          value = "/shared-volume/logs/*.log"
        }
        env {
          name  = "DD_ENV"
          value = "serverless"
        }
        env {
          name  = "DD_API_KEY"
          value = "<API_KEY>"
        }
        env {
          name  = "DD_SERVICE"
          value = "<SERVICE_NAME>"
        }
        env {
          name  = "DD_VERSION"
          value = "<VERSION>"
        }
        env {
          name  = "DD_LOG_LEVEL"
          value = "debug"
        }
        env {
          name  = "DD_LOGS_INJECTION"
          value = "true"
        }
        env {
          name  = "FUNCTION_TARGET"
          value = "<FUNCTION_NAME>" # only needed for cloud run functions
        }
        env {
          name  = "DD_HEALTH_PORT"
          value = "12345"
        }

        # Resource limits for the sidecar
        resources {
          limits = {
            memory = "512Mi"
            cpu    = "1"
          }
        }
      }
    }
  }

  # Define traffic splitting
  traffic {
    percent         = 100
    latest_revision = true
  }
}

# IAM Member to allow public access (optional, adjust as needed)
resource "google_cloud_run_service_iam_member" "invoker" {
  service  = google_cloud_run_service.terraform_with_sidecar.name
  location = google_cloud_run_service.terraform_with_sidecar.location
  role     = "roles/run.invoker"
  member   = "allUsers"
}

Supply placeholder values:

  • <PROJECT_ID>: Your Google Cloud project ID.
  • <LOCATION>: The region you are deploying your service in. For example, us-central1.
  • <SERVICE_NAME>: A name for your service. For example, gcr-sidecar-test. See Unified Service Tagging.
  • <CONTAINER_IMAGE>: The image of the code you are deploying to Cloud Run.
  • <DATADOG_SITE>: Your Datadog site. For example, datadoghq.com.
  • <API_KEY>: Your Datadog API key.
  • <VERSION>: The version number of your deployment. See Unified Service Tagging.

Environment variables

  • DD_API_KEY: Your Datadog API key. Required.
  • DD_SITE: Your Datadog site. Required.
  • DD_LOGS_INJECTION: When true, enrich all logs with trace data for supported loggers in Java, Node, .NET, and PHP. See additional docs for Python, Go, and Ruby.
  • DD_SERVICE: See Unified Service Tagging.
  • DD_VERSION: See Unified Service Tagging.
  • DD_ENV: See Unified Service Tagging.
  • DD_SOURCE: See Unified Service Tagging.
  • DD_TAGS: See Unified Service Tagging.

Do not use the DD_LOGS_ENABLED environment variable. This variable is only used for the serverless-init install method.

Example application

The following examples each contain a single app with tracing, metrics, and logs set up.

index.js

const tracer = require('dd-trace').init({
 logInjection: true,
});
const express = require("express");
const app = express();
const { createLogger, format, transports } = require('winston');

const logger = createLogger({
 level: 'info',
 exitOnError: false,
 format: format.json(),
 transports: [new transports.File({ filename: `/shared-volume/logs/app.log`}),
  ],
});

app.get("/", (_, res) => {
 logger.info("Welcome!");
 res.sendStatus(200);
});

app.get("/hello", (_, res) => {
 logger.info("Hello!");
 const metricPrefix = "nodejs-cloudrun";
 // Send three unique metrics, just so we're testing more than one single metric
 const metricsToSend = ["sample_metric_1", "sample_metric_2", "sample_metric_3"];
 metricsToSend.forEach((metric) => {
   for (let i = 0; i < 20; i++) {
     tracer.dogstatsd.distribution(`${metricPrefix}.${metric}`, 1);
   }
 });
 res.status(200).json({ msg: "Sending metrics to Datadog" });
});

const port = process.env.PORT || 8080;
app.listen(port);

app.py

import logging

import ddtrace
from ddtrace import tracer
from datadog import initialize, statsd
from flask import Flask, render_template, request

ddtrace.patch(logging=True)
app = Flask(__name__)
options = {
   'statsd_host':'127.0.0.1',
   'statsd_port':8125
}
FORMAT = ('%(asctime)s %(levelname)s [%(name)s] [%(filename)s:%(lineno)d] '
         '[dd.service=%(dd.service)s dd.env=%(dd.env)s dd.version=%(dd.version)s dd.trace_id=%(dd.trace_id)s dd.span_id=%(dd.span_id)s] '
         '- %(message)s')
logging.basicConfig(level=logging.DEBUG, filename='app.log', format=FORMAT)
logger = logging.getLogger(__name__)
logger.level = logging.INFO

ddlogs = []

@app.route('/', methods=["GET"])
@ddtrace.tracer.wrap(service="dd_gcp_log_forwarder")
def index():
   log = request.args.get("log")
   if log is not None:
       with tracer.trace('sending_logs') as span:
           statsd.increment('dd.gcp.logs.sent')
           span.set_tag('logs', 'nina')
           logger.info(log)
           ddlogs.append(log)
   return render_template("home.html", logs=ddlogs)

if __name__ == '__main__':
   tracer.configure(port=8126)
   initialize(**options)
   app.run(debug=True)

home.html

<!DOCTYPE html>
<html lang="en">
<head>
   <meta charset="UTF-8">
   <meta http-equiv="X-UA-Compatible" content="IE=edge">
   <meta name="viewport" content="width=device-width, initial-scale=1.0">
   <title>Datadog Test</title>
</head>
<body>
   <h1>Welcome to Datadog!💜</h1>
   <form action="">
       <input type="text" name="log" placeholder="Enter Log">
       <button>Add Log</button>
   </form>
   <h3>Logs Sent to Datadog:</h3>
   <ul>
   {% for log in logs%}
       {% if log %}
       <li>{{ log }}</li>
       {% endif %}
   {% endfor %}
   </ul>
</body>
</html>

HelloController.java
package com.example.springboot;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

import com.timgroup.statsd.NonBlockingStatsDClientBuilder;
import com.timgroup.statsd.StatsDClient;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

@RestController
public class HelloController {
   private static final StatsDClient statsd = new NonBlockingStatsDClientBuilder().hostname("localhost").build();
   private static final Log logger = LogFactory.getLog(HelloController.class);

   @GetMapping("/")
   public String index() {
       statsd.incrementCounter("page.views");
       logger.info("Hello Cloud Run!");
       return "💜 Hello Cloud Run! 💜";
   }
}
main.go

package main


import (
   "fmt"
   "log"
   "net/http"
   "os"
   "path/filepath"


   "github.com/DataDog/datadog-go/v5/statsd"
   "gopkg.in/DataDog/dd-trace-go.v1/ddtrace"
   "gopkg.in/DataDog/dd-trace-go.v1/ddtrace/tracer"
)


const logDir = "/shared-volume/logs"

var logFile *os.File
var logCounter int
var dogstatsdClient *statsd.Client

func handler(w http.ResponseWriter, r *http.Request) {
   log.Println("Yay!! Main container works")
   span := tracer.StartSpan("maincontainer", tracer.ResourceName("/handler"))
   defer span.Finish()
   logCounter++
   writeLogsToFile(fmt.Sprintf("received request %d", logCounter), span.Context())
   dogstatsdClient.Incr("request.count", []string{"test-tag"}, 1)
}

func writeLogsToFile(log_msg string, context ddtrace.SpanContext) {
   span := tracer.StartSpan(
       "writeLogToFile",
       tracer.ResourceName("/writeLogsToFile"),
       tracer.ChildOf(context))
   defer span.Finish()
   _, err := logFile.WriteString(log_msg + "\n")
   if err != nil {
       log.Println("Error writing to log file:", err)
   }
}

func main() {
   log.Print("Main container started...")

   err := os.MkdirAll(logDir, 0755)
   if err != nil {
       panic(err)
   }
   logFilePath := filepath.Join(logDir, "maincontainer.log")
   log.Println("Saving logs in ", logFilePath)
   logFileLocal, err := os.OpenFile(logFilePath, os.O_WRONLY|os.O_APPEND|os.O_CREATE, 0644)
   if err != nil {
       panic(err)
   }
   defer logFileLocal.Close()

   logFile = logFileLocal

   dogstatsdClient, err = statsd.New("localhost:8125")
   if err != nil {
       panic(err)
   }
   defer dogstatsdClient.Close()

   tracer.Start()
   defer tracer.Stop()

   http.HandleFunc("/", handler)
   log.Fatal(http.ListenAndServe(":8080", nil))
}
Index.cshtml.cs

using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.RazorPages;
using Serilog;
using Serilog.Formatting.Json;
using Serilog.Formatting.Compact;
using Serilog.Sinks.File;
using StatsdClient;


namespace dotnet.Pages;


public class IndexModel : PageModel
{
   private readonly static DogStatsdService _dsd;
   static IndexModel()
   {
       var dogstatsdConfig = new StatsdConfig
       {
           StatsdServerName = "127.0.0.1",
           StatsdPort = 8125,
       };


       _dsd = new DogStatsdService();
       _dsd.Configure(dogstatsdConfig);


       Log.Logger = new LoggerConfiguration()
           .WriteTo.File(new RenderedCompactJsonFormatter(), "/shared-volume/logs/app.log")
           .CreateLogger();
   }
   public void OnGet()
   {
       _dsd.Increment("page.views");
       Log.Information("Hello Cloud Run!");
   }
}
index.php

<?php


require __DIR__ . '/vendor/autoload.php';


use DataDog\DogStatsd;
use Monolog\Logger;
use Monolog\Handler\StreamHandler;
use Monolog\Formatter\JsonFormatter;


$statsd = new DogStatsd(
   array('host' => '127.0.0.1',
         'port' => 8125,
    )
 );


$log = new Logger('datadog');
$formatter = new JsonFormatter();


$stream = new StreamHandler('/shared-volume/logs/app.log', Logger::DEBUG);
$stream->setFormatter($formatter);


$log->pushHandler($stream);


$log->info("Hello Datadog!");
echo '💜 Hello Datadog! 💜';


$log->info("sending a metric");
$statsd->increment('page.views', 1, array('environment'=>'dev'));


?>

Further reading

Additional helpful documentation, links, and articles: