Live Processes

Introduction

Datadog’s Live Processes gives you real-time visibility into the processes running on your infrastructure. Use Live Processes to:

  • View all of your running processes in one place
  • Break down the resource consumption on your hosts and containers at the process level
  • Query for processes running on a specific host, in a specific zone, or running a specific workload
  • Monitor the performance of the internal and third-party software you run using system metrics at two-second granularity
  • Add context to your dashboards and notebooks

Installation

If you are using Agent 5, follow this specific installation process. If you are using Agent 6 or 7, see the instructions below.

Once the Datadog Agent is installed, enable Live Processes collection by setting the following parameter to true in the Agent's main configuration file:

process_config:
    enabled: "true"

The enabled value is a string with the following options:

  • "true": Enable the Process Agent to collect processes and containers.
  • "false" (default): Only collect containers if available.
  • "disabled": Don’t run the Process Agent at all.

Additionally, some configuration options may be set as environment variables.

Note: Options set as environment variables override the settings defined in the configuration file.
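
For example, setting the following environment variable on the Agent host enables process collection regardless of the value of process_config.enabled in the configuration file (this is the same variable used in the container instructions below):

DD_PROCESS_AGENT_ENABLED=true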

After configuration is complete, restart the Agent.

Follow the instructions for the Docker Agent, passing in the following attributes, in addition to any other custom settings as appropriate:

-v /etc/passwd:/etc/passwd:ro
-e DD_PROCESS_AGENT_ENABLED=true
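
Put together, a full run command might look like the following sketch, combining these attributes with the standard Docker Agent options (substitute your own API key, and adjust the image tag and other settings as needed):

docker run -d --name datadog-agent \
  -e DD_API_KEY=<YOUR_API_KEY> \
  -e DD_PROCESS_AGENT_ENABLED=true \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -v /proc/:/host/proc/:ro \
  -v /sys/fs/cgroup/:/host/sys/fs/cgroup:ro \
  -v /etc/passwd:/etc/passwd:ro \
  gcr.io/datadoghq/agent:7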

Note:

  • To collect container information in the standard install, the dd-agent user must have permissions to access docker.sock.
  • Running the Agent as a container still allows you to collect host processes.

In the dd-agent.yaml manifest used to create the Daemonset, add the following environment variables, volume mount, and volume:

  env:
    - name: DD_PROCESS_AGENT_ENABLED
      value: "true"
  volumeMounts:
    - name: passwd
      mountPath: /etc/passwd
      readOnly: true
  volumes:
    - hostPath:
        path: /etc/passwd
      name: passwd

Refer to the standard Daemonset installation and the Docker Agent information pages for further documentation.

Update your datadog-values.yaml file with the following process collection configuration, then upgrade your Datadog Helm chart:

datadog:
    # (...)
    processAgent:
        enabled: true
        processCollection: true
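
For example, assuming the chart was installed from the datadog/datadog repository under a release named <RELEASE_NAME>, the upgrade command looks like:

helm upgrade -f datadog-values.yaml <RELEASE_NAME> datadog/datadog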

Process Arguments Scrubbing

In order to hide sensitive data on the Live Processes page, the Agent scrubs sensitive arguments from the process command line. This feature is enabled by default and any process argument that matches one of the following words has its value hidden.

"password", "passwd", "mysql_pwd", "access_token", "auth_token", "api_key", "apikey", "secret", "credentials", "stripetoken"

Note: The matching is case insensitive.

Define your own list to be merged with the default one using the custom_sensitive_words field, under the process_config section of your datadog.yaml file. Use wildcards (*) to broaden the matching scope.

process_config:
    scrub_args: true
    custom_sensitive_words: ['personal_key', '*token', 'sql*', '*pass*d*']

Note: Words in custom_sensitive_words must contain only alphanumeric characters, underscores, or wildcards ('*'). A wildcard-only sensitive word is not supported.

With the configuration above, matching arguments are hidden for the processes displayed on the Live Processes page.
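
As an illustration, a hypothetical command line and its scrubbed form might look like the following (the ******** masking is the scrubber's usual replacement; the exact rendering may differ):

java -jar app.jar --personal_key 1234abcd --api_key 9876dcba    # as executed
java -jar app.jar --personal_key ******** --api_key ********    # as displayed on the Live Processes page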

Set scrub_args to false to completely disable the process arguments scrubbing.

You can also scrub all arguments from processes by enabling the strip_proc_arguments flag in your datadog.yaml configuration file:

process_config:
    strip_proc_arguments: true

Queries

Scoping Processes

Processes are, by nature, extremely high cardinality objects. To refine your scope to view relevant processes, you can use text and tag filters.

Text Filters

When you input a text string into the search bar, fuzzy string search is used to query processes containing that text string in their command lines or paths. Enter a string of two or more characters to see results. Below is Datadog’s demo environment, filtered with the string postgres /9..

Note: /9. has matched in the command path, and postgres matches the command itself.

To combine multiple string searches into a complex query, use any of the following Boolean operators:

  • AND (intersection): both terms are in the selected events; if nothing is added, AND is taken by default. Example: java AND elasticsearch
  • OR (union): either term is contained in the selected events. Example: java OR python
  • NOT / ! (exclusion): the following term is NOT in the event; use either the word NOT or the ! character to perform the same operation. Example: java NOT elasticsearch, or equivalently java !elasticsearch

Use parentheses to group operators together. For example, (NOT (elasticsearch OR kafka) java) OR python.

Tag Filters

You can also filter your processes using Datadog tags, such as host, pod, user, and service. Input tag filters directly into the search bar, or select them in the facet panel on the left of the page.

Datadog automatically generates a command tag, so that you can filter for:

  • third-party software (e.g. command:mongod, command:nginx)
  • container management software (e.g. command:docker, command:kubelet)
  • common workloads (e.g. command:ssh, command:CRON)
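
For example, text and tag filters can be combined in a single search (the tag values here are illustrative):

postgres NOT kafka env:prod kube_namespace:database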

Aggregating Processes

Tagging enhances navigation. In addition to all existing host-level tags, processes are tagged by user.

Furthermore, processes in ECS containers are also tagged by:

  • task_name
  • task_version
  • ecs_cluster

Processes in Kubernetes containers are tagged by:

  • pod_name
  • kube_pod_ip
  • kube_service
  • kube_namespace
  • kube_replica_set
  • kube_daemon_set
  • kube_job
  • kube_deployment
  • kube_cluster

If you have Unified Service Tagging configured, the env, service, and version tags are also picked up automatically. Having these tags available lets you tie together APM, logs, metrics, and process data. This setup applies to containerized environments only.
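
As a sketch, one common way to set these tags in Kubernetes is through the standard Datadog pod labels (the values here are illustrative):

labels:
  tags.datadoghq.com/env: "prod"
  tags.datadoghq.com/service: "billing-api"
  tags.datadoghq.com/version: "1.4.2"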

Scatter Plot

Use the scatter plot analytic to compare two metrics with one another in order to better understand the performance of your containers.

To access the scatter plot analytic on the Processes page, click the Show Summary graph button, then select the “Scatter Plot” tab:

By default, the graph groups by the command tag key. The size of each dot represents the number of processes in that group, and clicking on a dot displays the individual pids and containers that contribute to the group.

The query at the top of the scatter plot analytic allows you to control:

  • the metrics to display
  • the aggregation method for both metrics
  • the scale of the X and Y axes (linear or log)

Process Monitors

Use the Live Process Monitor to generate alerts based on the count of any group of processes across hosts or tags. You can configure process alerts in the Monitors page. To learn more, see our Live Process Monitor documentation.
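
For example, a process monitor created through the API uses a query of the following shape (a sketch: the search string, scope, timeframe, and threshold are all illustrative):

processes('java AND elasticsearch').over('env:prod').rollup('count').last('5m') > 10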

Processes in Dashboards and Notebooks

You can graph process metrics in dashboards and notebooks using the Timeseries widget. To configure:

  1. Select Live Processes as a data source
  2. Filter using text strings in the search bar
  3. Select a process metric to graph
  4. Filter using tags in the From field
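
For reference, the corresponding request in a dashboard's JSON uses a process query definition. The sketch below assumes the process.stat.cpu.total_pct metric and illustrative filter values; the exact widget structure may vary by API version:

"requests": [
  {
    "process_query": {
      "metric": "process.stat.cpu.total_pct",
      "search_by": "postgres",
      "filter_by": ["env:prod"],
      "limit": 10
    }
  }
]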

Autodetected Integrations

Datadog uses process collection to autodetect the technologies running on your hosts. This identifies Datadog integrations that can help you monitor these technologies. These auto-detected integrations are displayed in the Integrations search:

Each integration has one of two status types:

  • + Detected: This integration is not enabled on any of the hosts running it.
  • ✓ Partial Visibility: This integration is enabled on some, but not all, of the hosts running it.

Hosts that are running the integration, but where the integration is not enabled, can be found in the Hosts tab of the integrations tile.

Processes across the Platform

In Live Containers

Live Processes adds extra visibility to your container deployments by monitoring the processes running on each of your containers. Click on a container in the Live Containers page to view its process tree, including the commands it is running and their resource consumption. Use this data alongside other container metrics to determine the root cause of failing containers or deployments.

In APM

In APM Traces, you can click on a service’s span to see the processes running on its underlying infrastructure. A service’s span processes are correlated with the hosts or pods on which the service runs at the time of the request. Analyze process metrics such as CPU and RSS memory alongside code-level errors to distinguish between application-specific and wider infrastructure issues. Clicking on a process will bring you to the Live Processes page. Related processes are not currently supported for serverless and browser traces.

In Network Performance Monitoring

When you inspect a dependency in the Network Overview, you can view processes running on the underlying infrastructure of the endpoints (e.g. services) communicating with one another. Use process metadata to determine whether poor network connectivity (indicated by a high number of TCP retransmits) or high network call latency (indicated by high TCP round-trip time) could be due to heavy workloads consuming those endpoints’ resources, and thus, affecting the health and efficiency of their communication.

Real-time monitoring

While you are actively working on the Live Processes page, metrics are collected at 2s resolution. This is important for highly volatile metrics such as CPU. In the background, metrics are collected at 10s resolution for historical context.

Additional Information

  • Collection of open files and the current working directory is limited based on the privileges of the user running dd-process-agent. When dd-process-agent is able to access these fields, they are collected automatically.
  • Real-time (2s) data collection is turned off after 30 minutes. To resume real-time collection, refresh the page.
  • In container deployments, the /etc/passwd file mounted into the docker-dd-agent is necessary to collect usernames for each process. This is a public file and the Process Agent does not use any fields except the username. All features except the user metadata field function without access to this file. Note: Live Processes only uses the host passwd file and does not perform username resolution for users created within containers.
