Datadog’s Live Processes gives you real-time visibility into the processes running on your infrastructure. Use Live Processes to:
Once the Datadog Agent is installed, enable Live Process collection by setting the following parameter in the Agent’s main configuration file:
```yaml
process_config:
  enabled: 'true'
```
The `enabled` value is a string with the following options:
- `"true"`: Enable the Process Agent to collect processes and containers.
- `"false"` (default): Only collect containers, if available.
- `"disabled"`: Don’t run the Process Agent at all.
Additionally, some configuration options may be set as environment variables.
Note: Options set as environment variables override the settings defined in the configuration file.
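For example, the `process_config.enabled` setting above corresponds to the `DD_PROCESS_AGENT_ENABLED` environment variable, the same variable used in the container examples below:

```shell
# Setting this environment variable for the Agent process overrides
# process_config.enabled in the configuration file:
DD_PROCESS_AGENT_ENABLED=true
```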
After configuration is complete, restart the Agent.
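For example, on a systemd-based Linux host (assuming the standard service name):

```shell
sudo systemctl restart datadog-agent
```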
Follow the instructions for the Docker Agent, passing in the following attributes, in addition to any other custom settings as appropriate:
```shell
-v /etc/passwd:/etc/passwd:ro -e DD_PROCESS_AGENT_ENABLED=true
```
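Put together, a full command might look like the following sketch. The image name, API key placeholder, and the Docker socket and `/proc` mounts are assumptions based on a typical Docker Agent installation, not part of this guide:

```shell
# Hypothetical Docker Agent run with Live Process collection enabled:
docker run -d --name dd-agent \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -v /proc/:/host/proc/:ro \
  -v /etc/passwd:/etc/passwd:ro \
  -e DD_API_KEY=<YOUR_API_KEY> \
  -e DD_PROCESS_AGENT_ENABLED=true \
  gcr.io/datadoghq/agent:7
```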
The `dd-agent` user must have permissions to access `/etc/passwd`.
In the `dd-agent.yaml` manifest used to create the DaemonSet, add the following environment variables, volume mount, and volume:
```yaml
env:
  - name: DD_PROCESS_AGENT_ENABLED
    value: "true"
volumeMounts:
  - name: passwd
    mountPath: /etc/passwd
    readOnly: true
volumes:
  - hostPath:
      path: /etc/passwd
    name: passwd
```
Note: Running the Agent as a container still allows you to collect host processes.
Update your datadog-values.yaml file with the following process collection configuration, then upgrade your Datadog Helm chart:
```yaml
datadog:
  # (...)
  processAgent:
    enabled: true
    processCollection: true
```
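After updating the values file, the upgrade step might look like the following; the release name placeholder and the `datadog/datadog` chart repository alias are assumptions based on a standard Helm installation:

```shell
helm upgrade -f datadog-values.yaml <RELEASE_NAME> datadog/datadog
```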
To hide sensitive data on the Live Processes page, the Agent scrubs sensitive arguments from the process command line. This feature is enabled by default, and any process argument that matches one of the following words has its value hidden.
"password", "passwd", "mysql_pwd", "access_token", "auth_token", "api_key", "apikey", "secret", "credentials", "stripetoken"
Note: The matching is case insensitive.
Define your own list to be merged with the default one using the `custom_sensitive_words` field in the `datadog.yaml` file, under the `process_config` section. Use wildcards (`*`) to define your own matching scope. However, a single wildcard (`'*'`) is not supported as a sensitive word.
```yaml
process_config:
  scrub_args: true
  custom_sensitive_words: ['personal_key', '*token', 'sql*', '*pass*d*']
```
Note: Words in `custom_sensitive_words` must contain only alphanumeric characters, underscores, or wildcards (`'*'`). A wildcard-only sensitive word is not supported.
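The wildcards behave like simple glob patterns matched against argument names. As a rough illustration (not the Agent’s actual implementation), shell glob matching behaves similarly for the `'*pass*d*'` pattern from the example above:

```shell
# Rough illustration of glob-style matching (NOT the Agent's code):
# succeeds when the argument name matches the pattern '*pass*d*'
is_scrubbed() {
  case "$1" in
    *pass*d*) return 0 ;;
    *)        return 1 ;;
  esac
}

is_scrubbed my_password_field && echo "my_password_field: scrubbed"
is_scrubbed token || echo "token: kept"
```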
The next image shows one process on the Live Processes page whose arguments have been hidden by using the configuration above.
Set `scrub_args` to `false` to completely disable process argument scrubbing.
You can also scrub all arguments from processes by enabling the `strip_proc_arguments` flag in your `datadog.yaml` configuration file:
```yaml
process_config:
  strip_proc_arguments: true
```
Processes are, by nature, extremely high-cardinality objects. To refine your scope to view relevant processes, you can use text and tag filters.
When you input a text string into the search bar, fuzzy string search queries processes containing that string in their command lines or paths. Enter a string of two or more characters to see results. Below is Datadog’s demo environment, filtered with the string `postgres /9`: `/9` has matched in the command path, and `postgres` matches the command itself.
To combine multiple string searches into a complex query, use any of the following Boolean operators:
| Operator | Description | Example |
|----------|-------------|---------|
| `AND` | Intersection: both terms are in the selected events (if nothing is added, `AND` is taken by default). | `java AND elasticsearch` |
| `OR` | Union: either term is contained in the selected events. | `java OR python` |
| `NOT` / `!` | Exclusion: the following term is NOT in the event. You may use the word `NOT` or the `!` character. | `java NOT elasticsearch` (equivalent: `java !elasticsearch`) |
Use parentheses to group operators together. For example, `(NOT (elasticsearch OR kafka) java) OR python`.
You can also filter your processes using Datadog tags, such as
service. Input tag filters directly into the search bar, or select them in the facet panel on the left of the page.
Datadog automatically generates a `command` tag, so that you can filter for processes running a specific command.
Tagging enhances navigation. In addition to all existing host-level tags, processes are tagged by
Furthermore, processes in ECS containers are also tagged by:
Processes in Kubernetes containers are tagged by:
If you have configuration for Unified Service Tagging in place, `version` is also picked up automatically.
These tags let you tie together APM, logs, metrics, and process data.
Note that this setup applies to containerized environments only.
Use the scatter plot analytic to compare two metrics against one another to better understand the performance of your containers. To access it on the Live Processes page, click the Show Summary graph button, then select the “Scatter Plot” tab:
By default, the graph groups by the `command` tag key. The size of each dot represents the number of processes in that group; clicking a dot displays the individual PIDs and containers that contribute to the group.
The query at the top of the graph allows you to control your scatter plot analytic:
Use the Live Process Monitor to generate alerts based on the count of any group of processes across hosts or tags. You can configure process alerts in the Monitors page. To learn more, see our Live Process Monitor documentation.
You can graph process metrics in dashboards and notebooks using the Timeseries widget. To configure:
Datadog uses process collection to autodetect the technologies running on your hosts. This identifies Datadog integrations that can help you monitor these technologies. These auto-detected integrations are displayed in the Integrations search:
Each integration has one of two status types:
Hosts that are running the integration, but where the integration is not enabled, can be found in the Hosts tab of the integrations tile.
Live Processes adds extra visibility to your container deployments by monitoring the processes running on each of your containers. Click on a container in the Live Containers page to view its process tree, including the commands it is running and their resource consumption. Use this data alongside other container metrics to determine the root cause of failing containers or deployments.
In APM Traces, you can click on a service’s span to see the processes running on its underlying infrastructure. A service’s span processes are correlated with the hosts or pods on which the service runs at the time of the request. Analyze process metrics such as CPU and RSS memory alongside code-level errors to distinguish between application-specific and wider infrastructure issues. Clicking on a process will bring you to the Live Processes page. Related processes are not currently supported for serverless and browser traces.
When you inspect a dependency in the Network Overview, you can view processes running on the underlying infrastructure of the endpoints (e.g. services) communicating with one another. Use process metadata to determine whether poor network connectivity (indicated by a high number of TCP retransmits) or high network call latency (indicated by high TCP round-trip time) could be due to heavy workloads consuming those endpoints' resources, and thus, affecting the health and efficiency of their communication.
While you are actively working with Live Processes, metrics are collected at 2-second resolution. This is important for highly volatile metrics such as CPU. In the background, metrics are collected at 10-second resolution for historical context.
`dd-process-agent`. In the event that `dd-process-agent` is able to access these fields, they are collected automatically.

The `/etc/passwd` file mounted into `docker-dd-agent` is necessary to collect usernames for each process. This is a public file, and the Process Agent does not use any fields except the username. All features except the `user` metadata field function without access to this file. Note: Live Processes only uses the host `passwd` file and does not perform username resolution for users created within containers.
Additional helpful documentation, links, and articles: