
In containerized environments, there are a few differences in how the Agent connects to the JMX server, and Autodiscovery features make it possible to set up these integrations dynamically. Use Datadog’s JMX-based integrations to collect JMX application metrics from your pods in Kubernetes.

If you are using the Java tracer for your applications, you can alternatively take advantage of the Java runtime metrics feature to send these metrics to the Agent.

Installation

Use a JMX-enabled Agent

JMX utilities are not installed in the Agent by default. To set up a JMX integration, append -jmx to your Agent’s image tag. For example, gcr.io/datadoghq/agent:latest-jmx.

If you are using Datadog Operator or Helm, the following configurations append -jmx to your Agent’s image tag.

Datadog Operator:

apiVersion: datadoghq.com/v2alpha1
kind: DatadogAgent
metadata:
  name: datadog
spec:
  #(...)
  override:
    nodeAgent:
      image:
        jmxEnabled: true

Helm (values.yaml):

agents:
  image:
    tagSuffix: jmx

Configuration

Use one of the following methods:

Autodiscovery annotations

In this method, a JMX check configuration is applied using annotations on your Java-based Pods. This allows the Agent to automatically configure the JMX check when a new container starts. Ensure these annotations are on the created Pod, and not on the object (Deployment, DaemonSet, etc.) creating the Pod.

Use the following template for Autodiscovery annotations:

apiVersion: v1
kind: Pod
metadata:
  name: <POD_NAME>
  annotations:
    ad.datadoghq.com/<CONTAINER_NAME>.checks: |
      {
        "<INTEGRATION_NAME>": {
          "init_config": {
            "is_jmx": true,
            "collect_default_metrics": true
          },
          "instances": [{
            "host": "%%host%%",
            "port": "<JMX_PORT>"
          }]
        }
      }      
    # (...)
spec:
  containers:
    - name: '<CONTAINER_NAME>'
      # (...)
      env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: JAVA_OPTS
          value: >-
            -Dcom.sun.management.jmxremote
            -Dcom.sun.management.jmxremote.authenticate=false
            -Dcom.sun.management.jmxremote.ssl=false
            -Dcom.sun.management.jmxremote.local.only=false
            -Dcom.sun.management.jmxremote.port=<JMX_PORT>
            -Dcom.sun.management.jmxremote.rmi.port=<JMX_PORT>
            -Djava.rmi.server.hostname=$(POD_IP)            

In this example:

  • <POD_NAME> is the name of your pod.
  • <CONTAINER_NAME> matches the desired container within your pod.
  • <INTEGRATION_NAME> is the name of the desired JMX integration. See the list of available JMX integrations.
  • Set <JMX_PORT> as desired, as long as it matches between the annotations and JAVA_OPTS.

With this configuration, the Datadog Agent discovers this pod and connects to its JMX server using the %%host%% Autodiscovery template variable, which resolves to the IP address of the discovered pod. This is why java.rmi.server.hostname is set to the POD_IP address populated earlier through the Kubernetes downward API.

Note: The JAVA_OPTS environment variable is commonly used in Java-based container images as a startup parameter (for example, java $JAVA_OPTS -jar app.jar). If you are using a custom application, or if your application does not follow this pattern, set these system properties manually.
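
If your image does not consume JAVA_OPTS on its own, one option is to override the container command so the options reach the JVM at startup. The following is a minimal sketch only; the sh wrapper and the /app/app.jar path are placeholder assumptions for your application:

spec:
  containers:
    - name: '<CONTAINER_NAME>'
      # Wrap the startup command so the shell expands JAVA_OPTS at runtime.
      # /app/app.jar is a placeholder for your application jar.
      command: ["sh", "-c", "exec java $JAVA_OPTS -jar /app/app.jar"]
      # (...) define POD_IP and JAVA_OPTS in env as shown in the template above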

Example annotation: Tomcat

The following configuration runs the Tomcat JMX integration against port 9012:

apiVersion: v1
kind: Pod
metadata:
  name: tomcat-test
  annotations:
    ad.datadoghq.com/tomcat.checks: |
      {
        "tomcat": {
          "init_config": {
            "is_jmx": true,
            "collect_default_metrics": true
          },
          "instances": [{
            "host": "%%host%%",
            "port": "9012"
          }]
        }
      }      
spec:
  containers:
    - name: tomcat
      image: tomcat:8.0
      imagePullPolicy: Always
      ports:
        - name: jmx-metrics
          containerPort: 9012
      env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: JAVA_OPTS
          value: >-
            -Dcom.sun.management.jmxremote
            -Dcom.sun.management.jmxremote.authenticate=false
            -Dcom.sun.management.jmxremote.ssl=false
            -Dcom.sun.management.jmxremote.local.only=false
            -Dcom.sun.management.jmxremote.port=9012
            -Dcom.sun.management.jmxremote.rmi.port=9012
            -Djava.rmi.server.hostname=$(POD_IP)            

Custom metric annotation template

If you need to collect additional metrics from these integrations, add them to the init_config section:

ad.datadoghq.com/<CONTAINER_NAME>.checks: |
  {
    "<INTEGRATION_NAME>": {
      "init_config": {
        "is_jmx": true,
        "collect_default_metrics": true,
        "conf": [{
          "include": {
            "domain": "java.lang",
            "type": "OperatingSystem",
            "attribute": {
               "FreePhysicalMemorySize": {
                 "metric_type": "gauge",
                 "alias": "jvm.free_physical_memory"
               } 
            }
          }
        }]
      },
      "instances": [{
        "host": "%%host%%",
        "port": "<JMX_PORT>"
      }]
    }
  }  

See the JMX integration documentation for more information about the formatting for these metrics.

Autodiscovery configuration files

If you need to pass a more complex custom configuration for your Datadog-JMX integration, you can use Autodiscovery Container Identifiers to pass custom integration configuration files as well as a custom metrics.yaml file.

1. Compose configuration file

When using this method, the Agent needs a configuration file and an optional metrics.yaml file for the metrics to collect. These files can either be mounted into the Agent pod or built into the container image.

To name the configuration file, first identify the desired integration name from the list of available integrations. The Agent then expects a configuration file named after that integration, or a conf.yaml file within that integration’s config directory.

For example, for the Tomcat integration, create either:

  • /etc/datadog-agent/conf.d/tomcat.yaml, or
  • /etc/datadog-agent/conf.d/tomcat.d/conf.yaml

If you are using a custom metrics.yaml file, include it in the integration’s config directory. (For example: /etc/datadog-agent/conf.d/tomcat.d/metrics.yaml.)

This configuration file should include ad_identifiers:

ad_identifiers:
  - <CONTAINER_IMAGE>

init_config:
  is_jmx: true
  conf:
    <METRICS_TO_COLLECT>

instances:
  - host: "%%host%%"
    port: "<JMX_PORT>"

Replace <CONTAINER_IMAGE> with the short image name of your desired container. For example, the container image gcr.io/CompanyName/my-app:latest has a short image name of my-app. As the Datadog Agent discovers that container, it sets up the JMX configuration as described in this file.

Alternatively, if you do not want to base Autodiscovery on the short image name, you can assign custom identifiers to your containers and reference those identifiers instead, as shown in the sketch below.
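
For example, you can assign a custom identifier to a container with the ad.datadoghq.com/<CONTAINER_NAME>.check.id pod annotation (see Autodiscovery Container Identifiers) and reference that identifier in ad_identifiers instead of the short image name. A minimal sketch, where my-custom-jmx-app is an arbitrary example label:

metadata:
  annotations:
    ad.datadoghq.com/<CONTAINER_NAME>.check.id: my-custom-jmx-app

The matching configuration file then uses the same label:

ad_identifiers:
  - my-custom-jmx-app

init_config:
  is_jmx: true

instances:
  - host: "%%host%%"
    port: "<JMX_PORT>"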

Like Kubernetes annotations, configuration files can use Autodiscovery template variables. In this case, the host configuration uses %%host%% to resolve to the IP address of the discovered container.

See the JMX integration documentation (as well as the example configurations for the pre-provided integrations) for more information about structuring your init_config and instances configuration for the <METRICS_TO_COLLECT>.
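
For example, the FreePhysicalMemorySize filter from the custom metric annotation template above translates into this file-based format (a minimal sketch):

ad_identifiers:
  - <CONTAINER_IMAGE>

init_config:
  is_jmx: true
  conf:
    - include:
        domain: java.lang
        type: OperatingSystem
        attribute:
          FreePhysicalMemorySize:
            metric_type: gauge
            alias: jvm.free_physical_memory

instances:
  - host: "%%host%%"
    port: "<JMX_PORT>"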

2. Mount configuration file

If you are using Datadog Operator, add an override:

apiVersion: datadoghq.com/v2alpha1
kind: DatadogAgent
metadata:
  name: datadog
spec:
  #(...)
  override:
    nodeAgent:
      image:
        jmxEnabled: true
      extraConfd:
        configDataMap:
          <INTEGRATION_NAME>.yaml: |-
            ad_identifiers:
              - <CONTAINER_IMAGE>

            init_config:
              is_jmx: true

            instances:
              - host: "%%host%%"
                port: "<JMX_PORT>"            

In Helm, use the datadog.confd option:

datadog:
  confd:
    <INTEGRATION_NAME>.yaml: |
      ad_identifiers:
        - <CONTAINER_IMAGE>

      init_config:
        is_jmx: true

      instances:
        - host: "%%host%%"
          port: "<JMX_PORT>"      

If you cannot mount these files in the Agent container (for example, on Amazon ECS), you can build an Agent Docker image containing the desired configuration files.

For example:

FROM gcr.io/datadoghq/agent:latest-jmx
# Copy the integration configuration and custom metrics definitions into the Tomcat check's config directory
COPY <PATH_JMX_CONF_FILE> conf.d/tomcat.d/
COPY <PATH_JMX_METRICS_FILE> conf.d/tomcat.d/

Then use this new custom image as your regular containerized Agent.
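
For example, with the Datadog Operator you can point the node Agent at the custom image through the same override section used earlier. This is a sketch; the registry, image name, and tag are placeholders:

apiVersion: datadoghq.com/v2alpha1
kind: DatadogAgent
metadata:
  name: datadog
spec:
  override:
    nodeAgent:
      image:
        # Placeholders for your registry and the custom image built from the Dockerfile above
        name: <YOUR_REGISTRY>/datadog-agent-custom
        tag: latest-jmx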

3. Expose JMX server

Set up the JMX server in a way that allows the Agent to access it:

spec:
  containers:
    - # (...)
      env:
      - name: POD_IP
        valueFrom:
          fieldRef:
            fieldPath: status.podIP
      - name: JAVA_OPTS
        value: >-
          -Dcom.sun.management.jmxremote
          -Dcom.sun.management.jmxremote.authenticate=false
          -Dcom.sun.management.jmxremote.ssl=false
          -Dcom.sun.management.jmxremote.local.only=false
          -Dcom.sun.management.jmxremote.port=<JMX_PORT>
          -Dcom.sun.management.jmxremote.rmi.port=<JMX_PORT>
          -Djava.rmi.server.hostname=$(POD_IP)             

Available JMX integrations

The Datadog Agent comes with several JMX integrations pre-configured.

Each of these integrations has a predefined metrics.yaml file that matches the expected pattern of the JMX metrics returned by that application. Use the listed integration names as <INTEGRATION_NAME> in your Autodiscovery annotations or configuration files.

Alternatively, use jmx as your <INTEGRATION_NAME> to set up a basic JMX integration and collect only the default jvm.* metrics.
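
For example, the following annotation, a minimal sketch based on the template above, runs the basic jmx check against <JMX_PORT> and collects only the default JVM metrics:

ad.datadoghq.com/<CONTAINER_NAME>.checks: |
  {
    "jmx": {
      "init_config": {
        "is_jmx": true,
        "collect_default_metrics": true
      },
      "instances": [{
        "host": "%%host%%",
        "port": "<JMX_PORT>"
      }]
    }
  }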

Further Reading