This tutorial walks you through the steps for enabling tracing on a sample Java application installed in a container. In this scenario, the Datadog Agent is installed on a host.
For other scenarios, including the application and Agent on a host, the application and the Agent in containers or cloud infrastructure, and applications written in different languages, see the other Enabling Tracing tutorials.
For general, comprehensive Java tracing setup documentation, see Tracing Java Applications.
If you haven’t installed a Datadog Agent on your machine, install one now.
Go to Integrations > Agent and select your operating system. For example, on most Linux platforms, you can install the Agent by running the following script, replacing <YOUR_API_KEY>
with your Datadog API key:
DD_AGENT_MAJOR_VERSION=7 DD_API_KEY=<YOUR_API_KEY> DD_SITE="datadoghq.com" bash -c "$(curl -L https://install.datadoghq.com/scripts/install_script.sh)"
To send data to a Datadog site other than datadoghq.com, set the DD_SITE environment variable to your Datadog site.
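For example, if your organization uses the EU site, the same install command looks like this, with only DD_SITE changed (substitute your own API key):
DD_AGENT_MAJOR_VERSION=7 DD_API_KEY=<YOUR_API_KEY> DD_SITE="datadoghq.eu" bash -c "$(curl -L https://install.datadoghq.com/scripts/install_script.sh)"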
Ensure your Agent is configured to receive trace data from containers. Open its configuration file and ensure apm_config:
is uncommented, and apm_non_local_traffic
is uncommented and set to true
.
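For reference, after you uncomment both settings, the relevant section of the Agent's datadog.yaml (located at /etc/datadog-agent/datadog.yaml on most Linux installations; the path varies by operating system) should look roughly like this:
apm_config:
  apm_non_local_traffic: true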
Start the Agent service on the host. The command depends on the operating system, for example:
macOS: launchctl start com.datadoghq.agent
Linux: sudo service datadog-agent start
Verify that the Agent is running and sending data to Datadog by going to Events > Explorer, optionally filtering by the Datadog
Source facet, and looking for an event that confirms the Agent installation on the host:
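If you prefer to verify from the host's command line instead, the Agent also provides a status command; on most Linux installations the following prints a status report that includes an APM Agent section:
sudo datadog-agent status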
The code sample for this tutorial is on GitHub at github.com/Datadog/apm-tutorial-java-host. To get started, clone the repository:
git clone https://github.com/DataDog/apm-tutorial-java-host.git
The repository contains a multi-service Java application pre-configured to be run within Docker containers. The sample app is a basic notes app with a REST API to add and change data.
For this tutorial, the docker-compose
YAML files are located in the folder apm-tutorial-java-host/docker
. The instructions that follow assume that your Agent is running on a Linux host, and so use the service-docker-compose-linux.yaml
file. If your Agent is on a macOS or Windows host, follow the same directions but use the service-docker-compose.yaml
file instead. The Linux file contains Linux-specific Docker settings that are described in the in-file comments.
In each of the notes
and calendar
directories, there are two sets of Dockerfiles for building the applications, either with Maven or with Gradle. This tutorial uses the Maven build, but if you are more familiar with Gradle, you can use it instead with the corresponding changes to build commands.
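For example, if you prefer Gradle, the corresponding change is to point the compose build at the Gradle Dockerfile instead of the Maven one. The filename below is indicative only; check the repository for the exact name:
build:
  context: ../
  dockerfile: notes/dockerfile.notes.gradle  # hypothetical filename; use the Gradle Dockerfile shipped in the repo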
Build the application’s container by running the following from inside the /docker
directory:
docker-compose -f service-docker-compose-linux.yaml build notes
Start the container:
docker-compose -f service-docker-compose-linux.yaml up notes
You can verify that it’s running by viewing the containers with the docker ps
command.
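For example, to narrow the list to the container you just started, use the standard Docker CLI --filter flag; you should see a notes container up and publishing port 8080:
docker ps --filter "name=notes"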
Open up another terminal and send API requests to exercise the app. The notes
application is a REST API that stores data in an in-memory H2 database running in the same container. Send it a few commands:
curl 'localhost:8080/notes'
[]
curl -X POST 'localhost:8080/notes?desc=hello'
{"id":1,"description":"hello"}
curl localhost:8080/notes/1
{"id":1,"description":"hello"}
curl localhost:8080/notes
[{"id":1,"description":"hello"}]
After you’ve seen the application running, stop it so that you can enable tracing on it.
Stop the containers:
docker-compose -f service-docker-compose-linux.yaml down
Remove the containers:
docker-compose -f service-docker-compose-linux.yaml rm
Now that you have a working Java application, configure it to enable tracing.
Add the Java tracing package to your project. Open the notes/dockerfile.notes.maven
file and uncomment the line that downloads dd-java-agent
:
RUN curl -Lo dd-java-agent.jar 'https://dtdg.co/latest-java-tracer'
Within the same notes/dockerfile.notes.maven
file, comment out the ENTRYPOINT
line for running without tracing. Then uncomment the ENTRYPOINT
line, which runs the application with tracing enabled:
ENTRYPOINT ["java" , "-javaagent:../dd-java-agent.jar", "-Ddd.trace.sample.rate=1", "-jar" , "target/notes-0.0.1-SNAPSHOT.jar"]
This automatically instruments the application with Datadog services.
Unified Service Tags identify traced services across different versions and deployment environments so that they can be correlated within Datadog, and so that you can use them to search and filter. The three environment variables used for Unified Service Tagging are DD_SERVICE
, DD_ENV
, and DD_VERSION
. For applications deployed with Docker, these environment variables can be added within the Dockerfile or the docker-compose
file.
For this tutorial, the service-docker-compose-linux.yaml
file already has these environment variables defined:
environment:
- DD_SERVICE=notes
- DD_ENV=dev
- DD_VERSION=0.0.1
You can also see that Docker labels for the same Unified Service Tags (service, env, and version) are set in the docker-compose file. This also lets you collect Docker metrics once your application is running.
labels:
- com.datadoghq.tags.service="notes"
- com.datadoghq.tags.env="dev"
- com.datadoghq.tags.version="0.0.1"
Open the compose file for the containers, docker/service-docker-compose-linux.yaml
.
In the notes
container section, add the environment variable DD_AGENT_HOST
and specify the hostname of the Agent. For Docker 20.10 and later, use host.docker.internal, which resolves to the host machine that is running Docker:
environment:
- DD_AGENT_HOST=host.docker.internal
If your Docker version is older than 20.10, run the following command and use the returned IP address anywhere host.docker.internal is configured:
docker network inspect bridge --format='{{(index .IPAM.Config 0).Gateway}}'
On Linux: Observe that the YAML also specifies an extra_hosts setting, which allows communication on Docker's internal network. If your Docker version is older than 20.10, remove this extra_hosts configuration.
The notes
section of your compose file should look something like this:
notes:
container_name: notes
restart: always
build:
context: ../
dockerfile: notes/dockerfile.notes.maven
ports:
- 8080:8080
extra_hosts: # Linux only
- "host.docker.internal:host-gateway" # Linux only
labels:
- com.datadoghq.tags.service="notes"
- com.datadoghq.tags.env="dev"
- com.datadoghq.tags.version="0.0.1"
environment:
- DD_SERVICE=notes
- DD_ENV=dev
- DD_VERSION=0.0.1
- DD_AGENT_HOST=host.docker.internal
Now that the tracing library is installed and the Agent is running, restart your application to start receiving traces. Run the following commands:
docker-compose -f service-docker-compose-linux.yaml build notes
docker-compose -f service-docker-compose-linux.yaml up notes
With the application running, send some curl requests to it:
curl localhost:8080/notes
[]
curl -X POST 'localhost:8080/notes?desc=hello'
{"id":1,"description":"hello"}
curl localhost:8080/notes/1
{"id":1,"description":"hello"}
curl localhost:8080/notes
[{"id":1,"description":"hello"}]
Wait a few moments, and go to APM > Traces in Datadog, where you can see a list of traces corresponding to your API calls:
Here, h2
is the embedded in-memory database for this tutorial, and notes
is the Spring Boot application. The traces list shows all the spans, when they started, what resource was tracked with the span, and how long it took.
If you don’t see traces after several minutes, check that the Agent is running. Clear any filter in the Traces Search field (sometimes it filters on an environment variable such as ENV
that you aren’t using).
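Another quick check, assuming the Agent is using the default APM receiver port of 8126, is to confirm from the host that something is listening for traces (Linux):
sudo ss -ltnp | grep 8126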
On the Traces page, click on a POST /notes
trace to see a flame graph that shows how long each span took and what other spans occurred before a span completed. The bar at the top of the graph is the span you selected on the previous screen (in this case, the initial entry point into the notes application).
The width of a bar indicates how long it took to complete. A bar at a lower depth represents a span that completes during the lifetime of a bar at a higher depth.
The flame graph for a POST
trace looks something like this:
A GET /notes
trace looks something like this:
The Java tracing library uses Java’s built-in agent and monitoring support. The flag -javaagent:../dd-java-agent.jar
in the Dockerfile tells the JVM where to find the Java tracing library so it can run as a Java Agent. Learn more about Java Agents at https://www.baeldung.com/java-instrumentation.
The dd.trace.sample.rate
flag sets the sample rate for this application. The ENTRYPOINT command in the Dockerfile sets its value to 1
, which means that 100% of all requests to the notes
service are sent to the Datadog backend for analysis and display. For a low-volume test application, this is fine. Do not do this in production or in any high-volume environment, because this results in a very large volume of data. Instead, sample some of your requests. Pick a value between 0 and 1. For example, -Ddd.trace.sample.rate=0.1
sends traces for 10% of your requests to Datadog. Read more about tracing configuration settings and sampling mechanisms.
Notice that the sampling rate flag in the command appears before the -jar
flag. That’s because this is a parameter for the Java Virtual Machine, not your application. Make sure that when you add the Java Agent to your application, you specify the flag in the right location.
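For example, a lower sampling rate would be configured as follows, with the JVM flag still placed before -jar. This is illustrative only; keep the rate at 1 while following this tutorial:
ENTRYPOINT ["java" , "-javaagent:../dd-java-agent.jar", "-Ddd.trace.sample.rate=0.1", "-jar" , "target/notes-0.0.1-SNAPSHOT.jar"]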
Automatic instrumentation is convenient, but sometimes you want more fine-grained spans. Datadog’s Java DD Trace API allows you to specify spans within your code using annotations or code.
The following steps walk you through adding annotations to the code to trace some sample methods.
Open /notes/src/main/java/com/datadog/example/notes/NotesHelper.java
. This example already contains commented-out code that demonstrates the different ways to set up custom tracing on the code.
Uncomment the lines that import libraries to support manual tracing:
import datadog.trace.api.Trace;
import datadog.trace.api.DDTags;
import io.opentracing.Scope;
import io.opentracing.Span;
import io.opentracing.Tracer;
import io.opentracing.tag.Tags;
import io.opentracing.util.GlobalTracer;
import java.io.PrintWriter;
import java.io.StringWriter;
Uncomment the lines that manually trace the two public processes. These demonstrate the use of @Trace
annotations to specify aspects such as operationName
and resourceName
in a trace:
@Trace(operationName = "traceMethod1", resourceName = "NotesHelper.doLongRunningProcess")
// ...
@Trace(operationName = "traceMethod2", resourceName = "NotesHelper.anotherProcess")
You can also create a separate span for a specific code block in the application. Within the span, add service and resource name tags and error handling tags. These tags result in a flame graph showing the span and metrics in Datadog visualizations. Uncomment the lines that manually trace the private method:
Tracer tracer = GlobalTracer.get();
// Tags can be set when creating the span
Span span = tracer.buildSpan("manualSpan1")
.withTag(DDTags.SERVICE_NAME, "NotesHelper")
.withTag(DDTags.RESOURCE_NAME, "privateMethod1")
.start();
try (Scope scope = tracer.activateSpan(span)) {
// Tags can also be set after creation
span.setTag("postCreationTag", 1);
Thread.sleep(30);
Log.info("Hello from the custom privateMethod1");
Also uncomment the lines that set tags on errors:
} catch (Exception e) {
// Set error on span
span.setTag(Tags.ERROR, true);
span.setTag(DDTags.ERROR_MSG, e.getMessage());
span.setTag(DDTags.ERROR_TYPE, e.getClass().getName());
final StringWriter errorString = new StringWriter();
e.printStackTrace(new PrintWriter(errorString));
span.setTag(DDTags.ERROR_STACK, errorString.toString());
Log.info(errorString.toString());
} finally {
span.finish();
}
Update your Maven build by opening notes/pom.xml
and uncommenting the lines configuring dependencies for manual tracing. The dd-trace-api
library is used for the @Trace
annotations, and opentracing-util
and opentracing-api
are used for manual span creation.
Rebuild the containers (if your Agent is on a macOS or Windows host, use service-docker-compose.yaml instead):
docker-compose -f service-docker-compose-linux.yaml build notes
docker-compose -f service-docker-compose-linux.yaml up notes
Resend some HTTP requests, specifically some GET
requests.
In the Trace Explorer, click on one of the new GET
requests, and see a flame graph like this:
Note the higher level of detail in the flame graph now that the getAll
function has custom tracing.
For more information, read Custom Instrumentation.
Tracing a single application is a great start, but the real value in tracing is seeing how requests flow through your services. This is called distributed tracing.
The sample project includes a second application called calendar
that returns a random date whenever it is invoked. The POST
endpoint in the Notes application has a second query parameter named add_date
. When it is set to y
, Notes calls the calendar application to get a date to add to the note.
Configure the calendar app for tracing by adding dd-java-agent
to the startup command in the Dockerfile, like you previously did for the notes app. Open calendar/dockerfile.calendar.maven
and see that it is already downloading dd-java-agent
:
RUN curl -Lo dd-java-agent.jar 'https://dtdg.co/latest-java-tracer'
Within the same calendar/dockerfile.calendar.maven
file, comment out the ENTRYPOINT
line for running without tracing. Then uncomment the ENTRYPOINT
line, which runs the application with tracing enabled:
ENTRYPOINT ["java" , "-javaagent:../dd-java-agent.jar", "-Ddd.trace.sample.rate=1", "-jar" , "target/calendar-0.0.1-SNAPSHOT.jar"]
Open docker/service-docker-compose-linux.yaml
and uncomment the environment variables for the calendar
service to set up the Agent host and Unified Service Tags for the app and for Docker. As you did with the notes
container, set the DD_AGENT_HOST value appropriately for your Docker version, and remove the extra_hosts entries if you are not on Linux:
calendar:
container_name: calendar
restart: always
build:
context: ../
dockerfile: calendar/dockerfile.calendar.maven
ports:
- 9090:9090
labels:
- com.datadoghq.tags.service="calendar"
- com.datadoghq.tags.env="dev"
- com.datadoghq.tags.version="0.0.1"
environment:
- DD_SERVICE=calendar
- DD_ENV=dev
- DD_VERSION=0.0.1
- DD_AGENT_HOST=host.docker.internal
extra_hosts: # Linux only
- "host.docker.internal:host-gateway" # Linux only
In the notes
service section, uncomment the CALENDAR_HOST
environment variable and the calendar
entry in depends_on
to make the needed connections between the two apps:
notes:
...
environment:
- DD_SERVICE=notes
- DD_ENV=dev
- DD_VERSION=0.0.1
- DD_AGENT_HOST=host.docker.internal
- CALENDAR_HOST=calendar
depends_on:
- calendar
Build the multi-service application by restarting the containers. First, stop all running containers:
docker-compose -f service-docker-compose-linux.yaml down
Then run the following commands to start them:
docker-compose -f service-docker-compose-linux.yaml build
docker-compose -f service-docker-compose-linux.yaml up
Send a POST request with the add_date
parameter:
curl -X POST 'localhost:8080/notes?desc=hello_again&add_date=y'
{"id":1,"description":"hello_again with date 2022-11-06"}
In the Trace Explorer, click this latest trace to see a distributed trace between the two services:
Note that you didn’t change anything in the notes
application. Datadog automatically instruments both the okHttp
library used to make the HTTP call from notes
to calendar
, and the Jetty library used to listen for HTTP requests in notes
and calendar
. This allows the trace information to be passed from one application to the other, capturing a distributed trace.
If you’re not receiving traces as expected, set up debug mode for the Java tracer. Read Enable debug mode to find out more.
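For example, one way to enable debug logging for the notes service is to add the tracer's debug environment variable to the compose file alongside the variables you already set; this is a sketch, and you can equally pass -Ddd.trace.debug=true in the ENTRYPOINT instead:
environment:
  - DD_SERVICE=notes
  - DD_ENV=dev
  - DD_VERSION=0.0.1
  - DD_AGENT_HOST=host.docker.internal
  - DD_TRACE_DEBUG=true # remove after troubleshooting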