Quick Start - Distributed Tracing

If you have read the first example of tracing and want to understand in more depth how tracing works, take the following example, which represents a simple API, thinker-api, and a microservice behind it, thinker-microservice. When the API receives a request with a known subject parameter, it responds with a thought; otherwise, it responds with an error:

  • Request:

    curl 'localhost:5000/think/?subject=technology&subject=foo_bar'
  • Response:

    {
        "technology": {
            "error": false,
            "quote": "For a successful technology, reality must take precedence over public relations, for Nature cannot be fooled.",
            "author": "Richard Feynman"
        },
        "foo_bar": {
            "error": true,
            "reason": "Subject unknown"
        }
    }

Code used

We have two modules:

  • Thinker API: Catches the user's request and forwards it to the thinker-microservice service, propagating the trace context through HTTP headers (a sketch for inspecting these headers follows this list):

    # blinker is required for the Flask signals used by the tracer
    import blinker as _
    import requests
    
    from flask import Flask, Response
    from flask import jsonify
    from flask import request as flask_request
    
    from ddtrace import tracer
    from ddtrace.contrib.flask import TraceMiddleware
    
    # Configuring Datadog tracer
    app = Flask('API')
    traced_app = TraceMiddleware(app, tracer, service='thinker-api')
    
    @app.route('/think/')
    def think_handler():
        thoughts = requests.get('http://thinker:8000/', headers={
            'x-datadog-trace-id': str(tracer.current_span().trace_id),
            'x-datadog-parent-id': str(tracer.current_span().span_id),
        }, params={
            'subject': flask_request.args.getlist('subject', str),
        }).content
        return Response(thoughts, mimetype='application/json')

    if __name__ == '__main__':
        # Serve the API on the port used in the example request
        app.run(host='0.0.0.0', port=5000)
  • Thinker Microservice: Takes a request from thinker-api with one or more subjects and answers with a thought when the subject is “technology”:

    import asyncio
    from collections import namedtuple

    from aiohttp import web
    from ddtrace import tracer
    from ddtrace.contrib.aiohttp import trace_app

    # Simple container for a quote and its author
    Thought = namedtuple('Thought', ['quote', 'author'])

    thoughts = {
        'technology': Thought(
            quote='For a successful technology, reality must take precedence over public relations, for Nature cannot be fooled.',
            author='Richard Feynman',
        ),
    }

    @tracer.wrap(name='think')
    async def think(subject):
        # Tag the current span with the requested subject
        tracer.current_span().set_tag('subject', subject)

        await asyncio.sleep(0.5)
        return thoughts[subject]

    async def handle(request):
        response = {}
        for subject in request.query.getall('subject', []):
            await asyncio.sleep(0.2)
            try:
                thought = await think(subject)
                response[subject] = {
                    'error': False,
                    'quote': thought.quote,
                    'author': thought.author,
                }
            except KeyError:
                response[subject] = {
                    'error': True,
                    'reason': 'Subject unknown'
                }

        return web.json_response(response)

    # Configuring Datadog tracer and enabling distributed tracing
    app = web.Application()
    app.router.add_get('/', handle)

    trace_app(app, tracer, service='thinker-microservice')
    app['datadog_trace']['distributed_tracing_enabled'] = True

    if __name__ == '__main__':
        # Listen on the port the Thinker API forwards requests to
        web.run_app(app, port=8000)

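The distributed trace context travels between the two services in the x-datadog-trace-id and x-datadog-parent-id headers set by the Thinker API. As an illustration only (not part of the example above), a small aiohttp middleware such as the hypothetical log_trace_headers below could print those headers on the microservice side to confirm that the context is arriving:

    from aiohttp import web

    @web.middleware
    async def log_trace_headers(request, handler):
        # Print the propagation headers sent by thinker-api, if present
        trace_id = request.headers.get('x-datadog-trace-id')
        parent_id = request.headers.get('x-datadog-parent-id')
        print('incoming trace context: trace_id=%s parent_id=%s' % (trace_id, parent_id))
        return await handler(request)

    # Register the middleware when creating the application
    app = web.Application(middlewares=[log_trace_headers])
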
The code above is already instrumented. See the dedicated setup documentation to learn how to instrument your application and configure the Datadog Agent.
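
If your Datadog Agent is not listening on its default address, the tracer can be pointed at it explicitly. The following is only a minimal sketch, assuming the same ddtrace API used above and an Agent reachable at the placeholder address agent-host:8126:

    from ddtrace import tracer

    # Point the tracer at the Datadog Agent's trace intake
    # ('agent-host' and 8126 are placeholders for your own setup)
    tracer.configure(hostname='agent-host', port=8126)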

Datadog APM

Once the code is executed, data starts to appear in APM. In the Services list, our two services, thinker-api and thinker-microservice, appear with metrics about their performance:

Clicking on thinker-api directs you to its automatically generated service dashboard. Here we can see more detailed performance data, as well as a list of all of the resources associated with this particular service:

The first function executed in this example is think_handler(), which handles the request and forwards it to the thinker-microservice service.

Selecting the think_handler resource directs you to its automatically generated resource dashboard and a list of traces for this particular resource:

Selecting a trace opens the trace panel containing information such as:

  • The timestamp of the request
  • The status of the request (for example, 200)
  • The different services encountered by the request (for example, thinker-api and thinker-microservice)
  • The time spent by your application processing the traced functions
  • Extra tags such as http.method and http.url

From the above image, we can see how the request is first received by the thinker-api service with the flask.request span, which then forwards it to the thinker-microservice service, where the function think() is executed twice.

In our code we added:

tracer.current_span().set_tag('subject', subject)

This tag gives you more context every time think() is called and traced:

  • The first time think is executed, the subject is technology and everything goes well:

  • The second time think is executed, the subject is foo_bar which is not an expected value and leads to an error:

    This error display is produced automatically by the Datadog instrumentation, but you can override it by setting the tags that carry special meaning for errors.
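
For reference, an error can also be flagged on a span by hand. This is a minimal sketch, assuming the same ddtrace API used above; the tag values are illustrative:

    from ddtrace import tracer

    span = tracer.current_span()
    if span is not None:
        # Mark the span as errored and attach the standard error tags
        span.error = 1
        span.set_tag('error.msg', 'Subject unknown')
        span.set_tag('error.type', 'KeyError')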

Datadog APM lets you trace all interactions of a request with the different services and resources of any application.

Further Reading