
Learn how to use Datadog’s integration with the Ragas framework to evaluate your LLM application. For more information about the Ragas integration, including a detailed setup guide, see Ragas Evaluations.

  1. Install required dependencies:

    pip install ragas==0.1.21 openai "ddtrace>=3.0.0"
    
  2. Create a file named quickstart.py with the following code:

    import os
    from ddtrace.llmobs import LLMObs
    from ddtrace.llmobs.utils import Prompt
    from openai import OpenAI
    
    # Enable LLM Observability; agentless mode sends data directly to Datadog
    # without requiring a local Datadog Agent.
    LLMObs.enable(
        ml_app="test-rag-app",
        agentless_enabled=True,
    )
    
    oai_client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))
    
    rag_context = "The First AFL–NFL World Championship Game was an American football game played on January 15, 1967, at the Los Angeles Memorial Coliseum in Los Angeles"
    
    # Annotate the LLM span with the retrieval context as a prompt variable;
    # the Ragas Faithfulness evaluation uses this context from the span.
    with LLMObs.annotation_context(
        prompt=Prompt(variables={"context": rag_context}),
    ):
        completion = oai_client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": "Answer the user's question given the following context information {}".format(rag_context)},
                {"role": "user", "content": "When was the first superbowl?"},
            ],
        )
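
    # Optionally print the model's response to confirm the request succeeded.
    # This print is an illustrative addition; it is not required for the evaluation.
    print(completion.choices[0].message.content)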
    
  3. Run the script with the Ragas Faithfulness evaluation enabled (see the note after this list for an alternative way to pass the Datadog settings in code):

    DD_LLMOBS_EVALUATORS=ragas_faithfulness DD_ENV=dev DD_API_KEY=<YOUR-DD-API-KEY> DD_SITE=datadoghq.com python quickstart.py
    
  4. View your results in Datadog at https://<YOUR-DATADOG-SITE-URL>/llm/traces?query=%40ml_app%3Atest-rag-app.
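
As an alternative to setting DD_API_KEY, DD_SITE, and DD_ENV on the command line in step 3, LLMObs.enable() also accepts the API key, site, and environment as arguments. The snippet below is a minimal sketch of that approach; it assumes DD_LLMOBS_EVALUATORS=ragas_faithfulness is still set in the environment when the script runs, since the evaluator selection is read from that variable.

    import os
    from ddtrace.llmobs import LLMObs

    # Configure Datadog in code instead of on the command line (sketch only).
    LLMObs.enable(
        ml_app="test-rag-app",
        api_key=os.environ.get("DD_API_KEY"),
        site="datadoghq.com",
        env="dev",
        agentless_enabled=True,
    )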
