| Metric | Description | Metric Type | Tags |
|---|---|---|---|
| ml_obs.span.llm.input.tokens | Number of tokens in the input sent to the LLM | Distribution | env, error, ml_app, model_name, model_provider, service, version |
| ml_obs.span.llm.output.tokens | Number of tokens in the output returned by the LLM | Distribution | env, error, ml_app, model_name, model_provider, service, version |
| ml_obs.span.llm.prompt.tokens | Number of tokens used in the prompt | Distribution | env, error, ml_app, model_name, model_provider, service, version |
| ml_obs.span.llm.completion.tokens | Number of tokens generated as a completion during the span | Distribution | env, error, ml_app, model_name, model_provider, service, version |
| ml_obs.span.llm.total.tokens | Total number of tokens consumed during the span (input + output) | Distribution | env, error, ml_app, model_name, model_provider, service, version |
| ml_obs.span.llm.input.characters | Number of characters in the input sent to the LLM | Distribution | env, error, ml_app, model_name, model_provider, service, version |
| ml_obs.span.llm.output.characters | Number of characters in the output returned by the LLM | Distribution | env, error, ml_app, model_name, model_provider, service, version |
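
These distributions are derived from the token counts recorded on LLM spans, and the tags listed above (ml_app, model_name, model_provider, and so on) come from the span's metadata. For manually instrumented spans, the counts can be attached with the LLM Observability SDK. The following is a minimal sketch assuming the Python ddtrace SDK; the app name, model name, prompt, and token counts are hypothetical placeholders.

```python
from ddtrace.llmobs import LLMObs

# ml_app becomes the ml_app tag on the emitted metrics (hypothetical name)
LLMObs.enable(ml_app="example-app")

def call_model(prompt: str) -> str:
    # model_name and model_provider become tags on the distributions
    with LLMObs.llm(model_name="example-model", model_provider="example-provider") as span:
        completion = "..."  # hypothetical model call goes here
        LLMObs.annotate(
            span=span,
            input_data=[{"role": "user", "content": prompt}],
            output_data=[{"role": "assistant", "content": completion}],
            # These keys feed ml_obs.span.llm.input.tokens, .output.tokens,
            # and .total.tokens for this span (example values).
            metrics={"input_tokens": 12, "output_tokens": 5, "total_tokens": 17},
        )
        return completion
```

Because the metrics are distributions, percentile aggregations (p50, p95, and so on) can be computed per tag combination, for example to compare token usage across model_name values within one ml_app.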