Logprobs
Logprobs is short for log probabilities: the probability the model assigns to each token it generates, expressed on a logarithmic scale. When enabled, the model returns, for each generated token, a list of the most likely candidate tokens along with their log probabilities.
How does it work
For each generated token, the model computes a probability distribution over its entire vocabulary. When logprobs is enabled, the response includes the top N most likely candidates (e.g., the top 5) and their log probabilities. You can use this data to understand why the model chose a particular token, or how close it came to choosing something else.
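To make the numbers concrete, here is a small sketch of converting a token's top-N logprobs back into plain probabilities. The token strings and values below are illustrative stand-ins, not real model output:

```python
import math

# Hypothetical top-5 logprobs for the next token after "The sky is"
# (illustrative values, not real model output).
top_logprobs = {
    " blue": -0.18,
    " clear": -2.31,
    " dark": -3.05,
    " cloudy": -3.72,
    " falling": -5.10,
}

# A log probability converts back to a plain probability via e^(logprob).
probs = {tok: math.exp(lp) for tok, lp in top_logprobs.items()}

for tok, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{tok!r}: {p:.3f}")
```

Note that the highest logprob (" blue" at -0.18, i.e. a probability around 0.84) dominates here, while the alternatives tell you what the model "almost said".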
When to use logprobs
- When you want to analyze model behavior or confidence
- When doing token-level debugging or prompt evaluation
- When building interactive UIs, autocomplete, or token visualization tools
- When you want to log or audit what the model "almost said"
How to use logprobs
- Set logprobs to an integer in your API call
- The response will include the top N alternatives and their log probabilities for each generated token
- Convert logprobs to probabilities if needed (e.g., probability = e^(logprob))
- Visualize, log, or analyze the data for each token
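The last two steps can be sketched as follows. The payload below is a simulated stand-in for an API response (one dict of token alternatives per generated position); the shape and values are assumptions for illustration, not output from a live call:

```python
import math

# Simulated per-token data: one {token: logprob} dict of alternatives
# for each generated token (illustrative values, not real model output).
top_logprobs_per_token = [
    {" Paris": -0.05, " London": -3.40, " Lyon": -4.10},
    {".": -0.30, ",": -1.80, "!": -2.90},
]

# For each position, pick the most likely candidate and report its
# probability (logprob converted back via math.exp).
for position, alternatives in enumerate(top_logprobs_per_token):
    chosen, lp = max(alternatives.items(), key=lambda kv: kv[1])
    print(f"token {position}: {chosen!r} with probability {math.exp(lp):.3f}")
```

The same loop is the natural place to hang visualization or logging: each iteration gives you one token's winner and its runners-up.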
Tips
- The higher the logprob, the more confident the model is (logprobs are at most 0, so values closer to 0 mean higher confidence)
- Use this to detect low-confidence answers, or see if your prompt leads to uncertainty
- Combine with echo: true if you want to see logprobs for both the prompt and the completion
- Not all APIs expose logprobs in chat mode; check whether your model supports it
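One way to act on the confidence tip above is to flag tokens whose probability falls below a threshold. This is a minimal sketch; the per-token logprobs and the 0.5 cutoff are arbitrary illustrative choices:

```python
import math

# Hypothetical per-token logprobs for a generated answer
# (illustrative values only).
token_logprobs = [-0.02, -0.10, -2.85, -0.07, -1.95]

# Flag tokens whose probability falls below a chosen threshold.
# probability < 0.5 corresponds to logprob < ln(0.5) ~ -0.693.
THRESHOLD = 0.5

uncertain = [
    (i, lp) for i, lp in enumerate(token_logprobs)
    if math.exp(lp) < THRESHOLD
]
print(f"{len(uncertain)} low-confidence tokens: {uncertain}")
```

Runs of flagged positions like these are a useful signal that the prompt left the model guessing at those points.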