Seed
The seed parameter sets a fixed starting point for the model's random number generator, allowing you to get repeatable results from the same prompt and parameters. This is useful for testing, debugging, or keeping outputs consistent.
How it works
LLMs sample tokens randomly whenever the temperature is above zero (settings like top-p shape which tokens are eligible, but the draws themselves are still random). By setting a seed (an integer), you fix the random number generator's starting state, so each time you run the same prompt with the same seed and settings, you'll get the same output. Change the seed, and you'll get a different variation.
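The mechanism can be illustrated with a toy, fully offline stand-in: a seeded random number generator drawing "tokens" from a small vocabulary. This is an analogy for how a seeded sampler behaves, not a real model call.

```python
import random

def sample_tokens(seed, vocab, n=5):
    # Toy stand-in for an LLM's sampler: a seeded RNG drawing
    # "tokens" from a vocabulary. Same seed -> same draws.
    rng = random.Random(seed)
    return [rng.choice(vocab) for _ in range(n)]

vocab = ["the", "cat", "sat", "on", "mat"]

run_a = sample_tokens(seed=42, vocab=vocab)
run_b = sample_tokens(seed=42, vocab=vocab)  # same seed: identical output
run_c = sample_tokens(seed=7, vocab=vocab)   # different seed: a different variation

assert run_a == run_b
```

The key point is that the randomness is not removed, only pinned: each seed selects one concrete sequence of random draws.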
When to use seed
- When you want consistent outputs for testing or demos
- When debugging prompt behavior and need reproducible results
- When generating multiple variations by looping through different seeds
- When building applications where repeatability is important (e.g. story generation, data augmentation)
How to use seed
- Choose an integer, e.g. seed: 42
- Add seed to your API call
- Run the same prompt multiple times. You'll get the same output if all parameters match
- Vary the seed to generate alternate outputs while keeping the same prompt
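The steps above can be sketched with a toy, deterministic stand-in for a seeded model call. `complete` here is a hypothetical helper, not a real SDK function; in a real API the seed would simply be one more parameter in the request.

```python
import random

def complete(prompt: str, seed: int, temperature: float = 1.0) -> str:
    # Hypothetical stand-in for a seeded API call: the output depends
    # only on (prompt, seed, temperature), mimicking reproducible sampling.
    rng = random.Random(f"{prompt}|{seed}|{temperature}")
    words = ["once", "upon", "a", "time", "there", "was", "a", "dragon"]
    return " ".join(rng.choice(words) for _ in range(6))

# Same prompt + same seed + same settings: identical output
assert complete("Tell a story", seed=42) == complete("Tell a story", seed=42)

# Vary the seed to generate alternate outputs for the same prompt
variations = [complete("Tell a story", seed=s) for s in range(3)]
```

Looping over seeds like this is how you get a controlled set of variations: each one is different, but each one is individually repeatable.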
Tips
- Seed only matters when temperature > 0; at temperature 0 the output is already effectively deterministic, so the seed has no effect
- Changing any other parameter (e.g. top_p or prompt wording) will change the result, even with the same seed
- Great for A/B testing different prompt variations
- Not supported by all models or providers; check the documentation to confirm. Even where supported, determinism is often best-effort rather than guaranteed
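The second tip can be demonstrated with a small temperature-sampling sketch (a toy softmax sampler, not any particular model's implementation): the seed pins the random draws, but other parameters reshape the distribution those draws sample from, so changing them can change the output even with the same seed.

```python
import math
import random

def sample(logits, seed, temperature=1.0, n=5):
    # Toy temperature sampling: softmax over scaled logits,
    # drawn with a seeded RNG.
    rng = random.Random(seed)
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    weights = [math.exp(s - m) for s in scaled]
    tokens = list(range(len(logits)))
    return [rng.choices(tokens, weights=weights)[0] for _ in range(n)]

logits = [2.0, 1.0, 0.5, 0.1]

# Same seed, same settings: reproducible
assert sample(logits, seed=42) == sample(logits, seed=42)

# Same seed, different temperature: the distribution changes,
# so the sampled tokens can change too
a = sample(logits, seed=42, temperature=0.5)
b = sample(logits, seed=42, temperature=2.0)
```

This is why reproducing a result requires matching the entire configuration, not just the seed.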