
Temperature and Top-P Sampling

Hyperparameters that control randomness and variation in AI model output, balancing creativity against precision.

Understanding Temperature

Temperature is a parameter that controls the randomness of a language model's output by scaling the probability distribution over possible next tokens. A temperature of 0 makes the model deterministic, always choosing the most probable token. Higher temperatures (0.7-1.0) increase randomness, producing more diverse and creative outputs. Values above 1.0 make outputs increasingly random and potentially incoherent.
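This scaling can be sketched in a few lines of Python. The snippet below (an illustrative sketch, not any particular model's implementation) divides raw logits by the temperature before applying softmax, with temperature 0 treated as a deterministic argmax:

```python
import math

def apply_temperature(logits, temperature):
    """Scale logits by temperature and return a softmax distribution.

    Temperature 0 is deterministic (all mass on the top token); higher
    values flatten the distribution, increasing randomness.
    """
    if temperature == 0:
        # Deterministic: always pick the most probable token.
        best = max(range(len(logits)), key=lambda i: logits[i])
        return [1.0 if i == best else 0.0 for i in range(len(logits))]
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for four candidate tokens
logits = [2.0, 1.0, 0.5, 0.1]
print(apply_temperature(logits, 0))    # deterministic: [1.0, 0.0, 0.0, 0.0]
print(apply_temperature(logits, 0.7))  # sharper distribution
print(apply_temperature(logits, 1.5))  # flatter distribution, more random
```

Note that a lower temperature concentrates probability on the top token, while a higher one spreads it across alternatives, which is exactly the creativity/precision trade-off described above.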

Top-P (Nucleus) Sampling

Top-P sampling, also called nucleus sampling, offers an alternative approach to controlling output diversity. Instead of scaling all probabilities, Top-P dynamically selects the smallest set of tokens whose cumulative probability exceeds the threshold P. With Top-P of 0.9, the model considers only tokens that together account for 90% of the probability mass, automatically adapting the candidate pool size based on the model's confidence at each step.

Practical Configuration

For enterprise applications, temperature selection depends on the task. Factual question answering, data extraction, and code generation benefit from low temperatures (0-0.3) that prioritize accuracy. Creative writing, brainstorming, and content generation work better with moderate temperatures (0.5-0.8) that balance quality with variety.
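The nucleus-selection step can be illustrated with a short sketch (the probabilities below are hypothetical, and real implementations typically operate on sorted tensors rather than Python lists):

```python
def top_p_filter(probs, p):
    """Return the nucleus: the smallest set of token indices whose
    cumulative probability reaches p, with renormalized probabilities."""
    # Consider tokens in order of descending probability.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    nucleus, cumulative = [], 0.0
    for i in order:
        nucleus.append(i)
        cumulative += probs[i]
        if cumulative >= p:
            break  # stop once the cumulative mass reaches the threshold
    # Renormalize so the kept probabilities sum to 1 before sampling.
    total = sum(probs[i] for i in nucleus)
    return {i: probs[i] / total for i in nucleus}

# Hypothetical next-token distribution over five candidates
probs = [0.5, 0.3, 0.15, 0.04, 0.01]
print(top_p_filter(probs, 0.9))  # keeps tokens 0, 1, 2 (0.5 + 0.3 + 0.15 >= 0.9)
```

When the model is confident, a single token may already exceed the threshold and the nucleus shrinks to one candidate; when it is uncertain, the nucleus widens, which is the adaptive behavior described above.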