
Zero-Shot and Few-Shot Learning

The ability of AI models to perform tasks with no examples (zero-shot) or with only a few examples (few-shot) provided in the prompt.

Zero-Shot Learning

Zero-shot learning refers to a model's ability to perform tasks it was never explicitly trained on, without any task-specific examples. A large language model can classify sentiment, translate languages, or summarize documents simply by understanding the instruction, leveraging the broad knowledge acquired during pre-training. This capability emerges in sufficiently large models and eliminates the need for task-specific training data in many scenarios.
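A zero-shot prompt contains only the instruction itself, with no worked examples. The following minimal sketch illustrates this for sentiment classification; the task wording and labels are illustrative, not tied to any particular model or API.

```python
def zero_shot_prompt(text: str) -> str:
    """Build a zero-shot sentiment-classification prompt.

    The model receives only an instruction and the input to classify;
    no labeled examples are included.
    """
    return (
        "Classify the sentiment of the following review as "
        "positive, negative, or neutral.\n\n"
        f"Review: {text}\n"
        "Sentiment:"
    )

prompt = zero_shot_prompt("The battery lasts all day and the screen is great.")
print(prompt)
```

The resulting string would be sent as-is to a language model; the model is expected to complete the final `Sentiment:` line using only its pre-trained knowledge.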

Business Applications

Zero-shot learning lets a model classify customer emails, for example, into categories it has never seen labeled examples of, given only the category names and descriptions. This makes it extraordinarily valuable for enterprise applications where labeled training data is scarce or expensive to create.

Few-Shot Learning

Few-shot learning improves on zero-shot by providing a small number of examples (typically 1-10) directly in the prompt. These examples demonstrate the desired input-output pattern, helping the model understand the exact format and style expected. Few-shot prompting often dramatically improves accuracy over zero-shot approaches, particularly for tasks requiring specific formatting or domain conventions.
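The few-shot pattern described above can be sketched as a prompt builder that prepends a handful of labeled examples before the new input. The email-categorization task and its labels here are hypothetical, chosen only to illustrate the input-output format.

```python
def few_shot_prompt(examples: list[tuple[str, str]], text: str) -> str:
    """Build a few-shot classification prompt.

    Each (input, label) pair is rendered in the same format the model
    should reproduce for the final, unlabeled input.
    """
    lines = ["Classify each support email as Billing, Technical, or Other.\n"]
    for email, label in examples:
        lines.append(f"Email: {email}\nCategory: {label}\n")
    # The final entry is left unlabeled for the model to complete.
    lines.append(f"Email: {text}\nCategory:")
    return "\n".join(lines)

examples = [
    ("I was charged twice this month.", "Billing"),
    ("The app crashes when I open settings.", "Technical"),
]
print(few_shot_prompt(examples, "How do I update my payment card?"))
```

Because the examples demonstrate both the label set and the exact output format, the model's completion tends to match the `Category: <label>` pattern far more reliably than with an instruction alone.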