
Zero-Shot and Few-Shot Learning

AI capabilities that allow models to perform tasks with no examples (zero-shot) or just a handful of examples (few-shot).

Zero-Shot Learning

Zero-shot learning refers to a model's ability to perform tasks it was never explicitly trained on, without any task-specific examples. A large language model can classify sentiment, translate languages, or summarize documents simply by understanding the instruction, leveraging the broad knowledge acquired during pre-training. This capability emerges in sufficiently large models and eliminates the need for task-specific training data in many scenarios.

For example, a model can classify customer emails into categories for which it has never seen labeled examples, given only the category names and descriptions. This makes zero-shot learning especially valuable for enterprise applications where labeled training data is scarce or expensive to create.
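The email-classification scenario above can be sketched as a prompt-construction helper. This is a minimal illustration, not a specific provider's API: the function name, category names, and descriptions are all hypothetical, and the resulting prompt would be sent to any instruction-following language model.

```python
def build_zero_shot_prompt(text, categories):
    """Build a zero-shot classification prompt: only category names and
    descriptions are supplied, with no labeled examples."""
    # One line per category, e.g. "- billing: questions about invoices..."
    category_lines = "\n".join(
        f"- {name}: {description}" for name, description in categories.items()
    )
    return (
        "Classify the customer email into exactly one category.\n"
        f"Categories:\n{category_lines}\n\n"
        f"Email: {text}\n"
        "Category:"
    )

# Hypothetical categories with short descriptions.
categories = {
    "billing": "questions about invoices, charges, or refunds",
    "technical": "product errors, bugs, or setup problems",
    "sales": "pricing, upgrades, or new purchases",
}

prompt = build_zero_shot_prompt("I was charged twice this month.", categories)
print(prompt)
```

The key point is that the prompt contains no demonstrations: the model must rely entirely on its pre-trained understanding of the category descriptions.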

Few-Shot Learning

Few-shot learning builds on zero-shot by providing a small number of examples (typically 1-10) directly in the prompt. These examples demonstrate the desired input-output pattern, helping the model infer the exact format and style expected. Few-shot prompting often yields dramatically higher accuracy than zero-shot, particularly for tasks requiring specific formatting or domain conventions.
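The input-output demonstration pattern can be sketched as follows. Again this is a hypothetical helper, here for an intent-detection task; the example texts and labels are invented for illustration, and the assembled prompt would be passed to a language model.

```python
def build_few_shot_prompt(examples, query):
    """Build a few-shot prompt: a handful of input-output demonstrations
    followed by the new input, so the model can infer the pattern."""
    # Each demonstration shows the exact format the model should reproduce.
    demos = "\n\n".join(
        f"Text: {text}\nIntent: {label}" for text, label in examples
    )
    return f"{demos}\n\nText: {query}\nIntent:"

# Hypothetical labeled examples (typically 1-10 in a few-shot prompt).
examples = [
    ("How do I reset my password?", "account_support"),
    ("Do you offer an annual plan?", "pricing"),
    ("The app crashes on startup.", "bug_report"),
]

prompt = build_few_shot_prompt(examples, "Can I pay yearly instead of monthly?")
print(prompt)
```

Because the prompt ends mid-pattern (after `Intent:`), the model's most natural continuation is a label in the same style as the demonstrations.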

Business Applications

These capabilities transform how enterprises deploy AI. Instead of collecting thousands of labeled examples and training custom models, organizations can prototype and deploy AI solutions in hours using carefully crafted prompts. Common applications include document classification, data extraction, content categorization, and intent detection. The trade-off is that purpose-built fine-tuned models typically outperform few-shot approaches for high-volume, well-defined tasks.