Zero-shot and Few-shot Learning

Mastering the art of context-based prompting.

Prompting Without Training

Zero-shot vs Few-shot

  • Zero-shot: instruction only ("Classify as pos/neg"). ⚡ Fast, less accurate.
  • One-shot: one example ("Great!" → Positive), then the query. ⚖️ Balanced.
  • Few-shot (3-5 examples): "Great!" → Positive, "Bad" → Negative, "Okay" → Neutral. 🎯 Most accurate.

One of the most powerful capabilities of LLMs is their ability to perform tasks they were never explicitly trained on, using only instructions or examples in the prompt.

Zero-shot Prompting

Give the model only a task description, with no examples. This works well for common tasks the model is likely to have encountered during pretraining.

Few-shot Prompting

Provide 2-5 input-output example pairs before your actual query. The model infers the pattern from the examples and applies it to the new input (in-context learning).

Best Practices

  • Use consistent formatting across all examples
  • Include edge cases in your examples
  • Order examples from simple to complex
  • Match the diversity of examples to your expected input distribution
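The practices above can be sketched as a small prompt builder that enforces one consistent "Text: ... -> label" format across every example. The helper name `build_few_shot_prompt` is illustrative, not a library API.

```python
def build_few_shot_prompt(instruction, examples, query):
    """Join an instruction, consistently formatted examples, and the final query."""
    lines = [instruction, ""]
    for text, label in examples:
        # Every example uses the exact same template
        lines.append(f'Text: "{text}" -> {label}')
    # The query ends with the same arrow so the model completes the label
    lines.append(f'Text: "{query}" -> ')
    return "\n".join(lines)

examples = [
    ("I love this!", "positive"),
    ("Terrible experience.", "negative"),
    ("It's okay.", "neutral"),  # include the ambiguous/edge case, not just extremes
]
prompt = build_few_shot_prompt(
    "Classify the sentiment as positive, negative, or neutral.",
    examples,
    "Best purchase I've ever made!",
)
print(prompt)
```

Keeping the template in one place makes it easy to swap examples in and out while guaranteeing they never drift out of format.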

Code Example

Zero-shot gives just the task. Few-shot provides examples first so the model understands the expected format.

```python
from openai import OpenAI

client = OpenAI()

# Zero-shot: task description only, no examples
zero_shot = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "Classify the sentiment: 'This product is amazing!' -> "}
    ],
)

# Few-shot: examples establish the pattern before the real query
few_shot = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": """Classify the sentiment as positive, negative, or neutral.

Text: "I love this!" -> positive
Text: "Terrible experience." -> negative
Text: "It's okay." -> neutral
Text: "Best purchase I've ever made!" -> """}
    ],
)
```

Use Cases

  • Text classification without training data
  • Data extraction from unstructured text
  • Format conversion (JSON, CSV, etc.)
  • Translation between domain-specific formats
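For format conversion, the few-shot pattern is the same: show a couple of completed conversions, then leave the last one open. A minimal sketch (the column names and rows here are invented for illustration) that also sanity-checks the example outputs before the prompt is ever sent to a model:

```python
import json

# Few-shot prompt: two completed CSV-to-JSON conversions, then an open one
prompt = """Convert each CSV row (name,age,city) to a JSON object.

Row: Alice,30,Paris -> {"name": "Alice", "age": 30, "city": "Paris"}
Row: Bob,25,Lima -> {"name": "Bob", "age": 25, "city": "Lima"}
Row: Carol,41,Oslo -> """

# Sanity check: every completed "->" target in the prompt must itself be
# valid JSON, otherwise the model will imitate the broken format.
for line in prompt.splitlines():
    if "-> {" in line:
        record = json.loads(line.split("-> ", 1)[1])
        print(record["name"])
```

Validating your own examples catches the "contradictory examples" mistake below before it reaches the model.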

Common Mistakes

  • Using too many examples (more than 5), which wastes tokens without improving quality
  • Including examples that contradict each other, which confuses the model
  • Not testing zero-shot first; it sometimes works just as well

Interview Insight

Relevance: High. Zero-shot and few-shot prompting are the most common prompting techniques.
