In a Few Words, What is Few-shot Learning?
Imagine that you went to high school in a place where everyone was rich except for you. Your parents had very mediocre jobs, enough to put food on the table and pay for school fees but nothing else.
Further imagine that you didn't want anyone else to know this. How would you cover it up?
First, you would focus your spending on the appearances that cost the least. Buying a car would be out of the question. Having a birthday party at your house would give you away. But you might be able to string together enough odd jobs to buy three really nice sets of clothes. If you were careful, you could rotate these in just such a way that it looked like you had a full closet.
This is essentially what few-shot learning is doing. It's for those times when you simply don't have enough data to fine-tune your LLM, but you can come up with a few examples of how you want it to respond. In these cases, you can get surprisingly good results by pre-feeding those examples to the LLM as part of the prompt, which biases its responses much as if it had actually been trained on them.
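Here's a minimal sketch of what that looks like in practice, using the OpenAI Python client. The model name and the toy review-labeling task are just placeholders for illustration; the point is that each example gets packed into the prompt as a user/assistant pair ahead of the real input.

```python
# A minimal few-shot sketch: a handful of example input/output pairs are
# placed in the prompt ahead of the real question, so the model imitates them.
# Assumes the OpenAI Python client; the model name and the sentiment task
# are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The "few shots": examples of how we want the model to respond.
few_shot_examples = [
    ("The fries were cold and the cashier ignored me.", "negative"),
    ("Best burger I've had all year, and the staff was lovely.", "positive"),
    ("It was food. I ate it. Nothing to report.", "neutral"),
]

messages = [
    {"role": "system",
     "content": "Classify the customer review as positive, negative, or neutral."}
]
# Pre-feed each example as a user/assistant turn, as if the model had already answered it.
for review, label in few_shot_examples:
    messages.append({"role": "user", "content": review})
    messages.append({"role": "assistant", "content": label})

# Now ask about the input we actually care about.
messages.append({"role": "user", "content": "The shake machine was broken again, shocker."})

response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)  # expected: "negative"
```

No fine-tuning happens here; the three examples simply ride along in every request, steering the model toward the format and tone you demonstrated.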
To work through a real-life example, check out my post Sassy Food Service Bot.