In-Context Learning: A Critical Examination

Gerard Sans

In-Context Learning (ICL) has garnered significant attention in the natural language processing (NLP) community. Initially presented as a remarkable ability of large language models (LLMs) to "learn" from a few examples provided in the input prompt, ICL has sparked both excitement and skepticism. This article argues that the initial framing of ICL, particularly the casual use of the term "learning," is misleading and clashes with fundamental machine learning (ML) principles. We'll explore the issues, examine the evidence, and propose a more cautious and precise approach to understanding this intriguing phenomenon.

The Problem with "Learning"

The core issue lies in the use of the word "learning" to describe ICL. In traditional ML, learning involves modifying a model's internal parameters (weights) based on training data to improve its performance on a given task. ICL, however, involves no such weight updates. The model remains fixed after pre-training, and the observed adaptation to in-context examples is solely due to the interaction between the input and the pre-trained weights. This "learning," if we can even call it that, is entirely ephemeral and context-dependent. Remove the examples from the input, and the "learned" behavior vanishes. This is at odds with any reasonable definition of learning, which implies acquiring knowledge or skills that persist beyond the immediate context.
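To make the contrast concrete, here is a minimal sketch using a toy linear layer as a stand-in for a pre-trained model (the layer, data, and shapes are illustrative assumptions, not how an LLM is actually built): a gradient step permanently changes the weights, while an ICL-style forward pass never touches them.

```python
import torch
import torch.nn as nn

# Toy stand-in for a pre-trained model (illustrative; real LLMs are transformers).
model = nn.Linear(8, 2)
snapshot = model.weight.detach().clone()

# --- Traditional learning: a gradient step permanently updates the weights ---
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(4, 8), torch.randint(0, 2, (4,))
nn.functional.cross_entropy(model(x), y).backward()
optimizer.step()
print(torch.equal(snapshot, model.weight))  # False: the model itself changed

# --- In-context "learning": the weights never move ---
frozen = nn.Linear(8, 2)
snapshot = frozen.weight.detach().clone()
with torch.no_grad():
    _ = frozen(torch.randn(1, 8))  # stands in for [demonstrations + query]
    _ = frozen(torch.randn(1, 8))  # stands in for [query only]
print(torch.equal(snapshot, frozen.weight))  # True: only the input differed
```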

Evidence and Mechanisms

The evidence for ICL is primarily based on observing improved performance on certain tasks when demonstrations are provided in the input. The mechanisms involved are rooted in the standard operations of transformers (a code sketch follows the list):

  • Input and Samples: The input prompt, including the demonstration examples, is tokenized and converted into embeddings.

  • Attention/QKV/MLP: The transformer's attention mechanism (using query, key, and value vectors – QKV) and multilayer perceptron (MLP) layers process the input embeddings, allowing the model to attend to different parts of the input and combine information in complex ways.

  • Latent Space: The activations within the transformer's layers form a latent space representation of the input. The demonstrations influence the activation patterns in this space, shaping how the model processes the query.
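The following stripped-down sketch shows the attention computation these steps describe; the random matrices stand in for pre-trained QKV projections, and the two sequences for tokenized prompts with and without demonstrations (all shapes and values are illustrative assumptions).

```python
import torch

def self_attention(x, W_q, W_k, W_v):
    """Single-head scaled dot-product attention (no masking, for illustration)."""
    Q, K, V = x @ W_q, x @ W_k, x @ W_v
    scores = Q @ K.transpose(-2, -1) / K.shape[-1] ** 0.5
    weights = torch.softmax(scores, dim=-1)  # how much each token attends to the rest
    return weights @ V

d = 16
W_q, W_k, W_v = (torch.randn(d, d) for _ in range(3))  # frozen after "pre-training"
with_demos = torch.randn(10, d)  # embeddings of [demonstrations + query]
query_only = torch.randn(3, d)   # embeddings of [query only]

out_with = self_attention(with_demos, W_q, W_k, W_v)
out_without = self_attention(query_only, W_q, W_k, W_v)
print(out_with.shape, out_without.shape)  # torch.Size([10, 16]) torch.Size([3, 16])
```

The parameters are identical in both calls; the activation patterns differ only because the demonstrations are part of the input sequence.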

However, these mechanisms alone don't explain how the model "learns" from the examples. The observed adaptation is likely due to the input activating and combining pre-existing knowledge encoded in the model's weights, rather than forming entirely new representations.

Strengths and Limitations

Despite the conceptual issues, ICL demonstrates notable strengths in enhancing model performance:

  • Enabling task adaptation without fine-tuning

  • Improving accuracy on specific tasks when provided with relevant examples

  • Facilitating zero-shot and few-shot learning scenarios

The ICL approach also shows promise in allowing models to tackle a wide range of tasks without task-specific training and in adapting to new or modified tasks through prompt engineering.
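As a concrete illustration of adaptation through prompting, here is a minimal sketch of few-shot prompt construction; the sentiment task, the demonstrations, and `llm_complete` are all hypothetical stand-ins for whatever task and completion API is in use.

```python
# Hypothetical few-shot prompt: the "adaptation" lives entirely in this string.
demos = [
    ("The plot was gripping from start to finish.", "positive"),
    ("I walked out halfway through.", "negative"),
]
query = "The soundtrack alone was worth the ticket."

prompt = "\n".join(f"Review: {text}\nSentiment: {label}" for text, label in demos)
prompt += f"\nReview: {query}\nSentiment:"

print(prompt)
# response = llm_complete(prompt)  # hypothetical API call; the weights stay frozen
```

Delete the demonstrations from the string and the "adapted" behavior disappears with them; nothing has been retained by the model.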

However, the lack of weight updates, the ephemeral nature of the adaptation, and the absence of a clear mechanism for acquiring new knowledge raise serious concerns. Furthermore, the casual use of "learning" creates misconceptions and hinders a deeper understanding of the actual mechanisms at play.

Current Status and Concerns

While the influence of in-context examples on performance is undeniable, the claim that the model "learns" from them is unsubstantiated. The research community needs to be more critical and precise in its terminology: avoid speculative terms like "learning" or "reasoning" without clear definitions and supporting evidence, and focus on investigating the actual mechanisms involved rather than relying on catchy but misleading labels.

Closing Thoughts and a Call for Clarity

Instead of "In-Context Learning," a more accurate term would be "In-Context Adaptation" or "Behavior Steering." This emphasizes the observed behavioral changes without implying the acquisition of new knowledge. Similarly, "prompting" might be a more appropriate term than "teaching."

By adopting a more rigorous and precise approach to understanding and describing ICL, we can foster clearer communication, more effective research, and better pedagogy, ensuring that the public's understanding of AI capabilities in natural language processing is grounded in sound scientific principles.

