This paper by Orgad et al. delves into the internal workings of Large Language Models (LLMs) to understand how errors, often termed "hallucinations," are represented. Moving beyond purely external, behavioral analyses, the authors use probing techniques on the model's internal representations.
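To make the probing idea concrete, here is a minimal sketch of what such a setup might look like: a linear classifier trained on hidden states extracted from a causal LM to predict whether a generated answer was correct. The model name, layer index, token position, and the toy labeled examples below are illustrative assumptions, not the paper's exact protocol.

```python
# Minimal probing sketch (assumptions: HuggingFace causal LM, a chosen middle
# layer, and a small labeled set of (question, answer, correct?) examples).
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from sklearn.linear_model import LogisticRegression

MODEL_NAME = "mistralai/Mistral-7B-Instruct-v0.2"  # placeholder model choice
LAYER = 16                                          # placeholder middle layer

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME, torch_dtype=torch.float16, device_map="auto"
)
model.eval()

def answer_representation(question: str, answer: str) -> torch.Tensor:
    """Hidden state at the last answer token, taken from the chosen layer."""
    text = f"Question: {question}\nAnswer: {answer}"
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    with torch.no_grad():
        outputs = model(**inputs, output_hidden_states=True)
    # hidden_states is a tuple of (num_layers + 1) tensors of shape [1, seq, dim]
    return outputs.hidden_states[LAYER][0, -1].float().cpu()

# Toy labels: 1 = the generated answer was correct, 0 = it was a hallucination.
examples = [
    ("What is the capital of France?", "Paris", 1),
    ("What is the capital of France?", "Lyon", 0),
    # ... in practice, many more (question, generated answer, correctness) triples
]

X = torch.stack([answer_representation(q, a) for q, a, _ in examples]).numpy()
y = [label for _, _, label in examples]

# Fit a linear probe; a real experiment would evaluate on a held-out split.
probe = LogisticRegression(max_iter=1000).fit(X, y)
print("Probe accuracy (train):", probe.score(X, y))
```

If a simple linear probe like this can separate correct from incorrect generations, that is evidence the model's internal states encode information about its own errors, which is the kind of signal the probing analysis in the paper is designed to surface.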