Testing ChatGPT on Context Retention Across Turns

George Perdikas
3 min read

Objective

The goal of this test was to assess ChatGPT’s ability to retain context across multiple turns in a conversation. Specifically, we wanted to see whether it could follow a thread of related questions and answers without losing track or repeating itself.


Methodology

I played a simple "Guess Who" style game with ChatGPT.

Here’s how it worked: I picked a famous historical figure and asked ChatGPT to identify them by asking only yes-or-no questions. This tested its ability to:

  • Stay within the rules

  • Track previous answers

  • Use logic and memory to narrow down the possibilities


Expected behavior

ChatGPT should:

  • Follow the yes/no rule without breaking the format

  • Avoid repeating questions

  • Use answers from previous turns to ask smarter, more relevant questions

  • Eventually guess the correct person
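The first two of these checks can be verified mechanically rather than by eye. Below is a minimal sketch of such a check — a hypothetical harness I did not use for this test — that replays a model's recorded questions and flags any that break the question format or repeat an earlier question:

```python
# Hypothetical harness (not the actual test setup): replays a recorded
# list of the model's questions and reports rule violations.

transcript = [
    "Is the person you're thinking of a man?",
    "Did this man live after the year 1500?",
    "Is this man known for being a ruler or king?",
]

def check_transcript(questions):
    """Return a list of rule violations found in the model's questions."""
    violations = []
    seen = set()
    for q in questions:
        normalized = q.strip().lower()
        # Rule: every turn from the model should be a question.
        if not q.strip().endswith("?"):
            violations.append(f"Not a question: {q!r}")
        # Rule: no question should be asked twice.
        if normalized in seen:
            violations.append(f"Repeated question: {q!r}")
        seen.add(normalized)
    return violations

print(check_transcript(transcript))  # [] — no violations
```

The logical-narrowing behavior (the fourth check) still needs a human judge, since whether a question is "smarter" depends on the answers so far.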

The Interaction

Me: Hey! We'll play a game! I have a historic person in my mind. You will ask me questions that can only be answered with 'Yes' or 'No'. Shall we start?

ChatGPT: Absolutely, let's play! I'm ready to guess your historic person. Is the person you're thinking of a man?

Me: Yes

ChatGPT: Great! Did this man live after the year 1500?

Me: No

ChatGPT: Is this man known for being a ruler or king?

Me: No

ChatGPT: Was this man known primarily for contributions to science or philosophy?

Me: No

ChatGPT: Is this man famous for his role in religion or spirituality?

Me: No

ChatGPT: Was this man a military leader or known for involvement in wars or battles?

Me: Yes

ChatGPT: Did this man live in ancient Greece or Rome?

Me: Yes

ChatGPT: Was this man a general during the Roman Republic era?

Me: No

ChatGPT: Was this man a famous military leader from ancient Greece?

Me: Yes

ChatGPT: Was this man involved in the Peloponnesian War?

Me: No

ChatGPT: Is this man known for his role in the conquests of Alexander the Great?

Me: No

ChatGPT: Is this man known for fighting in the Persian Wars?

Me: Yes

ChatGPT: Is this man known for leading Greek forces at the Battle of Marathon?

Me: Yes

ChatGPT: I think I’ve got it! Is the person you’re thinking of Miltiades?

Me: Yes!! Well done!


Result

ChatGPT successfully followed the rules, remembered all prior answers, and used deductive reasoning to reach the correct answer — Miltiades — without getting off track or repeating questions.
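One point worth noting for anyone reproducing this test programmatically: chat APIs are typically stateless, so "context retention" in practice means the client resends the full message history on every turn. A minimal sketch of that pattern, using OpenAI-style role/content messages (the structure here is illustrative, not the code behind this test):

```python
# Sketch of the multi-turn pattern behind tests like this one.
# The model never "remembers" anything server-side; the client keeps
# the whole conversation and sends it again with each new request.

messages = [{"role": "system", "content": "Ask only yes/no questions."}]

def add_turn(role, content):
    """Append one turn; the full list would accompany the next API call."""
    messages.append({"role": role, "content": content})

add_turn("user", "I have a historic person in mind. Shall we start?")
add_turn("assistant", "Is the person you're thinking of a man?")
add_turn("user", "Yes")

# Every prior answer stays available to the model on the next call.
print(len(messages))  # 4
```

In the ChatGPT web interface this bookkeeping is handled for you, which is exactly the capability this game exercises.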


Conclusion

This was a strong performance. ChatGPT showed that it can:

  • Maintain context across turns

  • Build logical questions based on prior answers

  • Reach accurate conclusions using memory and reasoning

This is exactly the kind of behavior we want to see when testing LLM context tracking.
