The Limits of AI in Education: Lessons from a Spanish Tutor Experiment

Gerard Sans

Artificial Intelligence has made considerable advances across multiple sectors, including education. As we push the boundaries of AI's capabilities, it is imperative to understand both its strengths and limitations to determine how best to integrate it into complex settings like the classroom. This article delves into the readiness of AI in educational environments by examining a theoretical Spanish tutoring experiment. By treating AI as a human-like tutor, we gain valuable insights into where current AI excels and where it falls short in delivering a holistic learning experience.

The Experiment: AI as a Spanish Tutor

Setting the Stage

Our goal was to use AI to run and manage hour-long weekly Spanish tutoring sessions for children over a three-month period. We set out to evaluate how effectively AI could act as a language tutor, supporting students in their homework and language acquisition, while also testing its ability to mimic key functions of a human educator.

Key Components of the Experiment

  1. Curriculum Development: Designing a detailed table of contents and structuring hourly sessions to ensure a consistent flow of content.

  2. AI Configuration: Defining system prompts, constraints, and instructional tone to maintain consistent and coherent interactions.

  3. Session Structure: Organising each lesson with a clear framework—titles, topics, theory explanation, practical exercises, and session summaries.

  4. Progress Tracking: Creating high-level progress reports for parental oversight and analysis.
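To make the AI Configuration step (component 2) concrete, here is a minimal sketch of a system-prompt template for such a tutor. The wording, function name, and session fields are hypothetical illustrations, not the exact prompt used in the experiment; they simply mirror the session structure described above (title, topic, theory, exercises, summary).

```python
# Hypothetical system-prompt template for an AI Spanish tutor.
# The fields mirror the experiment's session structure.

def build_system_prompt(session_number: int, topic: str) -> str:
    """Assemble the instructions sent to the model before each lesson."""
    return (
        "You are a patient Spanish tutor for children.\n"
        f"This is session {session_number} of a weekly course.\n"
        f"Today's topic: {topic}.\n"
        "Structure the hour as: title, topic overview, theory explanation, "
        "practical exercises, and a closing summary.\n"
        "Keep explanations short, use simple English, and correct mistakes gently."
    )

prompt = build_system_prompt(3, "regular -ar verbs in the present tense")
print(prompt)
```

A template like this keeps the instructional tone and lesson framework consistent across sessions, which is exactly what the configuration step was for.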

Strengths of AI in Educational Settings

1. Content Generation and Structuring

AI demonstrates a remarkable capacity to generate structured content efficiently. Its strengths include:

  • Crafting comprehensive course outlines

  • Breaking down complex linguistic concepts into smaller, manageable segments

  • Producing a variety of practice exercises tailored to specific grammatical or vocabulary challenges
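The kind of structured drill generation described above can be sketched in plain Python. In the experiment the model itself produced the exercises; this stand-in simply shows the shape of the output (fill-in-the-blank prompts with answers), with a hypothetical one-verb table as data.

```python
# Illustrative fill-in-the-blank drill generator.
# CONJUGATIONS is a tiny hypothetical data table standing in for
# model-generated content.

CONJUGATIONS = {"hablar": {"yo": "hablo", "tú": "hablas", "él": "habla"}}

def make_drills(verb: str) -> list[tuple[str, str]]:
    """Return (prompt, answer) pairs for one verb's present-tense forms."""
    return [(f"{pron} ___ ({verb})", form)
            for pron, form in CONJUGATIONS[verb].items()]

for drill, answer in make_drills("hablar"):
    print(drill, "->", answer)
```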

2. Interactive Language Practice

The AI tutor proved especially adept in:

  • Facilitating dynamic conversation practice, simulating real-world exchanges

  • Providing immediate, accurate feedback on grammar, pronunciation, and vocabulary usage

  • Offering concise explanations for difficult language concepts, significantly enhancing the student's immediate understanding

3. Customisable Learning Experience

While AI can deliver a single well-structured session, it struggles to remain flexible across sessions, especially when:

  • Adapting to individual learning styles dynamically within or across lessons

  • Modifying exercises based on real-time student performance

  • Monitoring and refining lesson plans over time to suit evolving learner needs

Limitations and Challenges

1. Memory, Context, and Task Constraints

Context Window Limits

  • The AI cannot retain information beyond a fixed token threshold, which breaks long-term learning continuity: each new conversation starts without the previous context or settings.

Lack of Persistent Memory and Task Awareness

  • AI lacks the ability to maintain knowledge of student progress or the overall learning objective across sessions. This becomes particularly problematic because the AI operates in a task-based context rather than being aware of a broader goal or structured learning pathway.

  • Each individual task or subtask is treated as an isolated event, requiring parents to manually reset the scope and context for every session. This results in an inconsistent learning experience, as AI cannot autonomously track or switch between tasks that are part of a larger goal.
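In practice, this manual reset amounted to re-injecting a progress summary at the start of every session. A small sketch of that bootstrap step is below; the note texts and function name are illustrative assumptions, not artefacts from the experiment.

```python
# Because the model treats each session as an isolated task, a parent (or a
# small helper script) must re-establish scope every time. This sketch
# prepends prior-session notes so the model regains the lost context.

def bootstrap_session(base_prompt: str, progress_notes: list[str]) -> str:
    """Prepend prior-session notes to the new session's prompt."""
    if not progress_notes:
        return base_prompt
    recap = "Previously covered:\n" + "\n".join(f"- {n}" for n in progress_notes)
    return recap + "\n\n" + base_prompt

notes = ["Session 1: greetings and numbers", "Session 2: ser vs. estar"]
full_prompt = bootstrap_session("Today: regular -ar verbs.", notes)
print(full_prompt)
```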

Hierarchical Goal Awareness

  • AI does not function with a hierarchical understanding of tasks and goals. This means that for every new task or subtask, parents or supervisors must keep track of the overarching educational objectives, manually setting parameters to ensure consistency. The AI has no mechanism to understand the broader educational goal and manage subtasks accordingly.

  • The lack of goal-awareness significantly impedes its ability to operate independently or manage long-term learning journeys, as it cannot automatically adapt to different tasks within a structured, long-term plan.

2. Personalisation and Progress Tracking

Limited Student-Specific Adaptability and Context Switching

  • Due to its task-based nature, the AI has difficulty adapting lessons across sessions within a cohesive learning framework. It cannot automatically adjust between related tasks or sub-goals in a way that aligns with a broader educational plan. This is particularly problematic for parents or educators who must constantly reset the context for each session to ensure alignment with the student's progress or goals.

  • The lack of awareness about broader learning objectives forces manual intervention not only to personalise learning but also to keep track of how individual tasks relate to a larger goal. The AI cannot autonomously manage or switch between tasks in a structured sequence, which diminishes its effectiveness for long-term educational use.

Implications for AI in Education

Current State: A Complementary Tool

The experiment highlights that while AI shows considerable promise in certain aspects of education, such as content generation, interactive practice, and immediate feedback, it remains far from replacing human educators. One of the most critical limitations is AI's inability to manage multiple tasks or sub-goals within a cohesive framework. AI currently operates in a task-based context, lacking the awareness to handle hierarchical goals and sub-tasks autonomously. It is best positioned as:

  • A supplementary resource for short practice sessions

  • An on-demand assistant for answering specific questions

  • A tool for generating additional exercises or learning materials, though heavily reliant on human oversight for setting and managing broader goals

Technical Analysis: Underlying Limitations of Transformer Models (2017 Onwards)

(Technical disclaimer: This section is for readers interested in the technical foundations behind the limitations of current AI systems, particularly those based on transformer models like GPT. These concepts may be more technical than the rest of the article.)

The transformer architecture, introduced in the 2017 paper "Attention Is All You Need", revolutionised AI by improving its ability to process and generate language. However, the limitations observed in the AI Spanish tutor are directly related to the underlying mechanisms of this architecture. One significant limitation, the task-based context issue, prevents AI from understanding and switching between different tasks that are part of a broader hierarchical goal. Below, we explore this and other limitations and the technical challenges associated with overcoming them.

1. Context Window Constraints

Transformer models use an attention mechanism that assigns "weight" to different parts of the input, allowing the model to understand relationships within text. However, transformers can only process a fixed number of tokens (words or sub-words) at a time—this is known as the context window.

Limitation: In our experiment, AI struggled to maintain context between sessions or retain details from earlier parts of a conversation, a direct consequence of the limited context window. Once the window's limit is reached (usually a few thousand tokens), the model loses track of earlier information.

Challenges: Expanding the context window is non-trivial. Paid tiers offer larger windows, but not unbounded ones. Because standard attention scales quadratically with input length, enlarging the window demands sharply more memory and compute. Research into more efficient attention mechanisms (such as sparse or linear attention) could help increase window sizes, but current models remain constrained by this design. External data access via Retrieval-Augmented Generation (RAG) is an alternative, at the cost of additional expense, slower responses, and some loss of accuracy.
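A common practical workaround for the fixed window is to keep only the most recent conversation turns that fit a token budget. The sketch below approximates token counts with a crude whitespace split; real systems use the model's own tokenizer, so treat the numbers as illustrative.

```python
# Trim conversation history to fit a fixed context window.
# Token counts are approximated by splitting on whitespace.

def trim_history(turns: list[str], max_tokens: int) -> list[str]:
    """Drop the oldest turns until the rough token count fits the budget."""
    kept: list[str] = []
    total = 0
    for turn in reversed(turns):      # walk newest-first
        n = len(turn.split())         # crude token estimate
        if total + n > max_tokens:
            break
        kept.append(turn)
        total += n
    return list(reversed(kept))       # restore chronological order

history = ["hola como estas", "muy bien gracias", "hablemos de verbos regulares"]
print(trim_history(history, 8))
# -> ['muy bien gracias', 'hablemos de verbos regulares']
```

This is exactly why earlier lesson details disappear: anything trimmed out of the window is simply invisible to the model.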

2. Lack of Persistent Memory and Goal Awareness

Transformers are stateless—they do not retain memory of previous interactions once the session ends, nor are they capable of managing tasks within the framework of a larger goal. This leads to task-based limitations, where the AI treats each interaction in isolation.

Limitation: This lack of goal-awareness and persistent memory requires human users to manually manage each task or sub-task, including re-establishing context for the AI to follow. The AI cannot manage or switch between tasks within a structured learning plan autonomously. For example, in a tutoring environment, if a lesson is part of a broader goal like mastering verb conjugation, the AI cannot track progress or switch between exercises related to that goal without external input.

Challenges: Overcoming this limitation would require significant advancements in hierarchical memory systems or hybrid AI architectures that combine short-term transformer memory with long-term, goal-oriented learning frameworks. This is an active area of research, but solutions remain in the experimental phase. As with context limits, RAG can supply external memory across sessions, though again at the cost of added expense, latency, and some loss of accuracy.
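To show the shape of the RAG workaround mentioned above, here is a deliberately minimal retrieval sketch: session summaries live in an external store, and the most relevant one is fetched and injected into the prompt. Production systems use vector embeddings for similarity; simple word overlap stands in for that here, and the store contents are hypothetical.

```python
# Minimal RAG-style retrieval over stored session summaries.
# Word overlap stands in for embedding similarity.

def retrieve(store: dict[str, str], query: str) -> str:
    """Return the stored summary sharing the most words with the query."""
    q = set(query.lower().split())
    best = max(store, key=lambda k: len(q & set(store[k].lower().split())))
    return store[best]

store = {
    "s1": "session 1 covered greetings and basic numbers",
    "s2": "session 2 practised regular verb conjugation in present tense",
}
context = retrieve(store, "continue verb conjugation practice")
print(context)
```

The retrieved summary would then be prepended to the system prompt, giving the stateless model a stand-in for persistent memory.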

3. Time Management and Session Structuring

Transformer-based models do not have a built-in concept of time or session management. They lack awareness of real-world constraints, such as the length of a lesson or the passage of time within a tutoring session.

Limitation: In our experiment, this resulted in difficulties adhering to the hour-long session duration, requiring human oversight to manage time effectively.

Challenges: Adding time-awareness to AI models is not straightforward because transformers do not perceive time or sequential actions in the same way humans do. One approach could involve training models specifically on time-sensitive tasks or incorporating external time management systems that can communicate with the model during sessions.
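The external time-management approach mentioned above can be sketched as a clock that runs outside the model and phrases elapsed time as text to include in each prompt. The class name and message wording are assumptions for illustration.

```python
# An external session clock: the model never perceives time itself, so a
# wrapper tracks it and injects a human-readable cue into each prompt.

import time

class SessionClock:
    """Track lesson time externally and phrase it for the model."""

    def __init__(self, duration_minutes: int):
        self.start = time.monotonic()
        self.duration = duration_minutes * 60

    def time_note(self) -> str:
        elapsed = time.monotonic() - self.start
        remaining = max(0.0, self.duration - elapsed)
        return f"[{int(remaining // 60)} minutes left in this lesson]"

clock = SessionClock(duration_minutes=60)
print(clock.time_note())
```

Appending `clock.time_note()` to each user message lets the model wind down exercises as the hour ends, without pretending it has any internal sense of time.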

4. Adaptability and Fine-Tuning Across Sessions

Transformers operate based on statistical patterns learned from large datasets, but this does not always translate into adaptive, student-specific tutoring. They lack the ability to fine-tune lessons in real-time or across multiple sessions unless explicitly reconfigured.

Limitation: The AI struggled to adapt dynamically to different learning styles or adjust its lesson plans based on prior performance, a key issue for personalised learning experiences.

Challenges: Solutions may involve personalised training techniques, such as reinforcement learning, where the model adjusts based on user feedback. However, personalisation at scale is computationally expensive and difficult to implement without overfitting the model to specific users. External data access is a possibility via RAG but is not equivalent to dedicated fine-tuning or training. Real-time learning remains a research area.
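Short of fine-tuning, one partial workaround for cross-session adaptability is an external rule that derives exercise difficulty from recent scores and passes it to the model via the prompt. The thresholds below are illustrative assumptions, not values from the experiment.

```python
# External difficulty selection from recent performance (scores in 0-1).
# The chosen level would be written into the next session's prompt.

def pick_difficulty(recent_scores: list[float]) -> str:
    """Map the average of recent scores to an exercise difficulty level."""
    if not recent_scores:
        return "easy"          # no history yet: start gently
    avg = sum(recent_scores) / len(recent_scores)
    if avg >= 0.8:
        return "hard"
    if avg >= 0.5:
        return "medium"
    return "easy"

level = pick_difficulty([0.9, 0.7, 0.85])
print(level)
```

Note that this logic lives entirely outside the model: the transformer itself still adapts only to whatever the prompt tells it.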

This technical section outlines the architectural constraints that limit AI performance in education. Since the development of the transformer model in 2017, researchers have made strides in tackling some of these challenges, but the underlying issues—context limits, memory retention, and real-time adaptability—remain significant barriers. Future improvements in AI education systems will rely on overcoming these technical challenges while maintaining the model's usability and efficiency.

Conclusion

The Spanish Tutor experiment provides valuable insights into the current capabilities and limitations of AI in education. While AI demonstrates impressive abilities in content generation, interactive practice, and immediate feedback, it falls short in crucial areas such as long-term personalisation, autonomous operation, and comprehensive progress tracking.

The table below summarises each stage of the experiment, its tasks, and the level of AI support observed.

| Stage | Task | AI Support |
| --- | --- | --- |
| Initiation | Define Learning Objectives | Basic |
| Initiation | Set Experiment Goals | Basic |
| Initiation | Determine Educational Framework | Basic |
| Planning | Create Course Outline | Great |
| Planning | Break Down Topics into Sessions | Great |
| Planning | Design Practice Exercises | Medium |
| Planning | Configure AI System Prompts | Medium |
| Execution | Run Weekly Tutoring Sessions | Medium |
| Execution | Facilitate Language Practice | Great |
| Execution | Provide Grammar & Vocabulary Feedback | Great |
| Execution | Recap Previous Sessions | Basic |
| Monitoring | Track Student Progress | Basic |
| Monitoring | Generate Parent Reports | Basic |
| Monitoring | Adjust AI Configuration Based on Feedback | Medium |
| Closure | Review Overall AI Performance | Basic |
| Closure | Evaluate Student Learning Outcomes | Basic |
| Closure | Finalise Experiment | Basic |

As we continue to develop and integrate AI into educational settings, it's essential to recognise its current role as a powerful complementary tool rather than a standalone replacement for human educators. The future of AI in education lies in addressing these limitations while leveraging its strengths to create more effective, personalised, and engaging learning experiences. Until then, the human touch remains irreplaceable in providing the nuanced, adaptive, and holistic approach necessary for effective education.

