HRM: are we closer to superintelligence?

Adamay Mann

The David vs. Goliath Moment in AI

Imagine if I told you that a 27-million-parameter model could outperform GPT-4 and Claude on complex reasoning tasks. You'd probably think I'd been spending too much time in AI Twitter's echo chamber. But that's exactly what happened when Sapient Intelligence released their Hierarchical Reasoning Model (HRM).

While everyone's been obsessing over scaling laws and throwing more compute at larger models, a small team decided to take a completely different approach: what if we actually tried to mimic how the brain works?

Breaking the Scaling Paradigm

The current AI paradigm is beautifully simple: bigger models, more data, better results. It's worked remarkably well until you hit reasoning tasks that require genuine depth of thought.

HRM throws this conventional wisdom out the window. With just 1,000 training examples per task (no pre-training, no chain-of-thought supervision), it achieved:

  • 40.3% accuracy on ARC-AGI-2 (vs. o3-mini-high's 34.5%)

  • Near-perfect performance on Sudoku-Extreme (while CoT models scored 0%)

  • Optimal pathfinding in complex 30x30 mazes (again, 0% for baseline methods)

This isn't just an incremental improvement; it's a paradigm shift.

The Brain-Inspired Architecture That Changes Everything

Two Systems, One Mind

HRM's secret sauce lies in its brain-inspired dual architecture:

  1. High-level (H) module: The slow, deliberate planner (think System 2 from Kahneman's Thinking, Fast and Slow)

  2. Low-level (L) module: The fast, execution-focused processor (System 1)

But here's where it gets interesting: these modules operate at different timescales, just like your brain. The H-module updates slowly, setting strategic direction, while the L-module rapidly processes details within that context.

Hierarchical Convergence: The Innovation That Matters

Traditional recurrent networks suffer from "premature convergence": their state settles into a stable pattern too quickly, and the remaining steps stop adding useful computation. HRM solves this through something called hierarchical convergence.

Think of it like this: the L-module finds a local solution, the H-module evaluates it, then "resets" the L-module with new context. It's like having a chess master (H) guide a tactical calculator (L) through each phase of the game.
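
If you like seeing ideas as code, here's a minimal sketch of a two-timescale loop in this spirit. To be clear, this is not Sapient's implementation; the GRU cells, sizes, and names are my own stand-ins (the real model uses transformer-style blocks). What it does show is the core move: L iterates many times inside every single H update, and each H update resets the context L works in.

```python
import torch
import torch.nn as nn

class TwoTimescaleReasoner(nn.Module):
    """Toy stand-in for HRM's H/L structure (not the real architecture)."""

    def __init__(self, x_dim=128, h_dim=256, l_dim=256, n_cycles=4, t_steps=8):
        super().__init__()
        self.n_cycles, self.t_steps = n_cycles, t_steps    # N slow cycles, T fast steps per cycle
        self.l_cell = nn.GRUCell(x_dim + h_dim, l_dim)     # fast, detail-level module (L)
        self.h_cell = nn.GRUCell(l_dim, h_dim)             # slow, planner module (H)
        self.readout = nn.Linear(h_dim, x_dim)

    def forward(self, x):
        z_h = x.new_zeros(x.size(0), self.h_cell.hidden_size)  # slow "plan" state
        z_l = x.new_zeros(x.size(0), self.l_cell.hidden_size)  # fast "work" state
        for _ in range(self.n_cycles):
            for _ in range(self.t_steps):
                # L iterates rapidly within the context set by H
                z_l = self.l_cell(torch.cat([x, z_h], dim=-1), z_l)
            # H updates once per cycle, absorbing L's result and
            # "resetting" the context L will work in next cycle
            z_h = self.h_cell(z_l, z_h)
        return self.readout(z_h)

model = TwoTimescaleReasoner()
out = model(torch.randn(2, 128))   # out.shape == (2, 128)
```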

The Neuroscience Connection That's Actually Meaningful


Most "brain-inspired" AI is marketing fluff. HRM is different, how? It demonstrates measurable parallels to actual brain organization.

The researchers found that HRM's modules develop a dimensionality hierarchy that mirrors the human cortex. Higher-level areas (like the prefrontal cortex) need flexible, high-dimensional representations to handle diverse tasks. Lower-level areas can be more specialized.

In HRM, the H-module learned to operate in an effectively 89.95-dimensional space, while the L-module used just 30.22 dimensions. That ratio of roughly 2.98 closely tracks the dimensionality hierarchy measured in mouse cortex (about 2.25).

This wasn't programmed in; it emerged from training.
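
For the curious, the dimensionality figures above come from a participation ratio: an effective-dimensionality measure computed from the eigenvalue spectrum of the hidden-state covariance. Here's a generic sketch of that measurement (my own code, not the paper's):

```python
import numpy as np

def participation_ratio(states: np.ndarray) -> float:
    """Effective dimensionality of hidden states, shape (n_samples, n_units)."""
    centered = states - states.mean(axis=0, keepdims=True)
    cov = centered.T @ centered / (len(states) - 1)   # unit-by-unit covariance
    eig = np.linalg.eigvalsh(cov)                     # its eigenvalue spectrum
    return eig.sum() ** 2 / (eig ** 2).sum()          # (sum of eigs)^2 / sum of squared eigs

# Toy check: activity spread across many directions scores high,
# activity confined to a low-rank subspace scores low.
rng = np.random.default_rng(0)
broad = rng.normal(size=(2000, 256))                             # ~isotropic
narrow = rng.normal(size=(2000, 4)) @ rng.normal(size=(4, 256))  # rank 4, so PR <= 4
print(participation_ratio(broad), participation_ratio(narrow))
```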

What This Means for Superintelligence

Current AI scaling hits a wall with complex reasoning. Even o1's chain-of-thought approach requires massive computational overhead for every reasoning step.

HRM suggests a different path: structured intelligence that can:

  • Reason efficiently in latent space (no token-by-token verbalization)

  • Adapt computational depth to problem complexity (there's a small sketch of this right after the list)

  • Learn new reasoning strategies with minimal data
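
To make the adaptive-depth point concrete, here's a deliberately simplified sketch: keep running reasoning segments until a small halting head decides the current state is good enough. HRM itself, as I read the paper, trains its halt/continue decision with a Q-learning-style objective; the fixed threshold and the GRU "core" below are purely illustrative stand-ins.

```python
import torch
import torch.nn as nn

class HaltingReasoner(nn.Module):
    def __init__(self, x_dim=128, z_dim=256, max_segments=8, threshold=0.5):
        super().__init__()
        self.core = nn.GRUCell(x_dim, z_dim)       # stand-in for a full reasoning segment
        self.halt_head = nn.Linear(z_dim, 1)       # scores "am I done reasoning?"
        self.max_segments, self.threshold = max_segments, threshold

    @torch.no_grad()
    def infer(self, x):
        z = x.new_zeros(x.size(0), self.core.hidden_size)
        for segments in range(1, self.max_segments + 1):
            z = self.core(x, z)                               # one more reasoning segment
            p_halt = torch.sigmoid(self.halt_head(z)).mean()  # halting score for this state
            if p_halt > self.threshold:                       # easy problems stop early,
                break                                         # hard ones keep computing
        return z, segments

reasoner = HaltingReasoner()
z, used = reasoner.infer(torch.randn(4, 128))
print(f"stopped after {used} segment(s)")
```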

Turing Completeness in Practice

While models like Universal Transformers are theoretically Turing complete, they fail in practice due to convergence issues and memory constraints. HRM overcomes these limitations through:

  • Hierarchical convergence that maintains computational activity

  • O(1) memory training (vs. traditional O(T) for T timesteps; a sketch of this trick follows at the end of this section)

  • Adaptive computation time that scales resources to problem difficulty

This brings us significantly closer to AI systems that can handle truly complex, multi-step reasoning tasks.
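
The O(1)-memory bullet deserves a concrete picture. The trick, as I understand it, is to skip full backpropagation through time: run the recurrent steps without building a computation graph, then backpropagate only through the final step (a one-step gradient approximation). The cell below is a generic stand-in, not HRM itself.

```python
import torch
import torch.nn as nn

cell = nn.GRUCell(128, 256)          # stand-in recurrent reasoning step
readout = nn.Linear(256, 10)
opt = torch.optim.Adam(list(cell.parameters()) + list(readout.parameters()), lr=1e-3)

x = torch.randn(32, 128)
target = torch.randint(0, 10, (32,))

z = torch.zeros(32, 256)
with torch.no_grad():                # run the first 63 steps without a graph:
    for _ in range(63):              # memory does not grow with the step count
        z = cell(x, z)

z = cell(x, z)                       # only step 64 is differentiated
loss = nn.functional.cross_entropy(readout(z), target)

opt.zero_grad()
loss.backward()                      # gradient flows through one step: O(1) memory
opt.step()
```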

The Bigger Picture: Why This Matters Now

The LLM Reasoning Crisis

Current LLMs excel at pattern matching and knowledge retrieval but struggle with genuine reasoning. Chain-of-thought prompting is, as the HRM paper bluntly states, "a crutch, not a satisfactory solution."

Recent work on inference-time scaling and reasoning improvements has shown some promise, but it's essentially brute-forcing the problem with more compute.

A New Path Forward

HRM represents something different: architectural innovation over raw scaling. It's the kind of breakthrough that could:

  1. Democratize advanced AI (27M parameters vs. billions)

  2. Enable edge deployment of reasoning systems

  3. Reduce the compute requirements for complex tasks

  4. Open new research directions in brain-inspired architectures

The Open Questions

Is This Actually Superintelligence?

Not yet. HRM excels at specific reasoning tasks but lacks the broad knowledge and general capabilities of large language models. However, it demonstrates key ingredients we'd expect in superintelligent systems:

  • Efficient learning from minimal data

  • Adaptive reasoning strategies

  • Hierarchical thinking that mirrors human cognition

  • Generalizable architecture principles

The Integration Challenge

The path forward likely involves combining HRM's reasoning architecture with the broad knowledge of large language models. Imagine GPT-4's knowledge base with HRM's reasoning engine. Yes, that's when things get interesting.

What Comes Next?

Sapient Intelligence has open-sourced HRM, which means we're about to see an explosion of research building on these ideas. Key areas to watch:

  1. Hybrid architectures combining LLMs with hierarchical reasoning

  2. Scaling studies of brain-inspired designs

  3. Real-world applications in scientific reasoning and planning

  4. Neuroscience validation of the architectural principles


What do you think? Is architectural innovation the key to superintelligence, or will raw scaling win out? The next few years, or months (idk 💁🏻), will be fascinating to watch.

