What is the future of intelligence?
After listening to Hassabis reflect on intelligence, one thing became clear: progress toward AGI is not a straight line, and it’s not just about building bigger models. The reality is messier, and more interesting!
1. A decade of progress in a year exposed real weaknesses
Multimodal systems have advanced at remarkable speed. At the same time, this progress revealed an uncomfortable truth: models that can solve Olympiad-level problems may still fail at basic reasoning. Rather than smoothing out these inconsistencies, recent gains have made intelligence's uneven nature obvious.
2. The main obstacle is consistency, not raw ability
Hassabis describes today’s systems as exhibiting “jagged intelligence” — impressive at certain peaks, unreliable in the gaps. Until AI can reason steadily across domains and recognize when it doesn’t know something, general intelligence remains out of reach.
3. Bigger models alone are not enough
DeepMind is placing its bets evenly: half on scaling compute and data, half on new system designs. Scale helps, but progress also depends on better reasoning, handling uncertainty, and learning over longer time horizons.
4. Language does not equal understanding
Some parts of intelligence can’t be learned from text alone. Physical intuition, spatial reasoning, and interaction with the world require experience. This is why world models and simulation are becoming central to current research.
5. Simulation may teach us why intelligence exists at all
One of the most striking ideas discussed was the use of large-scale simulations to study how intelligence, social behavior, and even consciousness might arise. Running millions of controlled experiments could help explain not just how intelligence works, but why it emerged.
6. AI may be overstated now, and still underestimated later
Hassabis holds two views at once: parts of today’s AI ecosystem are clearly inflated, yet the deeper, long-term effects (especially in science and energy) are still widely misunderstood. The biggest changes may arrive later, but cut much deeper.
Key New Announcements/Concepts:
1. Deepened CFS Partnership
- Hassabis revealed that the collaboration with Commonwealth Fusion Systems is now much deeper than previously understood.
- The work goes beyond advisory roles into plasma containment and advanced materials, positioning fusion research as a real testbed for AI-driven scientific discovery.
2. Genie ↔ SIMA Infinite Training Loop
- For the first time, Hassabis publicly described how Genie (world models) and SIMA (embodied agents) are intended to form an infinite self-improving loop.
- World models generate environments → agents act within them → outcomes refine the world model → repeat.
- This frames embodied learning as central, not auxiliary, to AGI progress.
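The loop described above can be sketched in a few lines. This is a purely illustrative toy, not DeepMind's implementation: the `WorldModel` and `Agent` classes and their methods are hypothetical stand-ins, and "fidelity" is a toy proxy for model quality.

```python
# Hypothetical sketch of the Genie <-> SIMA loop described above.
# All names here are illustrative assumptions, not DeepMind APIs.

class WorldModel:
    def __init__(self):
        self.fidelity = 0.0  # toy proxy for how well the model matches reality

    def generate_environment(self):
        return {"fidelity": self.fidelity}

    def refine(self, outcomes):
        # Outcomes from agent rollouts feed back into the world model.
        self.fidelity += 0.1 * len(outcomes)


class Agent:
    def act(self, env):
        # Return a list of toy outcome records from acting in the environment.
        return [{"reward": env["fidelity"]}]


def training_loop(steps=5):
    world, agent = WorldModel(), Agent()
    for _ in range(steps):
        env = world.generate_environment()  # 1. world model generates environments
        outcomes = agent.act(env)           # 2. agents act within them
        world.refine(outcomes)              # 3. outcomes refine the world model
    return world.fidelity                   # 4. repeat
```

The point of the sketch is structural: each pass through the loop improves the environment generator, which in turn gives the agent richer experience, with no fixed stopping point — hence "infinite."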
3. Physics Benchmarking via Game Engines
- A genuinely novel methodological detail: DeepMind is developing A-level physics benchmarks inside game engines.
- The goal is to test whether models actually respect Newtonian laws, not just predict outcomes statistically.
- This signals a shift from language-centric evaluation to grounded physical correctness.
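A grounded physics check of this kind can be illustrated with a minimal example: compare a model's predicted projectile trajectory against the closed-form Newtonian law rather than against a statistical baseline. The scenario, function names, and tolerance below are assumptions for illustration, not DeepMind's benchmark.

```python
# Toy illustration of evaluating physical correctness rather than
# statistical plausibility. Scenario and tolerance are assumptions.

G = 9.81  # gravitational acceleration, m/s^2


def newtonian_height(v0, t):
    # Height of a projectile launched straight up at v0 m/s after t seconds.
    return v0 * t - 0.5 * G * t ** 2


def respects_newton(predicted_heights, v0, dt, tol=0.05):
    # A model "respects Newtonian laws" here if every predicted height
    # stays within tol metres of the closed-form solution.
    for step, h in enumerate(predicted_heights):
        expected = newtonian_height(v0, step * dt)
        if abs(h - expected) > tol:
            return False
    return True
```

A trajectory generated inside a game engine with correct physics passes this check; a model that merely pattern-matches plausible numbers will drift outside the tolerance.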
4. Whole-Statement Confidence Scoring
- Hassabis outlined a concrete path to addressing model reliability: confidence is assessed across reasoning and planning steps, validating entire statements rather than token-by-token probabilities.
- This is an important evolution toward trustable reasoning systems rather than fluent text generators.
5. “Jagged Intelligence” as a First-Class Concept
- He explicitly used the term “jagged intelligence” to describe how current systems excel in some areas while failing badly in others.
- This terminology formalizes a widely felt but rarely named limitation of state-of-the-art models.
6. World Models Re-emphasized as Core Obsession
- While not new in isolation, Hassabis reinforced that world models remain his longest-standing passion.
- He sharply contrasted spatial, embodied learning with today’s LLM-dominant paradigm, calling out a fundamental gap that still blocks general intelligence.
What I appreciated most about the interview was its tone: ambitious but realistic, hopeful but honest. The future of intelligence won’t arrive in a single dramatic moment. It will come from working through many difficult, often unglamorous problems that we are only beginning to grasp.
Author: Prof May El Barachi
Dean of Computer Science & Full Professor, University of Wollongong in Dubai.
Academic leader in digital innovation, applied AI and industry-aligned technology education.