Key Takeaways

  • DeepMind’s rapid progress comes from combining Google’s talent and compute with focused, startup-style execution.
  • Current AI systems, including large language models, lack continual learning capabilities after initial training.
  • True artificial general intelligence requires improved memory, long-term hierarchical planning, and overcoming “jagged intelligences.”
  • Founders building AI products must account for these existing limitations and plan for future, more adaptive systems.

The Missing Pieces of AGI: Continual Learning and Planning

Demis Hassabis, CEO of Google DeepMind, pulls no punches. While acknowledging the broad impact of AI, he points out critical gaps in current systems. He attributes DeepMind’s recent acceleration to a clear strategy: “assembling together all the ingredients we already had and then … pushing with relentless … focus and pace, acting almost like a startup.” This approach has positioned the lab at the frontier.

However, Hassabis is quick to define what still lies ahead. The current generation of AI, particularly large language models, operates under significant constraints. A primary one is the absence of “continual learning.” He states, “These systems don’t learn after you finish training them, after you put them out into the world.” Real-time adaptation and evolving knowledge are not inherent features, so teams must rely on constant retraining or workarounds.

Beyond static learning, current AI struggles with foresight. Hassabis identifies the need for “long-term planning … hierarchical planning. These systems are not very good at planning at long time horizons … many years into the future.” They may excel at immediate tasks, but complex, multi-stage objectives remain out of reach for autonomous systems. The vision for future AI must include genuine strategic capability.

Finally, Hassabis uses the term “jagged intelligences.” He observes that current AI is “really amazing at certain things when you pose the question in a certain way, but if you pose a question in a slightly different way they can actually still fail at quite elementary things.” This brittleness means that even highly capable AI can break down under minor contextual shifts, a stark contrast to the robustness expected of a general intelligence.

What to Do With This

If you’re building an AI-powered product, design for the future, not just for today’s models. Assume current AI is static and brittle. Start building your data infrastructure and feedback loops now, so that continual learning and adaptability are possible once more capable models arrive. Do not architect your product around the current “jagged” state of AI; plan for its evolution.
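One concrete way to prepare is to capture rated interactions today, even though current models cannot learn from them in real time. Below is a minimal sketch of such a feedback loop; the names (`FeedbackEvent`, `record_feedback`, the `model-2024-06` version string) are illustrative assumptions, not any specific product's API, and a real system would write to durable storage rather than an in-memory list.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class FeedbackEvent:
    """One user interaction, captured for future fine-tuning or evaluation."""
    prompt: str
    model_output: str
    user_rating: int    # e.g. -1 (bad), 0 (neutral), 1 (good)
    model_version: str  # pin the static model that produced the output
    timestamp: str      # ISO 8601, UTC

def record_feedback(store: list, prompt: str, output: str, rating: int,
                    model_version: str = "model-2024-06") -> dict:
    """Append a feedback event as a JSON-serializable record.

    A plain list stands in for durable storage in this sketch.
    """
    event = FeedbackEvent(
        prompt=prompt,
        model_output=output,
        user_rating=rating,
        model_version=model_version,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    record = asdict(event)
    store.append(json.dumps(record))
    return record

# Usage: accumulate rated interactions so a future, more adaptive
# model can be fine-tuned or evaluated against real user signals.
log: list = []
rec = record_feedback(log, "Summarize Q3 results", "Revenue grew 12%...", 1)
```

Logging the model version alongside each rating matters: because today's models are static, you will need to know exactly which snapshot produced each output when you later retrain or compare.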