For founders feeling the relentless pace of AI breakthroughs, Nat Friedman has a sharp, counter-intuitive message: “This is the slow part of the singularity right now.” Think about that for a second. The daily drumbeat of new models, capabilities, and applications—the kind that feel like a full-time job just to track—is, in Friedman's view, as slow as it will ever be.
Key Takeaways
- Nat Friedman believes the current blistering pace of AI advancement is actually the "slow part" of the singularity. It’s the calm before the storm.
- This perceived slowness stems from the significant "human effort" still required by researchers and engineers to continuously improve AI models.
- The next phase, the "elbowing up" of progress, will begin when AI systems achieve "self-improvement," automating their own development cycles.
- This self-improvement will remove humans from the critical loop of model refinement, allowing progress to scale to "data center levels" of speed and efficiency.
- Founders must prepare not just for the current rapid change, but for an unprecedented acceleration when AI systems begin to autonomously drive their own evolution.
The Deceptive Calm Before the Storm
If you're a builder in your 20s or 30s, you've lived through what feels like an eternity of AI news in the past two years. Every week brings a new benchmark, a fresh capability, another boundary pushed. Yet Friedman, a deep insider in the AI world, calls it a crawl. “The improvement of the models still runs through a lot of human effort,” he explains. Behind every announced "breakthrough" are thousands of researchers, engineers, and data scientists spending countless hours refining data, tweaking architectures, and designing experiments. This human bottleneck, however efficient it might be today, puts a ceiling on true exponential growth.
It feels fast, Friedman concedes, but only relative to what's coming. Human-centric development means progress still has "sleep gaps" and hard limits on how much work can happen concurrently. It's like building a rocket ship with manual tools: no matter how skilled the craftspeople, the output is amazing, but it's not truly automatic. Not yet. "It feels sometimes fast, sometimes slow right now," Friedman says, “but it's probably as slow as it will ever be, when we start to automate more and more of that process.”
When the Elbow Kicks Up
The current “prime project at every AI lab right now,” according to Friedman, is precisely to eliminate this human bottleneck. The goal is to “remove humans from the loop of all the continuous work that has to be done to make the models improve and to get to self-improvement.” Imagine AI systems that don't just solve problems, but autonomously identify ways to improve their own underlying architecture, training data, and learning algorithms. This isn't just about faster iteration; it's about a fundamental shift in the pace of innovation.
When AI can improve itself, when humans are no longer the bottleneck in the research loop, Friedman forecasts an "elbowing up" of progress. The entire development process scales to "data center scale," where the constraints become compute and energy, not human ingenuity or available hours. The jump from human-assisted model improvement to fully autonomous, self-improving AI will be the difference between today's rapid sprints and an entirely new, almost incomprehensible speed of progress. "We are in the singularity," Friedman states, “and we're in the beginning slow part before it elbows up with self-improvement.”
What to Do With This
This week, look closely at your own product's AI development cycle. Identify the manual, human-intensive steps currently required for continuous improvement or new feature development: data curation, evaluation, prompt and architecture tuning. If AI's "slow part" is this fast, what happens when those manual steps are automated away by self-improving agents? Don't build for today's AI capabilities; design your systems and business models for an environment where core AI development cycles are measured in hours, not months. Ask: What happens to my moat, my team structure, and my value proposition when the underlying AI technology improves 10x faster than I can currently comprehend?