Key Takeaways
- Large language models (LLMs) are surprisingly easy to swap; for professional developers, switching between them is often literally “one click away,” undermining traditional software moats.
- Traditional moats like network effects or economies of scale haven't emerged in the foundation model space, making competition fierce and differentiation difficult.
- Amjad Masad, CEO of Replit, hypothesizes that continuous capital for training future models is the only true, lasting moat, potentially leading to “natural monopolies” or even government-subsidized competition, mirroring China's EV market strategy.
- Paradoxically, this commoditization of underlying AI technology is a massive win for entrepreneurs, as it creates a fertile ground for building diverse, defensible applications on top of these easily swappable models.
The AI Game of Thrones, Without the Castles
Forget what you thought you knew about building a moat in AI. Amjad Masad, the founder behind Replit’s meteoric growth from $2.5 million to $250 million ARR, pulls apart the typical founder playbook. He sees a high-stakes AI “Game of Thrones” where the foundational models—the LLMs everyone is betting on—lack the defensive moats we usually associate with valuable tech companies.
“The moat of these LLMs... is the technology fundamentally commoditizable, right?” Masad asks. “It seems to be very easy to replace these models.” He points to developers casually swapping models: “A lot of professional developers are switching between these models every day, and if you're in Cursor, it's literally one click away to switch to the next model.” This fluid interchangeability means that, for all the hype, the core LLM technology itself may not be the long-term differentiator.
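Masad's fungibility point is visible in everyday engineering practice: many model providers now expose chat-completion endpoints with a shared request shape, so “switching models” often reduces to changing one config entry rather than rewriting application code. A minimal sketch of that pattern (the provider names, URLs, and model identifiers below are illustrative placeholders, not real endpoints):

```python
# Hypothetical provider registry: when providers share a compatible
# chat-completion interface, swapping LLMs is a one-line config change.
PROVIDERS = {
    "gpt":    {"base_url": "https://api.gpt.example/v1",    "model": "gpt-x"},
    "claude": {"base_url": "https://api.claude.example/v1", "model": "claude-x"},
    "llama":  {"base_url": "https://api.llama.example/v1",  "model": "llama-x"},
}

def build_request(provider: str, prompt: str) -> dict:
    """Assemble a provider-agnostic chat request; only the config differs."""
    cfg = PROVIDERS[provider]
    return {
        "url": cfg["base_url"] + "/chat/completions",
        "body": {
            "model": cfg["model"],
            "messages": [{"role": "user", "content": prompt}],
        },
    }

# Application code is identical regardless of which model sits behind it:
req = build_request("claude", "Summarize this support ticket.")
```

The design point is that the model name lives in configuration, not in application logic, which is exactly what makes the underlying LLM a swappable commodity from the builder's perspective.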
Capital: The New AI Kingmaker
If not network effects or scale, what is the moat in the AI game? Masad has a striking hypothesis: it’s capital, and only capital. “We haven't seen any of those moats sort of emerge from these foundation model companies,” he states. “And so my hypothesis is that maybe the one natural moat is capital.”
Imagine an endless arms race where only those with bottomless pockets can afford to train the next generation of models. This isn't just about big tech firms. Masad even suggests governments could step in, much like China's strategic investment in its EV market. “You can imagine all the big companies have a fighting chance, and all the big governments can also participate in this. Maybe China, as a place where you can't really distinguish between country and company, is going to do to the foundation model market what they did to the EV market.” This paints a picture where the ultimate power lies with those who can continuously pour billions into R&D, not necessarily those with the best model today.
Why Commoditization Is Your Friend
This all sounds bleak for the average entrepreneur, right? A winner-take-all scenario dominated by giants. But Masad flips the script. The very fact that core LLMs are becoming a commodity is good news for builders. If one or two companies owned truly indispensable, unswappable models, they’d eventually “come for you,” integrating your best features into their platform and eroding your value.
Instead, Masad believes, “if it ends up being a technology that's kind of easy to replace, I mean there is a place for a lot of entrepreneurs to participate.” When the underlying LLM is a fungible component, the real competitive battle shifts to the application layer. Founders can build defensible businesses by focusing on unique user experiences, proprietary data sets, distribution, or deeply integrated workflows that solve specific problems, rather than trying to build a better foundational model.
What to Do With This
Stop trying to build a better LLM unless you have nation-state level funding. This week, pick a painful, underserved problem for a specific user segment. Then, instead of fixating on which LLM you use, focus entirely on creating a superior application that solves that problem so well it generates genuine switching costs for your users, regardless of the foundational AI powering it.