Key Takeaways
- The White House is reportedly exploring an "FDA for AI" concept to vet new models for safety, potentially through an executive order, aiming for pre-release proof of safety similar to drug approvals.
- Leading venture capitalists, including Brad Gerstner and David Sacks, strongly oppose this, arguing an "approval regime" would be disastrous for innovation and a cover for regulatory capture.
- Chamath Palihapitiya attributes the regulatory push to a "profound vibe shift" against tech and Silicon Valley, fueled by fears of wealth concentration and by the industry's failure to communicate AI's societal benefits.
- David Sacks dismisses the "FDA for AI" as "fake news," advocating targeted solutions instead: Mythos models for cyber threats and "Know Your Customer" (KYC) protocols for powerful AI APIs, rather than broad pre-release approvals.
The Disagreement
Reports suggest the White House is considering an "FDA for AI," a system to vet new models for safety before they hit the market. The idea, as one unnamed official put it, is to establish a "clear roadmap" and a “process so that you know they're released to the wild after they've been proven safe just like an FDA drug.” The intent is to get ahead of potential vulnerabilities before a model ships, rather than respond to harms after the fact.
However, the All-In hosts met this proposal with immediate and sharp skepticism. Brad Gerstner, after speaking with Kevin Hassett, clarified that the administration's intent is likely coordination, not an outright approval regime. Still, he warned that “The approval regime, this idea that you're going to have to share every model with an FDA in Washington and they're going to have to pre-approve the model is a disaster.” For Gerstner, this type of oversight would simply throttle the rapid iteration vital to AI's progress.
Chamath Palihapitiya offered a political lens, linking the regulatory push to a broader sentiment. He argued, “I think that there's a pretty profound vibe shift with respect to tech, tech oligarchs, Silicon Valley, and particularly the AI. That vibe shift has already happened on Main Street, and I think that that's starting to seep into Washington.” He sees the move as a reaction to public anxiety and wealth concentration, not necessarily a technical necessity.
David Sacks was even more direct, dismissing the entire concept as misleading. “This whole idea of an FDA for AI, I don't think any senior official supports it... So I think there's a big fake news component.” Sacks pushed for precise, targeted solutions over broad government oversight, citing specific measures for cyber threats, such as Mythos models, and proposing "Know Your Customer" (KYC) requirements for powerful AI APIs. His argument: rather than halting innovation with pre-release approvals, which he views as cover for regulatory capture, put accountability at the point of access and address known vulnerabilities directly.
Who's Right (and When They're Wrong)
Sacks and Gerstner hit closer to the mark for founders building in AI today. An "FDA for AI" modeled on drug approval, a multi-year, multi-million-dollar process, would instantly cripple the industry's pace. AI models evolve weekly, not annually. Subjecting every iteration to bureaucratic pre-approval would render most startups unviable and cede global leadership to less regulated markets. This kind of blanket pre-vetting is an overreaction that fundamentally misunderstands the AI development cycle.
Chamath's point about the "vibe shift" is also spot-on; it explains why this debate is happening. Founders often live in an echo chamber of innovation, but the general public and politicians are grappling with the perceived unknowns and potential downsides of AI. Ignoring this public sentiment or failing to articulate AI's immense benefits only fuels the fire for heavier regulation.
Where the regulatory idea has merit is not in blanket pre-approval but in Sacks's more targeted approach. Specific, provable harms (like cyber threats or misuse via public APIs) demand specific responses. Requiring KYC for powerful AI APIs, for example, assigns accountability for misuse without stifling development, a far more pragmatic path than broad government control over every new model.
What to Do With This
Don't wait for the government to tell you how to operate. If you're building powerful AI models or APIs that could be misused, implement "Know Your Customer" (KYC) protocols or other accountability measures before they become a mandate; a minimal sketch follows below. And recognize that public perception of AI is shifting: educate your customers and stakeholders on your product's benefits and safety features to build trust and get ahead of the regulatory push.
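To make the KYC idea concrete, here's a minimal sketch of what a KYC gate in front of a model API could look like. This isn't any vendor's actual implementation; the `Customer` record, the `VERIFIED_CUSTOMERS` registry, and the `KYC_HMAC_SECRET` environment variable are all hypothetical stand-ins for a real identity-verification flow and database.

```python
import hashlib
import hmac
import os
from dataclasses import dataclass


@dataclass
class Customer:
    """Record created at onboarding, after identity verification passes."""
    org_name: str
    identity_verified: bool
    rate_limit_per_min: int


# Hypothetical registry mapping hashed API keys to verified customers.
# In production this would sit behind a real identity-verification flow
# and a database; it's an in-memory dict here for illustration.
VERIFIED_CUSTOMERS: dict[str, Customer] = {}


def hash_key(api_key: str) -> str:
    # HMAC the key with a server-side secret so a leaked registry
    # doesn't expose usable credentials.
    secret = os.environ["KYC_HMAC_SECRET"].encode()
    return hmac.new(secret, api_key.encode(), hashlib.sha256).hexdigest()


def authorize(api_key: str) -> Customer:
    """Refuse unknown or unverified callers before inference runs."""
    customer = VERIFIED_CUSTOMERS.get(hash_key(api_key))
    if customer is None:
        raise PermissionError("unknown API key")
    if not customer.identity_verified:
        raise PermissionError("identity verification incomplete")
    return customer


def handle_completion(api_key: str, prompt: str) -> str:
    customer = authorize(api_key)
    # Log an attribution trail before serving, so any misuse traces back
    # to a verified account rather than an anonymous key.
    print(f"serving {customer.org_name}: {len(prompt)} prompt chars")
    return f"<model output for {customer.org_name}>"  # real inference call goes here


# Usage: keys are only registered once identity checks pass.
os.environ.setdefault("KYC_HMAC_SECRET", "dev-only-secret")
VERIFIED_CUSTOMERS[hash_key("sk-demo-123")] = Customer("Acme Corp", True, 600)
print(handle_completion("sk-demo-123", "Hello"))  # served and logged
# handle_completion("sk-anon-999", "Hello")       # would raise PermissionError
```

The point is the shape, not the code: access to the powerful endpoint is conditional on a verified identity, and every request leaves an attribution trail, which is exactly the accountability-at-the-point-of-access posture Sacks argues for.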