For ambitious builders creating physical AI, the biggest threat to your autonomous system might not be a line of code but a breakdown in trust, according to Peter Ludwig and Qasar Younis, co-founders of the $15 billion company Applied Intuition.

They argue that while technical rigor is non-negotiable in the high-stakes world of autonomous machines, the true breaking point for a company like Cruise wasn't a technology failure alone. It was a failure in communication and public trust that compounded a technical issue into a crisis.

Key Takeaways

  • Performance is Safety in Neural Simulation: Developing end-to-end autonomy models requires "neural simulation": hybrid methods like Gaussian splatting and diffusion that simulate all sensor data. As Peter Ludwig states, "Performance is everything. If you can't do enough simulation fast enough and cheap enough, you actually can't get results that are worthwhile." Speed directly impacts your ability to validate safety.
  • Statistical Verification Replaces Checklists: Traditional, requirements-based verification (think Euro NCAP crash tests) is being superseded by statistical approaches. Founders must now prove "nines of reliability" and manage mean time between failures, moving from black-and-white compliance to probabilistic safety assurances.
  • The "Sim-to-Real" Loop is Non-Negotiable: No simulation perfectly mirrors reality at first. Ludwig highlights the constant "sim-to-real matching" process, where real-world feedback must continuously refine and feed parameters back into your simulator. This isn't a one-time setup; it’s an active, ongoing feedback loop.
  • Trust and Communication Are Your Ultimate Safety Systems: Qasar Younis points to the Cruise incident as a case where the primary issue wasn't the initial technical glitch, but “how did the company talk to the regulators and what was their kind of behavior.” Your company's response to failure can be more damaging than the failure itself.

Neural Simulation: Fast, Cheap, and Safe Enough

Building an AI that controls a physical machine, say, a self-driving car or a robotic arm, is far more complex than training a chatbot. It demands simulation that can mirror the real world with extreme fidelity. Ludwig explained that for reinforcement learning on end-to-end models, you need to simulate all sensor data. This isn't just rendering; it’s "neural simulation," a hybrid approach combining methods like Gaussian splatting and diffusion.
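To make the hybrid idea concrete, here is a minimal, purely illustrative sketch in Python. Nothing in it is Applied Intuition's API; every class and method name is a hypothetical stand-in. The structure reflects the approach described above: a fast Gaussian-splatting pass reconstructs scene geometry, and a diffusion pass refines the raw render into realistic sensor output.

```python
# Illustrative only: all names are hypothetical stand-ins, not a real API.
from dataclasses import dataclass

@dataclass
class ScenePose:
    position: tuple[float, float, float]  # ego position in the scene (m)
    heading: float                        # yaw in radians

class NeuralSensorSim:
    """Hybrid 'neural simulation': splatting for geometry, diffusion for realism."""

    def __init__(self, splat_renderer, diffusion_refiner):
        self.splat_renderer = splat_renderer        # fast, geometry-faithful pass
        self.diffusion_refiner = diffusion_refiner  # closes the realism gap

    def render_camera_frame(self, pose: ScenePose):
        # 1. Rasterize the reconstructed scene (Gaussian splats) at this pose.
        raw_frame = self.splat_renderer.render(pose)
        # 2. Refine with a diffusion model so lighting, lens effects, and
        #    weather look real enough to train an end-to-end model on.
        return self.diffusion_refiner.refine(raw_frame)
```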

The catch? It has to be incredibly performant. Ludwig stressed, “If you can't do enough simulation fast enough and cheap enough, you actually can't get results that are worthwhile in the end.” For founders, this means your simulation strategy isn't just about accuracy; it's about sheer throughput and cost-efficiency. If your simulations are too slow or expensive, you simply can't generate enough data to train and validate your systems to the necessary safety levels.
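The throughput point is easy to quantify. The sketch below runs the back-of-envelope math; every number in it is an assumption for illustration, not a figure from the founders.

```python
# Back-of-envelope simulation campaign math. All inputs are illustrative.
def sim_campaign(required_hours: float, cost_per_sim_hour: float,
                 realtime_factor: float, parallel_workers: int):
    """Return (compute cost in $, wall-clock days) for a validation campaign.

    realtime_factor > 1 means each worker simulates faster than real time.
    """
    compute_cost = required_hours * cost_per_sim_hour
    wall_clock_days = required_hours / (realtime_factor * parallel_workers) / 24
    return compute_cost, wall_clock_days

# Hypothetical: 10M scenario-hours at $0.05/hr, 20x real time, 5,000 workers.
cost, days = sim_campaign(10_000_000, 0.05, 20, 5_000)
print(f"${cost:,.0f} and {days:.1f} days of wall-clock time")
# -> $500,000 and 4.2 days
```

Halve the real-time factor or double the per-hour cost and the same campaign may no longer fit the budget or the release schedule, which is exactly Ludwig's point.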

This continuous loop is also crucial. As Ludwig cautioned, “At first go, no simulation is going to represent the real world. There's always a process of this sim-to-real matching where you actually need the real-world feedback to basically feed into the parameters that are being used in the simulator.” You cannot build safety in a vacuum; real-world data must relentlessly test and improve your synthetic environments.
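One way to picture that loop, as a rough sketch rather than anyone's actual pipeline: treat simulator parameters as knobs and search for the settings that minimize the measured gap between simulated and real sensor logs. The callables `run_sim` and `discrepancy` below are hypothetical stand-ins.

```python
# Sketch of a sim-to-real matching loop. `run_sim` produces sensor logs
# under the given parameters; `discrepancy` scores the gap against real
# logs. Real systems use richer metrics and optimizers; this is a simple
# random coordinate search for illustration.
import random

def calibrate(sim_params: dict, real_logs, run_sim, discrepancy,
              iterations: int = 100, step: float = 0.05):
    best = dict(sim_params)
    best_gap = discrepancy(run_sim(best), real_logs)
    for _ in range(iterations):
        candidate = dict(best)
        key = random.choice(list(candidate))           # perturb one knob
        candidate[key] *= 1 + random.uniform(-step, step)
        gap = discrepancy(run_sim(candidate), real_logs)
        if gap < best_gap:                             # keep what matches reality
            best, best_gap = candidate, gap
    return best, best_gap
```

The important part isn't the optimizer; it's that the loop never terminates for good. Each new batch of real-world logs becomes the next target the simulator must match.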

When Public Trust Becomes Your Biggest Vulnerability

The Applied Intuition founders made it clear that their first core value centers on safety. But safety for physical AI systems extends beyond technical metrics. Verification and validation have shifted from simple requirements to statistical guarantees, aiming for multiple "nines of reliability" and managing mean time between failures. This means understanding and mitigating every possible failure mode, however rare.
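A worked example shows why this framing changes the economics of validation. The "rule of three" is a standard statistical result: with zero failures observed in n independent trials, the 95% upper confidence bound on the failure rate is roughly 3/n. The per-mile framing and concrete numbers below are illustrative assumptions, not figures from the talk.

```python
# Rule of three: zero failures in n trials bounds the failure rate at ~3/n
# with 95% confidence, so required evidence scales inversely with the rate.
def miles_needed(target_failure_rate_per_mile: float) -> float:
    """Failure-free miles needed for a 95% upper bound at the target rate."""
    return 3 / target_failure_rate_per_mile

# To demonstrate "six nines" per mile (at most 1 failure per 1M miles):
print(f"{miles_needed(1e-6):,.0f} failure-free miles")  # -> 3,000,000
```

Every extra nine multiplies the required evidence by ten, which is why cheap, fast simulation (the previous section) and statistical verification are two halves of the same argument.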

However, the real lesson from incidents like the Cruise accident isn't always about a new technical bug. Qasar Younis argued, “The Cruise example wasn't a technology failure. The real compounding issue there was just how did the company talk to the regulators and what was their kind of behavior, and I think that became more of the issue.” For founders in physical AI, this is a stark reminder: your reputation, your relationship with regulators, and your public communication strategy are as critical to your survival as your engineering prowess. A technical glitch is manageable; a loss of trust can be company-ending.

What to Do With This

For any high-stakes physical AI you’re building, conduct a "trust vulnerability" exercise. List potential technical failures, then for each, outline the worst-case communication scenario: how would the media react? How would regulators respond? Draft a preliminary communication plan now to address these scenarios, focusing on transparent, proactive engagement over defensive secrecy. This isn't just an engineering problem; it’s a strategic communications problem that must be solved before deployment.