For founders and builders in AI, the clock is not just ticking faster; it's practically on hyperdrive. Cat Wu, who heads product for Claude Code at Anthropic, laid out a reality that should shake any PM comfortable with six-month roadmaps: her team ships product features in days, sometimes even a single day. Lenny Rachitsky, host of Lenny's Podcast, observed, “I've never seen anything like the pace you folks at Anthropic are shipping at.” This isn't just a slight speedup; it's a wholesale redefinition of what a Product Manager does.
Key Takeaways
- Anthropic's Claude Code team regularly reduces product feature timelines from months to mere days, embodying an unprecedented shipping velocity in AI.
- The Product Manager role in AI has fundamentally changed, shifting from long-term roadmap creation to enabling immediate, rapid iteration and daily shipping.
- Many PMs have not yet grasped the necessary speed for AI-native development, focusing too much on planning when the core job is now removing barriers to launch.
- Anthropic embraces "research preview" releases to reduce commitment, allowing them to ship ideas quickly, gather feedback, and iterate without locking into long-term support.
- Cat Wu's 3 Steps for PMs to Accelerate AI Product Shipping offers a clear path to achieve this extreme velocity through goal setting, repeatable processes, and cross-functional frameworks.
Cat Wu's 3 Steps for PMs to Accelerate AI Product Shipping
Step 1: Set Clear Goals: LLMs are so general that this creates a lot of ambiguity about who we're building for, what problems we're trying to solve, and what the top use cases are. A great PM is able to say: our key user is professional developers; the main problem we want to solve for this feature is that there are too many permission prompts and people are feeling fatigue; and the use case is that we want professional developers at enterprises to safely get to zero permission prompts. That sets a pretty clear goal, because it rules out a lot of potential approaches for reducing permission prompts so that people can get a lot more done with one prompt.
Step 2: Establish Repeatable Shipping Processes: Figure out a repeatable process for getting these features shipped. For Claude Code, we actually ship almost all of our features in research preview. We clearly brand releases so that users know this is an early product: it's just an idea, something we're trying to get feedback on and iterate on, and it might not be supported forever. This reduces our commitment when shipping something, so we can get a feature out in a week or two.
Step 3: Create Cross-Functional Frameworks: Help create the framework for the team so they know when to pull in cross-functional partners and what those partners' expectations are. For example, we have a really tight process between engineering, marketing, and docs. When engineers have a feature they feel is ready and that we've dogfooded internally, they post it in our evergreen launch room. Then Sarah, who leads our docs, Alex, who leads PMM, and Tar and Lydia on DevRel jump in and can turn around the marketing announcement the very next day. Because we have this tight process, the friction for any engineer to ship something is low, and the PM is the role that should set this up.
When This Works (and When It Doesn't)
Cat Wu notes this framework helps PMs accelerate product shipping by providing clarity on goals, reducing commitment to early releases, and streamlining cross-functional coordination, enabling teams to move at the speed required for AI-native products. This approach thrives in environments where rapid feedback loops are possible and early user input is valued over perfect, polished releases. It's particularly effective for generative AI products where user behavior can be unpredictable and models evolve quickly.
However, this framework might falter in highly regulated industries or for products with critical safety implications, where a "research preview" might not be acceptable. Products requiring extensive hardware integration or rigorous certifications before public release will also struggle to adopt daily shipping. It also demands a strong internal culture of psychological safety, where teams feel comfortable releasing imperfect work knowing the goal is learning, not just perfection.
What to Do With This
If you're building an AI-powered tool for developers, say, an intelligent code completion assistant, use Cat Wu's framework this week. First, define a hyper-specific goal: enable JavaScript developers to complete complex React component props with 90% accuracy in under 2 seconds. Second, launch a micro-feature—like suggesting props for a single common React library—as a clearly labeled "Alpha Snippets" feature within days to a small, internal user group. Finally, set up a 24-hour turnaround: an engineer posts a new snippet idea in a dedicated Slack channel, your marketing/developer-relations person drafts a quick announcement for your Alpha users, and you collect immediate feedback—mirroring Anthropic's evergreen launch room process.
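If you want to wire up your own version of that launch-room loop, the channel post itself is easy to automate. Below is a minimal sketch, assuming a standard Slack incoming webhook; the message format, emoji, and the `format_launch_post` helper are illustrative inventions, not Anthropic's actual setup.

```python
import json
from urllib import request


def format_launch_post(feature: str, owner: str, status: str = "research preview") -> str:
    """Build the announcement text an engineer would drop in the launch channel.

    The wording here is a hypothetical template for a 24-hour-turnaround process.
    """
    return (
        f":rocket: *{feature}* is ready for {status}.\n"
        f"Owner: {owner}. Docs/PMM: please pick this up for a next-day announcement."
    )


def post_to_slack(webhook_url: str, text: str) -> None:
    """Send the message via a Slack incoming webhook (standard JSON text payload)."""
    payload = json.dumps({"text": text}).encode("utf-8")
    req = request.Request(
        webhook_url,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    request.urlopen(req)  # fire-and-forget; add retries/error handling in real use


# Build the announcement locally; posting requires a real webhook URL.
message = format_launch_post("Alpha Snippets: React prop suggestions", "@alice")
print(message)
```

The point of scripting even this small step is that it lowers the activation energy for an engineer to announce a shipped feature, which is exactly the friction the evergreen launch room is designed to remove.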