Product-Market Fit Is a Perishable Good — Here’s the Operating Manual
Lovable, the AI-powered app builder, hit $200M ARR in under a year — the fastest AI startup ever to cross $100M ARR, doing it in just eight months. You’d think a company growing that fast has PMF locked down. Elena Verna, their Head of Growth, calls PMF at Lovable a “perishable good” — something the team must re-earn every 90 days. The playbook of “find PMF, then scale” is a trap. This article is the operating manual for the treadmill that never stops.
If PMF is a treadmill, the first step is admitting where you actually stand. The honest answer is brutal.
Your Idea Is Already Dead — You Just Don’t Know What Kills It Yet
Ninety to ninety-five percent of new products fail. In my experience training 12,000+ product managers, the reason is almost always the same: the team was wrong about something fundamental — the segment, the job, the value prop, the channel, the economics — and they discovered it too late.
The goal of the early stage isn’t to ship a product. It’s to buy knowledge about what will kill your idea.
We don’t launch products — we purchase validated learning. A pivot isn’t some dramatic reinvention. It’s the surgical act of changing an assumption that turned out to be wrong.
Consider Notion. Their V1 was a programming tool “for non-coders.” Nobody wanted it. Founder Ivan Zhao fired all four employees, moved to Kyoto, and rebuilt from scratch. His critical insight: people don’t want to build software — they want to get stuff done. The wrong assumption was about the Job To Be Done and the core Value Proposition. Notion V2 launched as a modular workspace in 2018 and now sits at a $10B valuation.
If the goal of the early stage is to buy knowledge about what kills your idea, the next question is obvious: where do you start digging?
Start With the Job To Be Done
Jobs are the root cause of everything in your product. A product exists to perform a job for a customer, and profit appears only when you create real added value within a specific segment.
The most common mistake I see? Teams skip segment and job selection entirely and jump straight into building a solution. They fall in love with the technology, not the problem.
The correct sequence is: pick a segment, identify the highest-value job within it, model unit economics, validate whether you can deliver enough value, then figure out how to communicate that value.
Wispr Flow lived this. The founders spent years building a hardware voice device. After a brutally honest board meeting in mid-2024, they confronted the truth: the Job was never “own a cool voice gadget.” It was “type faster and more naturally.” They killed the hardware, pivoted to a macOS dictation app, and hit #1 on Product Hunt. Free-to-paid conversion reached roughly 20% — against an industry average of 3–4%. That became possible because they finally matched the right segment with the right Job.
Jobs give you direction. But in 2025, the thing that used to force you to validate before building — expensive code — disappeared overnight.
Free Code Flips Validation Upside Down
The old cycle was linear and expensive: research segments, identify jobs, model economics, and only then — when you had strong conviction — invest in building an MVP. Code was scarce, so you had to be right before you started building.
AI collapsed that constraint. Writing code is becoming practically free. And that changes the entire validation sequence.
The old model said: do research first, find the promising segments and Jobs To Be Done, then build one MVP for the best bet. The new model says: build MVPs for each promising segment in parallel. Not polished products — lightweight probes designed to reach one moment: the actual sale.
Why the sale? Because the most valuable feedback isn’t an interview quote or a survey response. It’s someone pulling out their wallet. At the point of sale, you validate the entire causal chain at once: does the segment exist? Is the job real? Does the value proposition land? Does your communication activate the buyer? Is there willingness to pay?
This is a radical acceleration of hypothesis testing. Instead of running one careful experiment per quarter, you can run a dozen cheap ones per month — each designed to reach the point of transaction as fast as possible. You’re not building products. You’re purchasing validated learning about what kills your idea.
The danger is obvious: teams mistake building velocity for learning velocity. They ship feature after feature, feeling productive, while core assumptions go untested. More code, less validation.
The right approach: use AI as an accelerator of artifacts — probes, MVPs, prototypes — not as a substitute for learning. Measure your progress in the number of validated assumptions and the quality of signal from real sales, not in features shipped or lines of code pushed.
The Hypothesis Factory
AI has turned the founder into a factory operator. The number of hypotheses you can test per unit of time has exploded. But throughput without validation is just noise — more experiments don’t mean more knowledge.
Your hypothesis factory needs an operating system with explicit kill criteria: for every bet, the observable result that would make you kill it, and a deadline by which that result must appear.
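To make this concrete, here is a minimal sketch of what one ledger entry in such an operating system might look like. Every field name, number, and the example bet are illustrative assumptions, not a prescribed schema: the point is that a kill criterion pairs a threshold with a clock.

```python
from dataclasses import dataclass
from datetime import date

# A minimal, hypothetical ledger entry for one bet in the hypothesis
# factory. Field names and values are illustrative assumptions.
@dataclass
class Bet:
    assumption: str        # what we believe, stated falsifiably
    kill_criterion: str    # the observation that would kill this bet
    deadline: date         # kill criteria need a clock, not just a metric
    signal: float          # measured result so far (e.g. paid conversion)
    threshold: float       # minimum signal that keeps the bet alive

    def verdict(self, today: date) -> str:
        if self.signal >= self.threshold:
            return "keep"
        if today >= self.deadline:
            return "kill"  # criterion met: record the learning, stop
        return "still testing"

bet = Bet(
    assumption="SMB ops managers will pay for automated reporting",
    kill_criterion="fewer than 3% of trials convert to paid by the deadline",
    deadline=date(2025, 9, 30),
    signal=0.012,
    threshold=0.03,
)
print(bet.verdict(date(2025, 10, 1)))  # prints "kill"
```

The verdict is deliberately ternary: a bet below threshold but before its deadline is “still testing,” not dead — the deadline is what stops a weak signal from being rationalized forever.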
Kill criteria protect you from bad bets. But even good bets expire. The hardest lesson in AI-era product work is that PMF itself has a shelf life.
The PMF Treadmill: Fit Is a Perpetual Engine
Jasper AI was the poster child of AI-first PMF — $120M ARR, $1.5B valuation. Then ChatGPT launched. Traffic dropped 30% in two months. Revenue crashed to $35M. Both co-founders were out. Three pivots in twelve months. Perfect product-market fit, vaporized — not because the product got worse, but because the market shifted beneath them.
This isn’t just an AI-category phenomenon.
In every market, PMF is becoming less durable. Models change, user expectations evolve, competitors spawn faster, and advantages that took years to build evaporate in quarters. The strategy of “found PMF, now optimize and scale” is a trap in any category.
The solution is two loops running simultaneously. Loop one: optimize your current fit — double down on what your foundational cohort loves, squeeze more value from existing assumptions that tested well. Loop two: run a continuous innovation loop — new bets, new segments, new jobs — even while Loop one is working. As Elena Verna puts it, what used to be long-horizon innovation is now a quarterly reality. The hypothesis factory isn’t a phase you grow out of. It’s a perpetual engine.
The treadmill never stops. But now you have the operating manual. Six moves, starting Monday.
The Operating Checklist
- Start with jobs and segments — this is the root cause of everything else.
- Test your riskiest assumptions. Prioritize by: (probability of error × consequences) / cost of test.
- Use AI to build artifacts fast — but measure progress in validated checks, not shipped features.
- Run the factory with an OS: assumption ledger, decision log, staged validation, explicit kill criteria.
- When pivoting, name exactly which assumption changed and why.
- Accept the treadmill. Schedule the next round of re-validation before you need it.
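The prioritization rule in the checklist — (probability of error × consequences) / cost of test — can be sketched in a few lines. The assumption names and scores below are invented for illustration; only the formula comes from the checklist.

```python
# A hedged sketch of the checklist's prioritization formula:
# priority = (probability of error x consequences) / cost of test.
# Higher score means test that assumption sooner.
def priority(p_error: float, consequence: float, cost: float) -> float:
    return (p_error * consequence) / cost

# Hypothetical assumptions scored on arbitrary consequence/cost scales.
assumptions = {
    "segment exists":      priority(0.5, 10, 1),  # cheap to test, fatal if wrong
    "channel CAC works":   priority(0.4, 8, 4),   # expensive to test
    "feature X is wanted": priority(0.2, 2, 1),   # cheap but low stakes
}
for name, score in sorted(assumptions.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.1f}")
```

The pattern the formula rewards is exactly the one the article argues for: cheap tests of fatal assumptions rise to the top, while expensive tests of survivable assumptions sink.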
PMF isn’t something you find. It’s something you keep finding. The only question is whether you have a system for it — or whether you’re just running.