Product in 2026
What changes when software is no longer the bottleneck
The cost of turning an idea into working software has collapsed over the last 18 months. A PM with good judgment and a Claude subscription can ship what used to take a six-engineer team six weeks. I know because I do it.
For twenty years, I've built product by being the connective tissue — the person who translates customer insight into roadmaps, who keeps engineering and design and leadership rowing in the same direction, who ships the thing. That work still matters. But the economics of it have changed in a way that most PMs are underestimating.
Here's what has actually changed: implementation is no longer the rate-limiter. Good ideas are.
For a long time, the bottleneck in shipping a product was "can we build this?" That's been answered. The new bottleneck is "should we build this?" — which is a taste problem, a judgment problem, and a customer-understanding problem. The PMs who win in the next five years are the ones who can:
- See the right problem before anyone else. Be pathologically curious about what people actually do, say, and avoid. Read the forums. Watch the shadow behaviors. Notice when your own behavior changes.
- Generate quality ideas faster than their peers. Quantity is a solved problem — LLMs can produce a hundred mediocre ideas in a minute. The scarce resource is the judgment to throw out 97 of them.
- Build the MVP themselves. Not because you should replace engineers — you shouldn't — but because the feedback loop between idea and working artifact is now tight enough that you can be the first customer of your own product. That's an unfair advantage.
- Drive the executive alignment to ship. The thing AI cannot do: walk into a room of VPs who each own a conflicting OKR, and leave with a signed-off roadmap. Human influence is now the scarce input. Always was, actually.
My last three years at Google were spent building the measurement infrastructure that tells our AI systems whether they're doing their job — the auto-rater platform that evaluates factuality at scale and feeds the reward signal for our grounding models. Twenty-plus teams depend on it, including Gemini and Search. What I learned building it is this: the hardest product management problem in AI isn't the model. The model is not the bottleneck. The bottleneck is knowing — with precision — whether the model is doing what you want. Evals are product. Grounding is product. Trust infrastructure is product. Most of the people calling themselves "AI PMs" have never shipped one of those.
I'm an AI-native product manager because I had to become one. The surface area of my job stretched to include model-level decisions, eval design, non-deterministic UX, and the politics of shipping a system whose output you cannot fully predict. Now I build my own tools with AI, run my life with it, and ship prototypes in an afternoon that would have taken a sprint two years ago.
The PMs who will get hired in 2026 are not the ones with the longest resume. They're the ones who can demonstrate — not claim — that they have extended how they work to meet this moment. The cost of proving this is a weekend. The cost of not proving it is everything.
If you're hiring, I'd love to talk.
— Adam
adamlewkovitz@gmail.com