The Bottleneck Moves Faster Than Your Model
As code gets commoditized, the work upstream comes into view, into question, and up for remodeling.
Three years ago, most people you asked would have told you the constraint in building software was writing code.
You could have the clearest product instinct in the room. If engineering was slow, none of it mattered. Shipping speed set the ceiling.
Some started to talk about how code itself had zero marginal value. In product circles especially, you could feel the conversation shift: the skill of writing code was separating from the value of knowing what to write. I subscribe to that view more today than ever.
We're living through the commoditization of code. Cursor. Claude Code. Lovable. v0. A team of three can now build what previously required a department. Feature cycles have collapsed from quarters to weeks, sometimes days.
The question is how high up that goes. How far does the commoditization reach? How does it affect the building of products, the business of SaaS, the livelihoods of the people who have built careers in software?
Here is what we are seeing. It exposes the layer above.
When code was slow, you could get away with deciding loosely.
Product management could be documented lightly, held mostly in one person's head. The trail of any decision diluted invariably across the N-month buffer between that decision and its consequences. You could be wrong about a priority in February and have enough time to present history in a different light by May.
That buffer is gone. And what the buffer was hiding is uncomfortable to look at directly: the deciding layer for most companies was never really tested. It was never under such pressure.
Those N-month buffers did foster an era of collection. Research sits in Notion pages that get read once and forgotten. Call recordings pile up in Gong libraries with no pipeline to product decisions. Loud sales requests drown out the needs of the dozen clients silently churning. Stakeholder feedback arrives through six different channels and gets synthesized by whoever has the bandwidth and enough context, which increasingly means a committee, or nobody.
When shipping was slow, the synthesis delay was invisible against a three-month dev cycle.
When shipping is fast, it becomes the constraint. Teams can build in days. The evidence for what to build still takes weeks to collect, process, and trust.
The result is a failure mode that wasn't as obvious at the old velocity: correct code, wrong product.
Faster generation with the same decision quality makes worse products, because speed creates the illusion of progress.
In December, a product team showed us their Gong library.
1,826 calls from Q4. Each one recorded, logged, tagged by deal stage and call type.
I asked how many had been reviewed for product insights.
"Maybe 20. The ones that got shared in Slack"
Twenty out of 1,826.
The other 1.8k were sitting there. Customer pain points in the customer's own words. Competitor mentions with context. Feature requests tied to specific workflows. Signals that could have reordered the roadmap. Nobody had time to get to them. Not because the team didn't care, but because manually synthesizing 1.8k calls is not possible alongside everything else a PM does.
This is not exceptional. It is how most SaaS companies above a certain size actually operate. The call volume outpaces the synthesis capacity. The signals exist. The pipeline to product decisions does not.
What the PM decides from, in practice, is the 20 calls that happened to surface in Slack, plus whatever was raised in the last planning meeting. That is the mental model driving the roadmap. A sample of the signal, selected not for importance but for volume, for whoever spoke up, for what happened to be top of mind.
Andrew Ng has described the PM's mental model as the actual instrument of product decisions. He's right. The model synthesizes. The gut expresses. The survey doesn't tell you what to build; it updates the model, and the model tells you what to build. Good PMs have always operated this way.
What he didn't say, but what follows directly from it: the model is only as accurate as what built it.
Three Failure Modes of Fast Teams
Every product team that has accelerated development with AI tools eventually hits one or more of these. They are not new problems, but structural weaknesses that were always present, hidden by the buffer of slow shipping. No buffer, no mask.
Signal decay. Research, calls, support tickets sit in systems that don't connect to product decisions. The insight that would reorder the roadmap is in a Gong recording nobody processed. Not because people don't care. Because manual synthesis doesn't scale to the volume a real SaaS org generates.
Evidence-free tickets. Work items get created from memory, from political pressure, from "we talked about this in planning." The signal that originally justified the decision is not attached to the work item. When engineering asks "why are we building this?" the answer is a summary from someone's head. When the PM who held that context leaves, the answer is gone.
Context fragmentation. The PM knows why this matters. The engineer doesn't. The designer has a different understanding. Sales is still describing the old version. Every handoff loses fidelity, and the original customer context gets less legible with each translation.
Karpathy described LLMs as having anterograde amnesia: they can't build long-running knowledge after training ends; they start each session from working memory alone. Product organizations have the same condition, except the context window is whatever the PM can hold in working memory during a meeting.
These failure modes existed before AI-accelerated development. The commoditization of code is what made them visible.
What the Fix Looks Like
The answer is not better summarization.
More summaries of signals that don't connect to decisions only create more content that also doesn't connect to decisions. The repo gets fuller. The decisions stay ungrounded. The mental model gets fed from a cleaner-looking pile of context that still misses the same 95% of what customers actually said.
The answer is structure. Not project management structure. Infrastructure for the deciding layer.
A system where signals flow in continuously from wherever they originate (calls, tickets, research, docs) and get classified. Not by a model guessing in the dark, but by a model proposing and a human confirming where it matters.
Signals cluster into insights. Insights map to opportunities. Opportunities connect to the ideas being considered and the initiatives being built.
Every decision carries traceable evidence. Not a note someone wrote. A live connection to what customers said, when, and what it meant.
When something ships, the graph updates.
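To make the shape concrete, here is a minimal sketch in Python of what such a graph could look like. Every name in it (Signal, Insight, Opportunity, Initiative, and their fields) is illustrative, an assumption for the sake of the example, not Zentrik's actual schema.

```python
from dataclasses import dataclass, field

# Illustrative sketch of a deciding-layer graph. Names and fields are
# assumptions for this example, not any product's actual schema.

@dataclass
class Signal:
    id: str
    source: str       # e.g. "gong", "zendesk", "notion"
    account_id: str
    quote: str        # the customer's own words
    captured_at: str  # ISO date

@dataclass
class Insight:
    id: str
    statement: str  # the synthesized friction or need
    signal_ids: list[str] = field(default_factory=list)  # the evidence trail

@dataclass
class Opportunity:
    id: str
    title: str
    insight_ids: list[str] = field(default_factory=list)

@dataclass
class Initiative:
    id: str
    title: str
    opportunity_ids: list[str] = field(default_factory=list)
    shipped: bool = False  # flipped when it ships, so the graph updates

def evidence_chain(initiative, opportunities, insights, signals):
    """Walk back from a work item to the raw customer quotes, so
    'why are we building this?' has a traceable answer."""
    quotes = []
    for opp_id in initiative.opportunity_ids:
        for ins_id in opportunities[opp_id].insight_ids:
            for sig_id in insights[ins_id].signal_ids:
                quotes.append(signals[sig_id].quote)
    return quotes
```

The point is not the dataclasses. It is that the edges are first-class. Delete the links and you are back to notes.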
[Diagram: how the deciding system works. Evidence enters from real operating systems, becomes durable product knowledge, and stays connected all the way into execution. AI proposes; humans review where confidence is low or stakes are high.]
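One way to implement that gate, again as a sketch: route each proposed classification on the model's confidence and the stakes of the account. The thresholds, field names, and the idea of routing on ARR below are invented for illustration.

```python
from dataclasses import dataclass

# Sketch of the "AI proposes, humans confirm" gate. Thresholds, field
# names, and routing on ARR are assumptions for illustration only.

CONFIDENCE_FLOOR = 0.85    # below this, a human always reviews
HIGH_STAKES_ARR = 100_000  # accounts above this ARR always get a look

@dataclass
class Proposal:
    signal_id: str
    label: str         # e.g. "pricing friction", "export request"
    confidence: float  # model's self-reported confidence, 0..1
    account_arr: int

def route(p: Proposal) -> str:
    """Auto-accept only when the model is confident and the stakes are low."""
    if p.confidence < CONFIDENCE_FLOOR or p.account_arr >= HIGH_STAKES_ARR:
        return "human_review"
    return "auto_accept"

# A shaky classification on a large account goes to a person:
print(route(Proposal("sig-4811", "pricing friction", 0.62, 250_000)))
# -> human_review
```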
What this changes at enterprise scale: a call from November, a ticket from January, a note from February. Three different customers describing the same friction in three different languages across three different systems. Without structure, those three things are invisible to each other, processed by different people, generating at most a summary that gets read once. With structure, they cluster. The insight surfaces. The PM reviews it, confirms it, and the confidence it carries is not "I think." It is: here are the signals, here is the ARR attached to those accounts, here is the chain from what customers said to why we are building this.
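Mechanically, that clustering step can be as simple as nearest-neighbor matching over embeddings. A sketch, assuming some sentence-embedding function embed() you supply; the threshold and the greedy strategy are illustrative, not a claim about how any production system does it.

```python
import math

# Sketch: near-duplicate signals from different systems cluster together
# even when the vocabulary differs. embed() stands in for any
# sentence-embedding model; the 0.8 threshold is illustrative.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def cluster(signals, embed, threshold=0.8):
    """Greedy single pass: each signal joins the first cluster whose seed
    vector it is close enough to, otherwise it starts a new cluster."""
    clusters = []  # each: {"seed": vector, "members": [signal, ...]}
    for s in signals:
        v = embed(s["text"])
        for c in clusters:
            if cosine(v, c["seed"]) >= threshold:
                c["members"].append(s)
                break
        else:
            clusters.append({"seed": v, "members": [s]})
    return clusters

def cluster_arr(c):
    """Roll up ARR per unique account, so an insight carries weight
    in dollars, not just a count of mentions."""
    return sum({m["account"]: m["arr"] for m in c["members"]}.values())
```

Once the November call, the January ticket, and the February note land in the same cluster, cluster_arr is what turns "three people mentioned this" into "this friction sits on this much ARR."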
At the scale some organizations run at, tens of thousands of signals per month flowing across Gong, Zendesk, and Notion, the discovery problem is not "how do we read more?" It is: when the gem surfaces in call 4,000, do we have the structure to see it?
Most organizations don't. Not due to a lack of judgment. The pipeline between signal and decision was just designed for a different velocity.
Karpathy said what LLMs are missing is a scratchpad. A structure that consolidates what was learned into explicit, referenced, persistent form so the next decision doesn't start from zero. He was talking about models. The metaphor holds for product organizations.
The mental model the PM carries is the scratchpad. Right now, for most teams, it gets built from a partial sample of customer signal and reset each quarter when priorities shift. The decisions that come out of it are often right because good PMs compensate with instinct. But instinct built on partial evidence is still a guess, dressed in confidence.
The organization that builds the infrastructure for structured deciding first has an advantage that compounds. Not because it will decide faster, though it will. Because it will decide from the full picture, consistently, in a way that survives team changes and scales to the volume of signal a growing company actually produces.
We have spent the last year building toward this at Zentrik. We shipped the first version of the discovery graph in January. Since then, we have been scaling it with enterprise teams processing tens of thousands of signals per month, learning where the hard problems actually live versus where we were overthinking.
There is a lot more to share. The next few weeks will get specific.
For now: if you are a PM who looked at your process recently and thought "this doesn't work at the speed we're shipping anymore", you are not wrong. It doesn't. That is not a personal failure. That is a structural one.
Structural problems have structural solutions.
More soon.
Jab
Next in the series
We will keep unpacking what the deciding layer needs now that software ships faster.
Subscribe to get the next essay when it lands.