Vibe Coding Is Just Deferred Collapse

Harvard Gazette ran a piece this month framing vibe coding as a window into the AI future. Vibe coding — writing code by prompting an AI, accepting what it gives, iterating by feel — was positioned as a literacy shift, similar to how the web changed publishing.

The analogy is interesting. It's wrong in a specific way.

Publishing and software have different failure modes

Publishing without editorial understanding degrades content, but content degrades gracefully — a confusing post is just confusing. Software degrades differently. A codebase where nobody understands the control flow eventually fails under conditions nobody anticipated, in ways nobody can diagnose.

The failure mode of bad content is a bad post. The failure mode of bad code is a production incident at 2 AM where nobody can explain what the system is doing.

What building with AI actually looks like

I've built Ordia using AI tools daily for months. My practice is specific: write the skeleton manually — the interfaces, the data shapes, the structural decisions about where the system divides — then let AI fill in the interior of patterns I've already modeled. Interior, not skeleton.

The reason isn't preference or principle. It's that the skeleton is where understanding lives. The interfaces are a compression of everything I know about the problem. If I don't write them, I don't have that knowledge. I have code that works and no mental model. Those are different things.
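A minimal sketch of what "skeleton versus interior" can mean in practice. All names here are hypothetical, invented for illustration — nothing below is from Ordia. The data shape and the interface are the hand-written skeleton; the final function is the kind of interior whose behavior is fully pinned down by the skeleton, which is what makes it safe to delegate.

```python
from dataclasses import dataclass
from typing import Protocol

# Skeleton, written by hand: the data shape...
@dataclass(frozen=True)
class Invoice:
    id: str
    amount_cents: int
    paid: bool

# ...and the interface that fixes where the system divides.
class InvoiceStore(Protocol):
    def unpaid(self) -> list[Invoice]: ...
    def mark_paid(self, invoice_id: str) -> None: ...

# Interior: a body whose correct behavior is implied by the
# skeleton above -- the part one might let an AI fill in.
def settle_small_invoices(store: InvoiceStore, limit_cents: int) -> int:
    """Mark every unpaid invoice at or under the limit as paid;
    return how many were settled."""
    settled = 0
    for invoice in store.unpaid():
        if invoice.amount_cents <= limit_cents:
            store.mark_paid(invoice.id)
            settled += 1
    return settled
```

The point of the split: everything the interior is allowed to do is visible in the `Protocol`, so reviewing the generated body means checking it against a contract you already understand.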

Vibe coding produces working code without a model. The distinction matters at 2 AM when the system breaks in production and nobody can explain what it's doing.

The calibration problem

METR's controlled study of senior developers found they were 19% slower on complex, novel tasks when using AI. They had predicted they'd be 24% faster. The gap between expectation and outcome is a calibration failure — they were calibrated on AI's performance on simpler tasks and carried that expectation over to the hard ones.

Vibe coding practitioners are calibrated on the same incomplete data. The AI is reliable at the center of the distribution: well-specified problems, established patterns, clear interfaces. It is unreliable at the edges — novel situations, unusual failure modes, design decisions that require tradeoff judgment rather than pattern completion.

The edge is where debugging happens. Debugging novel failures in code you don't understand is not faster with AI assistance. It's slower, because the AI has no more context than you do about why this specific combination of inputs broke the system in this specific way.

What the "contribution percentage" signal misses

Companies are now tracking "agent contribution percentage" — the share of code coming from agents rather than human engineers. Some are at 65–80%. The metric treats code production as the bottleneck.

It hasn't been the real bottleneck for a long time. Understanding is the bottleneck. Maintenance is the bottleneck. Reasoning about failure modes is the bottleneck.

A team at 80% agent contribution where the 20% human engineering layer deeply understands what was produced is fine. A team at 80% agent contribution where the humans are also vibe coding — accepting outputs without structural understanding — is accumulating debt at scale.

The metric captures production volume. It captures nothing about understanding. Those two things are not correlated in the way the metric implies.

Where the debt shows up

I started programming at 12. The first years were mostly confusion — building things that didn't work, debugging outputs I didn't understand, reconstructing ideas from scratch when they collapsed. That period built the mental model I work from now.

AI would have removed much of that friction. The friction was the point.

The capacity to reason about novel problems develops through working on novel problems, manually, struggling through the parts that don't resolve cleanly. Vibe coding optimizes away exactly that friction. This doesn't surface immediately — the code works, the demo runs, the features ship.

It surfaces when the system encounters a situation the AI pattern didn't anticipate, and the human is supposed to reason about it. That's when the absence of a mental model becomes expensive.

The insight Harvard is reaching for

The piece frames vibe coding as a literacy shift — the same way the web democratized publishing. That's partly right. The cost of producing working software has dropped, and that will expand who can build things.

But the comparison breaks at the failure mode. A confusing blog post is just confusing. Software that nobody understands fails in production, accumulates technical debt that can't be paid down without understanding the codebase, and eventually requires a rewrite by people who are starting from scratch.

The future vibe coding points toward isn't one where everyone builds software without understanding it. It's one where the developers who built the understanding — who wrote the skeleton, modeled the interfaces, reasoned about the failure modes — are the ones who can maintain everything else that was generated. That's not the elimination of engineering. It's a bifurcation.

The thing worth preserving

Working code is not the same as safe code. The AI gives you working code. You accept it. It passes tests. It ships.

Whether it's correct under conditions you haven't tested yet — whether the interfaces are designed in a way that anticipates the changes you'll need to make next quarter — that's not something the AI knows and not something you can evaluate without a mental model.
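The gap between "passes its tests" and "correct under untested conditions" can be shown in three lines. This is a deliberately contrived illustration, not code from any real system: the function satisfies every test anyone happened to write, and still fails the first time production hands it an input shape nobody anticipated.

```python
def average_latency_ms(samples: list[float]) -> float:
    # Passes every test that feeds it a non-empty list...
    # ...and raises ZeroDivisionError the first time a quiet
    # reporting window produces no samples at all.
    return sum(samples) / len(samples)
```

Whether the empty-window case can ever occur is exactly the kind of question the AI can't answer and you can't evaluate without a mental model of where the samples come from.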

Vibe coding removes the work of building that model. What it doesn't remove is the need for it.

The debt comes due. The question is just who's holding it when it does.