When Reviewing Is All That's Left, You're Not an Engineer Anymore
Some teams report that 65–80% of new code now comes from autonomous agents, and engineers spend more time reviewing AI output than writing it. Skill atrophy is already visible: core programming comprehension degrades on teams that have handed off most code generation to AI.
The framing in most discussions is that this is a transition period — that engineers will adapt, that the role will evolve toward "AI wrangling" and review. The concern I have is not that the role is changing. It's what specifically gets lost when review becomes the primary activity.
What Writing Code Actually Does
Writing code is not just output production. It's a process of building an internal model of how a system behaves.
The process of typing out a function — deciding what it takes, what it returns, how it handles edge cases — forces engagement with the problem at a level of detail that reading doesn't replicate. When you write code, you encounter the places where your model of the system is wrong, because the code doesn't work. That friction is information. It's the signal that corrects your model.
When you review AI-generated code, the friction is different. You're not encountering where your model is wrong — you're trying to map an unfamiliar model back onto your expectations. This is a harder cognitive task for some purposes and an easier one for others. But it doesn't build the deep familiarity with the system that comes from writing.
My practice on this is deliberate: write the skeleton and interfaces manually. Let AI fill the interior of understood patterns. The skeleton — the function signatures, the type contracts, the module structure — is where the design happens. Writing it forces me to be specific about what each piece does, what depends on what, where the state lives. That specificity is the understanding. Generating the skeleton would mean inheriting the AI's model of the problem, which is often plausible and wrong for my specific context.
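A minimal sketch of what that split can look like in practice. The signatures, the types, and the state-ownership decision are the hand-written skeleton; the body of `check` is the kind of well-understood interior that could be delegated. All names here (`RateLimiter`, `Decision`) are hypothetical illustrations, not from any particular codebase.

```python
from dataclasses import dataclass

# Hand-written skeleton: the signatures and types ARE the design.
# Deciding that Decision carries retry_after_s, and that all state
# lives inside RateLimiter, is the part that builds understanding.

@dataclass(frozen=True)
class Decision:
    allowed: bool
    retry_after_s: float  # 0.0 when the request is allowed

class RateLimiter:
    """Token-bucket limiter. All mutable state lives here, nowhere else."""

    def __init__(self, capacity: int, refill_per_s: float) -> None:
        self.capacity = capacity
        self.refill_per_s = refill_per_s
        self._tokens = float(capacity)
        self._last = 0.0  # monotonic timestamp of the last refill

    def check(self, now: float) -> Decision:
        # Interior: a well-understood pattern. Once the contract above
        # is fixed, this body is the safe place to let AI fill in.
        elapsed = max(0.0, now - self._last)
        self._tokens = min(float(self.capacity),
                           self._tokens + elapsed * self.refill_per_s)
        self._last = now
        if self._tokens >= 1.0:
            self._tokens -= 1.0
            return Decision(allowed=True, retry_after_s=0.0)
        deficit = 1.0 - self._tokens
        return Decision(allowed=False, retry_after_s=deficit / self.refill_per_s)
```

The point of the sketch is the boundary: everything above `check`'s body is a commitment you made while holding the system model; everything inside it is replaceable against that commitment.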
The Atrophy That's Happening
Addy Osmani's April 2026 analysis frames the concern in terms of a generational transition: the engineers who grew up writing code will have the system models. The ones who grew up reviewing AI output may not.
The more immediate concern is engineers currently in the workforce who are shifting toward review-only workflows. The system model built from years of writing degrades when you stop using it. Not quickly — but the ability to hold a complex system model while designing, to spot the subtly wrong abstraction before it's implemented, to reason about failure modes in code you haven't written yet — these are skills that require practice. Review practice doesn't maintain them.
The skills atrophy story isn't "you can't write code anymore." It's "you can't tell when code is structurally wrong." That's the capability that erodes. And structural wrongness is exactly what AI generates most confidently — locally plausible, globally incoherent. The reviewer without the system model can't catch it.
What to Protect
The solution is not to avoid AI tools. The solution is to be explicit about which parts of the work to protect from them.
Design is the first thing to protect. Not as a process step — as a practice. Write the interfaces. Define the contracts. Specify the failure modes. Do that in code, not in a prompt. The interface is not a detail to be generated. It's the commitment the implementation has to satisfy. That commitment has to come from someone with an accurate model of the system.
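"Specify the failure modes in code" can be as small as naming them in the interface before any implementation exists. A hedged sketch, with all names (`KeyValueStore`, `StaleRead`, `InMemoryStore`) invented for illustration:

```python
from typing import Protocol

class StaleRead(Exception):
    """The store cannot serve a read at least as new as min_version."""

class KeyValueStore(Protocol):
    # Hand-written contract: the failure modes (KeyError, StaleRead)
    # are named in the interface, not discovered in the implementation.
    def put(self, key: str, value: bytes) -> int: ...
    def get(self, key: str, min_version: int = 0) -> bytes: ...

class InMemoryStore:
    """Minimal implementation that satisfies the contract above."""

    def __init__(self) -> None:
        self._data: dict[str, tuple[int, bytes]] = {}
        self._version = 0

    def put(self, key: str, value: bytes) -> int:
        self._version += 1
        self._data[key] = (self._version, value)
        return self._version  # version numbers are monotonic

    def get(self, key: str, min_version: int = 0) -> bytes:
        version, value = self._data[key]  # KeyError: never written
        if version < min_version:
            raise StaleRead(f"{key}: have v{version}, need v{min_version}")
        return value
```

Writing the `Protocol` and the exception first is the design step; the implementation, however it gets produced, is then answerable to something specific.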
Implementation is the second thing to protect, selectively. Some implementation should be written by hand — not because AI can't generate it, but because the act of writing it is what maintains the model. Deliberately maintaining a high ratio of hand-written code is a structural judgment about where understanding has to be built, not a preference or a habit.
Review is the third thing. Review against the design you wrote. Not against a general standard of "does this look right," but against the specific contract the interface defines. If the implementation satisfies the contract, it's correct. If it doesn't, the failure is locatable.
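One way to make "review against the contract" concrete is to write the contract as executable checks, so that a generated implementation either satisfies it or fails at a locatable point. A small hypothetical example (`normalize_email` and its properties are invented for illustration):

```python
# Contract written during design; any implementation, hand-written
# or AI-generated, is reviewed against these properties rather than
# against a vague sense of "does this look right."

def normalize_email(addr: str) -> str:
    # Suppose this body was AI-generated against the contract below.
    local, _, domain = addr.strip().partition("@")
    return f"{local}@{domain.lower()}"

def check_contract(fn) -> None:
    """The review step: assert the properties the design committed to."""
    # Idempotence: normalizing an already-normalized address is a no-op.
    for raw in ["  A@Example.COM ", "b@b.io"]:
        once = fn(raw)
        assert fn(once) == once, f"not idempotent on {raw!r}"
    # Domain case-insensitivity: if this trips, the failure is locatable.
    assert fn("x@FOO.COM") == fn("x@foo.com")

check_contract(normalize_email)
```

The check doesn't replace human review; it anchors it. The reviewer's attention goes to whether the contract itself is right, which is exactly the judgment that requires a maintained system model.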
That workflow uses AI where it helps most — generating implementations of well-defined patterns — while protecting the parts of the work that require maintained system models.
The engineers who remain technically strong in five years are the ones who stayed in the habit of designing. Not as a gatekeeping ritual, but as a practice of building accurate models of systems they're responsible for. The ones who drifted entirely into review will find that when the AI produces a non-obvious structural error, they have nothing to catch it with.
