Prompting Is Not Learning
90% of developers now regularly use AI tools at work. The adoption curve is steep. The assumption built into adoption metrics is that more tool use translates into more productivity.
For experienced engineers using AI on understood tasks: true. For developers still building the foundations of their mental models: not straightforwardly true.
Prompting an AI to solve a problem is not the same cognitive act as solving the problem. One produces output. The other builds a model.
What building a model means
I started programming at 12. The first several years were mostly confusion and debugging — building things that didn't work, reconstructing assumptions that turned out to be wrong, writing the same kind of function ten times before I had an intuition for how to write it correctly the first time.
None of that was pleasant. All of it built the internal model I work from now — the one that lets me look at an unfamiliar codebase and identify where the structural problems are, or see that a proposed architecture will create maintenance problems before a line of code is written.
AI would have removed much of that friction. The friction was the point.
The model doesn't form through prompting. It forms through struggle — making mistakes, diagnosing why they're mistakes, rebuilding from a clearer understanding. The AI removes the mistake-and-diagnosis cycle by providing the answer. You get the output without the process that produces understanding.
The METR study as evidence
In METR's controlled experiment, senior developers were 19% slower on complex, novel tasks when using AI versus working without it. They had predicted they'd be 24% faster.
The expectation gap is a calibration error. They were calibrated on AI's performance on easy tasks — clear specs, existing patterns, established solutions — and applied that calibration to hard tasks where it doesn't hold.
What separates easy from hard is whether the problem is inside or outside your existing model. Easy: you have a model, the AI helps you implement it faster. Hard: you need to build the model first, and prompting an AI doesn't build it.
Junior developers who use AI for learning are practicing on easy tasks — tasks they could have worked through manually and built the model from. They're calibrating their expectations on AI's performance on easy tasks. When they face hard problems later, they'll have the same miscalibration the senior developers showed, but with a weaker underlying model to fall back on.
The skill gap this creates
The concrete outcome is predictable: a developer who used AI for the first three years of their career has generated a lot of code and debugged relatively little. The debugging is where the model forms. They'll be fast on well-specified tasks and slow on novel ones.
This shows up years later, at exactly the wrong time. The hard tasks — the design decisions, the failure mode analysis, the novel problems at the edge of existing patterns — are where senior engineering value is generated. Junior developers who skipped the productive struggle phase arrive at seniority without the capacity that phase was supposed to build.
It's not that they can't do the work. It's that they'll be slower at it than their experience level implies, because experience accumulated through prompting is different from experience accumulated through struggle.
What I do differently
My practice with AI is specific. I write the skeleton manually: the interfaces, the data shapes, the architectural decisions about where the system divides. Then AI fills in the interior of patterns I've already modeled.
The interior filling is fast. The skeleton writing is slow — and that's intentional. The skeleton is where understanding lives. If I let AI write it, I'd have a system that works and no model of why it works that way. The next time I need to modify it, I'd be starting from the AI's reasoning rather than my own.
The deliberate inefficiency in writing the skeleton is the point. It's the work that builds the model. Optimizing it away optimizes away the learning.
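As a minimal sketch of what this split looks like in practice (the names here are hypothetical, invented for illustration, not from any real codebase): the skeleton is the data shapes and interfaces, written by hand; the interior is a pattern-filling implementation that could be delegated.

```python
from dataclasses import dataclass
from typing import Protocol

# Skeleton — written manually. These lines encode the decisions
# I want to own: what a record is, and where the boundary sits
# between data and the logic that consumes it.

@dataclass(frozen=True)
class Event:
    user_id: str
    duration_ms: int

class Scorer(Protocol):
    def score(self, events: list[Event]) -> float: ...

# Interior — a pattern already in my model (aggregate and average).
# This is the part I'd hand to the AI to fill in.
class MeanDurationScorer:
    def score(self, events: list[Event]) -> float:
        if not events:
            return 0.0
        return sum(e.duration_ms for e in events) / len(events)
```

The important property is that any `Scorer` can be swapped in later without touching the skeleton: the interface and the data shape are the part of the system I can reason about without re-reading the implementation.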
The case against artificial difficulty
None of this argues for removing AI assistance from engineering education. It argues for distinguishing which parts of the work are productive struggle versus which are genuinely inefficient.
Productive struggle: working on a design problem without immediately asking the AI for the answer. Debugging an unexpected behavior before looking for hints. Implementing a data structure you understand conceptually, without looking up examples.
Genuinely inefficient: manually formatting code when a linter exists. Rewriting boilerplate from scratch when a well-understood pattern exists. Looking up API syntax you've looked up 50 times.
AI is good at eliminating both kinds of friction, and treating them the same is where the learning deficit comes from. Junior developers who use AI should use it to eliminate the genuinely inefficient parts and preserve the productive struggle parts.
The developers who figure out this distinction early will have both the speed gains from AI and the mental model that comes from working through hard things manually.
The ones who don't will be fast on easy tasks and slow on everything else.
In five years, the hard tasks will still be hard.
