Expressing Intent Is Not Engineering

Capgemini's TechnoVision 2026 report calls the shift from "writing code" to "expressing intent" one of the defining technology trends of the year. The idea is that developers now describe what they want and AI produces the implementation. The role transforms from craftsperson to director.

This framing is popular because it sounds like an upgrade. You're no longer doing the manual work — you're operating at a higher level of abstraction.

It's not an upgrade. It's a rebranding of a capability gap.

What "expressing intent" hides

When you write code, you make dozens of decisions that aren't in the requirement. Where does this state live? What happens when this call fails? What's the performance profile at ten times the load? Does this approach accumulate technical debt at the edge cases, or does it distribute risk evenly? How will someone debug this in two years?

These are not implementation details. They are the engineering. They are the part where judgment operates.

When you express intent to an AI and accept what it produces, these decisions still get made — by the AI, based on the training distribution of codebases it has seen. Not by your understanding of this system's specific constraints. Not by your model of where future failures will appear. By statistical pattern completion.

"Expressing intent" makes this invisible. The decisions still happen. You just don't make them.

The value proposition of an engineer

I've thought about this for a while, and my working definition is this: the value of a software engineer is cross-temporal tradeoff judgment. The ability to evaluate a decision not just by whether it works now, but by what it costs later — in debugging difficulty, in constraint accumulation, in technical debt that compounds over time.

This is not a skill you can express to an AI. It lives in the engineer's model of the specific system, its failure history, its growth trajectory, and the team's capacity to maintain it. It is not formalizable as a prompt.

When I built Ordia, one of the core design decisions was to implement ticket linking and blocker detection with deterministic logic rather than generative AI. Not because generative AI couldn't produce results — it could. Because I needed the inference cost to stay low while the signal accuracy stayed high. That tradeoff required understanding what accuracy actually meant in the context of my users, what cost discipline looks like over three years rather than three months, and what failure modes I could tolerate versus which ones would destroy trust.

No prompt captures that. It required building something real and having a model of why each choice mattered.
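To make the shape of that choice concrete, here is a minimal sketch of what deterministic blocker detection can look like. Everything here is illustrative, not Ordia's actual code: the pattern list, the `Ticket` shape, and the ID format are all assumptions. The point is the property, not the implementation: zero inference cost, and every result traceable to the exact phrase that produced it.

```python
import re
from dataclasses import dataclass

@dataclass
class Ticket:
    id: str
    title: str
    description: str

# Phrases that signal a blocking relationship. Illustrative only;
# a real system would tune these against its own ticket corpus.
BLOCKER_PATTERNS = [
    re.compile(r"\bblocked by\s+([A-Z]+-\d+)", re.IGNORECASE),
    re.compile(r"\bdepends on\s+([A-Z]+-\d+)", re.IGNORECASE),
    re.compile(r"\bwaiting (?:on|for)\s+([A-Z]+-\d+)", re.IGNORECASE),
]

def detect_blockers(ticket: Ticket, known_ids: set[str]) -> list[str]:
    """Return IDs of tickets this ticket explicitly declares as blockers.

    Deterministic: a match requires both an explicit blocking phrase and
    a reference to a ticket ID that actually exists. No false "vibes"
    matches, no per-call model cost.
    """
    text = f"{ticket.title}\n{ticket.description}"
    found: list[str] = []
    for pattern in BLOCKER_PATTERNS:
        for match in pattern.finditer(text):
            ref = match.group(1).upper()
            if ref in known_ids and ref != ticket.id and ref not in found:
                found.append(ref)
    return found

t = Ticket("ORD-12", "Fix login", "Blocked by ORD-7; depends on ORD-9.")
print(detect_blockers(t, {"ORD-7", "ORD-9", "ORD-12"}))  # ['ORD-7', 'ORD-9']
```

A sketch like this misses phrasings a generative model would catch. Accepting that miss rate in exchange for predictable cost and explainable results is exactly the kind of cross-temporal tradeoff the paragraph above describes.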

Intent without a consequence model is just faster speculation

The problem with "expressing intent" as a paradigm is that intent is cheap. Every non-engineer has intent. The product manager has intent. The CEO has intent. The customer has intent. Sometimes they are all in conflict, and the engineer has to navigate that in the design.

What separates an engineer from a person with intent is a consequence model. When I say I want to build X, I also carry a model of what X costs, where it will fail, what it prevents us from doing later, and what assumptions it encodes about future usage. That model is the work.

AI tools that receive intent and generate implementation are automating the token generation step. They're not automating the consequence modeling step. That step is not downstream of writing code — it's upstream. It's the thinking that determines whether the code you write (or generate) is the right code.

So "expressing intent" as the engineer's new role is incoherent. The engineer's role was never primarily writing tokens. It was thinking before writing tokens. If AI writes the tokens after you express intent, the thinking still has to happen somewhere. If neither the engineer nor the AI is doing it, it's not happening at all, and the output is faster speculation.

What this looks like in practice

I use AI tools heavily in daily work. The pattern I've settled on: write the skeleton and interfaces by hand, let AI fill the interiors of understood patterns. The skeleton is where the architecture lives. The interfaces are where the decisions about system boundaries get made. Those parts require that I understand what I'm building.

The interiors — the implementation of a method whose signature I've defined, the body of a function whose inputs and outputs I've specified — these I can delegate. Because I understand the context. Because the decisions that matter have already been made. Because if the AI produces something subtly wrong, I have enough context to catch it.
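As a sketch of that division of labor, consider a rate limiter. Everything below is hypothetical, assumed for illustration. The `Protocol` is the hand-written part: it pins down the decisions that matter (per-key limits, the clock injected as a parameter so tests don't depend on wall time). The class body underneath is the kind of interior that can be delegated once that contract exists.

```python
from typing import Protocol

class RateLimiter(Protocol):
    """Hand-written boundary. The signature encodes the real decisions:
    limits are per-key, and the caller supplies the clock."""
    def allow(self, key: str, now: float) -> bool: ...

class SlidingWindowLimiter:
    """Interior implementation: a body one could delegate to AI, because
    the interface above has already fixed the contract to check it against."""
    def __init__(self, limit: int, window_seconds: float) -> None:
        self.limit = limit
        self.window = window_seconds
        self._hits: dict[str, list[float]] = {}

    def allow(self, key: str, now: float) -> bool:
        hits = self._hits.setdefault(key, [])
        # Drop timestamps that have aged out of the window, then check capacity.
        cutoff = now - self.window
        hits[:] = [t for t in hits if t > cutoff]
        if len(hits) >= self.limit:
            return False
        hits.append(now)
        return True

limiter = SlidingWindowLimiter(limit=2, window_seconds=10.0)
print(limiter.allow("user-a", 0.0))   # True
print(limiter.allow("user-a", 1.0))   # True
print(limiter.allow("user-a", 2.0))   # False, window full
print(limiter.allow("user-a", 12.0))  # True, earlier hits expired
```

If the generated interior gets the expiry comparison subtly wrong, the fixed signature and the injected clock make that cheap to catch in a test. That's the asymmetry the workflow relies on.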

That's different from expressing a vague intent and accepting whatever ships.

The framing is doing work

"Expressing intent" as a trend narrative benefits the companies selling AI tools, who want to claim their products elevate the role of the developer rather than displace parts of it. The framing is not neutral.

The shift is real: AI tools can generate significant amounts of working code from natural language description. What that actually means for the quality and maintainability of what gets built depends entirely on whether anyone in the loop has the consequence model to evaluate what was generated.

In teams where experienced engineers review AI output with genuine understanding, the tools speed things up. In teams where "expressing intent" has replaced the thinking that used to happen before writing, the tools are accumulating invisible debt.

One of those will show up in production metrics this year. The other will show up in debugging sessions in two years, when nobody can explain why the system behaves the way it does.