
Speed and Velocity Are Not the Same Thing



"Move fast" has been in the software culture long enough to become meaningless. It's used to justify shipping without thinking and to shame teams that take time to think. Both uses are wrong for the same reason: they confuse speed with velocity.

Speed is how fast you produce output. Velocity is how fast you move toward a goal.

In software product development, the goal is validated learning — finding out whether a hypothesis is correct and updating the product accordingly. The relevant measure is how fast you can run that loop: define hypothesis, build the minimum test, measure, decide. Code generation is one input to that loop. It's not the loop itself.

The AI Coding Tools Problem

The AI coding tools narrative has reified speed as the primary virtue. JetBrains' 2026 developer survey shows 74% of developers using specialized AI coding tools, with the primary cited benefit being "faster code completion." Faster code completion is faster output production. It is not faster hypothesis validation unless the hypothesis was "can I write this function?"

A team that ships features at high speed without a clear hypothesis is generating a lot of output. The velocity toward any goal is near-zero — or negative, if each shipped feature adds complexity that makes future work harder.

This is speed without velocity. The AI tools are genuinely helping. They're helping you get to the wrong destination faster.

Where the Bottleneck Actually Is

When I'm working on Ordia's design, the bottleneck is never code. It's the clarity of the problem statement.

What is the specific state-sync failure I'm trying to eliminate? The abandoned branch incident — a developer left without documenting anything, branches held context that existed nowhere else, rebuilding took significant time — was a specific failure with a specific cause. The Ordia feature that addresses it isn't "branch tracking." It's "make branch context persistent and attached to the work item that created it, so the context survives the person who created the branch."

That specificity took time to arrive at. It didn't require writing code. It required thinking accurately about the failure mode, which is a different activity.

Once the problem statement is specific, the code follows quickly. The implementation of "attach branch context to linked ticket on push" is straightforward. AI can help write it. But no AI made the decision about what exactly needed to be built.
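As an illustration of how little code the specific problem statement demands, here is a minimal sketch of the "attach branch context to linked ticket on push" idea. Everything in it is an assumption: the `TICKET-123`-style branch naming convention, the function names, and the payload shape are hypothetical, not Ordia's actual design. The sketch builds the context record rather than posting it, to stay independent of any particular tracker API.

```python
import re
from typing import Optional

def ticket_id_from_branch(branch: str) -> Optional[str]:
    """Extract a ticket ID like 'ORD-142' from a branch name.

    Assumes a hypothetical 'type/TICKET-description' naming
    convention; a real system would use its own linking rules.
    """
    match = re.search(r"\b([A-Z]+-\d+)\b", branch)
    return match.group(1) if match else None

def branch_context_payload(branch: str, description: str,
                           author: str) -> Optional[dict]:
    """Build the context record to attach to the linked ticket on push.

    The point of the feature is that this record lives on the work
    item, not in the author's head, so it survives the author.
    """
    ticket = ticket_id_from_branch(branch)
    if ticket is None:
        return None  # no linked ticket: nothing to attach
    return {
        "ticket": ticket,
        "branch": branch,
        "context": description,
        "author": author,
    }
```

A push hook would call `branch_context_payload` and send the result to the tracker; the hard part was deciding that the record should be attached to the ticket at all, not writing these twenty lines.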

The hypothesis validation loop has four parts: define, build, measure, decide. AI tools compress the build phase. They have no effect on the other three. If the define phase is the bottleneck — and it usually is, for any non-trivial product decision — AI tools don't help with the thing that matters.
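This is Amdahl's law applied to learning. A small worked example makes the point; the phase names come from the loop above, but the hour figures are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class LoopPhases:
    """Hours spent in each phase of one hypothesis-validation loop.

    Phase names are from the loop described above; the durations
    are illustrative assumptions, not measured data.
    """
    define: float
    build: float
    measure: float
    decide: float

    def cycle_time(self) -> float:
        return self.define + self.build + self.measure + self.decide

# A loop where defining the problem dominates.
before = LoopPhases(define=16, build=8, measure=6, decide=2)

# An AI tool that halves the build phase touches nothing else.
after = LoopPhases(define=16, build=4, measure=6, decide=2)

speedup = before.cycle_time() / after.cycle_time()
# 32 hours -> 28 hours: a 2x faster build phase yields roughly
# a 14% faster loop, because build was only a quarter of the cycle.
```

Under these assumed numbers, doubling build speed improves hypothesis velocity by about 14%. Halving the define phase instead would improve it by about 33%.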

What "Speed" Should Mean

The useful definition of speed in product work is: how fast can you get from a question to an answer?

Not how fast you can write code. Not how many features you can ship. How fast you can formulate a clear hypothesis, build the minimum thing that tests it, and get a signal back about whether you were right.

That kind of speed requires clarity about what you're trying to learn. It requires small, focused tests rather than large feature releases. It requires measuring the right thing, not the thing that's easy to measure.

Addy Osmani's April 2026 post on the next two years of software engineering raises the concern that the industry is trading understanding for speed. I'd frame it more precisely: the industry is optimizing output velocity while neglecting hypothesis velocity. Understanding is not an obstacle to speed — it determines whether your speed is taking you somewhere.

The teams producing the best software over the next two years are not the ones who adopted AI tools earliest. They're the ones with the clearest idea of what they're trying to learn from each thing they build. That clarity is the multiplier. AI tools amplify it when it exists. Without it, they just produce more output in the wrong direction, faster.

The Practice

The way to build hypothesis velocity is concrete, not abstract.

Before writing a ticket or opening a cursor window: write down the specific question this feature is supposed to answer. Not the feature description — the question. "Does the user care about automatic ticket linking, or do they want to maintain control over which tickets are linked?" is a question. "Build automatic ticket linking" is a task. Tasks can be completed without the question being answered.

This distinction sounds like extra process. It saves time, because it makes the minimum viable test obvious. If the question is about user preference for automatic linking versus manual control, the minimum test is not a fully implemented automatic-linking system. It might be a settings toggle, or a mock, or a conversation. The fully implemented system is evidence for a question you already answered — or for a question you never formulated.

AI tools make it easy to build the fully implemented system quickly. They don't make it easier to know whether you needed it. The investment in clarity before building remains essential, and the pressure from available tool speed makes it harder to maintain — because the option to "just build it and see" is always one prompt away. That option is not free. It costs the time of the build plus the time of living with the wrong implementation, minus the probability that you happened to guess correctly.