
How AI Hype Inflates: The Cluely Case


Cluely AI's CEO admitted in 2026 that the $7M revenue figure shared with TechCrunch was fabricated. The actual numbers were significantly lower. This is not an isolated case. It's a worked example of how AI hype inflates — and why the mechanism is a structural problem, not just an ethics one.

The Mechanism

Step one: ship something that generates signals that look like traction. Users signing up, sessions, any metric that can be described as "growth."

Step two: describe those signals using revenue-adjacent language in contexts where nobody will verify. "$7M in ARR" in a TechCrunch interview is not audited. The journalist is not checking Stripe.

Step three: the narrative becomes the asset. Once a number is in print, it's in investor decks, in competitor research, in the mental model that other founders use to calibrate their own positions. The number doesn't have to be true to have effects.

Step four: the correction, when it comes, is smaller news than the original claim. By then, the funding conversation has happened, the narrative has been set, the number has served its purpose.

None of these steps requires the founder to consciously decide to deceive. Step two often starts as optimistic framing. The inflation is incremental. The gap between the optimistic framing and the actual number widens before anyone checks.

What This Does to the Information Environment

I'm not interested in the Cluely case as a morality story. I'm interested in it as a structural problem that changes the information environment for everyone building in the same space.

When fabricated numbers circulate, they become benchmarks. Solo founders and small teams calibrate their own progress against these figures. Investors use them as data points for what "traction" looks like. The company building honestly — slower, with real numbers — looks like it's underperforming against a fiction.

The honest company with $30K ARR after eight months of focused work looks bad next to the company claiming $7M after six months. The incentive to inflate is created by the inflation of others. This is not a bad actor problem. It's a game theory problem.

MIT Sloan's 2026 AI trend analysis explicitly lists deflation of the AI bubble as one of five key trends. The deflation doesn't happen because the technology fails — it happens when the gap between the narrative and observable reality becomes too large to sustain. That process is already visible.

Where Ordia Sits

Ordia is not generating $7M ARR. Ordia is a coordination layer I'm building because I had a specific problem — daily context-switching between three tools, a codebase that lost its branch metadata when a developer left — and I couldn't find a good existing solution. The metrics I care about are whether the tool is reducing the state-sync overhead it was built to eliminate.

That's not a TechCrunch number. It's the actual signal.

The AI startup market generates a great deal of press coverage that describes the narrative rather than the product: how much has been raised, how fast revenue is growing, how transformative the approach is. Very little of it describes how the thing works, what problem it actually solves, or whether it solves it.

What Changes When the Bubble Deflates

The deflation will be uneven. Products that were real — that solved specific problems for specific users — will survive the recalibration. Products that were primarily narrative — that existed as framing for investment rather than as solutions to problems — won't.

The founders who are positioned well are the ones who know exactly what problem they're solving, have a specific person who has the problem, and can point to a measurable change in that person's situation after using the product. That's a defensible position. It doesn't require a $7M ARR number. It requires a true one.

Building without a philosophy breaks as it scales. Building on fabricated signals breaks earlier, because the signals can't tell you what to build next.

What Honest Signals Look Like

The alternative to fabricated ARR numbers is not no metrics — it's metrics tied to the actual problem. For Ordia, that means: how many status updates did the system propagate automatically that a user would otherwise have made manually? How many times did the blocker detection fire accurately? Is the context-switching overhead decreasing?
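Problem-tied metrics like these can be made concrete. The sketch below is hypothetical — none of the class or function names come from Ordia's actual codebase — but it shows the shape of signals tied to the problem rather than to the funding narrative:

```python
from dataclasses import dataclass


@dataclass
class StatusUpdate:
    automated: bool  # True if the system propagated it without user action


@dataclass
class BlockerAlert:
    confirmed: bool  # True if the user confirmed it flagged a real blocker


def automation_rate(updates: list[StatusUpdate]) -> float:
    """Share of status updates propagated automatically (would otherwise
    have been made manually)."""
    if not updates:
        return 0.0
    return sum(u.automated for u in updates) / len(updates)


def blocker_precision(alerts: list[BlockerAlert]) -> float:
    """Share of blocker alerts that pointed at a real blocker."""
    if not alerts:
        return 0.0
    return sum(a.confirmed for a in alerts) / len(alerts)


def overhead_trend(minutes_per_week: list[float]) -> float:
    """Change in weekly context-switching minutes, first week to last.
    Negative means the overhead is falling — which is the goal."""
    if len(minutes_per_week) < 2:
        return 0.0
    return minutes_per_week[-1] - minutes_per_week[0]
```

None of these functions produces a headline number. But `overhead_trend([40.0, 32.0, 25.0])` returning a negative value is exactly the kind of leading indicator the paragraph above describes: evidence the problem the tool was built for is shrinking.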

These numbers don't go in a TechCrunch headline. They're also the only numbers that tell me whether Ordia is working. Revenue is a lagging indicator of whether you solved a real problem for real people. The leading indicators are whether the problem you set out to solve is actually being solved.

The founders I find credible in 2026 are the ones who can describe their metrics in terms of the problem they're solving, not in terms of the funding narrative. "We've reduced the average status-sync overhead by X minutes per developer per day" is a real number. "$7M ARR" is not, when it isn't.

The market eventually corrects toward real numbers. The founders calibrated to real signals from the start don't need to wait for the correction.