The AI Bubble Is in the Valuations, Not the Technology
Databricks CEO Ali Ghodsi said it plainly: "I think it's still very highly bubbly." VCs are privately telling him they're exhausted by the hype cycle. MIT Sloan lists "deflation of the AI bubble" as one of its five key trends for 2026.
Over the same stretch, McKinsey reports that demand for software developers is up 34% since AI coding tools became mainstream.
These facts seem to contradict each other. They don't. They describe different parts of the same system.
What's actually bubbly
The bubble isn't in AI's utility. The tools work. Code gets written faster. Automations that required engineers now run on prompts. This is real.
The bubble is in the gap between "AI works" and "this company is worth $40 billion because AI works." Those are different claims. The first is well-supported. The second requires a specific theory of how the capability translates to defensible revenue at that scale, and most of those theories are underspecified.
The VC exhaustion Ghodsi describes is not exhaustion with AI. It's exhaustion with the pitch pattern: "we use AI" as a sufficient justification for a valuation multiple that previously required years of revenue. The novelty premium on "AI-native" has compressed. Investors who deployed capital on that premium are now holding assets that need to justify themselves on normal metrics.
That's a correction in valuation logic, not a correction in technological reality. Conflating the two leads to bad decisions in both directions — either dismissing AI as hype (wrong) or assuming the bubble proves the technology doesn't matter (also wrong).
Why developer demand is up, not down
AI was supposed to reduce demand for engineers. The opposite has happened. McKinsey's 2026 Tech Workforce Report puts the demand increase at 34% since AI coding tools went mainstream.
The explanation isn't complicated. AI lowered the cost floor for building software. More software is being built. More software in production means more software to be maintained, extended, debugged, and reasoned about. The total volume of engineering work increased.
There's also the review layer. As AI-generated code enters production at scale, enterprises are pushing back. SiliconANGLE reported that rapid AI adoption is "forcing a push for enterprise order and control." Someone has to set the standards for what AI-generated code is acceptable. Someone has to review it, audit it, maintain it when it fails in unexpected ways.
That someone is a senior engineer. The supply of senior engineers who can do that well has not expanded proportionally with the demand. The gap between the two shows up as a compensation signal.
What the bubble means for what to build
I've been building Ordia outside the VC system entirely. Not because VC money is bad — it's useful for specific trajectories. But because the AI funding environment creates pressures that don't match what I'm building.
Infrastructure products — things that work without being noticed, that reduce friction rather than add features, that justify themselves through reliability over time — don't fit the VC growth timeline. They have slow adoption curves and high switching costs once adopted. Those are good long-term properties. They're not good properties for a fund with a 7-year horizon needing to show a 10x return.
Building outside that system means being legible to different metrics: does it work, does it stay working, does it accumulate value over time. The bubble doesn't change those metrics. The bubble is what happens when the metrics get replaced by narrative.
What the bubble's deflation means for founders: the window where "AI-native" as a pitch mechanism carried a valuation premium is closing. What replaces it is what always replaces it: does this product solve a real problem at a margin that justifies the business model? That's not a harder question. It's the right question, and it was always the right question.
The signal in the exhaustion
VC exhaustion with AI hype is actually useful information, if you read it correctly.
It doesn't mean AI companies are bad investments. It means the easy part of the cycle — where novelty alone generates interest — is over. What follows is a tighter filter: the companies that survive the compression are the ones where the AI capability is load-bearing to the product, not cosmetic.
"We use GPT-4 to enhance your workflow" is not a defensible moat. "Our product does something that requires AI to work, that wouldn't be possible without it, and that we've built structural knowledge around" is a different claim.
The bubble separates these by making capital more expensive and requiring a clearer story. That's healthy for the ecosystem, even if it's painful for companies that were riding the narrative rather than the capability.
What this looks like from the outside
Building a bootstrapped product without VC funding means watching the AI investment environment from the outside. The view is clarifying.
Inside the funding cycle, every announcement is significant. Every new model release reshapes the competitive landscape. The urgency is structural — capital deployed needs to return, so the timeline pressure is real regardless of product readiness.
From the outside, the signal-to-noise ratio is different. The technology that matters for what I'm building — reliable, low-cost inference for deterministic tasks — has been available and improving steadily. The valuation cycles around it have no effect on whether Ordia works or doesn't.
The bubble deflating means less noise, not less opportunity. The underlying capability is intact. The teams that were building around the hype will have a harder time. The teams that were building around a specific problem will not.
The exhaustion is the market correcting toward the question that was always the right one: what problem does this solve, for whom, and why won't someone else solve it better for less.
That question was answerable before the bubble and it's answerable after. The bubble just made it easier to defer.
