The AI Bubble Question: Two Scenarios for the Largest Technology Buildout in History
There is perhaps no more consequential debate in the technology industry today than whether the current AI infrastructure buildout represents a bubble destined for collapse or the logical, sustainable deployment of a mature technology. The numbers are staggering, and they are a root cause of the anxiety: hyperscalers are spending more than $200 billion annually (and growing) on capital expenditures, company valuations are skyrocketing, and power consumption projections for AI datacenters have upended utility planning across the developed world.
The instinctive response from market observers, particularly those who lived through 2000, is to assume that any buildout of this magnitude must be a bubble. History may not repeat itself exactly, but it does tend to rhyme.
Bubbles and build-outs look eerily similar, and I think that resemblance is why so many people take a hard line one way or the other in this debate. In 2002, shortly after Carlota Perez published her work, I had a conversation with Brian Halla, then CEO of National Semiconductor, who had just read her book and pointed me to it. Reading it clarified for me how the bubble would give way to a much larger build-out cycle and decades of innovation. The framework remains useful today for anyone who wants to understand the economic dynamics of these cycles.
The Perez Framework: A Primer
Perez’s seminal work, Technological Revolutions and Financial Capital, identifies a recurring pattern across five major technological waves since the Industrial Revolution: canals and textiles, railways and steam, steel and electricity, oil and mass production, and finally, information and communications technology (ICT).
Each wave follows a predictable structure. The first half—what Perez calls the “Installation Period”—is characterized by speculative frenzy. Financial capital, driven by the promise of transformative returns, floods into new technologies before their value is proven at scale. This period inevitably ends in a crash: the railway mania of the 1840s, the panic of 1893, the 1929 crash, and most recently, the dot-com bust of 2000.
The second half—the “Deployment Period”—is fundamentally different. The crash clears out speculative excess, new institutional frameworks emerge, and technology spreads deeply into the real economy. This is when society actually captures the value of the new paradigm. The “Golden Age” that follows is characterized by sustainable growth, rising productivity, and broad-based prosperity.
The critical question for AI is simple: which half of the cycle are we in?
Before we answer, we should define what we mean by “crash.” The dot-com bust was a crash in financial claims on future growth. The physical buildout did not vanish; it got repriced, consolidated, and eventually absorbed. Fiber that seemed wildly excessive in 2001 was fully utilized by 2010. The AI question is not merely “will valuations fall?” They will. The question is whether the buildout itself faces a synchronized funding stop that delays deployment for years, or whether it plays out as overbuild followed by price compression and consolidation. These are very different outcomes, and the Perez framework helps us distinguish between them.
Scenario One: AI as a New Wave
The case for AI as a bubble rests on a straightforward interpretation of Perez’s framework: AI represents the beginning of a sixth technological wave, and we are in the speculative Installation Period.
Under this interpretation, the pattern is unmistakable. We have a transformative general-purpose technology (large language models, software agents, and GPU compute), a rush of financial capital chasing speculative returns, soaring valuations disconnected from current earnings, and a widespread belief that “this time is different.” The infrastructure buildout—hundreds of billions of dollars flowing into GPUs, datacenters, and power generation—looks precisely like the canal mania of the 1790s or the fiber-optic overbuilding of the late 1990s.
If this interpretation is correct, a crash is not merely possible but structurally likely. Perez’s framework is clear on this point: Installation Periods have historically ended in crashes. The speculative excess is a feature, not a bug—it provides the risk capital necessary to fund the initial buildout. But that same excess tends to overshoot, and the correction tends to follow.
The bear case, then, is that we are perhaps two to five years away from a major crash. The catalyst could be any number of things: AI revenue growth that disappoints relative to infrastructure spending, a major model capability plateau, or simply the exhaustion of greater fools willing to pay ever-higher multiples (capital becoming nervous). When it comes, the correction will be severe. Valuations will collapse. Many companies (largely AI startups) will fail. And the true deployment of AI—the period when it genuinely transforms the economy—will only begin after the wreckage clears.
This is the orthodox reading of Perez, and it has history on its side.
What you would expect to observe: If we are in an Installation Period headed for a crash, certain patterns should be visible. Capex growth should continue to outpace measurable utilization, with persistent idle capacity across the datacenter fleet. Model capability improvements should slow relative to cost curves, delivering less “more for less” than the investment thesis requires. AI spending should remain narrow, concentrated in a few use cases rather than diffusing broadly into enterprise budgets. And financing should increasingly shift toward leverage and speculation at the margins: neoclouds raising debt, power developers trading on forward contracts, smaller GPU lessors stretching to meet demand that may not materialize.
Scenario Two: The Second Wind of ICT
There is, however, a contrarian interpretation—one that Perez herself has advanced—that suggests the current AI buildout might escape a catastrophic crash altogether.
The argument rests on a different diagnosis of where we are in the cycle: AI is not a new wave but the second half of the current one.
The Deployment Period Interpretation
Perez identifies five historical waves, the most recent being ICT, which she dates to 1971 and the introduction of the Intel 4004 microprocessor. Under this interpretation, the crash already happened, twice in fact: the dot-com bust of 2000 marked the end of the speculative Installation Period, and the 2008 financial crisis served as a secondary correction. Since then, we have been in the Deployment Period.
If this is correct, AI is not a speculative bet on an unproven paradigm but the natural fruition of the Information Age. The paradigm, that software and data would transform every industry, was already accepted. What was missing was the capability to execute on the full vision, including a form of compute suited to AI-accelerated workloads, and that capability had already matured in the GPU. AI, particularly large language models, is the technology that finally allows software to perform cognitive tasks that previously required human intelligence.
In the Deployment Period, technology does not require speculative faith; it spreads because it works. Healthcare, legal services, software development, customer service—AI is penetrating these sectors not because investors believe it might someday be valuable but because it is demonstrably valuable today. Growth in this phase is steady, profitable, and sustained because adoption is driven by productivity gains rather than speculative frenzy. This reality is validated by dozens of CTO/CIO surveys that confirm early ROI across business units where AI is deployed in some capacity. We are still very early in the adoption of AI across enterprises, but to believe AI is not in a deployment period, you have to believe that essentially zero value is stemming from AI software today.
Under this interpretation, there is no crash coming because the crash already happened. We are not building toward a reckoning; we are building out the infrastructure required to mature an already-accepted paradigm.
What you would expect to observe: If we are in the Deployment Period, a different set of patterns should emerge. Utilization should climb toward steady, high load factors as demand catches up to supply. AI software should shift from pilots and experiments to budget line items, with repeatable ROI and formal procurement processes. Pricing should fall while volumes rise—the classic deployment pattern of deflation plus diffusion. Gross margin pressure should appear in “AI services” as competition intensifies, but net value should rise through adoption and workflow redesign. The technology spreads not because investors believe in it, but because it works. From what we can observe today, these boxes are largely checked.
The Production Capital Anomaly
Even if one accepts that AI represents a new wave rather than a continuation of ICT, there is a structural reason to believe the crash dynamics may be different this time: the source of capital.
Perez’s crash mechanics depend critically on the role of Financial Capital. In her framework, it is banks, venture capitalists, and speculators—actors deploying “other people’s money”—who drive the Installation Period frenzy. They are, by nature, impatient. They seek liquidity events, IPOs, and quick exits. When sentiment turns, they flee. Their collective departure is what triggers the crash.
The AI buildout, however, is being funded primarily by Production Capital—specifically, the retained earnings and balance sheets of the hyperscalers themselves. Microsoft, Google, Amazon, and Meta are not raising speculative capital to fund their AI infrastructure investments. They are largely deploying their own massive cash piles, accumulated from decades of profitable operations in the previous internet era. (Yes, neoclouds, AI GPU REITs, and similar players are using other people’s capital, but a meaningful share of that funding also comes from patient sources.)
This distinction matters enormously. In Perez’s framework, Production Capital is patient (we should include the investments of companies like NVIDIA as Production Capital). It is tied to the industry itself and seeks long-term dominance rather than short-term liquidity. Production Capital does not face margin calls. It does not demand exits. It can sustain “over-investment” for years because its time horizon is fundamentally different from that of Financial Capital.
But Production Capital is patient, not unpriced. Hyperscalers are public companies, and their cost of capital and strategic freedom are still set by equity markets. The disciplining mechanism shifts from forced liquidation to slower feedback loops: multiple compression, activist pressure, and internal hurdle rates rising as the cycle turns. Microsoft, Google, and Amazon can absorb years of AI losses in a way that a venture-backed startup cannot, but they cannot absorb them forever without consequence.
The implication is that even if the AI buildout is excessive relative to near-term demand—even if current spending cannot be justified by current revenue—the funders are not looking for a quick flip. They can absorb years of suboptimal returns while waiting for the market to mature. This patient capital smooths out what would otherwise be a sharp spike and crash into something more closely resembling a gradual build and consolidation.
The crash mechanism, in other words, may simply be absent.
The Hybrid Case: New Wave, Old Capital
There is also a third possibility, and it may be the most realistic: AI is a new wave in capability, but it is being financed and deployed with the institutional muscle of the old one.
In Perez’s terms, the technology may look like an Installation while the capital structure behaves more like a Deployment. AI models are genuinely new. They represent a discontinuity in what software can do. But the companies building the infrastructure are not just scrappy startups funded by speculators chasing an IPO. This cycle is being led by the largest, most profitable enterprises in history, deploying retained earnings accumulated over two decades of internet dominance.
That combination does not eliminate overbuild. It changes its form. Instead of a sudden stop—the synchronized funding collapse that defines a classic Perez crash—you get a grinding repricing. Unit economics for compute fall. Marginal operators consolidate or fail. Utilization rises as prices drop. And adoption marches steadily forward as the cost curve declines.
This is not the catastrophic crash of 2000, where funding vanished and deployment stalled for years. In a hybrid scenario, it is something messier: a protracted adjustment period where the buildout continues but returns disappoint, where winners compound while losers are absorbed, and where the technology diffuses into the economy even as the stocks that represent it underperform.
Two Frameworks, One Question
The question of whether AI is a bubble ultimately reduces to a diagnostic question: what kind of capital is funding the buildout, and what phase of the technology cycle are we in? The answer is not black and white, but the framework clarifies the range of outcomes.
Consider two axes: the phase of the cycle (Installation vs. Deployment) and the dominant capital type (Financial vs. Production). The combination determines the shape of what comes next.
An installation funded by Financial Capital produces the classic bubble crash. Speculative money floods in, overshoots, and then flees. The 2000 dot-com bust is the template. Valuations collapse, funding dries up, and deployment stalls for years until the wreckage clears.
Installation funded by Production Capital produces something different: overbuild followed by consolidation. The buildout may still be excessive, but the funders can absorb losses longer. The correction is a grind, not a cliff. Prices fall, marginal players fail, and the technology diffuses as it becomes cheaper.
Deployment funded by Financial Capital still sees froth at the edges—speculative bets on the next application, overvalued startups, crowded trades—but the core infrastructure continues to expand because the paradigm is already proven. Winners compound while the froth burns off.
Deployment funded by Production Capital is the most durable configuration: patient capital deploying a proven paradigm at scale. Growth is steady, price deflation enables broad diffusion, and the volatility is cyclical rather than existential.
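The four combinations above can be summarized as a simple lookup. The sketch below is purely illustrative (the enum names and outcome labels are my own, not Perez’s terminology) but it makes the diagnostic structure of the argument explicit: two inputs, four distinct shapes of what comes next.

```python
# Illustrative sketch of the two-axis diagnostic described above.
# The names and outcome strings are labels of my own choosing.
from enum import Enum


class Phase(Enum):
    INSTALLATION = "installation"
    DEPLOYMENT = "deployment"


class Capital(Enum):
    FINANCIAL = "financial"    # impatient, "other people's money"
    PRODUCTION = "production"  # patient, retained earnings


# (phase, capital) -> the shape of the correction that follows
OUTCOMES = {
    (Phase.INSTALLATION, Capital.FINANCIAL): "classic bubble crash (the 2000 template)",
    (Phase.INSTALLATION, Capital.PRODUCTION): "overbuild followed by grinding consolidation",
    (Phase.DEPLOYMENT, Capital.FINANCIAL): "froth at the edges, core infrastructure expands",
    (Phase.DEPLOYMENT, Capital.PRODUCTION): "steady growth, cyclical rather than existential volatility",
}


def diagnose(phase: Phase, capital: Capital) -> str:
    """Return the expected shape of the cycle for a given diagnosis."""
    return OUTCOMES[(phase, capital)]
```

Under this sketch, the essay’s hybrid case is the tension between `diagnose(Phase.INSTALLATION, Capital.PRODUCTION)` and `diagnose(Phase.DEPLOYMENT, Capital.PRODUCTION)`: new-wave technology, old-wave capital.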
Where does AI sit? The honest answer is that reasonable people disagree. The technology feels like a new wave. The capital structure looks like deployment. The outcome is likely somewhere in between: not the catastrophic crash of 2000, but not the smooth ascent that hyperscaler stock prices have priced in either.
Conclusion: The Shape of the Buildout
None of this suggests that individual companies cannot fail, that valuations cannot correct, or that the AI market will not experience significant volatility. It will. The shakeout among AI startups has already begun, and the rationalization will intensify. Equity drawdowns are not just possible but probable.
But a systemic crash—a 2000-style collapse that wipes out 80% of sector value and delays deployment by a decade—requires a specific set of conditions: speculative Financial Capital driving the buildout, an unproven paradigm, and no external stabilizers. The current AI buildout lacks the first condition entirely and partially lacks the other two.
What remains is something closer to a sustained buildout punctuated by repricing: massive, capital-intensive, occasionally excessive, but without the synchronized funding collapse that defines a true Perez crash. The infrastructure gets built. The prices fall. The technology spreads. The stocks may disappoint even as the paradigm succeeds.
The “bubble” may not be a bubble at all. It may simply be what it looks like when the Information Age finally comes of age—and when the companies that won the last era are determined to win the next one too.