Bubble or Buildout? A Guide for the Next Data Center Era
Is AI a bubble or a buildout? This piece lays out both sides: the “bubbly” signs (massive, front-loaded capex exceeding $400B in 2025, a handful of dominant winners, fast-rising frontier training costs) and the “durable” signs (cash-funded spend, early multi-billion-dollar AI revenue, and fast-falling inference costs). It then offers a simple Bubble Test—three dials to watch: utilization and payback (~4–5 years or less), capex-to-revenue discipline (≈25% ceiling), and power timelines. If those dials hold, we’re in a multi-year infrastructure upgrade; if they slip, the overbuild risk rises.
The easy story is “this looks bubbly.” Spending is massive and front-loaded, the narrative is running hot, and a handful of winners dominate the landscape. Hyperscaler capital spending is tracking to roughly $424B in 2025 and stepping toward ~$500B in 2026, after topping $100B in a single quarter earlier this year—numbers that rhyme with past build-outs where the investment arrived well ahead of broad payback. The market is also concentrated: the top ten tech names now make up >40% of the S&P 500’s value, which amplifies both upside and drawdown risk. Meanwhile, the cost to train at the frontier keeps rising because leading labs are still scaling compute roughly 4–5× per year—great for progress, but it concentrates who can play (the GPU-rich versus the GPU-poor) and raises the bar for ROI. Finally, power is the governor on this cycle: U.S. AI data-center demand is around ~5 GW today and modeled to exceed 50 GW (~100 GW+ globally) by 2030. If generation, interconnects, or permitting lag, then revenue ramps slip and expensive gear sits under-utilized. This is the root of many of the concerns.
There is, however, also a strong “not a bubble” case that’s easy to miss if you only stare at the spend. First, the buyers can actually afford this. On current estimates, that ~$424B of capex in 2025 is covered by ~$720B of combined cash from operations, even after dividends—meaning spend is largely cash-funded rather than debt-fueled (for now). Valuation context helps too: index-level multiples are elevated but not extreme by historical standards, and the top-ten tech names trade at a ~32× median forward P/E versus ~51× for the leaders at the 2000 peak. Second, we’re seeing real monetization. One hyperscaler has disclosed a >$13B run-rate for its AI services; others say “billions” or “multi-billions.” More specifically, cloud revenues for AWS, Azure, and GCP combined are estimated at $280B in 2025, and AI workloads make up ~10% of that, with total cloud AI revenues of ~$25–30B.
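The affordability and monetization claims above reduce to two back-of-envelope ratios. A minimal sketch, using the article’s own estimates as inputs (all figures in $B, and the AI-revenue midpoint is an assumption):

```python
# Back-of-envelope check of the "buyers can afford this" figures.
# All inputs are the article's estimates, in $B; treat them as assumptions.
capex_2025 = 424          # hyperscaler capex, 2025 estimate
cash_from_ops = 720       # combined cash from operations, after dividends
cloud_revenue = 280       # AWS + Azure + GCP combined, 2025 estimate
cloud_ai_revenue = 27.5   # assumed midpoint of the ~$25-30B range

coverage = cash_from_ops / capex_2025        # times cash covers the capex
ai_share = cloud_ai_revenue / cloud_revenue  # AI's slice of cloud revenue

print(f"Cash covers capex {coverage:.1f}x over")  # ~1.7x
print(f"AI is ~{ai_share:.0%} of cloud revenue")  # ~10%
```

A coverage ratio comfortably above 1× is what distinguishes this cycle from debt-fueled build-outs; the ~10% AI share is small enough to leave room for the revenue ramp the bulls are counting on.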
A simple capex-to-revenue read-through on one stack implies a roughly 30% annual yield and a ~3.5-year payback on recent AI builds—hardly speculative if those trends hold. And while frontier training gets pricier, inference costs are collapsing: token prices have fallen from $30/$60 per million input/output tokens in 2023 to small-model options at $0.05/$0.20, with software squeezing roughly 90% more tokens per GPU year over year. That’s exactly what you want to see for mainstream adoption.
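The payback and inference-cost arithmetic can be made explicit. A sketch under stated assumptions: the simple-payback model (payback = 1 / annual yield, ignoring discounting and hardware depreciation) is an illustrative framing, not a disclosed methodology, and the prices are the figures quoted above:

```python
# Payback implied by the ~30% annual revenue yield on AI capex.
annual_yield = 0.30               # assumed constant, no discounting or decay
payback_years = 1 / annual_yield  # simple-payback model
print(f"Implied payback: ~{payback_years:.1f} years")  # ~3.3, near the ~3.5 cited

# Inference price collapse: 2023 frontier vs. current small-model pricing,
# in $ per million input/output tokens, as quoted above.
input_2023, output_2023 = 30.0, 60.0
input_now, output_now = 0.05, 0.20
print(f"Input cost down {input_2023 / input_now:.0f}x")    # 600x
print(f"Output cost down {output_2023 / output_now:.0f}x") # 300x
```

Note the simple model lands near 3.3 years rather than 3.5; the small gap is consistent with real-world frictions (ramp time, utilization below 100%) that a one-line formula ignores.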
The middle ground is probably closest to reality. This looks less like a speculative mania and more like a multi-year infrastructure retooling—from CPU-first to accelerated compute—paced by real-world bottlenecks (HBM, GPUs, storage, power) that could naturally meter supply. Near-term, spending is still being revised up: data-center capex is modeled ~+60% Y/Y in 2025 and ~+30% in 2026 with upside risk to 40%+; the accelerator market could compound 40–50% off an estimated ~$200B base in 2025. Those aren’t bubble-style “buy anything” numbers; they’re “there’s real demand, but execution matters” numbers.
How do you separate productive buildout from overbuild? Three dials are worth watching. Utilization and payback: track disclosed AI service run-rates against prior capex; if paybacks stretch past ~4–5 years or utilization dips, that’s a yellow light. Capital-intensity discipline: keep an eye on capex-to-revenue. When this ratio moves persistently into the high-20s without revenue catching up, risk rises; when it stays tethered to growth, the flywheel is working. (One simple framing: assuming ~25% intensity by the late 2020s implies very large—yet still internally financed—spend levels.) Power timelines: grid connections and generation will make or break delivery schedules; persistent slippage there strands assets, but it also prevents a classic supply glut—another reason this cycle doesn’t map 1:1 to the dot-com era.
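The three dials above can be expressed as a toy Bubble Test. The thresholds (~4–5-year payback, ~25% capex-to-revenue ceiling) come from this piece; the function itself is an illustrative framing, not anyone’s actual screening model:

```python
# Toy "Bubble Test": flag whichever of the three dials is slipping.
# Thresholds are the article's rough figures; the function is illustrative.
def bubble_test(payback_years: float,
                capex_to_revenue: float,
                power_on_schedule: bool) -> str:
    warnings = []
    if payback_years > 5:
        warnings.append("payback stretching past ~4-5 years")
    if capex_to_revenue > 0.25:
        warnings.append("capital intensity above the ~25% ceiling")
    if not power_on_schedule:
        warnings.append("power timelines slipping")
    if not warnings:
        return "dials holding: durable buildout"
    return "yellow light: " + "; ".join(warnings)

# The read-through cited earlier (~3.5-year payback, intensity under 25%):
print(bubble_test(3.5, 0.22, True))   # dials holding: durable buildout
# A deteriorating scenario trips all three dials:
print(bubble_test(6.0, 0.28, False))
```

The point of the framing is that no single dial decides the question: a long payback with disciplined capital intensity and on-time power reads very differently from all three slipping at once.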
The “bubble” framing is useful as a risk checklist and a prompt for heightened awareness, but not as a final verdict. Spending, we acknowledge, is huge, concentrated, and ahead of broad-based returns. But the buyers are cash-rich, early paybacks are showing up in the numbers, unit economics are improving where it matters (inference), and physical constraints pace the rollout. Net: the signposts for economic returns are encouraging. If utilization improves, capital intensity stays disciplined, and power comes online roughly on plan, this buildout looks like a durable replacement cycle rather than a blow-off top. If those dials turn the wrong way—utilization disappoints, capex outruns revenue, power slips—then the argument flips. For now, cautious optimism with a tight focus on those three dials feels like the right balance to maintain.
