Research Archive
AI Infrastructure Economics: The $2-for-$1 Problem
The Number That Matters
The AI infrastructure buildout is the largest capital deployment cycle in technology history. Combined hyperscaler spending exceeded $400 billion in 2025, with 2026 projections pushing toward ~$600 billion. The bubble bears point to this spending and declare it unsustainable, arguing the whole market will collapse because AI is not yet profitable…
The Reordering To AI/HPC: TSMC’s 2026 Outlook and Beyond
TSMC’s 4Q25 results confirm what we have been saying for months: the AI infrastructure buildout is not slowing down, and TSMC sits at the center of it. Revenue of $33.7B beat the high end of guidance. Gross margin of 62.3% exceeded the guide by over 100 basis points. Management guided to close to 30% USD…
NVIDIA CES 2026: The Vera Rubin Platform and the Economics of AI Infrastructure
Note: I had the chance to attend both the CES 2026 Financial Analyst Q&A and the Industry Analyst Q&A, and to spend time with management. I’m sharing nuggets learned from the Q&A sessions and my meetings, as well as things from the NVIDIA CES 2026 announcements and management commentary that stood out and are telling about…
The AI Infrastructure Gigacycle: A Primer for 2026
Note for paid subscribers: As we kick off the Diligence Stack, I felt we needed to publish some anchor reports to set the foundation we will build upon. So, this report is quite long but has the needed depth in each section. Each section will get its own deep dive in the coming months as well…
The AI Bubble Question: Two Scenarios for the Largest Technology Buildout in History
There is perhaps no more consequential debate around the technology industry today than whether the current AI infrastructure buildout represents a bubble destined for collapse or the logical, sustainable deployment of mature technology. The numbers are indeed staggering, and a root cause of people’s anxiety: hyperscalers are spending north of $200 billion annually (and growing) on…
The GPU’s Second Act: From Pixels to Tokens
The Architecture Graphics Built
The GPU exists because graphics demanded a very specific kind of silicon: hardware capable of running identical mathematical operations across massive data volumes in parallel, repeatedly, without stalling. What looked like “drawing pictures” was always a continuous simulation under tight latency constraints. When neural networks arrived, they leaned on the same core…
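The data-parallel pattern that excerpt describes can be made concrete with a minimal CUDA sketch (not taken from the post itself): one kernel applies the same arithmetic independently to every element of a large array, with each thread handling its own element. The array size, the axpy-style operation, and the launch configuration here are illustrative assumptions.

```cuda
// Minimal sketch of "identical operations across massive data in parallel".
// The specific operation (y = a*x + y), array size, and launch shape are
// illustrative assumptions, not details from the post.
#include <cstdio>
#include <cuda_runtime.h>

// Every thread runs the identical instruction stream on its own element.
__global__ void scale_add(const float* x, float* y, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        y[i] = a * x[i] + y[i];  // same math, different data per thread
    }
}

int main() {
    const int n = 1 << 20;                 // ~1M elements, chosen arbitrarily
    const size_t bytes = n * sizeof(float);

    float *x, *y;
    cudaMallocManaged(&x, bytes);          // unified memory keeps the sketch short
    cudaMallocManaged(&y, bytes);
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    scale_add<<<blocks, threads>>>(x, y, 3.0f, n);  // ~1M parallel lanes in one launch
    cudaDeviceSynchronize();

    printf("y[0] = %.1f (expect 5.0)\n", y[0]);
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

The same structure, one small program replicated across millions of data elements, is what graphics shading always demanded and what neural-network math happened to fit.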