The Semiconductor Gigacycle
The semiconductor industry has experienced cycles of varying magnitude throughout its history. The PC era brought sustained growth. The smartphone revolution created what many called a supercycle. The cloud computing buildout extended that expansion further.
What is happening now is something categorically different.
The artificial intelligence infrastructure buildout represents the largest total addressable market expansion the semiconductor industry has ever experienced: a gigacycle that dwarfs previous periods of growth both in absolute dollar terms and in the breadth of its impact across every segment of the value chain.
The numbers tell a story of unprecedented scale. Global semiconductor revenues will climb from roughly $650 billion in 2024 to more than $1 trillion by decade’s end, with several forecasts now pulling the trillion-dollar mark forward into 2028-2029.
This expansion is not driven by a single product category or geographic market. This is a fundamental restructuring of the industry’s trajectory, driven by infrastructure requirements that touch every category of semiconductor technology simultaneously.
The semiconductor industry will never be the same. The shape of the category has been permanently changed.
What Makes a Gigacycle Different
The PC era primarily benefited microprocessors and commodity memory. The smartphone revolution concentrated gains in mobile application processors and NAND storage. The cloud buildout created sustained demand for server processors and networking equipment.
The AI infrastructure buildout is different because the architectural requirements of training and inference workloads create simultaneous bottlenecks across compute, memory, networking, and storage.
There is no single category absorbing the majority of new spending. Every segment is constrained. Every segment is expanding.
By 2026, data-processing silicon will surpass half of total semiconductor revenue for the first time—codifying the shift toward data-center and AI workloads as the new center of gravity for the industry.
The Scale of TAM Expansion
The magnitude of upward revisions to industry forecasts over the past eighteen months has been extraordinary.
Lisa Su, AMD’s CEO, now views the AI hardware TAM—encompassing CPU, GPU, ASIC, and networking—as exceeding $1 trillion by 2030. At AMD’s November 2025 Analyst Day, she framed the opportunity bluntly: “The market is accelerating at a pace that we just did not understand until over the last few years. There’s no question, data center is the largest growth opportunity out there.”
AMD is targeting more than 35% overall revenue CAGR—and around 60% in data-center revenue—over the next several years to pursue it. The company sees a clear path to tens of billions in annual AI revenue by 2027 and over $100 billion in data-center revenue over the next three to five years.
Current consensus projections show the dedicated AI accelerator market alone reaching roughly $300-350 billion by 2029-2030—up from under $100 billion in 2024. When you aggregate accelerators, CPUs, networking, and HBM, total datacenter silicon spending approaches $900 billion to $1 trillion by decade’s end.
The Trillion-Dollar Infrastructure Buildout
Jensen Huang has been explicit about the scale of what’s coming. During NVIDIA’s fiscal Q2 2026 earnings call, he laid out the math: “Over the next five years, we’re going to scale into a $3 to $4 trillion AI infrastructure opportunity. We are still at the very beginning of this buildout.”
He emphasized this wasn’t speculation: “$3 to $4 trillion is fairly sensible for the next five years.”
The demand for compute is real and unprecedented. Capital expenditures by the top four cloud service providers—Amazon, Google, Microsoft, and Meta—doubled to roughly $600 billion annually in just two years.
NVIDIA has communicated $500 billion in cumulative Blackwell and Rubin revenue, including networking, through calendar year 2026—a level of revenue visibility unprecedented in semiconductor history. The demand is diversifying beyond the top US hyperscalers to include sovereign AI factories and enterprise deployments. The customer base is broadening as the dollar opportunity expands.
Compute Infrastructure: The Primary Driver
GPU Market Dynamics
The GPU market remains the largest dollar category within the AI infrastructure buildout. NVIDIA’s GPU shipments are projected to grow approximately 85% in 2025, followed by another 50-60% increase in 2026. The company is targeting additional growth in calendar year 2027, with projections putting NVIDIA on pace to generate north of $600 billion annually by 2030.
These are physical volumes that require massive expansion in manufacturing capacity, packaging throughput, and memory availability.
At the system level, AI servers are becoming a trillion-dollar category in their own right. The AI server market is expected to grow from roughly $140 billion in 2024 to $800-850 billion by 2030, a compound annual growth rate exceeding 30%, and that estimate may prove conservative.
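As a sanity check on the growth math above, the implied compound annual growth rate can be computed directly. The inputs below are the article's own figures ($140 billion in 2024, the $825 billion midpoint of the 2030 range); the function is a generic CAGR sketch, not part of the original analysis:

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate: (end / start)^(1/years) - 1."""
    return (end / start) ** (1 / years) - 1

# AI server market: ~$140B in 2024 to ~$825B (midpoint of $800-850B) by 2030
rate = cagr(140, 825, 2030 - 2024)
print(f"{rate:.1%}")  # roughly 34%, consistent with "exceeding 30%"
```

The same one-liner reproduces the other growth rates cited throughout the piece from their start and end points.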
The net effect is that a tiny fraction of total wafer volume is driving an outsized share of industry revenue: AI chips represented less than 0.2% of wafer starts in 2024 yet already generated roughly 20% of semiconductor revenue, an unprecedented level of silicon value density.
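That value-density claim reduces to a simple ratio. Both inputs are the article's own 2024 figures; the "revenue density" framing is just the quotient of the two shares:

```python
# Revenue density: share of industry revenue divided by share of wafer starts
wafer_share = 0.002   # AI chips: <0.2% of 2024 wafer starts (article figure)
revenue_share = 0.20  # ~20% of 2024 semiconductor revenue (article figure)

density = revenue_share / wafer_share
print(f"{density:.0f}x")  # each AI wafer generates ~100x the industry-average revenue
```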
AMD’s trajectory shows similar growth dynamics. Lisa Su described the demand as “insatiable”—with the company projecting 80% annual growth in AI data-center chip revenue and overall revenue expansion of 35% per year through 2030.
The Rise of Custom Silicon
The custom silicon market is accelerating at a pace that positions ASICs to challenge the dominance of general-purpose GPUs. The shift is structural: hyperscalers are reallocating capital to control more of their silicon stack for their own primary workloads, and to improve cost and efficiency for third-party workloads.
ASIC revenue for key leaders is expanding at a 119% compound annual growth rate through 2027, significantly outpacing the 82% projected for AI GPUs. Custom silicon is forecast to rise from just 2% of hyperscaler capital expenditure in 2023 to 13% by 2027.
Broadcom exemplifies this expansion. CEO Hock Tan has set an ambitious goal: building the custom business to north of $100 billion by decade’s end. Broadcom’s AI sales are now projected to reach $90 billion by fiscal 2030—potentially even $120 billion in aggressive scenarios. The company recently disclosed a $10 billion AI infrastructure order from a hyperscaler customer (widely reported to be OpenAI), focused on custom ASICs expected to significantly boost revenue in fiscal 2026 and 2027.
Tan has disclosed that custom AI chip demand from just three major customers could hit $60 to $90 billion by 2027. Google, Meta, and OpenAI are already deep customers. Apple and Arm may be joining the pipeline.
This growth is not zero-sum. The total addressable market is expanding rapidly enough that custom silicon and merchant GPUs can grow simultaneously, accommodating diverse compute requirements.
CPU Market Renaissance
The CPU market is experiencing an AI-driven renaissance. The server CPU TAM is expected to grow at an 18% compound annual growth rate through 2030, reaching approximately $60 billion, up from $26 billion in 2025.
This expansion is driven by both increased demand for agentic AI on general-purpose servers and the architectural requirements of integrated rack-scale systems like NVIDIA’s NVL72, which use 36 host CPUs per rack compared with the one or two found in traditional servers. The effect compounds as accelerator counts scale to 144 and beyond per rack, and as additional DPUs are deployed alongside the accelerators.
We are building our own model for CPU growth, which estimates the market could expand from the mid-$20-billion range to roughly $60 billion in 2030. AMD’s projections echo those estimates: the company expects the AI inflection alone to drive roughly $30 billion of incremental CPU revenue by 2030, with explicit targets to exceed 50% share in data-center CPUs over that timeframe. The expansion benefits Intel and Arm as well, across general-purpose, special-purpose, and AI head-node platforms.
The Connectivity Fabric: Networking Infrastructure
The concept of the AI Factory relies fundamentally on a high-performance connectivity fabric that integrates massive clusters of accelerators into a unified supercomputer. As AI workloads scale to clusters exceeding 100,000 accelerators, the speed, reach, and power efficiency of the interconnects become as critical as the compute units themselves.
The networking silicon TAM, excluding storage, is forecast to reach approximately $75 billion by 2030. AI data-center switches alone will grow from roughly $4 billion in 2024 to about $19 billion by 2030—nearly 30% compound annual growth.
The optical interconnect market is on a similar trajectory. Projections cluster around $22-27 billion for the optical-transceiver market by 2030, with more aggressive scenarios pushing beyond $30 billion by the early 2030s as 800G and 1.6T deployments accelerate.
The transition to 1.6 terabit networking is creating supply-chain tightness that mirrors the constraints seen in compute. AI network port counts are projected to grow to approximately 150 million by 2029, implying a roughly 40-50% compound annual growth rate. Demand for advanced laser and modulator components continues to far outstrip supply, with multiple vendors effectively sold out of leading-edge optical components well into 2026.
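The baseline implied by that port-count projection can be backed out directly. The 2029 target (~150 million ports) and the 40-50% CAGR range are the article's figures; the derived 2024 starting points below are illustrative arithmetic, not reported data:

```python
# Back out the implied 2024 port count from the 2029 target and the CAGR range.
# The 2029 figure and CAGR range come from the text; the baselines are derived.
ports_2029 = 150e6
years = 2029 - 2024

for growth in (0.40, 0.50):
    implied_base = ports_2029 / (1 + growth) ** years
    print(f"CAGR {growth:.0%}: implied 2024 base of about {implied_base / 1e6:.0f}M ports")
```

A 40-50% growth rate therefore implies a 2024 installed base somewhere in the low tens of millions of AI network ports.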
Memory and Storage: The Capacity Bottleneck
High Bandwidth Memory Supercycle
High Bandwidth Memory has emerged as the primary enabler of accelerated computing, with demand far outstripping supply as GPU clusters scale.
Global HBM industry revenue is projected to roughly double to the mid-$30-billion range in 2025. The HBM TAM will grow more than sixfold from around $16 billion in 2024 to more than $100 billion by 2030—a market larger than the entire DRAM industry (including HBM) in 2024.
By 2030, HBM is expected to contribute on the order of half of total DRAM industry revenue, up from less than 20% today.
The manufacturing complexity creates spillover effects across the broader memory market. HBM3E consumes approximately three times the wafer supply of standard DDR5 to produce the same number of bits. This ratio is expected to rise to 4:1 for HBM4, naturally constraining the overall industry supply of non-HBM products and tightening the broader memory market.
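The spillover effect above can be sketched with a simple capacity model. The 3:1 wafer-per-bit penalty for HBM3E is the article's figure; the fixed capacity pool and the allocation splits are hypothetical numbers chosen purely for illustration:

```python
# Illustrative only: how shifting fixed DRAM wafer capacity to HBM shrinks
# total bit output. The 3:1 HBM3E penalty is from the text; the capacity
# units and allocation splits are made-up example values.
TOTAL_WAFERS = 100.0      # arbitrary units of monthly DRAM wafer capacity
HBM_WAFER_PENALTY = 3.0   # HBM3E needs ~3x the wafers of DDR5 per bit

def total_bits(hbm_wafer_share: float) -> float:
    """Total bit output (in DDR5-equivalent units) for a given HBM allocation."""
    hbm_wafers = TOTAL_WAFERS * hbm_wafer_share
    ddr5_wafers = TOTAL_WAFERS - hbm_wafers
    return ddr5_wafers + hbm_wafers / HBM_WAFER_PENALTY

for share in (0.0, 0.2, 0.4):
    print(f"HBM wafer share {share:.0%}: {total_bits(share):.1f} bit-units")
```

Every wafer reallocated to HBM removes roughly two-thirds of its potential bit output from the commodity market, which is why HBM growth tightens supply of non-HBM DRAM even when total wafer capacity is flat; the squeeze steepens at the 4:1 ratio projected for HBM4.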
Enterprise Storage Explosion
The shift toward Retrieval Augmented Generation and inference models is driving an explosion in demand for warm storage. AI server demand is driving a 7- to 11-times increase in enterprise SSD demand per server through 2030. Data creation is expected to roughly quadruple from 2024 levels to more than 500 zettabytes by 2029.
Enterprise SSDs are expected to approach the mid-$40-billion range by 2030, up from the mid-teens billions today. By 2026, server SSDs are expected to surpass smartphones to become the largest application segment for NAND flash—formalizing AI servers as the primary consumer of leading-edge storage.
The System-Level Memory Supercycle
The current environment represents a system-level memory supercycle where DRAM and NAND benefit simultaneously. Hyperscalers are forecasting approximately 50% growth in server DRAM in 2026. Multiple forecasts expect DRAM and NAND supply constraints to persist through 2027 as AI capex layers on top of traditional workloads.
Unlike in past cycles, supply expansion is challenging due to fab space limitations and the extended production times required for advanced HBM and stacked NAND. Technology moats and constrained supply make pricing inelastic: prices rise sharply during periods of strong demand rather than drawing in new capacity.
Wafer-Fab Equipment and Advanced Packaging
Wafer-fab equipment spending is already reflecting the shift. 300mm fab-equipment investment is expected to surpass $100 billion for the first time in 2025 and climb to roughly $140 billion by 2028, with cumulative 300mm WFE spending in the high-$300-billion range from 2026 through 2028.
Advanced-process capacity (7nm and below) is on track to grow nearly 70% through 2028, with advanced-node process equipment alone expected to exceed $50 billion in annual spending by 2028.
The back end of the manufacturing process (test, assembly, and packaging) is seeing its own supercycle. Test equipment sales surged more than 20% in 2025 to record levels—outgrowing even WFE—while assembly and packaging tools climbed high single digits. TSMC’s CoWoS capacity alone is expected to expand by well over 60% from the end of 2025 to the end of 2026.
Global semiconductor companies are planning roughly $1 trillion in new fabrication plants through 2030. Companies in the US ecosystem alone have announced more than half a trillion dollars in private-sector investment in domestic capacity.
Greenfield Economics: Everyone Benefits
The defining characteristic of the semiconductor gigacycle is that the market expansion is large enough to create greenfield opportunities across every segment of the value chain.
This is not a zero-sum competition. The bottleneck-driven nature of AI systems ensures that value capture spreads to memory, storage, networking, and packaging simultaneously.
AI is expected to add over $500 billion in incremental revenue to the semiconductor TAM over the next few years. Data centers will require about $6.7 trillion in cumulative capex by 2030, of which approximately $5 trillion will be dedicated to AI-ready facilities. Annual data-center infrastructure spending is surging toward the $900-billion level at peak.
What Makes This Moment Different
The PC era primarily benefited a narrow set of companies. The smartphone era concentrated gains among mobile-focused suppliers.
The AI gigacycle is lifting the entire semiconductor ecosystem: logic, memory, networking, packaging, and the entire supporting WFE supply chain.
The greenfield nature of the demand means that new entrants and established players alike can find growth. The constraints across every category mean that pricing power is broadly distributed. The synchronized expansion means that the traditional pattern of one segment’s gain being another segment’s loss has been suspended.
When compute is constrained, memory and networking benefit. When memory is constrained, compute and storage benefit. The synchronized nature of the demand ensures that constraints in one area do not reduce spending in another—they increase it.
Everyone is benefiting from the largest market expansion in semiconductor history.