The AI Semiconductor Opportunity
This is a highlight summary of Episode 26 of The Circuit with Ben Bajarin and Jay Goldberg.
In the episode, the hosts debate AI's impact on the semiconductor market and which companies stand to benefit most.
They see three critical questions shaping the AI chip sector:
- Will AI spur net new semiconductor spending or just cannibalize existing CPU/GPU share? The hosts' consensus is that AI should expand the market, with data center operators already accelerating expansion plans to support AI workloads.
- How will the market for inference processing evolve? While Nvidia dominates training silicon today, most future growth will likely come from inference, increasingly at the edge. This creates an opening for custom accelerators and on-device processing.
- Can anyone challenge Nvidia in training? Though its CUDA software advantage is diminishing, Nvidia’s full-stack capabilities will be hard to replicate. However, the large cloud providers want to avoid dependence on a single supplier and will explore alternatives.
The rapid shift to generative AI models like DALL-E and ChatGPT will further accelerate the need for specialized inference processing. Pushing more computation to the edge, onto phones, PCs, and cars, may offer the only viable path given data center constraints.
Apple and Microsoft are well positioned to enable on-device AI thanks to their control of operating systems like iOS and Windows. Android fragmentation poses a greater challenge for Google.
While the AI chip opportunity remains hard to quantify, it should grow notably faster than overall semiconductor spending in the coming years as more enterprise workloads incorporate AI functionality. Though training may stay concentrated among a few players, the expanding inference market promises to spread the benefits more widely.