AMD Advancing AI Event: All Eyes on MI300X

December 7, 2023 / Ben Bajarin

Yesterday, AMD held its Advancing AI event, launching a range of new AI-focused products.


Key Takeaways

  • AMD raised its AI accelerator TAM outlook to $400B+ in 2027 (up from $150B), driven by a faster pace of AI infrastructure buildout
  • MI300X Official Launch with detailed specs and customer testimonials
  • Advancements in the software ecosystem continue to mature around AMD platforms
  • Ryzen AI and the Ryzen 8040 series, with NPUs as dedicated AI accelerators on the SoC

 

What’s Significant

AMD made great strides today in poking at the narrative that Nvidia will own the AI semiconductor platform for the next decade the way Intel owned the PC boom. This debate looms large within the industry and the investor community, so positioning against it is both a challenge and an opportunity.

The first significant observation relates to the market upside CEO Lisa Su shared. Going into the event, we knew AMD saw roughly $2 billion of AI accelerator upside, and from comments on stage it seems the only thing holding that number back is chip availability. More broadly, AMD updated its AI accelerator TAM outlook from $150B to $400B+, a figure that again may be limited only by supply chain capacity. To put that into perspective, the entire semiconductor industry is roughly $600B annually today. If the AI accelerator market alone pushes that toward nearly $1 trillion, you can see the true shape of this upside, and why every semiconductor CEO is calling AI the largest driver of TAM expansion the industry has ever seen.
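The back-of-the-envelope math behind that framing can be made explicit; a minimal sketch using the round figures cited above (illustrative shorthand, not a precise forecast):

```python
# Rough TAM arithmetic using the round numbers cited above (illustrative only).
semis_today_b = 600       # ~$600B: entire semiconductor industry annually today
ai_accel_2027_b = 400     # AMD's updated 2027 AI accelerator TAM outlook

# If the AI accelerator market alone layers ~$400B on top of today's base,
# the industry approaches the $1 trillion mark.
implied_total_b = semis_today_b + ai_accel_2027_b
print(f"Implied industry scale: ~${implied_total_b}B (roughly $1T)")
```

This treats the AI accelerator TAM as largely incremental to today's industry revenue, which is the spirit of the comparison Su drew on stage.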

With that market context, the official launch of the MI300X was the star of the show. AMD was very intentional in positioning it as an AI accelerator on par with Nvidia’s H100 for training but far ahead of the H100 for inference. This matters for a few reasons. Nvidia is strong in training but weaker in inference, partly because the current stage of the AI cycle is predominantly about training. Our view is that when the market shifts to inference it will become far more competitive, opening opportunity for both AMD and Intel. Being competitive in training is a plus, but AMD did not claim to beat the H100 there; instead it focused the MI300X’s differentiation on inference-based AI workloads. That speaks to positioning for where the market is going rather than just where it is today.

AI workloads require software platform maturity so that AI developers can train, tune, and optimize their models. AMD demonstrated meaningful progress on this front, both in advancing its compute and networking ecosystem and in securing new customer commitments, with a particular emphasis on support for the major AI frameworks.

Within its full-stack software strategy, notably with ROCm 6, AMD has established a robust set of libraries and tools tailored for optimized performance on its AI software stack. This underlines the company’s commitment to an integrated, efficient stack that addresses software needs end-to-end.

The customer story is equally noteworthy, with announced partnerships spanning major cloud providers and hyperscalers such as Meta, Microsoft, and Oracle, as well as OEMs and ODMs including Lenovo, Dell, and Supermicro. These relationships highlight AMD’s industry connections and signal that its hardware and software stack is viewed as robust and reliable enough for high-demand business operations.

Lastly, as we have been pointing out, 2024 is going to be the year of the AI PC. While architectural improvements to client silicon via the CPU and GPU will continue to progress, it is the NPU that silicon vendors will talk about as the dedicated AI accelerator for client devices. AMD spent relatively little time on its client silicon beyond highlighting the continued relationship with Microsoft as Windows itself adopts more AI-specific features via Copilot. We emphasize this is just the tip of the iceberg for AI PCs: 2024 will set the stage for a much bigger narrative in 2025, in our view.
