Nvidia Q4 2023 Earnings: Sustainable and Defendable Growth In Sight

November 22, 2023 / Ben Bajarin

As expected, Nvidia had a beat and raise. Below are key takeaways from the quarter and the call.


Key Takeaways

  • Nvidia’s guide and commentary should quell any concerns about growth into 2024.
  • A key question from investors was about fending off competitors; CEO Jensen Huang articulated a compelling case for Nvidia’s competitive advantage.
  • AI Factories and Sovereign AI Infrastructure are new growth concepts.
  • China revenue will be an issue, but commentary indicated the decline will be more than offset by data center growth in other regions.

The growth outlook was well summed up by this succinct comment from Nvidia’s CEO, Jensen Huang: “Generative AI is the largest TAM expansion of software and hardware that we’ve seen in several decades.”

Over the course of the next few years, all data centers, cloud service providers, bare metal colocation facilities, and on-premise server hardware will need to be refreshed due to the computing demands of AI. While this is TAM expansive, the dollars up for grabs include both hardware refresh dollars and additional dollars to implement AI-specific solutions holistically across every part of the computing stack.

A main question, and narrative theme, surrounding Nvidia is how long they can remain the dominant beneficiary of the AI hardware refresh in the data center. Jensen Huang shared two answers worth highlighting.

First, he mentioned the installed base. Nvidia’s GPUs have the dominant share of the data center and enterprise installed base. It is for that reason that they have such a large developer base. To draw a parallel, I think a point about Apple is relevant. Apple has the largest third-party developer community of any platform company, and a big reason is Apple’s installed base among the world’s most valuable customers. Developers have the best chance of monetizing their software on Apple platforms because of this large installed base of customers who are more likely to have the means to pay for software. Similarly, Nvidia has a large installed base where many of the most valuable software workloads run on Nvidia GPUs. Because of this installed base, Nvidia’s developer loyalty is akin to Apple’s and needs to be strongly considered as part of Nvidia’s sustainable competitive advantage.

The other key differentiator mentioned by Jensen Huang was architecture compatibility. Developers do not need to think about or target specific products in Nvidia’s offerings but only develop their software in CUDA, and they can be confident it will run on Nvidia’s platforms. The company’s dedication to this principle over many years has resulted in a stable platform that developers can rely on. This stability and compatibility are cited as reasons developers often build and optimize on Nvidia’s platform first. These two things, Nvidia’s large installed base and architecture compatibility, are key reasons to believe in their sustained competitive advantage.
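
To make the compatibility point concrete, here is a minimal sketch, assuming a machine with an Nvidia GPU and the Numba package installed (the kernel, variable names, and setup are my own illustration, not anything from the call): the same CUDA kernel source is compiled at runtime for whatever Nvidia GPU is present, so the developer targets CUDA rather than a particular product generation.

```python
# Illustration of CUDA's "write once, run on any Nvidia GPU" model using
# Numba's CUDA JIT. The kernel is not tied to a specific GPU product; it is
# compiled for whatever CUDA-capable device is detected at runtime.
import numpy as np
from numba import cuda

@cuda.jit
def scale(out, x, alpha):
    i = cuda.grid(1)              # absolute thread index across the grid
    if i < x.shape[0]:
        out[i] = alpha * x[i]     # simple element-wise workload

x = np.arange(1_000_000, dtype=np.float32)
out = np.zeros_like(x)

threads_per_block = 256
blocks = (x.shape[0] + threads_per_block - 1) // threads_per_block
scale[blocks, threads_per_block](out, x, np.float32(2.0))

print(out[:5])  # [0. 2. 4. 6. 8.]
```

The same script runs unchanged on a laptop GPU or a data center accelerator; only the performance differs, which is the stability developers are said to build and optimize against first.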

One last comment on competition: as long as the main demand for computing remains in training, Nvidia is the clear winner. We believe that will be the case throughout 2024. Once the market shifts more toward inference on these models, we see more competition emerging.

AI Factories

On the call, Jensen Huang outlined the new class of data center needed for AI as an AI factory. We briefly explore how to think about this concept.

AI factories are described as a new class of data centers. Unlike traditional data centers that store many files and run many applications used by different tenants, these AI factories are used by one tenant and process data, train models, and generate AI. These AI factories are being built globally, and their expansion is seen in large language model startups, generative AI startups, and consumer internet companies.

Another discussion gave an example of AI factories via Nvidia’s partnership with Microsoft. Nvidia’s AI factories, available on Azure as DGX Cloud, are part of the strategy to help customers build their own custom large language models. This can be likened to being a foundry for AI models, similar to how TSMC is a foundry for semiconductors.

Sovereign AI Infrastructure

Another new angle shared by Jensen Huang on the call was around sovereign AI infrastructure. Briefly, here is an outline of how to think about this.

Nations are increasingly recognizing the need to invest in sovereign AI infrastructure to support economic growth and industrial innovation. Countries are leveraging their own data to train large language models, fostering local generative AI ecosystems and technological self-reliance. For instance, India is collaborating with Nvidia and major tech companies to enhance its domestic AI capabilities. Similarly, a French cloud provider is building a regional AI cloud based on Nvidia’s technology to stimulate investment across France and Europe. These developments underscore the growing global investment in computing capacity as a new economic priority.

This sovereign AI infrastructure also fits into our thinking about how large enterprises will want to keep some of their AI infrastructure at the edge, meaning the concept applies to large organizations, not just nations.

China Risk

Without supply chain constraints and China impact, it is interesting to consider how much larger the quarterly growth could have been.

The new US export regulations classify Nvidia products for China into three tiers based on performance thresholds: products clearly exceeding the thresholds require a license, lower-performance products still need a license but with easier approvals anticipated for commercial uses, and the lowest-performance segment needs no license. Chinese domestic solutions could become highly competitive for the products not subject to licensing requirements, so Nvidia may not see sharp demand growth in China until significant license volumes are approved for its higher-performance offerings destined for non-defense applications.

If a meaningful percentage of licenses is approved for those upper Nvidia product tiers, enabling commercial deployments, substantial improvement in the China business is possible relative to the currently projected declines. However, visibility on longer-term license approval trends remains limited. How this regulatory framework impacts the ability of Chinese firms to leverage Nvidia’s premier AI accelerators for next-generation use cases will be a crucial factor in determining commercial outcomes in this complex and evolving market.


Quarter Details

Details on the quarter: October revenue of $18.120bn (up 34.2% q/q and 205.5% y/y) beat Street expectations of $16.191bn and our estimate of $16.185bn.

By segment:

  • Gaming was up 15.0% q/q and 81.0% y/y to $2.86bn
  • Data Center revenue of $14.51bn was up 40.6% q/q and 278.7% y/y
  • Professional Visualization was up 10.9% q/q and 108.0% y/y to $416mn
  • Automotive was up 3.2% q/q and up 4.0% y/y to $261mn
  • OEM & Other was up 10.6% q/q and flat y/y to $73mn
  • Gross margin of 75.0% was above the Street estimate of 72.4% and our estimate of 73.0%
  • Non-GAAP EPS of $4.02

Nvidia once again guided to sharp sequential revenue growth: $20bn at the midpoint for the January quarter. Guided revenue of $20bn (up 10.4% q/q and 230.5% y/y) is ahead of the Street estimate of $17.958bn and our estimate of $19.434bn, with data center clearly driving the majority of the growth. The company guided gross margin to 75.5%, compared to the Street at 72.0% and our estimate of 73.6%.
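
For readers who want to re-derive the headline math, a short Python sketch using only the figures cited above (the helper and variable names are my own) recomputes the beat, the data center mix, and the guided sequential growth:

```python
# Quick sanity check of the figures cited above (all revenue in $bn).
# The numbers come from the quarter details; the helper is illustrative.

def pct_change(current: float, prior: float) -> float:
    """Percentage change from prior to current."""
    return (current / prior - 1) * 100

oct_revenue = 18.120       # reported October-quarter revenue
street_estimate = 16.191   # Street consensus for the quarter
data_center = 14.51        # data center segment revenue
jan_guide = 20.0           # guided midpoint for the January quarter

print(f"Beat vs. Street:    ${oct_revenue - street_estimate:.3f}bn")
print(f"Data center share:  {data_center / oct_revenue:.1%}")
print(f"Guided q/q growth:  {pct_change(jan_guide, oct_revenue):.1f}%")
# -> roughly a $1.9bn beat, ~80% data center mix, ~10.4% guided sequential growth
```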
