The Developer Battle and On-Device AI Compute
This is essentially two different but interlinked analyses of the opportunity looming for AI development. On the back of Microsoft BUILD, where the company outlined the fundamental computing platform shift we are witnessing, I wanted to set the broader market context, the opportunity, and the evolution of computing relative to AI.
Imminent Market Opportunity
When we say AI in this context, what we really mean is a true natural language interface deeply integrated into an application's user interface, and it is clear we are on the cusp of the next major user interface evolution. When you look at how generative AI is being integrated into applications like Office, the Adobe suite, Google Workspace, Windows as a whole with Copilot, and many more, it is clear generative AI is changing how we interact with the applications we use on a daily basis. The power of this technology is unlocked in its promise to save us time, help us get more done, and make us more productive in less time. This promise of higher productivity with less monotonous work is likely the driving force making generative AI tools the fastest-adopted technology in industry history. I believe it is abundantly clear generative AI, and AI broadly, will touch every workflow we do with computers today over the next two to three years.
This cycle is different from past technology adoption cycles because no new hardware is technically required to take advantage of these new technologies. Developers right now have an active and interested addressable market measured in billions of people. Past technology cycles required new hardware to roll out before developers could capitalize on a new computing paradigm at any scale. Because many of these new AI-driven apps and services run in the cloud, they can be accessed by anyone with a modern desktop, laptop, tablet, or smartphone. There may be no more exciting time to be a software developer than right now.
The Battle For Developers
As with all new technology platforms, developers are essential to boosting an ecosystem and driving economic value. Last year, at its Worldwide Developers Conference, Apple announced it has 34 million registered developers. The number of developers on Apple's platform is many times higher than its closest competitors'. Much of the gap in Apple's developer community comes from smaller, independent developers who primarily develop apps only for Apple platforms. In a study we conducted on Apple's developer community, we found only 4% of Apple's smaller independent developers wrote apps for Windows. But many of them did write web-based apps or services.
While the latter point, particularly as those developers integrate more AI into their software and services, is good for Azure, Microsoft would love more developers to write native apps for Windows as well. This is why at Microsoft BUILD this week, Microsoft told a compelling story about the industry-leading AI infrastructure it has with Azure and the newly released tools that make Windows development even easier for all developers.
This may be the most exciting time to be a developer in the technology industry's history, and that makes this platform battle for developers that much more interesting.
On-Device vs. Cloud AI
On The Circuit, a semiconductor industry-focused podcast I co-host with Jay Goldberg, we spent a whole episode discussing the inevitable evolution of AI computing workloads from running primarily in the cloud, as they do today, to offloading relevant workloads to on-device silicon. The main point to be made is why AI workloads will inevitably move to the edge. I assumed it was a foregone conclusion this would happen, but apparently, the more common belief is that these workloads will remain in the cloud. That isn't feasible.
Even as deep learning and machine learning algorithms improve, the hurdle that will remain is cost. Running all of these AI workloads in the cloud is not economically viable. Furthermore, even if it were, there is no foreseeable future where operators can secure enough critical silicon components to scale their data centers to meet the demands of tens of millions of developers hosting AI applications and services only in the cloud. Beyond cloud inference remaining extremely expensive, and data centers already struggling to meet the compute demands of AI workloads today, there are numerous reasons to push AI workloads to devices. Below is a list of a few of the main value propositions for on-device AI workloads.
- Speed: Cloud-based workloads will have latency. When using a generative AI product today, you may notice that the speed or responsiveness varies: sometimes you may get a paragraph or more per second, and other times 20-30 words per second or less (see the back-of-envelope sketch after this list). As more of these generative AI workloads move to the device, there will be little to no latency, so using generative AI to write your email will be nearly instantaneous, fulfilling the core promise of saving us time in our day-to-day workflows.
- Security and Privacy: On-device AI processing can help protect user data from unauthorized access because data is never sent to the cloud, where it could be intercepted by hackers or other malicious actors. This will become increasingly important as organizations grow more sensitive to the vulnerability of company data and IP in the cloud. As workplace AI evolves, many companies will likely mandate that numerous generative AI applications run only on the device.
- Personalization: As we begin to integrate more AI into our daily workflows, we want these tools to adapt to our styles and work with us. At Microsoft BUILD this week, the company positioned a new Windows feature called Windows Copilot as not being an "autopilot" but truly more of a companion helping you in your creative and productive life. So what if these tools could learn our unique workflows, language preferences, calendar priorities, and more, and provide a uniquely AI-enhanced experience for their owner? This would require much more computing than is on any silicon company's roadmap today, but it is something that will only happen with on-device AI.
- Accuracy: The last point I want to make for on-device AI is about accuracy. We all know the larger models like Bard, Bing Copilot, and ChatGPT tend to get things wrong occasionally. That won't always be the case, but the smaller, more finely tuned models running on devices will tend to be more accurate, which matters in mission-critical workflows.
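To make the speed point above concrete, here is a rough back-of-envelope sketch of how generation throughput translates into waiting time. The word rates are illustrative assumptions (the 20-30 words per second mentioned in the Speed item, plus hypothetical faster cases), not measured benchmarks of any product:

```python
# Back-of-envelope: how long it takes to draft a ~150-word email at
# different generation rates. All rates are illustrative assumptions,
# not measured benchmarks of any specific product.

EMAIL_WORDS = 150  # a typical short email

def seconds_to_draft(words_per_second: float, words: int = EMAIL_WORDS) -> float:
    """Time to generate `words` at a steady throughput."""
    return words / words_per_second

scenarios = {
    "congested cloud (20 words/s)": 20.0,
    "fast cloud (100 words/s)": 100.0,
    "on-device, little to no latency (500 words/s)": 500.0,
}

for label, rate in scenarios.items():
    print(f"{label}: {seconds_to_draft(rate):4.1f} s")
```

At 20 words per second, a short email takes seven to eight seconds to appear; remove the network round trip and push the rate up, and the experience starts to feel instantaneous.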
Like the data center, it will be very easy for on-device AI workloads to outstrip the compute capacity of devices like smartphones, tablets, and laptops/desktops. Services like ChatGPT are built on large language models with hundreds of billions of parameters. The largest model I could find running locally on a device today was 17 billion parameters. It seems that for the next few years, models between 10 and 30 billion parameters will be the maximum size that can run on devices. Overall, I think this AI boom could cause some interesting silicon design changes as companies try to meet the demand for on-device processing of AI workloads. This makes for exciting competition among all those working on relevant SoCs for these device categories, like Apple, Intel, AMD, Qualcomm, and MediaTek.
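To put those parameter counts in perspective, the weights alone set a memory floor on any device. Here is a minimal sketch, assuming the standard bytes-per-parameter figures for common numeric precisions (real deployments add further overhead for activations, context caches, and the runtime itself):

```python
# Memory floor for holding a model's weights at different precisions.
# A minimal sketch: real deployments also need memory for activations,
# context caches, and the runtime itself.

BYTES_PER_PARAM = {"fp32": 4.0, "fp16": 2.0, "int8": 1.0, "int4": 0.5}

def weight_memory_gb(params_billions: float, precision: str) -> float:
    """Gigabytes required just to store the weights."""
    # (params_billions * 1e9 params) * (bytes/param) / 1e9 bytes per GB
    return params_billions * BYTES_PER_PARAM[precision]

for size_b in (7, 13, 30, 175):  # model sizes in billions of parameters
    row = "  ".join(
        f"{prec}: {weight_memory_gb(size_b, prec):6.1f} GB"
        for prec in BYTES_PER_PARAM
    )
    print(f"{size_b:>4}B params -> {row}")
```

Under these assumptions, a 175-billion-parameter model needs roughly 350 GB just for its weights at 16-bit precision, far beyond any phone or laptop, while a 13-billion-parameter model quantized to 4 bits fits in about 6.5 GB. That is why the 10 to 30 billion parameter range looks like the practical on-device ceiling for the next few years.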
The AI SoC race is on to empower more AI workloads on-device, and it will be fascinating to see how these companies use their chip design prowess and transistor budgets to further expand on-device AI capabilities and absorb more primary AI workloads on our personal computers.