Research Archive
All About Agents: Cheap Tokens, Local Models, and Product Fit
Over the past year or so we’ve heard a lot about agents. Most of it hasn’t really happened. Things have changed quickly, trends are hard to pinpoint, and predictions tend to be near impossible because something you expect to succeed can become irrelevant in three months. If you take a step back…
TSMC COUPE: Why the CoWoS Pattern Is Repeating in Silicon Photonics
While we understand why, much of the current narrative around the optical market is consumed with debating when co-packaged optics scales and which active optics vendors benefit first. We think the more important question is who controls the manufacturing platform that the transition depends on. That distinction matters because the connectivity stack is already moving from…
Arm’s Move Into Merchant Silicon Is About Escaping the Revenue Ceiling
For those of us who have covered Arm for decades, we remember the days when it was a $500-600M annual revenue company that was hardly growing. How times have changed. The evolution of Arm, now clearly framed as Arm 2.0, has been years in the making, with all the stakeholders recognizing that an IP business alone…
HP IQ: The Right Instinct, With a Long Road Ahead
The AI PC category has a credibility problem. Two years into the NPU era, most on-device AI still feels like a solution searching for a problem. The hardware arrived before the software had anything meaningful to say, and Microsoft’s Copilot rollout did little to change that narrative. Rather than building a coherent ecosystem experience, Microsoft…
Intel Pro Day 2026: Intel’s Commercial AI PC Push
The next phase of the AI PC market will be won less by raw AI specifications and more by enterprise usefulness. Intel is repositioning the commercial PC as a managed, secure, on-device AI endpoint that can execute work, observe itself, and improve user experience without sending every workflow back to the cloud. What stands out…
Secret Agent CPU
The Thesis in 60 Seconds
We believe the shift from monolithic LLM inference to multi-step agentic workflows structurally changes the compute mix inside datacenters. Training-era architectures assumed GPUs would dominate every phase of inference. Agentic workloads have challenged that assumption. When an agent calls a tool, queries a database, waits for human approval, or orchestrates…