Intel Foundry Technology Bets: A Full-Stack Strategy for the Disaggregated Era
The semiconductor industry is entering a new phase, one defined not by squeezing more transistors onto a single monolithic die, but by how intelligently we compose systems from smaller, specialized components. This is the beginning of the disaggregated era of design. As Moore’s Law scaling slows and the cost and complexity of large chips climb, chiplets—modular, functionally distinct pieces of silicon—are emerging as the architectural backbone of next-generation systems.
This shift isn’t just about managing complexity or yield. It’s about unlocking new design freedom. The chiplet era allows companies to optimize each tile for its unique role (AI acceleration, CPU, IO, memory, analog) and assemble those tiles into tightly integrated, high-performance systems. It also elevates the role of chip architects, bringing system-level decisions about power, thermal, bandwidth, and topology into the core of silicon design strategy.
Intel’s latest foundry IP portfolio and roadmap make clear that it sees this shift not as a threat to the old model, but as an opportunity to lead in the new one. Across a broad set of products and technologies, Intel has positioned itself not as just another alternative foundry, but as the foundry purpose-built for the disaggregated era of computing.
In this era, where system performance depends less on monolithic scaling and more on assembling complex systems out of smaller, differentiated tiles, the requirements shift dramatically: you need scalable interconnect, efficient power delivery, flexible packaging, economic cost structures, and the ability to build large systems with modularity in mind. Intel is betting on being the foundry that can deliver all of that in a cohesive, integrated stack.
We have been conducting a deep technical evaluation of Intel Foundry's technologies. This report sets the stage for a broader series in which we will dive deeper into the technologies described below, the ones we view as the anchor bets Intel is making as it positions itself to be the foundry of choice for the disaggregated design era.
PowerVia and RibbonFET: Foundational Leaps
Intel will be the first foundry to bring a fully integrated backside power delivery network (BSPDN) to high-volume manufacturing. PowerVia, coming to production in 18A, separates power and signal paths by moving power delivery to the backside of the wafer. PowerVia is not just a structural improvement—it unlocks tangible design benefits: up to 4% performance uplift at iso-power and up to 10% improved utilization. Compared to Intel 3, 18A delivers 15% better performance per watt and 1.3x chip density.
Unlike partial solutions that route only some power rails to the backside, PowerVia fully removes power delivery from the front-side metal stack. This allows for a dramatic relaxation in front-side signal routing, freeing up area and reducing congestion in high-density logic designs. Intel also uses PowerVia to enable single-patterning of the lower metal layers, reducing overall cost and manufacturing complexity, an often overlooked but critical factor in scaling new transistor architectures.
A common assumption around backside power delivery has been that it introduces a cost tax due to the added fabrication complexity. However, Intel’s implementation on 18A flips that assumption. As shown in Intel’s internal cost model, the cost of implementing backside power is offset by the reduced complexity of the front-side metal layers. By relaxing the minimum metal pitch from under 25nm to 32nm, Intel enables direct-print single patterning, which avoids the multi-patterning steps that dominate cost in traditional flows. The net result: the combined cost of backside power delivery and simplified lower metal layers is projected to be comparable to, or lower than, that of the standard frontside-only flow. This positions PowerVia not only as a performance and routing improvement, but as a cost-competitive and scalable manufacturing innovation.
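To make the shape of that tradeoff concrete, the toy model below compares lithography passes for tight-pitch lower metals against a relaxed-pitch, single-patterned flow that pays a backside processing adder. The cost weights, layer counts, and backside adder are our own illustrative assumptions, not Intel's cost data; the point is only that multi-patterning savings across several layers can plausibly absorb the backside adder.

```python
# Toy wafer-cost comparison: frontside-only multi-patterned lower metals vs.
# relaxed-pitch single patterning plus backside power delivery (PowerVia).
# All cost weights are illustrative assumptions, not Intel data.

def metal_stack_cost(layers: int, passes_per_layer: float, cost_per_pass: float) -> float:
    """Lithography cost for the lower metal layers: layers x passes x cost per pass."""
    return layers * passes_per_layer * cost_per_pass

COST_PER_PASS = 1.0          # normalized cost of one litho pass (assumption)
LOWER_METAL_LAYERS = 4       # number of tight-pitch lower metal layers (assumption)
BACKSIDE_PROCESS_COST = 3.0  # carrier bonding, wafer thinning, backside metals (assumption)

# Baseline: sub-25nm-pitch lower metals need multi-patterning (~2-3 passes per layer).
frontside_only = metal_stack_cost(LOWER_METAL_LAYERS, passes_per_layer=2.5,
                                  cost_per_pass=COST_PER_PASS)

# PowerVia flow: power rails move to the backside, letting the lower metals relax
# to a 32nm pitch printed in a single pass, at the price of backside processing.
powervia_flow = metal_stack_cost(LOWER_METAL_LAYERS, passes_per_layer=1.0,
                                 cost_per_pass=COST_PER_PASS) + BACKSIDE_PROCESS_COST

print(f"frontside-only lower metals:  {frontside_only:.1f}")
print(f"PowerVia + single patterning: {powervia_flow:.1f}")
# With these illustrative weights the two flows land in the same ballpark,
# which is the qualitative point Intel's cost model makes.
```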
Paired with this is RibbonFET, Intel’s gate-all-around (GAA) transistor architecture, which enables tighter electrostatic control and higher drive current than FinFETs. RibbonFET, together with PowerVia, forms the transistor and power foundation of Intel’s chiplet and advanced packaging strategy.
Looking ahead, PowerVia doesn’t stop at 18A. Intel has already confirmed that PowerVia carries forward into Intel 14A as PowerDirect, Intel’s second-generation power delivery innovation, where the backside power implementation is expected to mature further. With the foundation laid at 18A, 14A will likely refine both process integration and design enablement, building on the cost-neutral, performance-positive outcomes already demonstrated. As more customers adopt disaggregated, tile-based designs, the importance of efficient backside power scaling will grow, making PowerVia not just a milestone for 18A but a backbone for Intel’s future nodes.
We think these technologies are strategically important because they represent the foundational capabilities needed to unlock the chiplet era. As disaggregated systems scale, routing density, power integrity, and physical partitioning become key bottlenecks. PowerVia addresses these challenges by decoupling signal and power delivery, enabling more efficient physical design and better thermal behavior. For designers, this opens up a range of options at the power delivery level, creating room to tune for efficiency or performance. RibbonFET provides the electrostatic control and scaling headroom needed for dense, high-performance logic blocks. These technologies are not optional for modular systems; they are essential for building scalable, power-efficient silicon architectures in a post-monolithic world.
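To illustrate why decoupling power from signal routing matters for power integrity, here is a simplified IR-drop comparison. The segment resistances and current are hypothetical placeholders we chose for illustration; they are not measurements of any Intel node.

```python
# Simplified IR-drop illustration: frontside vs. backside power delivery.
# Resistance values and current are hypothetical placeholders for illustration only.

def ir_drop_mv(current_a: float, path_resistances_ohm: list[float]) -> float:
    """Voltage droop (mV) across a series of resistive segments at a given current."""
    return current_a * sum(path_resistances_ohm) * 1e3

LOCAL_CURRENT_A = 0.5  # current drawn by a local block of logic (assumption)

# Frontside delivery: power descends through many thin metal layers and via stacks
# shared with signal routing, so the cumulative series resistance is higher.
frontside_path = [0.010, 0.015, 0.020, 0.025, 0.030]   # ohms per segment (assumed)

# Backside delivery: thick backside metal plus short vias directly under the
# standard cells, so far fewer and lower-resistance segments sit in series.
backside_path = [0.004, 0.006]                          # ohms per segment (assumed)

print(f"frontside IR drop: {ir_drop_mv(LOCAL_CURRENT_A, frontside_path):.1f} mV")
print(f"backside IR drop:  {ir_drop_mv(LOCAL_CURRENT_A, backside_path):.1f} mV")
# Lower droop means either more timing margin at the same supply voltage or the
# same margin at a lower supply -- the efficiency/performance optionality noted above.
```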
EMIB Scaling: Disaggregated at AI Scale
Intel continues to push EMIB (Embedded Multi-die Interconnect Bridge) as its modular 2.5D packaging solution, and it’s now scaling far beyond previous bounds. In a disaggregated approach, EMIB enables die-to-die interconnect through localized embedded bridges placed only where needed. This modular design avoids the cost and thermal constraints of large-area silicon interposers, while still delivering high-bandwidth, low-latency communication between chiplets.
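A rough way to see the "bridges only where needed" economics is to compare the silicon an interposer must span against the silicon consumed by a handful of small embedded bridges. The package size, bridge dimensions, and link count below are our own hypothetical assumptions, not Intel figures.

```python
# Illustrative silicon-area comparison: full 2.5D interposer vs. localized EMIB bridges.
# All dimensions below are hypothetical assumptions for illustration.

PACKAGE_AREA_MM2 = 1600.0     # interposer must span the whole die complex (assumed 40x40mm)
BRIDGE_AREA_MM2 = 2.0 * 8.0   # one small bridge under each die-to-die edge (assumed 2x8mm)
NUM_DIE_TO_DIE_LINKS = 10     # e.g., compute tiles plus memory stacks (assumed)

interposer_silicon = PACKAGE_AREA_MM2
emib_silicon = NUM_DIE_TO_DIE_LINKS * BRIDGE_AREA_MM2

print(f"full interposer silicon: {interposer_silicon:.0f} mm^2")
print(f"EMIB bridge silicon:     {emib_silicon:.0f} mm^2")
print(f"ratio: {interposer_silicon / emib_silicon:.1f}x less bridge silicon")
# Embedding small bridges only where dies abut avoids fabricating (and yielding)
# a reticle-stitched interposer that spans the entire package.
```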
The new EMIB-T variant integrates TSVs and MIM capacitors, enabling stitched die-to-HBM4 connections and vertical power delivery—critical for high-bandwidth, low-noise memory access in AI workloads. The roadmap calls for EMIB-based systems with more than 12x reticle-scale integration and over 24 HBM stacks by 2028.
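As a back-of-envelope on what that memory roadmap implies for the interconnect, the sketch below multiplies an assumed per-stack HBM4 bandwidth out to 24 stacks. The interface width and per-pin rate are assumptions broadly in line with public HBM4 discussion, not numbers from Intel.

```python
# Back-of-envelope aggregate HBM bandwidth for an EMIB-T based package.
# Per-stack parameters are assumptions roughly in line with public HBM4 discussion.

HBM_STACKS = 24            # stacks per package, per the 2028 roadmap figure
INTERFACE_BITS = 2048      # HBM4 interface width per stack (assumption)
PIN_RATE_GBPS = 8.0        # per-pin data rate in Gb/s (assumption)

per_stack_tbps = INTERFACE_BITS * PIN_RATE_GBPS / 8 / 1000   # TB/s per stack
aggregate_tbps = per_stack_tbps * HBM_STACKS

print(f"per-stack bandwidth: ~{per_stack_tbps:.1f} TB/s")
print(f"aggregate bandwidth: ~{aggregate_tbps:.0f} TB/s across {HBM_STACKS} stacks")
# Every one of those links crosses an embedded bridge, which is why bridge density,
# TSV-based power delivery, and MIM decoupling matter at this scale.
```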
Further, EMIB’s differentiators extend beyond interconnect density: cost efficiency benefits from the use of small bridges and panel-based manufacturing, while cycle time is reduced because chip-on-wafer assembly steps are avoided. EMIB provides a scalable, economically viable path for multi-tile, HPC- and AI-centric system design.
That level of scaling is critical in the disaggregated design era, where compute, memory, and specialized accelerators are often developed on different process nodes or sourced from different foundries. EMIB enables these disparate chiplets to be tightly integrated into a single, high-performance system without the yield, cost, or routing constraints of large monolithic dies or interposers.
As we have dug into how these technologies fit Intel’s strategy, we have come to believe EMIB plays a unique role: it functions as a strategic bridge between foundries, enabling customers to integrate dies sourced from both Intel and external foundries within the same package, without needing to commit exclusively to either.
EMIB as a Strategic Bridge
EMIB is more than a technical enabler — it’s a strategic lever for Intel Foundry. As a packaging technology, it offers a neutral, interoperable platform that allows customers to combine dies from multiple foundries in a single integrated system. This capability is redefining how customers evaluate foundry relationships — not as exclusive commitments, but as composable ecosystems.
EMIB allows heterogeneous dies from different foundries to coexist in a single package without requiring redesign or system-level compromise. It eliminates the legacy constraint of committing an entire SoC, or chiplet design, to a single foundry and opens up the possibility of selecting the best technology or partner for each die. This level of flexibility gives customers new design freedom while preserving existing investments.
Because EMIB uses standard die-to-die interfaces and localized embedded bridges, chiplets do not need to be co-optimized for a shared interposer or reticle-bound layout. This modularity lets customers adopt Intel’s advanced packaging capabilities while continuing to use silicon sourced from other foundries, with minimal impact on current design flows.
Strategically, EMIB supports de-risked dual sourcing. Customers can start sourcing advanced-node wafers from Intel alongside existing foundry relationships—appealing to hyperscalers, defense contractors, and AI infrastructure builders seeking supply chain diversity and resilience.
Additionally, EMIB enables mixed-node, mixed-function integration: compute from one node, IO from another, and memory from a third can coexist within the same tightly coupled package. This architectural flexibility allows each function to be optimized independently without compromising overall system integration.
Perhaps most importantly, EMIB opens a natural path for deeper Intel engagement. Customers who start with EMIB-based packaging can later expand into Intel’s broader technology stack, such as PowerVia, RibbonFET, and co-packaged optics, as system needs evolve. In this sense, EMIB is not just a bridge between chiplets; it is a bridge into Intel’s future ecosystem. It also reframes the competitive narrative around Intel Foundry: rather than requiring customers to choose between foundries, EMIB enables a hybrid model in which customers adopt Intel’s advanced packaging while continuing to manufacture dies at their preferred foundry, tapping into Intel’s packaging and integration roadmap without being locked in, and unlocking new system-level differentiators in a modular, chiplet-based world.
Foveros Evolution: 3D Flexibility Across the Stack
Intel’s Foveros roadmap spans multiple stacking approaches. Foveros-S, R, and B are all 2.5D-style architectures offering different bump pitches, redistribution layers, and substrate-level options. These allow for optimized integration of compute, analog, or IO tiles in systems where cost, density, or form factor vary by market.
Foveros Direct, by contrast, is Intel’s true 3D integration platform. It uses hybrid bonding with bond pitches of ≤5 microns and densities of up to 10,000 connections/mm²—ideal for memory-on-logic stacking in AI inference or dense edge compute.
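The link between bond pitch and connection density is simple geometry: on an idealized square grid, density scales with the inverse square of the pitch, and real layouts land below that ceiling once keep-out zones and power connections are accounted for. A minimal sketch, with illustrative pitch values of our own choosing:

```python
# Geometric ceiling on die-to-die connection density for a given bond pitch.
# Assumes an idealized square grid; real layouts land below this ceiling.

def max_connections_per_mm2(pitch_um: float) -> float:
    """Upper bound on connections per mm^2 at a given pitch (square grid)."""
    return (1000.0 / pitch_um) ** 2

for pitch in (36.0, 25.0, 10.0, 5.0):   # microbump pitches down to hybrid-bond territory
    print(f"{pitch:>5.0f} um pitch -> up to {max_connections_per_mm2(pitch):>8,.0f} / mm^2")
# Around a 10 um pitch the ideal grid already supports ~10,000 connections/mm^2;
# pushing toward 5 um and below is what makes fine-grained memory-on-logic
# stacking practical.
```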
By providing this range of solutions, Intel gives customers composable flexibility across vertical and horizontal dimensions. And when used alongside EMIB, Foveros enables hybrid 2.5D/3D systems that can scale compute and memory while balancing yield, cost, and thermal design.
What makes Foveros a strategic differentiator is its ability to support both incremental and ambitious system scaling without forcing a one-size-fits-all packaging model. In a disaggregated world, different dies may require different physical, electrical, or thermal treatments—Foveros allows each layer of the system to be optimized independently. This flexibility is key for advanced use cases like AI, where logic and memory co-location is critical, but power budgets and physical constraints are unforgiving. Foveros also provides a pathway to stack analog or IO dies below compute tiles, enabling tighter integration and form factor innovation. As more systems move to chiplet-based architectures, Intel’s ability to support fine-grained stacking at multiple levels gives it a packaging capability unmatched by more monolithic or node-constrained competitors.
Strategic Takeaway
Intel is not just aligning with the disaggregated future — it’s building for it. Each of the technologies detailed in this report reflects a deliberate choice to invest where the industry is headed: PowerVia to reset the power delivery model, RibbonFET to push logic scaling forward, EMIB and Foveros to enable flexible multi-tile integration, and co-packaged optics to remove bandwidth ceilings beyond the package.
These are not marginal gains—they’re infrastructure-level moves designed to address the new bottlenecks that emerge in a world of chiplets, heterogeneity, and system-level design. Together, they position Intel to serve customers building increasingly modular compute platforms that demand fine-grained optimization across silicon, packaging, and interconnect.
This is no longer a conversation about catching up in traditional mobile SoCs. It’s about enabling next-generation AI, HPC, and hyperscale systems where silicon composition, not just transistor count, defines competitiveness. Intel’s bet is that tomorrow’s most important compute systems won’t be monolithic—and it is shaping its foundry offering to reflect that belief.