Discover how Intel advances with ZAM technology, overcoming HBM limits and boosting bandwidth in high-performance AI chips.
Intel has presented a proposal that could change the course of high-performance memory for artificial intelligence. According to reports from the Adrenaline website and other outlets on May 3rd, the so-called ZAM (Z-Angle Memory) technology emerges as a direct alternative to HBM, promising up to 5.3 TB/s of bandwidth per stack, roughly double the bandwidth of current solutions.
Developed in partnership with SoftBank, the innovation will be detailed during the VLSI Symposium 2026, with support from SAIMEMORY, a subsidiary of the Japanese group. The expectation is that this architecture will be ready for production between 2028 and 2030, directly targeting applications in GPUs and AI accelerators, where the demand for performance is growing rapidly.
Intel’s ZAM Technology Born to Break Current Bandwidth Limits
The evolution of artificial intelligence demands increasingly robust memory solutions. Intel, by investing in ZAM technology, seeks to solve a central problem: the limitation of bandwidth in current HBM-based systems.
Today, HBM dominates the high-performance segment, especially in data centers and AI-focused boards. However, as models become more complex and demanding, bottlenecks emerge that are difficult to overcome. Data transfer needs to be faster, more efficient, and stable — and it is precisely at this point that the proposal for the new generation of chips with ZAM gains traction.
The promise is not merely incremental. In targeting up to twice the bandwidth, Intel signals a structural change capable of redefining technical standards in the sector.
ZAM Technology’s Vertical Architecture Boosts Performance and Simplifies Design
One of the most relevant aspects of ZAM technology lies in its construction. The design uses a vertical stack of nine layers: eight dedicated to DRAM and a central layer responsible for logic control.
This approach eliminates the need for multiple distributed controllers, common in HBM solutions. In practice, this means a more integrated system, with lower operational complexity and greater efficiency in internal communication.
Another technical highlight is the use of TSVs (through-silicon vias). The architecture concentrates roughly 13,700 interconnection paths, allowing data to move at high speed between layers.
Furthermore, the structure uses layer-to-layer silicon substrates only 3 micrometers thick, increasing density and contributing directly to the bandwidth gain.
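A quick back-of-envelope calculation puts these figures in perspective. If the reported 5.3 TB/s were spread evenly across all ~13,700 TSV lanes, each lane would only need to sustain a few gigabits per second. The even split is an illustrative assumption, not a detail Intel has confirmed:

```python
# Back-of-envelope: per-lane throughput implied by the article's figures.
# The even split across all TSV lanes is an illustrative assumption,
# not a confirmed design detail.
stack_bandwidth_tb_s = 5.3   # TB/s per stack (reported)
tsv_lanes = 13_700           # interconnection paths (reported)

bytes_per_lane = stack_bandwidth_tb_s * 1e12 / tsv_lanes  # bytes/s per lane
gbits_per_lane = bytes_per_lane * 8 / 1e9                 # Gb/s per lane

print(f"{gbits_per_lane:.1f} Gb/s per TSV lane")  # prints: 3.1 Gb/s per TSV lane
```

A signaling rate on that order is modest by modern interconnect standards, which suggests the bandwidth claim rests on massive parallelism rather than exotic per-lane speeds.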
More Data in Less Space: Density and Capacity of the New Generation of Chips
Storage capacity also stands out. Each layer of ZAM technology delivers approximately 1.125 GB, totaling around 10 GB per stack. In a complete package, this number can reach 30 GB, maintaining high space efficiency.
The stack has dimensions of approximately 171 mm² (15.4 x 11.1 mm) and achieves an estimated bandwidth density of 0.25 Tb/s per mm², a significant figure in the current landscape of advanced memories.
This level of integration is fundamental for the new generation of chips, which needs to handle large volumes of data in real time. The combination of density and speed reinforces Intel’s potential to compete with HBM-based solutions.
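The quoted figures hang together: converting 5.3 TB/s to terabits and dividing by the roughly 171 mm² footprint reproduces the stated 0.25 Tb/s per mm². A minimal check, using only numbers taken from the article:

```python
# Check the quoted bandwidth-density figure against the article's own numbers.
stack_bandwidth_tb_s = 5.3       # TB/s per stack (reported)
die_w_mm, die_h_mm = 15.4, 11.1  # stack footprint (reported)

area_mm2 = die_w_mm * die_h_mm                          # footprint in mm²
density_tbps_mm2 = stack_bandwidth_tb_s * 8 / area_mm2  # Tb/s per mm²

print(f"{area_mm2:.0f} mm², {density_tbps_mm2:.2f} Tb/s per mm²")
# prints: 171 mm², 0.25 Tb/s per mm²
```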
Energy Efficiency and Thermal Control Put ZAM Ahead of HBM
One of the main challenges faced by HBM is related to the heat generated during operation. As more layers are added, the heat tends to concentrate, directly affecting the system’s performance and stability.
According to Intel and its partners, ZAM technology takes a different approach: its architecture prevents heat from passing through the wiring layer, reducing thermal buildup and improving dissipation.
Additionally, energy consumption is optimized, especially in high data transfer scenarios. This is particularly relevant in environments like data centers, where energy efficiency directly impacts operational costs.
Among the main highlighted advances:
- Reduction of heat buildup in internal layers
- Better thermal distribution due to vertical design
- More efficient energy consumption in intensive transfers
- Greater stability in prolonged workloads
These factors reinforce the competitiveness of ZAM technology against HBM, especially in critical applications.
Technical comparison highlights direct competition between ZAM and HBM
The arrival of ZAM technology intensifies the competition with HBM, especially with future versions like HBM4 and HBM4E. Although these evolutions are still in development, Intel's proposal already presents impressive numbers.
Among the main points of comparison:
- Bandwidth: ZAM can reach up to twice that of current HBM
- Bandwidth density: about 0.25 Tb/s per mm²
- Capacity per package: up to 30 GB
- Architecture: vertical stacking with unified controller
- Interconnection: roughly 13,700 TSV lanes
Additionally, ZAM uses a packaging model described as 3.5D, which combines three-dimensional elements with horizontal integration. This format allows for the inclusion of features like silicon photonics and advanced input-output connections.
This set of characteristics positions the solution as a strong candidate for high-demand applications in the new generation of chips.
Direct impact on the advancement of artificial intelligence and modern GPUs
The evolution of artificial intelligence directly depends on the ability to move large volumes of data quickly. In this scenario, bandwidth becomes a critical factor.
Intel, by investing in ZAM technology, seeks to meet a growing demand for performance in applications such as generative AI, large-scale data analysis, and complex simulations.
Today, HBM already plays an essential role in this ecosystem but faces limitations that may become more evident with the advancement of workloads. ZAM emerges as a possible solution to this challenge.
Among the expected impacts:
- Faster training of AI models
- Reduced latency in data processing
- Better performance in next-generation GPUs
- Greater efficiency in data center infrastructures
With this, the new generation of chips can reach unprecedented levels of performance, driven by the evolution of memory.
What to expect from ZAM technology in the coming years
Despite its promising potential, ZAM technology is still in the development phase. Intel, along with SoftBank and SAIMEMORY, is working to validate the architecture and ensure its viability on an industrial scale.
The production forecast between 2028 and 2030 indicates that there are still challenges to be overcome. Adoption will depend on factors such as compatibility with existing systems, manufacturing costs, and market acceptance.
The presentation at the VLSI Symposium 2026 will be an important step to consolidate the proposal and demonstrate its operation under real conditions.
Even with uncertainties, Intel's move reinforces a clear trend: the search for solutions that expand bandwidth without compromising energy efficiency and stability.
A silent shift that could redefine the future of chips
The arrival of ZAM technology represents more than an incremental evolution. It is a concrete attempt by Intel to reposition its operations in a highly competitive market, dominated by HBM-based solutions.
By proposing an architecture capable of delivering up to 5.3 TB/s per stack, with improvements in thermal dissipation and energy consumption, the company points to a future where the new generation of chips will be more efficient, faster, and adapted to the demands of artificial intelligence.
Although the path to large-scale adoption involves technical and commercial challenges, initial data indicates that ZAM has the potential to become a new standard. If this is confirmed, the impact will be felt across the entire industry — from data centers to the most advanced computing applications.
With information from Adrenaline.
