AI Technology

Samsung Debuts 7th-Gen HBM4E at GTC 2026: 16Gbps Speed and 4TB/s Bandwidth for NVIDIA Vera Rubin

Samsung unveiled its 7th-generation HBM4E memory at NVIDIA GTC 2026, achieving 16Gbps per pin and 4TB/s bandwidth, designed for NVIDIA's next-gen Vera Rubin platform.

Tags: Samsung, HBM4E, GTC 2026, NVIDIA, Vera Rubin, Semiconductor

On March 17, 2026, Samsung made the world premiere of its 7th-generation High Bandwidth Memory 'HBM4E' at NVIDIA GTC 2026. This next-generation memory achieves a transfer speed of 16Gbps per pin and 4TB/s bandwidth, optimized for NVIDIA's next-gen AI platform 'Vera Rubin.'


HBM4E builds on Samsung's 6th-generation HBM4 (11.7Gbps), which is already in mass production. The substantial performance gain over HBM4 targets the massive data-processing demands of next-generation AI workloads, and is particularly expected to relieve memory bandwidth bottlenecks in large language model training and inference.
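The quoted figures can be sanity-checked with simple arithmetic. The sketch below assumes a 2048-bit interface per stack, as defined in the JEDEC HBM4 standard; Samsung has not stated the HBM4E interface width in this announcement, so that parameter is an assumption.

```python
# Back-of-envelope bandwidth math for an HBM stack.
# Assumption: 2048-bit interface (JEDEC HBM4); not confirmed for HBM4E here.

def stack_bandwidth_tbps(pin_speed_gbps: float, interface_bits: int = 2048) -> float:
    """Per-stack bandwidth in TB/s: pin speed (Gbps) x pins / 8 bits-per-byte / 1000."""
    return pin_speed_gbps * interface_bits / 8 / 1000

hbm4e = stack_bandwidth_tbps(16.0)   # 16 Gbps per pin (this announcement)
hbm4  = stack_bandwidth_tbps(11.7)   # mass-production HBM4, for comparison

print(f"HBM4E: {hbm4e:.2f} TB/s")    # ~4.10 TB/s, matching the quoted ~4 TB/s
print(f"HBM4:  {hbm4:.2f} TB/s")
print(f"Pin-speed uplift: {(16.0 / 11.7 - 1) * 100:.0f}%")
```

Under that assumption, 16Gbps per pin works out to roughly 4.1TB/s per stack, consistent with the announced 4TB/s figure, and the pin-speed uplift over HBM4 is about 37%.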


Samsung also introduced a new Hybrid Copper Bonding (HCB) technology that enables stacking of 16 or more layers while reducing thermal resistance by over 20%. This technology maintains stable operating temperatures even in high-density stacked structures, maximizing the performance of AI accelerators.


NVIDIA CEO Jensen Huang emphasized the collaborative relationship with Samsung, noting that Samsung manufactures the Groq 3 LPU AI chip. SK hynix and Samsung are competing for dominance in the HBM market, and this HBM4E announcement is a crucial step toward establishing Samsung as a key supplier for NVIDIA's next-generation platform. With AI infrastructure expanding rapidly, demand for high-performance memory is expected to accelerate further.
