Over 50% performance improvement over the previous model. Potential collaboration to reclaim technological leadership. Strategy includes GDDR7, emphasizing cost and flexibility advantages.

Samsung Electronics has made a strong entry into the AI semiconductor market by supplying its HBM3E 12-layer product to AMD. Previously overshadowed by doubts about its high-bandwidth memory (HBM) capabilities and seen as a latecomer, Samsung now has an opportunity to regain market trust through this supply deal. The partnership with AMD is likely to go beyond a one-time transaction, potentially developing into a long-term collaboration. Moreover, with growing interest in GDDR7 for graphics applications, Samsung is expanding its influence in the market with a flexible strategy that encompasses both HBM and GDDR technologies.
Cracks in Nvidia's Dominance of the AI Semiconductor Market
On June 12 (local time), AMD announced at the “Advancing AI 2025” event in San Jose, California, that its next-generation AI accelerators, the MI350X and MI355X, will use HBM3E 12-layer memory from Samsung Electronics and Micron. While there had been quiet speculation that Samsung was supplying HBM to AMD, this was the first official confirmation, solidifying the collaboration between the two companies.
The HBM3E 12-layer product now being adopted was developed by Samsung last year. It stacks twelve 24Gb DRAM chips vertically using TSV (Through-Silicon Via) technology, providing a high capacity of 36GB. The memory offers up to 1,280 GB/s bandwidth and can handle speeds of up to 10 Gbps across 1,024 I/O channels — a more than 50% improvement in performance and capacity over the previous 8-layer HBM3E.
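The capacity and bandwidth figures quoted above follow directly from the stack configuration, and can be sanity-checked with simple arithmetic (a quick sketch; the variable names are illustrative, not Samsung's terminology):

```python
# Sanity-check the HBM3E 12-layer figures quoted in the article.
DIE_CAPACITY_GBIT = 24   # each DRAM die holds 24 gigabits
STACK_HEIGHT = 12        # twelve dies stacked via TSV
IO_WIDTH = 1024          # I/O channels (bus width in bits)
PIN_SPEED_GBPS = 10      # per-pin data rate, in gigabits per second

# Total stack capacity: 12 dies x 24 Gbit = 288 Gbit = 36 GB.
capacity_gb = DIE_CAPACITY_GBIT * STACK_HEIGHT / 8

# Aggregate bandwidth: 1,024 pins x 10 Gbps = 10,240 Gbps = 1,280 GB/s.
bandwidth_gbs = IO_WIDTH * PIN_SPEED_GBPS / 8

print(capacity_gb)    # 36.0 (GB per stack)
print(bandwidth_gbs)  # 1280.0 (GB/s)
```

The 50% capacity gain over the 8-layer version also falls out of the same arithmetic: 8 dies of 24 Gbit give 24 GB, versus 36 GB for the 12-layer stack.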
Samsung applied advanced TC NCF (Thermal Compression Non-Conductive Film) technology in the HBM3E 12-layer process to maintain the same package height as the 8-layer version. It also minimized the gap between chips to 7 micrometers, increasing vertical density by over 20%. These enhancements aim to boost both power efficiency and performance. The integration into AMD’s MI350 series is seen as proof of Samsung’s successful execution of these goals.
AMD’s Expansion Offers Samsung a Golden Opportunity
Until now, Nvidia has had near-total control over the AI semiconductor market, dominating not only high-performance GPUs but also the HBM supply chain. Nvidia’s decision not to adopt Samsung’s HBM had raised concerns about Samsung’s technological prowess, pushing the company into a perceived competitive disadvantage. However, AMD’s adoption of Samsung’s HBM now helps dispel doubts about its technology and allows Samsung to shed its underdog image in the HBM market.
Industry observers are now eyeing the possibility of deeper technical collaboration between Samsung and AMD. AI semiconductors require intricate coordination between memory and GPUs in areas like communication efficiency, heat dissipation, and power optimization. This could evolve into a scenario where memory suppliers work alongside GPU designers from the early planning stages to develop custom HBM solutions — a step toward co-optimized products and integrated technical capabilities.
Market expansion potential is also significant. AMD is extending its MI series into various sectors, including data centers, high-performance computing, and defense/scientific research. As Samsung’s high-capacity HBM proves itself in these environments, it could be included in future products like the MI400 and MI500. Given the industry’s tendency to continue partnerships with trusted suppliers, Samsung has effectively secured a long-term growth path.
Ultimately, this partnership offers both companies a strategic opportunity to reclaim technological leadership. AMD can narrow the gap with Nvidia, while Samsung can rebuild confidence in its HBM products. If the two continue to deliver results, their collaboration could go beyond simple supply deals to include joint product development, marketing, and even ecosystem-level cooperation. This would help break Nvidia’s lock on the AI semiconductor market and foster a more competitive and flexible environment.
“HBM Isn’t the Only Answer”: GDDR7 Adoption Gains Traction
The dominance of HBM in AI semiconductors is also beginning to show cracks. While GDDR7 offers lower bandwidth than HBM, it is emerging as an efficient alternative for mid-range AI solutions due to its advantages in cost, heat management, and power consumption. Nvidia is already using GDDR7 in AI GPUs for the Chinese market, and AMD is reportedly considering a similar approach. This signals a more pragmatic strategy in memory selection — one that balances performance and cost.
The rise of GDDR7 is driven by increasing segmentation in the AI semiconductor market. While data center GPUs require ultra-high performance, edge AI and industrial GPUs with lower computational demands call for different memory strategies. Though HBM excels in performance, it remains expensive and faces supply constraints — particularly with SK Hynix and Micron now dominating the HBM space. This makes alternative solutions increasingly attractive to reduce risk.
This trend presents yet another opportunity for Samsung. While it has faced challenges regaining trust in the HBM market, Samsung is considered a technology leader in GDDR7, with robust production capacity. If Nvidia and AMD increase adoption of GDDR7, Samsung could expand its market presence with competitively priced products. It also allows Samsung to pursue a more flexible strategy, no longer needing to rely solely on HBM for growth.
The AI semiconductor market is clearly entering a phase where no single memory solution fits all. HBM remains essential for the highest performance, while alternatives like GDDR7 are gaining ground where cost-efficiency is key — signaling the rise of a "multi-tiered memory strategy." With its capabilities in both HBM and GDDR7, Samsung is well-positioned to compete more broadly, moving beyond the latecomer label to become a strategic player across the entire market.