Samsung’s Delay in AI-Driven High-Bandwidth Memory Costs $126B in Market Value

Once the leader in memory semiconductors, Samsung Electronics now trails SK Hynix in high-bandwidth memory (HBM), a key technology powering NVIDIA’s AI chips. Samsung’s delay in embracing HBM, a once-niche area now vital for AI applications, has wiped around $126 billion from its market value, according to S&P Capital IQ.

HBM, which stacks multiple DRAM chips to boost memory bandwidth, plays a crucial role in training large AI models, a field where NVIDIA has become dominant. While Samsung invested cautiously, SK Hynix pushed aggressively into HBM, quickly winning NVIDIA’s approval of its products and a close partnership. That strong positioning in HBM drove a significant increase in SK Hynix’s operating profit in the latest quarter.

Morningstar’s Kazunori Ito noted that Samsung’s initial reluctance to prioritize HBM development stemmed from high costs and limited market size, which allowed SK Hynix to capture an early lead. Counterpoint Research’s Brady Wang added that SK Hynix’s proactive research and industry partnerships have given it an edge in HBM innovation and adoption.

Samsung reported 70% quarter-on-quarter growth in HBM sales in the third quarter and said its HBM3E is now in mass production, with HBM4 planned for 2025. Analysts believe a major comeback depends on NVIDIA qualifying Samsung’s chips, which would let the company tap the AI chip market more effectively. Samsung said it had made “meaningful progress” in the qualification process and expects increased sales in the fourth quarter.
