HBM is reshaping chips

March 05, 2026


The export landscape for artificial intelligence (AI) memory is being rapidly reshaped. With the surge in exports of high-bandwidth memory (HBM), a key component of AI chips, South Korea's reliance on China as a destination for semiconductor exports, once overwhelmingly dominant, has fallen sharply.

Particularly noteworthy is that, as the model of exporting HBM to Taiwan for packaging and back-end processing has taken shape, Taiwan's share has grown to roughly 30% of total exports, on par with China's.


According to data released by the Korea International Trade Association on March 3, South Korea's total exports of memory semiconductors reached US$94.613 billion last year, with exports to China amounting to US$30.99 billion, or 32.7% of the total. This is a sharp decline from the roughly 70% share that exports to China once held. From 2018, the share of memory semiconductor exports going to China hovered around 50% for about five years before falling to around 30% in 2024.

In contrast, exports to Taiwan have increased significantly. Last year, memory semiconductor exports to Taiwan reached US$27.076 billion, an 87.2% increase from US$14.46 billion the previous year. Correspondingly, Taiwan's share of total memory semiconductor exports rose to 28.6%, an increase of 14.1 percentage points from the previous year. Notably, the gap between Taiwan and mainland China, the largest destination for memory semiconductor exports, has narrowed significantly.

South Korea's dependence on China as a market for memory semiconductors is decreasing, while AI-driven memory accounts for an ever larger share of revenue. The shift is mainly attributed to the growing number of HBM chips SK Hynix sells to NVIDIA. In trade statistics these HBM chips are recorded as exports to Taiwan, not the United States: after packaging and back-end processing at TSMC in Taiwan, they are ultimately delivered to NVIDIA in the US. In 2020, Taiwan's share of memory semiconductor exports was only about 6%, but it climbed to 14.5% in 2024.

Export value has also risen sharply, from approximately $3 billion in 2023 to $14.46 billion in 2024 and $27.076 billion last year. As a result, the top two destinations for memory chip exports have shifted from mainland China and Hong Kong to mainland China and Taiwan.

Analysts point out that with reduced reliance on the Chinese market and the diversification of export destinations to countries and regions such as the United States, Taiwan, and Vietnam, South Korea's memory chip export landscape is undergoing a structural transformation. This diversification is expected to benefit South Korea's semiconductor industry in the long run.

This trend is likely to continue in the short term. Kim Hyuk-jung, associate researcher at the Korea Institute for International Economic Policy, predicts, "With the continued growth in demand for AI chips, the model of supplying HBM to TSMC in Taiwan will be strengthened." He adds, "Exports to Taiwan may even exceed exports to mainland China." US export controls on semiconductor equipment bound for China also appear to be a factor. Samsung Electronics' NAND factory in Xi'an (Shaanxi Province), SK Hynix's DRAM factory in Wuxi (Jiangsu Province), and SK Hynix's NAND factory in Dalian (Liaoning Province) previously held "Validated End User" (VEU) status from the US government, allowing them to import US-made equipment without separate licenses. Since Trump's second term, however, these Chinese subsidiaries' VEU status has been revoked, and separate licenses are now required for imported equipment.

Associate researcher Kim points out, "Statistics show that equipment imports at major production bases like Shaanxi and Jiangsu are more intermittent than before." He adds, "With little new equipment coming in, the production capacity (CAPA) of these Chinese factories may naturally and gradually shrink."

HBM4, The New Revolution

Samsung Electronics and SK Hynix are fiercely competing for dominance in the next-generation high-bandwidth memory (HBM4) market. HBM4 has become core infrastructure of the artificial intelligence era, and this contest is not only a battle between Samsung and SK Hynix for global memory leadership but also bears on the future of the South Korean economy. The HBM4 market could significantly shape both companies' AI ambitions, affecting not only next-generation memory technology but the entire supply chain.

SK Hynix is working on innovating its high-bandwidth memory (HBM) packaging technology. The company's technology, which improves HBM stability and performance without requiring major process changes, is currently undergoing validation.

If commercialized, this technology is expected not only to achieve the peak performance NVIDIA requires for HBM4 (6th generation) but also to significantly improve the performance of next-generation products. The industry is therefore watching closely to see whether it succeeds.

According to a report by ZDNet Korea on March 3, SK Hynix is seeking to apply new packaging technologies to improve HBM performance.

HBM is a memory device that vertically stacks multiple DRAMs and connects them through through-silicon vias (TSVs). Each DRAM is bonded via microbumps. HBM4 will initially be commercialized in the form of a 12-layer stacked product.

SK Hynix has now begun initial mass production of HBM4. Given that the delivery cycle for HBM4 is approximately six months (including the total time required for mass production and supply), the company has aggressively begun mass production of the product before completing official quality testing with NVIDIA.

The Situation of HBM4

Supplying HBM4 is not the problem; achieving peak performance is.

The industry has long focused on the performance and stability of SK Hynix's HBM4, because NVIDIA requires HBM4 to reach a maximum per-pin speed of 11.7 Gbps, far exceeding the standard specification of 8 Gbps and significantly increasing development complexity.

In fact, SK Hynix's HBM4 struggled to reach peak performance in 2.5D package testing for integration into AI accelerators. As a result, circuit-design improvements were not completed until earlier this year, and full-scale production plans slipped relative to industry expectations.

However, according to industry reports, the likelihood of a major disruption to SK Hynix's HBM4 supply to NVIDIA is currently very low.

The key issue lies in the supply chain. If NVIDIA holds to its high HBM4 specifications, its latest AI accelerator, Rubin, may face supply shortages in the second half of this year. Samsung Electronics, currently receiving the most positive feedback on its HBM4, also finds it difficult to expand supply in the short term because of yield rates and its ongoing investment in 1c DRAM.

Therefore, industry insiders generally believe that NVIDIA is likely to lower the performance requirement for its initial HBM4 offerings to 10 Gbps.

Semiconductor analysis firm SemiAnalysis recently reported that "NVIDIA initially set a total bandwidth target of 22 TB/s for the Rubin chip, but memory suppliers seem to be struggling to meet NVIDIA's requirements," and that "initial shipments are expected to come in lower, closer to 20 TB/s (equivalent to 10 Gbps per HBM4 pin)." An industry insider stated, "The HBM supply chain isn't just about speed; it also weighs difficult factors such as yield and supply stability, so the expectation that SK Hynix will supply the maximum quantity remains valid." He added, "However, we cannot rest on our laurels; we are constantly working to reach peak performance." A new technology designed to overcome HBM's performance limits is under development and currently in the verification phase.
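A quick back-of-envelope calculation connects the per-pin speeds quoted above to the package-level bandwidth figures. The stack count is an assumption (the article does not say how many HBM4 stacks Rubin carries); eight is used here purely for illustration.

```python
# Sanity check: per-pin speed vs. total package bandwidth.
# Assumption (not stated in the article): 8 HBM4 stacks per Rubin package.

def stack_bandwidth_tbps(pin_speed_gbps, io_pins=2048):
    """Per-stack bandwidth in TB/s: pins * Gbps per pin / 8 bits per byte / 1000."""
    return io_pins * pin_speed_gbps / 8 / 1000

STACKS = 8  # assumed stack count, for illustration only

for speed in (10.0, 11.7):
    per_stack = stack_bandwidth_tbps(speed)
    total = per_stack * STACKS
    print(f"{speed:>5} Gbps/pin -> {per_stack:.2f} TB/s per stack, "
          f"{total:.1f} TB/s total")
```

Under this assumed stack count, 10 Gbps per pin gives about 20.5 TB/s, consistent with the ~20 TB/s figure, while the full 11.7 Gbps requirement would give roughly 24 TB/s; the original 22 TB/s target would correspond to about 10.7 Gbps per pin.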

In this regard, SK Hynix is currently attempting to introduce a new packaging method, aiming to apply it to HBM4 and next-generation products.

The biggest performance bottleneck facing HBM4 is the increased number of input/output (I/O) ports, the channels through which data is sent and received. HBM4 provides 2,048 I/O ports, double the 1,024 of its predecessor.

However, doubling the I/O count can cause interference between the densely packed ports. Voltage drop also makes it difficult to deliver adequate power from the underlying logic die (the base chip located beneath the HBM stack) to the uppermost chips.

Specifically, SK Hynix uses 1b (fifth-generation 10nm-class) DRAM, one generation behind its main competitor, Samsung Electronics. Its base logic die is also manufactured on TSMC's 12nm process, giving it lower integration density than Samsung Electronics' (built on Samsung's own 4nm process). It is therefore technically more susceptible to the problems caused by the increased I/O count.

SK Hynix is reportedly developing a new packaging method to address these challenges. Key elements include increasing the chip core thickness and reducing the gaps between DRAM layers.

Firstly, the thickness of some upper-layer DRAM will be increased compared to previous generations. Previously, to meet the HBM4 package specification (775 micrometers in height), manufacturers used a thinning process to grind the back of the DRAM thinner. However, if the DRAM is too thin, chip performance may degrade, or it may be more susceptible to damage from external impacts. Therefore, SK Hynix is believed to be aiming to improve the stability of HBM4 by increasing the thickness of the DRAM.
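To make the trade-off concrete, here is a rough stack-height budget. Only the 775-micrometer package height and the 12-layer stack come from the article; the base-die thickness and inter-die gap values are illustrative assumptions.

```python
# Illustrative HBM4 stack-height budget. The 775 um package height and
# 12 DRAM layers are from the article; BASE_DIE_UM and GAP_UM are
# assumed values chosen only to show the arithmetic.

PACKAGE_HEIGHT_UM = 775   # HBM4 package height spec (from the article)
DRAM_LAYERS = 12          # initial HBM4 product (from the article)
BASE_DIE_UM = 40          # assumed base logic die thickness
GAP_UM = 10               # assumed bonding gap per DRAM-to-DRAM interface

def max_dram_thickness_um(package_um=PACKAGE_HEIGHT_UM,
                          layers=DRAM_LAYERS,
                          base_um=BASE_DIE_UM,
                          gap_um=GAP_UM):
    """Thickness budget left for each DRAM die after the base die and gaps."""
    budget = package_um - base_um - gap_um * layers
    return budget / layers

print(f"~{max_dram_thickness_um():.2f} um per DRAM die")
# Shrinking the gap frees budget to thicken each die:
print(f"~{max_dram_thickness_um(gap_um=5):.2f} um per die at 5 um gaps")
```

Under these assumed numbers, halving the inter-die gap frees about 5 µm of thickness per die, illustrating why narrower spacing lets each DRAM die be ground less aggressively, exactly the combination SK Hynix is reported to be pursuing.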

Furthermore, the spacing between DRAM chips has been further reduced, improving energy efficiency without increasing the overall package thickness. Because the distance between each DRAM chip is closer, data transmission speeds are faster, and less power is required to reach the top layer of the DRAM.

A key issue lies in the implementation difficulty. As the gap between DRAM chips shrinks, reliably injecting MUF (molded underfill material) into the gaps becomes difficult. MUF serves as a protective and insulating material for DRAM; uneven filling and void formation can lead to chip defects.

SK Hynix has developed a new packaging technology to address this problem. While specific details have not been disclosed, the core idea is to narrow the gaps between DRAM dies while maintaining stable yield, without major changes to processes or equipment. Recent internal test results have reportedly been satisfactory.

If SK Hynix can commercialize this technology quickly, it could close the performance gap for HBM4 and carry the gains over to next-generation DRAM products. On the other hand, the technology may still face many challenges in mass production.

An insider explained, "SK Hynix has designed a new packaging method to overcome the limitations of existing HBM and is currently actively working on verification." He added, "Since HBM performance can be improved without large-scale facility investment, its commercialization will have a significant ripple effect."

Source: Compiled from BusinessKorea
