Jun 03, 2024

High Bandwidth Memory Technology for AI Applications

By Prakash Vijayan, CFA

High Bandwidth Memory (HBM) is a memory technology that stacks Dynamic Random Access Memory (DRAM) dies to achieve high-speed data transfer with low power consumption. HBM is used mainly in applications that demand high-performance computing (HPC), such as graphics cards, artificial intelligence (AI), and supercomputers. AI applications in particular need HBM because they process large amounts of data through complex computations, and conventional memory technologies, such as Double Data Rate (DDR) or Graphics DDR (GDDR) memory, cannot satisfy those needs given their limited bandwidth, density, and power efficiency.
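The bandwidth gap comes down to simple arithmetic: peak bandwidth per device is roughly the bus width (in bytes) times the per-pin data rate, and an HBM stack's very wide 1024-bit interface dwarfs the narrow interfaces of GDDR or DDR devices. The sketch below illustrates this with nominal, publicly specified figures for one HBM3 stack, one GDDR6 chip, and one DDR5 channel; these are illustrative examples, not figures cited in this note.

```python
# Peak bandwidth per memory device: (bus width in bits / 8) * per-pin data rate (Gb/s).
# Nominal spec-sheet values used for illustration only.

def peak_bandwidth_gbs(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s for a single memory device or channel."""
    return bus_width_bits / 8 * pin_rate_gbps

devices = {
    "HBM3 stack (1024-bit @ 6.4 Gb/s/pin)": (1024, 6.4),   # ~819 GB/s
    "GDDR6 chip (32-bit @ 16 Gb/s/pin)": (32, 16.0),       # 64 GB/s
    "DDR5 channel (64-bit @ 4.8 Gb/s/pin)": (64, 4.8),     # 38.4 GB/s
}

for name, (width, rate) in devices.items():
    print(f"{name}: {peak_bandwidth_gbs(width, rate):.1f} GB/s")
```

Even at a lower per-pin data rate, the 1024-bit interface gives a single HBM3 stack more than ten times the bandwidth of a GDDR6 chip, which is why GPUs pair a handful of HBM stacks rather than dozens of GDDR devices.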

Exhibit 1: HBM Bandwidth Comparison by Generation vs GDDR/DDR

Source: Mizuho Securities

One of the challenges of HBM technology is its manufacturing complexity. HBM requires a sophisticated fabrication process that involves stacking multiple DRAM chips on a silicon interposer using through-silicon vias (TSVs) and micro bumps. TSVs are vertical electrical connections that pass through the silicon wafer, while micro bumps are tiny solder balls that connect the DRAM chips to the interposer. These techniques enable high-density, high-speed data transfer between the chips, but they also increase the cost and difficulty of production. Because of the 2.5D/3D stacked-die arrangement, the advanced packaging and metrology steps in HBM manufacturing are more capital intensive than those for conventional DRAM.

Exhibit 2: Schematic of Stacked HBM Package

Source: Jefferies

One of the beneficiaries of HBM technology is a company that unveiled a new GPU architecture using HBM2E, HBM3, and HBM3E memory for high-performance computing and AI applications. The new GPUs will feature up to 18 HBM stacks, providing a total of 288 GB of memory and 3 TB/s of bandwidth, enabling unprecedented levels of performance and efficiency for data-intensive workloads such as deep learning, scientific simulation, and video processing. The leading HBM manufacturers have revealed plans to increase production significantly in the next few years: global HBM supply is projected to rise eightfold over the next three years, with HBM expanding to more than 20% of the total DRAM market from less than 5% in 2023.
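Taken at face value, the GPU figures above imply a simple per-stack capacity. The quick arithmetic below uses only the numbers cited in this note (288 GB across 18 stacks); the per-stack result is an inference from those figures, not a disclosed specification.

```python
# Back-of-the-envelope capacity per HBM stack implied by the figures above.
total_capacity_gb = 288  # total HBM capacity cited for the new GPUs
num_stacks = 18          # maximum HBM stack count cited

capacity_per_stack_gb = total_capacity_gb / num_stacks
print(f"Implied capacity per HBM stack: {capacity_per_stack_gb:.0f} GB")  # 16 GB
```

A 16 GB stack is consistent with commercially available HBM3 parts (for example, eight 16 Gb DRAM dies per stack), which suggests the cited total is built from standard stack configurations.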

Exhibit 3: Global HBM Capacity Trends

Source: Mizuho Securities

In the strategies, the team has invested in semiconductor capital equipment companies that make the production of HBM possible, as well as companies that solve the complicated advanced packaging and metrology requirements in making HBM.

This information is not intended to provide investment advice. Nothing herein should be construed as a solicitation, recommendation or an offer to buy, sell or hold any securities, market sectors, other investments or to adopt any investment strategy or strategies. You should assess your own investment needs based on your individual financial circumstances and investment objectives. This material is not intended to be relied upon as a forecast or research. The opinions expressed are those of Driehaus Capital Management LLC (“Driehaus”) as of May 2024 and are subject to change at any time due to changes in market or economic conditions. The information has not been updated since May 2024 and may not reflect recent market activity. The information and opinions contained in this material are derived from proprietary and non-proprietary sources deemed by Driehaus to be reliable and are not necessarily all inclusive. Driehaus does not guarantee the accuracy or completeness of this information. There is no guarantee that any forecasts made will come to pass. Reliance upon information in this material is at the sole discretion of the reader.

About Prakash Vijayan, CFA

Prakash Vijayan is an assistant portfolio manager and senior analyst on the US Growth Equities Team with a focus on the information technology and communication services sectors.
