Nvidia recently recognized Samsung’s HBM4 chip as the best on the market, and Broadcom and Google now appear to agree. Google, working with Broadcom, is developing its next Tensor Processing Unit (TPU) for AI workloads, and the design will use HBM4 memory. Broadcom has been evaluating HBM4 chips from Micron, Samsung, and SK Hynix, the three leading manufacturers capable of producing them. In a recent assessment, Samsung’s HBM4 chip reached an operating speed of 11 Gbps, the fastest of the three, and it also led in thermal performance, a critical factor for high-bandwidth memory. Broadcom conducted the evaluation in a system-in-package (SiP) environment, in which logic chips and high-bandwidth memory are integrated into a single package, the final testing phase before the HBM is incorporated into AI chips.

