Marvell introduced a custom high-bandwidth memory (HBM) compute architecture designed to optimize XPUs, the custom AI accelerators deployed by cloud operators. The architecture delivers up to 25% more compute capability and 33% higher memory density while cutting memory interface power consumption by up to 70%. It combines advanced die-to-die interfaces, HBM base dies, controller logic, and 2.5D packaging to meet the specific needs of cloud data centers.
Marvell is collaborating with the leading memory manufacturers Micron, Samsung, and SK hynix to develop custom HBM solutions tailored to each cloud operator's requirements. By tightening the integration between the HBM memory subsystem and the XPU design, Marvell's approach reduces silicon real estate and power usage while increasing memory capacity and compute density. This lets cloud operators scale their infrastructure more efficiently for AI workloads.
Key Points:
• New architecture boosts compute performance by up to 25% and memory density by up to 33%.
• Reduces interface power consumption by up to 70% compared to standard HBM designs.
• Collaboration with Micron, Samsung, and SK hynix for custom HBM solutions.
• Supports up to 33% more HBM stacks per XPU (see the illustrative sketch after this list).
• Lowers total cost of ownership (TCO) for cloud operators.
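To make these percentages concrete, here is a minimal back-of-envelope sketch in Python. Every baseline value in it (stack count, per-stack capacity, interface power budget) is a hypothetical assumption chosen for illustration, not a figure from Marvell's announcement; only the 33% and 70% deltas come from the points above.

```python
# Back-of-envelope illustration of the announced deltas.
# All baseline values below are hypothetical assumptions, not Marvell figures.

BASELINE_STACKS = 6          # assumed HBM stacks on a conventional XPU
STACK_CAPACITY_GB = 24       # assumed capacity per HBM stack
BASELINE_IF_POWER_W = 30.0   # assumed HBM interface power budget

# Up to 33% more HBM stacks per XPU (rounded to whole stacks).
custom_stacks = round(BASELINE_STACKS * 1.33)            # -> 8 stacks
baseline_capacity = BASELINE_STACKS * STACK_CAPACITY_GB  # -> 144 GB
custom_capacity = custom_stacks * STACK_CAPACITY_GB      # -> 192 GB

# Up to 70% lower interface power than standard HBM designs.
custom_if_power = BASELINE_IF_POWER_W * (1 - 0.70)       # -> 9 W

print(f"Memory capacity: {baseline_capacity} GB -> {custom_capacity} GB")
print(f"Interface power: {BASELINE_IF_POWER_W:.0f} W -> {custom_if_power:.0f} W")
```

Under these assumed baselines, the stated deltas would take an XPU from 144 GB to 192 GB of HBM while the interface power budget drops from 30 W to 9 W; actual gains depend on the specific XPU design.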
“Tailoring HBM for specific performance, power, and cost requirements is a new paradigm in AI accelerator design,” said Will Chu, Senior Vice President and General Manager of Marvell’s Custom, Compute, and Storage Group. “We are excited to partner with leading memory manufacturers to help cloud data centers scale efficiently for the AI era.”