Hewlett Packard Enterprise (HPE) has shipped its first NVIDIA Blackwell-based system, the NVIDIA GB200 NVL72, marking a major step in AI infrastructure deployment. This rack-scale solution pairs NVIDIA’s latest GPU technology with HPE’s direct liquid cooling and targets service providers and enterprises building large AI clusters. The GB200 NVL72 is optimized for training and inference on AI models with over a trillion parameters, offering a high-performance shared-memory architecture and tight integration of compute, networking, and software components.
Designed for power-intensive workloads, the system features 72 NVIDIA Blackwell GPUs and 36 NVIDIA Grace CPUs interconnected via high-speed NVLink. With up to 13.5 TB of HBM3e memory delivering 576 TB/sec of aggregate bandwidth, the GB200 NVL72 supports highly parallel AI applications, including generative AI model training. HPE draws on five decades of liquid cooling expertise to enable efficient power usage and high-density computing: the company has delivered eight of the top 15 systems on the Green500 energy-efficiency list and seven of the world’s 10 fastest supercomputers.
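To put the rack-level figures in perspective, they can be divided across the 72 GPUs. This is a back-of-envelope sketch only; the announcement quotes rack-wide totals, and the per-GPU splits below are simple arithmetic, not vendor-stated specifications:

```python
# Per-GPU shares implied by the GB200 NVL72 rack-wide totals.
GPUS = 72
TOTAL_HBM3E_TB = 13.5   # total HBM3e memory across the rack, in TB
TOTAL_BW_TB_S = 576     # aggregate memory bandwidth, in TB/s

per_gpu_mem_gb = TOTAL_HBM3E_TB * 1000 / GPUS  # memory per GPU, in GB
per_gpu_bw_tb_s = TOTAL_BW_TB_S / GPUS         # bandwidth per GPU, in TB/s

print(f"Per-GPU HBM3e:     {per_gpu_mem_gb:.1f} GB")   # 187.5 GB
print(f"Per-GPU bandwidth: {per_gpu_bw_tb_s:.1f} TB/s")  # 8.0 TB/s
```

The roughly 187.5 GB per GPU and 8 TB/s per GPU that fall out of this division are consistent with the rack being built from 72 identical Blackwell accelerators sharing one NVLink domain.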
Key Points:
- First shipment of the NVIDIA GB200 NVL72 by HPE, integrating Blackwell GPUs and Grace CPUs.
- 72 NVIDIA Blackwell GPUs, 36 NVIDIA Grace CPUs, and 13.5 TB HBM3e memory with 576 TB/sec bandwidth.
- Direct liquid cooling for efficient power usage and high-density computing.
- HPE’s liquid cooling expertise includes eight of the top 15 Green500 supercomputers.
- Comprehensive AI support services, including on-site engineering, benchmarking, and sustainability initiatives.