AIC has unveiled its new 2U storage server platform, the SB201-SU, featuring Intel Xeon 6 processors and integrated support for the CXL 2.0 memory expansion standard. Developed in partnership with Micron and Intel, the system leverages Micron’s CZ122 memory expansion module—the industry’s first CXL 2.0-compliant memory expansion device—to deliver scalable, low-latency performance for demanding AI, big data, and HPC workloads.
The SB201-SU platform is designed to address the growing compute and memory bottlenecks in enterprise data centers. By combining dense storage with flexible CXL-based memory expansion, AIC’s new offering provides a dynamic architecture for applications such as AI inference and training, large-scale databases, and financial modeling. The collaboration with Micron and Intel allows data center operators to unlock additional memory bandwidth and capacity without overhauling existing infrastructure, reducing total cost of ownership.
The system will be showcased at COMPUTEX 2025 in Taipei, and AIC will offer test and deployment support to customers worldwide. The launch marks a concrete step in moving CXL from theoretical promise to production-ready deployments in data-centric architectures.
- AIC’s new SB201-SU storage server supports Intel Xeon 6 and CXL 2.0
- Features Micron CZ122, the industry’s first CXL 2.0 memory expansion module
- Enables scalable, low-latency compute for AI, HPC, and big data
- Targeted at memory-constrained workloads with enhanced performance per rack
- Live demo available at COMPUTEX 2025, booth M1119a
“With the explosive growth of AI and data-intensive applications, enterprises now require even more flexible and high-performance compute and memory architectures,” said Allen Tsai, Senior Enterprise Server Product Manager at AIC. “This collaboration with Micron and Intel is a significant step forward in realizing high-performance computing infrastructures that meet these evolving demands.”
Compute Express Link (CXL) is an open industry-standard, high-speed interconnect designed to enhance memory and accelerator coherency between CPUs and devices such as GPUs, FPGAs, and memory expanders. Developed by a consortium led by Intel and formally launched in 2019, CXL enables dynamic resource sharing and reduces memory bottlenecks by allowing devices to access shared memory pools with low latency. CXL 1.1 introduced basic device-to-host communication, while CXL 2.0, released in 2020, added critical features such as memory pooling, switching, and persistent memory support. The more recently ratified CXL 3.0 standard further improves bandwidth and scalability with multi-level switching and peer-to-peer communication, cementing CXL’s role as a foundational technology for composable, disaggregated infrastructure in next-generation AI and HPC data centers.
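As background on what CXL memory expansion looks like to software: on a Linux host with kernel CXL support, expander devices such as memory modules are exposed under the kernel's CXL bus in sysfs. The following is a minimal sketch (assuming Linux's `/sys/bus/cxl/devices` sysfs interface; it simply returns an empty list on systems without CXL hardware or driver support) for enumerating the CXL devices the kernel has discovered:

```python
from pathlib import Path

# Standard sysfs location for the Linux CXL bus (kernel CXL core driver).
CXL_BUS = Path("/sys/bus/cxl/devices")

def list_cxl_devices(bus_dir: Path = CXL_BUS) -> list[str]:
    """Return the names of CXL devices visible to the kernel.

    Returns an empty list when the kernel has no CXL support
    or no CXL devices are present.
    """
    if not bus_dir.is_dir():
        return []
    # Each entry is a symlink named after a CXL object,
    # e.g. memory devices ("memX"), ports, or decoders.
    return sorted(p.name for p in bus_dir.iterdir())

if __name__ == "__main__":
    print(list_cxl_devices())
```

In practice, administrators would more commonly use the `cxl` command-line tool from the ndctl project (e.g. `cxl list`) to inspect such devices; the sketch above just illustrates where the kernel surfaces them.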