Keysight Technologies' AresONE-M 800GE Layer 1-3 Ethernet performance test platform set a new benchmark for validating Ethernet silicon switches by processing 51.2 Tbps of traffic across 64 x 800GE links while measuring loss, latency, and jitter.
The tests were conducted on the Marvell Teralynx 10 Ethernet switch chip.
Highlights from the validation include:
- High throughput and high speed – The Keysight AresONE-M 800GE generated 51.2 Tbps of traffic, sending it successfully through the Teralynx 10 switch to test its limits. The test validated an 800GE interface speed based on 112G SerDes, which facilitates faster data transfer between devices and extended reach in data center interconnects or telecommunications networks that run data-intensive AI applications.
- Scalability – Eight AresONE-M 8-port chassis were chained together, providing an industry-first 64 x 800GE link configuration to achieve a high-scale test bed running at 800GE line rate.
- Low latency – Low latency is critical for achieving the shortest job completion time for AI training and other highly distributed applications. These AI workloads depend on the switching fabric to provide the lowest possible latency — and to do so predictably.
- Performance analysis – Beyond the need for high bandwidth and low latency, measuring the performance of all 64 x 800GE links was an important aspect of the testbed, providing deeper, actionable analytics around loss, latency, and jitter at line rate.
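The arithmetic behind the test bed, and the kind of per-link metrics listed above, can be illustrated with a minimal sketch. This is a hypothetical example for illustration only, not Keysight's actual analytics; the function names and the jitter definition (mean absolute difference between consecutive latency samples) are assumptions.

```python
def aggregate_line_rate_tbps(num_links: int, link_speed_gbps: float) -> float:
    """Total offered load across all links, in Tbps."""
    return num_links * link_speed_gbps / 1000.0


def link_metrics(tx_packets: int, rx_latencies_ns: list) -> dict:
    """Loss, mean latency, and jitter from per-packet latency samples
    on a single link. Jitter here is the mean absolute difference
    between consecutive latency samples (one common approximation)."""
    rx = len(rx_latencies_ns)
    loss = (tx_packets - rx) / tx_packets
    mean_latency = sum(rx_latencies_ns) / rx
    jitter = sum(
        abs(b - a) for a, b in zip(rx_latencies_ns, rx_latencies_ns[1:])
    ) / (rx - 1)
    return {"loss": loss, "mean_latency_ns": mean_latency, "jitter_ns": jitter}


# 64 links at 800 Gbps each gives the 51.2 Tbps aggregate cited above.
print(aggregate_line_rate_tbps(64, 800))  # 51.2

# Hypothetical per-link sample: 5 packets sent, 4 received.
print(link_metrics(5, [100.0, 110.0, 105.0, 115.0]))
```

Real test platforms timestamp packets in hardware and compute these statistics at line rate; the sketch only shows the relationships among the reported quantities.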
Rishi Chugh, Vice President of Product Marketing, Network Switching, Marvell, said: “To definitively measure and validate the low-latency capabilities of Teralynx 10, Marvell turned to Keysight and its industry-leading AresONE-M 800GE test equipment. Their thorough evaluation and collaboration with our team made the process smooth and ensured the integrity of the strong results we achieved.”
Ram Periakaruppan, Vice President and General Manager, Network Test & Security Solutions, Keysight, said: “The switching fabric is a vital component of the back-end network architecture used for AI training. AI training workloads trigger a huge increase in traffic volume and compared to front-end networks, are persistently pushing this higher traffic volume around the clock. At these sustained rates, benchmarking low latency becomes very critical for ensuring that the AI training algorithms achieve the most efficient job completion time.”