Converge Digest

Mellanox debuts Ethernet Cloud Fabric for 400G

Mellanox Technologies introduced its data center Ethernet Cloud Fabric (ECF) technology based on its second-generation Spectrum-2 silicon, which can deliver up to 16 ports of 400GbE, 32 ports of 200GbE, 64 ports of 100GbE, or 128 ports of 50/25/10/1GbE.
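
As a quick sanity check, each of those port configurations multiplies out to the same aggregate capacity, which is how a single ASIC can be carved into different breakout modes. A minimal Python sketch (the 6.4 Tb/s total is inferred from the port math above, not quoted from the announcement):

    # Each advertised Spectrum-2 port configuration yields the same
    # aggregate switching capacity.
    configs = {
        "16 x 400GbE": 16 * 400,
        "32 x 200GbE": 32 * 200,
        "64 x 100GbE": 64 * 100,
        "128 x 50GbE": 128 * 50,
    }
    for name, gbps in configs.items():
        print(f"{name}: {gbps} Gb/s aggregate ({gbps / 1000} Tb/s)")
    # Every configuration prints 6400 Gb/s, i.e. 6.4 Tb/s.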


Mellanox ECF combines three critical capabilities:

- Packet forwarding data plane
- Flexible and fully programmable data pipeline
- Open and actionable telemetry
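
To make the telemetry item concrete, the sketch below consumes a stream of per-port records and flags likely congestion. It is purely illustrative: the line-delimited JSON feed, the field names (port, buffer_occupancy), and the 90% threshold are hypothetical stand-ins, not Mellanox's actual telemetry format or API.

    # Illustrative telemetry consumer; schema and file name are hypothetical.
    import json

    CONGESTION_THRESHOLD = 0.9  # flag ports above 90% buffer occupancy

    def scan_telemetry(path="telemetry.jsonl"):
        """Read line-delimited JSON records and flag congested ports."""
        with open(path) as feed:
            for line in feed:
                record = json.loads(line)
                if record["buffer_occupancy"] > CONGESTION_THRESHOLD:
                    print(f"port {record['port']}: occupancy "
                          f"{record['buffer_occupancy']:.0%}, possible congestion")

    if __name__ == "__main__":
        scan_telemetry()

The point of a feed like this is the "actionable" part: an operator script, rather than a human watching counters, decides when to react.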

Mellanox said its Ethernet Cloud Fabric incorporates Ethernet Storage Fabric (ESF) technology, which allows the network to serve as a scale-out data plane for computing, storage, artificial intelligence, and communications traffic.

“The Spectrum-2 switch ASIC operates at speeds up to 400 Gigabit Ethernet, but goes beyond just raw performance by delivering the most advanced features of any switch in its class without compromising operational ability and simplicity,” said Amir Prescher, senior vice president of end user sales and business development at Mellanox Technologies. “Spectrum-2 enables a new era of Ethernet Cloud Fabrics designed to increase business continuity by delivering the most advanced visibility capabilities to detect and eliminate data center outages. This state-of-the-art visibility technology is combined with fair and predictable performance unmatched in the industry, which guarantees consistent application-level performance and, in turn, drives predictable business results for our customers. Spectrum-2 is at the heart of a new family of SN3000 switches that come in leaf, spine, and super-spine form factors.”
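
For context on the leaf/spine/super-spine form factors mentioned in the quote, the sketch below sizes a two-tier leaf/spine fabric built from a single switch radix. The 64-port radix is an assumption taken from the 64 x 100GbE configuration above, not an SN3000 datasheet figure.

    # Rough two-tier leaf/spine sizing; radix and oversubscription are assumptions.
    def leaf_spine_hosts(radix=64, oversubscription=1):
        """Host ports supported by a two-tier fabric of fixed-radix switches."""
        uplinks = radix // (1 + oversubscription)  # leaf ports toward spines
        downlinks = radix - uplinks                # leaf ports toward hosts
        max_leaves = radix                         # each spine port feeds one leaf
        return max_leaves * downlinks

    print(leaf_spine_hosts())                    # nonblocking: 64 x 32 = 2048 hosts
    print(leaf_spine_hosts(oversubscription=3))  # 3:1 oversubscribed: 64 x 48 = 3072

A super-spine tier repeats the same pattern one level up to connect multiple such pods.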


The Spectrum-2-based SN3000 family of switch systems with ECF technology will be available in Q3.

With Mellanox, NVIDIA targets full compute/network/storage stack

NVIDIA agreed to acquire Mellanox in a deal valued at approximately $6.9 billion.

The merger targets data centers in general and the high-performance computing (HPC) market in particular. Together, NVIDIA’s computing platform and Mellanox’s interconnects power over 250 of the world’s TOP500 supercomputers, and both companies count every major cloud service provider and computer maker among their customers. Mellanox pioneered the InfiniBand interconnect technology, which, along with its high-speed Ethernet products, is now used in over half of the world’s fastest supercomputers and in many leading hyperscale data centers.

NVIDIA said the acquisition will enable it to optimize data center-scale workloads across the entire computing, networking and storage stack, delivering higher performance, greater utilization and lower operating cost for customers.

“The emergence of AI and data science, as well as billions of simultaneous computer users, is fueling skyrocketing demand on the world’s datacenters,” said Jensen Huang, founder and CEO of NVIDIA. “Addressing this demand will require holistic architectures that connect vast numbers of fast computing nodes over intelligent networking fabrics to form a giant datacenter-scale compute engine.”
