Mellanox provides the world's smartest switch, enabling in-network computing through the Co-Design Scalable Hierarchical Aggregation and Reduction Protocol (SHARP)™ technology. QM8790 delivers the highest fabric performance available in the market, with up to 16Tb/s of non-blocking bandwidth and sub-90ns port-to-port latency.
SCALING-OUT DATA CENTERS WITH HDR 200G INFINIBAND
Faster servers, combined with high-performance storage and applications that use increasingly complex computations, are causing data bandwidth requirements to spiral upward. As servers are deployed with next-generation processors, High-Performance Computing (HPC) environments and Enterprise Data Centers (EDC) will need every last bit of bandwidth, delivered by Mellanox's next generation of HDR InfiniBand high-speed smart switches.

WORLD'S SMARTEST SWITCH
Built with the Mellanox Quantum InfiniBand switch device, the QM8790 provides up to forty 200Gb/s ports, with full bi-directional bandwidth per port. These stand-alone switches are an ideal choice for top-of-rack leaf connectivity or for building small to extremely large clusters.
QM8790 is the world's smartest network switch, designed to enable in-network computing through the Co-Design Scalable Hierarchical Aggregation and Reduction Protocol (SHARP)™ technology. The Co-Design architecture enables the use of all active data center devices to accelerate communication frameworks using embedded hardware, resulting in order-of-magnitude improvements in application performance.
QM8790 enables efficient computing with features such as static routing, adaptive routing, congestion control, and enhanced VL mapping, supporting modern topologies (SlimFly, Dragonfly+, 6DT). These features maximize effective fabric bandwidth by eliminating congestion hot spots.
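For illustration, topology-aware routing of this kind is typically selected in the subnet manager rather than on the switch itself. Below is a minimal sketch assuming OpenSM as the subnet manager; the routing_engine option and engine names come from OpenSM, and exact availability depends on the fabric software build.

    # /etc/opensm/opensm.conf -- illustrative fragment
    # Try the fat-tree routing engine first and fall back to
    # up/down routing if the topology does not match:
    routing_engine ftree,updn

The same ordered fallback list can be requested on the command line with opensm -R ftree,updn.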
The QM8790 switch has a best-in-class design that keeps power consumption low, and power draw is reduced further when only some ports are in use.
COLLECTIVE COMMUNICATION ACCELERATION
Collective communication describes communication patterns in which all members of a group of communication endpoints participate. Collectives are commonly used in HPC communication libraries such as MPI and SHMEM.
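As a concrete illustration, the sketch below shows the kind of collective that SHARP can offload to the switch: an MPI all-reduce in which every rank contributes a value and receives the global sum. This is a minimal example assuming any standard MPI library; enabling SHARP offload is done through the collective library's runtime settings, not in application code.

    /* Minimal MPI all-reduce sketch: every rank contributes a local
     * value and receives the global sum. With SHARP enabled, the
     * reduction tree can run inside the Quantum switches instead of
     * on the host CPUs. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, local, global;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        local = rank + 1;   /* each rank's contribution */

        /* All ranks in MPI_COMM_WORLD participate in the reduction. */
        MPI_Allreduce(&local, &global, 1, MPI_INT, MPI_SUM,
                      MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum over all ranks = %d\n", global);

        MPI_Finalize();
        return 0;
    }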