Accelerate Data-Driven Scientific Computing with In-Network Computing
The NVIDIA® ConnectX®-7 NDR 400 gigabits per second (Gb/s) InfiniBand host channel adapter (HCA) provides the highest networking performance available to take on the world's most challenging workloads. The ConnectX-7 InfiniBand adapter delivers ultra-low latency, 400Gb/s throughput, and innovative NVIDIA In-Network Computing engines, providing the acceleration, scalability, and feature-rich technology needed for high performance computing (HPC), artificial intelligence (AI), and hyperscale cloud data centers.
High performance computing and artificial intelligence have driven supercomputers into wide commercial use as the primary data processing engines enabling research, scientific discoveries, and product development. These systems can carry out complex simulations and unlock the new era of AI, where software writes software. NVIDIA InfiniBand networking is the engine of these platforms, delivering breakthrough performance.
ConnectX-7 NDR InfiniBand smart In-Network Computing acceleration engines include collective accelerations, MPI Tag Matching and All-to-All engines, and programmable datapath accelerators. These acceleration engines, together with InfiniBand's standard guarantee of backward and forward compatibility, ensure leading performance and scalability for compute- and data-intensive applications and enable users to protect their data center investments.
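For context, the sketch below is a generic MPI program (not NVIDIA-specific code) showing the class of All-to-All collective communication that such In-Network Computing engines accelerate; the offload is transparent to applications, which simply call the standard MPI interface.

```c
/* Minimal MPI All-to-All sketch. The collective exchange pattern shown here
 * is the kind of communication that hardware collective and tag-matching
 * engines can offload; the application code itself is unchanged. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank sends one integer to every other rank. */
    int *sendbuf = malloc(size * sizeof(int));
    int *recvbuf = malloc(size * sizeof(int));
    for (int i = 0; i < size; i++)
        sendbuf[i] = rank * 100 + i;   /* payload tagged with sender rank */

    /* Standard MPI All-to-All collective; on supporting fabrics this class
     * of operation can be accelerated in the network rather than on the CPU. */
    MPI_Alltoall(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, MPI_COMM_WORLD);

    printf("rank %d received first element %d from rank 0\n", rank, recvbuf[0]);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}
```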
Portfolio
Single-port or dual-port NDR (400Gb/s) or NDR200 (200Gb/s), with octal small form-factor pluggable (OSFP) connectors
Dual-port HDR (200Gb/s) with quad small form-factor pluggable (QSFP) connectors
PCIe stand-up half-height, half-length (HHHL) and full-height, half-length (FHHL) form factors, with options for NVIDIA Socket Direct™
Open Compute Project 3.0 (OCP3.0) tall small form factor (TSFF) and small form factor (SFF)