8× NVIDIA Hopper H200 GPUs · high-bandwidth memory · scalable NVLink interconnect.
8× NVIDIA Blackwell GPUs · high-bandwidth HBM3e memory · NVLink 5 multi-GPU fabric.
8× NVIDIA Blackwell Ultra GPUs · 14.4 TB/s NVLink · massive memory and networking for AI workloads.
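The 14.4 TB/s NVLink figure in the Blackwell Ultra entry is consistent with per-GPU NVLink 5 bandwidth summed across all eight GPUs. A minimal arithmetic sketch, assuming roughly 1.8 TB/s of NVLink bandwidth per GPU (a figure not stated in the listings above):

```python
# Illustrative arithmetic only: the per-GPU NVLink 5 bandwidth is an
# assumption (commonly quoted at ~1.8 TB/s per GPU); the GPU count comes
# from the system descriptions above.
GPUS_PER_SYSTEM = 8
NVLINK5_PER_GPU_TBPS = 1.8  # assumed per-GPU NVLink 5 bandwidth, TB/s

aggregate_tbps = GPUS_PER_SYSTEM * NVLINK5_PER_GPU_TBPS
print(f"Aggregate NVLink bandwidth: {aggregate_tbps:.1f} TB/s")  # 14.4 TB/s
```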
The NVIDIA MQM9701-NS2R Quantum-2 DGX-ready NDR InfiniBand switch delivers enterprise-grade 400 Gb/s NDR connectivity via 32 OSFP ports, with breakout support for up to 64 NDR links. Designed for GPU-dense AI servers and HPC clusters, it ensures deterministic, lossless, ultra-low-latency communication, making it ideal for deep learning, HPC workloads, and modern data-center fabrics. Built for reliability and scalability, it supports advanced RDMA, adaptive routing, and full InfiniBand fabric orchestration.
The NVIDIA Q3400-RA features a 4U chassis with two adjoining Quantum-3 XDR switches, providing 144 high-speed XDR ports in 72 OSFP cages. Designed for AI, HPC, and cloud-scale infrastructures, it offers SHARPv3 in-network acceleration, adaptive routing, congestion-aware control, and full fabric telemetry, ensuring maximum performance, low latency, and enterprise-grade reliability.
The NVIDIA Q3200-RA Quantum-3 based two-adjoining XDR InfiniBand switch is engineered for extreme-performance computing environments that demand ultra-fast data throughput and minimal latency. Featuring 36 XDR (800 Gb/s) InfiniBand ports across dual adjoining modules, it delivers unmatched scalability, congestion control, and fabric acceleration for HPC, AI training, simulation, and cloud infrastructures. Designed with advanced telemetry, network automation, and high-availability features, it ensures reliable performance in the most demanding environments.
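For scale, a back-of-the-envelope sketch of aggregate throughput for the two Quantum-3 systems above, assuming an 800 Gb/s per-port XDR rate and a bidirectional counting convention (both assumptions for illustration; the port counts are taken from the listings):

```python
# Back-of-the-envelope throughput for the Quantum-3 XDR switches described
# above. Port counts come from the listings; the 800 Gb/s per-port XDR rate
# and the bidirectional doubling convention are assumptions for illustration.
XDR_PORT_GBPS = 800  # assumed XDR per-port data rate

switches = {
    "Q3400-RA": 144,  # XDR ports (72 OSFP cages, per the listing)
    "Q3200-RA": 36,   # XDR ports across the two adjoining modules
}

for name, ports in switches.items():
    one_way_tbps = ports * XDR_PORT_GBPS / 1000
    print(f"{name}: {one_way_tbps:.1f} Tb/s per direction, "
          f"{2 * one_way_tbps:.1f} Tb/s bidirectional")
```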
The NVIDIA/Mellanox MQM8700-HS2 delivers ultra-low-latency, high-density HDR InfiniBand connectivity with 40 non-blocking 200 Gb/s QSFP56 ports and 16 Tb/s switching capacity. Built for high-performance computing, AI clusters, and hyperscale data centers, it features SHARP in-network compute acceleration, adaptive routing, RDMA-optimized fabric, and full fabric telemetry and management interfaces, along with redundant hot-swappable power supplies and fan modules for maximum reliability.
The NVIDIA/Mellanox Quantum MQM8700-HS2R delivers ultra-fast 200G HDR InfiniBand connectivity with 40 QSFP56 ports and a non-blocking 16 Tb/s switching fabric. Engineered for AI acceleration, HPC supercomputing, and scalable cloud environments, it supports SHARP in-network compute, adaptive routing, congestion control, advanced RDMA performance, and robust system telemetry. With redundant hot-swap PSUs and fan trays, the MQM8700-HS2R ensures maximum reliability and performance for modern high-density compute clusters.
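The 16 Tb/s switching capacity quoted for the MQM8700 family follows directly from the port count if the figure is counted bidirectionally. A quick check, with that convention taken as an assumption:

```python
# The quoted 16 Tb/s switching capacity for the MQM8700 family is consistent
# with 40 HDR ports at 200 Gb/s each, counted in both directions (the
# bidirectional convention here is an assumption for illustration).
HDR_PORTS = 40
HDR_PORT_GBPS = 200

per_direction_tbps = HDR_PORTS * HDR_PORT_GBPS / 1000   # 8.0 Tb/s
bidirectional_tbps = 2 * per_direction_tbps             # 16.0 Tb/s
print(f"{per_direction_tbps:.1f} Tb/s per direction, "
      f"{bidirectional_tbps:.1f} Tb/s bidirectional")
```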
The NVIDIA/Mellanox MQM9790-NS2R Quantum-2 switch delivers enterprise-grade 400 Gb/s NDR InfiniBand connectivity across 64 OSFP ports (split-capable up to 128 × 200 Gb/s), with 51.2 Tb/s non-blocking switching capacity. Designed for hyperscale AI, cloud, and HPC infrastructure, it features SHARP in-network acceleration, adaptive routing, RDMA-optimized fabric, real-time telemetry, and redundant hot-swappable PSUs and fans. Perfect for GPU clusters, high-performance compute racks, and next-gen data-center fabrics.
The NVIDIA MQM9790-NS2F Quantum-2 NDR InfiniBand switch delivers exceptional fabric performance for hyperscale AI and HPC infrastructures. With 64 × 400 Gb/s NDR OSFP ports (split-capable to 128 × 200 Gb/s), 51.2 Tb/s switching capacity, SHARPv3 acceleration, and advanced telemetry, it enables ultra-low-latency, lossless connectivity for next-gen GPU clusters and supercomputing environments. Redundant PSUs, modular cooling, and flexible airflow ensure maximum uptime in high-density deployments.
The MQM9700-NS2R is a 1U Quantum-2 NDR InfiniBand switch delivering 64 × 400 Gb/s NDR ports (32 OSFP connectors), with full breakout support to 128 × 200 Gb/s. With a total fabric bandwidth of 51.2 Tb/s and ultra-low latency, it is built for high-performance computing, AI clusters, scientific computing, and data-center backbone fabrics. Advanced features include in-network acceleration (SHARP), adaptive routing, congestion control, RDMA, and dense InfiniBand protocol support. The switch supports redundant hot-swappable power supplies and fans, configurable airflow, and a compact rack-mount form factor, ensuring reliability and high availability for demanding workloads.
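A short sketch of the breakout and capacity math shared by the Quantum-2 NDR switches above, assuming the quoted 51.2 Tb/s counts both directions:

```python
# Breakout and capacity math for the Quantum-2 NDR switches above: 64 ports
# at 400 Gb/s can be split to 128 ports at 200 Gb/s without changing the
# aggregate, and the quoted 51.2 Tb/s matches the bidirectional total
# (the bidirectional convention is an assumption for illustration).
NDR_PORTS, NDR_GBPS = 64, 400
SPLIT_PORTS, SPLIT_GBPS = 128, 200

assert NDR_PORTS * NDR_GBPS == SPLIT_PORTS * SPLIT_GBPS  # breakout preserves bandwidth
per_direction_tbps = NDR_PORTS * NDR_GBPS / 1000          # 25.6 Tb/s
print(f"{per_direction_tbps:.1f} Tb/s per direction, "
      f"{2 * per_direction_tbps:.1f} Tb/s bidirectional")  # 51.2 Tb/s
```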
The MQM9700-NS2F is a 1U rack-mount Quantum-2 NDR InfiniBand switch providing 64 × 400 Gb/s ports (32 OSFP connectors) and delivering up to 51.2 Tb/s of bandwidth. Built for ultra-low latency and maximum throughput, it supports advanced features such as RDMA, adaptive routing, congestion control, in-network computing acceleration via SHARPv3, and flexible topology support (Fat Tree, DragonFly+, SlimFly). The switch includes redundant hot-swappable power supplies and fans for high availability and is managed via CLI, WebUI, SNMP, and JSON-RPC interfaces, making it a strong fit for AI/ML clusters, HPC, scientific computing, and hyperscale cloud infrastructure.
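Because the switch can be managed over SNMP in addition to CLI, WebUI, and JSON-RPC, a minimal monitoring sketch is shown below. It assumes the standard net-snmp snmpget tool is installed and uses placeholder values for the management address, community string, and interface index; it is an illustration, not vendor-specific tooling.

```python
# Minimal sketch of polling a managed switch over SNMP using the standard
# net-snmp CLI tools. The hostname, community string, and interface index
# are placeholders; real deployments should use SNMPv3 credentials and the
# OIDs appropriate to their firmware.
import subprocess

SWITCH = "198.51.100.10"        # placeholder management IP
COMMUNITY = "public"            # placeholder SNMPv2c community string
IF_INDEX = 1                    # placeholder interface index
IF_HC_IN_OCTETS = "1.3.6.1.2.1.31.1.1.1.6"  # IF-MIB::ifHCInOctets

def snmp_get(oid: str) -> str:
    """Run snmpget and return the raw value string."""
    out = subprocess.run(
        ["snmpget", "-v2c", "-c", COMMUNITY, "-Oqv", SWITCH, f"{oid}.{IF_INDEX}"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

if __name__ == "__main__":
    print(f"ifHCInOctets on interface {IF_INDEX}: {snmp_get(IF_HC_IN_OCTETS)}")
```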