The NVIDIA GB200 NVL72 is a liquid-cooled, rack-scale AI system built for generative AI, large language models and HPC workloads. Its 72-GPU NVLink domain lets the rack operate as a single unified accelerator, delivering up to 1,440 PFLOPS of FP4 and 720 PFLOPS of FP8 compute, backed by 13.4 TB of HBM3e memory and 130 TB/s of aggregate NVLink bandwidth. NVIDIA positions it for real-time trillion-parameter inference (up to 30× H100 performance) and large-scale training (up to 4× H100), enabling hyperscale AI infrastructure with extreme throughput and efficiency.
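To put the rack-level totals in perspective, here is a minimal sketch that derives approximate per-GPU figures from the numbers quoted above. It assumes compute and memory are split evenly across the 72 Blackwell GPUs, which is an illustrative simplification rather than an official NVIDIA breakdown.

```python
# Illustrative arithmetic only: rack totals are from the text above;
# the even 72-way split is an assumption, not an NVIDIA spec sheet.
GPUS = 72

rack = {
    "fp4_pflops": 1440,   # total FP4 compute for the rack
    "fp8_pflops": 720,    # total FP8 compute for the rack
    "hbm3e_tb": 13.4,     # total HBM3e capacity for the rack
}

# Divide each rack-level total evenly across the GPUs.
per_gpu = {k: v / GPUS for k, v in rack.items()}

print(f"FP4 per GPU:   {per_gpu['fp4_pflops']:.0f} PFLOPS")
print(f"FP8 per GPU:   {per_gpu['fp8_pflops']:.0f} PFLOPS")
print(f"HBM3e per GPU: {per_gpu['hbm3e_tb'] * 1000:.0f} GB")
# → FP4 per GPU:   20 PFLOPS
# → FP8 per GPU:   10 PFLOPS
# → HBM3e per GPU: 186 GB
```

The per-GPU results (roughly 20 PFLOPS FP4, 10 PFLOPS FP8, ~186 GB HBM3e) are consistent with a Blackwell-class GPU, which is what makes the unified 72-GPU domain notable: software sees one very large accelerator rather than 72 separate ones.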