Oracle offering 2.4 zettaFLOPS

Author: EIS | Release Date: Sep 20, 2024


Oracle Cloud Infrastructure (OCI) is offering customers access to what it claims to be the largest AI supercomputer in the cloud — with up to 131,072 NVIDIA Blackwell GPUs — delivering 2.4 zettaFLOPS of peak performance.
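
As a rough sanity check (my arithmetic, not the article's): the headline figure is consistent with multiplying the GPU count by a per-GPU peak of roughly 18 PFLOPS, which corresponds to Blackwell's sparse low-precision (FP4) peak; that per-GPU figure is an assumption here.

```python
# Back-of-the-envelope check of the 2.4 zettaFLOPS headline figure.
# Assumption (not stated in the article): ~18 PFLOPS per Blackwell GPU,
# roughly the sparse FP4 peak.
GPUS = 131_072
PEAK_PER_GPU_FLOPS = 18e15  # assumed per-GPU peak

cluster_peak = GPUS * PEAK_PER_GPU_FLOPS
print(f"{cluster_peak / 1e21:.2f} zettaFLOPS")  # -> ~2.36, i.e. about 2.4 zettaFLOPS
```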

OCI Supercluster includes OCI Compute Bare Metal; ultra-low-latency RoCEv2 networking with ConnectX-7 NICs and ConnectX-8 SuperNICs, or NVIDIA Quantum-2 InfiniBand-based networking; and a choice of HPC storage.

NVIDIA Blackwell platform

OCI Superclusters are orderable with OCI Compute powered by NVIDIA H100 or H200 Tensor Core GPUs or NVIDIA Blackwell GPUs. OCI Superclusters with H100 GPUs can scale up to 16,384 GPUs with up to 65 exaFLOPS of performance and 13 Pb/s of aggregated network throughput.


OCI Superclusters with H200 GPUs, available later this year, will scale to 65,536 GPUs with up to 260 exaFLOPS of performance and 52 Pb/s of aggregated network throughput.
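
Both scaling claims imply a per-GPU peak of roughly 4 PFLOPS (65 exaFLOPS across 16,384 GPUs, 260 exaFLOPS across 65,536 GPUs), which lines up with the sparse FP8 peak of the H100 and H200; the sketch below just works the division, and the FP8-sparse reading is an assumption, not something the article states.

```python
# Implied per-GPU peak from the cluster-level figures quoted above.
clusters = {
    "H100": (16_384, 65e18),   # (GPU count, peak FLOPS)
    "H200": (65_536, 260e18),
}
for name, (gpus, peak) in clusters.items():
    print(f"{name}: {peak / gpus / 1e15:.2f} PFLOPS per GPU")
# -> ~3.97 PFLOPS in both cases, consistent with ~4 PFLOPS FP8 (sparse) peak
```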


OCI Superclusters with NVIDIA GB200 NVL72 liquid-cooled bare-metal instances will use NVLink and NVLink Switch to enable up to 72 Blackwell GPUs to communicate with each other at an aggregate bandwidth of 129.6 TB/s in a single NVLink domain.
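
The 129.6 TB/s figure follows from 72 GPUs times the 1.8 TB/s of per-GPU bandwidth that fifth-generation NVLink provides; a quick check, with the per-GPU number taken from NVIDIA's published NVLink specs rather than from this article:

```python
# NVL72 NVLink-domain aggregate bandwidth check.
GPUS_PER_DOMAIN = 72
NVLINK_BW_PER_GPU_TBPS = 1.8  # fifth-gen NVLink per-GPU bandwidth (NVIDIA spec)

print(f"{GPUS_PER_DOMAIN * NVLINK_BW_PER_GPU_TBPS:.1f} TB/s")  # -> 129.6 TB/s
```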

NVIDIA Blackwell GPUs, available in the first half of 2025, will use fifth-generation NVLink, NVLink Switch, and cluster networking to enable GPU-to-GPU communication in a single cluster.