Project Ceiba, Amazon Web Services' effort to build the world's largest cloud AI supercomputer, just got bigger.

Originally announced last November with 16,384 Grace Hopper Superchips in the GH200 NVL32 configuration, the system will now use Nvidia's latest GPU, Blackwell.

The platform is expected to be used by Nvidia itself for research and development.

Nvidia's GB200 NVL72 – Nvidia

In total, Ceiba will pack 20,736 B200 GPUs alongside 10,368 Grace CPUs.

Ceiba will be built using the new liquid-cooled GB200 NVL72 platform with fifth-generation NVLink. It will also use fourth-generation EFA networking, providing up to 800 Gbps per superchip.

This, Nvidia claims, all adds up to 414 exaflops of AI compute (likely at FP4 precision), a 6x performance increase over the previously planned Hopper-based version.
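The headline figures are roughly self-consistent, as a quick back-of-the-envelope check shows. The counts below come from the article; the per-GPU throughput it implies is an inference, not an official Nvidia specification.

```python
# Sanity-check of the quoted Ceiba figures (GPU/CPU counts and the 414
# exaflops claim are from the article; everything derived is an estimate).

B200_GPUS = 20_736
GRACE_CPUS = 10_368
GPUS_PER_NVL72_RACK = 72   # a GB200 NVL72 system links 72 Blackwell GPUs
TOTAL_AI_EXAFLOPS = 414

# Each GB200 superchip pairs one Grace CPU with two Blackwell GPUs.
gpus_per_cpu = B200_GPUS / GRACE_CPUS
print(gpus_per_cpu)  # 2.0

# Number of NVL72 racks implied by the GPU count.
racks = B200_GPUS / GPUS_PER_NVL72_RACK
print(racks)  # 288.0

# Implied per-GPU AI throughput, in petaflops.
pflops_per_gpu = TOTAL_AI_EXAFLOPS * 1000 / B200_GPUS
print(round(pflops_per_gpu, 2))  # ~20 PFLOPS per GPU
```

The roughly 20 petaflops per GPU that falls out of the 414-exaflop total is in line with the sparse FP4 throughput Nvidia has advertised for Blackwell, which is why FP4 is the likely basis for the claim.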

Nvidia's research and development teams will use Ceiba to advance AI for LLMs, graphics (image/video/3D generation) and simulation, digital biology, robotics, self-driving cars, climate prediction, and other workloads.

"AWS has really leaned into accelerated computing," Nvidia CEO and founder Jensen Huang said.