AI translation company DeepL plans to deploy an Nvidia DGX SuperPod with GB200 racks at EcoDataCenter's facility in Sweden.

The system is expected to be operational by mid-2025.


"DeepL has always been a research-led company, which has enabled us to develop language AI for translation that continues to outperform other solutions on the market," said Jarek Kutylowski, CEO and founder of DeepL.

"This latest investment in Nvidia accelerated computing will give our research and engineering teams the power necessary to continue innovating and bringing to market the language AI tools and features that our customers know and love us for."

The company previously deployed a DGX SuperPod with H100 GPUs at the same data center in northern Sweden. That system, DeepL Mercury, came in at number 41 in the latest Top500 ranking of the world's most powerful supercomputers.

Installed in 2023, DeepL Mercury has a Linpack performance of 21.85 petaflops and a theoretical peak of 33.85 petaflops.

"Customers using language AI applications expect nearly instant responses, making efficient and powerful AI infrastructure critical for both building and deploying AI in production," said Charlie Boyle, vice president of the Nvidia DGX platform.

"DeepL's deployment of the latest Nvidia DGX SuperPod will accelerate its language AI research and development, empowering users to communicate more effectively across languages and cultures."

The liquid-cooled GB200 SuperPod features 36 GB200 Superchips per rack, with each GB200 comprising one Grace CPU and two Blackwell GPUs.