Generative AI coding startup Magic has announced a partnership with Google Cloud to build two supercomputers on Google Cloud Platform.
The news follows the closure of a $320 million fundraising round for Magic, which saw participation from Alphabet’s CapitalG, Atlassian, Nat Friedman and Daniel Gross, and Sequoia.
The San Francisco-based company was founded in 2022 by Eric Steinberger and Sebastian De Ro, has raised $465 million to date, and was valued at $500 million in February 2024. The 23-person startup is focused on developing artificial intelligence (AI) models that write software and automate software development tasks.
According to a blog post, the partnership with Google Cloud will see the startup launch the Magic-G4 supercomputer – made up of Nvidia H100 Tensor Core GPUs – and the Magic-G5, which will be powered by Nvidia’s upcoming Grace Blackwell platform and will scale up to “tens of thousands of GPUs” over time.
Magic claims to have 8,000 H100s in its possession.
In a separate blog post, Google said these supercomputers will be able to achieve 160 exaflops of what is presumably AI performance. No further information about the supercomputers' technical specifications has been released.
Currently, the world’s most powerful supercomputer, Frontier, has a benchmark HPL (High-Performance Linpack) score of 1.102 exaflops. AI performance is typically measured in FP8, or 8-bit precision, calculations, whereas traditional compute performance is measured in double-precision, or FP64, calculations – the industry standard for large systems.
Therefore, a system boasting 100 petaflops of FP64 performance is, for example, considerably more powerful than one that has achieved 100 petaflops of FP8 performance, because each double-precision operation carries far more numerical precision.
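To illustrate the gap between the two measures, the short Python sketch below compares the relative precision implied by the mantissa width of FP64 and FP32 against the FP8 E4M3 format commonly quoted for AI accelerators. The format parameters are standard published values for these number formats, not figures from Magic or Google, and the snippet is purely illustrative.

```python
# Illustrative only: compares the relative precision (machine epsilon)
# implied by the mantissa width of double precision (FP64) versus the
# FP8 E4M3 format commonly quoted for AI accelerators.
# Mantissa widths are standard published values, not figures from the article.

FORMATS = {
    "FP64 (double precision)": 52,  # IEEE 754 binary64: 52 explicit mantissa bits
    "FP32 (single precision)": 23,  # IEEE 754 binary32: 23 explicit mantissa bits
    "FP8 E4M3 (AI workloads)": 3,   # 8-bit float with 3 mantissa bits
}

for name, mantissa_bits in FORMATS.items():
    # Machine epsilon: the gap between 1.0 and the next representable value.
    eps = 2.0 ** -mantissa_bits
    print(f"{name}: ~{eps:.2e} relative precision per operation")

# Roughly: an FP8 operation resolves about 1 part in 8, while an FP64
# operation resolves about 1 part in 4.5e15, which is why "exaflops"
# quoted at different precisions are not directly comparable.
```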
“We are excited to partner with Google and Nvidia to build our next-gen AI supercomputer on Google Cloud,” said Steinberger, Magic’s CEO and co-founder. “Nvidia’s GB200 NVL72 system will greatly improve inference and training efficiency for our models, and Google Cloud offers us the fastest timeline to scale, and a rich ecosystem of cloud services.”
In late August, Japan's Ministry of Education, Culture, Sports, Science and Technology (MEXT) announced that it was planning to build a successor to the country’s Fugaku supercomputer, which it claims will be the world’s first zettascale supercomputer – though that figure, again, is likely measured in AI flops rather than FP64.