CoreWeave will provide artificial intelligence (AI) compute to IBM for training its next generation of Granite models.
IBM will use CoreWeave's cloud platform to access Nvidia GB200 clusters built on GB200 NVL72 systems and interconnected with Nvidia Quantum-2 InfiniBand networking.
IBM's Granite models are open source, enterprise-ready AI models. As well as developing its own supercomputers, the company operates its own cloud platform through which it offers access to GPUs. IBM was recently reported to be nearing a deal to access AWS' GPUs.
Michael Intrator, CoreWeave CEO and co-founder, said the deal will help "push the boundaries of artificial intelligence."
He added: “This collaboration is a testament to CoreWeave’s ability to deliver some of the world’s most advanced AI cloud solutions and will combine our strengths in engineering and product development. We look forward to deepening our relationship with IBM to drive transformative innovation together.”
The supercomputer will use IBM's Storage Scale System, which is combined with NVMe technology. CoreWeave customers will also be able to access the IBM Storage platform via CoreWeave's cloud platform.
"CoreWeave’s cutting-edge platform can augment IBM’s organic capabilities to help build advanced, performant, and cost-efficient models for powering enterprise AI applications and agents," said Sriram Raghavan, VP of AI at IBM Research.
"In turn, IBM Storage is enabling a new world of possibilities for AI by offering IBM Storage Scale System to enhance CoreWeave’s comprehensive suite of developer-focused AI capabilities. And finally, as part of this collaboration, we will leverage this supercomputer to advance open technologies such as Kubernetes that will power AI computing in a hybrid cloud environment.”
The companies did not say which facilities the models will be trained in or how many clusters IBM will be using.
CoreWeave demonstrated a GB200 NVL72 system at one of its data centers in November of last year.
The company, which counts Microsoft as a customer, said it is one of the first major cloud providers to bring up an Nvidia GB200 NVL72 cluster based on Nvidia’s Blackwell GPUs, and that the cluster delivers up to 1.4 exaFLOPS of AI compute. On LinkedIn, the company suggested the cluster is hosted by Switch, with Dell named as a partner.
Earlier this week, CoreWeave launched its first data center spaces in the UK, located in Crawley and London Docklands.