On the back of Nvidia announcing its latest Blackwell line of GPUs, the hyperscale cloud providers have all announced plans to offer access to them later this year.

Oracle, Amazon, Microsoft, and Google have all said they will offer access to the new GPUs through their respective cloud platforms at launch. GPU-cloud providers Lambda and NexGen have said they will soon be offering access to Blackwell hardware.

Nvidia GB200 NVL72 – Nvidia

The launch of the H100 Hopper GPU saw niche cloud providers including CoreWeave and Cirrascale get first access, with H100 instances coming to the big cloud platforms later.

Malaysian conglomerate YTL, which recently moved into developing data centers, is also set to host and offer access to a DGX supercomputer.

Singaporean telco Singtel is also set to launch a GPU cloud service later this year.

Applied Digital, a US company previously focused on hosting cryptomining hardware, has also announced it will host Blackwell hardware.


Oracle said it plans to offer Nvidia’s Blackwell GPUs via its OCI Supercluster and OCI Compute instances. OCI Compute will adopt both the Nvidia GB200 Grace Blackwell Superchip and the Nvidia Blackwell B200 Tensor Core GPU.

Oracle also said Nvidia’s Oracle-hosted DGX Cloud cluster will consist of GB200 NVL72 systems, combining 72 Blackwell GPUs and 36 Grace CPUs with fifth-generation NVLink. Access will be available through GB200 NVL72-based instances.

“As AI reshapes business, industry, and policy around the world, countries and organizations need to strengthen their digital sovereignty in order to protect their most valuable data,” said Safra Catz, CEO of Oracle.

“Our continued collaboration with Nvidia and our unique ability to deploy cloud regions quickly and locally will ensure societies can take advantage of AI without compromising their security.”


Google announced it is adopting the new Nvidia Grace Blackwell AI computing platform for various internal deployments and will be one of the first cloud providers to offer Blackwell-powered instances.

The search and cloud company also said the Nvidia H100-powered DGX Cloud platform is now generally available on Google Cloud. The company said it will bring Nvidia GB200 NVL72 systems, which combine 72 Blackwell GPUs and 36 Grace CPUs interconnected by fifth-generation NVLink, to its cloud infrastructure in the future.

"The strength of our long-lasting partnership with Nvidia begins at the hardware level and extends across our portfolio - from state-of-the-art GPU accelerators, to the software ecosystem, to our managed Vertex AI platform," said Google Cloud CEO Thomas Kurian.

"Together with Nvidia, our team is committed to providing a highly accessible, open, and comprehensive AI platform for ML developers."


Microsoft also said it will be one of the first organizations to bring the Nvidia Grace Blackwell GB200 and advanced Nvidia Quantum-X800 InfiniBand networking to the cloud, offering them through its Azure cloud service.

Microsoft also announced the general availability of its Azure NC H100 v5 virtual machine (VM) series, based on the Nvidia H100 NVL platform, which is designed for midrange training and inferencing.

“Together with Nvidia, we are making the promise of AI real, helping drive new benefits and productivity gains for people and organizations everywhere,” said Satya Nadella, chairman and CEO, Microsoft.

“From bringing the GB200 Grace Blackwell processor to Azure to new integrations between DGX Cloud and Microsoft Fabric, the announcements we are making today will ensure customers have the most comprehensive platforms and tools across every layer of the Copilot stack, from silicon to software, to build their own breakthrough AI capability.”


Blackwell hardware is also coming to Amazon Web Services (AWS). The companies said AWS will offer the GB200 Grace Blackwell Superchip and B100 Tensor Core GPUs on its cloud platform.

AWS will offer the Blackwell platform, featuring GB200 NVL72, with 72 Blackwell GPUs and 36 Grace CPUs interconnected by fifth-generation NVLink. The cloud provider also plans to offer EC2 instances featuring the new B100 GPUs deployed in EC2 UltraClusters. GB200s will also be available on Nvidia’s DGX Cloud within AWS.

“The deep collaboration between our two organizations goes back more than 13 years, when together we launched the world’s first GPU cloud instance on AWS, and today we offer the widest range of Nvidia GPU solutions for customers,” said Adam Selipsky, CEO at AWS.

“Nvidia’s next-generation Grace Blackwell processor marks a significant step forward in generative AI and GPU computing. When combined with AWS’s powerful Elastic Fabric Adapter Networking, Amazon EC2 UltraClusters’ hyper-scale clustering, and our unique Nitro system’s advanced virtualization and security capabilities, we make it possible for customers to build and run multi-trillion parameter large language models faster, at massive scale, and more securely than anywhere else.”


In its own announcement, GPU cloud provider Lambda Labs said it would be ‘one of the first’ companies to deploy the latest Blackwell hardware.

The GB200 Grace Blackwell Superchip and B200 and B100 Tensor Core GPUs will be available through Lambda’s On-Demand & Reserved Cloud, and Blackwell-based DGX SuperPODs will be deployed in Lambda’s AI-Ready Data Centers.


NexGen, a GPU cloud and Infrastructure-as-a-Service provider, also announced it would be ‘among the first cloud providers’ to offer access to Blackwell hardware.

The company said it will provide these services as part of its AI Supercloud, which is itself planned for Q2 2024.

“Being one of the first Elite Cloud Partners in the Nvidia Partner Network to offer Nvidia Blackwell-powered products to the market marks a major milestone for our business,” said Chris Starkey, CEO of NexGen Cloud.

“Through Blackwell-powered solutions, we will be able to equip customers with the most powerful GPU offerings on the market, empowering them to drive innovation, whilst achieving unprecedented efficiencies. This will help unlock new opportunities across industries and enhance the way we use AI both now and in the future.”


Malaysia’s YTL, which is developing data centers in Johor, is moving to become an AI cloud provider.

The company this week announced the formation of YTL AI Cloud, a specialized provider of GPU-based computing. The new unit will deploy and manage “one of the world’s most advanced supercomputers” on Nvidia’s Grace Blackwell-powered DGX Cloud.

The YTL AI Supercomputer will reportedly exceed 300 exaflops of AI compute.

The supercomputer will be located in a facility at the 1,640-acre YTL Green Data Center Campus, Johor. The site will reportedly be powered via 500MW of on-site solar capacity.

YTL Power International Managing Director, Dato’ Seri Yeoh Seok Hong, said: “We are proud to be working with Nvidia and the Malaysian government to bring powerful AI cloud computing to Malaysia.

"We are excited to bring this supercomputing power to the Asia Pacific region, which has been home to many of the fastest-growing cloud regions and many of the most innovative users of AI in the world.”


In the US, Applied Digital also said it would be "among the pioneering cloud service providers" offering Blackwell GPUs. Further details weren't shared.

Applied develops and operates next-generation data centers across North America to cater to high-performance computing (HPC). It was previously focused on hosting cryptomining hardware. The company also has a cloud offering through Sai Computing.

“Applied Digital demonstrates a profound commitment to driving generative AI, showcasing a deep understanding of its transformative potential. By seamlessly integrating infrastructure, Applied breathes life into generative AI, recognizing the critical role of GPUs and supporting data center infrastructure in its advancement,” said Wes Cummins, CEO and chairman of Applied Digital.


Singaporean telco Singtel announced it will be launching its GPU-as-a-Service (GPUaaS) in Singapore and Southeast Asia in the third quarter of this year.

At launch, Singtel’s GPUaaS will be powered by Nvidia H100 Tensor Core GPU clusters operated in existing upgraded data centers in Singapore. In addition, Singtel - like everyone else - will be “among the world's first” to deploy GB200 Grace Blackwell Superchips.

Bill Chang, CEO of Singtel’s Digital InfraCo unit and its Nxera regional data center business, said: “We are seeing keen interest from the private and public sectors which are raring to deploy AI at scale quickly and cost-effectively.

"Our GPUaaS will run in AI-ready data centers specifically tailored for intense compute environments with purpose-built liquid-cooling technologies for maximum efficiency and lowest PUE, giving them the flexibility to deploy AI without having to invest and manage expensive data center infrastructure.”