Microsoft has acquired roughly twice as many Nvidia Hopper GPUs this year as any other major tech company.

Per a report in the FT, analysts from technology consultancy Omdia estimate that Microsoft bought 485,000 Hopper chips this year.

By comparison, Meta bought an estimated 224,000 Hopper chips, with ByteDance and Tencent ordering approximately 230,000 GPUs each, and xAI/Tesla purchasing approximately 200,000.

Of the big tech companies analyzed by Omdia, Amazon and Google came bottom of the table, purchasing 196,000 and 169,000 Hopper chips, respectively.

Apple was not included in Omdia’s report, but it has recently been reported that the company is working with Broadcom to develop its first AI-specific server chips to reduce its reliance on third-party GPUs.

Amazon, Google, Microsoft, and Meta have also all developed their own AI chips.

Launched in March 2022 to replace Nvidia’s Ampere architecture, the Hopper GPU has 80 billion transistors and was the first GPU to support PCIe Gen5 and the first to utilize HBM3.

It provides up to 30 teraflops of peak standard IEEE FP64 performance, 60 teraflops of peak FP64 tensor core performance, and 60 teraflops of peak FP32 performance.
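
To put those parts in context at the software level, the short CUDA sketch below (an illustrative example, not drawn from Omdia’s analysis) queries the runtime for each installed device and flags Hopper-generation GPUs, which report compute capability 9.x (the H100 is sm_90). The output formatting is just for demonstration.

```
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("No CUDA devices found\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // Hopper parts (e.g. the H100) report compute capability 9.x.
        const bool is_hopper = (prop.major == 9);
        std::printf("Device %d: %s, compute capability %d.%d, %.1f GB memory%s\n",
                    i, prop.name, prop.major, prop.minor,
                    prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0),
                    is_hopper ? " (Hopper)" : "");
    }
    return 0;
}
```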

Despite the expected arrival of Hopper's successor, Blackwell, in early 2025, demand for Hopper GPUs remains strong.

Speaking to DCD at Supercomputing 2024, Dion Harris, head of data center product marketing at Nvidia, said that although there has been a lot of hype around Blackwell, Hopper will continue to have “incredible value” among Nvidia customers.

“When Grace Blackwell comes to market next year, I think a lot of those [Hopper applications] will instantly carry forward, and you'll see a lot more excitement just in terms of performance and benefits. But I think Grace Hopper is really having a transformative impact in terms of how some of these applications are being developed and run,” he said.

Blackwell's ongoing delays 

Blackwell has also had an unfortunate start to life, experiencing an unexpected production error that forced Nvidia to announce it would be pushing deliveries back. Since then, reports have emerged that the AI processors were overheating when linked together in 72-GPU data center racks. The GB200 NVL72 configuration combines 72 Blackwell GPUs, 36 Grace CPUs, and nine NVLink switch trays, each of which houses two NVLink switches.
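
For readers keeping track of the arithmetic, the minimal sketch below encodes that rack make-up as plain constants. The grouping into 36 GB200 superchips, each pairing one Grace CPU with two Blackwell GPUs, is the commonly cited breakdown and is an assumption here rather than a figure from the reports above.

```
#include <cstdio>

// Minimal sketch of the GB200 NVL72 rack composition described above.
// The superchip grouping (1 Grace CPU + 2 Blackwell GPUs per GB200)
// is an assumed, commonly cited breakdown, not a figure from the reports.
struct Nvl72Rack {
    static constexpr int superchips         = 36; // GB200 superchips per rack
    static constexpr int gpus_per_superchip = 2;  // Blackwell GPUs per GB200
    static constexpr int switch_trays       = 9;  // NVLink switch trays
    static constexpr int switches_per_tray  = 2;  // NVLink switches per tray

    static constexpr int gpus()     { return superchips * gpus_per_superchip; }
    static constexpr int cpus()     { return superchips; }
    static constexpr int switches() { return switch_trays * switches_per_tray; }
};

// Sanity check: 36 superchips x 2 GPUs = the 72 GPUs that give the rack its name.
static_assert(Nvl72Rack::gpus() == 72, "NVL72 should expose 72 Blackwell GPUs");

int main() {
    std::printf("GB200 NVL72: %d GPUs, %d Grace CPUs, %d NVLink switches\n",
                Nvl72Rack::gpus(), Nvl72Rack::cpus(), Nvl72Rack::switches());
    return 0;
}
```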

And the problems don’t end there. According to a new report from TrendForce, the higher design specifications of Nvidia’s GB200 rack mean that the supply chain requires additional time for optimization and adjustment.

Despite Nvidia initially planning to start shipping Blackwell GPUs in the second half of 2024, TrendForce reported that there have been “only limited shipments” in Q4 2024. 

TrendForce further noted that because the requirements for the GB200’s high-speed interconnect interfaces and thermal design power (TDP) significantly exceed market norms, mass production has been pushed back, and “the peak shipment period for the GB200 full-rack system will be postponed to between Q2 and Q3 of 2025.”