Tesla claims it now has the world’s seventh-largest supercomputer by GPU count.

Tim Zaman, the company’s AI and Autopilot lead, tweeted on Friday: “We [Tesla] have recently upgraded our GPU supercomputer to 7,360 A-100(80GB) GPUs, making it Top-7 by GPU-count.”

The A100 GPUs are made by Nvidia; each has 80GB of graphics memory and a memory bandwidth of two terabytes per second.

[Image: Tesla’s GPU supercomputer – Tim Zaman]

The supercomputer is a precursor cluster for a project named ‘Dojo’. When the cluster was unveiled in 2021, Tesla claimed it was the fifth most powerful supercomputer in the world, with 5,760 Nvidia A100s across the system. This latest upgrade adds 1,600 GPUs, though the company’s self-estimated ranking has dropped two places.

Dojo was initially announced in 2019 as a ‘super-powerful training computer’ for video processing. CEO Elon Musk reaffirmed this in a 2020 tweet: “Tesla is developing a [neural network] training computer called Dojo to process truly vast amounts of video data. It’s a beast! … A truly useful exaflop at de facto FP32.”

While its precursor cluster relies on Nvidia A100 chips, Dojo itself will be built around Tesla’s own D1 chip. The D1 will support FP32, BFP16 (bfloat16), and a new format called CFP8 (configurable FP8). It will be optimized for machine learning workloads and will consist of 354 ‘training nodes’. Each chip will measure just 645 square millimeters and contain 50 billion transistors.
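To see why reduced-precision formats like bfloat16 trade accuracy for throughput, the sketch below (an illustration in plain Python, not Tesla’s implementation) converts a float to bfloat16 by keeping only the top 16 bits of its IEEE-754 float32 representation. Note that this sketch truncates the mantissa, whereas hardware typically rounds to nearest even.

```python
import struct

def to_bfloat16(x: float) -> float:
    # bfloat16 keeps the top 16 bits of an IEEE-754 float32:
    # 1 sign bit, 8 exponent bits, and 7 mantissa bits.
    # Same dynamic range as float32, far less precision.
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    truncated = bits & 0xFFFF0000  # zero the low 16 mantissa bits
    return struct.unpack(">f", struct.pack(">I", truncated))[0]

print(to_bfloat16(3.14159265))  # 3.140625 — only ~2-3 decimal digits survive
```

Because the exponent field is unchanged from float32, bfloat16 can represent very large and very small magnitudes, which is why it is popular for neural-network training despite the coarse mantissa.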

Further details about the Dojo supercomputer have not been released, nor has a planned launch date.
