Tesla CEO Elon Musk claimed that the company will spend more than $1 billion over the next year on its Dojo supercomputer.

The supercomputer uses the custom Dojo D1 chip architecture, designed in-house by the automaker for training its self-driving systems.

[Image: Tesla's first Dojo cabinet – Tesla]

"Through the next year, it’s well over $1 billion in Dojo," Musk said in an earnings call. "We’ve got a truly staggering amount of video data to do training on. And this is another thing - in order to copy us, you also need to spend billions of dollars on training compute."

He added: "What Dojo is designed to do is optimize for video training. It’s not optimized for [large language models]. It’s optimized for video training."

On the call, Tesla CFO Zachary Kirkhorn later clarified that the $1 billion figure includes research and development costs, so it could cover chip R&D as well as data center spend. When Tesla began installing Dojo last year, the company said it had created a fully custom-designed cooling distribution unit to support densities of more than 200kW per cabinet. The cabinets themselves, it said, were also custom.

At the time, the company said that Dojo would feature around 3,000 custom D1 chips, for a total of 1.1 exaflops (BF16/CFP8) of performance.
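As a back-of-envelope check of that figure: Tesla has separately quoted roughly 362 teraflops of BF16/CFP8 per D1 chip. Assuming that per-chip number (it is not stated in this article), the 1.1-exaflops total follows directly; the sketch below is illustrative arithmetic only.

```python
# Back-of-envelope check of Tesla's stated Dojo performance figure.
# The ~362 TFLOPS BF16/CFP8 per-D1-chip figure is an assumption drawn from
# Tesla's own chip presentations; the 3,000-chip and 1.1-exaflops numbers
# come from the article above.

TFLOPS_PER_D1 = 362        # BF16/CFP8 per chip (assumed)
NUM_D1_CHIPS = 3_000       # chips in the initial Dojo deployment

total_tflops = TFLOPS_PER_D1 * NUM_D1_CHIPS
total_exaflops = total_tflops / 1_000_000   # 1 exaflops = 1,000,000 teraflops

print(f"Aggregate: {total_exaflops:.2f} exaflops (BF16/CFP8)")
# -> Aggregate: 1.09 exaflops, consistent with the ~1.1 exaflops Tesla cites
```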

Alongside Dojo, Tesla is known to have a significant GPU footprint - as of 2021 it had 10,000 GPUs across three HPC clusters - and Musk said that number is only set to grow. "We’ll actually take Nvidia hardware as fast as Nvidia will deliver it to us," he said. "Tremendous respect for Jensen [Huang, CEO and founder] and Nvidia. They’ve done an incredible job.

"And frankly, I don’t know, if they could deliver us enough GPUs, we might not need Dojo - but they can’t."

Between Dojo and the GPU deployments, Musk claimed that the company "may reach in-house neural net training capability of 100 exaflops by the end of next year." The benchmark used to make that claim was not shared. Musk has a history of promising to meet large targets and underdelivering, so the pronouncement should be taken with a large grain of salt.
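To put the 100-exaflops claim in context using only the figures above - and assuming it refers to the same BF16/CFP8 measure as the 1.1-exaflops Dojo configuration, which Tesla did not confirm - the target works out to roughly 91 deployments of that initial size:

```python
# Rough scale check of the "100 exaflops by end of next year" claim.
# Assumes (unconfirmed) that the target uses the same BF16/CFP8 measure
# as the 1.1-exaflops, ~3,000-chip Dojo configuration described above.

TARGET_EXAFLOPS = 100
EXAFLOPS_PER_DEPLOYMENT = 1.1   # initial Dojo figure from the article
CHIPS_PER_DEPLOYMENT = 3_000

deployments_needed = TARGET_EXAFLOPS / EXAFLOPS_PER_DEPLOYMENT
d1_equivalent_chips = deployments_needed * CHIPS_PER_DEPLOYMENT

print(f"~{deployments_needed:.0f} Dojo-sized deployments, "
      f"or roughly {d1_equivalent_chips:,.0f} D1-equivalent chips")
# -> ~91 Dojo-sized deployments, or roughly 272,727 D1-equivalent chips
```

In practice, Musk said the capacity would come from a mix of Dojo and Nvidia hardware, so the D1-equivalent count is a scale indicator rather than a build plan.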

"We expect to use both, Nvidia and Dojo, to be clear," Musk continued. "But we just see demand for really vast training resources."

Musk has previously said that Tesla might offer Dojo to other businesses as a cloud resource, but the company has yet to do so.

At the same time, Musk has acquired around 10,000 GPUs for Twitter (rebranded as X Corp) and launched a new AI startup (known as X.AI), which will require substantial GPU resources.