OpenAI is set to spend billions of dollars on training and inference this year, and may be forced to raise more money to cover growing losses.

The Information reports that, as of March, the company was on track to spend nearly $4bn this year on Microsoft's servers to run inference workloads for ChatGPT.

[Image: A cloud with a data center nestling on it, digital art – DCD/DALL·E 2]

A person familiar with the matter told the publication that OpenAI has the equivalent of 350,000 servers containing Nvidia A100 chips for inference, with around 290,000 of those servers used for ChatGPT. The hardware is being run at near full capacity.

Training ChatGPT and new models could cost as much as $3bn this year, according to financial documents seen by the publication. The company has ramped up the training of new AI models faster than it had originally planned.

For both inference and training, OpenAI gets heavily discounted rates from Microsoft Azure. Microsoft has charged OpenAI about $1.30 per A100 server per hour, well below standard rates.
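As a rough cross-check (not a calculation from the report itself), the disclosed server count and hourly rate are consistent with the ~$4bn inference figure, assuming the fleet runs near-continuously, as the report indicates:

```python
# Back-of-the-envelope check: do the reported numbers line up?
# Assumes near-continuous utilization, as the report suggests.
servers = 350_000        # A100 server-equivalents used for inference
rate_per_hour = 1.30     # discounted Azure rate, $ per server per hour
hours_per_year = 24 * 365

annual_cost = servers * rate_per_hour * hours_per_year
print(f"Estimated annual inference cost: ${annual_cost / 1e9:.2f}bn")
# -> roughly $3.99bn, in line with the ~$4bn figure reported
```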

The company now employs about 1,500 people, a workforce that could cost $1.5bn as it continues to grow, The Information estimates. OpenAI had originally projected workforce costs of $500m for 2023, when it planned to roughly double headcount to around 800 by the end of that year.

The company is bringing in about $2bn annually from ChatGPT, and could be set to bring in nearly $1bn more from charging for access to its large language models (LLMs).

OpenAI recently generated $283m in total revenue per month, which annualizes to roughly $3.4bn and could mean full-year sales of between $3.5bn and $4.5bn as revenue continues to grow.

Against roughly $8.5bn in combined inference, training, and workforce costs, that would leave a shortfall of about $5bn, with the company likely in need of fresh funds within the next 12 months.
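A minimal sketch of the arithmetic behind that shortfall estimate, using the figures reported above (The Information does not spell out the exact composition of its estimate, but these reported costs against the low end of projected revenue reproduce the ~$5bn figure):

```python
# How the ~$5bn shortfall estimate falls out of the reported figures.
# All values in $bn; revenue uses the low end of the $3.5bn-$4.5bn
# full-year projection.
inference = 4.0    # Azure inference spend for ChatGPT
training = 3.0     # training ChatGPT and new models
workforce = 1.5    # ~1,500 staff
revenue = 3.5      # low end of projected full-year sales

shortfall = (inference + training + workforce) - revenue
print(f"Implied shortfall: ${shortfall:.1f}bn")  # -> $5.0bn
```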