Generative AI company OpenAI is considering removing a clause that limited Microsoft's control over the company should it achieve artificial general intelligence (AGI).

When Microsoft first invested in the then-non-profit in 2019, it gained access to the majority of OpenAI's technology, data, and model weights. However, the two firms agreed that should AGI be achieved, Microsoft's access would be revoked.


The provision was enacted as a nod to the company's founding vision of building a pathway to AGI that was not controlled by Big Tech.

That vision has shifted significantly over the past few years, as the company pivots to a for-profit business in a still-ongoing restructuring. CEO and co-founder Sam Altman initially did not take a stake in the business, but is now believed to be set to gain as much as seven percent of a company last valued at $150 billion.

Microsoft has invested around $13bn into the company, primarily in the form of cloud credits, with the funds and capacity used to train and run inference on OpenAI's models.

But, as compute demands continue to scale, the financial requirements are growing just as fast. OpenAI has openly pushed for 5GW data centers to train its future models, but such a project could cost at least $100bn.

Raising such funds from Microsoft or other tech giants could prove a challenge while the AGI clause remains in place, prompting the discussions about removing it, the Financial Times reports.

“When we started, we had no idea we were going to be a product company or that the capital we needed would turn out to be so huge,” Altman told a New York Times conference on Wednesday.

“If we knew those things, we would have picked a different structure."

He added: "We’ve also said that our intention is to treat AGI as a mile marker along the way. We’ve left ourselves some flexibility because we don’t know what will happen."

Altman also downplayed the importance of AGI, something that OpenAI previously saw as an existential threat to humanity. “My guess is we will hit AGI sooner than most people in the world think and it will matter much less,” he said.

"And a lot of the safety concerns that we and others expressed actually don’t come at the AGI moment. AGI can get built, the world mostly goes on in mostly the same way, things grow faster, but then there is a long continuation from what we call AGI to what we call superintelligence."

Last month, AI governance researcher Richard Ngo quit the company over concerns that the business had departed from its "mission of making AGI go well."

The company has experienced a number of high-profile departures after Altman was fired and rehired by the board last year.

Co-founder Ilya Sutskever left to start his own company, co-founder John Schulman left to join Anthropic, former safety leader Jan Leike departed, co-founder and president Greg Brockman took a sabbatical, and CTO Mira Murati left to start her own company. Chief research officer Bob McGrew and VP of research (post-training) Barret Zoph both quit in September, among others.