Leading artificial intelligence safety researcher Eliezer Yudkowsky has called for a cap on compute power, said GPU sales should be tracked, and believes we should be prepared to blow up rogue data centers.

Yudkowsky, a research fellow at the Machine Intelligence Research Institute (MIRI) best known for popularizing the idea of friendly artificial intelligence, has written an article in Time Magazine arguing that humanity's future hangs in the balance.

[Image: A server rack on fire in the middle of a climate apocalypse – DCD/DALL·E 2]

"If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter," he said. In a podcast, the researcher revealed that he cried all night when OpenAI was founded.

OpenAI's CEO Sam Altman previously said that Yudkowsky had done more to accelerate artificial general intelligence (AGI) than anyone else, due to his research on what was possible. "Certainly he got many of us interested in AGI, helped DeepMind get funded at a time when AGI was extremely outside the Overton Window, was critical in the decision to start OpenAI, etc," Altman tweeted last month.

"It is possible at some point he will deserve the Nobel Peace Prize for this - I continue to think short timelines and slow takeoff is likely the safest quadrant of the short/long timelines and slow/fast takeoff matrix."

Yudkowsky is a researcher in the field of AI alignment, which is about how to steer AI systems towards their designers' intended goals and interests, so that they don't accidentally or intentionally cause harm.

OpenAI has said that it plans to solve the alignment problem by building an AI that can help develop alignment for other AIs. "Just hearing that this is the plan ought to be enough to get any sensible person to panic," Yudkowsky said in the Time piece.

The researcher also criticized OpenAI's lack of transparency, saying that it was hard to understand how close we are to disaster or AI self-awareness due to the company keeping the internal workings of its systems a secret.

He noted that an open letter published this week by researchers, CEOs, and AI figures (including Elon Musk, Stuart Russell, and Yoshua Bengio) that calls for a six-month moratorium on giant AI experiments "is an improvement on the margin," but "is understating the seriousness of the situation and asking for too little to solve it."

Effectively studying AI safety could take decades, he warned. "And the thing about trying this with superhuman intelligence is that if you get that wrong on the first try, you do not get to learn from your mistakes, because you are dead. Humanity does not learn from the mistake and dust itself off and try again, as in other challenges we’ve overcome in our history, because we are all gone."

Instead of the six-month pause, Yudkowsky said that the "moratorium on new large training runs needs to be indefinite and worldwide."

He added: "Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere.

"Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue data center by airstrike."

He concluded: "We are not ready. We are not on track to be significantly readier in the foreseeable future. If we go ahead on this everyone will die, including children who did not choose this and did not do anything wrong."
