A new paper co-authored by the former CEO of Google has outlined a future where AI training data centers could be blown up by foreign nations.
Eric Schmidt, along with Scale AI CEO Alexandr Wang and the Center for AI Safety's Dan Hendrycks, warned that "destabilizing AI developments could rupture the balance of power and raise the odds of great-power conflict."
The paper lays out the concept of Mutual Assured AI Malfunction (MAIM), modeled on nuclear mutual assured destruction (MAD), where any "aggressive bid for unilateral AI dominance is met with preventive sabotage by rivals."
This could involve espionage, cyberattacks, or kinetic strikes on data centers and their supporting infrastructure and supply chain, the authors argue.
"Well-placed or blackmailed insiders can tamper with model weights or training data or AI chip fabrication facilities, while hackers quietly degrade the training process so that an AI’s performance when it completes training is lackluster," they state.
"When subtlety proves too constraining, competitors may escalate to overt cyberattacks, targeting data center chip-cooling systems or nearby power plants in a way that directly - if visibly - disrupts development. Should these measures falter, some leaders may contemplate kinetic attacks on data centers, arguing that allowing one actor to risk dominating or destroying the world are graver dangers, though kinetic attacks are likely unnecessary."
With kinetic attacks a possibility, the authors suggest that nations build data centers in remote locations to minimize potential collateral damage. For states looking to disrupt a rival's efforts, the paper points to the cyber route first: "States could also poison data, corrupt model weights and gradients, disrupt software that handles faulty GPUs... training runs are non-deterministic and their outcomes are difficult to predict even without bugs, providing cover to many cyberattacks."
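To illustrate why training's inherent randomness provides that cover, here is a minimal, hypothetical sketch (not from the paper) of gradient corruption in a toy training run. The task, the five percent corruption rate, and the noise scale are all invented for the example; the point is that occasional attacker-injected noise looks much like ordinary run-to-run variance.

```python
import numpy as np

# Unseeded generator: every run already lands on slightly different weights
rng = np.random.default_rng()

# Toy regression task: y = 3x + 1 plus observation noise
X = rng.normal(size=(256, 1))
y = 3.0 * X[:, 0] + 1.0 + rng.normal(scale=0.1, size=256)

def train(corrupt=False, steps=2000, lr=0.05):
    w, b = 0.0, 0.0
    for _ in range(steps):
        idx = rng.integers(0, len(X), size=32)   # minibatch sampling is already stochastic
        xb, yb = X[idx, 0], y[idx]
        err = (w * xb + b) - yb
        gw, gb = 2 * np.mean(err * xb), 2 * np.mean(err)
        if corrupt and rng.random() < 0.05:
            # Attacker: occasional noise at roughly the gradient's own scale,
            # hard to tell apart from ordinary run-to-run variance
            gw += rng.normal(scale=abs(gw) + 1e-8)
            gb += rng.normal(scale=abs(gb) + 1e-8)
        w -= lr * gw
        b -= lr * gb
    return w, b

print("clean:     w=%.3f b=%.3f" % train())
print("corrupted: w=%.3f b=%.3f" % train(corrupt=True))
```

Because no two runs produce identical weights even without interference, a defender comparing a sabotaged run against a clean one cannot easily tell the attack apart from noise.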
Drawing on lessons from nuclear arms control, the authors believe that some of these attacks could be avoided with more transparency. Distinguishing between destabilizing AI projects and acceptable-use facilities could prevent consumer AI data centers from being targeted.
Similarly, AI-assisted inspections could be used to confirm that AI projects "abide by declared constraints without revealing proprietary code or classified material."
While intangible aspects like algorithms and data are hard to control, semiconductors are physical assets, giving nations power over their production and distribution.
The authors call for better tracking of every high-end AI chip sale, and more enforcement officers to ensure that chips actually go where they are meant to - instead of, for example, being secretly diverted to China. "To assist enforcement officers, tamper-evident camera feeds from data centers can confirm that declared AI chips remain on-site, exposing any smuggling."
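As a rough illustration of what "tamper-evident" means in practice, here is a hypothetical sketch (not from the paper) of a hash-chained custody log, the basic construction behind many tamper-evident records: each entry commits to the previous one, so any edit or deletion breaks the chain. The record fields and chip IDs are invented for the example.

```python
import hashlib, json, time

def append_entry(log, record):
    # Each entry commits to the hash of the previous entry
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"record": record, "prev": prev_hash, "ts": time.time()}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})

def verify(log):
    # Recompute every hash; a retroactive edit or deletion breaks the chain
    prev = "0" * 64
    for entry in log:
        body = {k: entry[k] for k in ("record", "prev", "ts")}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != digest:
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, {"chip_id": "H100-0001", "site": "declared-datacenter-A"})
append_entry(log, {"chip_id": "H100-0001", "site": "declared-datacenter-A"})
print(verify(log))                              # True
log[0]["record"]["site"] = "undeclared-site"    # retroactive tampering
print(verify(log))                              # False
```

The same idea, applied to signed camera frames rather than custody records, is one way a feed the inspectors don't control could be made tamper-evident.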
Any chip reported as inoperable or obsolete would have to undergo verified decommissioning, much like the disposal of chemical or nuclear materials, so that it cannot be resold on the black market.
The chip industry could also build in firmware-level protections, such as having chips deactivate if they find themselves in the wrong country. Chipmakers could similarly require periodic authorization before a chip will run, or restrict how many other chips any one chip can be networked with.
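Here is a hypothetical sketch (not from the paper) of the periodic-authorization idea: the chip holds a signed, time-limited token and refuses to run once it expires. A real implementation would live in firmware and verify asymmetric signatures against a hardware root of trust; the shared key, token format, and chip IDs below are invented for illustration.

```python
import hmac, hashlib, time

# Invented shared secret; real firmware would verify public-key signatures
REGULATOR_KEY = b"shared-secret-for-illustration-only"

def issue_token(chip_id: str, valid_for: int = 7 * 24 * 3600) -> str:
    # Regulator side: sign "chip_id:expiry" so the token is bound to one chip
    expiry = int(time.time()) + valid_for
    msg = f"{chip_id}:{expiry}".encode()
    sig = hmac.new(REGULATOR_KEY, msg, hashlib.sha256).hexdigest()
    return f"{chip_id}:{expiry}:{sig}"

def authorized(chip_id: str, token: str) -> bool:
    # Chip side: check the signature, the chip binding, and the expiry
    try:
        tok_id, expiry, sig = token.split(":")
    except ValueError:
        return False
    expected = hmac.new(REGULATOR_KEY, f"{tok_id}:{expiry}".encode(),
                        hashlib.sha256).hexdigest()
    return (tok_id == chip_id
            and hmac.compare_digest(sig, expected)
            and int(expiry) > time.time())

token = issue_token("H100-0001")
print(authorized("H100-0001", token))   # True, until the token expires
print(authorized("H100-9999", token))   # False: token is bound to another chip
```

A chip that cannot renew its token, because it has been smuggled out of reach of the issuing authority, would simply stop accepting work.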
While the US could enforce some of these restrictions (something that seems unlikely with the current administration), "the dependence on Taiwan for advanced AI chips presents a critical vulnerability" for America.
"A blockade or invasion may spell the end of the West’s advantage in AI," the authors said. "To mitigate this foreseeable risk, Western countries should develop guaranteed supply chains for AI chips. Though this requires considerable investment, it is potentially necessary for national competitiveness."
The Biden-era US CHIPS Act aimed to fund such development, but is being dismantled by President Trump, who favors tariffs as an incentive structure.
If any nation achieves superintelligence first, the impact could be profound, the authors claim. "Superintelligence is not merely a new weapon, but a way to fast-track all future military innovation. A nation with sole possession of superintelligence might be as overwhelming as the Conquistadors were to the Aztecs.
"If a state achieves a strategic monopoly through AI, it could reshape world affairs on its own terms. An AI-driven surveillance apparatus might enable an unshakable totalitarian regime, transforming governance at home and leverage abroad."
While the US and China are locked in a great-power struggle, the paper again turns to the Cold War to find common ground: keeping dangerous weapons out of the hands of terrorists.
"AI holds the potential to reshape the balance of power. In the hands of state actors, it can lead to disruptive military capabilities and transform economic competition. At the same time, terrorists may exploit its dual-use nature to orchestrate attacks once within the exclusive domain of great powers. It could also slip free of human oversight."
Avoiding this will require the world's governments to work together to track and limit AI development.
"The United States cooperated with both the Soviet Union and China on nuclear, biological, and chemical arms not from altruism but from self-preservation. If the US begins to treat advanced AI chips like fissile material, it may likewise encourage China to do the same."