Nuclear weapons experts oppose AI launch control despite inevitable integration

Nuclear weapons experts who gathered at the University of Chicago in July were unanimous that artificial intelligence will inevitably become integrated into nuclear weapons systems, though none could predict exactly how that integration will unfold. The consensus among Nobel laureates, scientists, and former government officials underscores a critical shift in global security as AI permeates the most dangerous weapons on Earth.

What you should know: While experts agree AI integration is inevitable, they remain united in opposing AI control over nuclear launch decisions.

  • “In this realm, almost everybody says we want effective human control over nuclear weapon decisionmaking,” says Jon Wolfsthal, a nonproliferation expert and former Obama administration official.
  • Current nuclear launch protocols require multiple human decisions and physical actions, including two operators turning keys simultaneously in missile silos.
  • No expert believes large language models like ChatGPT will receive nuclear codes anytime soon.
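The two-operator key-turn protocol mentioned above can be sketched in a few lines. This is a toy illustration only, assuming a simple time-window rule (both keys must turn within a couple of seconds of each other); the names, the window value, and the function itself are hypothetical and not drawn from any real launch-control system:

```python
# Toy sketch of a two-operator launch interlock. Assumption: "simultaneous"
# means both key turns fall within a fixed time window of each other.
TURN_WINDOW_SECONDS = 2.0  # illustrative value, not an actual specification

def keys_turned_simultaneously(t_key_a: float, t_key_b: float,
                               window: float = TURN_WINDOW_SECONDS) -> bool:
    """Return True only if both operators turned their keys within the window."""
    return abs(t_key_a - t_key_b) <= window

# A single operator acting alone, or two operators out of sync,
# cannot satisfy the check:
print(keys_turned_simultaneously(0.0, 1.5))   # within the window
print(keys_turned_simultaneously(0.0, 10.0))  # too far apart
```

The point of the design is that no single human (or single automated component) can complete the action alone, which is precisely the property experts worry automation could erode.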

The big picture: AI is already being considered for nuclear command and control systems, with military leaders actively pursuing AI-enabled decision support tools.

  • Air Force General Anthony J. Cotton announced last year that nuclear forces are “developing artificial intelligence or AI-enabled, human led, decision support tools to ensure our leaders are able to respond to complex, time-sensitive scenarios.”
  • Bob Latiff, a retired US Air Force major general who helps set the Doomsday Clock, compares AI’s spread to electricity: “It’s going to find its way into everything.”

Key concerns: Experts worry about AI creating vulnerabilities rather than improving nuclear security.

  • Wolfsthal’s primary concern isn’t rogue AI starting wars, but rather that “somebody will say we need to automate this system and parts of it, and that will create vulnerabilities that an adversary can exploit.”
  • AI systems operating as “black boxes” make it impossible to understand their decision-making processes, which experts consider unacceptable for nuclear weapons.
  • Current US nuclear policy requires “dual phenomenology”—confirmation from both satellite and radar systems—to verify nuclear attacks, and experts question whether AI should fulfill either role.
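The "dual phenomenology" rule described above reduces to a simple logical requirement: two independent sensor types must both report an attack before it is treated as confirmed. A minimal sketch, with illustrative names only:

```python
# Minimal sketch of the dual-phenomenology rule: a warning counts as
# confirmed only if two independent phenomenologies (satellite AND radar)
# both report it. Function and parameter names are illustrative.
def attack_confirmed(satellite_warning: bool, radar_warning: bool) -> bool:
    """Both independent sensor types must agree before escalation."""
    return satellite_warning and radar_warning

print(attack_confirmed(True, False))  # one sensor alone is not enough
print(attack_confirmed(True, True))   # both agree: confirmed
```

The experts' question is whether an AI system could legitimately stand in for either of these independent confirmation channels, given that its failure modes may be opaque and correlated.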

The human element: Nuclear experts emphasize the irreplaceable value of human judgment in nuclear decisions.

  • Stanford professor Herb Lin references Stanislav Petrov, the Soviet officer who prevented nuclear war in 1983 by questioning his computer systems and choosing not to report a false alarm.
  • “Can we expect humans to be able to do that routinely? Is that a fair expectation?” Lin asks, noting that AI cannot “go outside your training data” to make such judgment calls.
  • Latiff worries about AI reinforcing confirmation bias and reducing meaningful human control: “If Johnny gets killed, who do I blame?”

What they’re saying: Experts express frustration with current AI rhetoric and policy approaches.

  • “The conversation about AI and nukes is hampered by a couple of major problems. The first is that nobody really knows what AI is,” Wolfsthal explains.
  • Lin criticizes the Pentagon’s comparison of AI development to the Manhattan Project: “I think it’s awful. For one thing, I knew when the Manhattan Project was done, and I could tell you when it was a success, right? We exploded a nuclear weapon. I don’t know what it means to have a Manhattan Project for AI.”

Policy implications: The Trump administration and Pentagon have positioned AI as a national security priority, framing development as an arms race against China.

  • The Department of Energy declared in May that “AI is the next Manhattan Project, and the UNITED STATES WILL WIN.”
  • This competitive framing concerns experts who emphasize the need for careful consideration over speed in nuclear weapons integration.
