Artificial intelligence should not control nuclear weapons use, officials say


In this Thursday, Aug. 27, 2015, photo, former Soviet missile defense forces officer Stanislav Petrov poses at his home in Fryazino, Moscow region, Russia. On Sept. 26, 1983, despite data from the Soviet Union’s early-warning satellites over the United States indicating an incoming launch, Petrov, then a Soviet military officer, decided to treat it as a false alarm. Had he decided otherwise, the Soviet leadership could have responded by ordering a retaliatory nuclear strike on the United States. (AP Photo/Pavel Golovkin) Pavel Golovkin/AP


Artificial intelligence systems should not control “actions critical” to the use of nuclear weapons, according to a new U.S. proposal on the military applications of the emerging technology.

“States should maintain human control and involvement for all actions critical to informing and executing sovereign decisions concerning nuclear weapons employment,” the State Department declared Thursday.

That statement was a cardinal proposal in a political declaration unveiled by Secretary of State Antony Blinken’s team during a conference on the military implications of artificial intelligence at The Hague. Dutch and South Korean officials hosted the conference against the backdrop of artificial intelligence chatbot launches that have stoked new unease about the specter of military conflicts waged with weapons systems that can operate independently of humans.

“AI is everywhere. It’s on our children’s phones, where ChatGPT is their new best friend where homework is concerned,” Dutch Foreign Affairs Minister Wopke Hoekstra said Wednesday at the launch of the conference. “Yet AI also has the potential to destroy within seconds. And that’s worrying: considering that over the past decades, only prudence has prevented nuclear escalation. How will this develop with technology that can make decisions faster than any of us can think?”


Cold War history bears out the importance of human decision-making for averting nuclear war. One famous incident in 1983 involved a false alarm from Soviet systems that appeared to detect an incoming strike from the United States. The Soviet officer on duty surmised correctly that the detection system had malfunctioned and delayed reporting the alarm to his superiors.

“There was no rule about how long we were allowed to think before we reported a strike. But we knew that every second of procrastination took away valuable time; that the Soviet Union’s military and political leadership needed to be informed without delay,” the officer, Stanislav Petrov, told the BBC in 2013. “Twenty-three minutes later I realized that nothing had happened. If there had been a real strike, then I would already know about it. It was such a relief.”

Artificial intelligence research could open the door to weapons systems that sidestep such human deliberation, dozens of countries acknowledged.

“We note that AI can be used to shape and impact decision making, and we will work to ensure that humans remain responsible and accountable for decisions when using AI in the military domain,” more than 50 states agreed in a “call to action” released at the Responsible AI in the Military Domain summit this week. “We recognise that failure to adopt AI in a timely manner may result in a military disadvantage, while premature adoption without sufficient research, testing and assurance may result in inadvertent harm. We see the need to increase the exchange of lessons learnt regarding risk mitigation practices and procedures.”

The signatories of that broad statement included four of the five nuclear powers that wield vetoes at the U.N. Security Council: the U.S., China, France, and the United Kingdom. The fifth, Russia, was not invited to the conference due to its invasion of Ukraine.

The U.S. offered a more specific statement of 12 proposals designed to put guardrails around the military uses of artificial intelligence, especially by preserving “a responsible human chain of command and control” over AI-powered weapons.

“States should design and engineer military AI capabilities so that they possess the ability to detect and avoid unintended consequences and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior,” the U.S. political declaration says. “States should also implement other appropriate safeguards to mitigate risks of serious failures.”

Yet the voluntary political declaration is a far cry from a treaty that might restrain militaries around the world.


“The aim of the declaration is to respond to rapid advancements in technology by beginning a process of building international consensus around responsible behavior and to guide states’ development and deployment and use of military AI,” State Department deputy spokesman Vedant Patel said Thursday. “We encourage other states to join us in building an international consensus around the principles we articulated in our political declaration.”

© 2023 Washington Examiner