The "nuclear football" follows the president on trips. It allows the president to authorize a nuclear launch.
If artificial intelligences controlled nuclear weapons, all of us could be dead.
That is no exaggeration. In 1983, Soviet Air Defense Forces Lieutenant Colonel Stanislav Petrov was monitoring nuclear early warning systems when the computer concluded, with the highest confidence, that the United States had launched a nuclear attack. But Petrov was doubtful: The computer estimated that only a handful of nuclear weapons were incoming, when a surprise attack would more plausibly entail an overwhelming first strike. He also didn't trust the new launch detection system, and the radar system offered no corroborating evidence. Petrov decided the message was a false positive and did nothing. The computer was wrong; Petrov was right. The false signals came from the early warning system mistaking the sun's reflection off the clouds for missiles. But if Petrov had been a machine, programmed to respond automatically whenever confidence was sufficiently high, that error would have started a nuclear war.
Militaries are increasingly incorporating autonomous functions into weapons systems, though as far as is publicly known, they haven't yet turned the nuclear launch codes over to an AI system. Russia has developed a nuclear-armed, nuclear-powered torpedo that is autonomous in some manner that has not been publicly disclosed, and defense thinkers in the United States have proposed automating the launch decision for nuclear weapons.
There is no guarantee that some military won't put AI in charge of nuclear launches; international law doesn't specify that there should always be a Petrov guarding the button. That's something that should change, soon.
How autonomous nuclear weapons could go wrong. The huge problem with autonomous nuclear weapons, and really all autonomous weapons, is error. Machine learning-based artificial intelligences, the current AI vogue, rely on large amounts of data to perform a task. Google's AlphaGo program beat the world's greatest human Go players, experts at the ancient Chinese game that's even more complex than chess, by playing millions of games against itself to learn the game. For a constrained game like Go, that worked well. But in the real world, data may be biased or incomplete in all sorts of ways. For example, one hiring algorithm concluded that being named Jared and playing high school lacrosse was the most reliable indicator of job performance, probably because it picked up on human biases in the data.
In a nuclear weapons context, a government may have little data about adversary military platforms; existing data may be structurally biased, by, for example, relying on satellite imagery; or data may not account for obvious, expected variations, such as imagery taken during foggy, rainy, or overcast weather.
The nature of nuclear conflict compounds the problem of error.
How would a nuclear weapons AI even be trained? Nuclear weapons have only been used twice, at Hiroshima and Nagasaki, and serious nuclear crises are (thankfully) infrequent. Perhaps inferences can be drawn from adversary nuclear doctrine, plans, acquisition patterns, and operational activity, but the lack of actual examples of nuclear conflict means judging the quality of those inferences is impossible. While a lack of examples hinders humans too, humans have the capacity for higher-order reasoning. Humans can create theories and identify generalities from limited information, or from information that is analogous but not equivalent. Machines cannot.
The deeper challenge is the high false positive rate inherent in predicting rare events. There have thankfully been only two nuclear attacks in history. An autonomous system designed to detect and retaliate against an incoming nuclear weapon, even if highly accurate, will frequently exhibit false positives. Around the world, in North Korea, Iran, and elsewhere, test missiles are fired into the sea and rockets are launched into the atmosphere. And there have been many false alarms of nuclear attack, vastly more than actual attacks. An AI that's right almost all the time still has plenty of opportunity to get it wrong. Similarly, with a test that accurately diagnosed cases of a rare disease 99 percent of the time, a positive diagnosis may mean just a 5 percent likelihood of actually having the disease, depending on assumptions about the disease's prevalence and false positive rate. This is because with rare diseases, the number of false positives can vastly outweigh the number of true positives. So, if an autonomous nuclear weapon concluded with 99 percent confidence that a nuclear war is about to begin, should it fire?
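The disease analogy is just Bayes' rule, and the arithmetic is easy to check. The sketch below assumes, purely for illustration, a disease affecting 1 in 2,000 people and a test with a 99 percent detection rate and a 1 percent false positive rate; the function name is invented.

```python
def posterior_true_positive(prevalence, sensitivity, false_positive_rate):
    """Bayes' rule: probability the event is real, given a positive signal."""
    p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
    return sensitivity * prevalence / p_positive

# A test that is "right 99 percent of the time" (sensitivity 0.99,
# false positive rate 0.01), applied to a disease affecting 1 in 2,000 people:
p = posterior_true_positive(1 / 2000, 0.99, 0.01)
print(f"P(actually sick | positive test) = {p:.1%}")  # about 4.7%
```

The same logic applies to an attack detector: when real attacks are vanishingly rare, even a highly accurate alarm is usually a false one.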
In the extremely unlikely event those problems can all be solved, autonomous nuclear weapons would still introduce new risks of error and new opportunities for bad actors to manipulate systems. Current AI is not only brittle; it's easy to fool. A single pixel change is enough to convince an AI that a stealth bomber is a dog. This creates two problems. If a country actually sought a nuclear war, it could fool the AI system first, rendering it useless. Or a well-resourced, apocalyptic terrorist organization like the Japanese cult Aum Shinrikyo might attempt to trick an adversary's system into starting a catalytic nuclear war. Both approaches can be carried out in quite subtle, difficult-to-detect ways: data poisoning may manipulate the training data that feeds the AI system, or unmanned systems and emitters could be used to trick an AI into believing a nuclear strike is incoming.
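The single-pixel failure mode can be sketched with a toy model. Everything below is invented for illustration: the weights, the four-"pixel" image, and the labels. Real attacks use gradient-based perturbations against deep networks rather than a hand-picked pixel, but the principle is the same: a small, targeted input change flips the output.

```python
# Toy linear classifier: score > 0 -> "bomber", otherwise "dog".
# Weights and the 4-pixel "image" are invented for illustration.
weights = [0.2, -0.1, 0.4, -0.3]
image = [1.0, 0.5, 0.2, 0.6]  # score = 0.2 - 0.05 + 0.08 - 0.18 = 0.05

def classify(pixels):
    score = sum(w * x for w, x in zip(weights, pixels))
    return "bomber" if score > 0 else "dog"

print(classify(image))  # "bomber"

# Nudge a single pixel in the direction that lowers the score:
adversarial = image.copy()
adversarial[3] += 0.5  # contribution falls by 0.15, so score = -0.10
print(classify(adversarial))  # "dog"
```

A defender looking at the two images side by side would see an almost identical input, which is what makes such manipulation difficult to detect.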
The risk of error can confound well-laid nuclear strategies and plans. If a military had to start a nuclear war, targeting an enemy's nuclear systems with overwhelming force would be a good way to limit retaliation. However, if an AI launched a nuclear weapon in error, the decisive opening salvo may be a pittance: a single nuclear weapon aimed at a less-than-ideal target. Accidentally nuking a major city might provoke an overwhelming nuclear retaliation, because the adversary would still have all its missile silos, just not its city.
Some have nonetheless argued that autonomous weapons (not necessarily autonomous nuclear weapons) will eventually reduce the risk of error. Machines do not need to protect themselves and can be more conservative in deciding to use force. They do not have emotions that cloud their judgment and do not exhibit confirmation bias, a type of bias in which people interpret data in a way that conforms to their desires or beliefs.
While these arguments have potential merit in conventional warfare, depending on how the technology evolves, they do not in nuclear warfare. Because strategic deterrents literally safeguard a country's existence, countries have strong incentives to protect their nuclear weapons platforms. Instead of being risk avoidant, countries have an incentive to launch preemptively while under attack, because otherwise they may lose their nuclear weapons. Some emotion should also be part of nuclear decision-making: the prospect of catastrophic nuclear war should be terrifying, and the decision made extremely cautiously.
Finally, while autonomous nuclear weapons may not exhibit confirmation biases, the lack of training data and real-world test environments mean an autonomous nuclear weapon may experience numerous biases, which may never be discovered until after a nuclear war has started.
The decision to unleash nuclear force is the single most significant decision a leader can make. It commits a state to an existential conflict with millions, if not billions, of lives in the balance. Such a consequential, deeply human decision should never be made by a computer.
Activists against autonomous weapons have been hesitant to focus on autonomous nuclear weapons. For example, the International Committee of the Red Cross makes no mention of autonomous nuclear weapons in its position statement on autonomous weapons. (In fairness, the International Committee for Robot Arms Control's 2009 statement references autonomous nuclear weapons, though that group represents more of the intellectual wing of the so-called "stop killer robots" movement.) Perhaps activists see nuclear weapons as already broadly banned, or do not wish to legitimize nuclear weapons generally, but the lack of attention is a mistake. Nuclear weapons already have broad, established norms against their use and proliferation, supported by numerous treaties. Banning autonomous nuclear weapons should be an easy win that helps establish norms against autonomous weapons generally. Plus, autonomous nuclear weapons represent perhaps the highest-risk manifestation of autonomous weapons (an artificial superintelligence is the only potentially higher risk). Which is worse: an autonomous gun turret accidentally killing a civilian, or an autonomous nuclear weapon igniting a nuclear war that leads to catastrophic destruction and possibly the extinction of all humanity? Hint: catastrophic destruction is vastly worse.
Where autonomous nuclear weapons stand. Some autonomy in nuclear weapons is already here, but it's complicated and unclear how worried we should be.
Russia's Poseidon is an "Intercontinental Nuclear-Powered Nuclear-Armed Autonomous Torpedo," according to US Navy documents, while the Congressional Research Service has also described it as an autonomous undersea vehicle. The weapon is intended as a second-strike weapon used in the event of a nuclear conflict; that is, a weapon meant to ensure a state can always retaliate against a nuclear strike, even an unexpected, so-called "bolt from the blue." An unanswered question is: What can the Poseidon do autonomously? Perhaps the torpedo just has some autonomous maneuvering ability to better reach its target, making it basically an underwater cruise missile. That's probably not a big deal, though there may be some risk of error in misdirecting the attack.
It is more worrisome if the torpedo is given permission to attack autonomously under specific conditions. For example, what if, in a crisis scenario where Russian leadership fears a possible nuclear attack, Poseidon torpedoes are launched in a loiter mode? It could be that if the Poseidon loses communications with its host submarine, it launches an attack. Most worrisome would be a torpedo with the ability to decide to attack entirely on its own, but this possibility is quite unlikely: It would require an independent means for the Poseidon to assess, while sitting far beneath the ocean, whether a nuclear attack had taken place. Of course, given how little is known about the Poseidon, this is all speculation. But that's part of the point: Understanding how another country's autonomous systems operate is really hard.
Countries are also interested in so-called dead hand systems, which are meant to provide a backup in case a state's nuclear command authority is disrupted or killed. A relatively simple system like Russia's Perimeter might delegate launch authority to a lower-level commander in the event of a crisis and specific conditions, like a loss of communication with command authorities. But as deterrence experts Adam Lowther and Curtis McGuffin argued in a 2019 article in War on the Rocks, the United States should consider "an automated strategic response system based on artificial intelligence."
The authors reason that the decision-making time to launch nuclear weapons has become so constrained that an artificial intelligence-based dead hand should be considered, despite, as they acknowledge, the numerous errors and problems such a system could create. Lt. Gen. Jack Shanahan, former leader of the Department of Defense's Joint Artificial Intelligence Center, shot the proposal down immediately: "You will find no stronger proponent of integration of AI capabilities writ large into the Department of Defense, but there is one area where I pause, and it has to do with nuclear command and control." But Shanahan retired in 2020, and there is no reason to believe the proposal will not come up again. Perhaps next time, no one will shoot it down.
What needs to happen. As allowed under Article VIII of the Nuclear Non-Proliferation Treaty, a member state should propose an amendment to the treaty requiring all nuclear weapons states to always include humans within decision-making chains on the use of nuclear weapons. This could require diplomacy and might take a while. In the near term, countries should raise the issue when the member states next meet to review the treaty in August 2022 and establish a side-event focused on autonomous nuclear weapons issues during the 2025 conference. Even if a consensus cannot be established at the 2022 conference, countries can begin the process of working through any barriers in support of a future amendment. Countries can also build consensus outside the review conference process: Bans on autonomous nuclear weapons could be discussed as part of broader multilateral discussions on a new autonomous weapons ban.
The United States should be a leader in this effort. The congressionally appointed National Security Commission on AI recommended that humans maintain control over nuclear weapons. Page 12 of its report notes: "The United States should (1) clearly and publicly affirm existing US policy that only human beings can authorize employment of nuclear weapons and seek similar commitments from Russia and China." Formalizing this requirement in international law would make it far more robust.
Unfortunately, requiring humans to make decisions on firing nuclear weapons is not the end of the story. An obvious challenge is how to ensure the commitments to human control are trustworthy. After all, it is quite tough to tell whether a weapon is truly autonomous. But there might be options to at least reassure: Countries could pass laws requiring humans to approve decisions on the use of nuclear weapons; provide minimum transparency into nuclear command and control processes to demonstrate meaningful human control; or issue blanket bans on any research and development aimed at making nuclear weapons autonomous.
Now, none of this should suggest that any fusion of artificial intelligence and nuclear weapons is terrifying. Or, more precisely, any more terrifying than nuclear weapons on their own. Artificial intelligence also has applications in situational awareness, intelligence collection, information processing, and improving weapons accuracy. Artificial intelligence may aid decision support and communication reliability, which may help nuclear stability. In fact, artificial intelligence has already been incorporated in various aspects of nuclear command, control, and communication systems, such as early warning systems. But that should never extend to complete machine control over the decision to use nuclear weapons.
The challenge of autonomous nuclear weapons is a serious one that has gotten little attention. Making changes to the Nuclear Non-Proliferation Treaty to require nuclear weapons states to maintain human control over nuclear weapons is just the start. At the very least, if a nuclear war breaks out, we'll know who to blame.
The author would like to thank Philipp C. Bleek, James Johnson, and Josh Pollack for providing invaluable input on this article.