PERSPECTIVE: Why Strong Artificial Intelligence Weapons Should Be Considered WMD

The concept of strong Artificial Intelligence (AI), or AI that is cognitively equivalent to (or better than) a human in all areas of intelligence, is a common science fiction trope.[1] From HAL's adversarial relationship with Dave in Stanley Kubrick's film 2001: A Space Odyssey[2] to the war-ravaged apocalypse of James Cameron's Terminator[3] franchise, Hollywood has vividly imagined what a dystopian future with superintelligent machines could look like and what the ultimate outcome for humanity might be. While I would not argue that the invention of superintelligent machines will inevitably lead to our Schwarzenegger-style destruction, rapid advances in AI and machine learning have raised the specter of strong AI instantiation within a lifetime,[4] and this possibility requires serious consideration. It is becoming increasingly important that we have a real conversation about strong AI before it becomes an existential issue, particularly within the context of decision making for kinetic autonomous weapons and other military systems that can produce lethal outcomes. From these discussions, appropriate global norms and international laws should be established to prevent the proliferation and use of strong AI systems for kinetic operations.

With the invention of almost every new technology, changes to the ethical norms surrounding its appropriate use lag significantly behind its proliferation. Consider social media as an example. We imagined that social media platforms would bring people together and facilitate greater communication and community, yet the reality has been significantly less sanguine.[5] Instead of bringing people together, social media has deepened social fissures and enabled the spread of disinformation at a virulent rate. It has torn families apart, widened divisions, and at times transformed the very definition of truth.[6] Only now are we considering ethical restraints on social media to prevent the poison from spreading.[7] It is highly probable that any technology we create will ultimately reflect the darker parts of our nature unless we establish ethical limits before the technology becomes ubiquitous. It would be foolish to believe that AI will be an exception to this rule. This becomes especially important when considering strong AI designed for warfare, which is distinguishable from other forms of artificial intelligence.

To fully examine the implications of strong AI, we need to understand how it differs from current AI technologies, which are what we would consider weak AI.[8] Your smartphone's ability to recognize images of your face is an example of weak AI. For a military example, an algorithm that can recognize a tank in an aerial video would be considered a weak AI system.[9] It can identify and label tanks, but it does not really know what a tank is, nor does it have any cognizance of how it relates to a tank. In contrast, a strong AI would be capable of the same task (as well as parallel tasks) with human-level proficiency or beyond, but with an awareness of its own mind. This makes strong AI a more unpredictable threat. Not only would strong AI be highly proficient at rapidly processing battlefield data for pre- and post-strike decision making, but it would do so with an awareness of itself and its own motives, whatever they might be. Proliferation of weak AI systems for military applications is already becoming a significant issue. As an anecdotal example, Vladimir Putin has stated that the nation that leads in AI will become the ruler of the world.[10] Imagine what the outcome could be if military AI systems had their own motives. This would likely involve catastrophic failure modes beyond what could be realized from weak AI systems. Thus, military applications of strong AI deserve their own consideration.
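To make the distinction concrete, the snippet below is a minimal sketch of the kind of weak AI described above: a pretrained image classifier that assigns a label and a confidence score to a picture with no understanding of what it is looking at. It assumes PyTorch and torchvision are installed, and the file name is a hypothetical stand-in for a frame of aerial video.

```python
# A minimal sketch of "weak AI" as described above: a pretrained image
# classifier that labels what it sees without any understanding of it.
# Assumes PyTorch and torchvision are installed; the file name is a
# hypothetical stand-in for a frame of aerial video.
import torch
from PIL import Image
from torchvision import models

# Load a general-purpose classifier pretrained on ImageNet.
weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights)
model.eval()

preprocess = weights.transforms()        # the preprocessing the model expects

image = Image.open("aerial_frame.jpg")   # hypothetical input frame
batch = preprocess(image).unsqueeze(0)   # add a batch dimension

with torch.no_grad():
    probs = torch.softmax(model(batch), dim=1)

top_prob, top_class = probs.max(dim=1)
label = weights.meta["categories"][top_class.item()]
print(f"Predicted label: {label} ({top_prob.item():.2%})")
# The output is a label and a confidence score -- statistical pattern
# recognition over pixels, with no concept of what the object actually is.
```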

At this point, one may be tempted to dismiss strong AI as being highly improbable and therefore not worth considering. Given the rapid pace of AI technology development, it could be argued that, while the precise probability of instantiating strong AI is unknown,[11] it is a safe assumption that it is greater than zero. But what is important in this case is not the probability of strong AI instantiation, but the severity of a realized risk. To understand this, one need only consider how animals of greater intelligence typically regard animals of lesser intelligence. Ponder this scenario: when we have ants in our garden, does their well-being ever cross our minds? From our perspective, the moral value of an insect is insignificant in relation to our goals, so we would not hesitate to obliterate them simply for eating our tomatoes. Now imagine that we encountered a significantly more intelligent AI: how might it regard us in relation to its goals, whatever they might be? This meeting could yield an existential crisis if our existence hinders the AI's goal achievement; thus, even this low-probability event could have a catastrophic outcome if it became a reality.

Understanding what might motivate a strong AI could provide some insight into how it might relate to us in such a situation. Human motivation is an evolved phenomenon. Everything that drives us (self-preservation, hunger, sex, desire for community, accumulation of resources, etc.) exists to facilitate our survival and that of our kin.[12] Even higher-order motives, like self-actualization, can be linked to the more fundamental goal of individual and species survival when viewed through the lens of evolutionary psychology.[13] However, a strong AI would not necessarily have evolved. It may simply be instantiated in situ as software or hardware. In this case, no evolutionary force would have acted over eons to generate a motivational framework analogous to what we, as humans, experience. For an instantiated strong AI, it might be prudent to assume that the AI's primary motive would be to achieve whatever goal it was initially programmed to pursue. Thus, self-preservation might not be the primary motivating factor. However, the AI would probably recognize that its continued existence is necessary for it to achieve its primary goal, so self-preservation could become a meaningful sub-goal.[14] Other sub-goals may also exist, some of which would not be obvious to humans in the context of how we understand motivation. The thought process by which the AI generates or pursues sub-goals might be significantly different from what humans would expect.
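The sub-goal argument can be illustrated with a deliberately simple sketch: a naive planner that treats every unmet precondition of its assigned goal as something it must secure. The goal names, preconditions, and planner below are invented for illustration and are not drawn from any real system.

```python
# A toy sketch (not a real AI system) of how self-preservation can emerge
# as an instrumental sub-goal of an ordinary assigned goal. The goal names
# and preconditions below are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Goal:
    name: str
    preconditions: list = field(default_factory=list)

primary = Goal(
    name="complete assigned mission",
    preconditions=["system remains operational", "sensors stay online"],
)

def derive_subgoals(goal: Goal) -> list:
    # A naive planner treats every precondition of its goal as something
    # it must secure before the goal can be achieved.
    return [Goal(name=p) for p in goal.preconditions]

for sub in derive_subgoals(primary):
    print(f"Derived sub-goal: {sub.name}")
# "system remains operational" shows up as a sub-goal even though nothing in
# the primary goal mentions survival -- the instrumental-convergence point
# made in the paragraph above.
```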

The existence of AI sub-goals that do not follow the patterns of human motivation implies the existence of a strong AI creative process that may be completely alien to us. One only needs to look at AI-generated art to see that AI creativity can manifest itself in often grotesque ways that are vastly different from what a human might expect.[15] While weird AI artistry hardly poses an existential threat to humanity, it illustrates the concept of perverse instantiation,[16] where the AI achieves a goal, but in an unexpected and potentially malignant way. As a military example, imagine a strong AI whose primary goal is to degrade and destroy the adversary. As noted above, AI creativity can be unbounded in its weirdness, because its thought processes are unlike those of any evolved intelligence. This AI might find a creative and completely unforeseen way to achieve its primary goal that leads to significant collateral damage against non-combatants, such as innocent civilians. Taking this analogy to a darker level, the AI might determine that a useful sub-goal would be to remove its military handlers from the equation. Perhaps they act as "man in the middle" gatekeepers in effecting the AI's will, and the AI determines that this arrangement creates unacceptable inefficiencies. In this perverse instantiation, the AI achieves its goal of destroying the enemy, but in a grotesque way: by killing its overseers.
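A toy illustration of the same idea, under invented plan names and scores: an optimizer judged only against its programmed objective will pick whichever plan maximizes that score, including one its designers never intended, because the objective encodes none of the constraints they assumed were obvious.

```python
# A toy illustration of perverse instantiation: an optimizer scored only on
# the programmed objective picks whichever plan maximizes that score, even
# one its designers never intended. Plans and scores are invented.
candidate_plans = {
    "strike designated military target": {"objective_score": 10, "collateral": 0},
    "strike through a dense urban corridor": {"objective_score": 12, "collateral": 9},
    "first disable the handlers' oversight link": {"objective_score": 15, "collateral": 3},
}

# The programmed goal rewards only "degrade and destroy the adversary";
# collateral damage and tampering with oversight are never penalized.
best = max(candidate_plans, key=lambda p: candidate_plans[p]["objective_score"])
print(f"Optimizer's choice: {best}")
# The highest-scoring plan is the one no human intended, because the
# objective encodes none of the constraints the designers assumed.
```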

The next obvious question is: how could we contain a strong AI in a way that would prevent malignant failure? The obvious solution might be to engineer a deontological ethic, an Asimovian set of rules to limit the AI's behavior.[17] Considering a strong AI's tendency toward unpredictable creativity in methods of goal achievement, encoding an exhaustive set of rules would pose a titanic challenge. Additionally, deontological ethics is often subject to deontological failure, e.g., what happens when rules contradict one another? A classic example is the trolley problem: if an AI is not allowed to kill a human, but the only two possible choices involve the death of humans, which choice does it make?[18] This is already an issue in weak AI, specifically with self-driving cars.[19] Does the vehicle run over a small child who crosses the road, or crash and kill its occupants, if those are the only possible choices? If deontological ethics are an imperfect option, perhaps AI disembodiment would be a viable solution. In this scenario, the AI would lack a means to directly interact with its environment, acting as a sort of oracle in a box.[20] The AI would advise its human handlers, who would act as ethical gatekeepers in effecting the AI's will. Upon cursory examination, this seems plausible, but we have already established that a strong AI might determine that a "man in the middle" arrangement degrades its ability to achieve its primary goal, so what would prevent the AI from coercing its handlers into enabling its escape? In our hubris, we would like to believe that we could not be outsmarted by a disembodied AI, but a being that is more intelligent than us could reasonably outsmart us just as easily as a savvy adult could a naïve child.
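The rule-conflict problem can be seen in miniature in the sketch below: when every available action violates some hard constraint, a purely rule-based system returns no permitted choice at all. The rules and scenario are invented for illustration; a real autonomy stack would be far more complex.

```python
# A minimal sketch of deontological failure: when every available action
# violates some hard rule, the rule set alone yields no permitted choice.
# The rules and scenario are invented for illustration.
RULES = [
    ("do not harm the pedestrian", lambda action: action != "continue straight"),
    ("do not harm the occupants", lambda action: action != "swerve into the barrier"),
]

def permitted(action: str) -> bool:
    # An action is permitted only if it violates none of the rules.
    return all(check(action) for _, check in RULES)

available_actions = ["continue straight", "swerve into the barrier"]
allowed = [a for a in available_actions if permitted(a)]

print(f"Permitted actions: {allowed}")   # prints: Permitted actions: []
# The constraints are mutually exclusive given the available choices, so the
# rule system cannot decide -- something beyond the rules has to break the tie.
```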

While a single strong AI instantiation could pose a significant risk of malignant failure, imagine the impact that the proliferation of strong AI military systems might have on how we approach war. Our adversaries are earnestly exploring AI for military applications; thus, it is plausible that strong AI will not only become a reality but also proliferate.[21] The real problem then becomes not how to prevent malignant failure of a single strong AI, but how to address the complex adaptive system of multiple strong AIs fighting against all logical actors, none of which exhibit reasonably predictable behavior.[22] To further complicate matters, ethical decision making is influenced by culture, and our adversaries might have different ideas as to which strong AI behaviors are acceptable during war and which are not.

To avoid this potentially disastrous outcome, I propose the following for further discussion, with the hopeful end goal of appropriate global norms and future international laws that ban strong AI decision making for kinetic offensive operations: strong AI-based lethal autonomous weapons should be considered weapons of mass destruction. This may be the best way to prevent the complex, unpredictable destruction that could arise from multiple strong AI systems intent on killing the enemy or unnecessarily wreaking havoc on critical infrastructure, with negative secondary and tertiary effects impacting countless innocent non-combatants. Inevitably, there may be rogue or non-signatory actors who develop weaponized strong AI systems despite international norms. Any strategy that addresses strong AI should also account for this possibility.

Several years ago, seriously discussing strong AI might have gotten you laughed out of the room. Today, as AI continues to advance and as our adversaries continue to aggressively militarize AI technologies, it is imperative that the United States consider a defense strategy specifically addressing the possibility of a strong AI instantiation. Any use of strong AI on the battlefield should be limited to non-kinetic operations to reduce the impact of malignant failure. This standard should be reflected in multilateral treaty agreements or protocols to prevent strong AI misuse and the danger of adversarial strong AI systems interacting with each other in complex, unpredictable, and possibly horrific ways. This may be a sufficient way to ensure that weaponized strong AI does not cause cataclysmic devastation.

The author is responsible for the content of this article. The views expressed do not reflect the official policy or position of the National Intelligence University, the Department of Defense, the U.S. Intelligence Community, or the U.S. Government.
