The Promise and Risks of Artificial Intelligence: A Brief History

Editor's Note: This is an excerpt from the policy roundtable "Artificial Intelligence and International Security" from our sister publication, the Texas National Security Review. Be sure to check out the full roundtable.

Artificial intelligence (AI) has recently become a focus of efforts to maintain and enhance U.S. military, political, and economic competitiveness. The Defense Department's 2018 strategy for AI, released not long after the creation of a new Joint Artificial Intelligence Center, proposes to accelerate the adoption of AI by fostering "a culture of experimentation and calculated risk taking," an approach drawn from the broader National Defense Strategy. But what kinds of calculated risks might AI entail? The AI strategy has almost nothing to say about the risks incurred by the increased development and use of AI. On the contrary, the strategy proposes using AI to reduce risks, including those to both deployed forces and civilians.

While acknowledging the possibility that AI might be used in ways that reduce some risks, this brief essay outlines some of the risks that come with the increased development and deployment of AI, and what might be done to reduce those risks. At the outset, it must be acknowledged that the risks associated with AI cannot be reliably calculated. Instead, they are emergent properties arising from the arbitrary complexity of information systems. Nonetheless, history provides some guidance on the kinds of risks that are likely to arise, and how they might be mitigated. I argue that, perhaps counter-intuitively, using AI to manage and reduce risks will require the development of uniquely human and social capabilities.

A Brief History of AI, From Automation to Symbiosis

The Department of Defense strategy for AI contains at least two related but distinct conceptions of AI. The first focuses on mimesis, that is, designing machines that can mimic human work. The strategy document defines this as "the ability of machines to perform tasks that normally require human intelligence, for example, recognizing patterns, learning from experience, drawing conclusions, making predictions, or taking action." A somewhat distinct approach to AI focuses on what some have called human-machine symbiosis, wherein humans and machines work closely together, leveraging their distinctive kinds of intelligence to transform work processes and organization. This vision can also be found in the AI strategy, which aims to use "AI-enabled information, tools, and systems to empower, not replace, those who serve."

Of course, mimesis and symbiosis are not mutually exclusive. Mimesis may be understood as a means to symbiosis, as suggested by the Defense Department's proposal to "augment the capabilities of our personnel by offloading tedious cognitive or physical tasks." But symbiosis is arguably the more revolutionary of the two concepts and also, I argue, the key to understanding the risks associated with AI.

Both approaches to AI are quite old. Machines have been taking over tasks that otherwise require human intelligence for decades, if not centuries. In 1950, mathematician Alan Turing proposed that a machine can be said to think if it can persuasively imitate human behavior, and later in the decade computer engineers designed machines that could learn. By 1959, one researcher concluded that "a computer can be programmed so that it will learn to play a better game of checkers than can be played by the person who wrote the program."

Meanwhile, others were beginning to advance a more interactive approach to machine intelligence. This vision was perhaps most prominently articulated by J.C.R. Licklider, a psychologist studying human-computer interactions. In his 1960 paper "Man-Computer Symbiosis," Licklider chose to "avoid argument with (other) enthusiasts for artificial intelligence by conceding dominance in the distant future of cerebration to machines alone." However, he continued: "There will nevertheless be a fairly long interim during which the main intellectual advances will be made by men and computers working together in intimate association."

Notions of symbiosis were influenced by experience with computers for the Semi-Automatic Ground Environment (SAGE), which gathered information from early warning radars and coordinated a nationwide air defense system. Just as the Defense Department aims to use AI to keep pace with rapidly changing threats, SAGE was designed to counter the prospect of increasingly swift attacks on the United States, specifically low-flying bombers that could evade radar detection until they were very close to their targets.

Unlike other computers of the 1950s, the SAGE computers could respond instantly to inputs by human operators. For example, operators could use a light gun to select an aircraft on the screen, thereby gathering information about the airplane's identification, speed, and direction. SAGE became the model for command-and-control systems throughout the U.S. military, including the Ballistic Missile Early Warning System, which was designed to counter an even faster-moving threat: intercontinental ballistic missiles, which could deliver their payloads around the globe in just half an hour. We can still see the SAGE model today in systems such as the Patriot missile defense system, which is designed to destroy short-range missiles, those arriving with only a few minutes of warning.

SAGE also inspired a new and more interactive approach to computing, not just within the Defense Department but throughout the computing industry. Licklider advanced this vision after he became director of the Defense Department's Information Processing Techniques Office, within the Advanced Research Projects Agency, in 1962. Under Licklider's direction, the office funded a wide range of research projects that transformed how people would interact with computers, such as graphical user interfaces and the computer networking that eventually led to the Internet.

The technologies of symbiosis have contributed to competitiveness not primarily by replacing people, but by enabling new kinds of analysis and operations. Interactive information and communications technologies have reshaped military operations, enabling more rapid coordination and changes in plans. They have also enabled new modes of commerce. And they created new opportunities for soft power as technologies such as personal computers, smart phones, and the Internet became more widely available around the world, where they were often seen as evidence of American progress.

Mimesis and symbiosis come with somewhat distinct opportunities and risks. The focus on machines mimicking human behavior has prompted anxieties about, for example, whether the results produced by machine reasoning should be trusted more than results derived from human reasoning. Such concerns have spurred work on "explainable AI," wherein machine outputs are accompanied by humanly comprehensible explanations for those outputs.

By contrast, symbiosis calls attention to the promises and risks of more intimate and complex entanglements of humans and machines. Achieving an optimal symbiosis requires more than well-designed technology. It also requires continual reflection upon and revision of the models that govern human-machine interactions. Humans use models to design AI algorithms and to select and construct the data used to train such systems. Human designers also inscribe models of use, assumptions about the competencies and preferences of users and about the physical and organizational contexts of use, into the technologies they create. Thus, "like a film script, technical objects define a framework of action together with the actors and the space in which they are supposed to act." Scripts do not completely determine action, but they configure relationships between humans, organizations, and machines in ways that constrain and shape user behavior. Unfortunately, these interactively complex sociotechnical systems often exhibit emergent behavior that is contrary to the intentions of designers and users.

Competitive Advantages and Risks

Because models cannot adequately predict all of the possible outcomes of complex sociotechnical systems, increased reliance on intelligent machines leads to at least four kinds of risk: the models by which machines gather and process information, and the models of human-machine interaction, can each be either inadvertently flawed or deliberately manipulated in ways not intended by designers. Examples of each of these kinds of risks can be found in past experiences with smart machines.

First, changing circumstances can render the models used to develop machine intelligence irrelevant. Thus, those models and the associated algorithms need constant maintenance and updating. For example, what is now the Patriot missile defense system was initially designed for air defense but was rapidly redesigned and deployed to Saudi Arabia and Israel to defend against short-range missiles during the 1991 Gulf War. As an air defense system it ran for just a few hours at a time, but as a missile defense system it ran for days without rebooting. In these new operating conditions, a timing error in the software became evident. On Feb. 25, 1991, this error caused the system to miss a missile that struck a U.S. Army barracks in Dhahran, Saudi Arabia, killing 28 American soldiers. A software patch to fix the error arrived in Dhahran a day too late.
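The widely cited public account of the Dhahran failure attributes it to truncation error that accumulated each time the software converted its internal clock count into seconds. The sketch below illustrates only that general mechanism; the register width, tick length, and missile speed are assumptions chosen for illustration and do not reproduce the actual Patriot code.

```python
# A minimal sketch of how a truncated fixed-point conversion can accumulate
# into a significant clock error over a long uptime. All parameters here are
# assumptions for illustration, not the actual Patriot software.

BITS = 24                                            # assumed fractional bits in the register
TRUE_TENTH = 0.1                                     # one clock tick = 0.1 s
STORED_TENTH = int(TRUE_TENTH * 2**BITS) / 2**BITS   # 0.1 truncated to 24 binary fraction bits
ERROR_PER_TICK = TRUE_TENTH - STORED_TENTH           # tiny error introduced at every tick

hours_up = 100                                       # assumed ~100 hours of continuous operation
ticks = hours_up * 3600 * 10                         # number of 0.1-second ticks in that time

clock_drift_s = ticks * ERROR_PER_TICK               # drift grows linearly with uptime
closing_speed_m_per_s = 1676                         # rough closing speed of a short-range missile

print(f"per-tick truncation error: {ERROR_PER_TICK:.2e} s")
print(f"accumulated clock drift:   {clock_drift_s:.3f} s")
print(f"tracking offset at {closing_speed_m_per_s} m/s: {clock_drift_s * closing_speed_m_per_s:.0f} m")
```

Run over an assumed 100 hours of uptime, even a per-tick error of a few hundred-millionths of a second compounds into a clock drift large enough to shift a tracking window by hundreds of meters. This is the general pattern the paragraph above describes: a model of time that was adequate for short air-defense runs became inadequate under new operating conditions.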

Second, the models upon which machines are designed to operate can be exploited for deceptive purposes. Consider, for example, Operation Igloo White, an effort to gather intelligence on and stop the movement of North Vietnamese supplies and troops in the late 1960s and early 1970s. The operation dropped sensors throughout the jungle, such as microphones to detect voices and truck vibrations, as well as devices that could detect the ammonia odor of urine. These sensors sent signals to overflying aircraft, which in turn relayed them to a SAGE-like surveillance center that could dispatch bombers. However, the program was a very expensive failure. One reason is that the sensors were susceptible to spoofing. For example, the North Vietnamese could send empty trucks through an area to generate false intelligence about troop movements, or use animals to trigger the urine sensors.

Third, intelligent machines may be used to create scripts that enact narrowly instrumental forms of rationality, thereby undermining broader strategic objectives. For example, unpiloted aerial vehicle operators are tasked with using grainy video footage, electronic signals, and assumptions about what constitutes suspicious behavior to identify and then kill threatening actors, while minimizing collateral damage. Operators following this script have, at times, assumed that a group of men with guns was planning an attack, when in fact they were on their way to a wedding in a region where celebratory gunfire is customary, and that families praying at dawn were jihadists rather than simply observant Muslims. While it may be tempting to dub these mistakes "operator error," this would be too simple. Such operators are enrolled in a deeply flawed script, one that presumes that technology can be used to correctly identify threats across vast geographic, cultural, and interpersonal distances, and that the increased risk of killing innocent civilians is worth the increased protection offered to U.S. combatants. Operators cannot be expected to make perfectly reliable judgments across such distances, and it is unlikely that simply deploying the more precise technology that AI enthusiasts promise can bridge the very distances that remote systems were made to maintain. In an era when soft power is inextricable from military power, such potentially dehumanizing uses of information technology are not only ethically problematic but also likely to generate ill will and blowback.

Finally, the scripts that configure relationships between humans and intelligent machines may ultimately encourage humans to behave in machine-like ways that can be manipulated by others. This is perhaps most evident in the growing use of social bots and new social media to influence the behavior of citizens and voters. Bots can easily mimic humans on social media, in part because those technologies have already scripted the behavior of users, who must interact through liking, following, tagging, and so on. While influence operations exploit the cognitive biases shared by all humans, such as a tendency to interpret evidence in ways that confirm pre-existing beliefs, users who have developed machine-like habits, reactively liking, following, and otherwise interacting without reflection, are all the more easily manipulated. Remaining competitive in an age of AI-mediated disinformation requires the development of more deliberative and reflective modes of human-machine interaction.

Conclusion

Achieving military, economic, and political competitiveness in an age of AI will entail designing machines in ways that encourage humans to maintain and cultivate uniquely human kinds of intelligence, such as empathy, self-reflection, and outside-the-box thinking. It will also require continual maintenance of intelligent systems to ensure that the models used to create machine intelligence are not out of date. Models structure perception, thinking, and learning, whether by humans or machines. But the ability to question and re-evaluate these assumptions is the prerogative and the responsibility of the human, not the machine.

Rebecca Slayton is an associate professor in the Science & Technology Studies Department and the Judith Reppy Institute for Peace and Conflict Studies, both at Cornell University. She is currently working on a book about the history of cybersecurity expertise.

Image: Steve Jurvetson (Flickr)
