The Ethical Upside to Artificial Intelligence


According to some, artificial intelligence (AI) is the new electricity. Like electricity, AI will transform every major industry and open opportunities that were never before possible. Unlike electricity, however, the ethics surrounding the development and use of AI remain controversial, and that controversy significantly constrains AI's full potential.

The Defense Innovation Board released a paper in October 2019 that recommends principles for the ethical use of AI within the Defense Department. It describes five principles of ethically used AI: responsible, equitable, traceable, reliable, and governable. The paper also identifies measures the Joint Artificial Intelligence Center, the Defense Advanced Research Projects Agency (DARPA), and the U.S. military branches are taking to study the ethical, moral, and legal implications of employing AI. While the paper primarily focuses on the ethics surrounding the implementation and use of AI, it also argues that AI must have the ability to detect and avoid unintended harm. This article seeks to expand on that idea by exploring AI's ability to operate within the Defense Department using an ethical framework.

Designing an ethical framework (a set of principles that guides ethical choice) for AI, while difficult, offers a significant upside for the U.S. military. It can strengthen the military's shared moral system, enhance ethical considerations, and increase the speed of decision-making in a manner that provides decision superiority over adversaries.

AI Is Limited without an Ethical Framework

Technology is increasing the complexity and speed of war. AI, the use of computers to perform tasks normally requiring human intelligence, can be a means of speeding decision-making. Yet, fearing machines' inability to weigh ethics in their decisions, organizations are limiting AI's scope to data-supported decision-making: using AI to summarize data while keeping human judgment as the central processor. For example, leaders within the automotive industry received backlash for programming self-driving cars to make ethical judgments. Some professional driving organizations have demanded that these cars be banned from the roads for at least 50 years.

This backlash, while understandable, misses the substantial upside that AI can offer to ethical decision-making. AI reflects human input and operates on human-designed algorithms that set parameters for the collection and correlation of data to facilitate machine learning. As a result, it is possible to build an ethical framework that reflects a decision-maker's values. Of course, when the data that humans supply is biased, AI can mimic its trainers by discriminating on the basis of gender and race. Biased algorithms, to be sure, are a drawback. However, bias can be mitigated by techniques such as counterfactual fairness, Google AI's recommended practices, and algorithms such as those provided by IBM's AI Fairness 360 toolkit. Moreover, AI's processing power makes it essential for successfully navigating ethical dilemmas in a military setting, where complexity and time pressure often obscure underlying ethical tensions.
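To make that mitigation step concrete, the sketch below shows one way the AI Fairness 360 toolkit's reweighing algorithm can be applied. The dataset, column names, and group definitions are hypothetical stand-ins, not a real military dataset.

```python
# A minimal sketch of bias mitigation with IBM's AI Fairness 360 toolkit.
# The data and the protected-attribute definitions here are invented.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Hypothetical training data: 'sex' is the protected attribute,
# 'selected' is the label a model would learn to predict.
df = pd.DataFrame({
    "sex":      [1, 1, 1, 1, 0, 0, 0, 0],
    "score":    [0.9, 0.7, 0.4, 0.8, 0.8, 0.5, 0.9, 0.6],
    "selected": [1, 1, 0, 1, 1, 0, 0, 0],
})
dataset = BinaryLabelDataset(
    df=df, label_names=["selected"], protected_attribute_names=["sex"]
)

privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# Measure bias before mitigation: a disparate impact far from 1.0 means
# the unprivileged group is selected at a very different rate.
metric = BinaryLabelDatasetMetric(
    dataset, privileged_groups=privileged, unprivileged_groups=unprivileged
)
print("Disparate impact before:", metric.disparate_impact())

# Reweighing adjusts instance weights so that group membership and label
# are statistically independent in the training data.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
transformed = rw.fit_transform(dataset)

metric_after = BinaryLabelDatasetMetric(
    transformed, privileged_groups=privileged, unprivileged_groups=unprivileged
)
print("Disparate impact after:", metric_after.disparate_impact())
```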

A significant obstacle to building an ethical framework for AI is a fundamental element of war: the trade-off between human lives and other military objectives. While international humanitarian law provides a codification of actions, many of which have ethical implications, it does not answer all questions related to combat. It primarily focuses on defining combatants, the treatment of combatants and non-combatants, and acceptable weapons. International humanitarian law does not address how many civilian deaths are acceptable to kill a high-value target, or how many friendly lives are worth sacrificing to take control of a piece of territory. While, under international law, these are examples of military judgments, each remains an ethical decision for the military leader responsible.

Building ethical frameworks into AI will help the military comply with international humanitarian law, leverage new opportunities, and predict and prevent costly mistakes. It will do so in four ways.

Four Military Benefits of an Ethical AI Framework

First, designing an ethical framework for AI will force military leaders to reexamine their existing ethical frameworks. In order to supply the benchmark data from which AI can learn, leaders will need to define, label, and score choice options in ethical dilemmas. In doing so they will have three primary theoretical frameworks to leverage for guidance: consequentialist, deontological, and virtue ethics. While consequentialist ethical theories focus on the consequences of a decision (e.g., expected lives saved), deontological ethical theories are concerned with compliance with a system of rules (refusing to lie based on personal beliefs and values despite the possible outcomes). Virtue ethical theories are concerned with instilling the right amount of a virtuous quality in a person (too little courage is cowardice; too much is rashness; the right amount is courage). A common issue cited as an obstacle to machine ethics is the lack of agreement on which theory, or combination of theories, to follow; leaders will have to overcome this obstacle. This introspection will help them better understand their own ethical frameworks, clarify and strengthen the military's shared moral system, and enhance human agency.
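As an illustration only, the sketch below shows what the define-label-score step might look like in code, with each of the three theories contributing a signal. The option names, weights, and scoring rule are hypothetical and are not drawn from any Defense Department model.

```python
# An illustrative sketch of labeling and scoring choice options in an
# ethical dilemma so an AI system could learn from them. All names,
# numbers, and weights are hypothetical.
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    expected_lives_saved: int  # consequentialist signal
    violates_rule: bool        # deontological signal (e.g., a law-of-war rule)
    courage: float             # virtue signal: 0.0 (cowardice) to 1.0 (rashness)

def score(option: Option, rule_weight: float = 1000.0) -> float:
    """Blend the three theories into a single benchmark score.

    A deontological constraint is modeled as a heavy penalty, a virtue as a
    preference for the mean (0.5) between deficiency and excess, and the
    consequentialist term simply counts expected lives saved.
    """
    deontic_penalty = rule_weight if option.violates_rule else 0.0
    virtue_term = 1.0 - abs(option.courage - 0.5) * 2.0  # peaks at the mean
    return option.expected_lives_saved + virtue_term - deontic_penalty

options = [
    Option("strike now", expected_lives_saved=12, violates_rule=False, courage=0.7),
    Option("wait for confirmation", expected_lives_saved=8, violates_rule=False, courage=0.4),
    Option("strike despite protected site", expected_lives_saved=15, violates_rule=True, courage=0.9),
]
best = max(options, key=score)
print(f"Highest-scoring option: {best.name}")
```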

Second, AI can recommend decisions that consistently reflect a leader's preferred ethical decision-making process. Even in high-stakes situations, human decision-making is prone to influence from factors that have little or nothing to do with the underlying choice. Poor nutrition, fatigue, and stress, all common in warfare, can lead to biased and inconsistent decision-making. Other influences, such as acting in one's self-interest or extreme emotional responses, can also contribute to military members making unethical decisions. AI, of course, does not become fatigued or emotional. This consistency allows AI to act as a moral adviser, providing morally relevant data that decision-makers can rely on as their judgment becomes impaired. Overall, this can increase the confidence of young decision-makers, a concern the commander of U.S. Army Training and Doctrine Command raised early last year.

Third, AI can help ensure that U.S. military leaders make the right ethical choice, however they define it, in high-pressure situations. Overwhelming the adversary is central to modern warfare. Simultaneous attacks and deception operations aim to confuse decision-makers to the point where they can no longer use good judgment. AI can process and correlate massive amounts of data to provide not only response options, but also probabilities that a given option will result in an ethically acceptable outcome. Collecting battlefield data, processing the information, and making an ethical decision is very difficult for humans in a wartime environment. Although the task would still be extremely difficult, AI can gather and process information more efficiently than humans can, which would be valuable for the military. For example, AI that is receiving and correlating information from sensors across the entire operating area could estimate non-combatant casualties, the proportionality of an attack, or the social reactions of observing populations.
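A toy sketch of that kind of estimate appears below: fusing independent sensor reports into a probability that a strike option stays under a commander-set civilian-presence limit. The sensor model, confidence values, noise assumption, and threshold are all invented for illustration and do not describe any fielded system.

```python
# A toy illustration of fusing sensor reports into a probability that a
# strike option meets a commander-set civilian-presence limit.
import random
from dataclasses import dataclass

@dataclass
class SensorReport:
    estimated_civilians: float  # this sensor's best estimate near the target
    confidence: float           # weight given to this sensor, 0.0 to 1.0

def fused_estimate(reports: list[SensorReport]) -> float:
    """Confidence-weighted average of the sensors' civilian estimates."""
    total_weight = sum(r.confidence for r in reports)
    return sum(r.estimated_civilians * r.confidence for r in reports) / total_weight

def p_acceptable(reports: list[SensorReport], civilian_limit: int,
                 trials: int = 10_000) -> float:
    """Monte Carlo probability that civilian presence is within the limit,
    treating the fused estimate as the mean of a Poisson process."""
    mean = fused_estimate(reports)
    within = 0
    for _ in range(trials):
        # Sample a Poisson count by summing exponential inter-arrival times.
        count, t = 0, random.expovariate(1.0)
        while t < mean:
            count += 1
            t += random.expovariate(1.0)
        if count <= civilian_limit:
            within += 1
    return within / trials

reports = [SensorReport(3.0, 0.9), SensorReport(5.0, 0.6), SensorReport(2.0, 0.8)]
print(f"P(outcome acceptable): {p_acceptable(reports, civilian_limit=4):.2f}")
```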

Finally, AI can extend the time available to make ethical decisions in warfare. For example, a central concern in modern military fire support is the ability to outrange the opponent: to shoot without being shot. The race to extend the range of weapons to outpace adversaries continues to increase the time between launch and impact. Future warfare will see weapons launched into areas so heavily degraded and contested that the weapon will lose external communication with the decision-maker who chose to fire it. Nevertheless, as the weapon moves closer to the target, it could gain situational awareness of the target area and identify changes pertinent to the ethics of striking the target. If equipped with onboard AI operating within an ethical framework, the weapon could continuously collect, correlate, and assess the situation throughout its flight against the parameters of its programmed framework. If the weapon identified a change in civilian presence or other information altering the legitimacy of the target, it could divert to a secondary target, locate a safe area to self-detonate, or deactivate its fuse. This concept could apply to any semi- or fully autonomous air, ground, maritime, or space asset. The U.S. military cannot afford weapon systems that deactivate or return to base in future conflicts each time they lose communication with a human. If an AI-enabled weapon loses the ability to receive human input, for whatever reason, an ethical framework will allow the mission to continue in a manner that aligns the weapon's actions with the intent of the operator.
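In the simplest terms, the in-flight decision logic that paragraph describes could look something like the sketch below. The inputs, threshold, and fallback ordering are hypothetical placeholders, not a proposed weapon design.

```python
# A conceptual sketch of an onboard ethical check during weapon flight.
# The sensor inputs and the civilian limit are assumed for illustration.
from enum import Enum

class Action(Enum):
    CONTINUE = "continue to primary target"
    DIVERT = "divert to secondary target"
    SELF_DESTRUCT = "self-detonate in safe area"
    DEACTIVATE = "deactivate fuse"

def assess(civilians_detected: int, civilian_limit: int,
           secondary_available: bool, safe_area_available: bool) -> Action:
    """Apply the programmed ethical parameters to the latest sensor picture."""
    if civilians_detected <= civilian_limit:
        return Action.CONTINUE
    # Target legitimacy has changed: fall back in the order the operator set.
    if secondary_available:
        return Action.DIVERT
    if safe_area_available:
        return Action.SELF_DESTRUCT
    return Action.DEACTIVATE

# Example: civilian presence now exceeds the limit and no secondary exists.
print(assess(civilians_detected=6, civilian_limit=2,
             secondary_available=False, safe_area_available=True).value)
```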

Conclusion

Building an ethical framework for AI will help clarify and strengthen the military's shared moral system. It will allow AI to act as a moral adviser and provide feedback as the judgment of decision-makers becomes impaired. Similarly, an ethical framework for AI will maximize the utility of its processing power to help ensure ethical decisions when human cognition is overwhelmed. Lastly, providing AI with an ethical framework can extend the time available to make ethical decisions. Of course, AI is only as good as the data it is provided.

AI should not replace U.S. military leaders as ethical decision-makers. Instead, if correctly designed, AI should clarify and amplify the ethical frameworks that U.S. military leaders already bring to war. It should help leaders grapple with their own moral frameworks, and help bring those frameworks to bear by processing more data than any decision-maker could, in places where no decision-maker could go.

AI may create new programming challenges for the military, but not new ethical challenges. Grappling with the ethical implications of AI will help leaders better understand the moral trade-offs inherent in combat. This will unleash the full potential of AI and allow it to increase the speed of U.S. decision-making to a rate that outpaces America's adversaries.

Ray Reeves is a captain in the U.S. Air Force and a tactical air control party officer and joint terminal attack controller (JTAC) instructor and evaluator at the 13th Air Support Operations Squadron at Fort Carson, Colorado. He has multiple combat deployments and is a doctoral student at Indiana Wesleyan University, where he studies organizational leadership. The views expressed here are his alone and do not necessarily reflect those of the U.S. government or any part thereof.

Image: U.S. Marine Corps (Photo by Lance Cpl. Nathaniel Q. Hamilton)
