AI Spotlight: Paul Scharre On Weapons, Autonomy, And Warfare

Paul Scharre / Senior Fellow & Director at CNAS

Paul Scharre is a Senior Fellow and Director of the Technology and National Security Program at the Center for a New American Security. He is the award-winning author of Army of None: Autonomous Weapons and the Future of War, which won the 2019 Colby Award and was named one of Bill Gates' top five books of 2018.

Aswin Pranam: To start, what qualifies as an autonomous weapon?

Paul Scharre: An autonomous weapon, quite simply, makes its own decisions about whom to engage on the battlefield. The core challenge is figuring out which of those decisions matter. For example, modern-day missiles and torpedoes maneuver on their own to course-correct and adjust positioning. Do these decisions matter on a grand scale? Not so much. But munitions that can make kill decisions on their own, without human supervision, matter a great deal.

Pranam: Why should the average citizen care about autonomous weapons?

Scharre: Everyone will have to live in the future we're building, and we should all have a personal stake in what that future looks like. The question of AI being used in warfare isn't a question of when, but rather a question of how. What are the rules? What is the degree of human control? Who sets those rules? There is a real possibility that militaries will transition to a world in which human control over war is significantly reduced, and that could be quite dangerous. So, I think engaging in broad conversation internationally and bringing together nations, human rights groups, and subject matter experts (lawyers, ethicists, technologists) to have a productive dialogue is necessary to chart the right course.

Pranam: People concerned about the destructive power of AI weapons want to halt development completely. Is this a realistic solution?

Scharre: We can't stop the underlying technology from being developed because AI and automation are dual-use. The same sensors and algorithms that prevent a self-driving car from hitting pedestrians may also enable an autonomous weapon in war. The basic tools to build a simple autonomous weapon exist today and can be found freely online. If a person is a reasonably competent engineer and has a bit of free time, they could build a crude autonomous drone that could inflict harm for under a thousand dollars.

The open question is still what militaries choose to build with regard to their weapons arsenals. If you look at chemical and biological weapons as a parallel, certain rogue states develop and use them, but the majority of civilized countries have generally agreed not to move forward with development. Historically, attempts to control technology have been a mixed bag, with some successes and many failures.

Pranam: In the United States, the International Traffic in Arms Regulations (ITAR) compliance framework controls and restricts the export of munitions and military technologies. Do you believe AI should fall under the export-restricted category?

Scharre: This is an area with a lot of active policy debate in both Washington, D.C. and the broader tech community. Personally, I don't see it as being realistic in most cases. However, there's room for non-ITAR export controls that restrict the sale of AI technologies to countries engaged in human rights abuses. We shouldn't have American enterprises or universities enabling foreign entities that will repurpose the technology to suppress their citizens.

By and large, as far as access to research is concerned, the AI world is very open. Papers are published online and breakthroughs are freely shared, so it is difficult to imagine components of AI technology being controlled under tight restrictions. If anything, I could see sensitive data sets being ITAR-controlled, along with potentially specialized chips or hardware used exclusively for military applications, if they were developed.

Pranam: Conversations about AI and morality generally go hand in hand. In your research, have you encountered any examples of AI weapons today that use ethical considerations as an input to decision-making?

Scharre: I haven't, and I don't fully agree with the characterization that AI needs to engage in moral reasoning in the human sense. We should focus attention on outcomes. We want behavior from AI that is consistent with human ethics and values. Does this mean that machines need to reason about or understand abstract ethical concepts? Not necessarily. We do, however, need to control the external behavior of machines and reassert human control in cases where ethical dilemmas present themselves. A commercial airliner uses an autopilot to improve safety. The autopilot doesn't need moral reasoning programmed in to reach a moral outcome: a safe flight. In a similar vein, self-driving cars will pilot themselves with the primary outcome of driving safely. Programming in ethical scenarios to manage variations of the trolley problem is largely a red herring.
