A Case for Cooperation Between Machines and Humans – The New York Times

But Ben Shneiderman, a University of Maryland computer scientist who has for decades warned against blindly automating tasks with computers, thinks fully automated cars and the tech industry's vision of a robotic future are misguided. Even dangerous. Robots should collaborate with humans, he believes, rather than replace them.

Late last year, Dr. Shneiderman embarked on a crusade to convince the artificial intelligence world that it is heading in the wrong direction. In February, he confronted organizers of an industry conference on "Assured Autonomy" in Phoenix, telling them that even the title of their conference was wrong. Instead of trying to create autonomous robots, he said, designers should focus on a new mantra: designing computerized machines that are "reliable, safe and trustworthy."

"There should be the equivalent of a flight data recorder for every robot," Dr. Shneiderman argued.

It is a warning that's likely to gain more urgency when the world's economies eventually emerge from the devastation of the coronavirus pandemic and millions who have lost their jobs try to return to work. A growing number of them will find they are competing with or working side by side with machines.

Dr. Shneiderman, 72, began spreading his message decades ago. A pioneer in the field of human-computer interaction, he co-founded in 1982 what is now the Conference on Human Factors in Computing Systems and coined the term "direct manipulation" to describe the way objects are moved on a computer screen, either with a mouse or, more recently, with a finger.

In 1997, Dr. Shneiderman engaged in a prescient debate with Pattie Maes, a computer scientist at the Massachusetts Institute of Technology's Media Lab, over the then-fashionable idea of intelligent software agents designed to perform autonomous tasks for computer users, anything from reordering groceries to making a restaurant reservation.

"Designers believe they are creating something lifelike and smart; however, users feel anxious and unable to control these systems," he argued.

Since then, Dr. Shneiderman has argued that designers run the risk not just of creating unsafe machines but of absolving humans of ethical responsibility for the actions taken by autonomous systems, ranging from cars to weapons.

The conflict between human and computer control is at least as old as interactive computing itself.

The distinction first appeared in two computer science laboratories that were created in 1962 near Stanford University. John McCarthy, a computer scientist who had coined the term "artificial intelligence," established the Stanford Artificial Intelligence Laboratory with the goal of creating a thinking machine within a decade. And Douglas Engelbart, who invented the computer mouse, created the Augmentation Research Center at the Stanford Research Institute and coined the term "intelligence augmentation," or I.A.

In recent years, the computer industry and academic researchers have tried to bring the two fields back together, describing the resulting discipline as "humanistic" or "human-centered" artificial intelligence.

Dr. Shneiderman has challenged the engineering community to rethink the way it approaches artificial intelligence-based automation. Until now, machine autonomy has been described as a one-dimensional scale ranging from machines that are manually controlled to systems that run without human intervention.

The best known of these one-dimensional models is a set of definitions related to self-driving vehicles established by the Society of Automotive Engineers. It describes six levels of vehicle autonomy ranging from Level 0, requiring complete human control, to Level 5, which is full driving automation.

In contrast, Dr. Shneiderman has sketched out a two-dimensional alternative that allows for both high levels of machine automation and human control. With certain exceptions such as automobile airbags and nuclear power plant control rods, he asserts that the goal of computing designers should be systems in which computing is used to extend the abilities of human users.

This approach has already been embraced by both roboticists and Pentagon officials. Gill Pratt, the head of the Toyota Research Institute, is a longtime advocate of keeping humans "in the loop." His institute has been working to develop Guardian, a system that the researchers have described as "super advanced driver assistance."

"There is so much that automation can do to help people that is not about replacing them," Dr. Pratt said. He has focused the laboratory not just on car safety but also on the challenge of developing robotic technology to support older drivers.

Similarly, Robert O. Work, a deputy secretary of defense under Presidents Barack Obama and Donald J. Trump, backed the idea of so-called centaur weapons systems, which would require human control, instead of A.I.-based robot killers, now called lethal autonomous weapons.

The term "centaur" was originally popularized in the chess world, where partnerships of humans and computer programs consistently defeated unassisted software.

At the Phoenix conference on autonomous systems this year, Dr. Shneiderman said Boeing's MCAS flight-control system, which was blamed for two fatal 737 Max crashes, was an extreme example of high automation and low human control.

"The designers believed that their autonomous system could not fail," he wrote in an unpublished article that has been widely circulated. "Therefore, its existence was not described in the user manual and the pilots were not trained in how to switch to manual override."

Dr. Shneiderman said in an interview that he had attended the conference with the intent of persuading the organizers to change its name from a focus on autonomy to a focus on human control.

"I've come to see that names and metaphors are very important," he said.

He also cited examples where the Air Force, the National Aeronautics and Space Administration, and the Defense Science Board, a committee of civilian experts that advises the Defense Department on science and technology, had backed away from a reliance on autonomous systems.

Robin Murphy, a computer scientist and robotics specialist at Texas A&M University, said she had spoken to Dr. Shneiderman and broadly agreed with his argument.

"I think there's some imperfections, and I have talked to Ben about this, but I don't know anything better," she said. "We've got to think of ways to better represent how humans and computers are engaged together."

There are also skeptics.

"Ben's notion that his two-dimensional model is a fresh perspective simply is not true," said Missy Cummings, director of Duke University's Humans and Autonomy Laboratory, who said she relied on his human-interface ideas in her design classes.

"The degree of collaboration should be driven by the amount of uncertainty in the system and the criticality of outcomes," she said. "Nuclear reactors are highly automated for a reason: Humans often do not have fast enough reaction times to push the rods in if the reactor goes critical."
