We Need To Do More Than Just Point to Ethical Questions …

Photo caption: Dancer Matt Del Rosario of Pilobolus performs a scene alongside robots created in partnership with the engineers, programmers, and pilots of the MIT Computer Science and Artificial Intelligence Laboratory, in New York on July 18, 2011. (TIMOTHY A. CLARY/AFP/Getty Images)

Hundreds of artificial intelligence experts recently signed a letter put together by the Future of Life Institute that prompted Elon Musk to donate $10 million to the institute. "We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do," the letter read.

The problem is that both the letter and the corresponding report allow anyone to read any meaning he or she wants into "beneficial," and the same applies when it comes to defining who "we" are and what exactly "we" want A.I. systems to do. Of course, there already exists a "we" that thinks it is beneficial to design robust A.I. systems that will do what "we" want them to do when, for example, fighting wars.

But the "we" the institute had in mind is something different. "The potential benefits [of A.I.] are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools A.I. may provide, but the eradication of disease and poverty are not unfathomable." But notice that these are presented as possibilities, not as goals. They are benefits that could happen, not benefits that should happen. Nowhere in the research priorities document are these eventualities actually called research priorities.

One might think that such vagueness is just the result of a desire to draft a letter that a large number of people might be willing to sign on to. Yet in fact, the combination of gesturing towards what are usually called "important ethical issues" and steadfastly putting off serious discussion of them is pretty typical in our technology debates. We do not live in a time that gives much real thought to ethics, despite the many challenges you might think would call for it. We are hamstrung by a certain pervasive moral relativism, a sense that when you get right down to it, our "values" are purely subjective and, as such, really beyond any kind of rational discourse. Like "religion," they are better left un-discussed in polite company.

There are, of course, "philosophers" who get paid to teach and write about what is not discussed in polite company, but who would look to them as authorities? It is practically a given that on fundamental ethical questions, they will agree no more, and perhaps even less, than the rest of us.

As in the institute's research priorities document, if you want to look responsible, you include such people in the discussion. Whether they will actually influence outcomes is a question about which a certain skepticism is warranted. After all, all participants are entitled to have their own values, are they not?

This ethical reticence has some serious consequences. The more we are restrained by it, the less we can talk seriously about what is good and what is bad in the new world we are creating with science and technology. As our power over nature increases, you might think that the very first thing we would want to be able to do is to know how that power ought to be used responsibly -- if it is used at all. If instead, we hobble our ethical discussions, how will such a question be decided? An increasingly pervasive techno-libertarianism suggests that we will move quickly from "we can do x" to "we should do x," and that our scientific and technical might will end up making right.

A final issue ought to be of particular concern to progressives. The very idea of progress implies improvement in the human condition -- it implies that some change is for the better and some is not. Hence the idea of "improvement" suggests some human good that is sought or has been achieved. Without ethical standards, there is no progress -- only change.

No one doubts that the world is changing and changing rapidly. Organizations that want to work towards making change happen for the better will need to do much more than point piously at "important ethical questions."
