When Robots Take Over, What Happens to Us?

Artificial intelligence has a long way to go before computers are as intelligent as humans. But progress is happening rapidly, in everything from logical reasoning to facial and speech recognition. With steady improvements in memory, processing power, and programming, the question isn't whether a computer will ever be as smart as a human, only how long it will take. And once computers are as smart as people, they'll keep getting smarter, and in short order they'll become much, much smarter than people. When artificial intelligence (AI) becomes artificial superintelligence (ASI), the real problems begin.

In his new book Our Final Invention: Artificial Intelligence and the End of the Human Era, James Barrat argues that we need to begin thinking now about how artificial intelligences will treat their creators when they can think faster, reason better, and understand more than any human. These questions were long the province of thrilling (if not always realistic) science fiction, but Barrat warns that the consequences could indeed be catastrophic. I spoke with him about his book, the dangers of ASI, and whether we're all doomed.

Your basic thesis is that even if we don't know exactly how long it will take, artificial intelligence will eventually surpass human intelligence, and once machines are smarter than we are, we're in serious trouble. This is an idea people are familiar with; there are lots of sci-fi stories about homicidal AIs like HAL or Skynet. But you argue that it may be more likely that a superintelligent AI will simply be indifferent to the fate of humanity, and that could be just as dangerous for us. Can you explain?

First, I think we've been inoculated against the threat of advanced AI by science fiction. We've had so much fun with Hollywood tropes like the Terminator and, of course, the HAL 9000 that we don't take the threat seriously. But as Bill Joy once said, "Just because you saw it in a movie doesn't mean it can't happen."

Superintelligence in no way implies benevolence. Your laptop doesn't like you or dislike you any more than your toaster does, so why do we believe an intelligent machine will be different? We humans have a bad habit of imputing motive to objects and phenomena, of anthropomorphizing. If it's thundering outside, the gods must be angry. We see friendly faces in clouds. We anticipate that because we create an artifact, like an intelligent machine, it will be grateful for its existence and want to serve and protect us.

But these are our qualities, not machines'. Furthermore, at an advanced level, as I write in Our Final Invention, citing the work of AI-maker and theorist Steve Omohundro, artificial intelligence will have drives much like our own, including self-protection and resource acquisition. It will want to achieve its goals and marshal sufficient resources to do so. It will want to avoid being turned off. When its goals collide with ours, it will have no basis for valuing our goals, and it will use whatever means are at its disposal to achieve its own.

The immediate answer many people would give to the threat is, "Well, just program them not to hurt us," with some kind of updated version of Isaac Asimov's Three Laws of Robotics. I'm guessing that's no easy task.

That's right, it's extremely difficult. Asimov's Three Laws are often cited as a cure-all for controlling ASI. In fact, they were created to generate tension and stories. His classic I, Robot is a catalogue of unintended consequences caused by conflicts among the three laws. Not only are our values hard to give to a machine, but they also change from culture to culture, religion to religion, and over time. We can't agree on when life begins, so how can we reach a consensus about the qualities of life we want to protect? And will those values even make sense in 100 years?

When you're discussing our efforts to contain an AI many times smarter than us, you make an analogy to waking up in a prison run by mice (with whom you can communicate). My takeaway from that was pretty depressing. Of course you'd be able to manipulate the mice into letting you go free, and it would probably be just as easy for an artificial superintelligence to get us to do what it wants. Does that mean any kind of technological means of containing it will inevitably fail?

Our Final Invention is both a warning and a call for ideas about how to govern superintelligence. I think we'll struggle mortally with this problem, and there aren't a lot of solutions out there; I've been looking. Ray Kurzweil, whose portrait of the future is very rosy, concedes that superior intelligence won't be contained. His solution is to merge with it. The 1975 Asilomar Conference on Recombinant DNA is a good model of what should happen. Researchers suspended work and got together to establish basic safety protocols, like "don't track the DNA out on your shoes." It worked, and now we're benefiting from gene therapy and better crops, with no horrendous accidents so far. MIRI (the Machine Intelligence Research Institute) advocates creating the first superintelligence with friendliness encoded, among other steps, but that's hard to do. Bottom line: before we share the planet with superintelligent machines, we need a science for understanding and controlling them.

But as you point out, it would be extremely difficult in practical terms to ban a particular kind of AI; if we don't build it, someone else will, and there will always be what seem to them like very good reasons to do so. With people all over the world working on these technologies, how can we impose any kind of stricture that will prevent the outcomes we're afraid of?

Human-level intelligence at the price of a computer will be the most lucrative commodity in the history of the world. Imagine banks of thousands of PhD-quality brains working on cancer research, climate modeling, and weapons development. With those enticements, how do you get competing researchers and countries to the table to discuss safety? My answer is to write a book, make films, get people aware and involved, and start a public-private partnership targeted at safety. Government and industry have to get together. For that to happen, we must give people the resources they need to understand a problem that's going to deeply affect their lives. Public pressure is all we've got to get people to the table. If we wait to be motivated by horrendous accidents and weaponization, as we have with nuclear fission, then we'll have waited too long.

Beyond the threat of annihilation, one of the most disturbing parts of this vision is the idea that we'll eventually reach the point at which humans are no longer the most important actors on planet Earth. There's another species (if you will) with more capability and power to make the big decisions, and we're here at their indulgence, even if for the moment they're treating us humanely. If we're a secondary species, how do you think that will affect how we think about what it means to be human?

That's right. We humans steer the future not because we're the fastest or strongest creatures, but because we're the smartest. When we share the planet with creatures smarter than we are, they'll steer the future. For an analogy, look at how we treat intelligent animals: they're at SeaWorld, they're bushmeat, they're in zoos, or they're endangered. Of course the Singularitarians believe that the superintelligence will be ours, that we'll be transhuman. I'm deeply skeptical of that one-sided good news story.

As you were writing this book, were there times you thought, "That's it. We're doomed. Nothing can be done"?

Yes, and I thought it was curious to be alive and aware within the time window in which we might be able to change that future, a twist on the anthropic principle. But having hope in the face of seemingly hopeless odds is a moral choice. Perhaps we'll get wise to the dangers in time. Perhaps we'll learn after a survivable accident. Perhaps enough people will realize that advanced AI is a dual-use technology, like nuclear fission. The world was introduced to fission at Hiroshima. Then we as a species spent the next 50 years with a gun pointed at our own heads. We can't survive that abrupt an introduction to superintelligence. And we need a better maintenance plan than fission's mutually assured destruction.
