Artificial malevolence: watch out for smart machines

Stephen Hawking says successful AI 'would be the biggest event in human history. Unfortunately, it might also be the last.' Photo: Reuters

Ebola sounds like the stuff of nightmares. Bird flu and SARS also send shivers down my spine. But I'll tell you what scares me most: artificial intelligence.

The first three, with enough resources, humans can stop. The last, which humans are creating, could soon become unstoppable.

Before we get into what could possibly go wrong, let me first explain what artificial intelligence is. Actually, skip that. I'll let someone else explain it: Grab an iPhone and ask Siri about the weather or stocks. Or tell her "I'm drunk." Her answers are artificially intelligent.

Right now these artificially intelligent machines are pretty cute and innocent, but as they are given more power in society, it may not take long for them to spiral out of control.


In the beginning, the glitches will be small but eventful. Maybe a rogue computer momentarily derails the stock market, causing billions of dollars in damage. Or a driverless car freezes on the highway because a software update goes awry.

But the upheavals could escalate quickly, becoming scarier and even cataclysmic. Imagine how a medical robot, originally programmed to eradicate cancer, could conclude that the best way to obliterate cancer is to exterminate humans who are genetically prone to the disease.

Nick Bostrom, author of the book Superintelligence, lays out a number of petrifying doomsday scenarios. One envisions self-replicating nanobots, microscopic robots designed to make copies of themselves. In a positive scenario, these bots could fight diseases in the human body or eat radioactive material on the planet. But, Bostrom says, a "person of malicious intent in possession of this technology might cause the extinction of intelligent life on Earth."

Artificial-intelligence proponents argue that these things would never happen and that programmers will build in safeguards. But let's be realistic: it took nearly half a century for programmers to stop computers from crashing every time you wanted to check your email. What makes them think they can manage armies of quasi-intelligent robots?
