Why Stephen Hawking and Bill Gates Are Terrified of Artificial Intelligence

Stephen Hawking. Bill Gates. Elon Musk. When the world's biggest brains line up to warn us about something they say could soon end life as we know it -- even though it sounds like a tired sci-fi trope -- what are we supposed to think?

In the last year, artificial intelligence has come under unprecedented attack. Two Nobel Prize-winning scientists, a space-age entrepreneur, and two founders of the personal computer industry -- one of them the richest man in the world -- have, with eerie regularity, stepped forward to warn about a time when humans will lose control of intelligent machines and be enslaved or exterminated by them. It's hard to think of a historical parallel to this outpouring of scientific angst. Big technological change has always caused unease. But when have such prominent, technologically savvy people raised such an alarm?

Their hue and cry is all the more remarkable because two of the protestors -- Bill Gates and Steve Wozniak -- helped create the modern information technology landscape in which an A.I. renaissance now appears. And another, Stuart Russell, a co-signer of Stephen Hawking's May 2014 essay, is a leading A.I. expert who co-authored the field's standard text, Artificial Intelligence: A Modern Approach.

Many argue we should dismiss their anxiety because the rise of superintelligent machines is decades away. Others claim their fear is baseless because we would never be so foolish as to give machines autonomy or consciousness or the ability to replicate and slip out of our control.

But what exactly are these science and industry giants up in arms about? And should we be worried too?

Stephen Hawking deftly framed the issue when he wrote that, in the short term, A.I.'s impact depends on who controls it; in the long term, it depends on whether it can be controlled at all. First, the short term. Hawking implicitly acknowledges that A.I. is a "dual-use" technology, a phrase used to describe technologies capable of great good and great harm. Nuclear fission, the science behind power-plant reactors and nuclear bombs, is a dual-use technology. Since dual-use technologies are only as harmful as their users' intentions, what are some harmful applications of A.I.?

One obvious example is autonomous killing machines. More than 50 nations are developing battlefield robots. The most sought-after will be robots that make the "kill decision" -- the decision to target and kill someone -- without a human in the loop. Research into autonomous battlefield robots and drones is richly funded today in many nations, including the United States, the United Kingdom, Germany, China, India, Russia and Israel. These weapons aren't prohibited by international law, and it's doubtful they could be made to conform to international humanitarian law, the body of law that governs armed conflict. How will they tell friend from foe? Combatant from civilian? Who will be held accountable? That these questions go unanswered as the development of autonomous killing machines turns into an unacknowledged arms race shows how ethically fraught the situation is.

Equally ethically complex are the advanced data-mining tools now in use by the U.S. National Security Agency. In the U.S., it used to take a judge to determine whether a law enforcement agency had sufficient cause to seize Americans' phone records, which are personal property protected by the Fourth Amendment to the Constitution. But since at least 2009, the N.S.A. has circumvented that warrant protection by tapping the overseas fiber-optic links that connect Yahoo's and Google's data centers and siphoning off oceans of data, much of it belonging to Americans. The N.S.A. could not have done anything with this data -- much less reconstructed your contact list and mine and ogled our nude photos -- without smart A.I. tools: sophisticated data-mining software that can probe and categorize volumes of information so huge they would take human brains millions of years to analyze.
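What gives such tools their reach is not the collection itself but the automated aggregation. Here is a minimal sketch -- assuming nothing about the N.S.A.'s actual software, and using made-up record fields -- of how bulk message metadata alone can be rolled up into someone's contact list:

```python
# A minimal sketch -- not the N.S.A.'s actual software -- of the kind of
# aggregation data-mining tools perform. The record fields ("sender",
# "recipient") are hypothetical stand-ins for bulk message metadata.
from collections import Counter, defaultdict

def build_contact_graph(metadata_records):
    """Fold raw metadata rows into per-person contact frequency counts."""
    graph = defaultdict(Counter)
    for record in metadata_records:
        sender, recipient = record["sender"], record["recipient"]
        # Every observed message strengthens the link in both directions.
        graph[sender][recipient] += 1
        graph[recipient][sender] += 1
    return graph

def top_contacts(graph, person, n=5):
    """Return the n people a given identifier communicates with most."""
    return graph[person].most_common(n)

records = [
    {"sender": "alice@example.com", "recipient": "bob@example.com"},
    {"sender": "alice@example.com", "recipient": "bob@example.com"},
    {"sender": "carol@example.com", "recipient": "alice@example.com"},
]
graph = build_contact_graph(records)
print(top_contacts(graph, "alice@example.com"))
# [('bob@example.com', 2), ('carol@example.com', 1)]
```

Scaled from three records to billions, the same aggregation turns a pile of intercepted metadata into a searchable map of who talks to whom.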

Killer robots and data-mining tools grow powerful from the same A.I. techniques that enhance our lives in countless ways. We use them to help us shop, translate and navigate, and soon they'll drive our cars. IBM's Watson, the Jeopardy-beating "thinking machine," is studying to take the U.S. medical licensing exam. It's doing legal discovery work, just as first-year law associates do, but faster. It beats humans at finding lung cancer in X-rays and outperforms high-level business analysts.

How long until a thinking machine masters the art of A.I. research and development? Put another way, when does HAL learn to program himself to be smarter in a runaway feedback loop of increasing intelligence?
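The arithmetic behind that question is easy to sketch. Assuming, purely for illustration, that each design cycle lets a system improve its own capability by an amount proportional to its current capability, the gains compound:

```python
# A toy model -- not a real A.I. system -- of the runaway feedback loop:
# each cycle, the system's ability to improve itself is proportional to
# how capable it already is. The 10% improvement_rate is an arbitrary
# illustrative assumption.
def recursive_self_improvement(capability=1.0, improvement_rate=0.10, cycles=20):
    history = [capability]
    for _ in range(cycles):
        capability += improvement_rate * capability  # gains scale with current capability
        history.append(capability)
    return history

trajectory = recursive_self_improvement()
for cycle, value in enumerate(trajectory):
    print(f"cycle {cycle:2d}: capability {value:6.2f}")
# After 20 cycles capability is roughly 6.7x where it started, and each
# cycle's absolute gain is larger than the last.
```

The model is deliberately crude; its only point is that "a runaway feedback loop of increasing intelligence" describes compounding growth, where each round of improvement is larger than the one before it.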
