Stephen Hawking says threat of artificial intelligence a real concern

Stephen Hawking, in an article inspired by the new Johnny Depp flick Transcendence, said it would be the "worst mistake in history" to dismiss the threat of artificial intelligence.

In a paper he co-wrote with University of California, Berkeley computer-science professor Stuart Russell, and Massachusetts Institute of Technology physics professors Max Tegmark and Frank Wilczek, Hawking cited several achievements in the field of artificial intelligence, including self-driving cars, Siri and the computer that won Jeopardy!

"Such achievements will probably pale against what the coming decades will bring," the article in Britain's Independent said.

"Success in creating AI would be the biggest event in human history," the article continued. "Unfortunately, it might also be the last, unless we learn how to avoid the risks."

The professors wrote that in the future there may be nothing to prevent machines with superhuman intelligence from self-improving, triggering a so-called "singularity."

"One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all," the article said.

"Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues outside non-profit institutes such as the Cambridge Centre for the Study of Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute. All of us should ask ourselves what we can do now to improve the chances of reaping the benefits and avoiding the risks."

Go here to read the rest:

Stephen Hawking says threat of artificial intelligence a real concern

Artificial intelligence could end mankind: Hawking

Karwai Tang | Getty Images

Professor Stephen Hawking attends the gala screening of 'Hawking' on the opening night of the Cambridge Film Festival held at Emmanuel College on September 19, 2013 in Cambridge, Cambridgeshire.

Stephen Hawking and a group of top physicists are sounding the alarm on artificial intelligence, writing in The Independent that success in creating AI could be "the biggest event in human history," but also "the last."

The scientists authored the opinion column against the backdrop of the new AI movie Transcendence, which stars Johnny Depp.

"It's tempting to dismiss the notion of highly intelligent machines as mere science fiction. But this would be a mistake, and potentially our worst mistake in history," they wrote.

The physicists cite Google's recent self-driving car announcement, digital personal assistants like Apple's Siri and her new competitors, and more sinister uses of AI like autonomous weapons that can choose their own targets.

See original here:

Artificial intelligence could end mankind: Hawking

Stephen Hawking Warns: Artificial Intelligence May Enslave Humans – Video


Stephen Hawking Warns: Artificial Intelligence May Enslave Humans
Stephen Hawking Warns: Artificial Intelligence Entity May Enslave Humans and be the biggest mistake in history. *SUBSCRIBE* for more great videos! Mark Dice is a media analyst, political...

By: Mark Dice

Continued here:

Stephen Hawking Warns: Artificial Intelligence May Enslave Humans - Video

Stephen Hawking: Dismissing artificial intelligence would be a mistake

LONDON, May 3 (UPI) -- Stephen Hawking, in an article inspired by the new Johnny Depp flick Transcendence, said it would be the "worst mistake in history" to dismiss the threat of artificial intelligence.

More:

Stephen Hawking: Dismissing artificial intelligence would be a mistake

Hawking Concerned Advanced AI Could Spell The End Of Mankind

May 4, 2014

redOrbit Staff & Wire Reports Your Universe Online

One of the greatest thinkers in the world believes that artificial intelligence could be the worst thing to happen to humanity, and that the scenario depicted in the recently-released Johnny Depp film Transcendence should not be simply dismissed as a work of science fiction.

Writing in Thursday's edition of the British newspaper The Independent, internationally recognized theoretical physicist Stephen Hawking said that ignoring the deeper lessons of the movie (in which Depp's character has his consciousness uploaded into a quantum computer, only to grow more powerful and become virtually omniscient) would be "a mistake, and potentially our worst mistake in history."

Advancements in artificial intelligence, including driverless vehicles and digital assistants such as Siri and Cortana, are often viewed as ways to make life easier for mankind, Daily Mail reporter Ellie Zolfagharifard explained.

However, Hawking expresses concern that these technologies could ultimately lead to our downfall unless we prepare for the potential risks, such as how to respond to technology that gains the ability to think independently and adapt to its environment.

"The potential benefits are huge; everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools that AI may provide, but the eradication of war, disease, and poverty would be high on anyone's list," Hawking wrote. "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks."

One prime concern is the development of autonomous weapons systems capable of selecting and eliminating targets, weapons that the UN and Human Rights Watch have proposed banning via treaty. Such weaponized machines could grow into something straight out of the Terminator movies, becoming self-aware, constantly improving their own design and essentially becoming unstoppable, Slashgear's Nate Swanner noted.

"One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand," said Hawking. "Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all."

Hawking, research director at Cambridge University's Department of Applied Mathematics and Theoretical Physics, also seems dubious about those who claim to be experts in artificial intelligence, according to CNET writer Chris Matyszczyk.

Read the original here:

Hawking Concerned Advanced AI Could Spell The End Of Mankind

Programming Artificial Intelligence For Games – Adabelle Combrink (SGA Conference 2014) – Video


Programming Artificial Intelligence For Games - Adabelle Combrink (SGA Conference 2014)
Talk by Adabelle Combrink, student at The Game Assembly. Recorded at SGA Conference 2014 in Stockholm, Sweden. Learn more about Swedish Game Awards over at: ...

By: SwedishGameAwards2014

See the original post:

Programming Artificial Intelligence For Games - Adabelle Combrink (SGA Conference 2014) - Video

Breaking News Stephen Hawking Says Artificial Intelligence Will Destroy Us All – Video


Breaking News Stephen Hawking Says Artificial Intelligence Will Destroy Us All
The Fortean Slip News 17. In this news we look into Stephen Hawking's recent announcement that artificial intelligence will be the end of humanity. The World Health Organization announces that...

By: Fortean Slip

Visit link:

Breaking News Stephen Hawking Says Artificial Intelligence Will Destroy Us All - Video

Stephen Hawking: The creation of true AI could be the 'greatest event in human history'

Pioneering physicist Stephen Hawking has said the creation of general artificial intelligence systems may be the "greatest event in human history" but, then again, it could also destroy us.

In an Op-Ed in UK newspaper The Independent, the physicist said IBM's Jeopardy!-busting Watson machine, Google Now, Siri, self-driving cars, and Microsoft's Cortana will all "pale against what the coming decades will bring."

We are, in Hawking's words, caught in "an IT arms race fueled by unprecedented investment and building on an increasingly mature theoretical foundation."

These investments, whether made by huge companies such as Google or startups like Vicarious, have the potential to revolutionize our society. But Hawking worries that although "success in creating AI would be the biggest event in human history," it "might also be the last, unless we learn how to avoid the risks."

So inevitable is the rise of a general artificial intelligence system that Hawking cautioned governments and companies are not doing nearly enough to prepare for its arrival.

"If a superior alien civilization sent us a message saying, 'We'll arrive in a few decades', would we just reply, 'OK, call us when you get here; we'll leave the lights on'? Probably not, but this is more or less what is happening with AI," Hawking wrote.

The only way to stave off a societal meltdown when AI arrives, he said, is to devote serious research to these risks at places such as Cambridge's Centre for the Study of Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute.

Hawking's view is not a fringe one. When Google acquired the AI company DeepMind earlier this year, its employees are reported to have made the creation of an internal ethics board a condition of the acquisition.

Similarly, in their book The Second Machine Age, academics Erik Brynjolfsson and Andrew McAfee have cautioned that the automation possibilities afforded by new artificial intelligence systems pose a profound threat to the political stability of the world unless governments figure out what to do with the employment disruptions that major AI will trigger.

But for all the worries Hawking displays, it's worth noting that a general artificial intelligence may yet be a long way off. In our own profile of AI pioneer Jeff Hawkins, the inventor said what his company is working on today "is maybe five per cent of how humans learn."

The rest is here:

Stephen Hawking: The creation of true AI could be the 'greatest event in human history'

Stephen Hawking freaks out about artificial intelligence

Stephen Hawking, certified genius, is freaking out about our Skynet future. In an article for The Independent, the theoretical physicist and author of A Brief History of Time warns that the development of real artificial intelligence could be "potentially our worst mistake in history."

"One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all."

There's nothing particularly new about the notion of runaway AI turning humans into a bunch of meat puppets; it's one of the oldest and most popular tropes in science fiction. The notability here stems entirely from the fact that the warning comes from Hawking. Someone who understands the physics of black holes and the many-worlds interpretation of quantum mechanics needs to be taken seriously when he warns that we're all just one click away from getting plugged into The Matrix.

Right?

Sure, it could happen. But Hawking picks some bad examples as his evidence that we are accelerating towards the strong AI future. He writes: "Recent landmarks such as self-driving cars, a computer winning at Jeopardy! and the digital personal assistants Siri, Google Now and Cortana are merely symptoms of an IT arms race fuelled by unprecedented investments and building on an increasingly mature theoretical foundation."

I'm not so sure about that "increasingly mature theoretical foundation." Google's self-driving cars are amazing, but they are largely a product of advances in cheap sensor technology combined with the increasing feasibility of doing real-time data-crunching. The cars aren't autonomous in a self-aware sense analogous to 2001's HAL. The same is more or less true for Siri. We aren't really all that much closer to creating real machine intelligence now than we were 20 years ago. We've just gotten much better at exploiting the brute force of fast processing power and big data-enabled pattern matching to solve problems that previously seemed intractable. These advances are impressive, no question about it, but not yet scary. The machines aren't thinking. They're still just doing what they're told to do.

So Stephen, take a chill pill! At this juncture, we seem more likely to destroy our civilization by overheating the planet than by breeding malevolent AIs. Instead of worrying about mistakes we might make, maybe we should focus on the ones we've already made.

Read more here:

Stephen Hawking freaks out about artificial intelligence

Terasem Founder Building Robot Replica of Wife Using Artificial Intelligence – Video


Terasem Founder Building Robot Replica of Wife Using Artificial Intelligence
Terasem Founder Building Robot Replica of Wife Using Artificial Intelligence. *SUBSCRIBE* for more great videos! Mark Dice is a media analyst, political activist, and author who, in an...

By: Mark Dice

Read the original:

Terasem Founder Building Robot Replica of Wife Using Artificial Intelligence - Video

Survival of the fittest: Evolution used to advance artificial intelligence

By NADIA HILL / nadiah@laramieboomerang.com Thursday, May 01, 2014

Computer-simulated blocks gallop across a screen.

Multi-colored, they started as blobs that barely wiggled, but over time and through multiple generations, each one took shape into something similar to a horse or giraffe.

"To me, they look alive, not robotic," said Jeff Clune, University of Wyoming computer science assistant professor. "They're quirky but still functional. They have that je ne sais quoi of nature, with no human input."

Clune started up UW's evolving artificial intelligence lab in January 2013, and since then, four students have published research in peer-reviewed scientific journals. Several of his students have won national awards, ranging from Association for Computing Machinery honors to NASA space grants.

He currently has five Ph.D. students, two master's students, two undergraduate students and one Laramie High School student.

Clune and his 10 students spend their days using evolution to create smarter robots.

Artificial intelligence in robots is a software limitation, and most robots can't walk across a floor without tripping, Clune said.

"When I read news like firefighters dying, I think we should be sending in robots to do that," Clune said. "We're trying to harness the power of evolution. It's an extremely creative and powerful design force. Can we use that process to evolve robots? We can harness it, and when we do, evolution comes up with something smarter than humans can design."

The basic concept is Darwinian evolution and survival of the fittest, he said.
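The variation-and-selection loop Clune describes can be sketched in a few lines. The following is a minimal, hypothetical genetic algorithm, not the lab's actual code: it evolves a bit string toward all ones, where the fitness function in real robotics work would instead score something like how far a simulated robot walks.

```python
import random

TARGET_LEN = 20      # length of each genome (bit string)
POP_SIZE = 30        # individuals per generation
MUTATION_RATE = 0.05 # chance each bit flips when copied
GENERATIONS = 100

def fitness(genome):
    # Toy fitness: count of 1s. A robotics lab would run a
    # physics simulation here and measure, e.g., distance walked.
    return sum(genome)

def mutate(genome):
    # Copy the genome, flipping each bit with small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def evolve():
    # Start from random "blobs": random bit strings.
    population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Survival of the fittest: keep the top half unchanged...
        population.sort(key=fitness, reverse=True)
        survivors = population[:POP_SIZE // 2]
        # ...and refill the population with mutated copies of survivors.
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(POP_SIZE - len(survivors))]
    return max(population, key=fitness)

best = evolve()
print(fitness(best))
```

Because the fittest individuals survive unchanged each generation, the best fitness never decreases; random mutation supplies the novelty, and selection accumulates it, which is the "creative design force" Clune refers to.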

Read the original:

Survival of the fittest: Evolution used to advance artificial intelligence

AI in Carmageddon – More Intuition – Artificial Intelligence in Video Games – Video


AI in Carmageddon - More Intuition - Artificial Intelligence in Video Games
Fellow gamers on the Carmageddon forum were skeptical about my claims that the artificial intelligence in Carmageddon could predict in advance when the player was about to waste an opponent,...

By: AmazingArends

Read more here:

AI in Carmageddon - More Intuition - Artificial Intelligence in Video Games - Video