Artificial intelligence: Hawking's fears stir debate

Photo: Professor Stephen Hawking at the Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Britain, Oct 19, 2012. (REX/Jason Bye)

There was the psychotic HAL 9000 in "2001: A Space Odyssey," the humanoids that attacked their human masters in "I, Robot" and, of course, "The Terminator," in which a robot is sent into the past to kill a woman whose son will end the tyranny of the machines.

Never far from the surface, a dark, dystopian view of artificial intelligence (AI) has returned to the headlines, thanks to British physicist Stephen Hawking.

"The primitive forms of artificial intelligence we already have, have proved very useful. But I think the development of full artificial intelligence could spell the end of the human race," Hawking told the BBC.

"Once humans develop artificial intelligence it would take off on its own, and re-design itself at an ever increasing rate," he said.

But experts interviewed by AFP were divided.

Some agreed with Hawking, saying that the threat, even if it were distant, should be taken seriously. Others said his warning seemed overblown.

"I'm pleased that a scientist from the 'hard sciences' has spoken out. I've been saying the same thing for years," said Daniela Cerqui, an anthropologist at Switzerland's Lausanne University.

Gains in AI are creating machines that outstrip human performance, Cerqui argued. The trend will eventually end with responsibility for human life being delegated to machines, she predicted.

"It may seem like science fiction, but it's only a matter of degrees when you see what is happening right now," said Cerqui. "We are heading down the road he talked about, one step at a time."

Nick Bostrom, director of a programme on the impacts of future technology at the University of Oxford, said the threat of AI superiority was not immediate.


Stephen Hawking Says Artificial Intelligence Is A Threat To Human Existence – Video


Stephen Hawking is happy with the artificial intelligence system that helps him speak, but remains leery of making technology in general too smart.

By: GeoBeats News


Will Artificial Intelligence Destroy Mankind? | Frugals Take 046 – Video


For many of us, our first exposure to AI came from Hollywood: the murderous, lip-reading HAL 9000. One of my personal favorites was Colossus: The Forbin Project, a story about a powerful computer...

By: FrugalTech


Stephen Hawking Warns Of The Dangers Of Artificial Intelligence – IGN News – Video


Professor Stephen Hawking has spoken out about the possible dire consequences of developing advanced artificial intelligence. In other Hawking news, the astrophysicist has voiced interest...

By: IGN


Stephen Hawking warns artificial intelligence will lead to destruction of humanity – Video


Artificial intelligence has the potential to end mankind, according to Stephen Hawking. The renowned physicist warns that if machines can match human capabilities, they may decide humans are...

By: RT America


Wearing Your Intelligence: How to Apply Artificial Intelligence in Wearables and IoT

Wearables and the Internet of Things (IoT) may give the impression that it's all about the sensors, hardware, communication middleware, network and data, but the real value (and company valuation) is in insights. In this article, we explore artificial intelligence (AI) and machine learning, which are becoming indispensable tools for insights; views on AI; and a practical playbook on how to make AI part of your organization's core, defensible strategy.

Before we proceed, let's first define the terms. Otherwise, we risk commingling marketing terms like "Big Data" and not addressing the actual fields.

Artificial Intelligence: The field of artificial intelligence is the study and design of intelligent agents able to perform tasks that require human intelligence, such as visual perception, speech recognition, and decision-making. In order to pass the Turing test, intelligence must be able to reason, represent knowledge, plan, learn, communicate in natural language and integrate all these skills towards a common goal.

Machine Learning: The subfield of machine learning grew out of the effort to build artificial intelligence. Under the learning trait of AI, machine learning is the subfield that learns and adapts automatically through experience. It focuses on prediction, based on known properties learned from the training data. The origin of machine learning can be traced back to the development of neural network models and later to the decision tree method. Supervised and unsupervised learning algorithms are used to predict the outcome based on the data.
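The "prediction based on known properties learned from training data" idea can be made concrete with a minimal supervised-learning sketch: a 1-nearest-neighbor classifier. The wearable-style readings and labels below are invented for illustration.

```python
# A minimal supervised-learning sketch: a 1-nearest-neighbor classifier
# predicts the label of the training point closest to the query.
# The wearable-style readings below are invented for illustration.

def predict(train_x, train_y, query):
    """Return the label of the training example nearest to `query`."""
    def sq_dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    nearest = min(range(len(train_x)), key=lambda i: sq_dist(train_x[i], query))
    return train_y[nearest]

# Hypothetical (heart_rate, steps_per_minute) readings with known outcomes.
train_x = [(60, 0), (65, 5), (120, 160), (130, 170)]
train_y = ["resting", "resting", "running", "running"]

print(predict(train_x, train_y, (62, 2)))    # resting
print(predict(train_x, train_y, (125, 165))) # running
```

The "experience" here is simply the labeled examples; adding more of them changes future predictions with no change to the code, which is the essence of learning from data.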

Data Mining: The field of data mining grew out of Knowledge Discovery in Databases (KDD), where data mining represents the analysis step of the KDD process. Data mining focuses on the discovery of previously unknown properties in the data. It originated from research on efficient algorithms for mining association rules in large databases, which then spurred other research on discovering patterns and more efficient mining algorithms. Machine learning and data mining overlap in many ways. Data mining uses many machine learning methods, but often with a slightly different goal in mind. The difference between machine learning and data mining is that in machine learning, performance is usually evaluated with respect to the ability to reproduce known knowledge, while in KDD the key task is the discovery of previously unknown knowledge. Unlike machine learning, in KDD supervised methods cannot be used, due to the unavailability of training data.
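To illustrate the association-rule mining mentioned above, here is a sketch of its first step: counting which item pairs occur together in enough transactions (the support threshold). The transaction data are invented.

```python
from itertools import combinations
from collections import Counter

# Minimal frequent-itemset counting, the first step of association-rule
# mining. The transactions below are invented for illustration.
transactions = [
    {"watch", "band"}, {"watch", "charger"},
    {"watch", "band", "charger"}, {"band"},
]

def frequent_pairs(transactions, min_support):
    """Count item pairs appearing together in at least min_support baskets."""
    counts = Counter()
    for basket in transactions:
        for pair in combinations(sorted(basket), 2):
            counts[pair] += 1
    return {pair: n for pair, n in counts.items() if n >= min_support}

print(frequent_pairs(transactions, 2))
# {('band', 'watch'): 2, ('charger', 'watch'): 2}
```

A rule such as band -> watch would then be scored by confidence: the pair's support divided by the support of "band" alone (here 2/3). Note that nothing here required labeled training data, which is the contrast with machine learning drawn above.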

Though perhaps not explicitly stated, you will find that some at your workplace hold sci-fi views of AI that could hamper proactive exploration of AI and machine learning within your organization. AI, for some, brings images of HAL 9000 from "2001: A Space Odyssey" or more recent films such as "Her" and "The Machine."

Many futurists have speculated about the future of artificial intelligence that could rival or exceed human intelligence. One of those futurists is Ray Kurzweil, a recipient of the prestigious National Medal of Technology and Innovation honor.

In The Singularity is Near, Kurzweil elaborates on the singularity hypothesis. Kurzweil predicts that accelerating progress in technologies will cause a runaway effect wherein artificial intelligence will exceed human intellectual capacity and control, thus radically changing or even ending civilization in an event called the singularity. During this period, he predicts human life will be irreversibly transformed and humans will transcend the limitations of our biological bodies and brain.

Kurzweil claims that machines will pass the Turing AI test by 2029, and that around 2045, the pace of change will be so astonishingly quick that we won't be able to keep up unless we enhance our own intelligence by merging with the intelligent machines we are creating. He further claims that humans will be a hybrid of biological and non-biological intelligence that becomes increasingly dominated by its non-biological component. Kurzweil envisions nanobots inside our bodies that fight against infections and cancer, replace organs, and improve memory and cognitive abilities. Eventually our bodies will contain so much augmentation that we will be able to alter our physical manifestation at will.

The artificial general intelligence (AGI) or strong AI community, though varying widely on the timeframe to reach the singularity, is in consensus that it's plausible, with most mainstream AI researchers doubting that progress will be rapid.


Hawking Sounds Alarm Over AI's End Game

Artificial intelligence eventually could bring about mankind's demise, renowned physicist Stephen Hawking said in an interview published earlier this week.

"The primitive forms of artificial intelligence we already have have proved very useful, but I think the development of full artificial intelligence could spell the end of the human race," Hawking told the BBC in an interview commemorating the launch of a new system designed to help him communicate.

"Once humans develop artificial intelligence, it would take off on its own and redesign itself at an ever-increasing rate," he added. "Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."

Because he is almost entirely paralyzed by a motor neuron disease related to amyotrophic lateral sclerosis, Hawking relies on technology to communicate. His new platform was created by Intel to replace a decades-old system.

Dubbed "ACAT" (Assistive Context Aware Toolkit), the new technology has doubled Hawking's typing speed and enabled a tenfold improvement in common tasks such as navigating the Web and sending emails.

Whereas previously conducting a Web search meant that Hawking had to go through multiple steps -- including exiting from his communication window, navigating a mouse to run the browser, navigating the mouse again to the search bar, and finally typing the search text -- the new system automates all of those steps for a seamless and swift process.

Newly integrated software from SwiftKey has delivered a particularly significant improvement in the system's ability to learn from Hawking to predict his next characters and words; as a result, he now must type less than 20 percent of all characters.
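The kind of next-character and next-word prediction described here can be sketched with a simple bigram model: count which word follows which in past text, then suggest the most frequent successor. SwiftKey's production models are far more sophisticated; this is only an illustration of the idea, with an invented corpus.

```python
from collections import Counter, defaultdict

# Sketch of next-word prediction from past text, the idea behind
# predictive keyboards. The corpus here is invented for illustration.
corpus = "the universe is expanding and the universe is vast".split()

# Count, for each word, which words have followed it.
model = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev][nxt] += 1

def suggest(word):
    """Return the most frequent word seen after `word`, or None."""
    counts = model[word]
    return counts.most_common(1)[0][0] if counts else None

print(suggest("the"))       # universe
print(suggest("universe"))  # is
```

The more a user types, the richer the counts become, which is why such a system "learns" an individual's phrasing over time and lets him type fewer characters per word.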

The open and customizable ACAT platform will be available to research and technology communities by January of next year, Intel said.

Hawking's cautionary statements about AI echo similar warnings recently delivered by Elon Musk, CEO of both SpaceX and Tesla Motors.

Musk, Hawking and futurist Ray Kurzweil "all share a vision of autonomous artificial intelligence that will begin evolving and adding capabilities at a rate that we mere humans can't keep up with," said Dan Miller, founder and lead analyst with Opus Research.


Stephen Hawking: ‘AI could spell end of the human race’ – Video


Professor Stephen Hawking has told the BBC that artificial intelligence could spell the end for the human race. In an interview after the launch...

By: BBC News


Stephen Hawking: Artificial intelligence could end human …

By Tanya Lewis

Stephen Hawking recently began using a speech synthesizer system that uses artificial intelligence to predict words he might use. (Flickr/NASA HQ PHOTO)

The eminent British physicist Stephen Hawking warns that the development of intelligent machines could pose a major threat to humanity.

"The development of full artificial intelligence (AI) could spell the end of the human race," Hawking told the BBC.

The famed scientist's warnings about AI came in response to a question about his new voice system. Hawking has a form of the progressive neurological disease called amyotrophic lateral sclerosis (ALS, or Lou Gehrig's disease), and uses a voice synthesizer to communicate. Recently, he has been using a new system that employs artificial intelligence. Developed in part by the British company SwiftKey, the new system learns how Hawking thinks and suggests words he might want to use next, according to the BBC.

Humanity's biggest threat?

Fears about developing intelligent machines go back centuries. More recent pop culture is rife with depictions of machines taking over, from the computer HAL in Stanley Kubrick's "2001: A Space Odyssey" to Arnold Schwarzenegger's character in "The Terminator" films.

Inventor and futurist Ray Kurzweil, director of engineering at Google, refers to the point in time when machine intelligence surpasses human intelligence as "the singularity," which he predicts could come as early as 2045. Other experts say such a day is a long way off.

It's not the first time Hawking has warned about the potential dangers of artificial intelligence. In April, Hawking penned an op-ed for The Huffington Post with well-known physicists Max Tegmark and Frank Wilczek of MIT, and computer scientist Stuart Russell of the University of California, Berkeley, forecasting that the creation of AI will be "the biggest event in human history." Unfortunately, it may also be the last, the scientists wrote.

And they're not alone: billionaire entrepreneur Elon Musk called artificial intelligence "our biggest existential threat." The CEO of the spaceflight company SpaceX and the electric car company Tesla Motors told an audience at MIT that humanity needs to be "very careful" with AI, and he called for national and international oversight of the field.



Stephen Hawking warns artificial intelligence could be threat to human race

Stephen Hawking has warned that artificial intelligence could one day "spell the end of the human race."

Speaking to the BBC, the eminent theoretical physicist said the artificial intelligence developed so far has been useful but expressed fears of creating something that far exceeded human abilities.

"It would take off on its own, and re-design itself at an ever increasing rate," Hawking said. "Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."

Hawking, who has the motor neuron disease ALS, spoke using a new system developed by Intel and Swiftkey. Their technology, already in use in a smartphone keyboard app, learns how the professor thinks and then proposes words he might want to use next.

"I expect it will speed up my writing considerably," he said.

Hawking praised the "primitive forms" of artificial intelligence already in use today, though he eschewed drawing a connection to the machine learning that is required for the predictive capabilities of his speaking device.

Hawking's comments were similar to those made recently by SpaceX and Tesla founder Elon Musk, who called AI a threat to humanity.

"With artificial intelligence, we are summoning the demon," Musk said during an October centennial celebration of the MIT Aeronautics and Astronautics Department. Musk had earlier sent a tweet saying that AI is "potentially more dangerous than nukes."

More broadly, Hawking told the BBC that he saw plenty of benefits from the Internet, but cautioned that it, too, had a dark side.

He called the Internet a "command center for criminals and terrorists," adding, "More must be done by the Internet companies to counter the threat, but the difficulty is to do this without sacrificing freedom and privacy."


Stephen Hawking says AI could 'end human race'

Barely a month after Elon Musk called artificial intelligence a threat to humanity, another voice, a much bigger voice in the scientific world, warned that the technology could end mankind.

Stephen Hawking, the renowned physicist, cosmologist and author, in an interview with the BBC this week, said "the development of full artificial intelligence could spell the end of the human race."

The BBC noted that Hawking said the state of artificial intelligence (AI) today holds no threat, but he is concerned about scientists in the future creating technology that can surpass humans in terms of both intelligence and physical strength.

"It would take off on its own, and re-design itself at an ever-increasing rate," Hawking said. "Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."

Hawking's comments closely follow those made by high-tech entrepreneur Musk, who raised controversy in late October when he warned an audience at MIT about the dangers behind AI research.

"I think we should be very careful about artificial intelligence," said Musk, CEO of electric car maker Tesla Motors, and CEO and co-founder of the commercial space flight company SpaceX. "If I were to guess at what our biggest existential threat is, it's probably that... With artificial intelligence, we are summoning the demon. In all those stories with the guy with the pentagram and the holy water, and he's sure he can control the demon. It doesn't work out."

Musk, who tweeted this past summer that AI is "potentially more dangerous than nukes," also told the MIT audience that the industry needs national and international oversight.

Musk's comments raised discussion about the state of artificial intelligence, which today is more about robotic vacuum cleaners than Terminator-like robots that shoot people and take over the world.

Yaser Abu-Mostafa, professor of electrical engineering and computer science at the California Institute of Technology, said he was a little surprised that AI is getting so much negative attention since the fearful talk hasn't been preceded by the creation of a new, potentially scary technology.


Stephen Hawking: Artificial intelligence could end mankind – Wed, 03 Dec 2014 PST

December 3, 2014 in Nation/World

Associated Press


LONDON - Physicist Stephen Hawking has warned that the rise of artificial intelligence could see the human race become extinct.

In an interview with the BBC, the scientist said that while primitive forms of artificial intelligence have proved useful, if the technology is developed to a level that can surpass humans, it could spell the end of the human race.

He said that advanced artificial intelligence would "take off on its own, and redesign itself at an ever-increasing rate."

Human biological evolution will not be able to compete and would be superseded, he said in the interview Tuesday.



Four ways that technology could destroy mankind

It would rapidly become all-powerful, and we are as capable of understanding what a machine like that could do as a worm is of comprehending Stephen Hawking's immense intellect.

How close are we to that basic thinking machine? Simple artificial intelligence is already being harnessed to design electrical circuits that we don't fully understand. Some antenna designs produced by genetic algorithms, for example, work better than those conceived by humans, and we aren't always sure why, because they're too complex.
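The genetic algorithms mentioned here can be illustrated with a toy example: evolving a bit string toward all ones through selection, crossover and mutation. This is a deliberately trivial fitness function; a real antenna-design run would score each candidate with an electromagnetic simulator instead.

```python
import random

# A toy genetic algorithm evolving a bit string toward all ones
# (the "OneMax" problem). The same search loop, with a physics
# simulator as the fitness function, is how evolved antenna designs
# are produced.
random.seed(0)

def fitness(bits):
    return sum(bits)

def evolve(length=20, pop_size=30, generations=60):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # selection: keep the fittest half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)   # single-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(length)        # point mutation: flip one bit
            child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # close to the maximum of 20
```

Nothing in the loop encodes how to solve the problem; good designs simply accumulate because they out-reproduce bad ones, which is also why the resulting designs can be hard to explain.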

Combine this software intelligence with robot bodies and a malevolent motivation and you have a gory science fiction film. But because every aspect of our lives is controlled by computers, such a super-intelligence wouldn't need arms and legs to make life unpleasant.

You can argue that we could do AI experiments on computers isolated from sensitive systems, but we don't seem to be able to keep human hackers in check, so why assume we can outwit thinking machines? You can also argue that AI may prove to be friendly, but if they treat us the way that we treat less intelligent creatures, then we're in a world of trouble.

There were fears that the first atomic bomb tests could ignite the atmosphere, burning alive every man, woman and child on Earth. Some believed that the Large Hadron Collider would create a black hole when first booted up, which would consume the Earth. We got away with it, thanks to the fact that both suggestions were hysterical nonsense. But what's to say that one day we won't attempt an experiment which actually does have apocalyptic results?

A decade ago they seemed like distant sci-fi to most people, but we're all familiar with 3D printers now: you can buy them on Amazon. Next-day delivery.

We're also creating 3D printers which can replicate themselves by printing the component parts for a second machine.

Imagine a machine capable of doing this which is not only microscopically small, but nanoscopically small. So small that it can stack atoms together to make molecules. This could lead to all sorts of advances in manufacturing and medicine: inject a few thousand into a patient and they'll dissolve a tumour into harmless saline. Millions could float in your car's engine oil, replacing worn metal on vital components and removing the need for human maintenance.

But what if we get it wrong? A single typo in the source code and instead of removing a cancerous lump in a patient these medi-bots could begin to indiscriminately churn out copies of themselves until the patient is converted into a pile of billions of the machines. Then the hospital, too, and the city its in. Finally the whole planet.

This is the grey goo scenario. There would be no way to stop it.
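The runaway arithmetic behind this scenario is easy to check: a replicator population that doubles each generation needs only logarithmically many doublings to reach any target count. The figure of roughly 7 x 10^27 atoms in a human body used below is a commonly cited order-of-magnitude estimate.

```python
import math

# Back-of-the-envelope arithmetic for runaway self-replication:
# doubling each generation means the generation count needed to
# reach a target is logarithmic in the target.
ATOMS_IN_HUMAN_BODY = 7e27  # rough order-of-magnitude estimate

def generations_to_reach(target, start=1.0):
    """Doublings needed for `start` replicators to reach `target`."""
    return math.ceil(math.log2(target / start))

print(generations_to_reach(ATOMS_IN_HUMAN_BODY))  # 93
```

If each doubling took even a minute, fewer than a hundred generations separate one stray replicator from a patient-sized mass of machines, which is why the scenario is framed as unstoppable once started.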


rFactor Thrust SSC at Jacksonville Superspeedway (AI) {HD} – Video


Driver: Nicolas Soto . Time lap: 30.949 . Car: Thrust SSC (SuperSonic Car) . Track: Jacksonville Superspeedway . Racing Series: Thrust SSC . Controller: Artificial Intelligence . Driving Aids:...

By: Nicolas Soto


Google's Intelligence Designer

The man behind a startup acquired by Google for $628 million plans to build a revolutionary new artificial intelligence.

Demis Hassabis started playing chess at age four and soon blossomed into a child prodigy. At age eight, success on the chessboard led him to ponder two questions that have obsessed him ever since: first, how does the brain learn to master complex tasks; and second, could computers ever do the same?

Now 38, Hassabis puzzles over those questions for Google, having sold his little-known London-based startup, DeepMind, to the search company earlier this year for a reported 400 million pounds ($650 million at the time).

Google snapped up DeepMind shortly after it demonstrated software capable of teaching itself to play classic video games to a super-human level (see "Is Google Cornering the Market on Deep Learning?"). At the TED conference in Vancouver this year, Google CEO Larry Page gushed about Hassabis and called his company's technology "one of the most exciting things I've seen in a long time."

Researchers are already looking for ways that DeepMind technology could improve some of Google's existing products, such as search. But if the technology progresses as Hassabis hopes, it could change the role that computers play in many fields.

DeepMind seeks to build artificial intelligence software that can learn when faced with almost any problem. This could help address some of the world's most intractable problems, says Hassabis. "AI has huge potential to be amazing for humanity," he says. "It will really accelerate progress in solving disease and all these things we're making relatively slow progress on at the moment."

Renaissance Man

Hassabis's quest to understand and create intelligence has led him through three careers: game developer, neuroscientist and, now, artificial-intelligence entrepreneur. After completing high school two years early, he got a job with the famed British games designer Peter Molyneux. At 17, Hassabis led development of the classic simulation game Theme Park, released in 1994. He went on to complete a degree in computer science at the University of Cambridge and founded his own successful games company in 1998.

But the demands of building successful computer games limited how much Hassabis could work on his true calling. "I thought it was time to do something that focused on intelligence as a primary thing," he says.

So in 2005, Hassabis began a PhD in neuroscience at University College London, with the idea that studying real brains might turn up clues that could help with artificial intelligence. He chose to study the hippocampus, a part of the brain that underpins memory and spatial navigation, and which is still relatively poorly understood. "I picked areas and functions of the brain that we didn't have very good algorithms for," he says.
