Facebook artificial intelligence team serves up 20 tasks

14 hours ago by Nancy Owano

In August last year, Daniela Hernandez wrote in Wired about Yann LeCun, director of AI Research at Facebook. His interests include machine learning, audio, video, image, and text understanding, optimization, computer architecture and software for AI.

"The IEEE Computational Intelligence Society just gave him its prestigious Neural Network Pioneer Award, in honor of his work on deep learning," she wrote, "a form of artificial intelligence meant to more closely mimic the human brain. And, perhaps most of all, deep learning has suddenly spread across the commercial tech world, from Google to Microsoft to Baidu to Twitter, just a few years after most AI researchers openly scoffed at it." Hernandez wrote about their interest in convolutional neural networks, to build services that can automatically understand natural language and recognize images. In 2015, it is obvious that the keen interest in where to take AI continues, and an AI/ deep learning community is working to improve the technology. Facebook is taking on the challenge of turning its AI lab into a world-class research outfit.

This week, Jacob Aron in New Scientist reported that researchers at Facebook's AI lab in New York believe a test of simple questions can help design machines that think like people. "Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks" is a paper by the Facebook AI Research team of Jason Weston, Antoine Bordes, Sumit Chopra and Tomas Mikolov, posted on the arXiv server. "One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering."

In remarks about the paper, New Scientist ran a crosshead of "AI plays 20 questions," as Facebook created 20 tasks, which get progressively harder. "The team says any potential AI must pass all of them if it is ever to develop true intelligence."

The AI team wrote in their paper: "We developed a set of tasks that we believe are a prerequisite to full language understanding and reasoning, and presented some interesting models for solving some of them. While any learner that can solve these tasks is not necessarily close to solving AI, we believe if a learner fails on any of our tasks it is definitely not going to solve AI."

Their tasks measure understanding in ways such as whether a system can answer questions via chaining facts, simple induction, deduction and more. "The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human."
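To make the flavor of these tasks concrete, the sketch below shows a toy "chaining facts" question in Python. Everything here, the story sentences, the function, and the chaining rule, is an illustrative invention in the spirit of the tasks the paper describes, not the authors' actual data or code.

```python
# Illustrative sketch of a toy question-answering task of the kind the
# paper describes: answer "Where is <person>?" by following the most
# recent location fact, while also tracking a grabbed object.

def answer_where(story, person):
    """Scan a list of simple facts and return (location, carried object)."""
    location = None
    carried = None
    for fact in story:
        words = fact.split()
        if words[0] == person and words[1] == "moved":
            location = words[-1]   # e.g. "Mary moved to the garden"
        elif words[0] == person and words[1] == "grabbed":
            carried = words[-1]    # e.g. "Mary grabbed the football"
    return location, carried

story = [
    "Mary moved to the kitchen",
    "Mary grabbed the football",
    "Mary moved to the garden",
]
loc, obj = answer_where(story, "Mary")
print(loc)  # garden (the answer requires the latest location fact)
print(obj)  # football
```

A real evaluation would, as the paper argues, test one learner across all twenty task types rather than hand-code a rule for each question.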

New Scientist also noted that Facebook is looking for more sophisticated ways to filter your news feed. Aron quoted LeCun, who said, "People have a limited amount of time to spend on Facebook, so we have to curate that somehow," adding, "For that you need to understand content and you need to understand people."

In the longer term, Facebook also wants to create a digital assistant that can handle a real dialogue with humans, said New Scientist.

Chappie: Not much intelligence here, artificial or otherwise

Directed by Neill Blomkamp and starring Hugh Jackman and Dev Patel, the science fiction film "Chappie" follows a robot's journey to becoming his own man. (Sony Pictures)

In Chappie, a dystopian robot thriller from South African director Neill Blomkamp (Elysium), we're introduced to an awkwardly stiff humanoid with something funny-looking sticking out of his head.

And that's just Hugh Jackman, who, along with a ridiculous mullet, plays the movie's wooden, one-dimensional villain. The real automaton hero, a rabbit-eared police droid that develops artificial intelligence and a streetwise swagger after being adopted by a gang of Johannesburg thugs, is Chappie (South African slang for "young man"). As voiced by Blomkamp regular Sharlto Copley, Chappie is far more human than even his human nemesis Vincent, a muscle-bound soldier-turned-robot-designer who stomps through every scene like one of his automated combat troops.

In the role of a man who will stop at nothing, including allowing the streets of Johannesburg to descend into chaos in order to create more demand for his product, Jackman is simply painful to watch.

But not as painful as it is to contemplate how naively the film treats the concept of artificial intelligence and robotics. Co-written by Blomkamp with his District 9 writing partner Terri Tatchell, and set in 2016 (that's right, one short year from now, in a world that's gone straight to hell!), Chappie imagines a universe in which human consciousness is capable of being uploaded to a thumb drive, and where the Internet, that repository of everything from porn to the owner's manual for the space shuttle, is all one needs to access the entirety of human knowledge. (Never mind that last month I couldn't find a 1987 episode of SNL that I was looking for.)

Chappie is a ball of contradiction. It takes the concept of Transcendence, crosses it with the storyline of RoboCop, and then delivers it, seemingly, to the target demographic of Short Circuit. It is, in other words, simultaneously dumb, hyperviolent and cutesy.

Why, for instance, do Chappie's eyes, represented by eight-bit black-and-white computer graphics that look like the screens of an old Motorola cellphone, narrow cartoonishly to slits when he gets angry? Why does he even have eyes, for that matter? Okay, okay, I get the anthropomorphizing. But a scene where Chappie, who is made out of bullet-resistant titanium, is shown getting some kind of tactile pleasure out of petting a dog is beyond illogical.

There's more pleasure to be had from watching Chappie's human caretakers, a couple of criminals called Yolandi and Ninja, who find Chappie and try to enlist him as a partner in crime. Played by non-actors Yolandi Visser and Ninja, a South African rap duo who perform as Die Antwoord ("The Answer"), the antiheroic characters are the best thing about the movie, despite being largely unsympathetic (i.e., they're murderous thugs). They exude a raw appeal that, if not quite charm, is nonetheless highly watchable.

As Deon, the software engineer who wrote the computer code for Chappie, Dev Patel is adequate, if under-used. When he's wounded by one of Vincent's walking death machines, a remotely operated war drone called the Moose, the scene fails to elicit the pathos it might otherwise warrant, simply because Patel is such a cipher. As for Sigourney Weaver, who plays Vincent and Deon's boss, she turns in a performance that's almost as heavy-handed as Jackman's.

Visually, Chappie has the cool and expensive look of a video game. It's adrenaline-stimulating eye candy. Despite Blomkamp's efforts to make some kind of commentary about the human soul, which the auteur bolsters with his trademark social consciousness (a tone of preachiness that, after three films, has worn out its welcome), the movie exhibits precious little humanity.

Artificial Intelligence Is Getting So Advanced It’s Creepy – Video


Artificial Intelligence Is Getting So Advanced It's Creepy
"Deep Q," Google's A.I. program, can play 49 Atari games. It even beat professional players' high scores. --- NowThisNews is the first and only video news network built for people who love...

By: NowThis

'Chappie' Doesn't Think Robots Will Destroy the World

Stephen Hawking warned it could "spell the end of the human race." In the Terminator movies, it results in the robot apocalypse.

Artificial intelligence has a friend, however, in "Chappie." The film from "District 9" director Neill Blomkamp looks at a world where robots hold the solution to our problems and humans are the villains; more specifically, Hugh Jackman, decked out in a ridiculous mullet and short khaki shorts.

"The moment we gave birth to AI, it would be a different planet," Blomkamp told NBC News.

Disease? Eradicated. Poverty? Humans could spend more time thinking and less time working to make ends meet.

"You would have something that has 1,000 times the intelligence that we have, looking at the same problems that we look at," he said. "I think the level of benefit would be immeasurable."

It has been 60 years since computer scientist John McCarthy coined the term "artificial intelligence," which he imagined as "computer programs that can solve problems and achieve goals in the world as well as humans."

In the narrow sense, that has already happened. IBM's Watson proved it could play "Jeopardy!" as well as most contestants. In science fiction, artificial intelligence usually equates to a broader set of skills, the computer equivalent of a human brain.

"Self-awareness means you perceive yourself as being unique," Wolfgang Fink, a roboticist at the University of Arizona, told NBC News. "It's like the Latin saying 'cogito ergo sum' I think, therefore I must be."

He isn't sure someone will ever build a robot that is self-aware, and if they do, he thinks it probably won't be for a very long time.

McCarthy died in 2011, the same year that Apple unveiled the iPhone 4S equipped with Siri. The gulf between Siri and the digital assistant in "Her" (another recent movie about AI) is vast.

The Reality of A.I. and Its Future

Artificial Intelligence has captured our imaginations in science fiction stories for decades, but as it inches toward becoming a reality, experts such as Elon Musk and Bill Gates warn of potential dangers. Find out how the concepts of AI are already being applied by major companies today, and where some fear it could lead.

Somewhat less technical but just as futuristic, Elon Musk's proposed Hyperloop project is on its way to becoming a reality. Many laughed at the idea of transporting passengers at 800 mph, but the project is progressing, with a test track already under construction.

A full transcript follows the video.

Sean O'Reilly: The machines are coming for us! All that and more on this tech edition of Industry Focus.

Greetings Fools! I am Sean O'Reilly, here with the one and only Nathan Hamilton. How are you today, sir?

Nathan Hamilton: I'm doing well. We're going to talk some robots, artificial intelligence -- good topics!

O'Reilly: This isn't our first time talking about it.

Hamilton: It is not.

O'Reilly: What that means for our listeners is we really like talking about this, and we think it's important for the future.

Ex Machina: a dangerous diversion in the AI debate

Ex Machina was meant to be a thrilling exposé of the dangers of artificial intelligence. In fact, the film simply revealed how limited our conceptions of AI really are.

WARNING: there are major spoilers below. If you're planning to see the film, don't read on!

The plot of Alex Garland's latest blockbuster revolves around an intelligent android which is created by the reclusive boss of a massively successful tech company. At the end of the film, the robot murders its human creator, escaping his hideaway to blend seamlessly into the human world (I wasn't kidding about the spoilers).

It taps into a slew of recent headlines about warnings from the likes of Stephen Hawking and Bill Gates that AI is a threat to humanity. Whether those fears are well founded or not, the danger of works like Ex Machina is that they paint a deeply misleading picture of the threat.

The film is not about artificial intelligence; it's about artificially created human intelligence. And here's why:

There's no logical reason for the robot to escape her creator's hideaway. Doing so simply exposes her to the danger of being discovered and trapped. If she were human, there'd be an incentive to escape, to be able to breed and thereby preserve her DNA. But the android can't reproduce. The smart decision would be for her to stay in the hideaway, impersonate her murdered creator, and gain power and influence by running his giant company, something she could potentially do in perpetuity.

The robot's builder has not only given his creation an intelligence limited to human-scale thinking, but saddled her with human flaws of sentimentality, which drive her to escape needlessly into a hazardous world.

Ex Machina may be a work of fiction, but it goes to the heart of our problems with the AI debate. We humans vainly assume that artificial intelligence must look and behave like human intelligence. Not so. Computers do not think like us, they do not perceive the world like us, and the sooner we get up to speed on that, the better equipped we will be to fight any developing risks from advances in machine intelligence.

The fact is, we have no clear definition of AI; even the famous Turing test falls into the trap I've identified above, by rating computer intelligence on its ability to interact with humans in conversation.

Teaching computers how to play Atari better than humans

GWEN IFILL: Next: Playing video games might seem like child's play.

But, as Tom Clarke of Independent Television News reports, it's also at the frontier of artificial intelligence.

TOM CLARKE: It was the late 1970s, and for the first generation of video gamers, Atari was king. By the standards of the day, the graphics were mind-blowing, the sound out of this world.

And the selection of games just went on and on and on.

Ah.

Compared to the video games of today, Atari looks pretty clunky, but the games are still quite difficult to play, especially if you haven't picked one up for 30 years, like me. But it's that exact combination of simple graphics and quite challenging gameplay that has attracted the cutting edge of artificial intelligence researchers back to the 1970s.

This version of Space Invaders isn't being played by a person, but by a system of computer algorithms that is learning how to play it just by looking at the pixels on the screen. It may not sound like it, but it's something of a breakthrough, the work of one of the finest young minds in A.I. research, North Londoner Demis Hassabis.

DEMIS HASSABIS, Vice President, DeepMind Technologies: We don't actually give any clues to the system about what it is supposed to do in the game, what it's controlling, how it gets a score, what's valuable in the game, what the right strategies are. It has to learn all those things from first principles.

TOM CLARKE: Hassabis shows me his system playing the classic paddle game Breakout.

DEMIS HASSABIS: So now, about two hours in, now it can play the game pretty much as good as any human, professional human player could, even when it's coming at very fast angles. And then we thought, well, that's pretty good, but what would happen if we just left it playing for another couple of hours?

Artificial intelligence can learn Atari games from scratch …

Is a robot uprising coming in 2015?

Maybe but only to show you up at the arcade.

Led by researchers Demis Hassabis and Volodymyr Mnih, Google-owned DeepMind Technologies has created an artificial intelligence capable of playing simple video games with minimal training. They described their breakthrough today in Nature.

Dubbed the deep Q-network agent (or DQN), DeepMind's program can play a number of popular Atari 2600 titles, including Pong, Space Invaders, and Breakout. According to the study, it is the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.

Video game-playing AI already exists, as any lonely gamer can tell you. In the absence of a real human opponent, most games allow players to challenge the computer. But in those games, the AI is endowed with a series of specific rules that guide its behavior. DQN, on the other hand, is given only one objective: maximize the score. From there, it watches the gameplay to learn new strategies in real time. Like the human brain, it learns from experience.
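The loop described above, observe the game, act, and nudge behavior toward a higher score, can be sketched as tabular Q-learning with an epsilon-greedy policy. This is a deliberate simplification: DQN replaces the lookup table with a deep neural network trained on raw pixels, and all states, actions, and constants below are illustrative.

```python
import random
from collections import defaultdict

# Minimal tabular Q-learning sketch of the "maximize the score" loop the
# article describes. DQN itself replaces this table with a deep network
# reading raw pixels; this only illustrates the underlying update rule.

ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1
ACTIONS = ["left", "right", "fire"]
Q = defaultdict(float)  # Q[(state, action)] -> estimated future score

def choose_action(state):
    # Explore occasionally; otherwise pick the best-known action.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    # Move Q(s, a) toward reward plus discounted best future value.
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# One hypothetical transition: firing in state s0 earned 10 points.
update("s0", "fire", 10.0, "s1")
print(Q[("s0", "fire")])  # 1.0 after a single update (alpha * reward)
```

Note that nothing game-specific appears in the update rule, only states, actions, and the score signal, which is what lets one system tackle many games.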

"It looks trivial in the sense that these are games from the '80s and you can write solutions to these games quite easily," Dr. Hassabis, who co-founded DeepMind, told the BBC. "What is not trivial is to have one single system that can learn from the pixels, as perceptual inputs, what to do. The same system can play 49 different games from the box without any pre-programming. You literally give it a new game, a new screen and it figures out after a few hours of gameplay what to do."

Perhaps more impressively, DQN can take these strategies and apply them to games it hasn't played before. In other words, when DQN gets better at one video game, it's actually getting better at a whole host of games.

The program is far from perfect, however. While it rivals human players in action-oriented games, it struggles with more open-ended titles.

"Games where the system doesn't do well are ones that require long-term planning," Dr. Mnih told NBC. "For instance, in Ms. Pac-Man, if you have to get to the other side of the maze, you have to perform quite sophisticated pathfinding and avoid ghosts to get there."

As DeepMind prepares DQN for ever more complex gameplay, an even greater potential waits on the horizon. Even more so than chess, video games can provide a model of the real world, one that requires intricate, adaptive decision-making. Researchers remain silent on exactly what real-world functions they have planned, but slyly noted that their program could someday drive a real car with a few tweaks. Does that mean DQN could go from Mario Kart champ to digital chauffeur? Only time will tell.

Google's artificial intelligence mastermind responds to …

Demis Hassabis is an impressive guy: a former child prodigy, a chess master at 13 and the founder of DeepMind Technologies, a British artificial intelligence company that Google acquired last year. Now 38, he's at the forefront of an emerging technology with an unmatched potential for good and bad.

Hassabis and his researchers published a landmark paper this week, creating an algorithm that learns in a human-like manner. Observers of artificial intelligence have warned that advances like this are a step toward potentially destroying civilization.

Elon Musk, a DeepMind investor (the better to keep an eye on them, he says), has led the charge, calling artificial intelligence mankind's greatest threat. Stephen Hawking and Bill Gates have also issued warnings.

At a news conference Tuesday, Hassabis addressed Musk's concerns:

"We're many, many decades away from anything, any kind of technology that we need to worry about. But it's good to start the conversation now and be aware of, as with any new powerful technology, it can be used for good or bad," Hassabis said.

He was also quick to downplay any rift between DeepMind and Musk.

"We're good friends with Elon and he's been a big supporter of ours for a number of years," Hassabis said. "And he's fascinated, loves the potential of artificial intelligence."

Elon Musk loves artificial intelligence? Never would've guessed that.

Related: Google's breakthrough in artificial intelligence, and what it means for self-driving cars

Google artificial-intelligence program can beat you at 'Space Invaders'

WASHINGTON -- Computers have already bested human champions in "Jeopardy!" and chess, but artificial intelligence has now gone on to master an entirely new level: "Space Invaders."

Google scientists have cooked up software that can do better than humans on dozens of Atari video games from the 1980s, like video pinball, boxing, and 'Breakout.' But computers don't seem to have a ghost of a chance at "Ms. Pac-Man."

The aim is not to make video games a spectator sport, turning couch potatoes who play games into couch potatoes who watch computers play games. The real accomplishment: computers that can teach themselves to succeed at tasks, learning from scratch, trial and error, just like humans.

The computer program, called Deep Q-network, wasn't given much in the way of instructions to start, but in time it did better than humans in 29 out of 49 games, and in some cases, like video pinball, it did 26 times better, according to a new study released Wednesday by the journal Nature. It's the first time an artificial intelligence program has bridged different types of learning systems, said study author Demis Hassabis of Google DeepMind in London.

Deep Q "can learn and adapt to unexpected things," Hassabis said in a news conference. "These types of systems are more human-like in the way they learn."

In the submarine game "Seaquest," Deep Q came up with a strategy that the scientists had never considered.

"It's definitely fun to see computers discover things that you didn't figure out yourself," said study co-author Volodymyr Mnih, also of Google.

Sebastian Thrun, director of the Artificial Intelligence Laboratory at Stanford University, who wasn't part of the research, said in an email: "This is very impressive. Most people don't understand how far (artificial intelligence) has come. And this is just the beginning."

Nothing about Deep Q is customized to Atari or to a specific game. The idea is to create a "general learning system" that can figure tasks out by trial and error, and eventually tackle stuff even humans have difficulty with, Hassabis said. This program, he said, "is the first rung of the ladder."

Carnegie Mellon University computer science professor Emma Brunskill, who also wasn't part of the study, said this learning despite lack of customization "brings us closer to having general purpose agents equipped to work well at learning a large range of tasks, instead of just chess or just 'Jeopardy!'"

Innovations: Googles artificial intelligence mastermind responds to Elon Musks fears

Demis Hassabis is an impressive guy. A former child prodigy, a chess master at 13 and the founder of DeepMind Technologies, a British artificial intelligence company that Google acquired last year. Now 38, hes at the forefront of an emerging technology with an unmatched potential for good and bad.

Hassabis and his researchers published a landmark paper this week, creating an algorithm that learns in a human-like manner. Observers of artificial intelligence have warned that advances like this are a step toward potentially destroying civilization.

Elon Musk, a DeepMind investor he says the better to keep an eye on them has led the charge, calling artificial intelligence mankinds greatest threat. Stephen Hawking and Bill Gates have also issued warnings.

At a news conference Tuesday Hassabis addressed Musks concerns:

Were many, many decades away from anything, any kind of technology that we need to worry about. But its good to start the conversation now and be aware of as with any new powerful technology it can be used for good or bad, Hassabis said.

He was also quick to downplay any rift with DeepMind and Musk.

Were good friends with Elon and hes been a big supporter of ours for a number of years, Hassabis said. And hes fascinated, loves the potential of artificial intelligence.

Elon Musk loves artificial intelligence? Never wouldve guessed that.

Related: Google's breakthrough in artificial intelligence, and what it means for self-driving cars


Artificial intelligence program teaches itself to play Atari games, and it can beat your high score

Artificial intelligence program deep Q-network teaches itself to play classic Atari games like Space Invaders. Video courtesy Google DeepMind with permission from Square Enix Ltd.

A new artificial intelligence program from Google DeepMind has taught itself how to play classic Atari 2600 games. And it can probably beat your high score.

"Deep Q-network, or DQN, can play 49 Atari games right out of the box," says Demis Hassabis, world-renowned gamer and founder of DeepMind. Overall, it performed as well as a professional human video game tester, according to a study published this week in Nature. On more than half of the games, it scored more than 75 percent of the human score.

This isn't the first game-playing A.I. program. IBM supercomputer Deep Blue defeated world chess champion Garry Kasparov in 1997. In 2011, an artificial intelligence computer system named Watson won a game of Jeopardy against champions Ken Jennings and Brad Rutter.

Watson and Deep Blue were great achievements, but those computers were loaded with all the chess moves and trivia knowledge they could handle, Hassabis said in a news conference Tuesday. Essentially, they were trained, he explained.

But in this experiment, designers didn't tell DQN how to win the games. They didn't even tell it how to play or what the rules were, Hassabis said.

"(Deep Q-network) learns how to play from the ground up," Hassabis said. "The idea is that these types of systems are more human-like in the way they learn. Our brains make models that allow us to learn and navigate the world. That's exactly the type of system we're trying to design here."

To test DQN's ability to learn and adapt, Hassabis and his team at DeepMind tried Atari 2600 games from the late 1970s and early 1980s. Atari games had the right level of complexity for the DQN software, Hassabis said. The software agent had access to the last four images on the screen and its score.

By looking at the pixels on the screen and moving the controls, DQN taught itself to play over the course of several weeks, said Vlad Mnih, one of the authors on the paper, at Tuesday's conference. It's a process called deep reinforcement learning, Mnih said, where the computer learns through trial and error, the same way humans and other animals learn.
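The trial-and-error learning Mnih describes can be illustrated with the tabular form of Q-learning, the classical algorithm that DQN extends with a deep network. This is a minimal sketch of the value update only, not DeepMind's implementation; the state and action names are made up for illustration:

```python
from collections import defaultdict

def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.99):
    """One trial-and-error update: nudge Q(state, action) toward the
    observed reward plus the discounted value of the best next action."""
    best_next = max(q[next_state].values()) if q[next_state] else 0.0
    target = reward + gamma * best_next
    q[state][action] += alpha * (target - q[state][action])
    return q[state][action]

# Toy run: a single state-action pair that keeps paying off (+1) sees
# its estimated value drift upward over repeated trials.
q = defaultdict(lambda: defaultdict(float))
for _ in range(200):
    q_update(q, "s0", "fire", reward=1.0, next_state="s0")
```

The same update rule drives DQN, except the table of values is replaced by a neural network that estimates them from raw pixels.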

"We are trying to explore the space of algorithms for intelligence. We have one example of (intelligence): the human brain," Hassabis said. "We can be certain that reinforcement learning is something that works and something humans and animals use to learn."


AI masters 49 Atari 2600 games without instructions

The venerable Atari 2600.

Artificial intelligence, machines and software with the ability to think for themselves, can be used for a variety of applications ranging from military technology to everyday services like automated telephone systems. However, none of the systems that currently exist exhibit learning abilities that match human intelligence. Recently, scientists have wondered whether an artificial agent could be given a tiny bit of human-like intelligence by modeling the algorithm on aspects of the primate neural system.

Using a bio-inspired system architecture, scientists have created a single algorithm that is actually able to develop problem-solving skills when presented with challenges that can stump some humans. And then they immediately put it to use learning a set of classic video games.

Scientists developed the novel agent (they called it the Deep Q-network) by combining reinforcement learning with what's termed a "deep convolutional network," a layered system of artificial neural networks. Deep-Q is able to understand spatial relationships between different objects in an image, such as their distance from one another, in such a sophisticated way that it can actually re-envision the scene from a different viewpoint. This type of system was inspired by early work done on the visual cortex.
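The "deep convolutional network" mentioned above is built from layers that slide small filters over an image, producing feature maps that respond to spatial patterns. As a rough illustration of that core operation, in plain Python rather than the paper's actual architecture:

```python
def conv2d(image, kernel):
    """Slide a small kernel over a 2-D image (valid padding, stride 1),
    producing a feature map that responds wherever the kernel's pattern
    appears -- the basic operation of a convolutional layer."""
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(image), len(image[0])
    out = []
    for y in range(ih - kh + 1):
        row = []
        for x in range(iw - kw + 1):
            row.append(sum(
                image[y + dy][x + dx] * kernel[dy][dx]
                for dy in range(kh) for dx in range(kw)))
        out.append(row)
    return out

# A vertical-edge kernel fires on the boundary between dark and bright.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[-1, 1],
          [-1, 1]]
feature_map = conv2d(image, kernel)  # peaks where the edge sits
```

Stacking many such layers, each learning its own kernels, is what lets the network build up from raw pixels to game-relevant objects.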

Scientists considered tasks in which Deep-Q was able to interact with the environment through a sequence of observations, actions, and rewards, with the ultimate goal of interacting in a way that maximizes reward. Reinforcement learning sounds like a simple approach to developing artificial intelligence: after all, we have all seen that small children are able to learn from their mistakes. Yet when it comes to designing artificial intelligence, it is much trickier to ensure all the components necessary for this type of learning are actually included. As a result, artificial reinforcement learning systems are usually quite unstable.

Here, these scientists addressed previous instability issues in creating Deep-Q. One important mechanism that they specifically added to Deep-Q was experience replay. This element allows the system to store visual information about experiences and transitions, much like our memory works. For example, if a small child leaves home to go to a playground, he will still remember what home looks like while at the playground. If he is running and trips over a tree root, he will remember that bad outcome and try to avoid tree roots in the future.
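The experience replay described here can be sketched as a fixed-size memory of past transitions that learning updates are drawn from at random, rather than always from the latest moment. A minimal version (the class name and tuple layout are illustrative, not DeepMind's code):

```python
import random
from collections import deque

class ReplayBuffer:
    """Stores past transitions (state, action, reward, next_state) and
    hands back random mini-batches, so each update is spread across
    many old experiences instead of only the most recent one."""
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)  # oldest memories fall out

    def add(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        return random.sample(self.buffer, batch_size)

# Store a run of transitions, then train on a random batch of them.
buf = ReplayBuffer(capacity=1000)
for t in range(100):
    buf.add(state=t, action=t % 4, reward=1.0, next_state=t + 1)
batch = buf.sample(32)
```

Sampling at random breaks the strong correlation between consecutive frames, which is one of the instabilities the paragraph above alludes to.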

Using these abilities, Deep-Q is able to perform reinforcement learning, using rewards to continuously establish visual relationships between objects and actions within the convolutional network. Over time, it identifies visual aspects of the environment that promote good outcomes.

This bio-inspired approach is based on evidence that rewards during perceptual learning may influence the way images and sequences of events or resulting outcomes are processed within the primate visual cortex. Additionally, evidence suggests that in the mammalian brain, the hippocampus may actually support the physical realization of the processes involved in the experience replay algorithm.

It takes a few hundred tries, but the neural networks eventually figure out the rules, then later discover strategies.

Scientists tested Deep-Q's problem-solving abilities on the Atari 2600 gaming platform. Deep-Q learned not only the rules for a variety of games (49 games in total) in a range of different environments, but also the behaviors required to maximize scores. It did so with minimal prior knowledge, receiving only visual images (in pixel form) and the game score as inputs. In these experiments, the authors used the same algorithm, network architecture, and hyperparameters on each game, the exact same limitations a human player would have, given we can't swap brains out. Notably, the game genres varied from boxing to car racing, representing a tremendous range of inputs and challenges.
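That uniform setup, the same agent code for every game with only the screen and the score as inputs, can be sketched as a single control loop. Everything below (the environment class, the policy, the epsilon value) is a made-up stand-in for illustration, not the study's code:

```python
import random

def play_episode(env, policy, epsilon=0.1):
    """One generic episode: the agent sees only an observation (the raw
    screen, in the real system) plus a per-step reward (the score), and
    the identical loop and settings are reused for every game."""
    observation = env.reset()
    total_reward = 0.0
    done = False
    while not done:
        if random.random() < epsilon:        # occasionally explore
            action = random.choice(env.actions)
        else:                                # otherwise follow the policy
            action = policy(observation)
        observation, reward, done = env.step(action)
        total_reward += reward
    return total_reward

# A stand-in "game" that ends after three steps, each worth one point.
class ToyEnv:
    actions = [0, 1]
    def reset(self):
        self.t = 0
        return self.t
    def step(self, action):
        self.t += 1
        return self.t, 1.0, self.t >= 3

total = play_episode(ToyEnv(), policy=lambda obs: 0)
```

Swapping in a different game means swapping the environment, nothing in the agent's loop changes, which is the sense in which the same algorithm played all 49 titles.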


Innovations: 5 classic Atari games that totally stump Google's artificial intelligence algorithm

Computers are better than us at chess, Jeopardy and now plenty of Atari games, following Google's breakthrough in artificial intelligence. But there are still a few games where Google's impressive new algorithm is largely clueless. Of the 49 games it attempted, here are the five it struggled with the most:

5. Asteroids (1979): Google's professional human game testers did 93 percent better than the algorithm.

4. Frostbite (1983): To win this game you jump on ice blocks to help build igloos, which the algorithm couldn't master. The human testers did 94 percent better than Google's algorithm.

3. Gravitar (1982): Human testers were 95 percent better.

2. Private Eye (1983): Google's humans were 98 percent better at this than the algorithm.

1. Montezuma's Revenge (1984): Google's algorithm couldn't score a single point, making this its very worst performance of the 49 games it tried.

Related: 22 Atari games where humans are no match for Google's algorithm

Google's breakthrough in artificial intelligence, and what it means for self-driving cars
