AI masters 49 Atari 2600 games without instructions

The venerable Atari 2600.

Artificial intelligence, machines and software with the ability to think for themselves, can be used for a variety of applications ranging from military technology to everyday services like automated telephone systems. However, none of the systems that currently exist exhibit learning abilities that match human intelligence. Recently, scientists have wondered whether an artificial agent could be given a bit of human-like intelligence by modeling its algorithm on aspects of the primate neural system.

Using a bio-inspired system architecture, scientists have created a single algorithm that is actually able to develop problem-solving skills when presented with challenges that can stump some humans. And then they immediately put it to use learning a set of classic video games.

Scientists developed the novel agent (they called it the Deep Q-network), one that combined reinforcement learning with what's termed a "deep convolutional network," a layered system of artificial neural networks. Deep-Q is able to understand spatial relationships between different objects in an image, such as their distance from one another, in such a sophisticated way that it can actually re-envision the scene from a different viewpoint. This type of system was inspired by early work done on the visual cortex.
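To give a sense of the building block involved, here is a toy illustration (not the Deep Q-network's actual architecture) of what a single convolutional layer does: it slides a small filter across an image, applying the same feature detector at every spatial position. The image, filter, and function names below are illustrative assumptions; in this sketch a simple edge filter lights up wherever a dark region meets a bright one.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide the kernel over the image; each output value is the
    filter's response at that spatial position."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.zeros((5, 6))
image[:, 3:] = 1.0                     # bright region on the right half
edge_filter = np.array([[-1.0, 1.0]])  # responds to left-to-right brightness jumps

response = conv2d(image, edge_filter)
# The response is nonzero only along the column where dark meets bright.
```

Stacking many such layers, each feeding the next, is what lets a deep convolutional network build up the spatial relationships described above from raw pixels.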

Scientists considered tasks in which Deep-Q was able to interact with the environment through a sequence of observations, actions, and rewards, with an ultimate goal of interacting in a way that maximizes reward. Reinforcement learning systems sound like a simple approach to developing artificial intelligence; after all, we have all seen that small children are able to learn from their mistakes. Yet when it comes to designing artificial intelligence, it is much trickier to ensure all the components necessary for this type of learning are actually included. As a result, artificial reinforcement learning systems are usually quite unstable.
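The observe-act-reward loop above can be sketched with tabular Q-learning on a toy environment. Everything here is an illustrative assumption, not the paper's method: the corridor environment, the lookup table (which the Deep Q-network replaces with a convolutional network over pixels), and the hyperparameter values.

```python
import random

N_STATES = 5          # corridor positions 0..4; reaching position 4 pays a reward
ACTIONS = [-1, +1]    # move left or right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.3

# Q-table: estimated long-term reward for each (state, action) pair.
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment: clamp motion to the corridor; reward 1.0 only at the right end."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

random.seed(0)
for episode in range(200):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward
        # reward + discounted best future value.
        target = reward + GAMMA * max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (target - q[(state, action)])
        state = nxt
```

After training, the table prefers moving right from every non-terminal state, because rewards have propagated backward through the updates. The instability the article mentions arises when the lookup table is replaced by a neural network trained on consecutive, highly correlated observations.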

Here, these scientists addressed previous instability issues in creating Deep-Q. One important mechanism they specifically added to Deep-Q was experience replay. This element allows the system to store visual information about experiences and transitions, much like our memory works. For example, if a small child leaves home to go to a playground, he will still remember what home looks like while at the playground. If he is running and trips over a tree root, he will remember that bad outcome and try to avoid tree roots in the future.
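A minimal sketch of an experience-replay buffer, assuming the standard formulation: store (state, action, reward, next state) transitions as they happen, then sample random minibatches for learning, so that updates are decorrelated from the order in which experiences occurred. The class and transition names below are illustrative, not taken from the paper's code.

```python
import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity):
        # Oldest experiences are evicted once capacity is reached.
        self.buffer = deque(maxlen=capacity)

    def store(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # Uniform random sampling breaks the temporal correlation
        # between consecutive transitions.
        return random.sample(list(self.buffer), batch_size)

    def __len__(self):
        return len(self.buffer)

# Usage: remember tripping over the tree root, and revisit it later.
buf = ReplayBuffer(capacity=1000)
buf.store("at_home", "run", 0.0, "near_tree")
buf.store("near_tree", "run", -1.0, "tripped")
batch = buf.sample(2)
```

Because the buffer is replayed many times, a single bad experience, like the tree root, can inform many learning updates rather than just one.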

Using these abilities, Deep-Q is able to perform reinforcement learning, using rewards to continuously establish visual relationships between objects and actions within the convolutional network. Over time, it identifies visual aspects of the environment that promote good outcomes.
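Putting replay and reward-driven learning together: each replayed transition yields a learning target of the form reward plus the discounted best value of the next state, the standard Q-learning target. The tiny Q-table and hand-written transitions below are illustrative stand-ins for the convolutional network and pixel observations the article describes.

```python
GAMMA = 0.9  # discount factor for future reward

# Toy value estimates for two states and two actions.
q = {("s1", "left"): 0.0, ("s1", "right"): 0.0,
     ("s2", "left"): 0.0, ("s2", "right"): 2.0}

# Transitions drawn from a replay buffer: (state, action, reward, next_state).
replayed = [("s1", "right", 1.0, "s2"),
            ("s1", "left", 0.0, "s1")]

for state, action, reward, next_state in replayed:
    # Target: observed reward plus discounted best value of the next state.
    target = reward + GAMMA * max(q[(next_state, a)] for a in ("left", "right"))
    q[(state, action)] += 0.5 * (target - q[(state, action)])
```

Actions that led toward valuable states see their estimates rise, which is how, over many replays, the network comes to favor the visual situations that promote good outcomes.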

This bio-inspired approach is based on evidence that rewards during perceptual learning may influence the way images and sequences of events or resulting outcomes are processed within the primate visual cortex. Additionally, evidence suggests that in the mammalian brain, the hippocampus may actually support the physical realization of the processes involved in the experience replay algorithm.

It takes a few hundred tries, but the neural networks eventually figure out the rules, then later discover strategies.

Scientists tested Deep-Q's problem-solving abilities on the Atari 2600 gaming platform. Deep-Q learned not only the rules for a variety of games (49 games in total) in a range of different environments, but also the behaviors required to maximize scores. It did so with minimal prior knowledge, receiving only visual images (in pixel form) and the game score as inputs. In these experiments, the authors used the same algorithm, network architecture, and hyperparameters on each game, the exact same limitations a human player would have, given we can't swap brains out. Notably, these game genres varied from boxing to car racing, representing a tremendous range of inputs and challenges.
