DeepMind is Teaching AIs How to Manage Real-World Tasks Through Gaming – Futurism

In Brief
Google's DeepMind Labs has partnered with Blizzard Entertainment to release an application programming interface that enables artificial intelligence researchers to develop their own AI agents for playing StarCraft II. The hope is that the release will spur innovation in deep reinforcement learning and related areas of AI research.

Glorious Combat Is Upon Us

Last year, Google's DeepMind announced a partnership with Blizzard Entertainment to develop and test artificial intelligence (AI) agents in the popular real-time strategy game StarCraft II. Now, DeepMind has released a set of tools they're calling the StarCraft II Learning Environment (SC2LE) to test their agents against human competitors, as well as to enable researchers to develop their own agents for the game.

"Testing our agents in games that are not specifically designed for AI research, and where humans play well, is crucial to benchmark agent performance," DeepMind's team wrote in a blog post. The large pool of online StarCraft II players will provide a huge variety of extremely talented opponents from which the AI can learn.

Details of DeepMind's research were published in a paper alongside the released toolset, which includes a machine learning API; a dataset of game replays; an open-source version of PySC2, the Python component of SC2LE; and more.
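For a concrete sense of how the PySC2 interface is used, the sketch below shows a minimal scripted agent. It follows the agent pattern documented in the PySC2 repository (base_agent.BaseAgent, actions.FUNCTIONS.no_op), though the exact class names and signatures should be treated as assumptions to check against whichever PySC2 version you install.

    # Minimal PySC2 agent sketch: the environment calls step() once per game
    # step with an observation, and the agent returns an action. This agent
    # issues a no-op every step, so it observes the game but never acts.
    from pysc2.agents import base_agent
    from pysc2.lib import actions

    class IdleAgent(base_agent.BaseAgent):
        """An agent that watches the game state and always returns no-op."""

        def step(self, obs):
            super(IdleAgent, self).step(obs)
            # obs.observation exposes feature layers, available actions, etc.
            return actions.FUNCTIONS.no_op()

Such an agent can then be run against one of the bundled maps or mini-games from the command line, for example with python -m pysc2.bin.agent --map MoveToBeacon --agent my_module.IdleAgent (the module path here is hypothetical).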

Artificially intelligent systems have already beaten humans in a number of games, including chess and some Atari games, and DeepMind has succeeded in creating an AI that can dominate humans in the ancient Chinese game of Go.

However, StarCraft II presents a different challenge. The game is designed to be won by a single player who must successfully navigate an extremely challenging environment. AI agents have to be capable of managing sub-goals (gathering resources, building structures, remembering locations on a partially revealed map, and so on) in pursuit of a win, and taken together, these tasks challenge an agent's memory and ability to plan.

DeepMind's initial StarCraft II tests with AI agents showed that they can manage mini-games that focus on broken-down tasks, but when it came to the full game, the agents weren't so successful. "Even strong baseline agents [...] cannot win a single game against even the easiest built-in AI," according to DeepMind's blog. "If they are to be competitive, we will need further breakthroughs in deep [reinforcement learning] and related areas."

The release of the SC2LE toolset is DeepMind's way of asking the AI community for additional help in this endeavor. "Our hope is that the release of these new tools will build on the work that the AI community has already done in StarCraft, encouraging more DeepRL research and making it easier for researchers to focus on the frontiers of our field," according to the blog post.

Of course, training AI agents to excel at StarCraft II isn't done just for glory's sake. The idea is that an AI will be more capable of managing real-world tasks if it can successfully navigate a gaming environment that requires it to perform layers of computation while engaging a human opponent. In that respect, today's expert StarCraft II agent could be tomorrow's AI cashier or customer service rep.
