Google Test Of AI’s Killer Instinct Shows We Should Be Very Careful – Gizmodo

If climate change, nuclear weapons or Donald Trump don't kill us first, there's always artificial intelligence just waiting in the wings. It's been a long-standing worry that when AI gains a certain level of autonomy, it will see no use for humans or even perceive them as a threat. A new study by Google's DeepMind lab may or may not ease those fears.

The researchers at DeepMind have been working with two games to test whether neural networks are more inclined to compete or to cooperate. They hope this research could lead to AI that is better at working with other AI in situations with imperfect information.

In the first game, two AI agents (red and blue) were tasked with gathering the most apples (green) in a rudimentary 2D graphical environment. Each agent had the option of tagging the other with a laser blast that would temporarily remove it from the game.
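
To make that incentive structure concrete, here's a rough Python sketch of how a round of the apple-gathering game might be scored. To be clear, this is not DeepMind's code: the respawn probability and tag-out duration are made-up stand-ins. The point is the trade-off it sets up, where gathering pays while tagging earns nothing directly and only knocks your rival out of the game for a while.

```python
import random

# Toy sketch of the gathering incentives described above -- not DeepMind's
# actual environment. APPLE_RESPAWN_PROB and TAG_OUT_STEPS are hypothetical
# parameters chosen purely for illustration.
APPLE_RESPAWN_PROB = 0.05   # chance an eaten apple grows back each step
TAG_OUT_STEPS = 25          # steps a tagged agent sits out of the game

class GatheringSketch:
    def __init__(self, num_apples=10):
        self.apples = num_apples
        self.frozen = {"red": 0, "blue": 0}  # steps left on the sidelines

    def step(self, actions):
        """actions maps an agent name to either 'gather' or 'tag'."""
        rewards = {"red": 0, "blue": 0}
        for agent, action in actions.items():
            if self.frozen[agent] > 0:       # tagged agents skip their turn
                self.frozen[agent] -= 1
                continue
            if action == "gather" and self.apples > 0:
                self.apples -= 1
                rewards[agent] = 1           # +1 per apple collected
            elif action == "tag":            # tagging pays nothing directly;
                other = "blue" if agent == "red" else "red"
                self.frozen[other] = TAG_OUT_STEPS  # it just benches the rival
        if random.random() < APPLE_RESPAWN_PROB:
            self.apples += 1                 # apples slowly regrow
        return rewards
```

Dialing APPLE_RESPAWN_PROB down is one way to model the scarcity that, as described below, made the agents trigger-happy.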

The game was run thousands of times, and the researchers found that red and blue were willing to just gather apples while they were abundant. But as the little green dots became more scarce, the dueling agents were more likely to light each other up with some ray-gun blasts to get ahead. The accompanying video doesn't really teach us much, but it's cool to look at.

Using a smaller network, the researchers found a greater likelihood of co-existence. But with a larger, more complex network, the AI was quicker to start sabotaging the other player and hoard the apples for itself.

In the second, more optimistic game, called Wolfpack, the agents played wolves attempting to capture prey. Greater rewards were offered when the wolves were in close proximity during a successful capture. This incentivized the agents to work together rather than heading off to the other side of the screen to pull a lone-wolf attack on the prey. The larger network was much quicker to understand that in this situation, cooperation was the optimal way to complete the task.
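
Again, a rough sketch rather than DeepMind's implementation: the capture radius and reward scaling below are hypothetical, but they capture the payoff rule described above, where every wolf close to the prey at the moment of capture shares in the reward.

```python
import math

# Toy sketch of the Wolfpack payoff described above -- not DeepMind's
# actual code. CAPTURE_RADIUS and the reward scaling are hypothetical.
CAPTURE_RADIUS = 3.0

def wolfpack_rewards(wolf_positions, prey_position):
    """Per-wolf rewards at the moment the prey is captured.

    Every wolf within CAPTURE_RADIUS of the prey shares the reward, and
    the payout grows with pack size, so sticking together beats a
    lone-wolf attack.
    """
    nearby = [w for w, pos in wolf_positions.items()
              if math.dist(pos, prey_position) <= CAPTURE_RADIUS]
    reward_each = len(nearby)  # bigger pack at the kill, bigger payout
    return {w: (reward_each if w in nearby else 0) for w in wolf_positions}

# Both wolves close by at capture: each earns 2. A lone capture pays 1.
print(wolfpack_rewards({"wolf_a": (0, 0), "wolf_b": (1, 1)},
                       prey_position=(0, 1)))
```

Under a rule like this, staying near the pack yields a higher expected payout than going solo, which is exactly the incentive the larger network picked up on faster.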

While all of that might seem obvious, this is vital research for the future of AI. More and more complex scenarios will be needed to understand how neural networks learn based on incentives, as well as how they react when they're missing information.

The most practical short-term application of the research is to be able to better understand and control complex multi-agent systems such as the economy, traffic systems, or the ecological health of our planet - all of which depend on our continued cooperation.

For now, DeepMind's research is focused on games with strict rules like the ones above and Go, the strategy game at which it famously beat the world's top champion. But it has recently partnered with Blizzard to start learning StarCraft II, a more complex game in which reading an opponent's motivations can be quite tricky. Joel Leibo, the lead author of the paper, tells Bloomberg, "Going forward it would be interesting to equip agents with the ability to reason about other agents' beliefs and goals."

Let's just be glad the DeepMind team is taking things very slowly, methodically learning what does and does not motivate AI to start blasting everyone around it.

[DeepMind Blog via Bloomberg]
