{"id":176524,"date":"2017-02-10T03:14:40","date_gmt":"2017-02-10T08:14:40","guid":{"rendered":"http:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/googles-deepmind-pits-ai-against-ai-to-see-if-they-fight-or-cooperate-the-verge\/"},"modified":"2017-02-10T03:14:40","modified_gmt":"2017-02-10T08:14:40","slug":"googles-deepmind-pits-ai-against-ai-to-see-if-they-fight-or-cooperate-the-verge","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/ai\/googles-deepmind-pits-ai-against-ai-to-see-if-they-fight-or-cooperate-the-verge\/","title":{"rendered":"Google&#8217;s DeepMind pits AI against AI to see if they fight or cooperate &#8211; The Verge"},"content":{"rendered":"<p>In the future, it's likely that many aspects of human society will be controlled, either partly or wholly, by artificial intelligence. AI computer agents could manage systems from the quotidian (e.g., traffic lights) to the complex (e.g., a nation's whole economy), but leaving aside the question of whether they can do their jobs well, there is another challenge: will these agents be able to play nice with one another? What happens if one AI's aims conflict with another's? Will they fight, or work together?<\/p>\n<p>Google's AI subsidiary DeepMind has been exploring this problem in a new study published today. The company's researchers decided to test how AI agents interacted with one another in a series of \"social dilemmas.\" This is a rather generic term for situations in which individuals can profit from being selfish, but where everyone loses if everyone is selfish. The most famous example of this is the prisoner's dilemma, where two individuals can choose to betray one another for a prize, but lose out if both choose this option.
<\/p>\n<p>As explained in a blog post from DeepMind, the company's researchers tested how AI agents would perform in these sorts of situations by dropping them into a pair of very basic video games.<\/p>\n<p>In the first game, Gathering, two players have to collect apples from a central pile. They have the option of tagging the other player with a laser beam, temporarily removing them from the game and giving the first player a chance to collect more apples. You can see a sample of this gameplay below:<\/p>\n<p>In the second game, Wolfpack, two players have to hunt a third in an environment filled with obstacles. Points are claimed not just by the player that captures the prey, but by all players near the prey when it's captured. You can see a gameplay sample of this below:<\/p>\n<p>What the researchers found was interesting, but perhaps not surprising: the AI agents altered their behavior, becoming more cooperative or antagonistic depending on the context.<\/p>\n<p>For example, in the Gathering game, when apples were in plentiful supply, the agents didn't really bother zapping one another with the laser beam. But when stocks dwindled, the amount of zapping increased. Perhaps most interesting, when a more computationally powerful agent was introduced into the mix, it tended to zap the other player regardless of how many apples there were. That is to say, the cleverer AI decided it was better to be aggressive in all situations.<\/p>\n<p>AI agents varied their strategy based on the rules of the game<\/p>\n<p>Does that mean the AI agent thinks being combative is the best strategy? Not necessarily. The researchers hypothesize that the increase in zapping behavior by the more advanced AI was simply because the act of zapping itself is computationally challenging.
The agent has to aim its weapon at the other player and track their movement, activities which require more computing power and which take up valuable apple-gathering time. Unless the agent knows these strategies will pay off, it's easier just to cooperate.<\/p>\n<p>Conversely, in the Wolfpack game, the cleverer the AI agent, the more likely it was to cooperate with other players. As the researchers explain, this is because learning to work with the other player to track and herd the prey requires more computational power.<\/p>\n<p>The results of the study, then, show that the behavior of AI agents changes based on the rules they're faced with. If those rules reward aggressive behavior (\"Zap that player to get more apples!\"), the AI will be more aggressive; if they reward cooperative behavior (\"Work together and you both get points!\"), it will be more cooperative.<\/p>\n<p>That means part of the challenge in controlling AI agents in the future will be making sure the right rules are in place. As the researchers conclude in their blog post: \"As a consequence [of this research], we may be able to better understand and control complex multi-agent systems such as the economy, traffic systems, or the ecological health of our planet - all of which depend on our continued cooperation.\"<\/p>\n<p>See the original post:<\/p>\n<p><a target=\"_blank\" rel=\"nofollow\" href=\"http:\/\/www.theverge.com\/2017\/2\/9\/14558418\/ai-deepmind-social-dilemma-study\" title=\"Google's DeepMind pits AI against AI to see if they fight or cooperate - The Verge\">Google's DeepMind pits AI against AI to see if they fight or cooperate - The Verge<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>In the future, it's likely that many aspects of human society will be controlled, either partly or wholly, by artificial intelligence. 
AI computer agents could manage systems from the quotidian (e.g., traffic lights) to the complex (e.g., a nation's whole economy), but leaving aside the question of whether they can do their jobs well, there is another challenge: will these agents be able to play nice with one another? <a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/ai\/googles-deepmind-pits-ai-against-ai-to-see-if-they-fight-or-cooperate-the-verge\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":7,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[187743],"tags":[],"class_list":["post-176524","post","type-post","status-publish","format-standard","hentry","category-ai"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/176524"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/7"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=176524"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/176524\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=176524"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=176524"},{"taxonomy":"post_tag","embeddable":true,"href
":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=176524"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}