{"id":180981,"date":"2017-03-02T14:18:57","date_gmt":"2017-03-02T19:18:57","guid":{"rendered":"http:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/ai-scientists-gather-to-plot-doomsday-scenarios-and-solutions-bloomberg\/"},"modified":"2017-03-02T14:18:57","modified_gmt":"2017-03-02T19:18:57","slug":"ai-scientists-gather-to-plot-doomsday-scenarios-and-solutions-bloomberg","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/ai\/ai-scientists-gather-to-plot-doomsday-scenarios-and-solutions-bloomberg\/","title":{"rendered":"AI Scientists Gather to Plot Doomsday Scenarios (and Solutions &#8230; &#8211; Bloomberg"},"content":{"rendered":"<p>    Artificial intelligence boosters predict a brave new world of    flying cars and cancer cures. Detractors worry about a future    where humans are enslaved to an evil race of robot overlords.    Veteran AI scientist Eric Horvitz and Doomsday Clock guru    Lawrence Krauss, seeking a middle ground, gathered a group of    experts in the Arizona desert to discuss the worst that could    possibly happen -- and how to stop it.  <\/p>\n<p>    Their workshop took place last weekend at Arizona State    University with funding from Tesla Inc. co-founder Elon Musk    and Skype co-founder Jaan Tallinn. Officially dubbed    \"Envisioning and Addressing Adverse AI Outcomes,\" it was a    kind of AI doomsday game that organized some 40 scientists,    cyber-security experts and policy wonks into groups of    attackers -- the red team -- and defenders -- the blue team --    playing out AI-gone-very-wrong scenarios, ranging from    stock-market manipulation to global warfare.  
<\/p>\n<p>    Horvitz is optimistic -- a good thing because machine    intelligence is his life's work -- but some other, more    dystopian-minded backers of the project seemed to find his    outlook too positive when plans for this event started about    two years ago, said Krauss, a theoretical physicist who directs    ASU's Origins Project, the program running the workshop. Yet    Horvitz said that for these technologies to move forward    successfully and to earn broad public confidence, all concerns    must be fully aired and addressed.  <\/p>\n<p>    \"There is huge potential for AI to transform so many aspects of    our society in so many ways. At the same time, there are rough    edges and potential downsides, like any technology,\" said    Horvitz, managing director of Microsoft's Research Lab in    Redmond, Washington. \"To maximally gain from the upside we    also have to think through possible outcomes in more detail    than we have before and think about how we'd deal with them.\"  <\/p>\n<p>    Participants were given \"homework\" to submit entries for    worst-case scenarios. They had to be realistic -- based on    current technologies or those that appear possible -- and five    to 25 years in the future. The entrants with the \"winning\"    nightmares were chosen to lead the panels, which featured about    four experts on each of the two teams to discuss the attack and    how to prevent it.  <\/p>\n<p>    Blue team, including Launchbury, Fisher and Krauss, in the War    and Peace scenario  <\/p>\n<p>    Tessa Eztioni, Origins Project at ASU  <\/p>\n<p>    Turns out, many of these researchers can match    science-fiction writers Arthur C. Clarke and Philip K. Dick for    dystopian visions. In many cases, little imagination was    required -- scenarios like technology being used to sway    elections or new cyber attacks using    AI are being seen in the real world, or are at least    technically possible. 
Horvitz cited research that shows how to    alter the way a self-driving car sees traffic signs so that the    vehicle misreads a \"stop\" sign as \"yield.\"  <\/p>\n<p>    The possibility of intelligent, automated cyber attacks is the    one that most worries John Launchbury, who directs one of the    offices at the U.S.'s Defense Advanced Research Projects    Agency, and Kathleen Fisher, chairwoman of the computer science    department at Tufts University, who led that session. What    happens if someone constructs a cyber weapon designed to hide    itself and evade all attempts to dismantle it? Now imagine it    spreads beyond its intended target to the broader internet.    Think Stuxnet, the computer virus created to attack the Iranian    nuclear program that got out in the wild, but stealthier and    more autonomous.  <\/p>\n<p>    \"We're talking about malware on steroids that is AI-enabled,\"    said Fisher, who is an expert in programming    languages. Fisher presented her scenario under a slide    bearing the words \"What could possibly go wrong?\" which could    have also served as a tagline for the whole event.  <\/p>\n<p>    How did the defending blue team fare on that one? Not well,    said Launchbury. They argued that the advanced AI needed for an    attack would require a lot of computing power and    communication, so it would be easier to detect. But the red    team felt that it would be easy to hide behind innocuous    activities, Fisher said. For example, attackers could get    innocent users to play an addictive video game to cover up    their work.  
<\/p>\n<p>    To prevent a stock-market manipulation scenario dreamed up by    University of Michigan computer science professor Michael    Wellman, blue team members suggested treating attackers like    malware by trying to recognize them via a database of known    types of hacks. Wellman, who has been in AI for more than 30    years and calls himself an old-timer on the subject, said that    approach could be useful in finance.  <\/p>\n<p>    Beyond actual solutions, organizers hope the doomsday workshop    started conversations on what needs to happen, raised awareness    and combined ideas from different disciplines. The Origins    Project plans to make public materials from the closed-door    sessions and may design further workshops around a specific    scenario or two, Krauss said.  <\/p>\n<p>    DARPA's Launchbury hopes the presence of policy figures among    the participants will foster concrete steps, like agreements on    rules of engagement for cyber war, automated weapons and robot    troops.  <\/p>\n<p>    Krauss, chairman of the board of sponsors of the group behind    the Doomsday Clock, a symbolic measure of how close we are to    global catastrophe, said some of what he saw at the workshop    \"informed\" his thinking on whether the clock ought to shift    even closer to midnight. But don't go stocking up on canned    food and moving into a bunker in the wilderness just yet.  <\/p>\n<p>    \"Some things we think of as cataclysmic may turn out to be    just fine,\" he said.  <\/p>\n<p>Go here to read the rest: <\/p>\n<p><a target=\"_blank\" rel=\"nofollow\" href=\"https:\/\/www.bloomberg.com\/news\/articles\/2017-03-02\/ai-scientists-gather-to-plot-doomsday-scenarios-and-solutions\" title=\"AI Scientists Gather to Plot Doomsday Scenarios (and Solutions ... - Bloomberg\">AI Scientists Gather to Plot Doomsday Scenarios (and Solutions ... 
- Bloomberg<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p> Artificial intelligence boosters predict a brave new world of flying cars and cancer cures. Detractors worry about a future where humans are enslaved to an evil race of robot overlords. Veteran AI scientist Eric Horvitz and Doomsday Clock guru Lawrence Krauss, seeking a middle ground, gathered a group of experts in the Arizona desert to discuss the worst that could possibly happen -- and how to stop it.  <a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/ai\/ai-scientists-gather-to-plot-doomsday-scenarios-and-solutions-bloomberg\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":8,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[187743],"tags":[],"class_list":["post-180981","post","type-post","status-publish","format-standard","hentry","category-ai"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/180981"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/8"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=180981"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/180981\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=180981"}],"wp:term":[{"taxonomy":"category","emb
eddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=180981"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=180981"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}