{"id":177395,"date":"2017-02-14T11:37:07","date_gmt":"2017-02-14T16:37:07","guid":{"rendered":"http:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/artificial-intelligence-is-not-a-threatyet-scientific-american\/"},"modified":"2017-02-14T11:37:07","modified_gmt":"2017-02-14T16:37:07","slug":"artificial-intelligence-is-not-a-threatyet-scientific-american","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/superintelligence\/artificial-intelligence-is-not-a-threatyet-scientific-american\/","title":{"rendered":"Artificial Intelligence Is Not a ThreatYet &#8211; Scientific American"},"content":{"rendered":"<p><p>    In 2014 SpaceX CEO Elon Musk tweeted: Worth reading    Superintelligence by Bostrom. We need to be super careful with    AI. Potentially more dangerous than nukes. That same year    University of Cambridge cosmologist Stephen Hawking told the    BBC: The development of full artificial intelligence could    spell the end of the human race. Microsoft co-founder Bill    Gates also cautioned: I am in the camp that is concerned about    super intelligence.  <\/p>\n<p>    How the AI apocalypse might unfold was outlined by computer    scientist Eliezer Yudkowsky in a paper in the 2008 book    Global Catastrophic Risks: How likely is it that AI    will cross the entire vast gap from amoeba to village idiot,    and then stop at the level of human genius? His answer: It    would be physically possible to build a brain that computed a    million times as fast as a human brain.... If a human mind were    thus accelerated, a subjective year of thinking would be    accomplished for every 31 physical seconds in the outside    world, and a millennium would fly by in eight-and-a-half    hours. Yudkowsky thinks that if we don't get on top of this    now it will be too late: The AI runs on a different timescale    than you do; by the time your neurons finish thinking the words    I should do something you have already lost.  <\/p>\n<p>    The paradigmatic example is University of Oxford philosopher    Nick Bostrom's thought experiment of the so-called paperclip    maximizer presented in his Superintelligence book: An AI    is designed to make paperclips, and after running through its    initial supply of raw materials, it utilizes any available    atoms that happen to be within its reach, including humans. As    he described in a 2003 paper, from there it starts    transforming first all of earth and then increasing portions of    space into paperclip manufacturing facilities. Before long,    the entire universe is made up of paperclips and paperclip    makers.  <\/p>\n<p>    I'm skeptical. First, all such doomsday scenarios involve a    long sequence of if-then contingencies, a failure of which at    any point would negate the apocalypse. University of West    England Bristol professor of electrical engineering Alan    Winfield put it this way in a 2014 article: If we    succeed in building human equivalent AI and if that AI    acquires a full understanding of how it works, and if it    then succeeds in improving itself to produce super-intelligent    AI, and if that super-AI, accidentally or maliciously,    starts to consume resources, and if we fail to pull the    plug, then, yes, we may well have a problem. The risk, while    not impossible, is improbable.  <\/p>\n<p>    Second, the development of AI has been much slower than    predicted, allowing time to build in checks at each stage. As    Google executive chairman Eric Schmidt said in response to Musk    and Hawking: Don't you think humans would notice this    happening? And don't you think humans would then go about    turning these computers off? Google's own DeepMind has    developed the concept of an AI off switch, playfully described    as a big red button to be pushed in the event of an attempted    AI takeover. As Baidu vice president Andrew Ng put it (in a jab    at Musk), it would be like worrying about overpopulation on    Mars when we have not even set foot on the planet yet.  <\/p>\n<p>    Third, AI doomsday scenarios are often predicated on a false    analogy between natural intelligence and artificial    intelligence. As Harvard University experimental    psychologist Steven Pinker elucidated in his answer to the 2015    Edge.org Annual Question What Do You Think about Machines That    Think?: AI dystopias project a parochial alpha-male    psychology onto the concept of intelligence. They assume that    superhumanly intelligent robots would develop goals like    deposing their masters or taking over the world. It is equally    possible, Pinker suggests, that artificial intelligence will    naturally develop along female lines: fully capable of solving    problems, but with no desire to annihilate innocents or    dominate the civilization.  <\/p>\n<p>    Fourth, the implication that computers will want to do    something (like convert the world into paperclips) means AI has    emotions, but as science writer Michael Chorost notes, the    minute an A.I. wants anything, it will live in a    universe with rewards and punishmentsincluding punishments    from us for behaving badly.  <\/p>\n<p>    Given the zero percent historical success rate of apocalyptic    predictions, coupled with the incrementally gradual development    of AI over the decades, we have plenty of time to build in    fail-safe systems to prevent any such AI apocalypse.  <\/p>\n<p><!-- Auto Generated --><\/p>\n<p>See the rest here:<\/p>\n<p><a target=\"_blank\" rel=\"nofollow\" href=\"https:\/\/www.scientificamerican.com\/article\/artificial-intelligence-is-not-a-threat-mdash-yet\/\" title=\"Artificial Intelligence Is Not a ThreatYet - Scientific American\">Artificial Intelligence Is Not a ThreatYet - Scientific American<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p> In 2014 SpaceX CEO Elon Musk tweeted: Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes <a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/superintelligence\/artificial-intelligence-is-not-a-threatyet-scientific-american\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":3,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[187765],"tags":[],"class_list":["post-177395","post","type-post","status-publish","format-standard","hentry","category-superintelligence"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/177395"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=177395"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/177395\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=177395"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=177395"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=177395"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}