{"id":182311,"date":"2017-03-08T13:34:14","date_gmt":"2017-03-08T18:34:14","guid":{"rendered":"http:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/why-not-all-forms-of-artificial-intelligence-are-equally-scary-vox\/"},"modified":"2017-03-08T13:34:14","modified_gmt":"2017-03-08T18:34:14","slug":"why-not-all-forms-of-artificial-intelligence-are-equally-scary-vox","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/superintelligence\/why-not-all-forms-of-artificial-intelligence-are-equally-scary-vox\/","title":{"rendered":"Why not all forms of artificial intelligence are equally scary &#8211; Vox"},"content":{"rendered":"<p><p>    How worried should we be about artificial intelligence?  <\/p>\n<p>    Recently,     I asked a number of AI researchers this question. The    responses I received vary considerably; it turns out there is    not much agreement about the risks or implications.  <\/p>\n<p>    Non-experts are even more confused about AI and its attendant    challenges. Part of the problem is that artificial    intelligence is an ambiguous term. By AI one can mean a Roomba    vacuum cleaner, a self-driving truck, or one of those    death-dealing Terminator robots.  <\/p>\n<p>    There are, generally speaking, three forms of AI: weak AI,    strong AI, and superintelligence. At present, only weak AI    exists. Strong AI and superintelligence are theoretically    possible, even probable, but we're not there yet.  <\/p>\n<p>    Understanding the differences between these forms of AI is    essential to analyzing the potential risks and benefits of this    technology. There is a whole range of concerns that correspond    to different kinds of AI, some more worrisome than others.  <\/p>\n<p>    To help make sense of this, here are some key distinctions you    need to know.  <\/p>\n<p>    Artificial Narrow Intelligence (often called weak AI) is an    algorithmic or specialized intelligence. 
This has existed for    several years. Think of the     Deep Blue machine that beat world champion Garry Kasparov    in chess. Or Siri on your iPhone. Or even speech recognition    and processing software. These are forms of nonsentient    intelligence with a relatively narrow focus.  <\/p>\n<p>    It might be too much to call weak AI a form of intelligence at    all. Weak AI is smart and can outperform humans at a single    task, but that's all it can do. It's not self-aware or    goal-driven, and so it doesn't present any apocalyptic threats.    But to the extent that weak AI controls vital software that    keeps our civilization humming along, our dependence upon it    does create some vulnerabilities. George Dvorsky, a Canadian    bioethicist and futurist,     explores some of these issues here.  <\/p>\n<p>    Then there's Artificial General Intelligence, or strong AI;    this refers to a general-purpose system, or what you might call    a \"thinking machine.\" Artificial General Intelligence, in    theory, would be as smart &#8211; or smarter &#8211; than a human being at    a wide range of tasks; it would be able to think, reason, and    solve complex problems in myriad ways.  <\/p>\n<p>    It's debatable whether strong AI could be called conscious;    at the very least, it would demonstrate behaviors typically    associated with consciousness &#8211; commonsense reasoning, natural    language understanding, creativity, strategizing, and generally    intelligent action.  <\/p>\n<p>    Artificial General Intelligence does not yet exist. A common    estimate is that we're perhaps 20 years away from this    breakthrough. But nearly everyone concedes that it's coming.    Organizations like the Allen    Institute for Artificial Intelligence (founded by Microsoft    co-founder Paul Allen) and Google's DeepMind project, along    with many others across the world, are making incremental    progress.  
<\/p>\n<p>    There are surely more complications involved with this form of    AI, but it's not the stuff of dystopian science fiction. Strong    AI would aim at a general-purpose, human-level intelligence;    unless it undergoes rapid recursive self-improvement, it's    unlikely to pose a catastrophic threat to human life.  <\/p>\n<p>    The major challenges with strong AI are economic and cultural:    job loss due to automation, economic displacement, privacy and    data management, software vulnerabilities, and militarization.  <\/p>\n<p>    Finally, there's Artificial Superintelligence. Oxford    philosopher Nick Bostrom defined this form of AI in a     2014 interview with Vox as \"any intellect that radically    outperforms the best human minds in every field, including    scientific creativity, general wisdom and social skills.\" When    people fret about the hazards of AI, this is what they're    talking about.  <\/p>\n<p>    A truly superintelligent machine would, in Bostrom's words,    \"become extremely powerful to the point of being able to shape    the future according to its preferences.\" As yet, we're nowhere    near a fully developed superintelligence. But the research is    underway, and the incentives for advancement are too great to    constrain.  <\/p>\n<p>    Economically, the incentives are obvious: The first company to    produce artificial superintelligence will profit enormously.    Politically and militarily, the potential applications of such    technology are infinite. Nations, if they don't see this    already as a winner-take-all scenario, are at the very least    eager to be first. In other words, the technological arms race    is afoot.  <\/p>\n<p>    The question, then, is how far away from this technology are    we, and what are the implications for human life?  <\/p>\n<p>    For his book     Superintelligence, Bostrom surveyed the    top experts in the field. 
One of the questions he asked was,    \"by what year do you think there is a 50 percent probability    that we will have human-level machine intelligence?\" The median    answer to that was somewhere between 2040 and 2050. That, of    course, is just a prediction, but it's an indication of how    close we might be.  <\/p>\n<p>    It's hard to know when an artificial superintelligence will    emerge, but we can say with relative confidence that it will at    some point. If, in fact, intelligence is a matter of    information processing, and if we assume that we will continue    to build computational systems at greater and greater    processing speeds, then it seems inevitable that we will create    an artificial superintelligence. Whether we're 50 or 100 or 300    years away, we are likely to cross the threshold eventually.  <\/p>\n<p>    When it does happen, our world will change in ways we can't    possibly predict.  <\/p>\n<p>    We cannot assume that a vastly superior intelligence is    containable; it would likely work to improve itself, to enhance    its capabilities. (This is what Bostrom calls the \"control    problem.\") A hyper-intelligent machine might also achieve    self-awareness, in which case it would begin to develop its own    ends, its own ambitions. The hope that such machines will    remain instruments of human production is just that &#8211; a hope.  <\/p>\n<p>    If an artificial superintelligence does become goal-driven, it    might develop goals incompatible with human well-being. Or, in    the case of Artificial General Intelligence, it may pursue    compatible goals via incompatible means. The canonical thought    experiment here was developed by Bostrom. Let's call it the    \"paperclip scenario.\"  <\/p>\n<p>    Here's the short version: Humans create an AI designed to    produce paperclips. It has one utility function &#8211; to maximize    the number of paperclips in the universe. 
Now, if that machine    were to undergo an intelligence explosion, it would likely work    to optimize its single function &#8211; producing paperclips. Such a    machine would continually innovate new ways to make more    paperclips. Eventually, Bostrom says, that machine might decide    that converting all of the matter it can &#8211; including people &#8211;    into paperclips is the best way to achieve its singular goal.  <\/p>\n<p>    Admittedly, this sounds a bit stupid. But it's not, and it only    appears so when you think about it from the perspective of a    moral agent. Human behavior is guided and constrained by values    &#8211; self-interest, compassion, greed, love, fear, etc. An    Artificial General Intelligence, presumably, would be driven only    by its original goal, and that could lead to dangerous and    unanticipated consequences.  <\/p>\n<p>    Again, the paperclip scenario applies to strong AI, not    superintelligence. The behavior of a superintelligent    machine would be even less predictable. We have no idea what    such a being would want, or why it would want it, or how it    would pursue the things it wants. What we can be reasonably    sure of is that it will find human needs less important than    its own needs.  <\/p>\n<p>    Perhaps it's better to say that it will be indifferent to human    needs, just as human beings are indifferent to the needs of    chimps or alligators. It's not that human beings are committed    to destroying chimps and alligators; we just happen to do so    when the pursuit of our goals conflicts with the wellbeing of    less intelligent creatures.  <\/p>\n<p>    And this is the real fear that people like Bostrom have of    superintelligence. \"We have to prepare for the inevitable,\"    he told me recently, \"and take seriously the possibility    that things could go radically wrong.\"  
<\/p>\n<p>See the original post here:<\/p>\n<p><a target=\"_blank\" rel=\"nofollow\" href=\"http:\/\/www.vox.com\/science-and-health\/2017\/3\/8\/14830108\/artificial-intelligence-science-technology-robots-singularity-bostrom\" title=\"Why not all forms of artificial intelligence are equally scary - Vox\">Why not all forms of artificial intelligence are equally scary - Vox<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p> How worried should we be about artificial intelligence?  <a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/superintelligence\/why-not-all-forms-of-artificial-intelligence-are-equally-scary-vox\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":8,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[187765],"tags":[],"class_list":["post-182311","post","type-post","status-publish","format-standard","hentry","category-superintelligence"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/182311"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/8"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=182311"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/182311\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json
\/wp\/v2\/media?parent=182311"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=182311"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=182311"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}