{"id":194930,"date":"2017-05-26T04:04:16","date_gmt":"2017-05-26T08:04:16","guid":{"rendered":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/when-artificial-intelligence-gets-too-clever-by-half-undark-magazine\/"},"modified":"2017-05-26T04:04:16","modified_gmt":"2017-05-26T08:04:16","slug":"when-artificial-intelligence-gets-too-clever-by-half-undark-magazine","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/artificial-intelligence\/when-artificial-intelligence-gets-too-clever-by-half-undark-magazine\/","title":{"rendered":"When Artificial Intelligence Gets Too Clever by Half &#8211; Undark Magazine"},"content":{"rendered":"<p>Picture a crew of engineers building a dam. There&#8217;s an anthill in the way, but the engineers don&#8217;t care or even notice; they flood the area anyway, and too bad for the ants.<\/p>\n<p>Now replace the ants with humans, happily going about their own business, and the engineers with a race of superintelligent computers that happen to have other priorities. Just as we now have power to dictate the fate of less intelligent beings, so might such computers someday exert life-and-death power over us.<\/p>\n<p>That&#8217;s the analogy the superstar physicist Stephen Hawking used in 2015 to describe the mounting perils he sees in the current explosion of artificial intelligence. And lately the alarms have been sounding louder than ever. Allan Dafoe of Yale and Stuart Russell of Berkeley wrote an essay in MIT Technology Review titled &#8220;Yes, We Are Worried About the Existential Risk of Artificial Intelligence.&#8221; The computing giants Bill Gates and Elon Musk have issued similar warnings online.<\/p>\n<p>Should we be worried?
<\/p>\n<p>Perhaps the most influential case that we should be was made by the Oxford philosopher Nick Bostrom, whose 2014 book, <i>Superintelligence: Paths, Dangers, Strategies<\/i>, was a New York Times best seller. The book catapulted the term &#8220;superintelligence&#8221; into popular consciousness and bestowed authority on an idea many had viewed as science fiction.<\/p>\n<p>Bostrom defined superintelligence as &#8220;any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest,&#8221; with the hypothetical power to vastly outmaneuver us, just like Hawking&#8217;s engineers.<\/p>\n<p>And it could have very good reasons for doing so. In the title of his eighth chapter, Bostrom asks, &#8220;Is the default outcome doom?&#8221; and he suggests that the unnerving answer might be yes. He points to a number of goals that superintelligent machines might adopt, including resource acquisition, self-preservation, and cognitive improvements, with potentially disastrous consequences for us and the planet.<\/p>\n<p>Bostrom illustrates his point with a colorful thought experiment. Suppose we develop an AI tasked with building as many paper clips as possible. This &#8220;paper clip maximizer&#8221; might simply convert everything, humanity included, into paper clips. Ousting humans would also facilitate self-preservation, eliminating our unfortunate knack for switching off machines. There&#8217;s also the possibility of an &#8220;intelligence explosion,&#8221; where even a modestly capable general AI might undergo a rapid period of self-improvement in order to better achieve its goals, swiftly bypassing humanity in the process.<\/p>\n<p>Many critics are skeptical of this line of argument, seeing a fundamental disconnect between the kinds of AI that might result in an intelligence explosion and the state of the field today.
Contemporary AI, they note, is effective only at specific tasks, like driving and winning at Jeopardy!<\/p>\n<p>Oren Etzioni, the CEO of the Allen Institute for Artificial Intelligence, writes that many researchers place superintelligence &#8220;beyond the foreseeable horizon,&#8221; and the philosopher Luciano Floridi argues in Aeon that we should not lose sleep over the possible appearance of some ultraintelligence &#8211; we have no idea how we might begin to engineer it. Roboticist Rodney Brooks sums up these critiques well, likening fears over superintelligence today to seeing more efficient internal combustion engines appearing and jumping to the conclusion that warp drives are just around the corner.<\/p>\n<p>To these and other critics, superintelligence is not just a waste of time but, in Floridi&#8217;s words, &#8220;irresponsibly distracting,&#8221; diverting attention from more pressing problems. One such problem is inequality: AI software used to assess the risks of recidivism, for example, shows clear racial bias, being twice as likely to flag black individuals incorrectly. Women searching Google are less likely than men to be shown ads for high-paying jobs. Add to this a host of emerging issues, including driverless cars, autonomous weapons, and the automation of jobs, and it is clear there are many areas needing immediate attention.<\/p>\n<p>To the Microsoft researcher Kate Crawford, the hand-wringing over superintelligence is symptomatic of AI&#8217;s &#8220;white guy problem,&#8221; an endemic lack of diversity in the field. Writing in The New York Times, she opines that while the rise of an artificially intelligent apex predator may be the biggest risk for the affluent white men who dominate public discourse on AI, &#8220;for those who already face marginalization or bias, the threats are here.&#8221;
<\/p>\n<p>But these arguments, however valid, do not go to the heart of what Bostrom and like-minded thinkers are worried about. Critics who emphasize the low probability of an intelligence explosion neglect a core component of Bostrom&#8217;s thesis. In the preface of <i>Superintelligence<\/i>, he writes that it is &#8220;no part of the argument in this book that we are on the threshold of a big breakthrough in artificial intelligence, or that we can predict with any precision when such a development might occur.&#8221; Instead, his argument hinges on the logical possibility of an intelligence explosion &#8211; something few deny &#8211; and the need to consider the problem in advance, given the consequences.<\/p>\n<p>That superintelligence might distract us from addressing existing problems is a legitimate concern, but aside from an (admittedly successful) appeal to intuition, no evidence is actually offered in support of this claim.<\/p>\n<p>It&#8217;s more likely Bostrom and company have had the opposite impact, with the problems of contemporary AI benefiting from increased political, media, and public attention, as well as the accompanying injection of funds into the field. A case in point is the new Leverhulme Center for the Future of Intelligence. Based at the University of Cambridge, the center was founded with $13 million secured largely through the work of its sister organization, the Center for the Study of Existential Risk, known for its work on advanced AI risks.<\/p>\n<p>This is not an either\/or debate, nor do we need to neglect existing problems in order to pay attention to the risks of superintelligence. It is important not to allow concerns for short-term exigencies to overwhelm concern for the future (and vice versa) &#8211; something at which humanity has a very poor track record.
There is room to consider both long-term and short-term consequences of AI, and given the enormous opportunities and risks, it is imperative we do so.<\/p>\n<p>Robert Hart is a researcher and writer on the politics of science and technology, with special interests in biotechnology, animal behavior, and artificial intelligence. He can be reached on Twitter @Rob_Hart17.<\/p>\n<p>See the original post:<\/p>\n<p><a target=\"_blank\" rel=\"nofollow\" href=\"https:\/\/undark.org\/article\/artificial-intelligence-risks-hawking-bostrom\/\" title=\"When Artificial Intelligence Gets Too Clever by Half - Undark Magazine\">When Artificial Intelligence Gets Too Clever by Half - Undark Magazine<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Picture a crew of engineers building a dam. <a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/artificial-intelligence\/when-artificial-intelligence-gets-too-clever-by-half-undark-magazine\/\">Continue reading <span 
class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[187742],"tags":[],"class_list":["post-194930","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/194930"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=194930"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/194930\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=194930"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=194930"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=194930"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}