{"id":182367,"date":"2015-02-11T16:42:14","date_gmt":"2015-02-11T21:42:14","guid":{"rendered":"http:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/uncategorized\/our-fear-of-artificial-intelligence.php"},"modified":"2015-02-11T16:42:14","modified_gmt":"2015-02-11T21:42:14","slug":"our-fear-of-artificial-intelligence","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/artificial-intelligence\/our-fear-of-artificial-intelligence.php","title":{"rendered":"Our Fear of Artificial Intelligence"},"content":{"rendered":"<p><p>    Years ago I had coffee with a friend who ran a startup. He had    just turned 40. His father was ill, his back was sore, and he    found himself overwhelmed by life. Dont laugh at me, he    said, but I was counting on the singularity.  <\/p>\n<p>    My friend worked in technology; hed seen the changes that    faster microprocessors and networks had wrought. It wasnt that    much of a step for him to believe that before he was beset by    middle age, the intelligence of machines would exceed that of    humansa moment that futurists call the singularity. A    benevolent superintelligence might analyze the human genetic    code at great speed and unlock the secret to eternal youth. At    the very least, it might know how to fix your back.  <\/p>\n<p>    But what if it wasnt so benevolent? Nick Bostrom, a    philosopher who directs the Future of Humanity Institute at the    University of Oxford, describes the following scenario in his    book Superintelligence, which has prompted a great    deal of debate about the future of artificial intelligence.    Imagine a machine that we might call a paper-clip    maximizerthat is, a machine programmed to make as many paper    clips as possible. Now imagine that this machine somehow became    incredibly intelligent. 
Given its goals, it might then decide to create new, more efficient paper-clip-manufacturing machines until, King Midas style, it had converted essentially everything to paper clips.<\/p>\n<p>No worries, you might say: you could just program it to make exactly a million paper clips and halt. But what if it makes the paper clips and then decides to check its work? Has it counted correctly? It needs to become smarter to be sure. The superintelligent machine manufactures some as-yet-uninvented raw-computing material (call it \"computronium\") and uses that to check each doubt. But each new doubt yields further digital doubts, and so on, until the entire earth is converted to computronium. Except for the million paper clips.<\/p>\n<p>Bostrom does not believe that the paper-clip maximizer will come to be, exactly; it's a thought experiment, one designed to show how even careful system design can fail to restrain extreme machine intelligence. But he does believe that superintelligence could emerge, and while it could be great, he thinks it could also decide it doesn't need humans around. Or do any number of other things that destroy the world. The title of chapter 8 is: \"Is the default outcome doom?\"<\/p>\n<p>If this sounds absurd to you, you're not alone. Critics such as the robotics pioneer Rodney Brooks say that people who fear a runaway AI misunderstand what computers are doing when we say they're \"thinking\" or \"getting smart.\" From this perspective, the putative superintelligence Bostrom describes is far in the future and perhaps impossible.<\/p>\n<p>Yet a lot of smart, thoughtful people agree with Bostrom and are worried now. Why?<\/p>\n<p>Volition<\/p>\n<p>The question \"Can a machine think?\" has shadowed computer science from its beginnings. 
Alan Turing proposed in 1950 that a machine could be taught like a child; John McCarthy, inventor of the programming language LISP, coined the term \"artificial intelligence\" in 1955. As AI researchers in the 1960s and 1970s began to use computers to recognize images, translate between languages, and understand instructions in normal language and not just code, the idea that computers would eventually develop the ability to speak and think (and thus to do evil) bubbled into mainstream culture. Even beyond the oft-referenced HAL from <em>2001: A Space Odyssey<\/em>, the 1970 movie <em>Colossus: The Forbin Project<\/em> featured a large blinking mainframe computer that brings the world to the brink of nuclear destruction; a similar theme was explored 13 years later in <em>WarGames<\/em>. The androids of 1973's <em>Westworld<\/em> went crazy and started killing.<\/p>\n<p>\"Extreme AI predictions are comparable to seeing more efficient internal combustion engines and jumping to the conclusion that the warp drives are just around the corner,\" Rodney Brooks writes.<\/p>\n<p>More here:<\/p>\n<p><a target=\"_blank\" href=\"http:\/\/www.technologyreview.com\/review\/534871\/our-fear-of-artificial-intelligence\" title=\"Our Fear of Artificial Intelligence\">Our Fear of Artificial Intelligence<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Years ago I had coffee with a friend who ran a startup. He had just turned 40. His father was ill, his back was sore, and he found himself overwhelmed by life. 
<a href=\"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/artificial-intelligence\/our-fear-of-artificial-intelligence.php\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"limit_modified_date":"","last_modified_date":"","_lmt_disableupdate":"","_lmt_disable":"","footnotes":""},"categories":[13],"tags":[],"class_list":["post-182367","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence"],"modified_by":null,"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/182367"}],"collection":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/comments?post=182367"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/182367\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/media?parent=182367"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/categories?post=182367"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/tags?post=182367"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}