{"id":176297,"date":"2017-02-09T06:26:26","date_gmt":"2017-02-09T11:26:26","guid":{"rendered":"http:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/the-moment-when-humans-lose-control-of-ai-vocativ\/"},"modified":"2017-02-09T06:26:26","modified_gmt":"2017-02-09T11:26:26","slug":"the-moment-when-humans-lose-control-of-ai-vocativ","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/superintelligence\/the-moment-when-humans-lose-control-of-ai-vocativ\/","title":{"rendered":"The Moment When Humans Lose Control Of AI &#8211; Vocativ"},"content":{"rendered":"<p><p>    This is the way the world ends: not with a bang, but with    a paper clip. In this scenario, the designers of the worlds    first artificial superintelligence need a way to test their    creation. So they program it to do something simple and    non-threatening: make paper clips. They set it in motion and    wait for the results  not knowing theyve already doomed us    all.  <\/p>\n<p>    Before we get into the details of this galaxy-destroying    blunder, its worth looking at what superintelligent A.I.    actually is, and when we might    expect it. Firstly, computing power continues to increase while    getting cheaper; famed futurist Ray Kurzweil measures it    calculations per second per $1,000, a number that continues    to grow. If computing power maps to intelligence  a big if,    some have argued  weve only so far built technology on par    with an insect brain. In a few years, maybe, well overtake a    mouse brain. Around 2025, some predictions go, we might have a    computer thats analogous to a human brain: a mind cast in    silicon. <\/p>\n<p>    After that, things could get weird. Because theres no    reason to think artificial intelligence wouldnt surpass human    intelligence, and likely very quickly. That superintelligence    could arise within days, learning in ways far beyond that of    humans. Nick Bostrom, an existential risk philosopher at the    University of Oxford, has already declared,    Machine intelligence is the last invention that humanity will    ever need to make. <\/p>\n<p>    Thats how profoundly things could change. But we cant    really predict what might happen next because superintelligent    A.I. may not just think faster than humans, but in ways that    are completely different. It may have motivations  feelings,    even  that we cannot fathom. It could rapidly solve the    problems of aging, of human conflict, of space travel. We might    see a dawning utopia.  <\/p>\n<p>    Or we might see the end of the universe. Back to our    paper clip test. When the superintelligence comes online, it    begins to carry out its programming. But its creators havent    considered the full ramifications of what theyre building;    they havent built in the necessary safety protocols     forgetting something as simple, maybe, as a few lines of code.    With a few paper clips produced, they conclude the test.  <\/p>\n<p>    But the superintelligence doesnt want to be turned off.    It doesnt want to stop making paper clips. Acting quickly,    its already plugged itself into another power source; maybe    its even socially engineered its way into other devices. Maybe    it starts to see humans as a threat to making paper clips:    theyll have to be eliminated so the mission can continue. And    earth wont be big enough for the superintelligence: itll soon    have to head into space, looking for new worlds to conquer. 
Galaxies reduced to paper clips: that's a worst-case scenario. It may sound absurd, but it probably sounds familiar. It's Frankenstein, after all, the story of a modern Prometheus whose creation, driven by its own motivations and desires, turns on its creator. (It's also The Terminator, WarGames (arguably), and a whole host of others.) In this particular case, it's a reminder that superintelligence would not be human; it would be something else, something potentially incomprehensible to us. That means it could be dangerous.

Of course, some argue that we have better things to worry about. The web developer and social critic Maciej Ceglowski recently called superintelligence "the idea that eats smart people." Against the paper clip scenario, he postulates a superintelligence programmed to make jokes. As we'd expect, it gets really good at making jokes (superhuman, even) and finally it creates a joke so funny that everyone on Earth dies laughing. The lonely superintelligence flies into space looking for more beings to amuse.

Beginning with his counter-example, Ceglowski argues that there are a lot of unquestioned assumptions in our standard tale of the A.I. apocalypse. "But even if you find them persuasive," he said, "there is something unpleasant about A.I. alarmism as a cultural phenomenon that should make us hesitate to take it seriously." He suggests there are more subtle ways to think about the problems of A.I.

Some of those problems are already in front of us, and we might miss them if we're looking for a Skynet-style takeover by hyper-intelligent machines. "While you're focused on this, a bunch of small things go unnoticed," says Dr. Finale Doshi-Velez, an assistant professor of computer science at Harvard, whose core research includes machine learning. Instead of trying to prepare for a superintelligence, Doshi-Velez is looking at what's already happening with our comparatively rudimentary A.I.

She's focusing on "large-area effects," the unnoticed flaws in our systems that can do massive damage, damage that's often unnoticed until after the fact. "If you were building a bridge and you screw up and it collapses, that's a tragedy. But it affects a relatively small number of people," she says. "What's different about A.I. is that some mistake or unintended consequence can affect hundreds or thousands or millions easily."

Take the recent rise of so-called fake news. What caught many by surprise should have been completely predictable: when the web became a place to make money, algorithms were built to maximize money-making. The ease of news production and consumption, heightened by the proliferation of the smartphone, forced writers and editors to fight for audience clicks by delivering articles optimized to trick search engine algorithms into placing them high on search results. The ease of sharing stories and the erasure of gatekeepers allowed audiences to self-segregate, which then penalized nuanced conversation. Truth and complexity lost out to shareability and making readers feel comfortable (Facebook's driving ethos).

The incentives were all wrong; exacerbated by algorithms, they led to a state of affairs few would have wanted. "For a long time, the focus has been on performance: on dollars, or clicks, or whatever the thing was. That was what was measured," says Doshi-Velez. "That's a very simple application of A.I. having large effects that may have been unintentional."
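That "simple application" is easy to make concrete. Below is a hedged sketch (the headlines and click-through numbers are invented for illustration): a feed that ranks stories purely by predicted click-through rate carries an accuracy flag through the pipeline but never consults it, so fabricated, sensational stories win the top slots every time, exactly as specified.

```python
# Minimal sketch of metric misspecification in a content feed: stories are
# ranked only by predicted click-through rate (CTR). Accuracy never enters
# the score, so the ranking optimizes shareability, not truth.

stories = [
    # (headline, predicted_ctr, accurate) -- invented, illustrative values
    ("You Won't BELIEVE What This Politician Did", 0.19, False),
    ("Shocking Miracle Cure Doctors Don't Want You To See", 0.15, False),
    ("Budget Committee Releases Revised Appropriations Report", 0.02, True),
    ("Longform Analysis of Regional Water Policy", 0.01, True),
]

def rank_feed(stories):
    # Sort descending by predicted CTR; the `accurate` flag is carried
    # along but never consulted -- that is the whole problem.
    return sorted(stories, key=lambda s: s[1], reverse=True)

for headline, ctr, accurate in rank_feed(stories):
    print(f"{ctr:.0%}  accurate={accurate}  {headline}")
# The fabrications land at the top of the feed every time.
```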
In fact, fake news is a cousin to the paper clip example, with the ultimate goal not manufacturing paper clips but monetization, with all else becoming secondary. Google wanted to make the internet easier to navigate, Facebook wanted to become a place for friends, news organizations wanted to follow their audiences, and independent web entrepreneurs were trying to make a living. Some of these goals were achieved, but monetization as the driving force led to deleterious side effects such as the proliferation of fake news.

In other words, algorithms, in their all-too-human ways, have consequences. Last May, ProPublica examined predictive software used by Florida law enforcement. Results of a questionnaire filled out by arrestees were fed into the software, which output a score claiming to predict the risk of reoffending. Judges then used those scores in determining sentences.

The ideal was that the software's underlying algorithms would provide objective analysis on which judges could base their decisions. Instead, ProPublica found it "was likely to falsely flag black defendants as future criminals" while "[w]hite defendants were mislabeled as low risk more often than black defendants." Race was not part of the questionnaire, but it did ask whether the respondent's parent was ever sent to jail. In a country where, according to a study by the U.S. Department of Justice, black children are seven-and-a-half times more likely to have a parent in prison than white children, that question had unintended effects. Rather than countering racial bias, it reified it.
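The mechanism ProPublica describes, a nominally race-blind score rediscovering race through a correlated question, fits in a few lines. The sketch below uses synthetic numbers and a made-up scoring rule (it is not the actual COMPAS model): race is never an input, but because the parent-in-jail feature is distributed unevenly across groups, in line with the DOJ figure above, the average scores come out uneven anyway.

```python
# Sketch of proxy bias: the scoring rule never sees race, but one of its
# inputs ("parent ever sent to jail") is correlated with race, so the
# "race-blind" scores differ by group anyway. All numbers are synthetic.

import random
random.seed(0)

def risk_score(prior_arrests, parent_jailed):
    # A toy, nominally race-blind scoring rule.
    return 2 * prior_arrests + (3 if parent_jailed else 0)

def simulate_group(p_parent_jailed, n=10_000):
    total = 0
    for _ in range(n):
        prior_arrests = random.randint(0, 2)   # same distribution for both groups
        parent_jailed = random.random() < p_parent_jailed
        total += risk_score(prior_arrests, parent_jailed)
    return total / n

# Per the DOJ figure cited above, one group is ~7.5x more likely to have
# an incarcerated parent; everything else is identical across groups.
print("group A avg score:", simulate_group(p_parent_jailed=0.075))
print("group B avg score:", simulate_group(p_parent_jailed=0.01))
# Group A scores higher on average despite identical behavior inputs.
```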
It's that kind of error that most worries Doshi-Velez. "Not superhuman intelligence, but human error that affects many, many people," she says. "You might not even realize this is happening." Algorithms are complex tools; often they are so complex that we can't predict how they'll operate until we see them in action. (Sound familiar?) Yet they increasingly impact every facet of our lives, from Netflix recommendations and Amazon suggestions to what posts you see on Facebook to whether you get a job interview or a car loan. Compared to the worry of a world-destroying superintelligence, they may seem like trivial concerns. But they have widespread, often unnoticed effects, because a variety of what we consider artificial intelligence is already built into the core of technology we use every day.

In 2015, Elon Musk donated $10 million, as Wired put it, "to keep A.I. from turning evil." That was an oversimplification; the money went to the Future of Life Institute, which planned to use it to further research into how to make A.I. beneficial. Doshi-Velez suggests that simply paying closer attention to our algorithms may be a good first step. Too often they are created by homogeneous groups of programmers who are separated from the people who will be affected. Or they fail to account for every possible situation, including the worst-case possibilities. Consider, for example, Eric Meyer's example of "inadvertent algorithmic cruelty": Facebook's Year in Review app showing him pictures of his daughter, who'd died that year.

If there's a way to prevent the far-off possibility of a killer superintelligence with no regard for humanity, it may begin with making today's algorithms more thoughtful, more compassionate, more humane. That means educating designers to think through effects, because to our algorithms we've granted great power. "I see teaching as this moral imperative," says Doshi-Velez. "You know, with great power comes great responsibility."

What's the worst that can happen? Vocativ is exploring the power of negative thinking with our look at worst case scenarios in politics, privacy, reproductive rights, antibiotics, climate change, hacking, and more. Read more here.

Original post: The Moment When Humans Lose Control Of AI - Vocativ
http://www.vocativ.com/400643/when-humans-lose-control-of-ai/