{"id":176735,"date":"2017-02-11T08:28:32","date_gmt":"2017-02-11T13:28:32","guid":{"rendered":"http:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/we-need-a-plan-for-when-ai-becomes-smarter-than-us-futurism\/"},"modified":"2017-02-11T08:28:32","modified_gmt":"2017-02-11T13:28:32","slug":"we-need-a-plan-for-when-ai-becomes-smarter-than-us-futurism","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/ai\/we-need-a-plan-for-when-ai-becomes-smarter-than-us-futurism\/","title":{"rendered":"We Need a Plan for When AI Becomes Smarter Than Us &#8211; Futurism"},"content":{"rendered":"<p>In Brief: There will come a time when artificial intelligence systems are smarter than humans. When this time comes, we will need to build more AI systems to monitor and improve current systems. This will lead to a cycle of AI creating better AI, with little to no human involvement.<\/p>\n<p>When Apple released its software application, Siri, in 2011, iPhone users had high expectations for their intelligent personal assistants. Yet despite its impressive and growing capabilities, Siri often makes mistakes. The software's imperfections highlight the clear limitations of current AI: today's machine intelligence can't understand the varied and changing needs and preferences of human life.<\/p>\n<p>However, as artificial intelligence advances, experts believe that intelligent machines will eventually, and probably soon, understand the world better than humans. While it might be easy to understand how or why Siri makes a mistake, figuring out why a superintelligent AI made the decision it did will be much more challenging.<\/p>\n<p>If humans cannot understand and evaluate these machines, how will they control them?<\/p>\n<p>Paul Christiano, a Ph.D. student in computer science at UC Berkeley, has been working on addressing this problem. 
He believes that to ensure safe and beneficial AI, researchers and operators must learn to measure how well intelligent machines do what humans want, even as these machines surpass human intelligence.<\/p>\n<p>The most obvious way to supervise the development of an AI system also happens to be the hard way. As Christiano explains: &#8220;One way humans can communicate what they want is by spending a lot of time digging down on some small decision that was made [by an AI], and trying to evaluate how good that decision was.&#8221;<\/p>\n<p>But while this is theoretically possible, the human researchers would never have the time or resources to evaluate every decision the AI made. &#8220;If you want to make a good evaluation, you could spend several hours analyzing a decision that the machine made in one second,&#8221; says Christiano.<\/p>\n<p>For example, suppose an amateur chess player wants to understand a better chess player's previous move. Merely spending a few minutes evaluating this move won't be enough, but if she spends a few hours she could consider every alternative and develop a meaningful understanding of the better player's moves.<\/p>\n<p>Fortunately for researchers, they don't need to evaluate every decision an AI makes in order to be confident in its behavior. Instead, researchers can choose the machine's most interesting and informative decisions, &#8220;where getting feedback would most reduce our uncertainty,&#8221; Christiano explains.<\/p>\n<p>&#8220;Say your phone pinged you about a calendar event while you were on a phone call,&#8221; he elaborates. &#8220;That event is not analogous to anything else it has done before, so it's not sure whether it is good or bad.&#8221; Due to this uncertainty, the phone would send the transcript of its decisions to an evaluator at Google, for example. 
The evaluator would study the transcript, ask the phone owner how he felt about the ping, and determine whether pinging users during phone calls is a desirable or undesirable action. By providing this feedback, Google teaches the phone when it should interrupt users in the future.<\/p>\n<p>This active learning process is an efficient method for humans to train AIs, but what happens when humans need to evaluate AIs that exceed human intelligence?<\/p>\n<p>Consider a computer that is mastering chess. How could a human give appropriate feedback to the computer if the human has not mastered chess? The human might criticize a move that the computer makes, only to realize later that the machine was correct.<\/p>\n<p>With increasingly intelligent phones and computers, a similar problem is bound to occur. Eventually, Christiano explains, we need to handle the case where AI systems &#8220;surpass human performance at basically everything.&#8221;<\/p>\n<p>If a phone knows much more about the world than its human evaluators, then the evaluators cannot trust their human judgment. &#8220;They will need to enlist the help of more AI systems,&#8221; Christiano explains.<\/p>\n<p>When a phone pings a user while he is on a call, the user's reaction to this decision is crucial in determining whether the phone will interrupt users during future phone calls. But, as Christiano argues, if a more advanced machine is much better than human users at understanding the consequences of interruptions, then it might be a bad idea to just ask the human, &#8220;Should the phone have interrupted you right then?&#8221; The human might express annoyance at the interruption, but the machine might know better and understand that this annoyance was necessary to keep the user's life running smoothly. 
<\/p>\n<p>In these situations, Christiano proposes that human evaluators use other intelligent machines to do the grunt work of evaluating an AI's decisions. In practice, a less capable System 1 would be in charge of evaluating the more capable System 2. Even though System 2 is smarter, System 1 can process a large amount of information quickly and can understand how System 2 should revise its behavior. The human trainers would still provide input and oversee the process, but their role would be limited.<\/p>\n<p>This training process would help Google understand how to create a safer and more intelligent AI, System 3, which the human researchers could then train using System 2.<\/p>\n<p>Christiano explains that these intelligent machines would be like little agents that carry out tasks for humans. Siri already has this limited ability to take human input and figure out what the human wants, but as AI technology advances, machines will learn to carry out complex tasks that humans cannot fully understand.<\/p>\n<p>As Google and other tech companies continue to improve their intelligent machines with each evaluation, the human trainers will fulfill a smaller role. Eventually, Christiano explains, it's &#8220;effectively just one machine evaluating another machine's behavior.&#8221;<\/p>\n<p>&#8220;Ideally, each time you build a more powerful machine, it effectively models human values and does what humans would like,&#8221; says Christiano. But he worries that these machines may stray from human values as they surpass human intelligence. To put this in human terms: a complex intelligent machine would resemble a large organization of humans. If the organization does tasks that are too complex for any individual human to understand, it may pursue goals that humans wouldn't like. 
<\/p>\n<p>In order to address these control issues, Christiano is working on an end-to-end description of this machine learning process, &#8220;fleshing out key technical problems that seem most relevant.&#8221; His research will help bolster the understanding of how humans can use AI systems to evaluate the behavior of more advanced AI systems. If his work succeeds, it will be a significant step in building trustworthy artificial intelligence.<\/p>\n<p>You can learn more about Paul Christiano's work here.<\/p>\n<p>Read the original: <\/p>\n<p><a target=\"_blank\" rel=\"nofollow\" href=\"https:\/\/futurism.com\/we-need-a-plan-for-when-ai-becomes-smarter-than-us\/\" title=\"We Need a Plan for When AI Becomes Smarter Than Us - Futurism\">We Need a Plan for When AI Becomes Smarter Than Us - Futurism<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>In Brief: There will come a time when artificial intelligence systems are smarter than humans. 
When this time comes, we will need to build more AI systems to monitor and improve current systems <a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/ai\/we-need-a-plan-for-when-ai-becomes-smarter-than-us-futurism\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":7,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[187743],"tags":[],"class_list":["post-176735","post","type-post","status-publish","format-standard","hentry","category-ai"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/176735"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/7"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=176735"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/176735\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=176735"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=176735"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=176735"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}