{"id":238403,"date":"2017-08-25T00:56:26","date_gmt":"2017-08-25T04:56:26","guid":{"rendered":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/uncategorized\/friendly-artificial-intelligence-wikipedia-2.php"},"modified":"2017-08-25T00:56:26","modified_gmt":"2017-08-25T04:56:26","slug":"friendly-artificial-intelligence-wikipedia-2","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/superintelligence\/friendly-artificial-intelligence-wikipedia-2.php","title":{"rendered":"Friendly artificial intelligence &#8211; Wikipedia"},"content":{"rendered":"<p><p>    A friendly artificial intelligence (also friendly    AI or FAI) is a hypothetical artificial general    intelligence (AGI) that would have a positive rather than    negative effect on humanity. It is a part of the ethics of artificial    intelligence and is closely related to machine    ethics. While machine ethics is concerned with how an    artificially intelligent agent should behave, friendly    artificial intelligence research is focused on how to    practically bring about this behaviour and ensuring it is    adequately constrained.  <\/p>\n<p>    The term was coined by Eliezer Yudkowsky[1] to    discuss superintelligent artificial agents that    reliably implement human values. Stuart J.    Russell and Peter Norvig's leading artificial intelligence textbook,    Artificial    Intelligence: A Modern Approach, describes the    idea:[2]  <\/p>\n<p>      Yudkowsky (2008) goes into more detail about how to design a      Friendly AI. He asserts that friendliness (a desire      not to harm humans) should be designed in from the start, but      that the designers should recognize both that their own      designs may be flawed, and that the robot will learn and      evolve over time. Thus the challenge is one of mechanism      designto define a mechanism for evolving AI systems under a      system of checks and balances, and to give the systems      utility functions that will remain friendly in the face of      such changes.    <\/p>\n<p>    'Friendly' is used in this context as technical terminology, and picks    out agents that are safe and useful, not necessarily ones that    are \"friendly\" in the colloquial sense. The concept is    primarily invoked in the context of discussions of recursively    self-improving artificial agents that rapidly explode in intelligence, on the    grounds that this hypothetical technology would have a large,    rapid, and difficult-to-control impact on human    society.[3]  <\/p>\n<p>    The roots of concern about artificial intelligence are very    old. Kevin LaGrandeur    showed that the dangers specific to AI can be seen in ancient    literature concerning artificial humanoid servants such as the    golem, or the    proto-robots of Gerbert of    Aurillac and Roger Bacon. In those stories, the extreme    intelligence and power of these humanoid creations clash with    their status as slaves (which by nature are seen as sub-human),    and cause disastrous conflict.[4] By 1942 these    themes prompted Isaac Asimov to create the \"Three Laws of Robotics\" -    principles hard-wired into all the robots in his fiction,    intended to prevent them from turning on their creators, or    allow them to come to harm.[5]  <\/p>\n<p>    In modern times as the prospect of superintelligent AI looms nearer,    philosopher Nick Bostrom has said that superintelligent    AI systems with goals that are not aligned with human ethics    are intrinsically dangerous unless extreme measures are taken    to ensure the safety of humanity. He put it this way:  <\/p>\n<p>      Basically we should assume that a 'superintelligence' would      be able to achieve whatever goals it has. Therefore, it is      extremely important that the goals we endow it with, and its      entire motivation system, is 'human friendly.'    <\/p>\n<p>    Ryszard Michalski, a pioneer of machine    learning, taught his Ph.D. students decades ago that any    truly alien mind, including a machine mind, was unknowable and    therefore dangerous to humans.[citation    needed]  <\/p>\n<p>    More recently, Eliezer Yudkowsky has called for the creation of    friendly AI to mitigate existential    risk from advanced artificial intelligence. He explains:    \"The AI does not hate you, nor does it love you, but you are    made out of atoms which it can use for something else.\"[6]  <\/p>\n<p>    Steve    Omohundro says that a sufficiently advanced AI system will,    unless explicitly counteracted, exhibit a number of basic \"drives\", such as resource    acquisition, because of the intrinsic nature of goal-driven    systems and that these drives will, \"without special    precautions\", cause the AI to exhibit undesired    behavior.[7][8]  <\/p>\n<p>    Alexander Wissner-Gross says that    AIs driven to maximize their future freedom of action (or    causal path entropy) might be considered friendly if their    planning horizon is longer than a certain threshold, and    unfriendly if their planning horizon is shorter than that    threshold.[9][10]  <\/p>\n<p>    Luke Muehlhauser, writing for the Machine Intelligence    Research Institute, recommends that machine    ethics researchers adopt what Bruce    Schneier has called the \"security mindset\": Rather than    thinking about how a system will work, imagine how it could    fail. For instance, he suggests even an AI that only makes    accurate predictions and communicates via a text interface    might cause unintended harm.[11]  <\/p>\n<p>    Yudkowsky advances the Coherent Extrapolated Volition (CEV)    model. According to him, coherent extrapolated volition is    people's choices and the actions people would collectively take    if \"we knew more, thought faster, were more the people we    wished we were, and had grown up closer together.\"[12]  <\/p>\n<p>    Rather than a Friendly AI being designed directly by human    programmers, it is to be designed by a \"seed AI\" programmed to    first study human nature and then produce the AI which    humanity would want, given sufficient time and insight, to    arrive at a satisfactory answer.[12] The appeal to an    objective though contingent human    nature (perhaps expressed, for mathematical purposes, in    the form of a utility function or    other decision-theoretic formalism), as    providing the ultimate criterion of \"Friendliness\", is an    answer to the meta-ethical problem of defining an    objective morality; extrapolated    volition is intended to be what humanity objectively would    want, all things considered, but it can only be defined    relative to the psychological and cognitive qualities of    present-day, unextrapolated humanity.  <\/p>\n<p>    Ben    Goertzel, an artificial general    intelligence researcher, believes that friendly AI cannot    be created with current human knowledge. Goertzel suggests    humans may instead decide to create an \"AI Nanny\" with \"mildly    superhuman intelligence and surveillance powers\", to protect    the human race from existential risks    like nanotechnology and to delay the    development of other (unfriendly) artificial intelligences    until and unless the safety issues are solved.[13]  <\/p>\n<p>    Steve    Omohundro has proposed a \"scaffolding\" approach to AI    safety, in which one provably safe AI generation helps build    the next provably safe generation.[14]  <\/p>\n<p>    Stefan Pernar argues along the lines of Meno's paradox    to point out that attempting to solve the FAI problem is either    pointless or hopeless depending on whether one assumes a    universe that exhibits moral realism or not. In the former case a    transhuman AI would independently reason itself into the proper    goal system and assuming the latter, designing a friendly AI    would be futile to begin with since morals can not be reasoned    about.[15]  <\/p>\n<p>    James    Barrat, author of Our Final Invention,    suggested that \"a public-private partnership has to be created    to bring A.I.-makers together to share ideas about    securitysomething like the International Atomic Energy Agency,    but in partnership with corporations.\" He urges AI researchers    to convene a meeting similar to the Asilomar Conference on    Recombinant DNA, which discussed risks of    biotechnology.[14]  <\/p>\n<p>    John    McGinnis encourages governments to accelerate friendly AI    research. Because the goalposts of friendly AI aren't    necessarily clear, he suggests a model more like the National Institutes of    Health, where \"Peer review panels of computer and cognitive    scientists would sift through projects and choose those that    are designed both to advance AI and assure that such advances    would be accompanied by appropriate safeguards.\" McGinnis feels    that peer review is better \"than regulation to address    technical issues that are not possible to capture through    bureaucratic mandates\". McGinnis notes that his proposal stands    in contrast to that of the Machine Intelligence    Research Institute, which generally aims to avoid    government involvement in friendly AI.[16]  <\/p>\n<p>    According to Gary Marcus, the annual amount of money being    spent on developing machine morality is tiny.[17]  <\/p>\n<p>    Some critics believe that both human-level AI and    superintelligence are unlikely, and that therefore friendly AI    is unlikely. Writing in The Guardian, Alan Winfeld compares    human-level artificial intelligence with faster-than-light    travel in terms of difficulty, and states that while we need to    be \"cautious and prepared\" given the stakes involved, we \"don't    need to be obsessing\" about the risks of    superintelligence.[18]  <\/p>\n<p>    Some philosophers claim that any truly \"rational\" agent,    whether artificial or human, will naturally be benevolent; in    this view, deliberate safeguards designed to produce a friendly    AI could be unnecessary or even harmful.[19] Other    critics question whether it is possible for an artificial    intelligence to be friendly. Adam Keiper and Ari N. Schulman,    editors of the technology journal The New Atlantis, say that    it will be impossible to ever guarantee \"friendly\" behavior in    AIs because problems of ethical complexity will not yield to    software advances or increases in computing power. They write    that the criteria upon which friendly AI theories are based    work \"only when one has not only great powers of prediction    about the likelihood of myriad possible outcomes, but certainty    and consensus on how one values the different outcomes.[20]  <\/p>\n<p><!-- Auto Generated --><\/p>\n<p>Read the original here: <\/p>\n<p><a target=\"_blank\" rel=\"nofollow\" href=\"https:\/\/en.wikipedia.org\/wiki\/Friendly_artificial_intelligence\" title=\"Friendly artificial intelligence - Wikipedia\">Friendly artificial intelligence - Wikipedia<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p> A friendly artificial intelligence (also friendly AI or FAI) is a hypothetical artificial general intelligence (AGI) that would have a positive rather than negative effect on humanity.  <a href=\"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/superintelligence\/friendly-artificial-intelligence-wikipedia-2.php\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"limit_modified_date":"","last_modified_date":"","_lmt_disableupdate":"","_lmt_disable":"","footnotes":""},"categories":[431612],"tags":[],"class_list":["post-238403","post","type-post","status-publish","format-standard","hentry","category-superintelligence"],"modified_by":null,"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/238403"}],"collection":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/comments?post=238403"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/238403\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/media?parent=238403"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/categories?post=238403"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/tags?post=238403"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}