{"id":175542,"date":"2017-02-06T15:38:44","date_gmt":"2017-02-06T20:38:44","guid":{"rendered":"http:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/experts-have-come-up-with-23-guidelines-to-avoid-an-ai-apocalypse-sciencealert\/"},"modified":"2017-02-06T15:38:44","modified_gmt":"2017-02-06T20:38:44","slug":"experts-have-come-up-with-23-guidelines-to-avoid-an-ai-apocalypse-sciencealert","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/superintelligence\/experts-have-come-up-with-23-guidelines-to-avoid-an-ai-apocalypse-sciencealert\/","title":{"rendered":"Experts have come up with 23 guidelines to avoid an AI apocalypse &#8230; &#8211; ScienceAlert"},"content":{"rendered":"<p>It's the stuff of many a sci-fi book or movie - could robots one day become smart enough to overthrow us? Well, a group of the world's most eminent artificial intelligence experts have worked together to try to make sure that doesn't happen.<\/p>\n<p>They've put together a set of 23 principles to guide future research into AI, which have since been endorsed by hundreds more professionals, including Stephen Hawking and SpaceX CEO Elon Musk.<\/p>\n<p>Called the Asilomar AI Principles (after the beach in California where they were thought up), the guidelines cover research issues, ethics and values, and longer-term issues - everything from how scientists should work with governments to how lethal weapons should be handled.<\/p>\n<p>On that point: \"An arms race in lethal autonomous weapons should be avoided,\" says principle 18. You can read the full list below.
<\/p>\n<p>\"We hope that these principles will provide material for vigorous discussion and also aspirational goals for how the power of AI can be used to improve everyone's lives in coming years,\" write the organisers of the Beneficial AI 2017 conference, where the principles were worked out.<\/p>\n<p>For a principle to be included, at least 90 percent of the 100+ conference attendees had to agree to it. Experts at the event included academics, engineers, and representatives from tech companies, including Google co-founder Larry Page.<\/p>\n<p>Perhaps the most telling guideline is principle 23, entitled 'Common Good': \"Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organisation.\"<\/p>\n<p>Other principles in the list suggest that any AI allowed to self-improve must be strictly monitored, and that developments in the tech should be \"shared broadly\" and \"benefit all of humanity\".<\/p>\n<p>\"To think AI merely automates human decisions is like thinking electricity is just a replacement for candles,\" conference attendee Patrick Lin, from California Polytechnic State University, told George Dvorsky at Gizmodo.<\/p>\n<p>\"Given the massive potential for disruption and harm, as well as benefits, it makes sense to establish some norms early on to help steer research in a good direction, as opposed to letting a market economy that's fixated mostly on efficiency and profit... shape AI.\"<\/p>\n<p>Meanwhile, the principles also call for scientists to work closely with governments and lawmakers to make sure our society keeps pace with the development of AI.<\/p>\n<p>All of which sounds very good to us - let's just hope the robots are listening.
<\/p>\n<p>The guidelines also rely on a certain amount of consensus about specific terms - such as what's beneficial to humankind and what isn't - but for the experts behind the list it's a question of getting something recorded at this early stage of AI research.<\/p>\n<p>With artificial intelligence systems now beating us at poker and getting smart enough to spot skin cancers, there's a definite need to have guidelines and limits in place that researchers can work to.<\/p>\n<p>And then we also need to decide what rights super-smart robots have when they're living among us.<\/p>\n<p>For now, the guidelines should give us some helpful pointers for the future.<\/p>\n<p>\"No current AI system is going to 'go rogue' and be dangerous, and it's important that people know that,\" conference attendee Anthony Aguirre, from the University of California, Santa Cruz, told Gizmodo.<\/p>\n<p>\"At the same time, if we envision a time when AIs exist that are as capable or more so than the smartest humans, it would be utterly naive to believe that the world will not fundamentally change.\"<\/p>\n<p>\"So how seriously we take AI's opportunities and risks has to scale with how capable it is, and having clear assessments and forecasts - without the press, industry or research hype that often accompanies advances - would be a good starting point.\"<\/p>\n<p>The principles have been published by the Future of Life Institute.<\/p>\n<p>You can see them in full and add your support over on their site.<\/p>\n<p>Research issues<\/p>\n<p>1. Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.<\/p>\n<p>2. 
Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:<\/p>\n<p>3. Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.<\/p>\n<p>4. Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.<\/p>\n<p>5. Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.<\/p>\n<p>Ethics and values<\/p>\n<p>6. Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.<\/p>\n<p>7. Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.<\/p>\n<p>8. Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.<\/p>\n<p>9. Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.<\/p>\n<p>10. Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviours can be assured to align with human values throughout their operation.<\/p>\n<p>11. Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.<\/p>\n<p>12. Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems' power to analyse and utilise that data.<\/p>\n<p>13. 
Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people's real or perceived liberty.<\/p>\n<p>14. Shared Benefit: AI technologies should benefit and empower as many people as possible.<\/p>\n<p>15. Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.<\/p>\n<p>16. Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.<\/p>\n<p>17. Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.<\/p>\n<p>18. AI Arms Race: An arms race in lethal autonomous weapons should be avoided.<\/p>\n<p>Longer-term issues<\/p>\n<p>19. Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.<\/p>\n<p>20. Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.<\/p>\n<p>21. Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.<\/p>\n<p>22. Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.<\/p>\n<p>23. Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organisation.
<\/p>\n<p>Original post:<\/p>\n<p><a target=\"_blank\" rel=\"nofollow\" href=\"http:\/\/www.sciencealert.com\/experts-have-come-up-with-23-guidelines-to-avoid-an-ai-apocalypse\" title=\"Experts have come up with 23 guidelines to avoid an AI apocalypse ... - ScienceAlert\">Experts have come up with 23 guidelines to avoid an AI apocalypse ... - ScienceAlert<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p> It's the stuff of many a sci-fi book or movie - could robots one day become smart enough to overthrow us? Well, a group of the world's most eminent artificial intelligence experts have worked together to try and make sure that doesn't happen. They've put together a set of 23 principles to guide future research into AI, which have since been endorsed by hundreds more professionals, including Stephen Hawking and SpaceX CEO Elon Musk <a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/superintelligence\/experts-have-come-up-with-23-guidelines-to-avoid-an-ai-apocalypse-sciencealert\/\">Continue reading <span 
class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[187765],"tags":[],"class_list":["post-175542","post","type-post","status-publish","format-standard","hentry","category-superintelligence"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/175542"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=175542"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/175542\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=175542"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=175542"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=175542"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}