{"id":175544,"date":"2017-02-06T15:38:45","date_gmt":"2017-02-06T20:38:45","guid":{"rendered":"http:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/stephen-hawking-and-elon-musk-endorse-23-asilomar-principles-inverse\/"},"modified":"2017-02-06T15:38:45","modified_gmt":"2017-02-06T20:38:45","slug":"stephen-hawking-and-elon-musk-endorse-23-asilomar-principles-inverse","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/superintelligence\/stephen-hawking-and-elon-musk-endorse-23-asilomar-principles-inverse\/","title":{"rendered":"Stephen Hawking and Elon Musk Endorse 23 Asilomar Principles &#8230; &#8211; Inverse"},"content":{"rendered":"<p><p>    Artificial intelligence is an amazing    technology thats changing the world in fantastic ways, but    anybody who has ever seen the movie Terminator knows    that there are some dangers associated with advanced A.I.    Thats why Elon Musk, Stephen Hawking, and hundreds of    other researchers, tech leaders, and scientists have endorsed a    list of 23 guiding principles that should steer A.I.    development in a productive, ethical, and safe direction.  <\/p>\n<p>    The Asilomar A.I. Principles were developed after the    Future of Life Institute brought dozens of experts together for    their Beneficial A.I. 2017 conference. The    experts, whose ranks consisted of roboticists, physicists,    economists, philosophers, and more had fierce debates about    A.I. safety, economic impact on human workers, and programming    ethics, to name a few. In order to make the final list, 90    percent of the experts had to agree on its inclusion.  <\/p>\n<p>    What remained was a list of 23 principles ranging from    research strategies to data rights to future issues including    potential super-intelligence, which was signed by those wishing    to associate their name with the list, Future of Lifes    website explains. This collection of principles is by no means    comprehensive and its certainly open to differing    interpretations, but it also highlights how the current    default behavior around many relevant issues could violate    principles that most participants agreed are important to    uphold.  <\/p>\n<p>    Since then, 892 A.I. or Robotics researchers and 1445 others    experts, including Tesla CEO Elon    Musk and famed physicist Stephen Hawking, have endorsed the    principles.  <\/p>\n<p>    Some of the principles  like transparency and open research    sharing among competitive companies  seem less likely than    others. Even if theyre not fully implemented, the 23    principles could go a long way towards improving A.I.    development and ensuring that its ethical  and preventing the    rise of Skynet.  <\/p>\n<p>    1. Research Goal: The goal of A.I. research    should be to create not undirected intelligence, but beneficial    intelligence.  <\/p>\n<p>    2. Research Funding: Investments in A.I.    should be accompanied by funding for research on ensuring its    beneficial use, including thorny questions in computer science,    economics, law, ethics, and social studies, such as:  <\/p>\n<p>    3. Science-Policy Link: There should be    constructive and healthy exchange between A.I. researchers and    policy-makers.  <\/p>\n<p>    4. Research Culture: A culture of cooperation,    trust, and transparency should be fostered among researchers    and developers of A.I.  <\/p>\n<p>    5. Race Avoidance: Teams developing A.I.    
systems should actively cooperate to avoid corner-cutting on    safety standards.  <\/p>\n<p>    6. Safety: A.I. systems should be safe and    secure throughout their operational lifetime, and verifiably so    where applicable and feasible.  <\/p>\n<p>    7. Failure Transparency: If an A.I. system    causes harm, it should be possible to ascertain why.  <\/p>\n<p>    8. Judicial Transparency: Any involvement by    an autonomous system in judicial decision-making should provide    a satisfactory explanation auditable by a competent human    authority.  <\/p>\n<p>    9. Responsibility: Designers and builders of    advanced A.I. systems are stakeholders in the moral    implications of their use, misuse, and actions, with a    responsibility and opportunity to shape those implications.  <\/p>\n<p>    10. Value Alignment: Highly autonomous A.I.    systems should be designed so that their goals and behaviors    can be assured to align with human values throughout their    operation.  <\/p>\n<p>    11. Human Values: A.I. systems should be    designed and operated so as to be compatible with ideals of    human dignity, rights, freedoms, and cultural diversity.  <\/p>\n<p>    12. Personal Privacy: People should have the    right to access, manage and control the data they generate,    given A.I. systems power to analyze and utilize that data.  <\/p>\n<p>    13. Liberty and Privacy: The application of    A.I. to personal data must not unreasonably curtail peoples    real or perceived liberty.  <\/p>\n<p>    14 Shared Benefit: A.I. technologies should    benefit and empower as many people as possible.  <\/p>\n<p>    15. Shared Prosperity: The economic prosperity    created by A.I.I should be shared broadly, to benefit all of    humanity.  <\/p>\n<p>    16. Human Control: Humans should choose how    and whether to delegate decisions to A.I. systems, to    accomplish human-chosen objectives.  <\/p>\n<p>    17. Non-subversion: The power conferred by    control of highly advanced A.I. systems should respect and    improve, rather than subvert, the social and civic processes on    which the health of society depends.  <\/p>\n<p>    18. A.I. Arms Race: An arms race in lethal    autonomous weapons should be avoided.  <\/p>\n<p>    19. Capability Caution: There being no    consensus, we should avoid strong assumptions regarding upper    limits on future A.I. capabilities.  <\/p>\n<p>    20. Importance: Advanced A.I. could represent    a profound change in the history of life on Earth, and should    be planned for and managed with commensurate care and    resources.  <\/p>\n<p>    21. Risks: Risks posed by A.I. systems,    especially catastrophic or existential risks, must be subject    to planning and mitigation efforts commensurate with their    expected impact.  <\/p>\n<p>    22. Recursive Self-Improvement: A.I. systems    designed to recursively self-improve or self-replicate in a    manner that could lead to rapidly increasing quality or    quantity must be subject to strict safety and control measures.  <\/p>\n<p>    23. Common Good: Superintelligence should only    be developed in the service of widely shared ethical ideals,    and for the benefit of all humanity rather than one state or    organization.  <\/p>\n<p>    Photos via Getty Images  <\/p>\n<p>    James Grebey is a writer, reporter, and fairly decent    cartoonist living in Brooklyn. He's written for SPIN Magazine,    BuzzFeed, MAD Magazine, and more. He thinks Double Stuf Oreos    are bad and he's ready to die on this hill. 
James is the weeknights editor at Inverse because content doesn't sleep.

Continued here: Stephen Hawking and Elon Musk Endorse 23 Asilomar Principles ... - Inverse
https://www.inverse.com/article/27349-artificial-intelelgence-ethis-safety-asilomar