{"id":197182,"date":"2017-06-07T17:17:31","date_gmt":"2017-06-07T21:17:31","guid":{"rendered":"http:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/the-inaugural-ai-for-good-global-summit-is-a-milestone-but-must-focus-more-on-risks-council-on-foreign-relations-blog\/"},"modified":"2017-06-07T17:17:31","modified_gmt":"2017-06-07T21:17:31","slug":"the-inaugural-ai-for-good-global-summit-is-a-milestone-but-must-focus-more-on-risks-council-on-foreign-relations-blog","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/ai\/the-inaugural-ai-for-good-global-summit-is-a-milestone-but-must-focus-more-on-risks-council-on-foreign-relations-blog\/","title":{"rendered":"The Inaugural AI for Good Global Summit Is a Milestone but Must Focus More on Risks &#8211; Council on Foreign Relations (blog)"},"content":{"rendered":"<p><p>    The followingis a guest post by Kyle    Evanoff,research associate for    International Economics and U.S. Foreign Policy.  <\/p>\n<\/p>\n<p>    Today through Friday, artificial intelligence (AI)    experts are meeting with international leaders in Geneva,    Switzerland, for the inaugural AI for    Good Global Summit. Organized by the    International Telecommunications    Union (ITU), a UN agency that specializes in    information and communication technologies, and the    XPRIZE Foundation, a    Silicon Valley nonprofit that awards competitive prizes for    solutions addressing some of the worlds most difficult    problems, the gathering will discuss AI-related issues and    promote international dialogue and cooperation on AI    innovation.  <\/p>\n<\/p>\n<p>    The summit comes at a critical time and should help    increase policymakers awareness of the possibilities and    challenges associated with AI. The downside is that it may    encourage undue optimism, by giving short shrift to the    significant risks that AI poses to international    security.  
<\/p>\n<\/p>\n<p>    Although many    policymakers and citizens are unaware of it,    narrow forms of AI are already here. Software programs have    long been able to defeat the worlds best chess players, and    newer ones are succeeding at less-defined tasks, such as    composing    music, writing news    articles, and diagnosing medical    conditions. The rate of    progress is surprising    even tech leaders, and future developments could    bring massive increases in economic    growth and human well-being, as    well as cause widespread socioeconomic    upheaval.  <\/p>\n<\/p>\n<p>    This weeks forum provides a much-needed    opportunity to discuss how AI should be governed at the global    levela topic that has garnered little attention from    multilateral institutions like the United    Nations. The draft    program promises to educate    policymakers on multiple AI issues, from sessions on    moonshots    to ethics, sustainable living, and poverty reduction, among    other topics. Participants will include prominent individuals    drawn from multilateral institutions, nongovernmental    organizations (NGOs), the private sector, and academia.      <\/p>\n<\/p>\n<p>    This inclusivity is typical of the complex    governance models that increasingly define and    shape global policymakingwith internet    governance being a case in point.    Increasingly, NGOs, public-private partnerships, industry codes    of conduct, and other flexible arrangements have assumed many    of the global governance functions once reserved for    intergovernmental organizations. The new partnership between    ITU and the XPRIZE Foundation suggests that global governance    of AI, although in its infancy, is poised to follow this same    model.  <\/p>\n<\/p>\n<p>    For all its strengths, however, this multistakeholder    approach could afford private sector organizers excessive    agenda-setting power. 
The XPRIZE Foundation, founded by outspoken techno-optimist Peter Diamandis, promotes technological innovation as a means of creating a more abundant future. The summit's mission and agenda hew to this attitude, placing disproportionate emphasis on how AI technologies can overcome problems and devoting too little attention to mitigating the risks from those same technologies.<\/p>\n<\/p>\n<p>This is worrisome, since the risks of AI are numerous and non-trivial. Unrestrained AI innovation could threaten international stability, global security, and possibly even humanity's survival. And, because many of the pertinent technologies have yet to reach maturity, the risks associated with them have received scant attention on the international stage.<\/p>\n<\/p>\n<p>One area in which the risk of AI is obvious is electioneering. Since the epochal June 2016 Brexit referendum, state and nonstate actors with varying motivations have used AI to create and\/or distribute propaganda via the internet. An Oxford study found that during the recent French presidential election, the proportion of traffic originating from highly automated Twitter accounts doubled between the first and second rounds of voting. Some even attribute Donald J. Trump's victory over Hillary Clinton in the U.S. presidential election to weaponized artificial intelligence spreading misinformation. Automated propaganda may well call the integrity of future elections into question.<\/p>\n<\/p>\n<p>Another major AI risk lies in the development and use of lethal autonomous weapons systems (LAWS). After the release of a 2012 Human Rights Watch report, Losing Humanity: The Case Against Killer Robots, the United Nations began considering adding restrictions on LAWS to the Convention on Certain Conventional Weapons (CCW). 
Meanwhile, both China and the United States have made significant headway with their autonomous weapons programs, in what is quickly escalating into an international arms race. Since autonomous weapons might lower the political cost of conflict, they could make war more commonplace and increase death tolls.<\/p>\n<\/p>\n<p>A more distant but possibly greater risk is that of artificial general intelligence (AGI). While current AI programs are designed for specific, narrow purposes, future programs may be able to apply their intelligence to a far broader range of applications, much as humans do. An AGI-capable entity, through recursive self-improvement, could give rise to a superintelligence more capable than any human, one that might prove impossible to control and that could pose an existential threat to humanity, regardless of the intent of its initial programming. Although the AI doomsday scenario is a common science fiction trope, experts consider it a legitimate concern.<\/p>\n<\/p>\n<p>Given rapid recent advances in AI and the magnitude of potential risks, the time to begin multilateral discussions on international rules is now. AGI may seem far off, but many experts believe that it could become a reality by 2050. This makes the timeline for AGI similar to that of climate change. The stakes, though, could be much higher. Waiting until a crisis has occurred to act could preclude the possibility of action altogether.<\/p>\n<\/p>\n<p>Rather than allocating their limited resources to summits promoting AI innovation (a task for which national governments and the private sector are better suited), multilateral institutions should recognize AI's risks and work to mitigate them. Finalizing the inclusion of LAWS restrictions in the CCW would constitute an important milestone in this regard. 
So too would the formal adoption of AI safety principles such as those established at the Beneficial AI 2017 conference, one of the many artificial intelligence summits occurring outside of traditional global governance channels.<\/p>\n<\/p>\n<p>Multilateral institutions should also continue working with nontraditional actors to ensure that AI's benefits outweigh its costs. Complex governance arrangements can provide much-needed resources and serve as stopgaps when necessary. But intergovernmental organizations, as well as the national governments that direct them, should be wary of ceding too much agenda-setting power to private organizations. The primary danger of the AI for Good Global Summit is not that it distorts perceptions of AI risk; it is that Silicon Valley will wield greater influence over AI governance with each successive summit. Since technologists often prioritize innovation over risk mitigation, this could undermine global security.<\/p>\n<\/p>\n<p>More important still, policymakers should recognize AI's unprecedented transformative power and take a more proactive approach to addressing new technologies. The greatest risk of all is inaction.<\/p>\n<p>See the article here: <\/p>\n<p><a target=\"_blank\" rel=\"nofollow\" href=\"https:\/\/www.cfr.org\/blog-post\/inaugural-ai-good-global-summit-milestone-must-focus-more-risks\" title=\"The Inaugural AI for Good Global Summit Is a Milestone but Must Focus More on Risks - Council on Foreign Relations (blog)\">The Inaugural AI for Good Global Summit Is a Milestone but Must Focus More on Risks - Council on Foreign Relations (blog)<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p> The following is a guest post by Kyle Evanoff, research associate for International Economics and U.S. Foreign Policy. 
Today through Friday, artificial intelligence (AI) experts are meeting with international leaders in Geneva, Switzerland, for the inaugural AI for Good Global Summit.  <a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/ai\/the-inaugural-ai-for-good-global-summit-is-a-milestone-but-must-focus-more-on-risks-council-on-foreign-relations-blog\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":3,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[187743],"tags":[],"class_list":["post-197182","post","type-post","status-publish","format-standard","hentry","category-ai"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/197182"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=197182"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/197182\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=197182"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=197182"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\
/v2\/tags?post=197182"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}