{"id":201214,"date":"2015-04-14T12:44:03","date_gmt":"2015-04-14T16:44:03","guid":{"rendered":"http:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/uncategorized\/we-need-to-do-more-than-just-point-to-ethical-questions.php"},"modified":"2015-04-14T12:44:03","modified_gmt":"2015-04-14T16:44:03","slug":"we-need-to-do-more-than-just-point-to-ethical-questions","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/artificial-intelligence\/we-need-to-do-more-than-just-point-to-ethical-questions.php","title":{"rendered":"We Need To Do More Than Just Point to Ethical Questions &#8230;"},"content":{"rendered":"<p><p>    Dancer Matt Del Rosario from    Pilobolus performs a scene along with robots created in    partnership with the engineers, programmers, and pilots of the    MIT Computer Science and Artificial Intelligence Laboratory in    New York on July 18, 2011. (TIMOTHY A. CLARY\/AFP\/Getty Images)    | JOHN MACDOUGALL via Getty Images  <\/p>\n<p>    Hundreds of artificial intelligence experts recently signed a    letter put together by    the Future of Life Institute that prompted Elon Musk to donate    $10 million to the institute. \"We recommend expanded research    aimed at ensuring that increasingly capable AI systems are    robust and beneficial: our A.I. systems must do what we want    them to do,\" the letter read.  <\/p>\n<p>    The problem is that both the letter and the corresponding    report allow anyone to read any meaning he or she wants into    \"beneficial,\" and the same applies when it comes to defining    who \"we\" are and what \"we\" want A.I. systems to do exactly. Of    course, there already exists a \"we\" who think it is beneficial    to design robust A.I. systems that will do what \"we\" want them    to do when, for example, fighting wars.  <\/p>\n<p>    But the \"we\" the institute had in mind is something different.    \"The potential benefits [of A.I.] 
are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable.\" But notice that these are presented as possibilities, not as goals. They are benefits that could happen, not benefits that should happen. Nowhere in the research priorities document are these eventualities actually called research priorities.<\/p>\n<p>One might think that such vagueness is just the result of a desire to draft a letter that a large number of people might be willing to sign on to. Yet in fact, the combination of gesturing towards what are usually called \"important ethical issues\" while steadfastly putting off serious discussion of them is pretty typical in our technology debates. We do not live in a time that gives much real thought to ethics, despite the many challenges you might think would call for it. We are hamstrung by a certain pervasive moral relativism, a sense that when you get right down to it, our \"values\" are purely subjective and, as such, really beyond any kind of rational discourse. Like \"religion,\" they are better left un-discussed in polite company.<\/p>\n<p>There are, of course, \"philosophers\" who get paid to teach and write about what is not discussed in polite company, but who would look to them as authorities? It is practically a given that on fundamental ethical questions they will agree no more, and perhaps even less, than the rest of us.<\/p>\n<p>As in the institute's research priorities document, if you want to look responsible, you include such people in the discussion. Whether they will actually influence outcomes is a question about which a certain skepticism is warranted.
After all, all participants are entitled to have their own values, are they not?<\/p>\n<p>This ethical reticence has some serious consequences. The more we are restrained by it, the less we can talk seriously about what is good and what is bad in the new world we are creating with science and technology. As our power over nature increases, you might think that the very first thing we would want is to know how that power ought to be used responsibly -- if it is used at all. If instead we hobble our ethical discussions, how will such a question be decided? An increasingly pervasive techno-libertarianism suggests that we will move quickly from \"we can do x\" to \"we should do x,\" and that our scientific and technical might will end up making right.<\/p>\n<p>A final issue ought to be of particular concern to progressives. The very idea of progress implies improvement in the human condition -- it implies that some change is for the better and some is not. Hence the idea of \"improvement\" suggests some human good that is sought or has been achieved. Without ethical standards, there is no progress -- only change.<\/p>\n<p>No one doubts that the world is changing and changing rapidly.
Organizations that want to work towards making change happen for the better will need to do much more than point piously at \"important ethical questions.\"<\/p>\n<p>View post:<\/p>\n<p><a target=\"_blank\" href=\"http:\/\/www.huffingtonpost.com\/charles-t-rubin\/ethical-artificial-intelligence_b_7034070.html\" title=\"We Need To Do More Than Just Point to Ethical Questions ...\">We Need To Do More Than Just Point to Ethical Questions ...<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Dancer Matt Del Rosario from Pilobolus performs a scene along with robots created in partnership with the engineers, programmers, and pilots of the MIT Computer Science and Artificial Intelligence Laboratory in New York on July 18, 2011. (TIMOTHY A. CLARY\/AFP\/Getty Images) | JOHN MACDOUGALL via Getty Images Hundreds of artificial intelligence experts recently signed a letter put together by the Future of Life Institute that prompted Elon Musk to donate $10 million to the institute.
<a href=\"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/artificial-intelligence\/we-need-to-do-more-than-just-point-to-ethical-questions.php\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"limit_modified_date":"","last_modified_date":"","_lmt_disableupdate":"","_lmt_disable":"","footnotes":""},"categories":[13],"tags":[],"class_list":["post-201214","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence"],"modified_by":null,"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/201214"}],"collection":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/comments?post=201214"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/201214\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/media?parent=201214"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/categories?post=201214"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/tags?post=201214"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}