{"id":203440,"date":"2016-05-13T01:43:31","date_gmt":"2016-05-13T05:43:31","guid":{"rendered":"http:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/uncategorized\/the-guardian-view-on-artificial-intelligence-look-out-its.php"},"modified":"2016-05-13T01:43:31","modified_gmt":"2016-05-13T05:43:31","slug":"the-guardian-view-on-artificial-intelligence-look-out-its","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/artificial-intelligence\/the-guardian-view-on-artificial-intelligence-look-out-its.php","title":{"rendered":"The Guardian view on artificial intelligence: look out, its &#8230;"},"content":{"rendered":"<p><p>  A monk comes face to face with his robot counterpart called  Xianer at a Buddhist temple on the outskirts of Beijing.  Photograph: Kim Kyung-Hoon\/Reuters<\/p>\n<p>    Google artificial intelligence    project DeepMind is building software to trawl    through millions of patient    records from three NHS hospitals to detect early signs of    kidney disease. The project raises deep questions not only    about data protection but about the ethics of artificial    intelligence. But these are not the obvious questions about the    ethics of autonomous, intelligent computers.  <\/p>\n<p>    Computer programs can now do some things that it once seemed    only human beings could do, such as playing an excellent    game of Go. But even the    smartest computer cannot make ethical choices, because it has    no purpose of its own in life. The program that plays Go cannot    decide that it also wants a driving licence like its cousin,    the program that drives Googles cars.  <\/p>\n<p>    The ethical questions involved in the deal are partly    political: they have to do with trusting a private US    corporation with a great deal of data from which it hopes in    the long term to make a great deal of money. 
Further questions are raised by the mere existence, or construction, of a giant data store containing unimaginable amounts of detail about patients and their treatments. This might yield useful medical knowledge. It could certainly yield all kinds of damaging personal knowledge. But questions of medical confidentiality, although serious, are not new in principle or in practice, and they may not be the most disturbing aspects of the deal.<\/p>\n<p>What frightens people is the idea that we are constructing machines that will think for themselves, and will be able to keep secrets from us that they will use to their own advantage rather than to ours. The tendency to invest such powers in lifeless and unintelligent things goes back to the very beginnings of AI research and beyond.<\/p>\n<p>In the 1960s, Joseph Weizenbaum, one of the pioneers of computer science, created the chatbot Eliza, which mimicked a non-directional psychoanalyst. It used cues supplied by its users (&#8220;I&#8217;m worried about my father&#8221;) to ask open-ended questions: &#8220;How do you feel about your father?&#8221; The astonishing thing was that students were happy to answer at length, as if they had been asked by a sympathetic, living listener. Weizenbaum was horrified, especially when his secretary, who knew perfectly well what Eliza was, asked him to leave the room while she talked to it.<\/p>\n<p>Eliza&#8217;s latest successor, Xianer, the Worthy Stupid Robot Monk, functions in a Buddhist temple in Beijing, where it dispenses wisdom in response to questions asked through a touchpad on its chest. People seem to ask it serious questions such as &#8220;What is love?&#8221; and &#8220;How do I get ahead in life?&#8221;; the answers are somewhere between a horoscope and a homily. Since they are not entirely predictable, Xianer is treated as a primitive kind of AI.
<\/p>\n<p>Most discussions of AI, and most calls for an ethics of AI, assume we will have no problem recognising it once it emerges. The examples of Eliza and Xianer show this is questionable. They get treated as intelligent even though we know they are not. But that is only one error we could make when approaching the problem. We might also fail to recognise intelligence when it does exist, or while it is emerging.<\/p>\n<p>The myth of Frankenstein&#8217;s monster is misleading. There might be no lightning-bolt moment when we realise that it is alive and uncontrollable. Intelligent brains are built from billions of neurones that are not themselves intelligent. If a post-human intelligence arises, it will also be from a system of parts that do not, as individuals, share in the post-human intelligence of the whole. Parts of it would be human. Parts would be computer systems. No part could understand the whole, but all would share its interests without completely comprehending them.<\/p>\n<p>Such hybrid systems would not be radically different from earlier social inventions made by humans and their tools, but their powers would be unprecedented. Constructing and enforcing an ethical framework for them would be as difficult as it has been to lay down principles of international law. But it may become every bit as urgent.<\/p>\n<p>Go here to read the rest:<\/p>\n<p><a target=\"_blank\" href=\"http:\/\/www.theguardian.com\/commentisfree\/2016\/may\/08\/the-guardian-view-on-artificial-intelligence-look-out-its-ahead-of-you\" title=\"The Guardian view on artificial intelligence: look out, its ...\">The Guardian view on artificial intelligence: look out, it&#8217;s &#8230;<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>A monk comes face to face with his robot counterpart, called Xianer, at a Buddhist temple on the outskirts of Beijing.
<a href=\"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/artificial-intelligence\/the-guardian-view-on-artificial-intelligence-look-out-its.php\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"limit_modified_date":"","last_modified_date":"","_lmt_disableupdate":"","_lmt_disable":"","footnotes":""},"categories":[13],"tags":[],"class_list":["post-203440","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence"],"modified_by":null,"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/203440"}],"collection":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/comments?post=203440"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/203440\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/media?parent=203440"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/categories?post=203440"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/tags?post=203440"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}