{"id":225723,"date":"2017-07-04T16:29:47","date_gmt":"2017-07-04T20:29:47","guid":{"rendered":"http:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/uncategorized\/why-robot-understanding-ai-ethics-the-register.php"},"modified":"2022-06-02T17:56:21","modified_gmt":"2022-06-02T21:56:21","slug":"why-robot-understanding-ai-ethics-the-register","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/artificial-intelligence\/why-robot-understanding-ai-ethics-the-register.php","title":{"rendered":"Why, Robot? Understanding AI ethics &#8211; The Register"},"content":{"rendered":"<p>Not many people know that Isaac Asimov didn't originally write his three laws of robotics for I, Robot. They actually first appeared in \"Runaround\", the 1942 short story. Robots mustn't do harm, he said, or allow others to come to harm through inaction. They must obey orders given by humans unless those orders violate the first law. And a robot must protect itself, so long as doing so doesn't contravene laws one and two.<\/p>\n<p>75 years on, we're still mulling that future. Asimov's rules seem more focused on strong AI: the kind of AI you'd find in HAL, but not in an Amazon Echo. Strong AI mimics the human brain, much like an evolving child, until it becomes sentient and can handle any problem you throw at it, as a human would. That's still a long way off, if it ever comes to pass.<\/p>\n<p>Instead, today we're dealing with narrow AI, in which algorithms cope with constrained tasks. It recognises faces, understands that you just asked what the weather will be like tomorrow, or tries to predict whether you should give someone a loan.<\/p>\n<p>Making rules for this kind of AI is quite difficult enough to be getting on with for now, though, says Jonathan M. Smith.
He's a member of the Association for Computing Machinery and a professor of computer science at the University of Pennsylvania, and he says there's still plenty of ethics to unpack at this level.<\/p>\n<p>\"The shorter-term issues are very important because they're at the boundary of technology and policy,\" he says. \"You don't want the fact that someone has an AI making decisions to escape, avoid or divert past decisions that we made in the social or political space about how we run our society.\"<\/p>\n<p>There are some thorny problems already emerging, whether real or imagined. One of them is a variation on the trolley problem, a kind of Sophie's Choice scenario in which a train is bearing down on two sets of people. If you do nothing, it kills five people. If you actively pull a lever, the signals switch and it kills one person. You'd have to choose.<\/p>\n<p>Critics of AI often adapt this to self-driving cars. A child runs into the road and there's no time to stop, but the software could choose to swerve and hit an elderly person, say. What should the car do, and who gets to make that decision? There are many variations on this theme, and MIT even collected some of them into an online game.<\/p>\n<p>There are classic counter-arguments: the self-driving car wouldn't be speeding in a school zone, so the scenario is less likely to occur. Utilitarians might argue that the number of deaths worldwide would shrink if distracted, drunk or tired drivers were eliminated, which means society wins, even if one person loses.<\/p>\n<p>You might point out that a human would have killed one of the people in the scenario too, so why are we even having this conversation? Yasemin Erden, a senior lecturer in philosophy at Queen Mary's University, has an answer for that.
She spends a lot of time considering ethics and computing on the committee of the Society for the Study of Artificial Intelligence and Simulation of Behaviour.<\/p>\n<p>Decisions taken in advance suggest ethical intent and incur others' judgement, whereas acting on the spot doesn't, she points out.<\/p>\n<p>\"The programming of a car with ethical intentions, knowing what the risk could be, means that the public could be less willing to view things as accidents,\" she says. In other words, as long as you were driving responsibly it's considered OK for you to say \"that person just jumped out at me\" and be excused for whomever you hit, but AI algorithms don't have that luxury.<\/p>\n<p>If computers are supposed to be faster and more intentional than us in some situations, then how they're programmed matters. Experts are calling for accountability.<\/p>\n<p>\"I'd need to cross-examine my algorithm, or at least know how to find out what was happening at the time of the accident,\" says Kay Firth-Butterfield. She is a lawyer specialising in AI issues and executive director at AI Austin, a non-profit AI think tank set up this March that evolved from the Ethics Advisory Panel, an ethics board established by AI firm Lucid.<\/p>\n<p>We need a way to understand what AI algorithms are \"thinking\" when they do things, she says. \"How can you say to a patient's family, if they died because of an intervention, 'we don't know how this happened'?\" So accountability and transparency are important.<\/p>\n<p>Puzzling over why your car swerved around the dog but backed over the cat isn't the only AI problem that calls for transparency. Biased AI algorithms can cause all kinds of problems. Facial recognition systems may ignore people of colour because their training data didn't have enough faces fitting that description, for example.<\/p>\n<p>Or maybe AI is self-reinforcing to the detriment of society.
If social media AI learns that you like to see material supporting one kind of politics and only ever shows you that, then over time we could lose the capacity for critical debate.<\/p>\n<p>\"J.S. Mill made the argument that if ideas aren't challenged then they are at risk of becoming dogma,\" Erden recalls, nicely summarising what she calls the filter bubble problem. (Mill was a 19th-century utilitarian philosopher and a strong proponent of logic and reasoning based on empirical evidence, so he probably wouldn't have enjoyed arguing with people on Facebook much.)<\/p>\n<p>So if AI creates billions of people unwilling or even unable to recognise and civilly debate each other's ideas, isn't that an ethical issue that needs addressing?<\/p>\n<p>Another issue concerns the forming of emotional relationships with robots. Firth-Butterfield is interested in two ends of the spectrum: children and the elderly. Kids love to suspend disbelief, which makes robotic companions, with their AI conversational capabilities, all the easier to embrace. She frets about AI robots that may train children to be ideal customers for their products.<\/p>\n<p>Similarly, at the other end of the spectrum, she muses about AI robots used to provide care and companionship to the elderly.<\/p>\n<p>\"Is it against their human rights not to interact with human beings but just to be looked after by robots? I think that's going to be one of the biggest decisions of our time,\" she says.<\/p>\n<p>That highlights a distinction in AI ethics between how an algorithm does something and what we're trying to achieve with it. Alex London, professor of philosophy and director of Carnegie Mellon University's Center for Ethics and Policy, says that the driving question is what the machine is trying to do.<\/p>\n<p>The ethics of that is probably one of the most fundamental questions.
If the machine is out to serve a goal that's problematic, then ethical programming, the question of how it can more ethically advance that goal, sounds misguided, he warns.<\/p>\n<p>That's tricky, because much comes down to intent. A robot could be great if it improves the quality of life for an elderly person as a supplement to frequent visits and calls with family. Using the same robot as an excuse to neglect elderly relatives would be the inverse. Like any enabling technology, from the kitchen knife to nuclear fusion, the tool itself isn't good or bad; what matters is the intent of the person using it. Even then, points out Erden, what if someone thinks they're doing good with a tool but someone else doesn't?<\/p>\n<p>Read the original post:<\/p>\n<p><a target=\"_blank\" rel=\"nofollow noopener\" href=\"https:\/\/www.theregister.co.uk\/2017\/07\/04\/ai_ethics_and_what_next\/\" title=\"Why, Robot? Understanding AI ethics - The Register\">Why, Robot? Understanding AI ethics - The Register<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Not many people know that Isaac Asimov didn't originally write his three laws of robotics for I, Robot.
<a href=\"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/artificial-intelligence\/why-robot-understanding-ai-ethics-the-register.php\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"limit_modified_date":"","last_modified_date":"","_lmt_disableupdate":"","_lmt_disable":"","footnotes":""},"categories":[13],"tags":[],"class_list":["post-225723","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence"],"modified_by":"Danzig","_links":{"self":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/225723"}],"collection":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/comments?post=225723"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/225723\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/media?parent=225723"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/categories?post=225723"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/tags?post=225723"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}