{"id":205011,"date":"2017-07-11T22:26:01","date_gmt":"2017-07-12T02:26:01","guid":{"rendered":"http:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/should-self-driving-cars-make-ethical-decisions-like-we-do-singularity-hub\/"},"modified":"2017-07-11T22:26:01","modified_gmt":"2017-07-12T02:26:01","slug":"should-self-driving-cars-make-ethical-decisions-like-we-do-singularity-hub","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/singularity\/should-self-driving-cars-make-ethical-decisions-like-we-do-singularity-hub\/","title":{"rendered":"Should Self-Driving Cars Make Ethical Decisions Like We Do? &#8211; Singularity Hub"},"content":{"rendered":"<p>An enduring problem with self-driving cars has been how to program them to make ethical decisions in unavoidable crashes. A new study has found it's actually surprisingly easy to model how humans make them, opening a potential avenue to solving the conundrum.<\/p>\n<p>Ethicists have tussled with the so-called trolley problem for decades. If a runaway trolley, or tram, is about to hit a group of people, and by pulling a lever you can make it switch tracks so it hits only one person, should you pull the lever?<\/p>\n<p>But for those designing self-driving cars the problem is more than just a thought experiment, as these vehicles will at times have to make similar decisions. If a pedestrian steps out into the road suddenly, the car may have to decide between swerving and potentially injuring its passengers or knocking down the pedestrian.<\/p>\n<p>Previous research had shown that the moral judgements at the heart of how humans deal with these kinds of situations are highly contextual, making them hard to model and therefore replicate in machines.
<\/p>\n<p>But when researchers from the University of Osnabrück in Germany used immersive virtual reality to expose volunteers to variations of the trolley problem and studied how they behaved, they were surprised at what they found.<\/p>\n<p>“We found quite the opposite,” Leon Sütfeld, first author of a paper on the research in the journal Frontiers in Behavioral Neuroscience, said in a press release. “Human behavior in dilemma situations can be modeled by a rather simple value-of-life-based model that is attributed by the participant to every human, animal, or inanimate object.”<\/p>\n<p>The implication, the researchers say, is that human-like decision-making in these situations would not be that complicated to incorporate into driverless vehicles, and they suggest this could present a viable solution for programming ethics into self-driving cars.<\/p>\n<p>“Now that we know how to implement human ethical decisions into machines we, as a society, are still left with a double dilemma,” Peter König, a senior author of the paper, said in the press release. “Firstly, we have to decide whether moral values should be included in guidelines for machine behavior and secondly, if they are, should machines act just like humans.”<\/p>\n<p>There are clear pitfalls with both questions. Self-driving cars present an obvious case where a machine could have to make high-stakes ethical decisions that most people would agree are fairly black or white.<\/p>\n<p>But once you start insisting on programming ethical decision-making into some autonomous systems, it could be hard to know where to draw the line.<\/p>\n<p>Should a computer program designed to decide on loan applications also be made to mimic the moral judgements a human bank worker most likely would if face-to-face with a client? What about one meant to determine whether or not a criminal should be granted bail?
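The value-of-life model Sütfeld describes has a simple computational reading: assign each entity in the scene a value, then pick the maneuver that sacrifices the least total value. The sketch below illustrates that reading in Python; the entity names, the numeric values, and the function names are all illustrative assumptions of mine, not figures or code from the study.

```python
# Illustrative sketch of a value-of-life decision rule: each entity a
# maneuver would hit carries an assumed value, and the car chooses the
# maneuver whose path destroys the least total value. The numbers are
# placeholders, not values measured in the study.
VALUE_OF_LIFE = {
    "adult": 1.0,
    "child": 1.2,
    "dog": 0.3,
    "trash_can": 0.01,
}

def cost(entities):
    """Total value lost if the car's path hits these entities."""
    return sum(VALUE_OF_LIFE[e] for e in entities)

def choose_maneuver(options):
    """Pick the maneuver that hits the least total value.

    `options` maps a maneuver name to the list of entities it would hit.
    """
    return min(options, key=lambda m: cost(options[m]))

# Example: staying on course hits an adult, swerving hits a trash can.
decision = choose_maneuver({
    "stay": ["adult"],
    "swerve": ["trash_can"],
})
print(decision)  # -> swerve
```

The code itself is trivial; the hard part, and the subject of the rest of this article, is deciding who gets to choose the numbers in the value table.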
<\/p>\n<p>Both represent real examples of autonomous systems operating in contexts where a human would likely incorporate ethical judgements in their decision-making. But unlike the self-driving car example, a person's judgement in these situations is likely to be highly colored by their life experience and political views. Modeling these kinds of decisions may not be so easy.<\/p>\n<p>Even if human behavior is consistent, that doesn't mean it's necessarily the best way of doing things, as König alludes to. Humans are not always very rational and can be afflicted by all kinds of biases that could feed into their decision-making.<\/p>\n<p>The alternative, though, is hand-coding morality into these machines, and that is fraught with complications. For a start, the chances of reaching an unambiguous consensus on what particular ethical code machines should adhere to are slim.<\/p>\n<p>Even if you can, though, a study in Science I covered last June suggests it wouldn't necessarily solve the problem. A survey of US residents found that most people thought self-driving cars should be governed by utilitarian ethics that seek to minimize the total number of deaths in a crash, even if it harms the passengers.<\/p>\n<p>But it also found most respondents would not ride in these vehicles themselves or support regulations enforcing utilitarian algorithms on them.<\/p>\n<p>In the face of such complexities, programming self-driving cars to mimic people's instinctive decision-making could be an attractive alternative. For a start, building models of human behavior simply required the researchers to collect data and feed it into a machine learning system.<\/p>\n<p>Another upside is that it would prevent a situation where programmers are forced to write algorithms that could potentially put people in harm's way. 
By basing the behavior of self-driving cars on a model of our collective decision-making we would, in a way, share the responsibility for the decisions they make.<\/p>\n<p>At the end of the day, humans are not perfect, but over the millennia we've developed some pretty good rules of thumb for life-and-death situations. Faced with the potential pitfalls of trying to engineer self-driving cars to be better than us, it might just be best to trust those instincts.<\/p>\n<p>Stock Media provided by Iscatel \/ Pond5<\/p>\n<p>See the rest here:<\/p>\n<p><a target=\"_blank\" rel=\"nofollow\" href=\"https:\/\/singularityhub.com\/2017\/07\/11\/should-self-driving-cars-make-ethical-decisions-like-we-do\/\" title=\"Should Self-Driving Cars Make Ethical Decisions Like We Do? - Singularity Hub\">Should Self-Driving Cars Make Ethical Decisions Like We Do? - Singularity Hub<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>An enduring problem with self-driving cars has been how to program them to make ethical decisions in unavoidable crashes. 
A new study has found it's actually surprisingly easy to model how humans make them, opening a potential avenue to solving the conundrum. <a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/singularity\/should-self-driving-cars-make-ethical-decisions-like-we-do-singularity-hub\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[187807],"tags":[],"class_list":["post-205011","post","type-post","status-publish","format-standard","hentry","category-singularity"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/205011"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=205011"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/205011\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=205011"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=205011"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=205011"}],"curies":[{"name":"wp","href":"h
ttps:\/\/api.w.org\/{rel}","templated":true}]}}