{"id":202741,"date":"2017-06-30T17:19:05","date_gmt":"2017-06-30T21:19:05","guid":{"rendered":"http:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/generic-situation-aware-guidelines-to-help-robots-co-exist-successfully-alongside-humans-phys-org\/"},"modified":"2017-06-30T17:19:05","modified_gmt":"2017-06-30T21:19:05","slug":"generic-situation-aware-guidelines-to-help-robots-co-exist-successfully-alongside-humans-phys-org","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/robotics\/generic-situation-aware-guidelines-to-help-robots-co-exist-successfully-alongside-humans-phys-org\/","title":{"rendered":"Generic, situation-aware guidelines to help robots co-exist successfully alongside humans &#8211; Phys.Org"},"content":{"rendered":"<p>June 30, 2017. Credit: University of Hertfordshire<\/p>\n<p>Artificial intelligence experts from the University of Hertfordshire, Dr Christoph Salge and Professor Daniel Polani, have designed a concept that could lead to a new set of generic, situation-aware guidelines to help robots work and co-exist successfully alongside humans.<\/p>\n<p>Empowerment, which has been developed over the course of twelve years, is discussed in the latest edition of the journal Frontiers in Robotics and AI today as a potential replacement for Asimov's celebrated Three Laws of Robotics - the most famous set of guidelines to govern robotic behaviour to date.<\/p>\n<p>The paper shows how Empowerment has the potential to equip a robot with guidelines or motivations that cause it to a) protect itself and keep itself functioning, b) do the same for a human partner, and c) stick around and follow the human's lead. 
In the future this principle could be implemented on a range of robots that interact closely with humans in challenging environments, such as elder-care robots, hospital robots, self-driving cars or exploration robots.<\/p>\n<p>Empowering robots to change their environment<\/p>\n<p>Borrowed from sociology and psychology, the term empowerment denotes the opposite of helplessness: the ability to change one's environment and to be aware of that possibility. Over the past twelve years, leading University of Hertfordshire researchers have been developing ways to translate this social concept into a quantifiable and operational mathematical\/technical language, endowing robots with a drive towards being empowered.<\/p>\n<p>The principle of empowerment states that an agent should attempt to keep its options open, and will try to move to states in its world where it has the most options it can reliably attain. Since its introduction in 2005, researchers have generalized the empowerment principle and applied it to various scenarios. The resulting behaviours are surprisingly \"natural\" in many cases, and typically require only that the robot know the dynamics of the world, with no specialized Artificial Intelligence behaviour coded for the particular scenario.<\/p>\n<p>Empowerment has also already begun to be adopted by pioneers in artificial intelligence, such as Google DeepMind.<\/p>\n<p>Need for ethical standards and guidelines for robots<\/p>\n<p>Dr Christoph Salge, Research Fellow at the University of Hertfordshire, said: \"There is currently a lot of debate on ethics and safety in robotics, including a recent call for ethical standards or guidelines for robots. 
In particular, there is a need for robots to be guided by some form of generic, higher-level instruction if they are expected to deal with increasingly novel and complex situations in the future - acting as servants, companions and co-workers.<\/p>\n<p>\"In the challenging scenarios of the future, we will not be able to rely on a clearly defined functionality that requires robots to be safely separated from humans, or on the scenarios being simplistic or very well defined in advance.\"<\/p>\n<p>\"Imbuing a robot with these kinds of motivation is difficult, because robots have problems understanding human language, and specific behaviour rules can fail when applied to differing contexts. For example, some robots have automatisms that make them stop moving whenever they encounter resistance - a typical safety feature to avoid damaging themselves or injuring a human. But there might be a situation where a robot actually should move to provide a safer space - for instance, to move something away from the human, to get out of the human's escape route, or to actively block the human from stepping into a dangerous trajectory.\"<\/p>\n<p>\"From the outset, formalising this kind of behaviour in a generic and proactive way poses a difficult challenge. We believe that our approach can offer a solution.\"<\/p>\n<p>Daniel Polani, Professor of Artificial Intelligence at the University of Hertfordshire, added: \"As we toyed with the idea of using empowerment in more complex situations, we realized that several of the original goals of Asimov's Three Laws of Robotics might be addressable in the context of empowerment. 
<\/p>\n<p>\"While much of the public discourse is about how difficult or impossible it is to rein in robots' behaviour - and most certainly to keep robots, in the most naive sense, 'ethical' - in the paper we discuss possibilities to map such requirements into the formal and operational language of empowerment.\"<\/p>\n<p>Explore further: After 75 years, Isaac Asimov's Three Laws of Robotics need updating<\/p>\n<p>More information: Christoph Salge et al. Empowerment As Replacement for the Three Laws of Robotics, Frontiers in Robotics and AI (2017). DOI: 10.3389\/frobt.2017.00025<\/p>\n<p>Read the original:<\/p>\n<p><a target=\"_blank\" rel=\"nofollow\" href=\"https:\/\/phys.org\/news\/2017-06-situation-aware-guidelines-robots-co-exist-successfully.html\" title=\"Generic, situation-aware guidelines to help robots co-exist successfully alongside humans - Phys.Org\">Generic, situation-aware guidelines to help robots co-exist successfully alongside humans - Phys.Org<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p> June 30, 2017 Credit: University of Hertfordshire Artificial intelligence experts from the University of Hertfordshire, Dr Christoph Salge and Professor Daniel Polani, have designed a concept which could lead to a new set of generic, situation-aware guidelines to help robots work and co-exist successfully alongside humans.  <a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/robotics\/generic-situation-aware-guidelines-to-help-robots-co-exist-successfully-alongside-humans-phys-org\/\">Continue reading <span 
class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":8,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[187746],"tags":[],"class_list":["post-202741","post","type-post","status-publish","format-standard","hentry","category-robotics"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/202741"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/8"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=202741"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/202741\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=202741"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=202741"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=202741"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}