{"id":196615,"date":"2017-06-05T07:28:24","date_gmt":"2017-06-05T11:28:24","guid":{"rendered":"http:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/scientists-want-to-set-some-ground-rules-to-stop-ai-taking-over-the-world-sciencealert\/"},"modified":"2017-06-05T07:28:24","modified_gmt":"2017-06-05T11:28:24","slug":"scientists-want-to-set-some-ground-rules-to-stop-ai-taking-over-the-world-sciencealert","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/ai\/scientists-want-to-set-some-ground-rules-to-stop-ai-taking-over-the-world-sciencealert\/","title":{"rendered":"Scientists want to set some ground rules to stop AI taking over the world &#8211; ScienceAlert"},"content":{"rendered":"<p>Artificial intelligence technology is accelerating forward at a blistering pace, and a trio of scientists are calling for more accountability and transparency in AI, before it's too late.<\/p>\n<p>In their paper, the UK-based researchers say existing rules and regulations don't go far enough in limiting what AI can do, and recommend that robots be held to the same standards as the humans who make them.<\/p>\n<p>There are a number of issues, say the researchers, that could lead to problems down the line, including the diverse nature of the systems being developed and a lack of transparency about the inner workings of AI.<\/p>\n<p>\"Systems can make unfair and discriminatory decisions, replicate or develop biases, and behave in inscrutable and unexpected ways in highly sensitive environments that put human interests and safety at risk,\" the team reports in their paper.<\/p>\n<p>In other words: how do we know we can trust AI?
<\/p>\n<p>Even before we get to the stage of the robots rising up, AI that's unaccountable and impossible to decipher is going to cause issues, from working out the cause of an accident between self-driving cars, to understanding the reasons why a bank's computer has turned you down for a loan.<\/p>\n<p>Another issue raised concerns systems that are mostly AI but have some human inputs, which may exclude them from regulation.<\/p>\n<p>Airport security is one area where more transparency is needed, say the researchers. Image: Wachter, Mittelstadt, Floridi<\/p>\n<p>Among the suggestions put forward by the researchers is the idea of having a set of guidelines that covers robotics, AI, and decision-making algorithms as a group, though they admit that these diverse areas are hard to regulate as a whole.<\/p>\n<p>The scientists also acknowledge that adding extra transparency into AI systems can negatively affect their performance, and competing tech companies might not be too willing to share their various secret sauces.<\/p>\n<p>What's more, with AI now essentially teaching itself in some systems, we may even be beyond the stage where we can explain what's happening.<\/p>\n<p>\"The inscrutability and the diversity of AI complicate the legal codification of rights, which, if too broad or narrow, can inadvertently hamper innovation or provide little meaningful protection,\" the researchers write.<\/p>\n<p>It's a delicate balancing act.<\/p>\n<p>It's not the first time that these three researchers have criticised current AI rules: in January they called for an artificial intelligence watchdog in response to the General Data Protection Regulation drawn up by the EU.
<\/p>\n<p>\"We are already too dependent on algorithms to give up the right to question their decisions,\" one of the researchers, Luciano Floridi from the University of Oxford in the UK, told Ian Sample at The Guardian.<\/p>\n<p>Floridi and his colleagues quoted cases from Austria and Germany where they felt people hadn't been given enough information on how AI algorithms had reached their decisions.<\/p>\n<p>Ultimately, AI is going to be a boon for the human race, whether it's helping elderly people keep their independence through self-driving cars, or spotting early signs of disease before doctors can.<\/p>\n<p>Right now, though, experts are scrambling to put safety measures in place that can stop these systems getting out of control or becoming too dominant, and that's where the scientists behind this new paper want to see us putting our efforts.<\/p>\n<p>\"Concerns about fairness, transparency, interpretability, and accountability are equivalent, have the same genesis, and must be addressed together, regardless of the mix of hardware, software, and data involved,\" they write.<\/p>\n<p>The paper has been published in Science Robotics.<\/p>\n<p>View post:<\/p>\n<p><a target=\"_blank\" rel=\"nofollow\" href=\"https:\/\/www.sciencealert.com\/scientists-want-to-set-some-ground-rules-to-stop-the-march-of-creepy-robots\" title=\"Scientists want to set some ground rules to stop AI taking over the world - ScienceAlert\">Scientists want to set some ground rules to stop AI taking over the world - ScienceAlert<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Artificial intelligence technology is accelerating forward at a blistering pace, and a trio of scientists are calling for more accountability and transparency in AI, before it's too late. 
In their paper, the UK-based researchers say existing rules and regulations don't go far enough in limiting what AI can do and recommend that robots be held to the same standards as the humans who make them <a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/ai\/scientists-want-to-set-some-ground-rules-to-stop-ai-taking-over-the-world-sciencealert\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":3,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[187743],"tags":[],"class_list":["post-196615","post","type-post","status-publish","format-standard","hentry","category-ai"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/196615"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=196615"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/196615\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=196615"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=196615"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-j
son\/wp\/v2\/tags?post=196615"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}