{"id":189415,"date":"2017-04-25T05:04:44","date_gmt":"2017-04-25T09:04:44","guid":{"rendered":"http:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/artificial-intelligence-has-to-deal-with-its-transparency-problems-tnw\/"},"modified":"2017-04-25T05:04:44","modified_gmt":"2017-04-25T09:04:44","slug":"artificial-intelligence-has-to-deal-with-its-transparency-problems-tnw","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/artificial-intelligence\/artificial-intelligence-has-to-deal-with-its-transparency-problems-tnw\/","title":{"rendered":"Artificial Intelligence has to deal with its transparency problems &#8211; TNW"},"content":{"rendered":"<p>Artificial Intelligence breakthroughs and developments entail new challenges and problems. As AI algorithms grow more advanced, it becomes more difficult to make sense of their inner workings. Part of this is because the companies that develop them do not allow scrutiny of their proprietary algorithms. But a lot of it has to do with the fact that AI is becoming opaque due to its increasing complexity.<\/p>\n<p>And this can turn into a problem as we move forward and Artificial Intelligence becomes more prominent in our lives.<\/p>\n<p>By a wide margin, AI algorithms perform much better than their human counterparts at the tasks they master. Self-driving cars, for instance, which rely heavily on machine learning algorithms, will eventually reduce road accidents by 90 percent. AI diagnosis platforms spot early signs of dangerous illnesses much better than humans, and help save lives.
And predictive maintenance can detect signs of wear in machinery and infrastructure in ways that are impossible for humans, preventing disasters and reducing costs.<\/p>\n<p>But AI is not flawless, and it does make mistakes, albeit at a lower rate than humans. Last year, the AI-powered opponent in the game Elite Dangerous went berserk and started creating super-weapons to hunt players. In another case, Microsoft's AI chatbot Tay started spewing out racist comments within a day of its launch. And remember the time Google's face recognition started applying offensive labels to pictures?<\/p>\n<p>None of these mistakes were critical, and the damage could be shrugged off without much thought. However, neural networks, machine learning algorithms, and other subsets of AI are finding their way into more critical domains. Some of these fields include healthcare, transportation and law, where mistakes can have critical and sometimes fatal consequences.<\/p>\n<p>We humans make mistakes all the time, including fatal ones. But the difference here is that we can explain the reasons behind our actions and bear the responsibility. Even the software we used before the age of AI was built on code and rule-based logic. Mistakes could be examined and reasoned out, and culpability could be well-defined.<\/p>\n<p>The same can't be said of Artificial Intelligence. In particular, neural networks, which are the key component in many AI applications, are something of a black box. Often, not even the engineers can explain why their algorithm made a certain decision. Last year, Google's Go-playing AI stunned the world by coming up with moves that professionals couldn't think of.
<\/p>\n<p>As Nils Lenke, Senior Director of Corporate Research at Nuance, says about neural networks: &#8220;It's not always clear what happens inside. You let the network organize itself, but that really means it does organize itself: it doesn't necessarily tell you how it did it.&#8221;<\/p>\n<p>This can cause problems if those algorithms have full control in making decisions. Who will be responsible if a self-driving car causes a fatal accident? You can't hold any of the passengers accountable for something they didn't control. And the manufacturers will have a hard time explaining an event that involves so many complexities and variables. And don't expect the car itself to start explaining its actions.<\/p>\n<p>The same can be said of an AI application that has autonomous control over a patient's treatment process, or a risk assessment algorithm that decides whether convicts stay in prison or are free to go.<\/p>\n<p>So can we trust Artificial Intelligence to make decisions on its own? For non-critical tasks, such as advertising, games and Netflix suggestions, where mistakes are tolerable, we can. But for situations where the social, legal, economic and political repercussions can be disastrous, we can't; not yet. The same goes for scenarios where human lives are at stake. We're still not ready to forfeit control to the robots.<\/p>\n<p>As Lenke says: &#8220;[Y]ou need to look at the tasks at hand. For some, it's not really critical if you don't fully understand what happens, or even if the network is wrong. A system that suggests music, for example: all that can go wrong is, you listen to a boring piece of music. But with applications like enterprise customer service, where transactions are involved, or computer-assisted clinical documentation improvement, what we typically do there is, we don't put the AI in isolation, but we have it co-work with a human being.&#8221;
<\/p>\n<p>For the moment, Artificial Intelligence will show its full potential in complementing human efforts. We're already seeing inroads in fields such as medicine and cybersecurity. AI takes care of data-oriented research and analysis and presents human experts with invaluable insights and suggestions. The experts then make the decisions and assume responsibility for the possible consequences.<\/p>\n<p>In the meantime, firms and organizations must do more to make Artificial Intelligence more transparent and understandable. An example is OpenAI, a nonprofit research company founded by Tesla's Elon Musk and Y Combinator's Sam Altman. As the name suggests, OpenAI's goal is to open AI research and development to everyone, independent of financial interests.<\/p>\n<p>Another organization, Partnership on AI, aims to raise awareness of and deal with AI challenges such as bias. Founded by tech giants including Microsoft, IBM and Google, the Partnership will also work on AI ethics and best practices.<\/p>\n<p>Eventually, for better or worse, we'll achieve Artificial General Intelligence: AI that is on par with the human brain. Maybe then our cars and robots will be able to go to court and stand trial for their actions. But then we'll be dealing with totally different problems.<\/p>\n<p>That's for the future. In the present, human-dominated world, to make critical decisions you either have to be flawless or accountable. For the moment, AI falls into neither of those categories.
<\/p>\n<p>Read the rest here:<\/p>\n<p><a target=\"_blank\" rel=\"nofollow\" href=\"https:\/\/thenextweb.com\/artificial-intelligence\/2017\/04\/23\/artificial-intelligence-has-to-deal-with-its-transparency-problems\/\" title=\"Artificial Intelligence has to deal with its transparency problems - TNW\">Artificial Intelligence has to deal with its transparency problems - TNW<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Artificial Intelligence breakthroughs and developments entail new challenges and problems. As AI algorithms grow more advanced, it becomes more difficult to make sense of their inner workings. Part of this is because the companies that develop them do not allow scrutiny of their proprietary algorithms. <a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/artificial-intelligence\/artificial-intelligence-has-to-deal-with-its-transparency-problems-tnw\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":3,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[187742],"tags":[],"class_list":["post-189415","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/189415"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=189415"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/189415\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=189415"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=189415"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=189415"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}