{"id":200812,"date":"2017-06-23T06:15:45","date_gmt":"2017-06-23T10:15:45","guid":{"rendered":"http:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/why-the-military-and-corporate-america-want-to-make-ai-explain-itself-fast-company\/"},"modified":"2017-06-23T06:15:45","modified_gmt":"2017-06-23T10:15:45","slug":"why-the-military-and-corporate-america-want-to-make-ai-explain-itself-fast-company","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/ai\/why-the-military-and-corporate-america-want-to-make-ai-explain-itself-fast-company\/","title":{"rendered":"Why The Military And Corporate America Want To Make AI Explain Itself &#8211; Fast Company"},"content":{"rendered":"<p><p>      Modern artificial intelligence is smart enough to beat humans      at chess, understand speech, and even drive a car.    <\/p>\n<p>      But one area where machine-learning algorithms still struggle      is explaining to humans how and why theyre making particular      decisions. That can be fine if computers are just playing      games, but for more serious applications people are a lot      less willing to trust a machine whose thought processes they      cant understand.    <\/p>\n<p>      If AI is being used to make decisions about who to hire or      whether to extend a bank loan, people want to make sure the      algorithm hasnt absorbed race or gender biases from the      society that trained it. If a computer is going to drive a      car, engineers will want to make sure it doesnt have any      blind spots that will send it careening off the road in      unexpected situations. And if a machine is going to help make      medical diagnoses, doctors and patients will want to know      what symptoms and readings its relying on.    <\/p>\n<p>      If you go to a doctor and the doctor says, Hey, you have      six months to live, and offers absolutely no explanation as      to why the doctor is saying that, that would be a pretty poor      doctor, says Sameer      Singh, an assistant professor of computer science at the      University of California at Irvine.    <\/p>\n<p>      Singh is a coauthor of a      frequently cited paper published last year that proposes      a system for making machine-learning decisions more      comprehensible to humans. The system, known as LIME,      highlights parts of input data that factor heavily in the      computers decisions. In one example from the paper, an      algorithm trained to distinguish forum posts about      Christianity from those about atheism appears accurate at      first blush, but LIME reveals that its relying heavily on      forum-specific features, like the names of prolific      posters.    <\/p>\n<p>      Developing explainable AI, as such systems are frequently      called, is more than an academic exercise. Its of growing      interest to commercial users of AI and to the military.      Explanations of how algorithms are thinking make it easier      for leaders to adopt artificial intelligence systems within      their organizationsand easier to challenge them when theyre      wrong.    <\/p>\n<p>      If they disagree with that decision, they will be way more      confident in going back to the people who wrote that and say      no, this doesnt make sense because of this, says Mark      Hammond, cofounder and CEO of AI startup Bonsai.    
<\/p>\n<p>      Last month, the Defense Advanced Research Projects Agency      signed formal agreements with 10 research teams in a       four-year, multimillion-dollar program designed to      develop new explainable AI systems and interfaces for      delivering the explanations to humans. Some of the teams will      work on systems for operating simulated autonomous devices,      like self-driving cars, while others will work on algorithms      for analyzing mounds of data, like intelligence reports.    <\/p>\n<p>      Each year, well have a major evaluation where well bring      in groups of users who will sit down with these systems,      says David Gunning, program manager in DARPAs Information      Innovation Office. Gunning says he imagines that by the end      of the program, some of the prototype projects will be ready      for further development for military or other use.    <\/p>\n<p>      Deep learning, loosely inspired by the networks of neurons in      the human brain, uses sample data to develop multilayered      sets of huge matrices of numbers. Algorithms then harness      those matrices to analyze and categorize data, whether      theyre looking for familiar faces in a crowd or trying to      spot the best move on a chess board. Typically, they process      information starting at the lowest level, whether thats the      individual pixels from an image or individual letters of      text. The matrices are used to decide how to weight each      facet of that data through a series of complex mathematical      formulas. While the algorithms often prove quite accurate,      the large arrays of seemingly arbitrary numbers are      effectively beyond human comprehension.    <\/p>\n<p>      The whole process is not transparent, says       Xia Ben Hu, an assistant professor of computer science      and engineering at Texas A&M and leader of one of the      teams in the DARPA program. His groupaims to produce      what it calls shallow models: mathematical constructs that      behave, at least in certain cases, similarly to deep-learning      algorithms while being simple enough for humans to      understand.    <\/p>\n<p>      Another team, from the Stanford University spin-off research      group SRI International, plans      to use what are called generative adversarial networks. Those      are pairs of AI systems in which one is trained to produce      realistic data in a particular category and the other is      trained to distinguish the generated data from authentic      samples. The purpose, in this case, is to generate      explanations similar to those that might be given by humans.    <\/p>\n<p>      The team plans to test its approach on a data set called      MovieQA,      which consists of 15,000 multiple choice questions about      movies along with data like their scripts and subtitled video      clips.    <\/p>\n<p>      You have the movie, you have scripts, you have subtitles.      You have all this rich data that is time-synched in      situations, says SRI senior computer scientist Mohamed Amer.      The question could be, who was the lead actor in The      Matrix?    <\/p>\n<p>      Ideally, the system would not only deliver the correct      answer, it would let users highlight certain sections of the      questions and answers to see the sections of the script and      film it used to figure out the answer.    <\/p>\n<p>      You hover over a verb, for example, it will show you a pose      of the person, for example, doing the action, Amer says.      
"The whole process is not transparent," says Xia Ben Hu, an assistant professor of computer science and engineering at Texas A&M and leader of one of the teams in the DARPA program. His group aims to produce what it calls shallow models: mathematical constructs that behave, at least in certain cases, similarly to deep-learning algorithms while being simple enough for humans to understand.

Another team, from the Stanford University spin-off research group SRI International, plans to use what are called generative adversarial networks. Those are pairs of AI systems in which one is trained to produce realistic data in a particular category and the other is trained to distinguish the generated data from authentic samples. The purpose, in this case, is to generate explanations similar to those that might be given by humans.

The team plans to test its approach on a data set called MovieQA, which consists of 15,000 multiple-choice questions about movies, along with data like their scripts and subtitled video clips.

"You have the movie, you have scripts, you have subtitles. You have all this rich data that is time-synched in situations," says SRI senior computer scientist Mohamed Amer. "The question could be, who was the lead actor in The Matrix?"

Ideally, the system would not only deliver the correct answer, it would let users highlight certain sections of the questions and answers to see the sections of the script and film it used to figure out the answer.

"You hover over a verb, for example, it will show you a pose of the person, for example, doing the action," Amer says. "The idea is to kind of bring it down to an interpretable feature the person can actually visualize, where the person is not a machine-learning developer but is just a regular user of the system."

Steven Melendez is an independent journalist living in New Orleans.

Read the original here: https://www.fastcompany.com/40433959/why-the-military-and-corporate-america-want-to-make-ai-explain-itself