{"id":183925,"date":"2017-03-19T16:27:28","date_gmt":"2017-03-19T20:27:28","guid":{"rendered":"http:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/did-artificial-intelligence-deny-you-credit-fortune\/"},"modified":"2017-03-19T16:27:28","modified_gmt":"2017-03-19T20:27:28","slug":"did-artificial-intelligence-deny-you-credit-fortune","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/artificial-intelligence\/did-artificial-intelligence-deny-you-credit-fortune\/","title":{"rendered":"Did Artificial Intelligence Deny You Credit? &#8211; Fortune"},"content":{"rendered":"<p><p>                  Photograph by Image                  Source\/Getty Images                <\/p>\n<p>    People who apply for a loan from a bank    or credit card company, and are turned down, are owed an    explanation of why that happened. Its a good idea  because it    can help teach people how to repair their damaged credit  and    its a federal law, the Equal Credit    Opportunity Act    . Getting an answer wasnt much of a    problem in years past, when humans made those decisions. But    today, as artificial intelligence systems increasingly assist    or replace people making credit decisions, getting those    explanations has become much more difficult.       <\/p>\n<p>    Traditionally, a loan officer who    rejected an application could tell a would-be borrower there    was a problem with their income level, or employment history,    or     whatever the issue was     . But    computerized systems that use complex     machine learning      models are    difficult to explain, even for experts.  <\/p>\n<p>    Consumer credit decisions are just one    way this problem arises. Similar concerns      exist in     health care     ,     online marketing      and even     criminal justice     . 
My own interest in this area began when a research group I was part of discovered gender bias in how online ads were targeted, but could not explain why it happened.<\/p>\n<p>All those industries, and many others, that use machine learning to analyze processes and make decisions have a little over a year to get a lot better at explaining how their systems work. In May 2018, the new European Union General Data Protection Regulation takes effect, including a section giving people a right to get an explanation for automated decisions that affect their lives. What shape should these explanations take, and can we actually provide them?<\/p>\n<p>One way to describe why an automated decision came out the way it did is to identify the factors that were most influential in the decision. How much of a credit denial decision was because the applicant didn't make enough money, or because he had failed to repay loans in the past?<\/p>\n<p>My research group at Carnegie Mellon University, including PhD student Shayak Sen and then-postdoc Yair Zick, created a way to measure the relative influence of each factor. We call it Quantitative Input Influence.<\/p>\n<p>In addition to giving better understanding of an individual decision, the measurement can also shed light on a group of decisions: Did an algorithm deny credit primarily because of financial concerns, such as how much an applicant already owes on other debts? Or was the applicant's ZIP code more important, suggesting more basic demographics such as race might have come into play?<\/p>\n<p>When a system makes decisions based on multiple factors, it is important to identify which factors cause the decisions and their relative contribution.
<\/p>\n<p>For example, imagine a credit-decision system that takes just two inputs, an applicant's debt-to-income ratio and her race, and has been shown to approve loans only for Caucasians. Knowing how much each factor contributed to the decision can help us understand whether it's a legitimate system or whether it's discriminating.<\/p>\n<p>An explanation could just look at the inputs and the outcome and observe correlation: non-Caucasians didn't get loans. But this explanation is too simplistic. Suppose the non-Caucasians who were denied loans also had much lower incomes than the Caucasians whose applications were successful. Then this explanation cannot tell us whether the applicants' race or debt-to-income ratio caused the denials.<\/p>\n<p>Our method can provide this information. Telling the difference means we can tease out whether the system is unjustly discriminating or looking at legitimate criteria, like applicants' finances.<\/p>\n<p>To measure the influence of race in a specific credit decision, we redo the application process, keeping the debt-to-income ratio the same but changing the race of the applicant. If changing the race does affect the outcome, we know race is a deciding factor. If not, we can conclude the algorithm is looking only at the financial information.<\/p>\n<p>In addition to identifying factors that are causes, we can measure their relative causal influence on a decision. We do that by randomly varying the factor (e.g., race) and measuring how likely it is for the outcome to change. The higher the likelihood, the greater the influence of the factor.<\/p>\n<p>Our method can also incorporate multiple factors that work together.
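The counterfactual test described above can be sketched in a few lines of Python. This is an illustrative toy, not the authors' implementation: the model, the applicant record and the candidate factor values are all hypothetical, and instead of random sampling it simply enumerates each alternative value of a factor and reports the fraction of interventions that flip the decision.

```python
def influence(model, applicant, factor, values):
    # Influence sketch: replace one factor with each alternative value,
    # holding everything else fixed, and report the share of
    # interventions that change the decision.
    base = model(applicant)
    flips = 0
    for v in values:
        perturbed = dict(applicant)
        perturbed[factor] = v
        if model(perturbed) != base:
            flips += 1
    return flips / len(values)

# Toy model that looks only at finances, so race should show zero influence.
def model(app):
    return 'approve' if app['debt_to_income'] < 0.4 else 'deny'

alice = {'debt_to_income': 0.3, 'race': 'A'}
print(influence(model, alice, 'race', ['A', 'B']))            # 0.0
print(influence(model, alice, 'debt_to_income', [0.3, 0.5]))  # 0.5
```

Because this model ignores race, intervening on race can never flip the decision and its measured influence is exactly zero, while the debt-to-income ratio carries all the influence.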
Consider a decision system that grants credit to applicants who meet two of three criteria: a credit score above 600, ownership of a car, and full repayment of a home loan. Say an applicant, Alice, with a credit score of 730 and no car or home loan, is denied credit. She wonders whether her car ownership status or home loan repayment history is the principal reason.<\/p>\n<p>An analogy can help explain how we analyze this situation. Consider a court where decisions are made by the majority vote of a panel of three judges: one conservative, one liberal, and a third who is a swing vote, someone who might side with either of her colleagues. In a 2-1 conservative decision, the swing judge had a greater influence on the outcome than the liberal judge.<\/p>\n<p>The factors in our credit example are like the three judges. The first judge commonly votes in favor of the loan, because many applicants have a high enough credit score. The second judge almost always votes against the loan, because very few applicants have ever fully repaid a home loan. So the decision comes down to the swing judge, who in Alice's case rejects the loan because she doesn't own a car.<\/p>\n<p>We can do this reasoning precisely by using cooperative game theory, a framework for analyzing how different factors contribute to a single outcome. In particular, we combine our measurements of relative causal influence with the Shapley value, a way to calculate how to attribute influence to multiple factors. Together, these form our Quantitative Input Influence measurement.<\/p>\n<p>So far we have evaluated our methods on decision systems that we created by training common machine learning algorithms with real-world data sets. Evaluating algorithms at work in the real world is a topic for future work.
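The two-of-three rule and the Shapley-value attribution can be made concrete with a small Python sketch. Everything here is an assumption for illustration: the population rates for each criterion are invented to echo the judge analogy (most applicants clear the score bar, half own a car, very few have repaid a home loan), and the characteristic function is the probability that Alice's denial flips to approval when a subset of factors is resampled from those rates. It is a toy in the spirit of Quantitative Input Influence, not the paper's actual algorithm.

```python
from itertools import permutations, product

# Hypothetical population rates for each criterion (invented numbers).
rates = {'score_above_600': 0.6, 'owns_car': 0.5, 'repaid_home_loan': 0.05}
# Alice: good credit score, no car, no repaid home loan.
alice = {'score_above_600': True, 'owns_car': False, 'repaid_home_loan': False}

def approve(app):
    # The two-of-three rule from the example.
    return sum(app.values()) >= 2

def flip_prob(subset):
    # Exact probability that Alice's denial flips to approval when the
    # factors in `subset` are resampled from the population rates.
    names = sorted(subset)
    prob = 0.0
    for values in product([True, False], repeat=len(names)):
        p = 1.0
        app = dict(alice)
        for name, value in zip(names, values):
            app[name] = value
            p *= rates[name] if value else 1 - rates[name]
        if approve(app):
            prob += p
    return prob

def shapley():
    # Shapley value: average each factor's marginal contribution to the
    # flip probability over all orderings of the factors.
    names = list(alice)
    phi = {n: 0.0 for n in names}
    orders = list(permutations(names))
    for order in orders:
        sofar = []
        for n in order:
            before = flip_prob(sofar)
            sofar = sofar + [n]
            phi[n] += (flip_prob(sofar) - before) / len(orders)
    return phi

phi = shapley()
# Car ownership dominates, mirroring the swing-judge analogy:
# phi['owns_car'] is about 0.39 versus about 0.03 for the home loan.
```

With these made-up rates, car ownership is ranked well above the home-loan criterion, matching the swing-judge intuition; the already-favorable credit score gets a small negative score because resampling it can only work against Alice.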
<\/p>\n<p>Our method of analysis and explanation of how algorithms make decisions is most useful in settings where the factors are readily understood by humans, such as debt-to-income ratio and other financial criteria.<\/p>\n<p>However, explaining the decision-making process of more complex algorithms remains a significant challenge. Take, for example, an image recognition system, like ones that detect and track tumors. It is not very useful to explain a particular image's evaluation based on individual pixels. Ideally, we would like an explanation that provides additional insight into the decision, such as identifying specific tumor characteristics in the image. Indeed, designing explanations for such automated decision-making tasks is keeping many researchers busy.<\/p>\n<p>Anupam Datta is Associate Professor of Computer Science and Electrical and Computer Engineering at Carnegie Mellon University.<\/p>\n<p>This article was originally published on The Conversation and was syndicated from TIME.com.<\/p>\n<p>More here:<\/p>\n<p><a target=\"_blank\" rel=\"nofollow\" href=\"http:\/\/fortune.com\/2017\/03\/19\/artificial-intelligence-credit\/\" title=\"Did Artificial Intelligence Deny You Credit? - Fortune\">Did Artificial Intelligence Deny You Credit? - Fortune<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Photograph by Image Source\/Getty Images People who apply for a loan from a bank or credit card company, and are turned down, are owed an explanation of why that happened.
It's a good idea because it can help teach people how to repair their damaged credit and it's a federal law, the Equal Credit Opportunity Act <a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/artificial-intelligence\/did-artificial-intelligence-deny-you-credit-fortune\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":9,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[187742],"tags":[],"class_list":["post-183925","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/183925"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/9"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=183925"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/183925\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=183925"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=183925"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=183925"}],"curies":[{"name":"wp","hr
ef":"https:\/\/api.w.org\/{rel}","templated":true}]}}