{"id":196346,"date":"2017-06-03T12:29:19","date_gmt":"2017-06-03T16:29:19","guid":{"rendered":"http:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/pitfalls-of-artificial-intelligence-decisionmaking-highlighted-in-idaho-aclu-case-aclu-blog\/"},"modified":"2017-06-03T12:29:19","modified_gmt":"2017-06-03T16:29:19","slug":"pitfalls-of-artificial-intelligence-decisionmaking-highlighted-in-idaho-aclu-case-aclu-blog","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/artificial-intelligence\/pitfalls-of-artificial-intelligence-decisionmaking-highlighted-in-idaho-aclu-case-aclu-blog\/","title":{"rendered":"Pitfalls of Artificial Intelligence Decisionmaking Highlighted In Idaho ACLU Case &#8211; ACLU (blog)"},"content":{"rendered":"<p><p>    One of the biggest civil liberties issues raised by technology    today is whether, when, and how we allow computer algorithms to    make decisions that affect peoples lives. Were starting to    see this in particular in the     criminal justice system. For the past several years the    ACLU of Idaho has been involved in a fascinating case that, so    far as I can tell, has received very little if any national    coverage, but which raises fascinating issues that are core to    the new era of big data that we are entering.  <\/p>\n<p>    The     case, K.W. v. Armstrong, is a class action lawsuit    brought by the ACLU representing about 4,000 Idahoans with    developmental and intellectual disabilities who receive    assistance from the states Medicaid program. I spoke recently    with Richard Eppink, Legal Director of the ACLU of Idaho, and    he told me about the case:  <\/p>\n<p>      It originally started because a bunch of people were      contacting me and saying that that the amount of assistance      that they were being given each year by the state Medicaid      program was being suddenly cut by 20 or 30 percent. 
I thought the case would be a simple matter of saying to the state, Okay, tell us why these dollar figures dropped by so much.<\/p>\n<p>What happens in this particular program is that each year you go to an assessment interview with an assessor who is a contractor with the Medicaid program, and they ask you a bunch of questions. The assessor plugs these into an Excel spreadsheet, and it comes out with this dollar figure amount, which is how much you can spend on your services that year.<\/p>\n<p>But when we asked them how the dollar amounts were arrived at, the Medicaid program came back and said, we can't tell you that, it's a trade secret.<\/p>\n<p>And so that's what led to the lawsuit. We said you've got to release this, you can't just be coming up with these numbers using a secret formula. And then, within a couple of weeks of filing the case, the court agreed and told the state, yeah, you have to disclose that. In a ruling from the bench the judge said it's just a blatant due process violation to tell people you're going to reduce their health care services by $20,000 in a year for some secret reason. The judge also ruled on Medicaid Act grounds: there are requirements in the act that if you're going to reduce somebody's coverage, you have to explain why.<\/p>\n<p>That was five years ago. And once we got their formula, we hired a couple of experts to dig into it and figure out what it was doing: how the whole process was working, both the assessment (the formula itself) and the data that was used to create it.<\/p>\n<p>Eppink said the experts that they hired found big problems with what the state Medicaid program was doing:<\/p>\n<p>There were a lot of things wrong with it. First of all, the data they used to come up with their formula for setting people's assistance limits was corrupt. 
They were using historical data to predict what was going to happen in the future. But they had to throw out two-thirds of the records they had before they came up with the formula because of data entry errors and data that didn't make sense. So they were supposedly predicting what this population was going to need, but the historical data they were using was flawed, and they were only able to use a small subset of it. And bad data produces bad results.<\/p>\n<p>A second thing is that the state itself had found in its own testing that there were problems: disproportionate results for different parts of the state that couldn't be explained.<\/p>\n<p>And the third thing is that our experts found that there were fundamental statistical flaws in the way that the formula itself was structured.<\/p>\n<p>Idaho's Medicaid bureaucracy was making arbitrary and irrational decisions with big impacts on people's lives, and fighting efforts to make it explain how it was reaching those decisions. This lack of transparency is unconscionable. Algorithms are often highly complicated, and when you marry them to human social\/legal\/bureaucratic systems, the complexity only skyrockets. That means public transparency is vital. The experience in Idaho only confirms this.<\/p>\n<p>I asked Eppink, if Idaho's decisionmaking system was so irrational, why did the state rely on it?<\/p>\n<p>I don't actually get the sense they even knew how bad this was. It's just this bias we all have for computerized results: we don't question them. It's a cultural, maybe even biological thing, but when a computer generates something (when you have a statistician who looks at some data and comes up with a formula) we just trust that formula, without asking, hey, wait a second, how is this actually working? 
So I think the state fell victim to this complacency that we have with computerized decisionmaking.<\/p>\n<p>Secondly, I don't think anybody at the Medicaid program really thought about how this was working. When we took the depositions in the case I asked each person we deposed from the program to explain to me how they got from these assessment figures to this number, and everybody pointed a finger at somebody else. I don't know that, but this other person does. So I would take a deposition from that other person, and that person pointed at somebody else, and eventually everybody was pointing around in a circle.<\/p>\n<p>And so, that machine bias or complacency, combined with this idea that nobody really fully understood this: it was a lack of understanding of the process on the part of everybody; everybody assumed somebody else knew how it worked.<\/p>\n<p>This, of course, is one of the time-honored horrors of bureaucracies: the fragmentation of intelligence that (as I have discussed) allows hundreds or thousands of intelligent, ethical individuals to behave in ways that are collectively stupid and\/or unethical. I have written before about a fascinating paper by Danielle Citron entitled Technological Due Process, which looks at the problems and solutions that arise when translating human rules and policies into computer code. This case shows those problems in action.<\/p>\n<p>So what are the solutions in this case? Eppink:<\/p>\n<p>A couple years ago, after we'd done all that discovery and worked with the experts, we put it together in a summary judgment package for the judge. And last year the court held that the formula itself was so bad that it was unconstitutional (it violated due process) because it was effectively producing arbitrary results for a large number of people. 
And the judge ordered that the Medicaid program basically overhaul the way it was doing this. That includes regular testing, regular updating, and the use of quality data. And that's where we are now; they're in the process of doing that.<\/p>\n<p>My hunch is that this kind of thing is happening a lot across the United States and across the world as people move to these computerized systems. Nobody understands them, they think that somebody else does, but in the end we trust them. Even the people in charge of these programs have this trust that these things are working.<\/p>\n<p>And the unfortunate part, as we learned in this case, is that it costs a lot of money to actually test these things and make sure they're working right. It cost us probably $50,000, and I don't think that a state Medicaid program is going to be motivated to spend the money that it takes to make sure these things are working right. Or even these private companies that are running credit predictions, housing predictions, recidivism predictions, unless the cost is internalized on them through litigation, and it's understood that, hey, eventually somebody's going to have the money to test this, so it better be working.<\/p>\n<p>As our technological train hurtles down the tracks, we need policymakers at the federal, state, and local level who have a good understanding of the pitfalls involved in using computers to make decisions that affect people's lives.
<\/p>\n<p>See the original post: <\/p>\n<p><a target=\"_blank\" rel=\"nofollow\" href=\"https:\/\/www.aclu.org\/blog\/free-future\/pitfalls-artificial-intelligence-decisionmaking-highlighted-idaho-aclu-case\" title=\"Pitfalls of Artificial Intelligence Decisionmaking Highlighted In Idaho ACLU Case - ACLU (blog)\">Pitfalls of Artificial Intelligence Decisionmaking Highlighted In Idaho ACLU Case - ACLU (blog)<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p> One of the biggest civil liberties issues raised by technology today is whether, when, and how we allow computer algorithms to make decisions that affect people's lives. We're starting to see this in particular in the criminal justice system <a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/artificial-intelligence\/pitfalls-of-artificial-intelligence-decisionmaking-highlighted-in-idaho-aclu-case-aclu-blog\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":5,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[187742],"tags":[],"class_list":["post-196346","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/196346"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/5"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=196346"}],"
version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/196346\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=196346"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=196346"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=196346"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}