{"id":233546,"date":"2017-08-09T03:30:08","date_gmt":"2017-08-09T07:30:08","guid":{"rendered":"http:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/uncategorized\/rise-of-the-racist-robots-how-ai-is-learning-all-our-worst-impulses-the-guardian.php"},"modified":"2022-03-23T15:02:57","modified_gmt":"2022-03-23T19:02:57","slug":"rise-of-the-racist-robots-how-ai-is-learning-all-our-worst-impulses-the-guardian","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/artificial-intelligence\/rise-of-the-racist-robots-how-ai-is-learning-all-our-worst-impulses-the-guardian.php","title":{"rendered":"Rise of the racist robots  how AI is learning all our worst impulses &#8230; &#8211; The Guardian"},"content":{"rendered":"<p><p>  Current laws largely fail to address discrimination when it  comes to big data. Photograph: artpartner-images\/Getty Images<\/p>\n<p>    In May last year, a stunning    report    claimed that a computer program used by a US court for risk    assessment was biased against black prisoners. The program,    Correctional Offender Management Profiling for Alternative    Sanctions (Compas), was much more prone to mistakenly label    black defendants as likely to reoffend  wrongly flagging them    at almost twice the rate as white people (45% to 24%),    according to the investigative journalism organisation    ProPublica.  <\/p>\n<p>    Compas and programs similar to it were in use in hundreds of    courts across the US, potentially informing the decisions of    judges and other officials. The message seemed clear: the US    justice system, reviled for its racial bias, had turned to    technology for help, only to find that the algorithms had a    racial bias too.  <\/p>\n<p>    How could this have happened? The private company that supplies    the software, Northpointe, disputed    the conclusions of the report, but declined to reveal the    inner workings of the program, which it considers commercially    sensitive. The accusation gave frightening substance to a worry    that has been brewing among activists and computer scientists    for years and which the tech giants Google and Microsoft    have recently    taken steps to investigate: that as our computational tools    have become more advanced, they have become more opaque. The    data they rely on  arrest records, postcodes, social    affiliations, income  can reflect, and further ingrain, human    prejudice.  <\/p>\n<p>    The promise of machine learning and other programs that work    with big data (often under the umbrella term artificial    intelligence or AI) was that the more information we feed    these sophisticated computer algorithms, the better they    perform. Last year, according to global management consultant    McKinsey, tech companies spent somewhere    between $20bn and $30bn on AI, mostly in research and    development. Investors are making a big bet that AI will sift    through the vast amounts of information produced by our society    and find patterns that will help us be more efficient,    wealthier and happier.  <\/p>\n<p>    It has led to a decade-long AI arms race in which the UK    government is offering six-figure    salaries to computer scientists. They hope to use machine    learning to, among other things, help unemployed people find    jobs, predict the performance of pension funds and sort through    revenue and customs casework. It has become a kind of received    wisdom that these programs will touch every aspect of our    lives. 
(\"It's impossible to know how widely adopted AI is now, but I do know we can't go back,\" one computer scientist says.)<\/p>\n<p>But, while some of the most prominent voices in the industry are concerned with the far-off future apocalyptic potential of AI, there is less attention paid to the more immediate problem of how we prevent these programs from amplifying the inequalities of our past and affecting the most vulnerable members of our society. When the data we feed the machines reflects the history of our own unequal society, we are, in effect, asking the program to learn our own biases.<\/p>\n<p>\"If you're not careful, you risk automating the exact same biases these programs are supposed to eliminate,\" says Kristian Lum, the lead statistician at the San Francisco-based non-profit Human Rights Data Analysis Group (HRDAG). Last year, Lum and a co-author showed that PredPol, a program for police departments that predicts hotspots where future crime might occur, could potentially get stuck in a feedback loop of over-policing majority black and brown neighbourhoods. The program was learning from previous crime reports. For Samuel Sinyangwe, a justice activist and policy researcher, this kind of approach is especially nefarious because police can say: \"We're not being biased, we're just doing what the math tells us.\" And the public perception might be that the algorithms are impartial.<\/p>\n<p>We have already seen glimpses of what might be on the horizon. Programs developed by companies at the forefront of AI research have resulted in a string of errors that look uncannily like the darker biases of humanity: a Google image recognition program labelled the faces of several black people as gorillas; a LinkedIn advertising program showed a preference for male names in searches; and a Microsoft chatbot called Tay spent a day learning from Twitter and began spouting antisemitic messages.<\/p>\n<p>These small-scale incidents were all quickly fixed by the companies involved and have generally been written off as gaffes. But the Compas revelation and Lum's study hint at a much bigger problem, demonstrating how programs could replicate the sort of large-scale systemic biases that people have spent decades campaigning to educate or legislate away.<\/p>\n<p>Computers don't become biased on their own. They need to learn that from us. For years, the vanguard of computer science has been working on machine learning, often having programs learn in a similar way to humans: observing the world (or at least the world we show them) and identifying patterns. In 2012, Google researchers fed their computer brain millions of images from YouTube videos to see what it could recognise. It responded with blurry black-and-white outlines of human and cat faces. The program was never given a definition of a human face or a cat; it had observed and learned two of our favourite subjects.<\/p>\n<p>This sort of approach has allowed computers to perform tasks (such as language translation, recognising faces or recommending films in your Netflix queue) that just a decade ago would have been considered too complex to automate. But as the algorithms learn and adapt from their original coding, they become more opaque and less predictable.
It can soon become difficult to understand exactly how the complex interaction of algorithms generated a problematic result. And, even if we could, private companies are disinclined to reveal the commercially sensitive inner workings of their algorithms (as was the case with Northpointe).<\/p>\n<p>Less difficult is predicting where problems can arise. Take Google's face recognition program: cats are uncontroversial, but what if it were to learn what British and American people think a CEO looks like? The results would likely resemble the near-identical portraits of older white men that line any bank or corporate lobby. And the program wouldn't be inaccurate: only 7% of FTSE CEOs are women. Even fewer, just 3%, have a BME background. When computers learn from us, they can learn our less appealing attributes.<\/p>\n<p>Joanna Bryson, a researcher at the University of Bath, studied a program designed to learn relationships between words. It trained on millions of pages of text from the internet and began clustering female names and pronouns with jobs such as \"receptionist\" and \"nurse\". Bryson says she was astonished by how closely the results mirrored the real-world gender breakdown of those jobs in US government data, a nearly 90% correlation.<\/p>\n<p>\"People expected AI to be unbiased; that's just wrong. If the underlying data reflects stereotypes, or if you train AI from human culture, you will find these things,\" Bryson says.<\/p>\n<p>So who stands to lose out the most? Cathy O'Neil, the author of the book Weapons of Math Destruction, about the dangerous consequences of outsourcing decisions to computers, says it's generally the most vulnerable in society who are exposed to evaluation by automated systems. A rich person is unlikely to have their job application screened by a computer, or their loan request evaluated by anyone other than a bank executive. In the justice system, the thousands of defendants with no money for a lawyer or other counsel would be the most likely candidates for automated evaluation.<\/p>\n<p>In London, Hackney council has recently been working with a private company to apply AI to data, including government health and debt records, to help predict which families have children at risk of ending up in statutory care. Other councils have reportedly looked into similar programs.<\/p>\n<p>In her 2016 paper, HRDAG's Kristian Lum demonstrated who would be affected if a program designed to increase the efficiency of policing was let loose on biased data. Lum and her co-author took PredPol (the program that suggests the likely location of future crimes based on recent crime and arrest statistics) and fed it historical drug-crime data from the city of Oakland's police department. PredPol showed a daily map of likely crime hotspots that police could deploy to, based on information about where police had previously made arrests. The program was suggesting majority black neighbourhoods at about twice the rate of white ones, despite the fact that when the statisticians modelled the city's likely overall drug use, based on national statistics, it was much more evenly distributed.
<\/p>\n<p>As if that wasn't bad enough, the researchers also simulated what would happen if police had acted directly on PredPol's hotspots every day and increased their arrests accordingly: the program entered a feedback loop, predicting more and more crime in the neighbourhoods that police visited most. That caused still more police to be sent in. It was a virtual mirror of the real-world criticisms of initiatives such as New York City's controversial stop-and-frisk policy. By over-targeting residents with a particular characteristic, police arrested them at an inflated rate, which then justified further policing.<\/p>\n<p>PredPol's co-developer, Prof Jeff Brantingham, acknowledged the concerns when asked by the Washington Post. He claimed that, to combat bias, drug arrests and other offences that rely on the discretion of officers were not used with the software, because they are often more heavily enforced in poor and minority communities.<\/p>\n<p>And while most of us don't understand the complex code within programs such as PredPol, Hamid Khan, an organiser with the Stop LAPD Spying Coalition, a community group addressing police surveillance in Los Angeles, says that people do recognise predictive policing as \"another top-down approach where policing remains the same: pathologising whole communities\".<\/p>\n<p>There is a saying in computer science, something close to an informal law: garbage in, garbage out. It means that programs are not magic. If you give them flawed information, they won't fix the flaws, they just process the information. Khan has his own truism: \"It's racism in, racism out.\"<\/p>\n<p>It's unclear how existing laws to protect against discrimination and to regulate algorithmic decision-making apply in this new landscape. Often the technology moves faster than governments can address its effects. In 2016, the Cornell University professor and former Microsoft researcher Solon Barocas claimed that current laws \"largely fail\" to address discrimination when it comes to big data and machine learning. Barocas says that many traditional players in civil rights, including the American Civil Liberties Union (ACLU), are taking the issue on in areas such as housing or hiring practices. Sinyangwe recently worked with the ACLU to try to pass city-level policies requiring police to disclose any technology they adopt, including AI.<\/p>\n<p>But the process is complicated by the fact that public institutions adopt technology sold by private companies, whose inner workings may not be transparent. \"We don't want to deputise these companies to regulate themselves,\" says Barocas.<\/p>\n<p>In the UK, there are some existing protections. Government services and companies must disclose if a decision has been entirely outsourced to a computer, and, if so, that decision can be challenged. But Sandra Wachter, a law scholar at the Alan Turing Institute and the University of Oxford, says that the existing laws don't map perfectly to the way technology has advanced. There are a variety of loopholes that could allow the undisclosed use of algorithms. She has called for a \"right to explanation\", which would require a full disclosure as well as a higher degree of transparency for any use of these programs.
<\/p>\n<p>The scientific literature on the topic now reflects a debate on the nature of fairness itself, and researchers are working on everything from ways to strip unfair classifiers from decades of historical data, to modifying algorithms to skirt round any groups protected by existing anti-discrimination laws. One researcher at the Turing Institute told me the problem was so difficult because changing the variables can introduce new bias, and sometimes we're not even sure how bias affects the data, or even where it is.<\/p>\n<p>The institute has developed a program that tests a series of counterfactual propositions to track what affects algorithmic decisions: would the result be the same if the person was white, or older, or lived elsewhere? But there are some who consider it an impossible task to integrate the various definitions of fairness adopted by society and computer scientists, and still retain a functional program.<\/p>\n<p>\"In many ways, we're seeing a response to the naive optimism of the earlier days,\" Barocas says. \"Just two or three years ago you had articles credulously claiming: 'Isn't this great? These things are going to eliminate bias from hiring decisions and everything else.'\"<\/p>\n<p>Meanwhile, computer scientists face an unfamiliar challenge: their work necessarily looks to the future, but in embracing machines that learn, they find themselves tied to our age-old problems of the past.<\/p>\n<p>Follow the Guardian's Inequality Project on Twitter here, or email us at <a href=\"mailto:inequality.project@theguardian.com\">inequality.project@theguardian.com<\/a><\/p>\n<p>See the original post: <\/p>\n<p><a target=\"_blank\" rel=\"nofollow noopener\" href=\"https:\/\/www.theguardian.com\/inequality\/2017\/aug\/08\/rise-of-the-racist-robots-how-ai-is-learning-all-our-worst-impulses\" title=\"Rise of the racist robots &#8211; how AI is learning all our worst impulses ... - The Guardian\">Rise of the racist robots &#8211; how AI is learning all our worst impulses ... - The Guardian<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p> Current laws largely fail to address discrimination when it comes to big data. 
Photograph: artpartner-images\/Getty Images In May last year, a stunning report claimed that a computer program used by a US court for risk assessment was biased against black prisoners <a href=\"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/artificial-intelligence\/rise-of-the-racist-robots-how-ai-is-learning-all-our-worst-impulses-the-guardian.php\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"limit_modified_date":"","last_modified_date":"","_lmt_disableupdate":"","_lmt_disable":"","footnotes":""},"categories":[13],"tags":[],"class_list":["post-233546","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence"],"modified_by":"Danzig","_links":{"self":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/233546"}],"collection":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/comments?post=233546"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/233546\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/media?parent=233546"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/categories?post=233546"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/tags?post=233546"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}