{"id":188485,"date":"2017-04-19T10:07:13","date_gmt":"2017-04-19T14:07:13","guid":{"rendered":"http:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/how-artificial-intelligence-learns-to-be-racist-vox-vox\/"},"modified":"2017-04-19T10:07:13","modified_gmt":"2017-04-19T14:07:13","slug":"how-artificial-intelligence-learns-to-be-racist-vox-vox","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/artificial-intelligence\/how-artificial-intelligence-learns-to-be-racist-vox-vox\/","title":{"rendered":"How artificial intelligence learns to be racist &#8211; Vox &#8211; Vox"},"content":{"rendered":"<p><p>    Open up the photo app on your phone and search dog, and all    the pictures you have of dogs will come up. This was no easy    feat. Your phone knows what a dog looks like.  <\/p>\n<p>    This and other modern-day marvels are the result of machine    learning. These are programs that comb through millions of    pieces of data and start making correlations and predictions    about the world. The appeal of these programs is immense: These    machines can use cold, hard data to make decisions that are    sometimes more accurate than a humans.  <\/p>\n<p>    But know: Machine learning has a dark side. Many people think    machines are not biased, Princeton computer scientist Aylin    Caliskan says. But machines are trained on human data. And    humans are biased.  <\/p>\n<p>    Computers learn how to be racist, sexist, and prejudiced in a    similar way that a child does, Caliskan explains: from their    creators.  <\/p>\n<p>    Nearly all new consumer technologies use machine learning in    some way. Like Google Translate: No person instructed the    software to learn how to translate Greek to French and then to    English. It combed through countless reams of text and learned    on its own. In other cases, machine learning programs make    predictions about which rsums are likely to yield successful    job candidates, or how a patient will respond to a particular    drug.  <\/p>\n<p>    Machine learning is a program that sifts through billions of    data points to solve problems (such as can you identify the    animal in the photo), but it doesnt     always make clear how it has solved the problem.    And its increasingly clear these programs can develop biases    and stereotypes without us noticing.  <\/p>\n<p>    Last May, ProPublica     published an investigation on a machine learning program    that courts use to predict who is likely to commit another    crime after being booked systematically. The reporters found    that the software rated black people at a higher risk than    whites.  <\/p>\n<p>    Scores like this  known as risk assessments  are    increasingly common in courtrooms across the nation,    ProPublica     explained. They are used to inform decisions about who can    be set free at every stage of the criminal justice system, from    assigning bond amounts  to even more fundamental decisions    about defendants freedom.  <\/p>\n<p>    The program learned about who is most likely to end up in jail    from real-world incarceration data. And historically, the    real-world criminal justice system has been unfair to black    Americans.  <\/p>\n<p>    This story reveals a deep irony about machine learning. The    appeal of these systems is they can make impartial decisions,    free of human bias. 
It's stories like the ProPublica investigation that led Caliskan to research this problem. As a female computer scientist who was routinely the only woman in her graduate school classes, she's sensitive to this subject.

Caliskan has seen bias creep into machine learning in often subtle ways: in Google Translate, for instance.

Turkish, one of her native languages, has no gender pronouns. But when she uses Google Translate on Turkish phrases, it "always ends up as 'he's a doctor' in a gendered language." The Turkish sentence didn't say whether the doctor was male or female. The computer just assumed that if you're talking about a doctor, it's a man.

Recently, Caliskan and colleagues published a paper in Science that finds that as a computer teaches itself English, it becomes prejudiced against black Americans and women.

Basically, they used a common machine learning program to crawl through the internet, look at 840 billion words, and teach itself the definitions of those words. The program accomplishes this by looking at how often certain words appear in the same sentence. Take the word "bottle": the computer begins to understand what the word means by noticing that it occurs more frequently alongside the word "container," and also near words that connote liquids, like "water" or "milk."

This idea of teaching robots English actually comes from cognitive science and its understanding of how children learn language. How frequently two words appear together is the first clue we get to deciphering their meaning.

Once the computer amassed its vocabulary, Caliskan ran it through a version of the implicit association test.

In humans, the IAT is meant to uncover subtle biases in the brain by seeing how long it takes people to associate words. A person might quickly connect the words "male" and "engineer." But if a person lags on associating "woman" and "engineer," it's a demonstration that those two terms are not closely associated in the mind, implying bias. (There are some reliability issues with the IAT in humans, which you can read about here.)

Here, instead of looking at lag time, Caliskan looked at how closely the computer thought two terms were related. She found that African-American names in the program were less associated with the word "pleasant" than white names. And female names were more associated with words relating to family than male names. (In a weird way, the IAT might be better suited for use on computer programs than on humans, because humans answer its questions inconsistently, while a computer will yield the same answer every single time.)

Like a child, a computer builds its vocabulary through how often terms appear together. On the internet, African-American names are more likely to be surrounded by words that connote unpleasantness. That's not because African Americans are unpleasant. It's because people on the internet say awful things. And it leaves an impression on our young AI.
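Both steps can be sketched at toy scale. The hand-rolled Python below is only an illustration: the study actually used standard word embeddings trained on a huge web crawl, and its association test (based on cosine similarity between word vectors) used carefully chosen word lists, not this four-sentence corpus and single-word attribute sets.

```python
# (1) Build word vectors from sentence co-occurrence counts on a toy corpus.
# (2) Score associations by cosine similarity, in the spirit of the paper's
#     word-embedding association test.
from collections import Counter, defaultdict
from math import sqrt

corpus = [
    "the bottle is a container for water",
    "she filled the bottle with milk",
    "the engineer fixed the machine",
    "he is an engineer and he builds machines",
]

# Each word's vector = counts of the words it shares a sentence with.
vectors = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for w in words:
        for other in words:
            if other != w:
                vectors[w][other] += 1

def cosine(a, b):
    """Similarity of two sparse count vectors (1.0 = identical contexts)."""
    dot = sum(a[k] * b[k] for k in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# "bottle" ends up close to "container" because they co-occur.
print(cosine(vectors["bottle"], vectors["container"]))

def association(target, attrs_a, attrs_b):
    """How much more similar is `target` to attribute set A than to set B?"""
    sim_a = sum(cosine(vectors[target], vectors[w]) for w in attrs_a) / len(attrs_a)
    sim_b = sum(cosine(vectors[target], vectors[w]) for w in attrs_b) / len(attrs_b)
    return sim_a - sim_b  # > 0 means closer to set A

# Even on four sentences, "engineer" leans toward male context words.
print(association("engineer", ["he"], ["she"]))
```

Scaled up to billions of words and real embedding vectors, this kind of similarity gap is what surfaced the "pleasant" association differences between white and African-American names.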
This is as much of a problem as you think.

Increasingly, Caliskan says, job recruiters are relying on machine learning programs to take a first pass at résumés. And if left unchecked, the programs can learn and act upon gender stereotypes in their decision-making.

"Let's say a man is applying for a nurse position; he might be found less fit for that position if the machine is just making its own decisions," she says. "And this might be the same for a woman applying for a software developer or programmer position. Almost all of these programs are not open source, and we're not able to see what exactly is going on. So we have a big responsibility about trying to uncover if they are being unfair or biased."

And that will be a challenge in the future. Already AI is making its way into the health care system, helping doctors find the right course of treatment for their patients. (There's early research on whether it can help predict mental health crises.)

But health data, too, is filled with historical bias. It's long been known that women get surgery at lower rates than men. (One reason is that women, as primary caregivers, have fewer people to take care of them post-surgery.)

Might AI then recommend surgery at a lower rate for women? It's something to watch out for.

Inevitably, machine learning programs are going to encounter historical patterns that reflect racial or gender bias. And it can be hard to draw the line between what is bias and what is just a fact about the world.

Machine learning programs will pick up on the fact that most nurses throughout history have been women. They'll realize most computer programmers are male. "We're not suggesting you should remove this information," Caliskan says. "It might actually break the software completely."

Caliskan thinks there need to be more safeguards. Humans using these programs need to constantly ask, "Why am I getting these results?" and check the output of these programs for bias. They need to think hard about whether the data they are combing reflects historical prejudices. Caliskan admits the best practices for combating bias in AI are still being worked out. "It requires a long-term research agenda for computer scientists, ethicists, sociologists, and psychologists," she says.
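Neither Caliskan nor the article spells out what such an output check should look like. One hypothetical, minimal form of it is a routine that compares a model's scores and decisions across groups and flags large gaps for a human to investigate:

```python
# Hypothetical audit helper: compare a model's outputs across two groups.
# `scores` are model probabilities; `group` holds audit-only group labels.
import numpy as np

def audit_by_group(scores, group, threshold=0.5):
    scores, group = np.asarray(scores, float), np.asarray(group)
    rates = {}
    for g in (0, 1):
        mask = group == g
        rates[g] = (scores[mask] >= threshold).mean()  # share flagged by the model
        print(f"group {g}: mean score {scores[mask].mean():.3f}, "
              f"flagged rate {rates[g]:.3f}")
    gap = abs(rates[1] - rates[0])
    print(f"flagged-rate gap: {gap:.3f}")  # a large gap is the cue to dig in
    return gap

# Example with made-up numbers:
audit_by_group([0.2, 0.7, 0.4, 0.9], [0, 1, 0, 1])
```

A gap alone doesn't prove the model is unfair, since it may reflect real differences in the data, which is exactly why the question "why am I getting these results?" has to be asked by a person.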
But at the very least, the people who use these programs should be aware of these problems, and not take for granted that a computer can produce a less biased result than a human.

And overall, it's important to remember: AI learns about how the world has been. It picks up on status quo trends. It doesn't know how the world ought to be. That's up to humans to decide.

Original post: How artificial intelligence learns to be racist - Vox
http://www.vox.com/science-and-health/2017/4/17/15322378/how-artificial-intelligence-learns-how-to-be-racist