How to Keep Your AI From Turning Into a Racist Monster
WIRED, February 13, 2017

Working on a new product launch? Debuting a new mobile site? Announcing a new feature? If you're not sure whether algorithmic bias could derail your plan, you should be.

About: Megan Garcia (@meganegarcia) is a senior fellow and director of New America California, where she studies cybersecurity, AI, and diversity in technology.

Algorithmic bias, when seemingly innocuous programming takes on the prejudices either of its creators or the data it is fed, causes everything from warped Google searches to barring qualified women from medical school. It doesn't take active prejudice to produce skewed results (more on that later) in web searches, data-driven home loan decisions, or photo-recognition software. It just takes distorted data that no one notices and corrects for.

It took one little Twitter bot to make the point to Microsoft last year. Tay was designed to engage with people ages 18 to 24, and it burst onto social media with an upbeat "hellllooooo world!!" (the "o" in "world" was a planet Earth emoji).
But within 12 hours, Tay morphed into a foul-mouthed racist Holocaust denier that said feminists "should all die and burn in hell." Tay, which was quickly removed from Twitter, was programmed to learn from the behaviors of other Twitter users, and in that regard, the bot was a success. Tay's embrace of humanity's worst attributes is an example of algorithmic bias: seemingly innocuous programming taking on the prejudices either of its creators or the data it is fed.

Tay represents just one example of algorithmic bias tarnishing tech companies and some of their marquee products. In 2015, Google Photos tagged several African-American users as "gorillas," and the images lit up social media. Yonatan Zunger, Google's chief social architect and head of infrastructure for Google Assistant, quickly took to Twitter to announce that Google was scrambling a team to address the issue. And then there was the embarrassing revelation that Siri didn't know how to respond to a host of health questions that affect women, including, "I was raped. What do I do?" Apple took action to handle that as well, after a nationwide petition from the American Civil Liberties Union and a wave of cringe-worthy media attention.

One of the trickiest parts about algorithmic bias is that engineers don't have to be actively racist or sexist to create it. In an era when we increasingly trust technology to be more neutral than we are, this is a dangerous situation. As Laura Weidman Powers, founder of Code2040, which brings more African Americans and Latinos into tech, told me, "We are running the risk of seeding self-teaching AI with the discriminatory undertones of our society in ways that will be hard to rein in, because of the often self-reinforcing nature of machine learning."
As the tech industry begins to create artificial intelligence, it risks inserting racism and other prejudices into code that will make decisions for years to come. And as deep learning means that code, not humans, will increasingly write code, there's an even greater need to root out algorithmic bias. There are four things tech companies can do to keep their developers from unintentionally writing biased code or using biased data.

The first is lifted from gaming. League of Legends used to be besieged by claims of harassment until a few small changes caused complaints to drop sharply. The game's creator empowered players to vote on reported cases of harassment and decide whether a player should be suspended. Players who are banned for bad behavior are also now told why they were banned. Not only have incidents of bullying dramatically decreased, but players report that they previously had no idea how their online actions affected others. Now, instead of coming back and saying the same horrible things again and again, their behavior improves. The lesson is that tech companies can use these community policing models to attack discrimination: build creative ways to have users find bias and root it out.

Second, hire the people who can spot the problem before launching a new product, site, or feature. Put women, people of color, and others who tend to be affected by bias, and who are generally underrepresented in tech companies, on development teams. They'll be more likely to feed algorithms a wider variety of data and to spot code that is unintentionally biased. Plus, there is a trove of research showing that diverse teams create better products and generate more profit.

Third, allow algorithmic auditing. Recently, a Carnegie Mellon research team unearthed algorithmic bias in online ads.
When they simulated people searching for jobs online, Google ads showed listings for high-income jobs to men nearly six times as often as to equivalent women. The Carnegie Mellon team has said it believes internal auditing to beef up companies' ability to reduce bias would help.

Fourth, support the development of tools and standards that could get all companies on the same page. In the next few years, there may be a certification for companies actively and thoughtfully working to reduce algorithmic discrimination. Today we know that water is safe to drink because the EPA monitors how well utilities keep it contaminant-free; one day we may know which tech companies are working to keep bias at bay. Tech companies should support the development of such a certification and work to earn it when it exists. Having one standard will both ensure that the sector sustains its attention to the issue and give credit to the companies using commonsense practices to reduce unintended algorithmic bias.

Companies shouldn't wait for algorithmic bias to derail their projects. Rather than clinging to the belief that technology is impartial, engineers and developers should take steps to ensure they don't accidentally create something that is just as racist, sexist, and xenophobic as humanity has shown itself to be.

See the rest here: https://www.wired.com/2017/02/keep-ai-turning-racist-monster/
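The black-box auditing approach described in the third recommendation can be sketched in miniature: hold every attribute of a simulated user fixed except the one under audit, query the system many times for each group, and compare what each group is served. Everything below is a hypothetical illustration, not the Carnegie Mellon team's actual methodology or tooling; the biased ad server stands in for the opaque system being audited, and the bias probability `p` is invented for the demo.

```python
import random

HIGH_PAY_AD = "Senior roles, $200k+"   # hypothetical ad copy
GENERIC_AD = "Job search tips"

def biased_ad_server(profile, rng):
    """Stand-in for the system under audit: it serves the high-paying
    listing far more often to profiles marked "male". In a real audit
    this function would be a live, opaque service."""
    p = 0.45 if profile["gender"] == "male" else 0.08
    return HIGH_PAY_AD if rng.random() < p else GENERIC_AD

def audit(ad_server, n_trials=5000, seed=0):
    """Query the server with profiles identical except for the audited
    attribute, and count high-paying-ad impressions per group."""
    rng = random.Random(seed)
    counts = {"male": 0, "female": 0}
    for gender in counts:
        for _ in range(n_trials):
            ad = ad_server({"gender": gender, "query": "jobs"}, rng)
            if ad == HIGH_PAY_AD:
                counts[gender] += 1
    # Disparity ratio between the two groups' impression counts.
    ratio = counts["male"] / max(counts["female"], 1)
    return counts, ratio

if __name__ == "__main__":
    counts, ratio = audit(biased_ad_server)
    print(f"high-pay impressions: {counts}, male/female ratio: {ratio:.1f}")
```

The point of the sketch is that the auditor never needs to see the server's internals: a large enough number of matched queries exposes the disparity from the outside, which is exactly why external and internal auditing can catch bias that code review misses.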