{"id":189116,"date":"2017-04-23T00:53:02","date_gmt":"2017-04-23T04:53:02","guid":{"rendered":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/our-fear-of-artificial-intelligence-it-is-all-too-human-san-francisco-chronicle\/"},"modified":"2017-04-23T00:53:02","modified_gmt":"2017-04-23T04:53:02","slug":"our-fear-of-artificial-intelligence-it-is-all-too-human-san-francisco-chronicle","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/artificial-intelligence\/our-fear-of-artificial-intelligence-it-is-all-too-human-san-francisco-chronicle\/","title":{"rendered":"Our fear of artificial intelligence? It is all too human &#8211; San Francisco Chronicle"},"content":{"rendered":"<p><p>  The classic sci-fi fear that robots will intellectually outpace  humans has resurfaced now that artificial intelligence is part of  our daily lives. Today artificially intelligent programs deliver  food, deposit checks and help employers make hiring decisions. If  we are to worry about a robot takeover, however, it is not  because artificial intelligence is inhuman and immoral, but  rather because we are coding-in distinctly human prejudice.<\/p>\n<p>    Last year, Microsoft released an artificially intelligent    Twitter chatbot named     Tay aimed at engaging Millennials online. The idea was    that Tay would spend some time interacting with users, absorb    relevant topics and opinions, and then produce its own content.    In less than 24 hours, Tay went from tweeting humans are super    cool to racist, neo-Nazi one-liners, such as: I f hate n, I    wish we could put them all in a concentration camp with kikes    and be done with the lot. Needless to say, Microsoft shut down    Tay and issued an apology.  <\/p>\n<p>    We need to hold the companies who make our AI-enabled devices    accountable to a standard of ethics.  
<\/p>\n<p> As the Tay disaster revealed, artificial intelligence does not always distinguish between the good, the bad and the ugly in human behavior. The type of artificial intelligence frequently used in consumer products is called machine learning. Before machine learning, humans analyzed data, found a pattern and wrote an algorithm (like a step-by-step recipe) for the computer to use. Now, we feed the computer huge amounts of data points, and the computer itself can spot the pattern and then write the algorithm for itself to follow. <\/p>\n<p> For example, if we wanted the artificial intelligence to correctly identify cars, then we&#8217;d teach it what cars looked like by giving it lots of pictures of cars. If all the pictures we chose happened to be red sedans, then the artificial intelligence might think that cars, by definition, are red sedans. If we then showed the artificial intelligence a picture of a blue sport utility vehicle, it might determine it wasn&#8217;t a car. This is all to say that the accuracy of AI-powered technology depends on the data we use to teach it. <\/p>\n<p> When there is bias in the data used to train artificial intelligence, there is bias in its output. <\/p>\n<p> AI-controlled online advertising is almost six times more likely to show high-paying job posts to men than to women. An AI-judged beauty contest found white women most attractive. Artificially intelligent software used in court to help judges set bail and parole sentences also showed racial prejudice. As ProPublica reported: &#8220;The formula was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants.&#8221; It is not that the algorithm is inherently racist; it&#8217;s that it was fed stacks of court filings that were harsher on black men than on white men. In turn, the artificial intelligence learned to call black defendants criminals at an unfairly higher rate, just like a human might. <\/p>\n<p> The fact that algorithm-fueled artificial intelligence amplifies human bias should make us wary of Silicon Valley&#8217;s claim that this technology will usher in a better future. <\/p>\n<p> Even when algorithms are not involved, old-fashioned assumptions make their way into the newest gadgets. I walked into a room the other day to a man yelling, &#8220;Alexa, find my phone!&#8221; only later to realize he was talking to his Amazon Alexa robot personal assistant, not a human female secretary. It is no coincidence that all the AI personal assistants (Apple&#8217;s Siri, Microsoft&#8217;s Cortana and Amazon&#8217;s Alexa), marketed to perform traditionally female tasks, default to female voices. What is disruptive about that? <\/p>\n<p> Some have suggested that AI&#8217;s bias problem stems from the homogeneity of the people making the technology. Silicon Valley&#8217;s top tech firms are notoriously white-male-dominated, and hire fewer women and people of color than the rest of the business sector. Companies such as Uber and Tesla have gained reputations for corporate cultures hostile to women and people of color. Google was sued in January by the Department of Labor for failing to provide compensation data, and then charged with underpaying its female employees (Google is federally contracted and must hire in accordance with federal law). There is no question that there should be more women and people of color in tech. But adding diversity to product teams alone will not counteract the systemic nature of the bias in data used to train artificial intelligence. 
<\/p>\n<p> Careful attention to how artificial intelligence learns will require placing antibias ethics at the center of tech companies&#8217; operating principles, not just an after-the-fact inclusion measure mentioned on the company website. This ethical framework exists in other fields: medicine, law, education, government. Training, licensing, ethics boards, legal sanctions and public opinion coalesce to establish standards of practice. For instance, medical doctors are taught the Hippocratic oath and agree to uphold certain ethical practices or lose their licenses. Why can&#8217;t tech have a similar ethical infrastructure? <\/p>\n<p> Perhaps ethics in tech did not matter as much when the products were confined to calculators, video games and iPods. But now that artificial intelligence makes serious, humanlike decisions, we need to hold it to humanlike moral standards and humanlike laws. Otherwise, we risk building a future that looks just like our past and present. <\/p>\n<p> Madeleine Chang is a San Francisco Chronicle staff writer. Email: <a href=\"mailto:mchang@sfchronicle.com\">mchang@sfchronicle.com<\/a> Twitter: @maddiechang <\/p>\n<p>Original post:<\/p>\n<p><a target=\"_blank\" rel=\"nofollow\" href=\"http:\/\/www.sfchronicle.com\/opinion\/article\/Our-fear-of-artificial-intelligence-It-is-all-11090059.php\" title=\"Our fear of artificial intelligence? It is all too human - San Francisco Chronicle\">Our fear of artificial intelligence? It is all too human - San Francisco Chronicle<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p> The classic sci-fi fear that robots will intellectually outpace humans has resurfaced now that artificial intelligence is part of our daily lives. Today artificially intelligent programs deliver food, deposit checks and help employers make hiring decisions. 
If we are to worry about a robot takeover, however, it is not because artificial intelligence is inhuman and immoral, but rather because we are coding-in distinctly human prejudice <a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/artificial-intelligence\/our-fear-of-artificial-intelligence-it-is-all-too-human-san-francisco-chronicle\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":7,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[187742],"tags":[],"class_list":["post-189116","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/189116"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/7"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=189116"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/189116\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=189116"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=189116"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthuman
ism\/wp-json\/wp\/v2\/tags?post=189116"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}