{"id":187690,"date":"2017-04-13T23:49:41","date_gmt":"2017-04-14T03:49:41","guid":{"rendered":"http:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/ai-programs-exhibit-racial-and-gender-biases-research-reveals-the-guardian\/"},"modified":"2017-04-13T23:49:41","modified_gmt":"2017-04-14T03:49:41","slug":"ai-programs-exhibit-racial-and-gender-biases-research-reveals-the-guardian","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/ai\/ai-programs-exhibit-racial-and-gender-biases-research-reveals-the-guardian\/","title":{"rendered":"AI programs exhibit racial and gender biases, research reveals &#8211; The Guardian"},"content":{"rendered":"<p><p>    An artificial intelligence tool that has revolutionised the    ability of computers to interpret everyday language has been    shown to exhibit striking gender and racial biases.  <\/p>\n<p>    The findings raise the spectre of existing social inequalities    and prejudices being reinforced in new and unpredictable ways    as an increasing number of decisions affecting our everyday    lives are ceded to automatons.  <\/p>\n<p>    In the past few years, the ability of programs such as Google    Translate to interpret language has improved dramatically.    These gains have been thanks to new machine learning techniques    and the availability of vast amounts of online text data, on    which the algorithms can be trained.  <\/p>\n<p>    However, as machines are getting closer to acquiring human-like    language abilities, they are also absorbing the deeply    ingrained biases concealed within the patterns of language use,    the latest research reveals.  <\/p>\n<p>    Joanna Bryson, a computer scientist at the University of Bath    and a co-author, said: A lot of people are saying this is    showing that AI is prejudiced. No. This is showing were    prejudiced and that AI is learning it.  <\/p>\n<p>    But Bryson warned that AI has the potential to reinforce    existing biases because, unlike humans, algorithms may be    unequipped to consciously counteract learned biases. A danger    would be if you had an AI system that didnt have an explicit    part that was driven by moral ideas, that would be bad, she    said.  <\/p>\n<p>    The research, published in the journal Science,    focuses on a machine learning tool known as word embedding,    which is already transforming the way computers interpret    speech and text. Some argue that the natural next step for the    technology may involve machines    developing human-like abilities such as common sense and    logic.<\/p>\n<p>    A major reason we chose to study word embeddings is that they    have been spectacularly successful in the last few years in    helping computers make sense of language, said Arvind    Narayanan, a computer scientist at Princeton University and the    papers senior author.  <\/p>\n<p>    The approach, which is already used in web search and machine    translation, works by building up a mathematical representation    of language, in which the meaning of a word is distilled into a    series of numbers (known as a word vector) based on which other    words most frequently appear alongside it. Perhaps    surprisingly, this purely statistical approach appears to    capture the rich cultural and social context of what a word    means in the way that a dictionary definition would be    incapable of.  
For instance, in the mathematical "language space", words for flowers are clustered closer to words linked to pleasantness, while words for insects are closer to words linked to unpleasantness, reflecting common views on the relative merits of insects versus flowers.

The latest paper shows that some more troubling implicit biases seen in human psychology experiments are also readily acquired by algorithms. The words "female" and "woman" were more closely associated with arts and humanities occupations and with the home, while "male" and "man" were closer to maths and engineering professions.

And the AI system was more likely to associate European American names with pleasant words such as "gift" or "happy", while African American names were more commonly associated with unpleasant words.

The findings suggest that algorithms have acquired the same biases that lead people (in the UK and US, at least) to match pleasant words and white faces in implicit association tests.

These biases can have a profound impact on human behaviour. One previous study showed that an identical CV is 50% more likely to result in an interview invitation if the candidate's name is European American than if it is African American. The latest results suggest that algorithms, unless explicitly programmed to address this, will be riddled with the same social prejudices.

"If you didn't believe that there was racism associated with people's names, this shows it's there," said Bryson.

The machine learning tool used in the study was trained on a dataset known as the "common crawl" corpus, a list of 840bn words that have been taken as they appear from material published online. Similar results were found when the same tools were trained on data from Google News.

Sandra Wachter, a researcher in data ethics and algorithms at the University of Oxford, said: "The world is biased, the historical data is biased, hence it is not surprising that we receive biased results."

Rather than algorithms representing a threat, they could present an opportunity to address bias and counteract it where appropriate, she added.

"At least with algorithms, we can potentially know when the algorithm is biased," she said. "Humans, for example, could lie about the reasons they did not hire someone. In contrast, we do not expect algorithms to lie or deceive us."

However, Wachter said the question of how to eliminate inappropriate bias from algorithms designed to understand language, without stripping away their powers of interpretation, would be challenging.

"We can, in principle, build systems that detect biased decision-making, and then act on it," said Wachter, who along with others has called for an AI watchdog to be established. "This is a very complicated task, but it is a responsibility that we as society should not shy away from."
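The paper's own bias measure, the Word Embedding Association Test, is one example of such a detector: it scores how much closer a target word sits to one set of attribute words than to another. Below is a minimal sketch of that association score, assuming `vec` is a dictionary mapping words to pretrained embedding vectors; this is an illustration of the idea, not code from the study.

```python
import numpy as np

def cosine(a, b):
    # Standard cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(word, attrs_a, attrs_b, vec):
    # Mean similarity of `word` to attribute set A minus its mean
    # similarity to attribute set B; a positive score means the word
    # is embedded closer to A than to B.
    sim_a = np.mean([cosine(vec[word], vec[a]) for a in attrs_a])
    sim_b = np.mean([cosine(vec[word], vec[b]) for b in attrs_b])
    return sim_a - sim_b

# Hypothetical usage, assuming `vec` holds pretrained vectors
# (e.g. loaded from a GloVe model):
#   association("engineer", ["male", "man"], ["female", "woman"], vec)
# A positive score would mirror the gender association reported above.
```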
See the article here: https://www.theguardian.com/technology/2017/apr/13/ai-programs-exhibit-racist-and-sexist-biases-research-reveals