{"id":176519,"date":"2017-02-10T03:14:39","date_gmt":"2017-02-10T08:14:39","guid":{"rendered":"http:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/can-ai-make-facebook-more-inclusive-christian-science-monitor\/"},"modified":"2017-02-10T03:14:39","modified_gmt":"2017-02-10T08:14:39","slug":"can-ai-make-facebook-more-inclusive-christian-science-monitor","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/ai\/can-ai-make-facebook-more-inclusive-christian-science-monitor\/","title":{"rendered":"Can AI make Facebook more inclusive? &#8211; Christian Science Monitor"},"content":{"rendered":"<p>February 9, 2017: When faced with a challenge, what's a tech company to do? Turn to technology, Facebook suggests.<\/p>\n<p>Following criticism that its ad-approval process was failing to weed out discriminatory ads, Facebook has revised its approach to advertising, the company announced on Wednesday. In addition to updating its policies about how advertisers can use data to target users, the social media giant plans to implement a high-tech solution: machine learning.<\/p>\n<p>In recent years, artificial intelligence has climbed off the pages of science fiction novels and into myriad aspects of everyday life, from internet searches to health care decisions to traffic recommendations. But Facebook's new ad-approval algorithms wade into greener territory as the company attempts to use machine learning to address, or at least not contribute to, social discrimination.<\/p>\n<p>\"Machine learning has been around for half a century at least, but we're only now starting to use it to make a social difference,\" Geoffrey Gordon, an associate professor in the Machine Learning Department at Carnegie Mellon University in Pittsburgh, Penn., tells The Christian Science Monitor in a phone interview. \"It's going to become increasingly important.\" 
<\/p>\n<p>Though analysts caution that machine learning has its limits, such an approach also carries tremendous potential for addressing these types of challenges. With that in mind, more companies, particularly in the tech sector, are likely to deploy similar techniques.<\/p>\n<p>Facebook's change of strategy, intended to make the platform more inclusive, follows the discovery that some of its ads were specifically excluding certain racial groups. In October, nonprofit investigative news site ProPublica tested the company's ad-approval process with an ad for a renter event that explicitly excluded African-Americans. The Fair Housing Act of 1968 prohibits discrimination or showing preference to anyone on the basis of race, making that ad illegal, but it was nevertheless approved within 15 minutes, ProPublica reported.<\/p>\n<p>Why? Because while Facebook doesn't ask users to identify their race, and bars advertisers from directing their content at specific races, it has a host of information about users on file: pages they like, what languages they use, and so on. This kind of information is important to advertisers, since it means they can improve their chances of making a sale by targeting their ads toward people who are more likely to buy their product.<\/p>\n<p>But by creating a demographic picture of a user, this data may make it possible to determine an individual's race, and then improperly exclude or target individuals. 
The company's updated policies emphasize that advertisers cannot discriminate against users on the basis of personal attributes, which Facebook says include \"race, ethnicity, color, national origin, religion, age, sex, sexual orientation, gender identity, family status, disability, medical or genetic condition.\"<\/p>\n<p>There's a fine line between appropriate use of such information and discrimination, as Facebook's head of US multicultural sales, Christian Martinez, explained following the ProPublica investigation: \"a merchant selling hair care products that are designed for black women\" will need to reach that constituency, while \"an apartment building that won't rent to black people or an employer that only hires men [could use the information for] negative exclusion.\"<\/p>\n<p>For Facebook, the challenge is maintaining that advertising advantage while preventing discrimination, particularly where it's illegal. That's where machine learning comes in.<\/p>\n<p>\"We're beginning to test new technology that leverages machine learning to help us identify ads that offer housing, employment or credit opportunities, the types of advertising stakeholders told us they were concerned about,\" the company said in a statement on Wednesday.<\/p>\n<p>\"The computer is just looking for patterns in data that you supply to it,\" explains Professor Gordon.<\/p>\n<p>That means Facebook can decide which areas it wants to focus on, namely ads that offer housing, employment or credit opportunities, according to the company, and then supply hundreds of examples of these types of ads to a computer.<\/p>\n<p>If a human teaches the computer by initially labeling each ad as discriminatory or nondiscriminatory, a computer can learn to \"go from the text of the advertising to a prediction of whether it's discriminatory or not,\" Gordon says. 
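<\/p>\n<p>The process Gordon describes is standard supervised text classification: hand-label example ads, then fit a model that maps ad text to a predicted label. As a minimal sketch, using invented ad texts and a from-scratch bag-of-words Naive Bayes classifier (an illustrative assumption, not Facebook's actual, non-public system):<\/p>

```python
# Toy supervised "ad screening" classifier: multinomial Naive Bayes
# over a bag of words, trained on a handful of hand-labeled ads.
# Every ad text and label below is invented for illustration.
from collections import Counter
import math

def tokenize(text):
    return text.lower().split()

def train(labeled_ads):
    """labeled_ads: iterable of (text, label) pairs -> simple model."""
    word_counts = {"ok": Counter(), "discriminatory": Counter()}
    label_counts = Counter()
    for text, label in labeled_ads:
        label_counts[label] += 1
        word_counts[label].update(tokenize(text))
    return word_counts, label_counts

def predict(model, text):
    """Return the label with the highest log posterior (add-one smoothing)."""
    word_counts, label_counts = model
    total_ads = sum(label_counts.values())
    vocab = set().union(*word_counts.values())
    best_label, best_score = None, float("-inf")
    for label, n in label_counts.items():
        score = math.log(n / total_ads)  # log prior
        denom = sum(word_counts[label].values()) + len(vocab) + 1
        for word in tokenize(text):
            score += math.log((word_counts[label][word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

ads = [
    ("apartment for rent no children no immigrants", "discriminatory"),
    ("hiring young men only apply today", "discriminatory"),
    ("spacious apartment for rent near downtown", "ok"),
    ("now hiring experienced engineers apply today", "ok"),
]
model = train(ads)
print(predict(model, "apartment for rent immigrants need not apply"))
# prints "discriminatory"
```

<p>Gordon's caveat applies directly to a sketch like this: the classifier can only be as good as the labeled examples a human supplies, and it knows nothing about words it has never seen in training. 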
<\/p>\n<p>This kind of machine learning, known as supervised learning, already has dozens of applications, from determining which emails are spam to recognizing faces in a photo.<\/p>\n<p>But there are certainly limits to its effectiveness, Gordon adds.<\/p>\n<p>\"You're not going to do better than your source of information,\" he explains. Teaching the machine to recognize discriminatory ads requires lots of examples of similar ads.<\/p>\n<p>\"If the distribution of ads that you see changes, the machine learning might stop working,\" Gordon explains, noting that these changing strategies on the part of content producers can often get them past AI filters, like your email spam filter. Insufficient understanding of details on the part of machines can also lead to high-profile problems, like Google Photos, which in 2015 mistakenly labeled black people as gorillas.<\/p>\n<p>Teaching the machine also means having a person take the time to go through hundreds of ads and label them, as well as continue to check and correct a machine's work. That makes the system vulnerable to human biases.<\/p>\n<p>\"That process of refinement involves sorting, labeling and tagging, which is difficult to do without using assumptions about ethnicity, gender, race, religion and the like,\" explains Amy Webb, founder and CEO of the Future Today Institute, in an email to the Monitor. \"The system learns through a process of real-time experimenting and testing, so once bias creeps in, it can be difficult to remove it.\"<\/p>\n<p>More overt bias issues have already been observed with AI bots, like Tay, Microsoft's chatbot, which repeated the Nazi slogans fed to it by Twitter users. While this bias may be more subtle, since it is presumably unintentional, it could conceivably create persistent problems. 
<\/p>\n<p>\"Unbiased machine learning is the subject of a lot of current research,\" says Gordon. One answer, he suggests, is having \"a lot of teachers,\" since it offers a consensus view of discrimination that may be less vulnerable to individual biases.<\/p>\n<p>Since October, the company has been working with civil rights groups and government organizations to strengthen its nondiscrimination policies. Despite potential obstacles, those groups seem pleased with the progress that the AI system and associated steps represent.<\/p>\n<p>\"We like Facebook for following up on its commitment to combatting discriminatory targeting in online advertisements,\" Wade Henderson, president and chief executive officer of the Leadership Conference on Civil and Human Rights, said in a statement on Wednesday.<\/p>\n<p>And machine learning is likely to become a component in other companies' efforts to combat discrimination, as well as perform a host of other functions. Though he notes that tech companies are typically fairly secretive about their plans, Gordon suggests that such projects are probably already underway at many of them.<\/p>\n<p>\"Facebook isn't the only company doing this. As far as I know, all of the tech companies are considering a similar ... question,\" he concludes.<\/p>\n<p>But is the ability to target advertising on social media platforms really worth the trouble? Professor Webb, who also teaches at the NYU School of Business, sounds a note of caution.<\/p>\n<p>\"My behavior in Facebook is not an accurate representation for who I really am, how I think, and how I act, and that's true of most people,\" she writes. \"We sometimes like, comment and post authentically, but more often we're revealing just the aspirational versions of ourselves.\" That may ultimately not be useful for would-be advertisers. 
<\/p>\n<p>Read more: <\/p>\n<p><a target=\"_blank\" rel=\"nofollow\" href=\"http:\/\/www.csmonitor.com\/Technology\/2017\/0209\/Can-AI-make-Facebook-more-inclusive\" title=\"Can AI make Facebook more inclusive? - Christian Science Monitor\">Can AI make Facebook more inclusive? - Christian Science Monitor<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>February 9, 2017: When faced with a challenge, what's a tech company to do? Turn to technology, Facebook suggests. Following criticism that its ad-approval process was failing to weed out discriminatory ads, Facebook has revised its approach to advertising, the company announced on Wednesday. <a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/ai\/can-ai-make-facebook-more-inclusive-christian-science-monitor\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":5,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[187743],"tags":[],"class_list":["post-176519","post","type-post","status-publish","format-standard","hentry","category-ai"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/176519"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/5"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=176519"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/p
osts\/176519\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=176519"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=176519"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=176519"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}