{"id":204255,"date":"2017-07-08T04:08:17","date_gmt":"2017-07-08T08:08:17","guid":{"rendered":"http:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/facebook-cant-solve-its-hate-speech-problem-with-automation-popular-science\/"},"modified":"2017-07-08T04:08:17","modified_gmt":"2017-07-08T08:08:17","slug":"facebook-cant-solve-its-hate-speech-problem-with-automation-popular-science","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/automation\/facebook-cant-solve-its-hate-speech-problem-with-automation-popular-science\/","title":{"rendered":"Facebook can&#8217;t solve its hate speech problem with automation &#8211; Popular Science"},"content":{"rendered":"<p>How, exactly, are people supposed to talk to each other online? For Facebook, this is as much an operational question as it is a philosophical one.<\/p>\n<p>Last week, Facebook announced it has two billion users, which means roughly 27 percent of the world's 7.5 billion people use the social media network. In a post at Facebook's Hard Questions blog, the company offered a look at the internal logic behind how it manages hate speech, the day before ProPublica broke a story about apparently hypocritical ways in which those standards are applied. Taken together, they make Facebook's attempt to regulate speech look impossible.<\/p>\n<p>Language is hard. AI trained on human language, for example, will replicate the biases of its users, just by seeing how words are used in relation to each other. And the same word, in the same sentence, can mean different things depending on the identity of the speaker, the identity of the person to whom it's addressed, and even the manner of conversation. And that's not even considering the multiple definitions of a given word.
<\/p>\n<p>\"What does the statement 'burn flags not fags' mean?\" writes Richard Allan, Facebook's VP of Public Policy for Europe, the Middle East, and Africa. \"While this is clearly a provocative statement on its face, should it be considered hate speech? For example, is it an attack on gay people, or an attempt to 'reclaim' the slur? Is it an incitement of political protest through flag burning? Or, if the speaker or audience is British, is it an effort to discourage people from smoking cigarettes (fag being a common British term for cigarette)? To know whether it's a hate speech violation, more context is needed.\"<\/p>\n<p>Reached for comment, a Facebook spokesperson confirmed that the Hard Questions post wasn't representative of any new policy. Instead, it's simply transparency into the logic of how Facebook polices speech.<\/p>\n<p>\"People want certain things taken down, they want the right to say things,\" says Kate Klonick, a resident fellow at the Information Society Project at Yale. \"They want there to be a perfect filter that takes down the things that are hate speech or racist or sexist or hugely offensive.\"<\/p>\n<p>One reason that Facebook may be parsing how it regulates speech in public is that, thanks to a trove of internal documents leaked to the Guardian, others are reporting on Facebook's internal guidance for what speech to take down and what to leave up.<\/p>\n<p>\"According to one document, migrants can be referred to as 'filthy' but not called 'filth,'\" reports ProPublica. \"They cannot be likened to filth or disease 'when the comparison is in the noun form,' the document explains.\"<\/p>\n<p>Klonick studies how Facebook governs its users, and while the kinds of moderation discussed in the Hard Questions post aren't new, the transparency is.
Says Klonick, \"It's not secret anymore that this happens and that your voice is being moderated, your feed is being moderated behind the scenes.\"<\/p>\n<p>To Klonick's eye, by starting to disclose more of what goes on in the sausage factory, Facebook is trying to preempt criticism of how, exactly, it chooses to moderate speech.<\/p>\n<p>There's nothing, though, that says Facebook has to regulate all the speech it does, beyond what's required by the law in the countries where Facebook operates. Several examples in the Hard Questions post hinge on context: Is the person reclaiming a former slur, or is it a joke among friends or an attack by a stranger against a member of a protected group? But what happens when war suddenly changes a term from casual use to something reported as hate speech?<\/p>\n<p>One example from Hard Questions is how Facebook chose to handle the word \"moskal,\" a Ukrainian slang term for Russians, and \"khokhol,\" a Russian slang term for Ukrainians. When a conflict between Russia and Ukraine broke out in 2014, people in both countries started reporting the terms used by the opposing side as hate speech. In response, says Allan, \"We did an internal review and concluded that they were right. We began taking both terms down, a decision that was initially unpopular on both sides because it seemed restrictive, but in the context of the conflict felt important to us.\"<\/p>\n<p>One common use of reporting features on websites is for people to simply report others with whom they disagree, invoking the ability of the site to censor their ideological foes. With the conversion of regular language to slurs in the midst of a war, Facebook appears to have chosen to try to calm tensions itself, by removing posts with the offending words.
<\/p>\n<p>    \"I thought that example was really interesting because he says    explicitly that the decision to censor those words was    unpopular on both sides,\" says Jillian York, the EFF's Director    for International Freedom of Expression. \"Thats very much a    value judgement. Its not saying 'people were killing    themselves because of this term, and so were protecting    ourselves from liability;' which is one thing that they do, one    thats a little more understandable. This is Facebook saying,    'the people didnt want this, but we decided it was right for    them anyway.'\"  <\/p>\n<p>    And while Facebook ultimately sets policy about what to take    down and what to leave up, the work of moderation is done by    people, and like with Facebooks    moderation of video, this work will continue to be done by    people for the foreseeable future.  <\/p>\n<p>    \"People think that its easy to automate this, and I think that    that blogpost is why its so difficult right now, how far we    are from automating it,\" says Klonick. \"Those are difficult    human judgements to make, were years away from that. These    types of examples that Richard Allen talked about in his blog    post are exactly why were so far from automating this    process.\"  <\/p>\n<p>    Again, Facebook is deciding the rules and standards for speech    for over a quarter of the worlds population, something few    governments in history have ever come close to or exceeded.    (Ancient Persia is a     rare exception). With the enormity of the task, its worth    looking at not just how Facebook chooses to regulate speech,    but why it chooses to do so.  <\/p>\n<p>    \"On scale, moderating content for 2 billion people is    impossible,\" says York, \"so why choose to be restrictive beyond    the law? 
Why is Facebook trying to be the world's regulator?\"<\/p>\n<p>Read more here: <\/p>\n<p><a target=\"_blank\" rel=\"nofollow\" href=\"http:\/\/www.popsci.com\/facebook-cant-use-automation-to-kill-hate-speech\" title=\"Facebook can't solve its hate speech problem with automation - Popular Science\">Facebook can't solve its hate speech problem with automation - Popular Science<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>How, exactly, are people supposed to talk to each other online? For Facebook, this is as much an operational question as it is a philosophical one. <a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/automation\/facebook-cant-solve-its-hate-speech-problem-with-automation-popular-science\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":6,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[187732],"tags":[],"class_list":["post-204255","post","type-post","status-publish","format-standard","hentry","category-automation"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/204255"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=204255"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/204255\/revisions"}],"wp:attachment"
:[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=204255"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=204255"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=204255"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}