{"id":227348,"date":"2017-07-12T12:20:40","date_gmt":"2017-07-12T16:20:40","guid":{"rendered":"http:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/uncategorized\/what-does-facebook-think-free-speech-is-for-harvard-crimson.php"},"modified":"2017-07-12T12:20:40","modified_gmt":"2017-07-12T16:20:40","slug":"what-does-facebook-think-free-speech-is-for-harvard-crimson","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/free-speech\/what-does-facebook-think-free-speech-is-for-harvard-crimson.php","title":{"rendered":"What Does Facebook Think Free Speech is For? &#8211; Harvard Crimson"},"content":{"rendered":"<p><p>    Who should decide what is hate speech in an online global    community? Thats the question Richard Allan, Facebooks Vice    President for Policy in the Middle East and Asia,     is asking in the wake of reporting on the social networks    content moderation guidelines. Reporting group ProPublicas        headlineFacebooks Secret Censorship Rules Protect White    Men from Hate Speech But Not Black Childrencaptures our    almost dystopian fear of an all-powerful corporation rigging    political discourse to serve shareholders, advertisers, and    procrastinators the world over. Just imagine the 7,500-strong    community    operations team as uniformed propagandists searching for    content that bucks the party line, and your Orwellian    masterpiece is off to a fine start.  <\/p>\n<p>    At first glance, removing hate speech might seem to depend    exclusively on moderators ability to judge which posts cause    serious harm to usersa task difficult only because determining    that harm is so tricky. Yet as Facebook acknowledges, its own    categories of hate speech dont function purely as    immunizations from feeling threatened by others online.  
<\/p>\n<p>    For example, categorically demeaning African arrivals to Italy    violates the social networks rules, but advocating for    proposals to deny refugees Italian welfare does not. And this    remains true even if both actions cause comparable suffering to    their migrant subjects. As Allan explains with reference to    German debates on migrants, we have left in place the ability    for people to express their views on immigration itself. And we    are deeply committed to making sure Facebook remains a place    for legitimate debate. In other words, Facebook will permit    some legitimate posts in spite of their potential to harm    shielded groups.  <\/p>\n<p>    What kind of debate qualifies as legitimate in Facebooks eyes?    The company doesnt say. One approach is to classify hateful    content, like much-scrutinized    fake news, as a subset of false speech. Group-focused hate    speech contains generalizations or arguments that take no time    to debunk, while more involved political content requires    prohibitive resources to fact-check properly.  <\/p>\n<p>    However, even if removing egregiously incorrect posts were a    good idea, Facebook uses other variables to decide the    boundaries of legitimate discussion. When then-presidential    candidate Donald Trump called for a ban on Muslims entering the    United States, he likely ran afoul of the sites rules against    calling for exclusion of protected classesbut     reports     indicate that Facebook CEO Mark E. Zuckerberg, a former    member of the Class of 2006, permitted the content to remain on    his platform because it was part of the political discourse.    The companys efforts to exclude hate do not amount to    eradicating falsehood.  <\/p>\n<p>    Facebooks selective moderation suggests that legitimate    content for the company is not necessarily true or respectful    content, but material whose publication it deems valuable from    the publics point of view. 
Even if the social network could have stopped users from hearing Trump's Muslim ban speech, for instance, doing so would have prevented voters from learning something important about the candidate's policy preferences.<\/p>\n<p>This desire to inform citizens just illustrates how any outfit's censorship practices, or lack thereof, reflect a normative set of ideas about what best serves the interests of users. When Facebook, Google, or others frame content regulation as concerned with the “safety” of users, they mask the extent to which that safety is just one piece of a broader, and possibly controversial, conception of how we should lead our digital lives.<\/p>\n<p>A social network that helps to structure the discourse of nearly two billion individuals ought to justify the design it chooses for them. And to its credit, Facebook seems more interested than just about any other technology company in giving explicit voice to its vision of building “global community.” But the fact that the company's moderation guidelines were developed ad hoc and without user input over the span of several years is worrying and hard to defend. When we stop pretending that online platforms are amoral structures, we also see the urgent need to scrutinize their foundations.<\/p>\n<p>As it stands, the question of who ought to define and regulate hate speech is a moot one. With the exception of some European authorities, Facebook and other companies are already answering it for us, whether or not we accept their verdicts. Undoubtedly, many well-intentioned technologists envision a future in which online platforms guide political and social debate to be as robust as possible. But absent major changes, we can only hope that their utopia is not our dystopian future.<\/p>\n<p>Gabriel H. Karger '18 is a philosophy concentrator in Mather House.
<\/p>\n<p>          Eliot House Moves Facebook On-Line        <\/p>\n<p>          It's late at night. You're surfing the Internet, staring          at your own reflection in the screen, when you notice          your        <\/p>\n<p>          Yale To Give Professors Facebook Access        <\/p>\n<p>          At Yale, the days of quietly slipping into the back of a          classroomand promptly dozing off to your professors          muted        <\/p>\n<p>          techTALK        <\/p>\n<p>          Matching faces to names is a Harvard pastime, thanks to          the Freshman Register and the various House facebooks.          Student demand        <\/p>\n<p>          Safe Spaces and Free Speech        <\/p>\n<p>          While the University of Chicago may have overstepped in          issuing a blanket condemnation of safe spaces and content          warnings, its letter was also a reaction to the          suppression of speech that has every right to be heard on          university campuses everywhere.        <\/p>\n<p>          BGLTQ Office Prepares For Visit of Anti-Transgender 'Free          Speech' Bus        <\/p>\n<p><!-- Auto Generated --><\/p>\n<p>Read more from the original source: <\/p>\n<p><a target=\"_blank\" rel=\"nofollow\" href=\"http:\/\/www.thecrimson.com\/column\/rights-and-wrongs\/article\/2017\/7\/12\/karger-facebook-free-speech\/\" title=\"What Does Facebook Think Free Speech is For? - Harvard Crimson\">What Does Facebook Think Free Speech is For? - Harvard Crimson<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p> Who should decide what is hate speech in an online global community? Thats the question Richard Allan, Facebooks Vice President for Policy in the Middle East and Asia, is asking in the wake of reporting on the social networks content moderation guidelines. 
Reporting group ProPublica's headline, “Facebook's Secret Censorship Rules Protect White Men from Hate Speech But Not Black Children,” captures our almost dystopian fear of an all-powerful corporation rigging political discourse to serve shareholders, advertisers, and procrastinators the world over. <a href=\"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/free-speech\/what-does-facebook-think-free-speech-is-for-harvard-crimson.php\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"limit_modified_date":"","last_modified_date":"","_lmt_disableupdate":"","_lmt_disable":"","footnotes":""},"categories":[388392],"tags":[],"class_list":["post-227348","post","type-post","status-publish","format-standard","hentry","category-free-speech"],"modified_by":null,"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/227348"}],"collection":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/comments?post=227348"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/227348\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/media?parent=227348"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/categories?post=227348"},{"taxonomy":"p
ost_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/tags?post=227348"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}