What Does Facebook Think Free Speech is For?

Who should decide what is hate speech in an online global community? That's the question Richard Allan, Facebook's Vice President for Public Policy in Europe, the Middle East, and Africa, is asking in the wake of reporting on the social network's content moderation guidelines. ProPublica's headline, "Facebook's Secret Censorship Rules Protect White Men From Hate Speech But Not Black Children," captures our almost dystopian fear of an all-powerful corporation rigging political discourse to serve shareholders, advertisers, and procrastinators the world over. Just imagine the 7,500-strong community operations team as uniformed propagandists searching for content that bucks the party line, and your Orwellian masterpiece is off to a fine start.

At first glance, removing hate speech might seem to depend exclusively on moderators' ability to judge which posts cause serious harm to users, a task difficult only because determining that harm is so tricky. Yet as Facebook acknowledges, its own categories of hate speech don't function purely as immunizations against feeling threatened by others online.

For example, categorically demeaning African arrivals to Italy violates the social network's rules, but advocating for proposals to deny refugees Italian welfare does not. And this remains true even if both actions cause comparable suffering to their migrant subjects. As Allan explains with reference to German debates on migrants, "we have left in place the ability for people to express their views on immigration itself. And we are deeply committed to making sure Facebook remains a place for legitimate debate." In other words, Facebook will permit some "legitimate" posts in spite of their potential to harm shielded groups.

What kind of debate qualifies as legitimate in Facebook's eyes? The company doesn't say. One approach is to classify hateful content, like much-scrutinized "fake news," as a subset of false speech. Group-focused hate speech contains generalizations or arguments that take no time to debunk, while more involved political content requires prohibitive resources to fact-check properly.

However, even if removing egregiously incorrect posts were a good idea, Facebook uses other variables to decide the boundaries of legitimate discussion. When then-presidential candidate Donald Trump called for a ban on Muslims entering the United States, he likely ran afoul of the site's rules against calling for exclusion of protected classes. But reports indicate that Facebook CEO Mark E. Zuckerberg, a former member of the Class of 2006, permitted the content to remain on his platform because it was "part of the political discourse." The company's efforts to exclude hate do not amount to eradicating falsehood.

Facebook's selective moderation suggests that "legitimate" content, for the company, is not necessarily true or respectful content, but material whose publication it deems valuable from the public's point of view. Even if the social network could have stopped users from hearing Trump's Muslim ban speech, for instance, doing so would have prevented voters from learning something important about the candidate's policy preferences.

This desire to inform citizens just illustrates how any outfit's censorship practices, or lack thereof, reflect a normative set of ideas about what best serves the interests of users. When Facebook, Google, or others frame content regulation as concerned with the safety of users, they mask the extent to which that safety is just one piece of a broader, and possibly controversial, conception of how we should lead our digital lives.

A social network that helps to structure the discourse of nearly two billion individuals ought to justify the design it chooses for them. And to its credit, Facebook seems more interested than just about any other technology company in giving explicit voice to its vision of "building global community." But the fact that the company's moderation guidelines were developed ad hoc, and without user input, over the span of several years is worrying and hard to defend. When we stop pretending that online platforms are amoral structures, we also see the urgent need to scrutinize their foundations.

As it stands, the question of who ought to define and regulate hate speech is a moot one. With the exception of some European authorities, Facebook and other companies are already answering it for us, whether or not we accept their verdicts. Undoubtedly, many well-intentioned technologists envision a future in which online platforms guide political and social debate to be as robust as possible. But absent major changes, we can only hope that their utopia is not our dystopian future.

Gabriel H. Karger '18 is a philosophy concentrator in Mather House.
