Why Trump's Twitter ban isn't a violation of free speech: Deplatforming, explained

Within days of the January 6 Capitol insurrection, outgoing President Donald Trump's internet presence was in upheaval. Trump's social media accounts were suspended across Facebook, Twitter, YouTube, Instagram, Snapchat, Twitch, and TikTok.

The same was true for many of Trump's more extremist followers. Twitter suspended more than 70,000 accounts primarily dedicated to spreading the false right-wing conspiracy theory QAnon. Apple, Google, and Amazon Web Services banned the right-wing Twitter alternative Parler, effectively shutting down the site indefinitely (though it's attempting to return) and relegating many right-wingers to the hinterlands of the internet.

Permanently revoking users' access to social media platforms and other websites, a practice known as deplatforming, isn't a new concept; conservatives have been railing against it and other forms of social media censure for years. But Trump's high-profile deplatforming has spawned new confusion, controversy, and debate.

Many conservatives have cried censorship, believing they've been targeted by a collaborative, collective agreement among leaders in the tech industry in defiance of their free speech rights. On January 13, in a long thread about the site's decision to ban Trump, Twitter CEO Jack Dorsey rejected that idea. "I do not believe this [collective deplatforming] was coordinated," he said. "More likely: companies came to their own conclusions or were emboldened by the actions of others."

Still, the implications for free speech have worried conservatives and liberals alike. Many have expressed wariness about the power social media companies have to simply oust whoever they deem dangerous, while critics have pointed out the hypocrisy of social media platforms spending years bending over backward to justify not banning Trump despite his posts violating their content guidelines, only to make an about-face during his final weeks in office. Some critics, including Trump himself, have even floated the misleading idea that social media companies might be brought to heel if lawmakers were to alter a fundamental internet law called Section 230, a move that would instead curtail everyone's internet free speech.

All of these complicated, chaotic arguments have clouded a relatively simple fact: Deplatforming is effective at rousting extremists from mainstream internet spaces. It's not a violation of the First Amendment. But thanks to Trump and many of his supporters, it has inevitably become a permanent part of the discourse involving free speech and social media moderation, and the responsibilities that platforms can and should have to control what people do on their sites.

We know deplatforming works to combat online extremism because researchers have studied what happens when extremist communities get routed from their homes on the internet.

Radical extremists across the political spectrum use social media to spread their messaging, so deplatforming those extremists makes it harder for them to recruit. Deplatforming also decreases their influence; a 2016 study of ISIS deplatforming found, for example, that ISIS influencers lost followers and clout as they were forced to bounce around from platform to platform. And when was the last time you heard the name Milo Yiannopoulos? After the infamous right-wing instigator was banned from Twitter and his other social media homes in 2016, his influence and notoriety plummeted. Right-wing conspiracy theorist Alex Jones met a similar fate when he and his media network Infowars were deplatformed across social media in 2018.

The more obscure and hard to access an extremist's social media hub is, the less likely mainstream internet users are to stumble across the group and be drawn into its rhetoric. That's because major platforms like Facebook and Twitter generally act as gateways for casual users; from there, they move into the smaller, more niche platforms where extremists might congregate. If extremists are banned from those major platforms, the vast majority of would-be recruits won't find their way to those smaller niche platforms.

Those extra hurdles (added obscurity and difficulty of access) also apply to the in-group itself. Deplatforming disrupts extremists' ability to communicate with one another, and in some cases creates a barrier to continued participation in the group. A 2018 study tracking a deplatformed British extremist group found that not only did the group's engagement decrease after it was deplatformed, but so did the amount of content it published online.

"Social media companies should continue to censor and remove hateful content," the study's authors concluded. "Removal is clearly effective, even if it is not risk-free."

Deplatforming impacts the culture of both the platform that's doing the ousting and the group that gets ousted. When internet communities send a message of zero tolerance toward white supremacists and other extremists, other users also grow less tolerant and less likely to indulge extremist behavior and messaging. For example, after Reddit banned several notorious subreddits in 2015, leaving many toxic users no place to gather, a 2017 study of the remaining communities on the site found that hate speech decreased across Reddit.

That may seem like an obvious takeaway, but it perhaps needs to be repeated: The element of public shaming involved in kicking people off a platform reminds everyone to behave better. As such, the message of zero tolerance that tech companies sent by deplatforming Trump is long overdue in the eyes of many, such as the millions of Twitter users who spent years pressuring the company to ban the Nazis and other white supremacists whose rhetoric Trump frequently echoed on his Twitter account. But it is a welcome message nonetheless.

As for the extremists, the opposite effect often takes place. Extremist groups have typically had to sand off their more extreme edges to be welcomed on mainstream platforms. So when that still isn't enough and they get booted off a platform like Twitter or Facebook, wherever they go next tends to be a much laxer, less restrictive, and, well, more extreme internet location. That often changes the nature of the group, making its rhetoric even more extreme.

Think about alt-right users getting booted off 4chan and flocking to even more niche and less moderated internet forums like 8chan, where they became even more extreme; a similar trajectory happened with right-wing users fleeing Twitter for explicitly right-wing-friendly spaces like Gab and Parler. The private chat platform Telegram, which rarely steps in to take action against the many extremist and radical channels it hosts, has become popular among terrorists as an alternative to more mainstream spaces. Currently, Telegram and the encrypted messaging app Signal are gaining waves of new users as a result of recent purges at mainstream sites like Twitter.

The more niche and less moderated an internet platform is, the easier it is for extremism to thrive there, away from public scrutiny. Because fewer people are likely to frequent such platforms, they can feel more insular and foster ideological echo chambers more readily. And because people tend to find their way to these platforms through word of mouth, they're often primed to receive the ideological messages that users on the platforms might be peddling.

But even as extreme spaces get more extreme and agitated, there's evidence to suggest that depriving extremist groups of a stable and consistent place to gather can make the groups less organized and more unwieldy. As a 2017 study of ISIS Twitter accounts put it, "The rope connecting ISIS's base of sympathizers to the organization's top-down, central infrastructure is beginning to fray as followers stray from the agenda set for them by strategic communicators."

Scattering extremists to the far corners of the internet essentially forces them to play online games of telephone regarding what their messaging, goals, and courses of action are, and contributes to the group becoming harder to control, which makes it more likely to be diverted from its stated cause and less likely to be corralled into action.

So far, all of this probably seems like a pretty good thing for the affected platforms and their user bases. But many people feel wary of the power dynamics in play, and question whether a loss of free speech is at stake.

One of the most frequent arguments against deplatforming is that it's a violation of free speech. This outcry is common whenever large communities are targeted based on the content of their tweets, like when Twitter finally did start banning Nazis by the thousands. The bottom line is that social media purges are not subject to the First Amendment rule that protects Americans' right to free speech. But many people think social media purges are akin to censorship, and it's a complicated subject.

Andrew Geronimo is the director of the First Amendment Clinic at Case Western Reserve law school. He explained to Vox that the reason there's so much debate about whether social media purges qualify as censorship comes down to the nature of social media itself. In essence, he told me, websites like Facebook and Twitter have replaced more traditional public forums.

"Some argue that certain websites have gotten so large that they've become the de facto public square," he said, "and thus should be held to the First Amendment's speech-protective standards."

In an actual public square, First Amendment rights would probably apply. But no matter how much social media may resemble that kind of real space, the platforms and the corporations that own them are, at least for now, considered private businesses rather than public spaces. And as Geronimo pointed out, "A private property owner isn't required to host any particular speech, whether that's in my living room, at a private business, or on a private website."

"The First Amendment constrains government power, so when private, non-governmental actors take steps to censor speech, those actions are not subject to constitutional constraints," he said.

This distinction is confusing even to the courts. In 2017, while ruling on a related issue, Supreme Court Justice Anthony Kennedy called social media "the modern public square," noting, "a fundamental principle of the First Amendment is that all persons have access to places where they can speak and listen, and then, after reflection, speak and listen once more." And while social media can seem like a place where few people have ever listened or reflected, it's easy to see why the comparison is apt.

Still, the courts have consistently rejected free speech arguments in favor of protecting the rights of social media companies to police their sites the way they want to. In one 2019 decision, the Ninth Circuit Court of Appeals cited the Supreme Court's assertion that "merely hosting speech by others is not a traditional, exclusive public function and does not alone transform private entities into state actors subject to First Amendment constraints." The courts generally reinforce the rights of website owners to run their websites however they please, which includes writing their own rules and booting anyone who misbehaves or violates those rules.

Geronimo pointed out that many of the biggest social media companies have already been enacting restrictions on speech for years. "These websites already ban a lot of constitutionally protected speech: pornography, hate speech, racist slurs, and the like," he noted. "Websites typically have terms of service that contain restrictions on the types of speech, even constitutionally protected speech, that users can post."

But that hasn't stopped critics from raising concerns about the way tech companies removed Trump and many of his supporters from their platforms in the wake of the January 6 riot at the Capitol. In particular, Trump himself claimed a need for Section 230 reform, that is, reform of the pivotal clause of the Communications Decency Act that basically allows the internet as we know it to exist.

Known as the "safe harbor" rule of the internet, Section 230 of the 1996 Communications Decency Act is a pivotal legal clause and one of the most important pieces of internet legislation ever created. It holds that "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."

Simply put, Section 230 protects websites from being held legally responsible for what their users say and do while using said websites. It's a tiny phrase but a monumental concept. As Geronimo observed, Section 230 "allows websites to remove user content without facing liability for censoring constitutionally protected speech."

But Section 230 has increasingly come under fire from Republican lawmakers seeking to more strictly regulate everything from sex websites to social media sites where conservatives allege they are being unfairly targeted after their opinions or activities get them suspended, banned, or censured. These lawmakers, in an effort to force websites like Twitter to allow all speech, want to make websites responsible for what their users post. They seem to believe that altering Section 230 would force the websites to face penalties if they censored conservative speech, even if that speech violates the websites' rules (and despite several inherent contradictions). But as Recode's Sara Morrison summed up, messing with Section 230 creates a huge set of problems:

This law has allowed websites and services that rely on user-generated content to exist and grow. If these sites could be held responsible for the actions of their users, they would either have to strictly moderate everything those users produce, which is impossible at scale, or not host any third-party content at all. Either way, the demise of Section 230 could be the end of sites like Facebook, Twitter, Reddit, YouTube, Yelp, forums, message boards, and basically any platform that's based on user-generated content.

So, rather than guaranteeing free speech, restricting the power of Section 230 would effectively kill free speech on the internet as we know it. As Geronimo told me, "any government regulation that would force [web companies] to carry certain speech would come with significant First Amendment problems."

However, Geronimo also allows that just because deplatforming may not be a First Amendment issue doesn't mean that it's not a free speech issue. "People who care about free expression should be concerned about the power that the largest internet companies have over the content of online speech," he said. "Free expression is best served if there are a multitude of outlets for online speech, and we should resist the centralization of the power to censor."

And indeed, many people have expressed concerns about deplatforming as an example of tech company overreach, including the tech companies themselves.

In the wake of the attack on the Capitol, a public debate arose about whether tech and social media companies were going too far in purging extremists from their user bases and shutting down specific right-wing platforms. Many observers have worried that the moves demonstrate too much power on the part of companies to decide what kinds of opinions are sanctioned on their platforms and what aren't.

"A company making a business decision to moderate itself is different from a government removing access, yet can feel much the same," Twitter's Jack Dorsey stated in his self-reflective thread on banning Trump. He went on to express hope that a balance between over-moderation and deplatforming extremists can be achieved.

This is by no means a new conversation. In 2017, when the web service provider Cloudflare banned a notorious far-right neo-Nazi site, Cloudflare CEO Matthew Prince opined on his own power. "I woke up this morning in a bad mood and decided to kick them off the Internet," he wrote in a subsequent memo to his employees. "Having made that decision we now need to talk about why it is so dangerous. [...] Literally, I woke up in a bad mood and decided someone shouldn't be allowed on the Internet. No one should have that power."

But while Prince was hand-wringing, others were celebrating what the ban meant for violent hate groups and extremists. And that is really the core issue for many, many members of the public: When extremists are deplatformed online, it becomes harder for them to commit real-world violence.

"Deplatforming Nazis is step one in beating far right terror," antifa activist and writer Gwen Snyder tweeted, in a thread urging tech companies to do more to stop racists from organizing on Telegram. "No, private companies should not have this kind of power over our means of communication. That doesn't change the fact that they do, or the fact that they already deploy it."

Snyder argued that conservatives' fear of being penalized for the violence and hate speech they may spread online ignores that penalties for that offense have existed for years. What's new is that now the consequences are being felt offline and at scale, as a direct result of the real-world violence that is often explicitly linked to the online actions and speech of extremists. The free speech debate obscures that reality, but it's one that social media users who are most vulnerable to extremist violence (people of color, women, and other marginalized communities) rarely lose sight of. After all, while people who've been kicked off Twitter for posting violent threats or hate speech may feel like they're the real victims here, there's someone on the receiving end of that anger and hate, sometimes even in the form of real-world violence.

The deplatforming of Trump already appears to be working to curb the spread of election misinformation that prompted the storming of the Capitol. And while the debate about the practice will likely continue, it seems clear that the expulsion of extremist rhetoric from mainstream social media is a net gain.

Deplatforming won't single-handedly put a stop to the spread of extremism across the internet; the internet is a big place. But the high-profile banning of Trump and the large-scale purges of many of his extremist supporters seem to have brought about at least some recognition that deplatforming is not only effective, but sometimes necessary. And seeing tech companies attempt to prioritize the public good over extremists' demand for a megaphone is an important step forward.
