Wars Are Bad for Free Speech. The Coronavirus War Will Be No Exception

The prominent legal scholar Geoffrey Stone reminds us that war is a perilous time for freedom of speech. The struggle with COVID-19 seems like a war. Some have invoked executive authorities created for, and justified by, wartime exigency. Unity will be needed to defeat this invisible enemy. How is free speech doing in this difficult time?

Speech may be restricted by public or private authorities. Public officials have strong incentives to censor or restrict speech, perhaps especially during a crisis; hence, the First Amendment limits their power over the freedom of speech. Content moderators may also restrict speech. Their powers in this regard are limited largely by their own commitments to free speech and by consumer choice.

Some saber rattling by local police departments aside, the government has done little to limit dissent or a diversity of views. Yesterday, the Democratic leadership proposed a stimulus bill that imposed additional disclosure requirements on, and banned lobbying by, companies receiving aid. This proposal has little chance of becoming law, though it bears watching.

Social media platforms have been active in both advancing and suppressing speech. Most tech companies are providing their users with expert information about COVID-19. Facebook is actively trying to steer [users] toward authoritative sources about the pandemic. By their own accounts, they are also suppressing a lot of misinformation. Facebook has devoted an extra million dollars to fact-checking claims on its platform, though much of its emergency moderation effort has been focused on the less politically salient, though more immediately harmful, threat of mental health crises fostered by isolation. Suppression can be legitimate, as a recent case shows.

Shortly after the realities of the coronavirus pandemic took hold in the United States, a young Californian technologist named Aaron Ginn wrote a paper arguing that the government's response to the virus was overblown and costly. He posted the essay to Medium, an online platform specializing in hosting such writings. Less than a day later, the moderators at Medium removed the Ginn essay. The essay had attracted extensive criticism on Twitter from Carl T. Bergstrom, a professor of biology at the University of Washington, who noted that the paper was getting "too much traction here and even in traditional media." After its removal from Medium, the Ginn paper was uploaded to at least two sites, one of which was Zerohedge, a website that sometimes pushes conspiracy theories. The venue of republication has some effect on readers' perception of the article, just as the article's presence on Medium might affect Medium's reputation. Republication by Zerohedge may be reputationally poisonous, while an archive.org link, as I have used above, merely indicates that the content in question is no longer available at its original source. The extent to which the perceived reputational effects of hosting and deplatforming drive the politics of content moderation is underappreciated.

From a libertarian perspective, everything seems in order at this point. A person expressed a controversial opinion and published it online via a popular blogging platform. Acting within its rights, the platform's moderators took down the essay. They may have done so to avoid being associated with controversial and perhaps harmful speech. (To his credit, Ginn himself would later affirm that Medium and other platforms are free to associate with whom they want.) Meanwhile, the essay had prompted counter-speech by Bergstrom challenging its claims about the pandemic. The suppression of the essay applied only to Medium. Everyone had a right to download the essay while it was on Medium, or from archive.org after its removal. Readers had no legal obligation to refrain from reposting the essay elsewhere. Ginn's article was available, counter-speech sought to expose its shortcomings, and everyone retained the responsibility to make up their own minds about Ginn's arguments.

Does speech misinforming people about the pandemic incite a kind of violence? Speech that misinforms people may convince them to behave in ways that spread COVID-19, which in turn infects some initially unidentified people who die or incur health-care costs. I do not think such incitement meets the legal test for criminalizing speech. The speech in question does not intentionally bring about imminent harm. But that incitement test applies to public, not private, authorities. Tech companies believe they are suppressing speech to halt the spread of the virus and its attendant harms, fulfilling a public responsibility. In other words, they are balancing the value of some speech against the probability of its doing harm in the general population. In current circumstances, the platforms' antipathy to hoaxes and conspiracy theories seems justified. But doesn't advocating a return of economic life by Easter pose a certain probability of doing harm to some people? How much speech threatens harm in current circumstances and beyond?

Finally, consider the potential costs of false positives by content moderators. Let's imagine that almost all the speech removed from the biggest platforms does threaten to harm some people. Yet content moderators will inevitably make mistakes, especially if moderation by algorithm matters more in coming weeks. Imagine also that a contrarian offers an unexpected insight about the pandemic, one that could save lives. Once shared on social media, his idea might seem not just contrarian but dangerous. Moderators might then remove his post. It might then turn up almost immediately on a fringe site where the idea goes unnoticed and unconsidered. Will many people be saying in late July, "if we had only known!" about the contrarian insight that would have saved lives?

Well, yes, they might be saying that later this year. But notice that the contrarian idea was not suppressed. It appeared elsewhere; anyone could consider its arguments, though most would steer clear of its marginal host. No system of social choice is perfect. But private content moderation beats public censorship even when the former suppresses speech that has great value. The nature of the internet means such suppression is never complete. Under a regime of piecemeal private moderation, it is still possible that the valuable speech will be heard and heeded. Because platforms are open by default, and moderation occurs post-publication, even fringe ideas can get an initial hearing. Censorship seeks to make sure the relevant speech is neither heard nor heeded.

Our current crisis will not be good for free speech. Classical liberals may regret anyone suppressing speech, even when it is justified. However, private moderators can legitimately suppress speech on social media. Indeed, the leaders of these companies may feel they have a larger responsibility to suppress some speech during a pandemic. We should keep in mind that suppressed speech will be removed from one platform, not from the internet. It may also be stigmatized. That outcome will still be better for speech than being censored and forgotten. We might wonder how slippery the slope will be in defining harmful speech and how costly the moderators' errors will turn out to be. Giving acceptable answers to those questions is also a part of the responsibility tech companies have to the larger public in this crisis and beyond.

This article by John Samples first appeared at the Cato Institute.

Image: School media department staff edit online classes following the outbreak of the coronavirus disease (COVID-19) in the holy city of Karbala, Iraq, March 26, 2020. REUTERS/Abdullah Dhiaa Al-Deen
