Hate Speech Online | Freedom Forum Institute

Posted: November 17, 2021 at 1:23 pm

By David L. Hudson Jr., First Amendment Scholar, and Mahad Ghani, First Amendment Center Fellow

Last updated: September 18, 2017

The Internet revolutionized the way in which people share information and communicate with each other. But by providing an open forum for people to communicate with each other, the Internet also paved the way for speech that is usually reserved for the edges of society. Racists, misogynists, xenophobes, and terrorists have used the Web as a haven to communicate their noxious views, harass others, and even plan nefarious deeds.

Some Web sites deny that the Holocaust occurred. Others promote the beating of gays and lesbians. Still others rail against Muslims and Islam in the United States, or are anti-Christian. The 2016 election illuminated the extent to which fake news had infiltrated society, resulting in incidents like the Pizzagate episode, in which a man armed with an assault rifle entered a family pizza restaurant because of false reporting he had read online. Many such sites target young people and seek to promote their hateful ideologies.

"From cyberbullying to terrorists' use of the Internet to recruit and incite, Internet hate speech is a serious problem," said Christopher Wolf, immediate past chair of the International Network Against Cyber-Hate, in an e-mail interview. "The most notorious hate crimes of late, such as the shooting at the Holocaust Museum (in Washington, D.C.), were committed by individuals who used the Internet to spread hate and to receive reinforcement from like-minded haters, who made hatred seem normal and acceptable."

Some contend that hate speech infringes on the 14th Amendment's guarantee of equal protection under the law. Alexander Tsesis, for example, wrote in a 2009 article that hate speech is a threatening form of communication that is contrary to democratic principles.1

However, the First Amendment provides broad protection to offensive, repugnant and hateful expression. Political speech receives the greatest protection under the First Amendment, and discrimination against viewpoints runs counter to free-speech principles. Much hate speech qualifies as political, even if misguided. Regulations against hate speech are sometimes imposed because the government (at any level) disagrees with the views expressed. Such restrictions may not survive constitutional scrutiny in court.

Furthermore, the U.S. Supreme Court in Reno v. ACLU (1997) noted (albeit in a non-hate-speech context) that the Internet is entitled to the highest level of First Amendment protection, akin to the print medium. In other words, online hate speech receives as much protection as a hate-speech pamphlet distributed by the Ku Klux Klan.

Given these factors (high protection for political speech, hostility to viewpoint discrimination, and great solicitude for online speech), much hate speech is protected. However, despite its text, "Congress shall make no law … abridging the freedom of speech," the First Amendment does not safeguard all forms of speech.

Unless online hate speech crosses the line into incitement to imminent lawless action or true threats, the speech receives protection under the First Amendment.

In Brandenburg v. Ohio (1969), the Supreme Court said that "the constitutional guarantees of free speech and free press do not permit a State to forbid or proscribe advocacy of the use of force or of law violation except where such advocacy is directed to inciting or producing imminent lawless action and is likely to incite or produce such action."

Most online hate speech will not cross into the unprotected category of incitement to imminent lawless action because it will not meet the imminence requirement. A message of hate on the Internet may lead to unlawful action at some indefinite time in the future, but that possibility is not enough to meet the highly speech-protective test in Brandenburg.

For this reason, some legal commentators have urged that the Brandenburg standard be modified with respect to online hate speech. One commentator wrote in 2002: "New standards are needed to address the growing plague of Internet speech that plants the seeds of hatred, by combining information and incitement that ultimately enables others to commit violence."2

Another agreed, writing: "Although Brandenburg may be suitable for the traditional media outlets, which were well-established when it was decided, Internet speech and many unforeseen changes have made such a standard outdated."3 Still another called for a revised imminence requirement in Internet hate-speech cases to update Brandenburg and make it applicable online.4

Some online hate speech could fall into the unprotected category of true threats. The First Amendment does not protect an individual who posts online "I am going to kill you" about a specific individual. The Supreme Court explained the definition of true threats in Virginia v. Black (2003), in which it upheld most of a Virginia cross-burning statute, this way:

"True threats encompass those statements where the speaker means to communicate a serious expression of an intent to commit an act of unlawful violence to a particular individual or group of individuals. The speaker need not actually intend to carry out the threat. Rather, a prohibition on true threats protect(s) individuals from the fear of violence and from the disruption that fear engenders, in addition to protecting people from the possibility that the threatened violence will occur."

The Court in Virginia v. Black reasoned that crosses burned with an intent to intimidate others could constitutionally be barred, as provided in the Virginia law. (But the Court did strike down a part of the law that created a presumption that all cross-burnings were done with an intent to intimidate; for instance, one of the consolidated cases the Court considered involved a cross-burning with a property owner's permission.) Thus, online hate speech meant to communicate a serious expression of an intent to commit violence and intimidate others likely would not receive First Amendment protection.

A few cases have applied the true-threat standard to online speech. In Planned Parenthood v. American Coalition of Life Activists (2002), the 9th U.S. Circuit Court of Appeals held that some vigorous anti-abortion speech, including a Web site called the "Nuremberg Files" that listed the names and addresses of abortion providers who should be tried for "crimes against humanity," could qualify as a true threat. The 9th Circuit emphasized that the names of abortion providers who have been murdered because of their activities are lined through in black, while names of those who have been wounded are highlighted in grey.

Similarly, the 5th U.S. Circuit Court of Appeals ruled in U.S. v. Morales (2001) that an 18-year-old high school student made true threats when he wrote in an Internet chat room that he planned to kill other students at his school.

Even in the speech-restrictive world of the military, the U.S. Court of Appeals for the Armed Forces ruled in United States v. Wilcox (2008) that a member of the military could not be punished under the Uniform Code of Military Justice for racially offensive and hateful remarks about white supremacy that he posted on the Internet. The court wrote that the service member's various communications on the Internet "are not criminal in the civilian world [and] did not constitute unprotected dangerous speech under the circumstances of this case." No evidence was admitted showing that the communications either "interfere[d] with or prevent[ed] the orderly accomplishment of the mission, or present[ed] a clear danger to loyalty, discipline, mission, or morale of the troops."

The Supreme Court turned its attention to social media, and to when speech online constitutes a threat, in Elonis v. United States (2015). The case involved an individual who posted rap lyrics on his Facebook page in which he threatened to kill his estranged wife. He was charged with transmitting threats across state lines.

When the case reached the Supreme Court, the dispute centered on whether a post on social media crossed into the realm of a true threat, and in particular on what state of mind the government must prove. The lower courts had applied a "reasonable person" standard, asking only whether a reasonable person would view the posts as threatening. The Supreme Court held that such a negligence standard is not enough to sustain a conviction; the speaker's own mental state matters. It therefore reversed Elonis's conviction without reaching the First Amendment question.

Since the Supreme Court's decision, cases have been filed in which children who used things like bomb emojis have faced penalties. The mental-state requirement announced in Elonis will likely guide these cases moving forward.

If hateful Internet communications do not cross the line into incitement to imminent lawless action or a true threat, they receive First Amendment protection. The First Amendment distinguishes the United States from other countries. Alan Brownstein and Leslie Gielow Jacobs, in their book Global Issues in Freedom of Speech and Religion, write that the U.S. is a "free[-]speech outlier" in the arena of hate speech. Many other countries criminalize online hate speech.

With social media and the Internet increasingly linked to real-world acts of violence, and increasingly used as recruiting tools by terrorists, it is likely the law will change to address the changing times.

Wolf, chair of the Anti-Defamation League's Internet Task Force, said much could be done to counter online hate speech besides criminalizing it. "There is a wide range of things to be done, consistent with the First Amendment, including shining the light on hate and exposing the lies underlying hate and teaching tolerance and diversity to young people and future generations," he said. "Counter-speech is a potent weapon."

As the law currently stands, hate speech is protected so long as it remains just that: speech. The great Supreme Court Justice Oliver Wendell Holmes wrote that "if there is any principle of the Constitution that more imperatively calls for attachment than any other, it is the principle of free thought: not free thought for those who agree with us but freedom for the thought that we hate." The Constitution ensures freedom of speech for all by protecting even the most vile speech of all.

1 Alexander Tsesis, "Dignity and Speech: The Regulation of Hate Speech in a Democracy," 44 Wake Forest L. Rev. 497, 502 (2009).

2 Tiffany Kamasara, "Planting the Seeds of Hatred: Why Imminence Should No Longer Be Required to Impose Liability on Internet Communications," 29 Capital University L. Rev. 835, 837 (2002).

3 Jennifer L. Brenner, "True Threats: A More Appropriate Standard for Analyzing First Amendment Protection and Free Speech When Violence Is Perpetrated over the Internet," 78 North Dakota L. Rev. 753, 783 (2002).

4 John P. Cronan, "The Next Challenge for the First Amendment: The Framework for an Internet Incitement Standard," 51 Catholic University L. Rev. 425 (2002).
