It’s Time to Update Section 230


Internet social-media platforms are granted broad safe-harbor protections against legal liability for any content users post on their platforms. Those protections, spelled out in Section 230 of the 1996 Communications Decency Act (CDA), were written a quarter century ago, during a long-gone age of naïve technological optimism and primitive technological capabilities. So much has changed since the turn of the century that those protections are now desperately out of date. It’s time to rethink and revise them, and time for all leaders whose companies rely on internet platforms to understand how their businesses might be affected.

Social-media platforms provide undeniable social benefits. They gave democratic voice to oppressed people during the Arab Spring and a platform for the #MeToo and #BlackLivesMatter movements. They helped raise $115 million for ALS through the Ice Bucket Challenge, and they helped identify and coordinate rescue for victims of Hurricane Harvey.

But we’ve also learned just how much social devastation these platforms can cause, and that has forced us to confront previously unimaginable questions about accountability. To what degree should Facebook be held accountable for the Capitol riots, much of the planning for which occurred on its platform? To what degree should Twitter be held accountable for enabling terrorist recruiting? How much responsibility should Backpage and Pornhub bear for facilitating the sexual exploitation of children? What about other social-media platforms that have profited from the illicit sale of pharmaceuticals, assault weapons, and endangered wildlife? Section 230 simply didn’t anticipate such questions.

Section 230 has two key subsections that govern user-generated posts. The first, Section 230(c)(1), protects platforms from legal liability relating to harmful content posted on their sites by third parties. The second, Section 230(c)(2), allows platforms to police their sites for harmful content, but it doesn’t require that they remove anything, and it protects them from liability if they choose not to.

These provisions are good except for the parts that are bad.

The good stuff is pretty obvious. Because social-media platforms generate social benefits, we want to keep them in business, but that’s hard to imagine if they are instantly and irreversibly liable for anything and everything posted by third parties on their sites. Section 230(c)(1) was put in place to address this concern.

Section 230(c)(2), for its part, was put in place in response to a 1995 court ruling declaring that platforms that policed any user-generated content on their sites should be considered publishers of, and therefore legally liable for, all of the user-generated content posted to their sites. Congress rightly believed that ruling would make platforms unwilling to police their sites for socially harmful content, so it passed 230(c)(2) to encourage them to do so.

At the time, this seemed a reasonable approach. But the problem is that these two subsections are actually in conflict: when you grant platforms complete legal immunity for the content their users post, you also reduce their incentive to proactively remove content that causes social harm. Back in 1996, that didn’t seem to matter much. Even if social-media platforms had minimal legal incentives to police their platforms for harmful content, it seemed logical that they would do so out of economic self-interest, to protect their valuable brands.

Let’s just say we’ve learned a lot since 1996.

One thing we’ve learned is that we significantly underestimated the cost and scope of the harm that social-media posts can cause. We’ve also learned that platforms don’t have strong enough incentives to protect their brands by policing their sites. Indeed, we’ve discovered that hosting socially harmful content can be economically valuable to platform owners while posing relatively little economic harm to their public image or brand name.

Today there is a growing consensus that we need to update Section 230. Facebook’s Mark Zuckerberg even told Congress that “it may make sense for there to be liability for some of the content” and that Facebook “would benefit from clearer guidance from elected officials.” Elected officials on both sides of the aisle seem to agree: As a candidate, Joe Biden told the New York Times that Section 230 should be “revoked, immediately,” and Senator Lindsey Graham (R-SC) has said, “Section 230 as it exists today has got to give.” In an interview with NPR, former Congressman Christopher Cox (R-CA), a co-author of Section 230, called for rewriting the law, “because the original purpose of this law was to help clean up the Internet, not to facilitate people doing bad things.”

How might Section 230 be rewritten? Legal scholars have put forward a variety of proposals, almost all of which take a carrot-and-stick approach, tying a platform’s safe-harbor protections to its use of reasonable content-moderation policies. A representative example appeared in 2017, in a Fordham Law Review article by Danielle Citron and Benjamin Wittes, who argued that Section 230 should be revised with the following changes (additions shown here in brackets): “No provider or user of an interactive computer service [that takes reasonable steps to address known unlawful uses of its services that create serious harm to others] shall be treated as the publisher or speaker of any information provided by another information content provider [in any action arising out of the publication of content provided by that information content provider].”

This argument, which Mark Zuckerberg himself echoed in testimony he gave to Congress in 2021, is tied to the common-law standard of “duty of care,” which the American Affairs Journal has described as follows:

“Ordinarily, businesses have a common law duty to take reasonable steps to not cause harm to their customers, as well as to take reasonable steps to prevent harm to their customers. That duty also creates an affirmative obligation in certain circumstances for a business to prevent one party using the business’s services from harming another party. Thus, platforms could potentially be held culpable under common law if they unreasonably created an unsafe environment, as well as if they unreasonably failed to prevent one user from harming another user or the public.”

The courts have recently begun to adopt this line of thinking. In a June 25, 2021 decision, for example, the Texas Supreme Court ruled that Facebook is not shielded by Section 230 for sex-trafficking recruitment that occurs on its platform. “We do not understand Section 230 to create a lawless no-man’s-land on the Internet,” the court wrote. “Holding internet platforms accountable for the words or actions of their users is one thing, and the federal precedent uniformly dictates that Section 230 does not allow it. Holding internet platforms accountable for their own misdeeds is quite another thing. This is particularly the case for human trafficking.”

The duty-of-care standard is a good one, and the courts are moving toward it by holding social-media platforms responsible for how their sites are designed and operated. Under any reasonable duty-of-care standard, Facebook should have known it needed to take stronger steps against user-generated content advocating the violent overthrow of the government. Likewise, Pornhub should have known that sexually explicit videos tagged as “14yo” had no place on its site.

Not everybody believes in the need for reform. Some defenders of Section 230 argue that, as currently written, it enables innovation, because startups and other small businesses might not have sufficient resources to protect their sites with the same level of care that, say, Google can. But the duty-of-care standard addresses this concern, because what is considered reasonable protection for a billion-dollar corporation will naturally be very different from what is considered reasonable for a small startup. Another critique of Section 230 reform is that it will stifle free speech. But that’s simply not true: All of the duty-of-care proposals on the table today address content that is not protected by the First Amendment. There are no First Amendment protections for speech that induces harm (yelling “fire” in a crowded theater), encourages illegal activity (advocating for the violent overthrow of the government), or propagates certain types of obscenity (child sex-abuse material).

Technology firms should embrace this change. As social and commercial interaction increasingly moves online, social-media platforms’ weak incentives to curb harm are eroding public trust, making it harder for society to benefit from these services and harder for legitimate online businesses to profit from providing them.

Most legitimate platforms have little to fear from a restoration of the duty of care. Much of the risk stems from user-generated content, and many online businesses host little if any such content. Most online businesses also act responsibly, and so long as they exercise a reasonable duty of care, they are unlikely to face much risk of litigation. And, as noted above, the reasonable steps they would be expected to take would be proportionate to their services’ known risks and resources.

What good actors have to gain is a clearer delineation between their services and those of bad actors. A duty-of-care standard will hold accountable only those who fail to meet the duty. By contrast, broader regulatory intervention could limit the discretion of, and impose costs on, all businesses, whether they act responsibly or not. The odds of such broad regulation being imposed increase the longer the harms caused by bad actors persist. Section 230 must change.
