The Year That Changed the Internet – The Atlantic

Posted: December 30, 2020 at 5:13 pm

That enthusiasm didn't last, but mainstream platforms learned their lesson, accepting that they should intervene aggressively in more and more cases when users post content that might cause social harm. During the wildfires in the American West in September, Facebook and Twitter took down false claims about their cause, even though the platforms had not done the same when large parts of Australia were engulfed in flames at the start of the year. Twitter, Facebook, and YouTube cracked down on QAnon, a sprawling, incoherent, and constantly evolving conspiracy theory, even though its borders are hard to delineate. These actions had a domino effect, as podcast platforms, on-demand fitness companies, and other websites banned QAnon postings. Content moderation comes to every content platform eventually, and platforms are starting to realize this faster than ever.

As if to make clear how far things had come since 2016, Facebook and Twitter both took unusually swift action to limit the spread of a New York Post article about Hunter Biden mere weeks before the election. By stepping in to limit the story's spread before it had even been evaluated by any third-party fact-checker, these gatekeepers trumped the editorial judgment of a major media outlet with their own.

Gone is the naive optimism of social-media platforms' early days, when, in keeping with an overly simplified and arguably self-serving understanding of the First Amendment tradition, executives routinely insisted that more speech was always the answer to troublesome speech. Our tech overlords have been doing some soul-searching. As Reddit CEO Steve Huffman said, when doing a PR tour about an overhaul of his platform's policies in June, "I have to admit that I've struggled with balancing my values as an American, and around free speech and free expression, with my values and the company's values around common human decency."


Nothing symbolizes this shift as neatly as Facebook's decision in October (and Twitter's shortly after) to start banning Holocaust denial. Almost exactly a year earlier, Zuckerberg had proudly tied himself to the First Amendment in a widely publicized "stand for free expression" at Georgetown University. The strong protection of even literal Nazism is the most famous emblem of America's free-speech exceptionalism. But one year and one pandemic later, Zuckerberg's thinking, and, with it, the policy of one of the biggest speech platforms in the world, had evolved.

The evolution continues. Facebook announced earlier this month that it will join platforms such as YouTube and TikTok in removing, not merely labeling or down-ranking, false claims about COVID-19 vaccines. This might seem an obvious move; the virus has killed more than 315,000 people in the U.S. alone, and widespread misinformation about vaccines could be one of the most harmful forms of online speech ever. But until now, Facebook, wary of any political blowback, had refused to remove anti-vaccination content. The pandemic, though, also showed that complete neutrality is impossible. Even though it's not clear that removing content outright is the best way to correct misperceptions, Facebook and other platforms plainly want to signal that, at least in the current crisis, they don't want to be seen as feeding people information that might kill them.
