Why Police Love the Idea of Automated Content Moderation

In March 2019, a shooter livestreamed on Facebook as he attacked two mosques in Christchurch, New Zealand, killing more than 50 people and injuring many more. The livestream slipped through the platform's content moderation systems, allowing thousands of people to view, download, and redistribute the horrific footage. In the aftermath of the attacks, Facebook and YouTube removed millions of copies of the videos in an effort to stifle their viral spread, but platforms struggled to keep the videos offline.

It was just one of a series of horrific violent episodes that have circulated on social media platforms like Facebook Live, Instagram, and even Amazon's gaming platform, Twitch, in the past year. In response, governments and tech companies are forming new partnerships and strategies to keep violence from going viral online and to prevent it from being broadcast in the first place. But the strengthening of relationships between tech companies and law enforcement agencies illustrates the risk of content moderation, historically a zone of private regulation, being co-opted to facilitate law enforcement and digital surveillance.

Facebook, Twitch, and most other mainstream social media platforms ban the dissemination of violent content, but their content moderation systems aren't always able to keep up. Once a video proliferates, it's nearly impossible to remove it from the internet entirely. Current content moderation entails chasing after prohibited content, rather than preventing its upload in the first place. And many content moderation workers are underpaid contractors working under abysmal conditions, as the scholar Sarah Roberts has documented. When moderation workers have to look at dozens of horrific photographs and videos each minute, they will inevitably make mistakes, and some banned content will slip through. Facebook and other big social media platforms are working on artificial intelligence and machine learning methods to detect and prevent violent content from ever being posted, but those techniques are still fairly rudimentary.
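To see roughly what upload-time screening looks like, here is a minimal sketch in Python. The classifier, thresholds, and review queue are hypothetical stand-ins, not any platform's actual system; real classifiers work on video, audio, and text signals at far greater scale.

```python
# A minimal sketch of upload-time screening. `violence_score` is assumed to be
# a pretrained classifier that returns a probability for a single frame; the
# thresholds and the review queue are invented for illustration.
REVIEW_THRESHOLD = 0.6
BLOCK_THRESHOLD = 0.95

def screen_upload(video_frames, violence_score, human_review_queue):
    """Score content before it is published, instead of chasing it afterward."""
    score = max(violence_score(frame) for frame in video_frames)
    if score >= BLOCK_THRESHOLD:
        return "blocked"            # high-confidence match: never published
    if score >= REVIEW_THRESHOLD:
        human_review_queue.append(video_frames)
        return "held_for_review"    # ambiguous cases still go to human moderators
    return "published"
```

The middle band is where "rudimentary" bites: anything the model can't confidently classify still lands in front of a human moderator.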

One response is the call for more collaboration between platforms and government actors. In May 2019, a coalition of governments and tech companies (including Amazon, Facebook, Google, Microsoft, and Twitter), joined by civil society organizations, adopted the Christchurch Call, committing to accelerate research into and development of technical solutions to prevent the online dissemination of terrorist and violent extremist content. These collaborative arrangements bring together a mix of enhanced content moderation efforts, automated technologies, and law enforcement input.

The tech industry's hope is that by developing new techniques and technologies to comply with government pressures, the sector can stave off growing calls for harder regulation. The tech industry is touting its private development of automation and artificial intelligence as a promising answer to the proliferation of online violence. But private sector investment is driven in no small part by government pressure. As platforms develop more robust ways of identifying and ferreting out violent content, they also whet the appetites of police and intelligence agencies that might benefit from those capabilities.

Accordingly, the tech sector is doubling down on investments in automated technology to address the challenges of moderation at scale. A separate tech industry consortium, the Global Internet Forum to Counter Terrorism, has created a shared database of hashes, or digital fingerprints, to identify violent terrorist videos and keep them from being shared across social media platforms including Facebook, Twitter, and YouTube, as well as other online services.
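As a rough illustration of how hash sharing works, here is a minimal Python sketch. The shared database and its entries are made up, and an exact cryptographic hash is used only to keep the example simple; consortium databases generally rely on perceptual hashes designed to still match after a video has been re-encoded, cropped, or watermarked.

```python
import hashlib

# Hypothetical shared database of fingerprints for known violating videos.
# Real consortium databases are not public; these digests are placeholders.
SHARED_HASH_DB = {
    "3b7d0f2c0000000000000000000000000000000000000000000000000000dead",
}

def fingerprint(path: str) -> str:
    """Compute a SHA-256 digest of an uploaded file's bytes."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def should_block(path: str) -> bool:
    """Block the upload if its fingerprint is already in the shared database."""
    return fingerprint(path) in SHARED_HASH_DB
```

Once a single platform fingerprints a video, every participating service can refuse the same file on upload without ever looking at its contents again.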

New automated moderation techniques will create ripe new sources of information for both private and public sector surveillance of users. The more platforms automate their content moderation techniques, the more data they can easily and quickly aggregate about users who attempt to post prohibited content, mapping their relationships, associations, and networks. For example, New Zealand police could ask Facebook to provide a list of all the users who attempted to repost the Christchurch video, or a list of all the users who watched one of the streams. Platforms could be asked to share this information with law enforcement in response to subpoenas, warrants, or other less formal demands.
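To make that concrete, here is a hypothetical sketch of the kind of audit log an automated filter produces as a side effect. The table and function names are invented for illustration; the point is that once blocked uploads are logged per user, answering a demand like the one above becomes a single database query.

```python
import sqlite3
import time

# Hypothetical audit log illustrating how an automated filter accumulates
# data about users simply by recording whose uploads it blocked.
conn = sqlite3.connect("moderation_log.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS blocked_uploads (user_id TEXT, content_hash TEXT, ts REAL)"
)

def record_blocked_upload(user_id: str, content_hash: str) -> None:
    """Log every blocked upload attempt alongside the user who made it."""
    conn.execute(
        "INSERT INTO blocked_uploads VALUES (?, ?, ?)",
        (user_id, content_hash, time.time()),
    )
    conn.commit()

def users_who_attempted(content_hash: str) -> list:
    """The kind of question a legal demand might pose: who tried to repost
    a specific flagged video?"""
    rows = conn.execute(
        "SELECT DISTINCT user_id FROM blocked_uploads WHERE content_hash = ?",
        (content_hash,),
    )
    return [row[0] for row in rows]
```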

Consider, for example, social media's response to the changing relationship between the United States and Iran. Soon after the United States designated Iran's Islamic Revolutionary Guard Corps a foreign terrorist organization, Facebook and its subsidiary Instagram began deleting the pages and profiles of IRGC officials and associates. In the wake of the United States' killing of IRGC commander Gen. Qassem Soleimani in a January 2020 strike, Facebook and Instagram also began deleting posts that expressed support for Soleimani.

From a platform perspective, deleting these posts and pages is the quickest and easiest way to ensure that companies are not punished for hosting unlawful content. And if the chief policy goal is to stop dangerous ideas from spreading, then Facebook and Instagram's approach offers a highly effective silencing technique.

But from a law enforcement perspective, deleting these posts and pages might also deprive authorities of useful sources of intelligence. As Instagram and Facebook build out their capacity to automatically identify support for terrorist organizations, law enforcement might want to use these pages as honey pots, ensuring access to key information about those who engage with this content. This information could be used to map networks of terrorist sympathizers or help shed light on the diffusion of dangerous propaganda. Or it might simply help law enforcement identify and monitor those who have viewed dangerous content.

Information from social media platforms has long been a critical asset for law enforcement and intelligence agencies. Using search warrants and subpoenas, law enforcement agencies frequently get access to user data in the course of investigations. These demands are limited by privacy laws and the Fourth Amendment, which protects individuals against unreasonable government searches and seizures. As platforms redesign their content moderation rules and systems, law enforcement's influence is equally widespread, but less constrained by formal rules.

In some cases, police needs and automated content moderation systems are converging. London police, for instance, have partnered with Facebook to provide images from body-worn cameras in order to train Facebook's artificial intelligence system to detect first-person footage of shootings. If we want Facebook to be able to identify and filter out unlawful violent content, as the consensus seems to hold, then this kind of cooperation is critical.

But law enforcement also influences the design and implementation of content moderation systems in more subtle and alarming ways. In 2016, Israeli authorities attributed a wave of violence in part to Palestinian incitement on Facebook and sharply criticized the platform for sabotaging law enforcement's efforts to take down posts from Palestinian users. The Knesset considered a law that would have required Facebook to take down inflammatory content. According to the Arab Center for the Advancement of Social Media, Facebook responded to the rising pressures by cracking down on Palestinian posts and pages. While the law ultimately was shelved at the eleventh hour, Facebook has reportedly continued to work closely with Israeli law enforcement to identify violations of its community standards. Or some of them. Last year, the Jerusalem Post reported that incitement by Israeli posters against Palestinians has remained a major problem on the platform, one that neither Facebook nor Israeli law enforcement appears eager to address.

Even as the relationships between policing and platforms grow more embedded, they have remained pretty opaque to the public. Surveillance technologies are rarely subject to the same public oversight and control mechanisms as other government contracts, as Catherine Crump has shown. And when these relationships are unofficial, informal pressures on platforms tend to take place through backdoor channels that are less amenable to public scrutiny.

Though we often think of content moderation and surveillance as two entirely separate issues, the extent of law enforcement pressure on private content moderation shows how entwined they are. Together, platforms and law enforcement are capable of identifying individuals, both online and off, for monitoring and surveillance to an unparalleled degree. And the push for automated content moderation adds to these capabilities, expanding the wealth of data about users, their relationships, their interests, and their engagement with online content, and creating new sources of data that are highly relevant to law enforcement investigations. Platforms' abilities to identify, track, and control our online behaviors might be unsettling, but they are a gold mine for law enforcement. The good news is that the tech sector might do more to limit the spread of horrific content on social media. But it would be wise to remember that its users' interests might be different from law enforcement's preferences.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.
