We all need to be deepfake detectors, but especially social media platforms

A quality deepfake video clip, released at the right time, could almost certainly swing an election result.

The technology needed to create these deceptive video clips, which depict people saying or doing things they never said or did, has matured just as primary voters and caucus-goers register the first results of the 2020 presidential election in the coming days and weeks.

It's obvious something should be done to safeguard our election process from such powerful disinformation, especially at a time when our social media communities are awash in intentionally false and misleading political information.

The best solution to deepfakes, however, is a little less obvious. It's downright counterintuitive. We need tech firms such as Facebook, Google, and Twitter to protect us. Yes, I am suggesting the wolves guard the henhouse. Here's why.

Deepfakes represent a historically unique threat to democratic discourse. A video clip is different from a text-based message. It puts the audience in the moment. Viewers see with their eyes and hear with their ears. It's the most visceral form of information.

When tech-savvy bad actors use machine-learning tools to create false video footage of political candidates doing or saying something that would dissuade or galvanize voters, the fakery is difficult for us to notice.

Just the same, First Amendment protections limit the tools lawmakers can use to protect democracy from such manipulation and false information. Laws that restrict content, particularly when it relates to political speech, risk violating foundational free speech safeguards. Even intentionally misleading information, the Supreme Court has ruled, is often protected by the First Amendment. So any legislative solution would face an uphill climb.

Despite these concerns, Texas is one of two states with a deepfake law. It criminalizes deepfakes used to influence an election and allows the state to penalize both the deepfake's creator and the forum that hosts it, such as YouTube.

California's law has similar aims but is far more expansive. It criminalizes deepfakes used to harm the political process but provides numerous exemptions, including for parody and satire, and it exempts news media. While California lawmakers' careful efforts to clarify the law's scope are helpful, the exemptions also create gray areas in which the accused can claim their work falls within them.

Neither state's law has been tested in court. Recent Supreme Court decisions, however, make their survival unlikely.

Despite lawmakers' laudable efforts, these are 20th-century solutions to a 21st-century problem. It is often impossible to know who created a deepfake. And even if we have that information, what if the creator is outside the state's jurisdiction, or outside the U.S. for that matter?

Texas's law also conflicts with a federal law, Section 230 of the Communications Decency Act, which protects online forum providers from liability for how people use their services.

This is why, at least in the short term, we need a little help from the tech firms that have created the forums in which false information flourishes.

Social media firms face neither geographic barriers nor First Amendment constraints. At any point, they could implement systems to block, delete, or label deepfakes or other suspected doctored videos, if they want to. That's the catch.

We've had some encouraging moves lately. Twitter surveyed its users in November, asking a series of questions about how the service should mitigate deepfakes.

Facebook announced its manipulated media policy in early January. The firm stated that deepfakes that "would likely mislead someone into thinking that a subject of the video said words that they did not say" will be removed, unless they are published as satire or parody.
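To make that rule concrete, here is a minimal sketch of the decision logic the policy describes, written in Python. The VideoReview fields and moderation_action function are hypothetical illustrations, not Facebook's actual system; a real pipeline would combine automated detection signals with human review.

```python
from dataclasses import dataclass

@dataclass
class VideoReview:
    """Hypothetical review signals for a flagged clip."""
    is_synthetically_manipulated: bool  # e.g., flagged by a detection model
    misattributes_speech: bool          # subject appears to say words they never said
    is_satire_or_parody: bool           # published as satire or parody

def moderation_action(review: VideoReview) -> str:
    """Return the action implied by the policy as described above."""
    if (review.is_synthetically_manipulated
            and review.misattributes_speech
            and not review.is_satire_or_parody):
        return "remove"
    # Clips outside the removal criteria might still be labeled or down-ranked,
    # but the policy as described only mandates removal for the case above.
    return "no_action"

# Example: a fabricated clip putting words in a candidate's mouth
print(moderation_action(VideoReview(True, True, False)))  # -> "remove"
```

Even in this toy form, the hard part is visible: every input to the decision is itself a judgment call that some detector or reviewer has to make.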

It's unclear how, and how well, Facebook will implement this policy. Consistent enforcement of its policies, on Instagram and Facebook alike, has been problematic. That's the drawback when the wolves guard the henhouse: the company is going to fend for itself before it worries about democracy. We are left relying on a corporation's altruism to protect the flow of information.

This is not ideal, but its the least bad option.

The system could be more effective if tech firms created an industry-wide independent advisory board that encouraged policy change and enforcement. This could be somewhat modeled after the Motion Picture Association, which has operated as a form of industry self-regulation for nearly a century.

Facebook has taken steps in this direction, particularly with its announcement this week about forming a board that will advise the firm regarding content moderation decisions. This, however, deals only with one company rather than the industry. The conflict between corporate interests and protecting democracy remains too direct.

We also have a part to play. We must let the tech firms know what we expect of them by tagging them in posts about our concerns and using their support features. We have to be on the lookout for video clips that don't seem right. When we see them, we have to verify the information with a trusted source. If a clip turns out to be a deepfake, we must report it.

Big tech firms have shown encouraging signs in their recent efforts to monitor and block deepfakes, but as the election approaches, we need more from these services.

Jared Schroeder is an assistant professor of journalism at SMU Dallas, where he specializes in First Amendment law. He is the author of the 2018 book The Press Clause and Digital Technology's Fourth Wave: Media Law and the Symbiotic Web.
