China Is Trying to Scrub Bikinis and Smoking From the Internet

A new story reveals how Chinese live-streaming company Inke uses a combination of human moderators and AI to facilitate government censorship.

Cleaning Cyberspace

On Monday, the South China Morning Post published a story about the content moderation operations at Inke, one of China’s largest live-streaming companies.

The piece offers a rare glimpse at how China’s private sector helps facilitate government censorship. In some cases, that means flagging streams of people smoking or wearing bikinis — content that would likely seem fairly innocuous to an American audience — but in others, it means preventing internet viewers from seeing streams of people committing acts of terrorism or violence.

That’s the same kind of content multinational corporations such as Facebook have had trouble moderating — raising questions about what these Chinese companies have figured out that American ones haven’t.

Evolving Censorship

Inke tasks a team of 1,200 moderators with policing the streams of its 25 million users, according to SCMP.

The moderators watch streams 10 to 15 seconds before they actually go live, and in that window they’re expected to catch anything “that is against the law and regulations, against mainstream values, and against the company’s values,” Zhi Heng, Inke’s content safety team leader, told the SCMP.
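
SCMP doesn’t describe Inke’s pipeline in technical detail, but the delay-window idea is simple to sketch: each segment of a stream sits in a short buffer where a reviewer, human or automated, can block it before it ever reaches viewers. The Python below is a minimal, hypothetical illustration of that pattern; the class, method names, and 12-second delay are assumptions, not Inke’s actual system.

```python
import time
from collections import deque

REVIEW_DELAY_SECONDS = 12  # assumption: somewhere inside the 10-15 second window SCMP describes


class DelayedStream:
    """Hold incoming stream segments in a short buffer so they can be reviewed pre-broadcast."""

    def __init__(self, delay: float = REVIEW_DELAY_SECONDS):
        self.delay = delay
        self.buffer = deque()  # each entry: [arrival_time, segment, blocked]

    def ingest(self, segment: str) -> None:
        """A segment arrives from the broadcaster and enters the review window."""
        self.buffer.append([time.monotonic(), segment, False])

    def flag(self, segment: str) -> None:
        """A moderator blocks a segment that is still inside the window."""
        for entry in self.buffer:
            if entry[1] == segment:
                entry[2] = True

    def release(self) -> list:
        """Return segments whose delay has elapsed and that were not blocked."""
        now = time.monotonic()
        cleared = []
        while self.buffer and now - self.buffer[0][0] >= self.delay:
            _, segment, blocked = self.buffer.popleft()
            if not blocked:
                cleared.append(segment)
        return cleared
```

In a real system the buffer would hold video frames rather than strings, and a flagged segment might end the broadcast entirely rather than simply being dropped, but the scheme is the same: viewers only ever see what has survived the delay.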

Inke defers to guidelines published by the China Association of Performing Arts to know what content falls under that umbrella, and according to the SCMP story, it ranges from politically sensitive speech and violence to people smoking or wearing bikinis.

Those guidelines are updated weekly, however, meaning content that’s acceptable one week could be censored the next, or vice versa.

To make this massive censorship task a little more manageable for its human moderators, Inke also employs algorithms and recognition software that filter content into different risk categories.

The company sometimes dedicates just one human reviewer to watching streams considered “low-risk,” such as cooking shows, according to SCMP, while higher-risk streams receive closer scrutiny.
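
SCMP doesn’t say how Inke’s software actually scores streams, but the triage pattern it describes (machine classification first, with human attention allocated by risk) can be sketched in a few lines. Everything below, from the thresholds and tier names to the reviewer ratios, is a hypothetical illustration rather than Inke’s real configuration.

```python
import math

# Hypothetical risk tiers and staffing ratios, for illustration only.
RISK_THRESHOLDS = [(0.8, "high"), (0.4, "medium"), (0.0, "low")]
STREAMS_PER_REVIEWER = {"high": 1, "medium": 4, "low": 12}


def categorize(risk_score: float) -> str:
    """Map a classifier's 0-1 risk score to a review tier."""
    for cutoff, tier in RISK_THRESHOLDS:
        if risk_score >= cutoff:
            return tier
    return "low"


def reviewers_needed(stream_scores) -> dict:
    """Estimate how many human reviewers a batch of streams requires, tier by tier."""
    counts = {"high": 0, "medium": 0, "low": 0}
    for score in stream_scores:
        counts[categorize(score)] += 1
    return {tier: math.ceil(n / STREAMS_PER_REVIEWER[tier]) for tier, n in counts.items()}


# A cooking show might score low; a politically sensitive stream might score high.
print(reviewers_needed([0.92, 0.61, 0.35, 0.05]))  # {'high': 1, 'medium': 1, 'low': 1}
```

The specific numbers don’t matter; the point is that machines make the first pass at scale, so a single human can cover many low-risk streams while the high-risk ones get dedicated eyes.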

Learning Opportunity

The idea of censoring streams of people smoking cigarettes or wearing bikinis might seem ridiculous to a Western audience.

However, if Inke’s combination of human and AI moderators is effective at flagging the content deemed objectionable in China, it’s worth considering what it’s doing that others, such as Facebook, aren’t. Are Inke’s algorithms better in some discernible way? Has it stumbled upon the optimum human moderator-to-user ratio (by SCMP’s figures, roughly one moderator for every 21,000 users)?

You might not agree with the content China is censoring, but content moderation isn’t objectionable by default. Even Facebook’s own execs believe the company should have prevented the horrific livestream of the Christchurch shooting from reaching its audience, for example.

So perhaps there’s something Facebook and others could learn from how Inke is managing the job of filtering out undesirable online content, even if we don’t agree with China’s definition of undesirable.

READ MORE: No smoking, no tattoos, no bikinis: inside China’s war to ‘clean up’ the internet [South China Morning Post]

More on censorship: China Is Censoring “Genetically Edited Babies” on Social Media
