Facebook battles bad content ‘cloaking’ with AI – Marketing Dive

Dive Brief:

Protecting News Feed users from inadvertently accessing unwanted content is a noble goal, but Facebook also has a vested interest in ensuring content meets its stated policies. Any offensive content that slips through the review process because of cloaking could lead to a scenario similar to the one Google faced earlier this year, when a number of advertisers boycotted YouTube over fears that their ads would appear next to content filled with hate speech or supporting terrorism.

It is unclear how big an impact the boycott had on YouTube's monetization: Google reported strong Q2 results for the platform even as other reports suggested big advertisers like P&G and Unilever were accelerating their ad spend reductions across the digital landscape. What is clear is that brands are taking the safety issue seriously, prompting a flurry of announcements from Google, Facebook and others detailing how they are trying to close the gaps through which offensive content may be slipping.

Providing a good user experience has long been a stated goal of Facebook, particularly around its advertising, and because cloaking can lead users to pages filled with unwanted ads, scams and offensive material, it is an issue the platform needs to address. Given the attention advertisers and the media are paying to brand safety, Facebook is wise to get ahead of its cloaking problem, and to make it known that it is doing so.

