Cloak your photos with this AI privacy tool to fool facial recognition – The Verge

Ubiquitous facial recognition is a serious threat to privacy. The idea that the photos we share are being collected by companies to train algorithms that are sold commercially is worrying. Anyone can buy these tools, snap a photo of a stranger, and find out who they are in seconds. But researchers have come up with a clever way to help combat this problem.

The solution is a tool named Fawkes, created by scientists at the University of Chicago's Sand Lab. Named after the Guy Fawkes masks donned by revolutionaries in the V for Vendetta comic book and film, Fawkes uses artificial intelligence to subtly and almost imperceptibly alter your photos in order to trick facial recognition systems.

The way the software works is a little complex. Running your photos through Fawkes doesn't make you invisible to facial recognition exactly. Instead, the software makes subtle changes to your photos so that any algorithm scanning those images in future sees you as a different person altogether. Essentially, running Fawkes on your photos is like adding an invisible mask to your selfies.
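
Under the hood, this kind of cloaking resembles what machine learning researchers call a targeted adversarial perturbation: the pixels are nudged, within a tight budget that keeps the change invisible, until a feature extractor places the photo near a different identity. Fawkes' actual optimizer works against deep neural network feature extractors; the toy Python sketch below, in which the extractor, images, and parameters are all invented stand-ins, only illustrates the shape of that loop.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical stand-in for a face feature extractor: a fixed random
    # linear map from pixels to a 128-d embedding. Fawkes attacks real
    # deep-network extractors; this toy just makes the loop runnable.
    W = rng.normal(size=(128, 64 * 64))

    def embed(img):
        return W @ img.ravel()

    def cloak(img, target_img, budget=0.03, steps=200, lr=0.01):
        """Projected gradient descent: pull img's embedding toward
        target_img's embedding while keeping every pixel within
        `budget` of the original (the imperceptibility constraint)."""
        delta = np.zeros_like(img)
        target = embed(target_img)
        for _ in range(steps):
            residual = embed(img + delta) - target        # embedding-space error
            grad = (W.T @ residual).reshape(img.shape)    # grad of 0.5*||residual||^2
            delta -= lr * grad / (np.linalg.norm(grad) + 1e-12)
            delta = np.clip(delta, -budget, budget)       # stay imperceptible
        return np.clip(img + delta, 0.0, 1.0)

    me = rng.random((64, 64))            # toy stand-ins for real photos
    someone_else = rng.random((64, 64))
    cloaked = cloak(me, someone_else)

    print("max pixel change:", np.abs(cloaked - me).max())
    print("distance to target identity, before:", np.linalg.norm(embed(me) - embed(someone_else)))
    print("distance to target identity, after: ", np.linalg.norm(embed(cloaked) - embed(someone_else)))

The pixel change stays tiny while the embedding moves measurably toward the other identity, which is the whole trick.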

Scientists call this process cloaking, and it's intended to corrupt the resource facial recognition systems need to function: databases of faces scraped from social media. Facial recognition firm Clearview AI, for example, claims to have collected some three billion images of faces from sites like Facebook, YouTube, and Venmo, which it uses to identify strangers. But if the photos you share online have been run through Fawkes, say the researchers, then the face the algorithms know won't actually be your own.
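
Continuing the toy sketch above, you can see why a scraped database suffers: the entry a scraper stores under your name is built from cloaked photos, so a fresh, uncloaked photo of you no longer matches it well. (Fawkes actually poisons the model-training step rather than a stored template, but the effect on matching is similar in spirit; the names and noise level here are invented for illustration.)

    # A scraper enrolls whatever photos of you it finds online. If those
    # were cloaked, the stored "you" embedding has drifted off target.
    enrolled_clean = embed(me)          # what it WOULD store, uncloaked
    enrolled_cloaked = embed(cloaked)   # what it actually stores

    # A fresh, uncloaked photo of you (small noise simulates a new shot).
    fresh = np.clip(me + 0.01 * rng.standard_normal((64, 64)), 0.0, 1.0)

    e = embed(fresh)
    print("match distance vs clean entry:  ", np.linalg.norm(e - enrolled_clean))
    print("match distance vs cloaked entry:", np.linalg.norm(e - enrolled_cloaked))  # noticeably worse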

According to the team from the University of Chicago, Fawkes is 100 percent successful against state-of-the-art facial recognition services from Microsoft (Azure Face), Amazon (Rekognition), and Face++ by Chinese tech giant Megvii.

"What we are doing is using the cloaked photo in essence like a Trojan Horse, to corrupt unauthorized models to learn the wrong thing about what makes you look like you and not someone else," Ben Zhao, a professor of computer science at the University of Chicago who helped create the Fawkes software, told The Verge. "Once the corruption happens, you are continuously protected no matter where you go or are seen."

The group behind the work (Shawn Shan, Emily Wenger, Jiayun Zhang, Huiying Li, Haitao Zheng, and Ben Y. Zhao) published a paper on the algorithm earlier this year. But late last month they also released Fawkes as free software for Windows and Macs that anyone can download and use. To date they say it's been downloaded more than 100,000 times.

In our own tests, we found that Fawkes is sparse in its design but easy enough to apply. It takes a couple of minutes to process each image, and the changes it makes are mostly imperceptible. Earlier this week, The New York Times published a story on Fawkes in which it noted that the cloaking effect was quite obvious, often making gendered changes to images like giving women mustaches. But the Fawkes team says the updated algorithm is much more subtle, and The Verge's own tests agree with this.

But is Fawkes a silver bullet for privacy? It's doubtful. For a start, there's the problem of adoption. If you read this article and decide to use Fawkes to cloak any photos you upload to social media in future, you'll certainly be in the minority. Facial recognition is worrying because it's a society-wide trend, and so the solution needs to be society-wide, too. If only the tech-savvy shield their selfies, it just creates inequality and discrimination.

Secondly, many firms that sell facial recognition algorithms created their databases of faces a long time ago, and you can't retroactively take that information back. The CEO of Clearview, Hoan Ton-That, told the Times as much. "There are billions of unmodified photos on the internet, all on different domain names," said Ton-That. "In practice, it's almost certainly too late to perfect a technology like Fawkes and deploy it at scale."

Naturally, though, the team behind Fawkes disagrees with this assessment. They note that although companies like Clearview claim to have billions of photos, that doesn't mean much when you consider they're supposed to identify hundreds of millions of users. "Chances are, for many people, Clearview only has a very small number of publicly accessible photos," says Zhao. And if people release more cloaked photos in the future, he says, sooner or later the number of cloaked images will outnumber the uncloaked ones.

On the adoption front, however, the Fawkes team admits that for their software to make a real difference, it has to be released more widely. They have no plans to make a web or mobile app due to security concerns, but they are hopeful that companies like Facebook might integrate similar tech into their own platforms in future.

Integrating this tech would be in these companies' interest, says Zhao. After all, firms like Facebook don't want people to stop sharing photos, and these companies would still be able to collect the data they need from images (for features like photo tagging) before cloaking them on the public web. And while integrating this tech now might only have a small effect for current users, it could help convince future, privacy-conscious generations to sign up to these platforms.

"Adoption by larger platforms, e.g. Facebook or others, could in time have a crippling effect on Clearview by basically making [their technology] so ineffective that it will no longer be useful or financially viable as a service," says Zhao. "Clearview.ai going out of business because it's no longer relevant or accurate is something that we would be satisfied [with] as an outcome of our work."
