Here’s how to stop facial recognition systems in their tracks – RTE.ie

Posted: November 13, 2021 at 11:19 am

Analysis: researchers have developed adversarial image generators to fool facial recognition software and protect privacy rights

When you post an image of yourself on a social media platform, what audience do you have in mind? Friends? Family? Facial recognition software? Probably not the latter. But images that we post online are routinely used as reference examples for facial recognition systems. These may be developed by social media companies, security companies, intelligence agencies or any private individual with enough computing power to create a Deep Neural Network system.

Clearly lots of people don't care, as we post images of ourselves with abandon. However, there is a privacy concern here that is coming back to bite us, especially our children, who generally post the most and tend to have the least information about what the imagery is used for. Once you can be recognised by AI, your image can be cross-referenced with CCTV footage or pictures taken of you in public, giving the holder of the captured image access to your online identity. Public anonymity is breached.

Such abuse of images violates people's privacy rights, but it is very difficult to counter. The laws do not yet exist to protect people's imagery, and minors currently have no right to be forgotten online.


From RTÉ Radio 1's Today show, CNN's Donie O'Sullivan discusses Clearview AI's facial recognition technology

Researchers at the Insight SFI Research Centre for Data Analytics at DCU have been working on a technique that gives power back to the individual to hide their own face, and the faces of their loved ones, from any facial recognition software that is scraping images from social media platforms or anywhere else online. It's an ingenious technology that uses an aspect of the recognition software against itself.

Adversarial example images are used by developers to disrupt facial recognition technology in order to test the process. Andrew Merrigan, working with Prof Alan Smeaton, is looking at ways to make adversarial image generators that can be applied by ordinary social media users to make their images unrecognisable to AI, while looking exactly the same to the human eye. The researchers envision a simple app through which users could run images before posting them online. To their friends and family, the image looks exactly the same. To AI, it does not provide a match to any other photo and is of no use as a data point in facial recognition models.

The use of facial recognition systems, coupled with the unauthorised use of images taken from social media sources, is a threat to individuals' privacy. Such systems are in increasingly widespread use throughout the world. The development of open-source facial recognition models based on Deep Neural Networks makes it possible for anyone with access to moderate computing power to create this software. More troubling, perhaps, is the rise of companies using images from social media with or without users' consent.


From RTÉ Six One News in 2015, a new facial recognition software system is launched by the Gardaí

Deep Neural Networks are known to have superior performance compared to more conventional approaches when applied to a variety of computer vision tasks, including facial recognition. They are, however, susceptible to adversarial examples, where images are crafted so that the machine's interpretation differs greatly from that of a person looking at the same image. The original image is altered by adding patterns of noise at the level of individual pixels, noise which is imperceptible to the human eye but cannot be ignored by face recognition algorithms.
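To make the idea concrete, here is a minimal sketch in Python of one well-known way to craft such noise, the fast gradient sign method. This is an illustrative technique, not the DCU team's actual generator; the face_encoder network, the reference embedding and the epsilon pixel budget are assumptions made for the example.

```python
# Minimal sketch: nudge a face image so its embedding no longer matches a reference,
# while keeping every pixel within a small, imperceptible budget (epsilon).
# `face_encoder` is a hypothetical pretrained network mapping images to embeddings.

import torch

def make_adversarial(image: torch.Tensor,
                     face_encoder: torch.nn.Module,
                     reference_embedding: torch.Tensor,
                     epsilon: float = 2 / 255) -> torch.Tensor:
    """Perturb `image` so its embedding moves away from `reference_embedding`."""
    image = image.clone().detach().requires_grad_(True)

    # Similarity between this image's embedding and the known reference embedding.
    embedding = face_encoder(image.unsqueeze(0)).squeeze(0)
    similarity = torch.nn.functional.cosine_similarity(
        embedding, reference_embedding, dim=0)

    # Gradient of that similarity with respect to the input pixels.
    similarity.backward()

    # Step each pixel slightly in the direction that reduces the similarity.
    adversarial = image - epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

With a budget as small as a couple of intensity levels per pixel, the altered image is visually indistinguishable from the original, yet its embedding no longer lines up with the person's other photos.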

This is another example of how images and videos contain information encoded at the pixel level, which can be hidden in plain sight. An example of this hidden information is the slight change in skin colour caused by the pulses of our heartbeat. As blood rushes to our skin, it changes colour and gets redder with every heartbeat. This can be identified in videos, but is not visible to the human eye. Last year, the team used this hidden pulse information as a way to detect deepfake videos.
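The pulse idea can be illustrated with a short sketch: average the green channel of a face region across video frames and look for the dominant frequency in the plausible heart-rate band. The frame layout and sampling rate below are assumptions for illustration, not the team's published method.

```python
# Simplified sketch of pulse detection from video (remote photoplethysmography):
# average skin colour in a face region fluctuates slightly with each heartbeat.

import numpy as np

def estimate_heart_rate(frames: np.ndarray, fps: float = 30.0) -> float:
    """Estimate beats per minute from face-region frames shaped
    (num_frames, height, width, 3) with RGB channels."""
    # Mean green-channel intensity per frame; green carries the strongest pulse signal.
    green = frames[:, :, :, 1].mean(axis=(1, 2))
    green = green - green.mean()  # remove the constant skin-tone baseline

    # Dominant frequency in the plausible heart-rate band (0.7-4 Hz).
    spectrum = np.abs(np.fft.rfft(green))
    freqs = np.fft.rfftfreq(len(green), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return peak_freq * 60.0  # convert Hz to beats per minute
```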

A user uploading largely adversarial images of themselves would poison any pool of reference example images of their face

The motivation behind the work is to create a small, effective model which can be deployed on user devices, enabling users to convert facial images into adversarial examples before sharing them online. The model needs to be efficient enough to run quickly on small user devices like mobile phones. It must be able to modify images in a minimal way so that the changes are largely unnoticeable to human observers.

Using a model such as this, a user uploading largely adversarial images of themselves would poison any pool of reference example images of their face. This would mean that images taken from a source not controlled by the user, such as a surveillance camera, would not have a strong match to the user, because the examples used as reference would not match the user's true identity. This could be an important step in helping individuals maintain privacy against those who would use their images for facial recognition without their consent.
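A rough sketch of why the poisoning works: recognition systems typically compare a probe embedding (say, from a CCTV still) against stored reference embeddings and declare a match above a similarity threshold. The function names and threshold below are illustrative assumptions, not drawn from any specific system.

```python
# Sketch of a simple embedding-based matcher and why poisoned references break it.

import numpy as np

def is_match(probe: np.ndarray, references: list[np.ndarray],
             threshold: float = 0.6) -> bool:
    """Return True if the probe embedding is close enough to any stored reference."""
    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(cosine(probe, ref) for ref in references) >= threshold

# If the stored references were built from adversarial images, their embeddings no
# longer cluster around the person's true face, so a genuine probe of that person
# falls below the threshold and the system fails to identify them.
```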

The views expressed here are those of the author and do not represent or reflect the views of RTÉ
