Deepfakes: Some progress in video detection, but it’s back to the basics for faked audio

A pair of developments is being reported in efforts to thwart deepfake video and audio scams. Unfortunately, in the case of digitally mimicked voice attacks, the advice is old school.

An open-access paper published by SPIE, an international professional association of optics and photonics, reports on a new algorithm that has reportedly scored a precision rate of 99.62 percent in detecting deepfake video. It was reportedly accurate 98.21 percent of the time.

It has been three years since the threat of deepfakes broke big in the global media, and during that time, deepfake efforts have quickly grown more sophisticated.

Fear about misuse (beyond simply grafting the faces of celebrities onto those of porn actors) has sometimes been breathless, with some observers warning that a key military figure in a nuclear-armed nation could appear to issue emergency orders to launch missiles.

The paper’s authors, two from the Thapar Institute of Engineering and Technology and the third from the Indraprastha Institute of Information Technology (both in India), claim a research milestone.

They say that they are the first to make publicly available a database of deepfakes, manipulated with generative adversarial networks, that feature famous politicians. The database comprises 100 source and 100 destination videos.

What is more, they claim to be the first with an algorithm that can spot deepfakes of politicians within two seconds of the start of a clip. The team said it used temporal sequential frames taken from the clips to pull off the feat.
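The paper’s code is not reproduced here, but the general recipe that description suggests can be sketched in a few lines. This is a minimal illustration, not the authors’ implementation: it assumes OpenCV for frame capture, and the `SequenceClassifier` model and file name are hypothetical placeholders.

```python
# Sketch of the general idea: grab the first two seconds of frames from a
# clip and hand the temporal sequence to a classifier. NOT the authors'
# published method; the classifier and file path below are hypothetical.
import cv2
import numpy as np

def first_two_seconds(path: str) -> np.ndarray:
    """Collect the frames covering roughly the first two seconds of a clip."""
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0  # fall back if FPS metadata is missing
    frames = []
    for _ in range(int(fps * 2)):  # two seconds' worth of frames
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(cv2.resize(frame, (224, 224)))
    cap.release()
    return np.stack(frames)  # shape: (n_frames, 224, 224, 3)

# Hypothetical usage; SequenceClassifier stands in for any temporal model,
# e.g. a CNN feature extractor feeding a recurrent layer:
# clip = first_two_seconds("politician_clip.mp4")
# is_fake = SequenceClassifier().predict(clip)
```

Restricting inference to the opening two seconds of frames is what would let a detector like this flag a clip almost as soon as playback begins.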

Biometrics providers ID R&D and NtechLab finished among the leaders in a recent video Deepfake Detection Challenge.

Voice fraud detection efforts continue apace, too.

Until the pandemic, when people of all walks of life began routinely participating in video calls, deepfake audio attacks looked more menacing over the medium term.

Comparing the two threats, it simply seemed more likely that a convincing faked call could rattle a key mid-level staff member into helping the boss out in an emergency. The odds have since evened a bit.

A white paper published by cybersecurity firm Nisos sketches five incidents involving deepfake audio attacks.

Nisos writes in the marketing document that it actually investigated one such attack, including the original synthetic audio. It was the faked voice of a company’s CEO asking an employee to call back to finalize an urgent business deal.

Wisely, the employee immediately called the legal department. The number the would-be victim was meant to call turned out to be a burner on a VoIP service.

Nisos engineers studied the recording with Spectrum3d, a spectrogram tool. That analysis, along with simply listening to the message and comparing it to a known human voice, yielded some data but, apparently, no smoking gun.
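Spectrum3d is an interactive tool, but the same kind of side-by-side inspection can be approximated in a few lines of Python. This is a sketch only, assuming SciPy and Matplotlib; the WAV file names are hypothetical stand-ins for the suspect recording and a reference sample.

```python
# Sketch of a side-by-side spectrogram comparison between a suspect
# recording and a known human voice sample. File names are hypothetical.
import matplotlib.pyplot as plt
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

fig, axes = plt.subplots(2, 1, sharex=True)
for ax, path, title in [
    (axes[0], "suspect_voicemail.wav", "Suspected synthetic voice"),
    (axes[1], "known_ceo_sample.wav", "Known human voice"),
]:
    rate, samples = wavfile.read(path)
    if samples.ndim > 1:
        samples = samples.mean(axis=1)  # mix stereo down to mono
    f, t, Sxx = spectrogram(samples, fs=rate)
    ax.pcolormesh(t, f, 10 * np.log10(Sxx + 1e-12), shading="gouraud")  # power in dB
    ax.set_title(title)
    ax.set_ylabel("Frequency (Hz)")
axes[1].set_xlabel("Time (s)")
plt.tight_layout()
plt.show()
```

Synthetic speech can sometimes show up as missing high-frequency energy or unnaturally regular harmonics, but, as the Nisos case illustrates, a spectrogram alone may not deliver a smoking gun.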

Ultimately, the best advice that Nisos or anyone else in the industry can offer is to stress common sense: if something about a call smells fishy, call legal.

