A deepfake detector designed to identify unique facial expressions and hand gestures could spot manipulated videos of world leaders such as Volodymyr Zelenskyy and Vladimir Putin
Video on a smartphone of a real speech by Ukrainian president Volodymyr Zelenskyy
Kristina Kokhanova/Alamy
A deepfake detector can spot fake videos of Ukraine’s president Volodymyr Zelenskyy with high accuracy. This detection system could not only protect Zelenskyy, who was the target of a deepfake attempt during the early months of the Russian invasion of Ukraine, but also be trained to flag deepfakes of other world leaders and business tycoons.
“We don’t have to distinguish you from a billion people – we just have to distinguish you from [the deepfake made by] whoever is trying to imitate you,” says Hany Farid at the University of California, Berkeley.
Farid worked with Matyáš Boháček at Johannes Kepler Gymnasium in the Czech Republic to develop detection capabilities for faces, voices, hand gestures and upper body movements. Their research builds on previous work in which a system was trained to detect deepfake faces and head movements of world leaders, such as former president Barack Obama.
Boháček and Farid trained a computer model on more than 8 hours of video featuring Zelenskyy that had previously been posted publicly.
The detection system scrutinises many 10-second clips taken from a single video, analysing up to 780 behavioural features. If it flags multiple clips from the same video as being fake, that is the signal for human analysts to take a closer look.
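In outline, that flagging flow might look something like the sketch below. This is a hedged illustration only: the 10-second clip length, 780-feature count and multiple-flags rule come from the article, but the function names and thresholds are hypothetical placeholders rather than the study's actual pipeline.

```python
# Minimal sketch of the clip-level flagging flow described above.
# The feature extractor and per-clip classifier are placeholders
# (extract_features and score_clip are hypothetical names); the
# researchers' actual pipeline is not reproduced here.
from typing import Callable, List, Sequence

CLIP_SECONDS = 10        # each clip is analysed independently
N_FEATURES = 780         # up to 780 behavioural features per clip
FLAG_THRESHOLD = 0.5     # hypothetical per-clip "fake" score cut-off
MIN_FLAGGED_CLIPS = 3    # hypothetical count that triggers human review


def split_into_clips(video_frames: Sequence, fps: int) -> List[Sequence]:
    """Chop a frame sequence into consecutive 10-second clips."""
    clip_len = CLIP_SECONDS * fps
    return [video_frames[i:i + clip_len]
            for i in range(0, len(video_frames) - clip_len + 1, clip_len)]


def analyse_video(video_frames: Sequence, fps: int,
                  extract_features: Callable,      # clip -> 780-value vector
                  score_clip: Callable) -> bool:   # vector -> score in [0, 1]
    """Return True if enough clips look inconsistent with the real person."""
    flagged = 0
    for clip in split_into_clips(video_frames, fps):
        features = extract_features(clip)  # facial, vocal, gestural cues
        if score_clip(features) > FLAG_THRESHOLD:
            flagged += 1
    # Multiple suspicious clips from one video is the signal for
    # human analysts to take a closer look.
    return flagged >= MIN_FLAGGED_CLIPS
```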
Because the AI is trained on real videos of the person in question, it can detect when something doesn’t follow their usual habits. “[It] can say, ‘Ah, what we observed is that with President Zelenskyy, when he lifts his left hand, his right eyebrow goes up, and we are not seeing that’,” says Farid. “We always imagine there’s going to be humans in the loop, whether those are reporters or analysts at the National Security Agency, who have to be able to look at this being like, ‘Why does it think it’s fake?’”
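One hedged way to picture that habit-matching step: fit the joint statistics of the behavioural features on verified real footage, then measure how far a new clip’s features fall from that baseline. The Mahalanobis-distance check below is an illustrative stand-in under that assumption, not the model the researchers actually used.

```python
# Hedged illustration of "doesn't follow the person's usual habits":
# learn the mean and covariance of behavioural features from authentic
# footage, then flag clips whose features sit far from that distribution,
# i.e. clips that break the person's usual couplings (such as a hand
# lift without the accompanying eyebrow raise).
import numpy as np


class BehaviouralBaseline:
    def fit(self, real_clips: np.ndarray) -> "BehaviouralBaseline":
        """real_clips: (n_clips, n_features) array from verified real videos."""
        self.mean = real_clips.mean(axis=0)
        cov = np.cov(real_clips, rowvar=False)
        # Regularise so the covariance matrix is invertible.
        self.cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))
        return self

    def distance(self, clip_features: np.ndarray) -> float:
        """Mahalanobis distance from the person's learned baseline;
        large values suggest behaviour the real person never shows."""
        d = clip_features - self.mean
        return float(np.sqrt(d @ self.cov_inv @ d))
```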
The deepfake detector’s holistic analysis of the head and upper body makes it well suited to spotting manipulated videos, and it could complement commercially available deepfake detectors, which mostly focus on less intuitive patterns involving pixels and other image features, says Siwei Lyu at the University at Buffalo in New York, who was not involved in the study.
“Up to this point, we have not seen a single example of deepfake generation algorithms that can create realistic human hands and demonstrate the flexibility and gestures of a real human being,” says Lyu. That gives the latest detector an advantage in catching today’s deepfakes, which fail to convincingly capture the connections between facial expressions and other body movements when a person is speaking – and it could help the detector stay ahead of the quick pace of advances in deepfake technology.
The deepfake detector achieved 100 per cent accuracy when tested on three deepfake videos of Zelenskyy that modified his mouth movements and spoken words, commissioned from the Delaware-based company Colossyan, which offers custom videos featuring AI actors. Similarly, the detector performed flawlessly against the actual deepfake that was released in March 2022.
But the time-consuming training process, which requires hours of video for each person of interest, makes the approach less suitable for identifying deepfakes involving ordinary people or non-consensual videos of sexual acts. “The more futuristic goal would be how to get these technologies to work for less exposed individuals who do not have as much video data,” says Boháček.
The researchers have already built another deepfake detector focused on ferreting out false videos of US president Joe Biden, and are considering creating similar models for public figures such as Russia’s Vladimir Putin, China’s Xi Jinping and billionaire Elon Musk. They plan to make the detector available to certain news organisations and governments.
Journal reference: PNAS, DOI: 10.1073/pnas.2216035119
Magazine issue 3417, published 17 December 2022