2024 has been the year deepfakes took off. Not only has it been a pivotal election year in countries such as the UK, the US, and France, but disinformation has also run rampant across social media platforms with feverish ferocity.
Even at the less consequential end, deepfaked celebrities have been used to scam people, mainly by getting them to “invest” in dubious schemes and apps. And deep into one of the most critical election years, we are seeing AI-generated images and audio of political figures making bizarre statements and behaving in decidedly odd ways.
While the UK election has seen relatively few deepfake incidents, examples from around the world, particularly in the US, highlight the growing prevalence of this technology. Here’s a guide to identifying deepfakes and understanding their implications.
Identifying Deepfakes: Key Visual Clues
Incoherent body proportions and skin tones: Disproportionate facial and body sizes and mismatched skin tones can signal a deepfake. In March 2022, a video of Ukrainian President Volodymyr Zelenskiy asking civilians to surrender to Russian forces was debunked when viewers noticed the head was disproportionately large compared to the body.
Such “puppet-master” deepfakes often reveal their inauthenticity through immobile body parts below the neck.
Extra fingers and limbs: AI-generated images often fail to render human anatomy accurately, resulting in extra fingers or distorted limbs. In April 2023, a photograph of US President Joe Biden and Vice President Kamala Harris celebrating Donald Trump’s indictment circulated on Twitter. It was flagged as a deepfake because Harris’s right hand had six fingers, alongside other visual inconsistencies such as a distorted flag and awry floor patterns.
No real letters or numbers: AI image generators struggle to reproduce text and numbers. A fake mugshot of Trump published in April 2023 featured nonsensical letters and numbers in the background instead of a coherent height chart. Such garbled text is a clear indication of AI involvement, as these tools lack the understanding needed to generate meaningful symbols.
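The garbled-text clue above lends itself to a simple automated check: run OCR on the suspect image (assumed done elsewhere), then measure how many extracted tokens are real words. This is a minimal sketch with a tiny stand-in word list; a real checker would use a full dictionary.

```python
# Toy sketch: flag OCR-extracted text as likely AI-garbled when too few
# tokens look like real words. KNOWN_WORDS is an illustrative stand-in;
# a real checker would load a full dictionary.
import re

KNOWN_WORDS = {"police", "department", "height", "name", "date", "city"}

def gibberish_ratio(ocr_text: str) -> float:
    """Fraction of alphabetic tokens NOT found in the word list."""
    tokens = re.findall(r"[a-z]+", ocr_text.lower())
    if not tokens:
        return 0.0
    unknown = [t for t in tokens if t not in KNOWN_WORDS]
    return len(unknown) / len(tokens)

def looks_garbled(ocr_text: str, threshold: float = 0.5) -> bool:
    return gibberish_ratio(ocr_text) > threshold

print(looks_garbled("POLICE DEPARTMENT HEIGHT"))     # False: real signage
print(looks_garbled("PLOICE DEPRTMNT HEGIHT XQZV"))  # True: AI-style gibberish
```

The threshold is a judgment call: genuine photos can contain unusual proper nouns, so a single unknown word should not trip the flag.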
Odd-shaped mouth and chin when speaking: One of the most telling signs of a deepfake video is the area around the mouth. Deepfake technology often struggles to render this part of the face accurately, leading to fewer wrinkles, less detail, and sometimes a blurry or smudged appearance. Poor synchronization between a person’s voice and mouth movements can also be a red flag.
For instance, a deepfake video posted in June 2024 depicted Nigel Farage, leader of the UK’s Reform Party, destroying Rishi Sunak’s house in the video game Minecraft. The imperfect sync between Farage’s voice and mouth movements indicated that the video was manipulated.
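The voice-to-mouth sync check can be framed as a correlation test: if a loudness envelope from the audio and a mouth-openness signal from face tracking barely correlate, the audio may be dubbed. The sketch below assumes both signals have already been extracted at the same frame rate (real pipelines would use audio analysis and facial-landmark tracking for that); the lists here are synthetic.

```python
# Toy sketch: correlate an audio loudness envelope with a mouth-openness
# signal sampled per video frame. Both inputs are synthetic illustrations.
from statistics import mean

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    ma, mb = mean(a), mean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sd_a = sum((x - ma) ** 2 for x in a) ** 0.5
    sd_b = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sd_a * sd_b)

def sync_suspect(audio_env, mouth_open, threshold=0.5):
    """Flag the clip when voice and mouth movement barely correlate."""
    return pearson(audio_env, mouth_open) < threshold

audio      = [0.1, 0.8, 0.9, 0.2, 0.7, 0.1]
good_mouth = [0.2, 0.7, 0.8, 0.1, 0.6, 0.2]  # tracks the audio
bad_mouth  = [0.9, 0.1, 0.2, 0.8, 0.1, 0.9]  # moves against it

print(sync_suspect(audio, good_mouth))  # False: well synchronised
print(sync_suspect(audio, bad_mouth))   # True: likely dubbed
```

Real lip-sync detectors are far more sophisticated, but the principle is the same: speech energy and mouth opening should rise and fall together.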
Bizarre video edits: Some manipulated videos are so poorly edited that they are easily spotted. Known as “cheap fakes,” these use simple video-editing software to create misleading content.
Before the Mexican elections, a video of then-presidential candidate Claudia Sheinbaum was edited to make it appear she wanted to close churches. The video was a patchwork of different clips, including one where she was refuting the false claim, highlighting how amateurish editing can reveal a video’s inauthenticity.
No continuity: Another sign of a deepfake is inconsistency within the video itself. A video circulated in May 2024 falsely showed US State Department spokesperson Matthew Miller justifying Ukrainian military strikes on the Russian city of Belgorod.
The clip had glaring inconsistencies, such as Miller’s tie and shirt changing color mid-video. Such noticeable changes are often indicative of deepfake technology at play.
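A continuity slip like the color-changing tie can be caught programmatically by tracking the average color of a fixed region across frames and flagging abrupt jumps. In this minimal sketch each "frame" is already reduced to the region's average RGB value; a real check would crop and average pixels with an image library.

```python
# Toy sketch: detect an abrupt colour change in a tracked region (say,
# a tie) across frames. Each frame is represented only by the region's
# average (R, G, B) — an assumed preprocessing step.

def colour_jump(avg_rgb_per_frame, threshold=60):
    """Return frame indices where the region's colour shifts sharply."""
    jumps = []
    for i in range(1, len(avg_rgb_per_frame)):
        prev, cur = avg_rgb_per_frame[i - 1], avg_rgb_per_frame[i]
        dist = sum((a - b) ** 2 for a, b in zip(prev, cur)) ** 0.5
        if dist > threshold:
            jumps.append(i)
    return jumps

# A blue tie that suddenly turns red mid-clip:
frames = [(40, 40, 160), (42, 39, 158), (41, 41, 161),
          (180, 30, 35), (178, 32, 33)]
print(colour_jump(frames))  # [3] — the frame where the tie changes colour
```

Lighting changes also shift colors, so a production system would normalize for brightness before comparing; the Euclidean distance here is the simplest possible measure.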
Strange speech patterns: Deepfake videos may also have odd speech patterns. For example, a deepfake of Keir Starmer selling an investment scheme used audio edited over his 2023 New Year address. The video’s sentence structure was peculiar, with Starmer saying “pounds” before numbers, suggesting the use of a text-to-speech tool that failed to mimic natural speech. This, coupled with a blurred lower facial area and out-of-sync voice and mouth movements, pointed to the video’s artificial nature.
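The "pounds before numbers" quirk is exactly the ordering a text-to-speech tool produces when reading a written amount like "£35,000" literally, and it is easy to scan for in a transcript. A minimal sketch (the monetary figure below is illustrative, not from the actual clip):

```python
# Toy sketch: flag a currency word spoken directly before a number
# ("pounds 35,000" rather than "35,000 pounds") — the unnatural ordering
# produced when a text-to-speech tool reads "£35,000" left to right.
import re

def odd_currency_order(transcript: str) -> bool:
    """True when a currency word immediately precedes a number."""
    pattern = r"\b(pounds|dollars|euros)\s+\d[\d,]*\b"
    return re.search(pattern, transcript.lower()) is not None

print(odd_currency_order("you could earn pounds 35,000 a month"))  # True
print(odd_currency_order("you could earn 35,000 pounds a month"))  # False
```

A fuller version would cover more currencies and spelled-out numbers, but even this narrow pattern catches the tell the Starmer clip exhibited.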
The minute details: Most people who come across a deepfake are casually scrolling through their social media feeds, which makes it difficult to spot one. But observe a deepfake carefully, especially an AI-generated image, and you will see that it has plenty of idiosyncrasies.
A case in point is the widely shared image of Pope Francis in a luxury puffer jacket. Look at the picture carefully and you’ll notice that several details defy physical plausibility. The crucifix, for example, hangs perfectly level, but from only one half of its chain. The Pope’s fingers also look very odd around the cup.
Glasses and mirrors: As sophisticated as deepfakes have become, they still have a hard time with glasses and mirrors. Even though the computers and cloud services most deepfakes are made on are powerful, the models struggle to recreate how light interacts with glass and mirrored surfaces. Again, take Pope Francis as an example: in the image above, notice how his glasses blend into his skin. The engines that create deepfakes have a tough time modeling refraction and reflection.
How to Stay Vigilant Against Fakes
As generative AI technology evolves, deepfake detection becomes increasingly challenging. Experts like Dr. Mhairi Aitken from the Alan Turing Institute emphasize the importance of common sense and skepticism when encountering potentially misleading media. Comparing suspicious videos with known authentic footage of the individuals can help identify inconsistencies in voice, mannerisms, and expressions.
Deepfakes threaten to mislead voters and undermine trust in authentic media. Awareness and critical evaluation of digital content are crucial to navigating the information landscape during this election year. As disinformation tactics grow more sophisticated, staying informed and vigilant is the best defense against the manipulation of public opinion.