Deepfake videos of Boris Johnson and Jeremy Corbyn emerge – with a stark message
Two fake videos of UK Prime Minister Boris Johnson and his opponent Jeremy Corbyn doing the seemingly impossible — endorsing each other — have been posted online ahead of the UK's snap general election in December.
The videos, called deepfakes, were released by Future Advocacy, a think tank that endorses responsible policies around the use of artificial intelligence. The organization made the videos using the biometric data of both politicians and voice actors.
“My friends, I wish to rise above this divide and endorse my worthy opponent, the Right Honourable Jeremy Corbyn, to be Prime Minister. Only he, not I, can make Britain great again,” Johnson tells the camera. A matching video of Corbyn endorsing Johnson was also posted to Twitter.
“What we’ve done today is good, but it’s going to get way better,” said Areeq Chowdhury, Head of Think Tank at Future Advocacy. “In the past this kind of thing could only really be done by a Hollywood production company – now it’s being done by a small think tank. Soon everyone will be able to do this.”
In the past year, the number of deepfake videos online has almost doubled, according to an October report by DeepTrace, an Amsterdam-based laboratory that detects and monitors deepfakes. According to the report, non-consensual pornography currently makes up 96% of deepfake videos found on the internet.
Deepfakes are already fairly easy for the average user to create. In September, a Chinese app called Zao was released that lets users digitally plaster their face over actors from their favorite movies, prompting viral videos of users morphed into characters from Titanic and Game of Thrones.
“The type of deepfake we’ve produced is called video dialogue replacement,” said Bill Posters, the UK artist who created the videos of Johnson and Corbyn and was also behind the deepfake footage of Mark Zuckerberg that went viral in June. Deepfakes are “about engaging the emotional sides of our brains,” he said. “You can create multi-sensory experiences that are incredibly engaging and used as narrative devices. In turn, these devices can be used as powerful tools for disinformation campaigns.”
“We want to show people where the technology is at,” Chowdhury said. While many deepfake videos still rely on voice actors, voice-mimicking software is constantly advancing. In September, such software was reportedly used in the UK for one of the first-ever “AI heists”: criminals placed a fake phone call to a company employee and persuaded him to transfer hundreds of thousands of dollars to a foreign bank account.
Video is harder to pull off convincingly, Chowdhury explained. “Some of them are good, but how good does it need to be before we start regulating it?” he said. “I don’t think it needs to be perfect.”
The Future Advocacy videos are not perfect — small inconsistencies in intonation and facial movement give the game away. Chowdhury offered some advice for spotting a deepfake: “Look for oddities in the video, the voice, especially around the edges of the face — if you look very closely the edges tend to be less realistic.”
And at the end of each video, the fake politicians reveal their true nature. “I think I may be one of the thousands of deepfakes on the internet using powerful technologies to tell stories that aren’t true,” the fake Corbyn admits.
“The rise of synthetic media and deepfakes is forcing us towards an important and unsettling realization: our historical belief that video and audio are reliable records of reality is no longer tenable,” Giorgi Patrini, CEO of DeepTrace, wrote in a statement last month.
Chowdhury echoed this thought: “I think we need to think a bit more about how we visibly share what is a trusted video,” he said. “I don’t know the answer to that.”