For much of 2020, the most visible conversation about the US election and tech was about deepfakes (images or videos where the subject is replaced with another person's likeness). The warnings claimed they could "destroy democracy" and influence the US election in ways we couldn't possibly imagine. People talked about disinformation, regulation, and how automated detection probably wouldn't help a great deal.
It all sounded very bad indeed. And it didn’t happen.
With hindsight we can see that the flash points of the November election had nothing to do with deepfakes. The election came and went, and the deepfake hype train fizzled out spectacularly. The one notable moment I can recall from the election period is a fake of Republican Matt Gaetz, and it's so bad it resembles an old PlayStation cutscene.
If your message is "If we can do this, imagine what Putin can do", and what you've done is awful, then people aren't going to take it seriously.
Deepfake creators follow the money
Deepfake pros almost certainly decided to stay where they make their money: dubious porn clips. It's (somewhat) more under the radar than drawing attention to yourself with election interference, and there's a never-ending supply of people wanting celebrity fakes or revenge / blackmail pornography.
Indeed, data from Sensity illustrated this perfectly. Although the US was the most targeted nation for deepfake activity, politics wasn't the main target: the most popular sector for fakes was entertainment at 63.9%, while politics weighed in at an incredibly low 4.5%.
Creating a political deepfake capable of turning the tide of an election, when even major outlets can only create very bad fakes? It was always going to be a long shot.
Where did all the deepfake election interference go?
The biggest problem for the November election was disinformation, conspiracy theories, and outright manipulation going viral. Creating a politically charged deepfake, and keeping it believable for long enough before the inevitable debunking, seems just plain unnecessary. Why invest all that time and effort into something when you can spin up millions of likes and reposts on social media instead?
These are questions which may not have been asked as rigorously as they should have been. The 2020 election has come and gone, and so has the chance for fakers to make an indelible mark on key aspects of democracy. What we ended up with was half a dozen poorly made clips which feel more like parody than anything particularly serious. Indeed, the serious part is that folks working in and around government look at the political clips offered during the run-up to the vote and genuinely think they're good uses of the technology. They are not, which suggests those folks need to be brought up to speed on the convincing (and not so convincing) sides of this realm.
The bright side is that the window for deepfakes to impact an election, any election, appears to have closed. Many analysts suspect their best use is as an addition to scams, not the main feature. Even a little scrutiny brings the walls of artifice crashing down, so scammers do best keeping them at the edges of a victim's peripheral vision.
Actual uses of deepfakes in the wild
Some of the biggest media splashes for deepfakes in the past few months have had little, if anything, to do with electioneering. One smash hit was the Tom Cruise deepfakes posted to TikTok back in March, which dazzled people with their brilliance. If you missed them, this video from creator Chris Ume will give you a sense of just how good deepfakes can be:
Sadly, the genuinely impressive nature of the clips was undone almost immediately:
- The creator posted them to an account called “Deeptomcruise”, which linked to social media accounts of a well-known Tom Cruise impersonator.
- Viral attention was drawn to the clips as intended, instead of them simply being uploaded in low-key fashion and left to spread slowly, unnoticed, across the web for months or years.
- The creator spilled the beans in the press almost immediately, and mentioned they were essentially trying to get work off the back of it.
This was arguably never intended to be a clever commentary on the unreal nature of AI, but a VFX job reel.
The case of the face-swapped biker
The other interesting piece of fake media was the revelation that a popular female biker was using FaceApp to hide the fact he was a middle-aged man. This one genuinely shocked people and, unlike the Cruise approach, was designed to conceal the truth from the get-go. If he hadn't had a change of heart and told all, his many fans would still be none the wiser.
Compare and contrast all of the sophisticated GAN (generative adversarial network) tools you see in the news with "middle-aged man performs face swap using incredibly commonplace phone app". Which one is more relevant? Which one had more impact, outside of actual observable harm such as deepfake revenge porn?
Digital detection and disclosure
While the notion of exposing your own fakery seems contradictory, in some ways the Tom Cruise deepfake creator had it right. Yes, it's fake, but they're not exactly pretending it's genuine. In the same spirit, we now have app developers planning to add watermarks to their user-generated clips, the EU may want organisations to disclose when deepfakes are deployed, and researchers continue to study new methods of deepfake detection. Note that the researcher in that last link also seems more concerned about deepfake antics away from major electioneering.
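To give a sense of what a visible disclosure watermark might involve, here's a minimal Python sketch using OpenCV that stamps a label on every frame of a clip. The file names and label text are hypothetical placeholders; a real app would presumably embed something far harder to crop or compress away, but the principle is the same:

```python
# Minimal sketch: stamp a visible "synthetic media" disclosure on each frame.
# Assumes OpenCV (pip install opencv-python); paths and label are placeholders.
import cv2

def watermark_video(src_path: str, dst_path: str,
                    label: str = "SYNTHETIC MEDIA") -> None:
    reader = cv2.VideoCapture(src_path)
    fps = reader.get(cv2.CAP_PROP_FPS)
    width = int(reader.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(reader.get(cv2.CAP_PROP_FRAME_HEIGHT))
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")
    writer = cv2.VideoWriter(dst_path, fourcc, fps, (width, height))

    while True:
        ok, frame = reader.read()
        if not ok:
            break
        # Draw the disclosure text in the bottom-left corner of the frame.
        cv2.putText(frame, label, (10, height - 20),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.0, (255, 255, 255), 2)
        writer.write(frame)

    reader.release()
    writer.release()

watermark_video("clip.mp4", "clip_labelled.mp4")
```

A burned-in label like this survives screenshots and re-uploads better than metadata tags do, which is presumably why visible watermarks keep coming up in these disclosure proposals.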
Wherever you look, there's a growing consensus that people simply want to know whether what's placed before them is legitimate. If there is fakery involved, I suspect they're cool with it as long as upfront disclosure takes place. The comfort levels around this technology somewhat suggest folks now view it the same way they view cinema-based VFX. That could itself be a problem: become too complacent with it, and the tech runs the risk of causing unexpected damage down the line. Sure, it's mostly fun and amusing right now, but what about when it suddenly isn't?
There is also the significant volume of people out there prone to conspiracy theories and other virtual shenanigans. No matter how bad the fake, or how silly the story it's attached to, there's a good chance they'll believe the content regardless of any disclaimer provided.
For now, deepfakes remain the tool of choice for revenge porn peddlers, scammers, and the occasional humorous celebrity face-swap; the malign interference campaigns and troll farms have largely got by without them. It remains to be seen, a year on from 2020, if they'll ever strike a decisive blow in the misinformation wars on a grand scale.