Dire predictions about deepfakes damaging elections in 2024 turned out to be a long way off the mark.
The predictions mostly came from two camps.
The first was politics people who don’t really understand technology. The second was technology people, particularly AI people, who don’t understand political campaigns.
The reality was that, for problematic generative AI to have a big impact, it needed to successfully thread a series of needles, each a foot apart from the last.
So far, AI hasn’t come close to doing that.
Here’s what the politics people didn’t really get:
- AI tools aren’t very good yet.
- They certainly aren’t reliable or mature enough for campaigns to use.
- What you can do with AI, you can generally already do with other tools, often better (even if it’s slower or harder).
- The “data-driven campaign” isn’t usually as data-driven as you think (see also how winning campaigns now all have a “tech genius” whereas losing ones “didn’t invest in the right stuff”).
- AI companies (and social media companies) do try to stop bad things from happening (even if they often fail). People being unable to separate real from unreal is, in the end, bad for business.
And here’s what the tech/AI folks didn’t get:
- Most campaigns won’t risk their reputations on tools they can’t reliably control.
- Actively creating fake stuff about opponents is bad for a campaign (yes, even that campaign). Pros don’t do it, and amateurs tend to overdo it.
- It’s actually hard to come up with damaging fake stuff anyway.
- Fake stuff is fake, and people usually spot it in the end.
- People will tell other people when stuff is fake. Particularly journalists.
- What’s the “Day Two” story? Narratives matter more than individual pieces of information, and AI can’t create or sustain them.
- People distrust AI. Experiments show this. If they believe something is AI-generated (even when it isn’t), they trust it less.
- Most people are hard to persuade, and the persuadable ones are relatively few.
- Foreign influence campaigns (aka “the Russians”) have low reach and prevalence and you can ignore them most of the time.
Lastly, neither group really understood the extent to which media fragmentation has made a mess of things.
The argument was that fragmentation would help AI cause chaos: it would be hard to stop fake content taking root, and it would take root on platforms with fewer moderation resources before making the jump to places where it could become ubiquitous.
But fragmentation seems to have been a limiting factor, with examples of “bad AI” (or even satirical AI) largely stuck on the platforms where they were first posted, rather than hopping from place to place.
And fragmentation is where we can hook deepfakes – or the lack of them – to the US campaign just gone.
All the classic proxies for successful campaigning – poll numbers, funds raised, doors knocked – evaporated into an ebb and flow of “vibes”, with the “lead” seeming to change hands every few days. There was no one place where the campaign was happening. It was both everywhere and nowhere all at once.
Amid all that, deepfakes, like the rest of us, felt a little lost.