MIT Technology Review yesterday reported a story about how ‘troll farms’ (probably better described as ‘profiteering content spammers’) copy and repost content to gain a lot of traction on Facebook. The spammers’ strategy gamed the way the company chooses to display content to users, ensuring their posts got lots of engagement and were then recommended to even more users, racking up still more (allowing the trolls, in the end, to make a bit of money off their ‘work’).
This happens, it seems, because while Facebook cares a lot about how much engagement a post gets when it decides who to show it to, it cares far less about who posted it in the first place. The report cites the fact that the company chooses not to prioritise a “graph-based authority measure” (another way of saying that trusted sources tend to build up links with other trusted pages over time, and that the ties within this network are a good way of deciding whose content should be shown first).
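For the curious, here’s a minimal sketch of what a “graph-based authority measure” could look like: a PageRank-style score in which pages inherit trust from the already-trusted pages that link to or share them. The page names, edges, and parameters below are invented purely for illustration; Facebook’s actual signals are obviously far more elaborate.

```python
# A rough sketch of a "graph-based authority measure": pages that are linked to
# (shared, cited, cross-posted) by already-trusted pages inherit some of that
# trust, PageRank-style. The page names, edges, and parameters are invented
# for illustration only.

def authority_scores(links, damping=0.85, iterations=50):
    """links maps each page to the pages it endorses (links to / shares)."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    score = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_score = {p: (1 - damping) / len(pages) for p in pages}
        for page, targets in links.items():
            if not targets:
                continue
            share = damping * score[page] / len(targets)
            for target in targets:
                new_score[target] += share
        score = new_score
    return score

# Hypothetical example: established outlets link to each other, while a spam
# page only links outward, so nobody passes any trust to it.
links = {
    "established_outlet_a": ["established_outlet_b", "local_paper"],
    "established_outlet_b": ["established_outlet_a", "local_paper"],
    "local_paper": ["established_outlet_a"],
    "spam_page": ["established_outlet_a", "established_outlet_b"],
}
print(authority_scores(links))
```

In a toy graph like this, the spam page that links out to established outlets but gets nothing back ends up with a far lower score than the outlets themselves – which is exactly the property that makes this kind of measure useful for deciding whose content to show first.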
Now, the MIT report is about non-political organic content, but if you swap “organic engagement” for “money”, similar problems apply to paid ads too. If you have the money to spend, you can run as many political ads as you like. Apart from a brief verification step, there’s no “graph-based authority measure” or reputation score to hold you back.
This is exactly what happened in the UK election in 2019, where hitherto unknown advertisers spent fairly large sums of money (also of unknown origin) and, in doing so, managed (according to our data from the time) to buy more ad impressions on election day than the mainstream parties.
Similar practices were also pretty widespread in the US in 2020, where (mostly Trump-supporting) pages would pop up and buy ads directing people to websites that were, in turn, stuffed with more ads, so their operators could pocket the profit.
Since the Russian IRA bought ads for the US election in 2016, Facebook (and Google) have run verification programmes for political advertisers. These seem to have staunched the flow of ads bought from abroad for so-called “influence campaigns”. But, as the examples above show, they haven’t really stopped low-trust advertisers (political or otherwise) from doing whatever they want during election periods (there are plenty of crappy commercial ads on social media platforms too…).
The frustrating thing is that there is plenty that could be tried to stop this happening – mostly to do with slowing things down for a while before allowing a page to access full advertiser functionality.
Facebook (or Google) could (there’s a rough sketch of how these might fit together after the list):
- Cap the number of ads a new page can run
- Cap the reach of or spend on those ads
- Block the page from using custom/remarketing or lookalike audiences, so it has to go broad
- Charge new/untrustworthy pages a premium for their ads, at least higher than the highest price trusted buyers are paying
- Downrank the ads of advertisers who cross-post lots of ads between accounts
- Downrank the ads of advertisers whose ads steal others’ content
- Downrank the ads of advertisers who run lots of pages without good reason
- Remove engagement features from the page’s ads (we think this is something that should happen all the time)
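To make that list a bit more concrete, here’s a rough sketch of how these throttles could hang together in code. Every field name, threshold, and number is made up for illustration – this isn’t how any ad platform actually works, just a way of showing that the measures are simple to express and test.

```python
# A rough sketch of how the "on-ramp" throttles above might hang together.
# Every name, threshold, and number here is invented for illustration; a real
# ad platform's internals will look nothing like this.

from dataclasses import dataclass


@dataclass
class AdvertiserProfile:
    account_age_days: int
    authority_score: float          # e.g. the graph-based measure sketched earlier
    duplicate_content_ratio: float  # share of ads matching content seen elsewhere
    linked_page_count: int          # pages run from the same account/entity


def onramp_limits(p: AdvertiserProfile) -> dict:
    """Return the caps and penalties to apply to this advertiser's ads."""
    new_or_untrusted = p.account_age_days < 90 or p.authority_score < 0.2
    limits = {
        "max_active_ads": 10 if new_or_untrusted else None,             # cap ad volume
        "max_daily_impressions": 50_000 if new_or_untrusted else None,  # cap reach/spend
        "custom_audiences_allowed": not new_or_untrusted,               # force broad targeting
        "price_multiplier": 1.5 if new_or_untrusted else 1.0,           # premium pricing
        "rank_penalty": 0.0,
        "engagement_features": False,  # no comments/shares on ads at all
    }
    # Downranking signals: reposting others' content, or running many pages
    # and cross-posting ads between them without an obvious reason.
    if p.duplicate_content_ratio > 0.5:
        limits["rank_penalty"] += 0.3
    if p.linked_page_count > 5:
        limits["rank_penalty"] += 0.2
    return limits


print(onramp_limits(AdvertiserProfile(
    account_age_days=14, authority_score=0.05,
    duplicate_content_ratio=0.8, linked_page_count=12)))
```

Note that the downranking signals look only at behaviour (cross-posting, duplicated content, page sprawl), not at what the ads actually say.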
Finally, as a safeguard, the platforms could work to ensure that people running for office don’t face these “on-ramp” type measures (after all, candidates are often selected just a few weeks out from an election, and it’s not fair to limit their campaigning just because their pages are new).
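That safeguard is easy to sketch too, under the (entirely hypothetical) assumption that the platform can check advertisers against an official candidate register:

```python
# A sketch of the candidate safeguard. The register and lookup here are
# hypothetical stand-ins for however a platform would actually verify
# that an advertiser is a declared candidate.

VERIFIED_CANDIDATES = {"jane_doe_for_parliament"}  # stand-in for an official register

def limits_to_enforce(advertiser_id: str, onramp_limits: dict) -> dict:
    """Verified candidates skip the on-ramp caps; everyone else gets them."""
    if advertiser_id in VERIFIED_CANDIDATES:
        return {}  # no new-advertiser caps for declared candidates
    return onramp_limits

print(limits_to_enforce("jane_doe_for_parliament", {"max_active_ads": 10}))
print(limits_to_enforce("brand_new_spam_page", {"max_active_ads": 10}))
```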
Unfortunately, people who want to abuse measures like these will probably work out a way around them over time, which means the measures will need to keep evolving. But they seem pretty simple to imagine and test, and they’re content-neutral, which makes them appealing.
Perhaps one day the Wall Street Journal will leak a whistleblower’s report about how they were tried and rejected.