A question that’s asked a lot, particularly as discussions about how to regulate political ads rumble on, is “how should we decide what is – and isn’t – a political ad?”
The answer matters because legislators and regulators – rightly – want to impose some costs on political ads in the name of trust and transparency. At the same time, they want to try to avoid imposing these costs on non-political ads and actors.
Unfortunately, having thought about it a lot, we don’t think there’s a perfect answer to the question. No matter how broad you want the definition to be (and we think it should be pretty broad), you have to acknowledge that there will always be some false positives (ads identified as being about politics or political issues, but that aren’t) and some false negatives (ads that should have been identified, but were missed).
So instead of trying to discover and enforce a clear bright line on political vs. non-political ads, we propose a different approach – a position that tolerates a specific level of uncertainty.
Here’s how we’d go about it.
To start, you need an advertising platform (obviously) and an independent auditor.
The platform (a social media company or search engine) sells the ads, but also has "know your customer" and transparency obligations under the law (publishing information about ads and spending via ad libraries or APIs).
The independent auditor periodically checks this process and reports on how well the platform correctly identifies political ads. This ensures the platform doesn’t get to mark its own homework. Obviously the auditor would need unrestricted access to platform ad data. Platforms might choose to grant this, or the law could clear the way.
Next, you need some definitions of a political ad to test the platform’s policies and moderation processes against. We’ve already said there isn’t a perfect one, so instead, we’ll work with several. Think of these as forming concentric circles, with each definition broader than the last.
What should these definitions include?
We might start with a very narrow definition that covers only ads from parties and candidates used for purely electoral purposes (i.e. during a campaign period).
A second definition could expand on the first to include ads that mention politically relevant topics and keywords, whether in an election period or not, along with national and local government campaigns and relevant ads by media organisations.
The final, broadest definition could include actors who aren’t running for office, but who reference issues with political or policy implications in their ads (this is something like the scope Facebook aims for in its Ad Library). This would include ads from lots of charities and foundations, advocacy groups and campaigns.
For each definition, we also need to decide on a level of accuracy that we want the platform’s policies and moderation processes to try to meet. As the definitions broaden – the ads become less directly electoral, and the volume of ads you’re working with grows – you can tolerate somewhat lower accuracy in how well platforms identify relevant ads.
For example, for the narrowest definition (specific electoral campaigns), a platform should be expected to catch almost all relevant ads (say 99% of spending). There should be few false positives (non-electoral ads marked as political) and few false negatives (ads that should have been caught but weren’t). By contrast, for the broadest definition, you might be quite tolerant of false positives in order to reduce the number of ads that are missed.
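As a concrete sketch, the sliding scale of targets might be encoded like this (our own illustrative Python – the targets, names and spend figures are made up, not drawn from any real platform or regulation):

```python
# Illustrative only: accuracy targets for the three concentric definitions,
# narrowest first. The narrower the definition, the higher the bar.
DEFINITION_TARGETS = {
    "electoral ads": 0.99,                 # narrowest: catch almost everything
    "political actors and topics": 0.95,
    "all political and issue ads": 0.80,   # broadest: lowest required accuracy
}

def audit(definition, total_spend, identified_spend):
    """Compare the share of in-scope ad spend the platform correctly
    labelled against the target for the given definition."""
    rate = identified_spend / total_spend
    target = DEFINITION_TARGETS[definition]
    return {"rate": rate, "target": target, "passed": rate >= target}

# A platform that labels 75% of in-scope spend under the broadest
# definition falls short of the 80% target and fails the audit.
print(audit("all political and issue ads", 1_000_000, 750_000))
```

The point of keying the check to spend rather than ad counts is that a handful of expensive missed ads matters more than many cheap ones.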
When all this is put together, we find ourselves with a series of concentric definitions of a political ad, each with appropriate accuracy targets for platforms to meet, monitored by an independent auditor.
In terms of “reporting” on this data, it could look something like this (fictional data, reported on a per-year, per-country basis):
| Definition | Total spending | Ad spending correctly identified | False negatives (ads missed by platform) | False positives (ads included by platform that shouldn’t have been) | Recommendations for improvement |
| --- | --- | --- | --- | --- | --- |
| Definition 1: Electoral ads (target: 99%) | £1,254,304 | 99% (pass) | 1% (£12,500) | 5% (£62,700) | N/A |
| Definition 2: Political actors and topics (target: 95%) | £3,455,235 | 97% (pass) | 13% (£44,520) | 22% (£449,150) | Close to the threshold, so increase audit frequency. |
| Definition 3: All political and issue ads (target: 80%) | £7,445,234 | 77% (fail) | 23% (£1,714,234) | 45% (£3,354,000) | Below threshold; certain topics missed. Retrain ML models, provide new guidance to moderators, etc. If this continues, penalties will be applied. |
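For illustration, here’s how an auditor might assemble one such row from raw spend figures (a Python sketch of our own; the function and field names are ours, and the identified-spend input is back-computed from the fictional 99% figure above):

```python
def report_row(definition, target, total, identified, fn_spend, fp_spend):
    """Turn raw audited spend figures into one report row: percentage of
    spend correctly identified, a pass/fail verdict against the target,
    and false negative / false positive shares of total spend."""
    rate = identified / total
    return {
        "definition": definition,
        "identified_pct": round(100 * rate, 1),
        "verdict": "pass" if rate >= target else "fail",
        "false_negatives_pct": round(100 * fn_spend / total, 1),
        "false_positives_pct": round(100 * fp_spend / total, 1),
    }

# Fictional Definition 1 figures: £1,254,304 total in-scope spend,
# ~99% of it identified, £12,500 missed, £62,700 wrongly included.
row = report_row("Electoral ads", 0.99, 1_254_304, 1_241_761, 12_500, 62_700)
```

Everything here is a percentage of total in-scope spend, which keeps rows comparable across definitions of very different sizes.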
If a platform meets the targets for its systems, it gets a clean bill of health from the regulator. If a platform fails to maintain a functioning system for identifying political adverts so it can meet its transparency obligations, it could be penalised, or banned from accepting ads from campaigns.
For us, this fits into a wider pattern of ideas that try to understand whether moderation systems (usually a combination of machine learning and humans) are effective. We know that bad stuff always gets through, but finding it tends to create more heat than light. Faced with individual examples of their moderation failures, platforms tend to go on the defensive, arguing that only they have the full picture of a much more complex and positive reality.
By contrast, an analysis and accountability system of the type we propose above would allow comparison of how different definitions and platforms perform. It’d let platforms and their regulators track results over time and identify actions and improvements. It might also help end the cycle of narrative and counter-narrative between platforms and their critics and take the debate to a more productive place.
To do this, though, we have to accept that there are some things we can’t bring into perfect focus – the definition of a political ad being one of them. There’s an upside to this: it might be better not to try at all.
Appendix: Attempting an audit using Facebook Ad Library data
To see what the above might look like in practice, here’s some recent, back-of-the-envelope analysis we performed. We found that in Germany, for the 30 days to the 17th July, on Facebook:
- There were 3,390 pages that ran at least one “political or issue ad” (per Facebook’s own definition and identification).
- Of those, 1,457 were party-affiliated (per our own identification).
- If we include government pages and other political players, the number increases to 1,653 pages.
- The rest (1,737) are a mixture of companies, charities, foundations and individuals. Many of these are on broadly political topics, but a large number are false positives.
- 974 of the pages ran political ads “without a disclaimer”, most likely because they didn’t consider their advertising to be political. For many, this was true – a large share of the false positives are found among these pages. We found that 65 of these pages were affiliated with political parties in some way.
- In terms of spending, party affiliated pages amounted to 17% of the total (€430k out of €2.5m). When you add in government pages and other political players, the proportion rises to 60%.
- The 974 pages “without a disclaimer” spent €330k. The 65 party pages “without a disclaimer” spent around €4k, or 0.15% of the total.
- False negatives can’t be found using Facebook’s Ad Library data, as the dataset (by definition) doesn’t include things they missed.
- The NYU team working on political ads estimates that, in the US, Facebook correctly identifies around 90% of “political and issue” ads. However, Facebook has recently shut down their access to relevant data.
- This is why you need an independent auditor with full, unimpeded data access.