How to fact check political ads

1st June 2020

A while back we published ‘10 ideas for regulating political ads’ (which you might want to read before you read this).

One idea that some argue for that we don’t agree with is having a regulator fact check them. In the heat of an election campaign, we don’t think it’s sustainable or desirable for a regulator, no matter how notionally independent from the government, to monitor the content of thousands of politically charged ads.

But that’s only true for a regulator. It’s why we instead focused on regulatory ideas that increase transparency and trust, which we think would raise the floor for online political ad regulation.

For platforms, our view is different. By working together, we think platforms can do better than set the floor — they can raise the ceiling.

At the moment, self-regulation is very uneven. Twitter has ‘banned’ political advertising (a lauded but cheap decision that still requires some definitional contortions when deciding what counts as ‘political’). Google has banned certain targeting methods for political ads (again lauded, again cheap, again fraught with definitional issues, and actively harmful to legitimate campaigners who wish, for example, to fundraise from their existing supporters). Facebook, meanwhile, allows political advertising, with a reasonable amount of transparency and some limits on particularly egregious content, but with no fact checking and in a form that makes paid-for political messages on its platform hard to hold to account.

Each platform has made different choices and therefore works somewhat differently, but all sell ads (to a greater or lesser extent) to political or issue advertisers wishing to reach the audiences who visit their digital properties.

In our previous post, we argued that there should be a common, regulated standard for ad transparency and common, regulated limits on how many ads a political campaign can run at once. Alongside these, we think there should be a common, self-regulated approach to fact checking political ads.

Before we explain how that might work, we’d like to mention some things about political ads that steer us towards our conclusion.

The first is scale: there are a lot of political ads. Campaigns typically place more paid ads than they make organic posts. Take the Trump campaign, which posts to Facebook tens of times a day, but buys a thousand or more different ads. This difference in scale is important, and places a significant burden on anyone tasked with checking them.

The second is that, when it comes to fact checking, content isn’t the only thing. Ads are a combination of message, budget, target audience, engagement, timing and more. This means that although many ads say the same thing, they may be subtly (or radically) different in their impact. Because of the scale of the challenge, you’ll likely have to prioritise your efforts, which means engaging with questions along the lines of “What’s worse? An egregious lie that’s narrowly targeted, or a minor falsehood seen by millions?” The answer isn’t clear.

The third is whether political ads contain things that can be fact checked at all. Take the Trump campaign again. While many of its ads are unorthodox and packed with hyperbole (and some breach the rules set down by platforms), it’s often hard to pin anything ‘fact checkable’ on them. This is true of most other general political claims, which take the form of vague promises, general future aspirations or attacks that contain a grain of truth. A fact checker would certainly have some work to do, but it’s quite unclear how busy they would actually be. Certainly not as busy as partisans think.

Given all this, we make no claims that initiating a fact checking programme would be straightforward, or easily scalable to the whole world, or cheap to run, or free of very difficult political judgements.

There is no simple solution, but a combination of independence, collaboration, resilience and a little bravery might get something sustainable off the ground.

With that in mind, we propose these ten ideas:

1. Platforms do no fact checking, but instead create an independent organisation to run the fact checking programme.

2. This organisation also does no fact checking itself, instead focusing on rules and coordination, enrolling and funding fact checking organisations, managing a shared definition of what’s “political”, handling appeals from advertisers, facilitating research and evaluating the performance of the programme itself. For example, it might work to create a common data standard for political ads and advertisers to make cross-platform fact checking possible and would listen to the needs of the fact checkers in doing so, but it would never concern itself directly with issues of content.

3. The fact checking programme would enrol fact checking partners that have the independence, process, transparency and resources to do it well. These could be media organisations, specialist fact checking organisations or civil society actors and non-profits. Independence and balance are particularly important — the programme can’t just have a single political proxy of the government fact checking an opposition party. Better to have no fact checking than a one-sided fact checking market controlled by the state.

4. There should be a marketplace of fact checkers for users to choose from. If the New York Times wants to fact check ads, great. If Fox News does, great. If your local paper wants to keep an eye on what the town council is advertising, that’s great too. Note that many of these organisations will already be involved in the fact checking initiatives created by the platforms to monitor organic content.

5. As a user, I’m offered a choice of who fact checks the ads targeted at me. The platform does this by allowing me to opt out of seeing fact checks from any specific fact checker. We recognise that fact checking cannot be unbiased, so we don’t try to force that. Instead, checks are performed by organisations that hold known political positions with as much transparency as possible. If a user doesn’t want to engage with those positions, it should absolutely be their choice not to see them. (Users should also be able to choose to switch fact checking off entirely, though it should be switched on by default.)

6. Switching fact checking off also switches off social features such as shares and comments. We think it’s likely best these are removed entirely from ads anyway (see our regulatory ideas), but it shouldn’t be the case that you can be targeted with false information, which you then share frictionlessly with hundreds of other people.

7. Fact checks would be prioritised primarily on the basis of the reach of the advertiser and ads. If the same problematic claims are found to be repeated in other ads with less reach, they can be fact checked too (see above for the idea of a common data standard for political ads, making cross-platform checking easier). The second focus would be on those advertisers whose ads receive lots of flags. You’d want to check them more often.

8. Platforms should work to share and implement the best, researched, practices on fact checking. Some ads will need labelling or additional context. Others might get removed. Sometimes a user will be warned after the fact of seeing an ad that was later checked. Sometimes they will not. Sometimes two fact checks will disagree and users will have to make up their own mind about who to believe. Responses and policies will need to change over time as research continues. But better for platforms to work together on these common challenges than alone. There needs to be some transparency here too — both the findings of any research on implementations and the aggregated decisions made by fact checkers should be made available to researchers.

9. Advertisers whose ads are flagged can take it up with the coordinating body, which will seek further input from the fact checker, and can recommend that another participating fact checker give a second opinion. There needs to be enough capacity in the system to do this quickly.

10. The fact checking programme is independently audited regularly (say every two years). Recommendations are developed and folded into the programme’s practices continuously (while remaining sensitive to imposing de facto changes in local election law during campaign periods).
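To make the data-standard and prioritisation ideas above (points 2 and 7) a little more concrete, here is a minimal Python sketch of what a shared, cross-platform ad record and a reach-then-flags checking queue might look like. Every field name, type and weight here is an invented assumption for illustration — no platform exposes this schema today, and the real standard would be negotiated by the coordinating body with the fact checkers.

```python
from dataclasses import dataclass

@dataclass
class PoliticalAd:
    """A hypothetical cross-platform ad record (all fields illustrative)."""
    ad_id: str
    platform: str      # e.g. "facebook", "google"
    advertiser: str
    message: str
    spend_usd: float
    impressions: int   # reach of this specific ad so far
    flags: int = 0     # user reports received

def priority_score(ad: PoliticalAd, advertiser_reach: int) -> float:
    """Rank an ad for checking: advertiser reach first, then the ad's own
    reach, then flag volume. The weights are placeholders, not research."""
    return advertiser_reach + 10 * ad.impressions + 1_000 * ad.flags

def queue_for_checking(ads, reach_by_advertiser):
    """Return ads ordered so the widest-reach, most-flagged come first."""
    return sorted(
        ads,
        key=lambda a: priority_score(a, reach_by_advertiser.get(a.advertiser, 0)),
        reverse=True,
    )
```

Even a toy version like this surfaces the hard questions from earlier: the weights encode an answer to “what’s worse, a narrow lie or a broad half-truth?”, and that answer is a political judgement the coordinating body would have to own and publish.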

As you can see, we think this is complex. Yet further challenges remain (you may have others; please send them our way).

One is institutional. Who does this coordination? How is the independent body constituted?

This is an open question. It probably doesn’t need the star names associated with the Facebook Oversight Board. Some election monitors, some people who have worked on political campaigns, some journalists and some academics would help. It should call on additional expertise when it needs it. It likely needs a structure that embraces a core ‘secretariat’ that can take good practice from country to country, coupled to local expertise for each. It’ll need to adapt continuously, as bad actors constantly look for ways around the rules, or for ways to challenge the rules themselves. After five or so years, it’ll have learned a great deal from a full cycle of elections around the world.

Another challenge is of scope.

Is running a fact checking programme enough? What else could an independent international body with an interest in political ads do? Plenty, we think. It could set standards for ad libraries. It could verify political advertisers. It could grant anonymity for those advertisers who need protection from bad governments. It could advise on how to raise regulatory standards. In sum, it could help the platforms make political advertising more trustworthy and transparent.

Yet another is financial. How does all this get paid for?

Having platforms pay fact checkers directly hasn’t worked reliably. Organisations have joined and left these programmes, with several complaining the work was unsustainable for them. Platforms should look at alternatives such as charging a levy on political ads to pay for the programme. (The levy might have to be quite large: the Facebook Oversight Board has reportedly had $100m invested in it. These things cost a lot of money.)

One further thing to consider is that revenues from the US market will likely have to subsidise the programme in other countries. There are several possible outcomes here. One is that the companies withdraw from all but the US market. Another is that they offer the programme only in financially and politically significant markets (e.g. the EU). Another is that they accept the responsibility of funding a fully fledged, global solution. Yet another is that international organisations step in to pay for it, in the name of protecting democracy.

The final challenge is political will. Can this ever be enough for politicians?

To create a sustainable two-tiered structure — a raised floor to regulate political ads and a uniform ceiling for additional self-regulatory measures — all participants need to find some stability in this arrangement. Many politicians have called on platforms to fact-check political ads, but would react with horror if their own ads were checked and labelled false. This would be unhelpful. If they want fact-checking as much as they claim they do, politicians need to be part of the answer by encouraging a better system into existence and allowing it to operate.

Once again, there’s no simple solution. But by working together, platforms might be able to create a system that encompasses independence, collaboration and resilience. First, though, they need to admit they have a problem, and be brave enough to work together to solve it.

Get involved in our work by downloading the Who Targets Me browser extension!
