(This blog post is the product of a series of conversations among EU-based civil society organisations on policy options related to political ads, transparency and microtargeting. Several organisations contributed policy ideas, which this post summarises below.)
There’s a lot of public concern about microtargeting and the threats to individual agency and democracy that it might pose. Campaigns now routinely run thousands or tens of thousands of online adverts simultaneously, each targeting different groups and seeking incremental optimisations that add up to… something.
But what’s often lacking is clear agreement on what microtargeting actually entails, whether its benefits outweigh the risk of harm, what, if anything, to do about it, and, last of all, how that might get done. Let’s try to answer some of those questions.
What is microtargeting in advertising?
There’s no single definition, but we consider it an advertising practice that involves some or all of:
- Targeting small, precise, homogeneous groups based on common factors
- Narrowcasting rather than broadcasting
- Often targeting digital behaviours rather than demographics
- Relying on a dynamic feedback loop that surfaces users’ revealed preferences, not necessarily with their knowledge
Why are we worried about it?
1/ Microtargeting can create a behavioural feedback loop – if you see and react to certain types of content, you’ll likely see more of it. This leads to a number of different potential effects:
- “Filter bubbles” and “echo chambers” where voters live inside narrow, constrained information environments
- Different arguments being shown to different groups, leading to an increased risk of polarisation
- Telling people what they appear to want to hear, rather than a full and balanced overview of the arguments
- Potential voter suppression
- Difficulty of mounting counterspeech
- Difficulty of telling what’s “important” among thousands – or millions – of targeted ads
Any one of the above results in a loss of democratic agency.
2/ The risk of excessive data collection on voters, particularly of privileged, highly personal data.
3/ Individual targeting methods can be problematic, discriminatory, immoral or even illegal.
4/ Use of the technique drives up costs for campaigns, making it harder for new entrants to ‘play the game’. Larger parties and candidates who can afford data scientists will invest in these techniques; smaller campaigns cannot.
5/ There is little incentive for governments who are in power, and who may have benefited from microtargeting-style campaign practices, to scrutinise these methods, let alone legislate against them.
6/ There is a lack of transparency into these practices and, as a result, a lack of research. Many of the reported effects of microtargeting remain unverified.
Why might we not worry about it?
1/ Research might eventually find it to be entirely ineffective.
2/ It may be only minimally effective as a persuasive technique, even if it is effective with committed campaign supporters (such as donors).
3/ “Microtargeting” can help campaigners reach and activate the interested portion of the public. In doing so, campaigns can motivate those people in new ways, to the benefit of democracy.
4/ The winners of campaigns often claim they used superior techniques to win. Such claims tend to overstate the sophistication and impact of their methods, particularly when campaign staff or consultants wish to work on subsequent campaigns at home or abroad.
What should we do?
Option 1: Nothing
Accept that new techniques are part of innovation, and that with the costs and benefits of microtargeting still open to debate, inaction is a reasonable position.
Option 1a: Ask for more data
Benefits/Opportunities/Strengths: We move towards real knowledge about the impact of microtargeting, which practices are harmful and therefore what we might outlaw or restrict, if anything.
Costs/Weaknesses/Threats: Access to targeting data has been hard to come by (to say the least). Most platforms and campaigns are unwilling to share exactly how these practices are implemented, due to competitive secrecy and arguments about user privacy.
Option 2: An outright ban on political microtargeting
Benefits/Opportunities/Strengths: This is a strong, privacy-respecting option that removes any doubt about harms to democracy or individuals from microtargeting.
Costs/Weaknesses/Threats: Advertising companies will fight this, as microtargeting (not just political) is core to their business practices (data collection, user profiling to create audiences, sales to advertisers).
Challenges (common across a number of options) include defining what’s political (or, negatively, what’s not) if a ban is limited to political ads, and the risk of limiting the ability of smaller civil society actors to reach the specific audiences most relevant to them.
Option 3: A limitation on methods
Specifically, this option would require a ban on the use of behavioural and inferred data, custom, uploaded, matched and lookalike audiences, voter files or rolls and so on. This would leave simple targeting on the basis of gender, age and location as the methods available to political advertisers.
Benefits/Opportunities/Strengths: This makes target groups more heterogeneous and diffuse, reduces the risk of data leaks and harms to privacy, and removes the incentive for campaigns and platforms to collect data for political purposes. It would result in simpler geographical or contextual targeting of political ads.
Costs/Weaknesses/Threats: As above, this proposal limits smaller, non-incumbent campaigners’ ability to reach specific audiences. It’s also possible to envisage proxy targeting methods emerging that are worse than the originals. Lastly, this risks being a Facebook-only law: Google has already limited its targeting methods, so the change would bind only one of the two largest players (though it would affect many smaller ones).
Option 4: A user opt-in for behavioural targeting
This option would likely prove a de facto ban on behavioural targeting, as user opt-ins would be relatively rare (the platforms will know this, as the various opt-outs they already offer, both for political ads and under the GDPR, are barely used).
Benefits/Opportunities/Strengths: This is a strong, privacy-respecting option that removes doubt about harms to democracy or individuals from microtargeting. Because few would opt in, there would be little incentive for candidates and parties to profile voters in the current way.
Costs/Weaknesses/Threats: Advertising companies will fight this, as microtargeting (not just political) is core to their business practices (data collection, user profiling, selling access to users). This option also risks harming smaller, non-incumbent organisations. For political ads, it is, again, something of a Facebook-only policy proposal.
Option 5: A limit on the number of ads campaigns can run
An example execution of this policy would be limiting the number of simultaneous distinct ads (i.e. unique combinations of message and targeting) a party can run to 500, a candidate to 50 and a non-party campaigner to 10.
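To make the mechanics concrete, here’s a minimal sketch in Python of how a platform or regulator might count ‘simultaneous distinct ads’ per advertiser and flag anyone over their cap. The names and data structures are our own assumptions, and the caps are simply the example figures above; it’s an illustration of the counting rule, not a proposed implementation.

```python
from dataclasses import dataclass

# Hypothetical caps taken from the example above: party 500, candidate 50, non-party campaigner 10
AD_CAPS = {"party": 500, "candidate": 50, "non_party_campaigner": 10}

@dataclass(frozen=True)
class Ad:
    advertiser_id: str
    advertiser_type: str    # "party", "candidate" or "non_party_campaigner"
    message: str            # the creative (text or asset identifier)
    targeting: frozenset    # targeting criteria, e.g. frozenset({"age:18-24", "region:NL"})

def distinct_ad_counts(ads):
    """Count simultaneous distinct ads (unique message + targeting combinations) per advertiser."""
    seen = {}
    for ad in ads:
        seen.setdefault(ad.advertiser_id, set()).add((ad.message, ad.targeting))
    return {advertiser: len(combos) for advertiser, combos in seen.items()}

def over_cap(ads):
    """Return advertisers whose live distinct-ad count exceeds the cap for their type."""
    types = {ad.advertiser_id: ad.advertiser_type for ad in ads}
    return {
        advertiser: count
        for advertiser, count in distinct_ad_counts(ads).items()
        if count > AD_CAPS[types[advertiser]]
    }
```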
Benefits/Opportunities/Strengths: Increases practical transparency for political ads – having fewer of them makes it easier to ‘read’ their intentions. Fewer ads also mean fewer homogeneous targeting groups and less incentive to collect private data to build them. Smaller organisations can still build relationships with supporters using tools like custom audiences, so we believe this is the most favourable option for grassroots campaigns. We also see it as being unlikely to harm free expression.
Costs/Weaknesses/Threats: The technical challenge of defining the thresholds (which might be captured by larger players, to their advantage), along with managing attempts to break the spirit of the rule by creating new pages and entities to run more ads.
The political challenges of getting anywhere
Landing on a single policy proposal to deal with political microtargeting will not be straightforward. Here are some areas in which we’ll need to find agreement.
Getting agreement from civil society: Do we need a shared definition of ‘microtargeting’ and the relevant harms? Could we arrive at one? Our view is that we don’t, but we do need to understand that, within civil society, organisations place different values on privacy, transparency, free expression, democratic reform and more. These positions can be incompatible with each other, leading different actors to favour different options. Where you stand depends on where you sit.
Getting agreement from politicians: Can we go beyond the self-interested view of those who need to be re-elected (and will use the tools available to them to do so)? We think that will be difficult, but perhaps easier if you set a timeframe for migrating away from these practices – the best we’ve heard is “these changes will come in after the next election”.
Getting agreement from the platforms: Some of the proposals above confront the revenue streams and business models of technology companies (of all sizes, but particularly the largest ones). Is it possible to get different companies on the same page? Can they be persuaded of their responsibility to “do the right thing”? Which option is least likely to ‘bleed’ into other aspects of their business? (E.g. accepting Option 5, with some rollback if research finds net benefits.)
Getting past general competitiveness objections: By imposing costs on business, you create regulation that benefits incumbents. Practically, therefore, it likely makes sense to set a threshold (a percentage of the online ad market) above which companies must comply. Alongside this, ideas like ‘universal ad libraries’ built on common technological standards would allow any new market players to adopt political ad transparency and relevant policies from the outset.
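To illustrate what a ‘universal ad library’ built on a common standard might contain, here’s a minimal sketch of a shared record format that any platform, large or small, could publish for each political ad. The field names and example values are our assumptions for illustration only; they do not reflect an existing standard or any platform’s actual schema.

```python
from dataclasses import dataclass, asdict
from datetime import date
import json

@dataclass
class AdLibraryRecord:
    # Hypothetical fields for a shared political-ad transparency record.
    platform: str             # where the ad ran
    advertiser_name: str      # the paying entity, as verified by the platform
    ad_creative: str          # the ad text, or a link to the creative asset
    targeting_criteria: list  # human-readable targeting, e.g. ["age:25-34", "region:Bavaria"]
    spend_range_eur: tuple    # (min, max) disclosed spend
    impressions_range: tuple  # (min, max) disclosed impressions
    first_shown: date
    last_shown: date

# An example record, published in the same format regardless of which platform ran the ad
record = AdLibraryRecord(
    platform="ExamplePlatform",
    advertiser_name="Example Party",
    ad_creative="Vote for cleaner air on Sunday",
    targeting_criteria=["age:25-34", "region:Bavaria"],
    spend_range_eur=(1000, 5000),
    impressions_range=(50000, 100000),
    first_shown=date(2021, 3, 1),
    last_shown=date(2021, 3, 14),
)
print(json.dumps(asdict(record), default=str, indent=2))
```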
Conclusion: Is there a ‘best of all worlds’ option?
The Who Targets Me position is that Option 5, “Limiting the number of ads a campaign can run”, is the most promising. It satisfies objectives such as preserving free expression, making accountability easier and allowing companies to accept political advertising if they choose to (without changing their products or business model), alongside reducing the motivation for campaigns to collect sensitive voter data to target ads.
In balancing these goals, it feels politically possible, being not just a compromise solution but something close to a best of all worlds.