Last week, the EU Parliament backed an amendment to the Digital Services Act that banned the use of “special category” data “for the purpose of displaying advertising”. If the ban is enacted (the other arms of the EU must agree first), no digital platform or service will be allowed to sell ads targeted at people based on their racial or ethnic origin, political opinions, religious or philosophical beliefs, trade union membership, health status or sexuality.
Reading the reports, you would think that vast swathes of online advertising were being made illegal, and that the “surveillance advertising” business model had been struck a mortal blow.
You would be wrong.
In the Facebook advertising data we looked at, around 0.2-0.5% of ads would be affected by the ban*. The economic effect on Facebook will be negligible.
It’s not really anyone’s fault for not knowing what impact the measure will have. The targeting methods offered by platforms and used by advertisers are still incredibly opaque to all but the most interested observers. But we are interested, so we downloaded a sample of 19,679 Facebook ad impressions captured by EU users of Who Targets Me’s browser extension on October 19th last year. All ads in the sample used “interest” based targeting (per our data, such ads make up around 20-25% of all Facebook ads).
We then worked our way through all 19,679 ads line-by-line. Twice.
For our first pass, we took a “wide” view of an interest that might fall under “special category” data. For example, we included ads targeted at people interested in “Political and Social Themes”, “Christian Music” or “The European Union”.
Of the approximately 20,000 ads, we found 508 (2.58%) using special category targeting, as broadly defined.
For our second pass, we took a narrower view and only looked at ads that targeted people interested in specific political parties, religions (the Catholic Church, “Jesus” and “Wiccanism” all appeared) and ads targeted on the basis of sexuality (such as “LGBT”). We found no ads targeted at trade union members or on the basis of health information.
This narrower view produced just 156 ads (0.79%), from 29 advertisers (out of 3,792 in the entire sample). And since interest-targeted ads make up only around 20-25% of all Facebook ads, that 0.79% translates to less than a quarter of a percent (roughly 0.2%) of all Facebook ads using targeting covered by the special category data ban.
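The arithmetic behind these figures is simple enough to check directly. A quick sketch (the 20-25% interest-ad share is our own estimate, quoted above, not a published Facebook figure):

```python
# Sanity-check the percentages from the Who Targets Me sample.
total_ads = 19_679       # interest-targeted impressions in the sample
broad_matches = 508      # "wide" definition of special category targeting
narrow_matches = 156     # "narrow" definition

broad_share = broad_matches / total_ads    # share of interest-targeted ads
narrow_share = narrow_matches / total_ads

print(f"broad:  {broad_share:.2%}")        # ~2.58%
print(f"narrow: {narrow_share:.2%}")       # ~0.79%

# Interest-targeted ads are roughly 20-25% of all Facebook ads (per our
# data), so scale the narrow share to estimate the impact on all ads.
for interest_share in (0.20, 0.25):
    estimate = narrow_share * interest_share
    print(f"all-ads estimate at {interest_share:.0%} share: {estimate:.2%}")
```

At either end of the 20-25% range, the estimate lands at about 0.16-0.20% of all ads, which is where the headline figure comes from.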
This is probably why Facebook seemed perfectly happy to announce they would no longer allow this data to be used for ad targeting globally (not just in the EU) several months back. Their ban (coincidentally) came into force last week.
Now, we absolutely recognise that special category data can be (and is) used in bad, problematic ways. But in our sample – of a random day in the life of EU users of Who Targets Me – we saw nothing fitting that bill.
What we did see was an ad for L-Mag, a magazine for lesbians, targeting people interested in “feminism, gender studies and genderqueer”. There were ads for Campact, a left-leaning online petition site targeting people who supported a left-leaning German political party. A surrogacy agency targeted people interested in “LGBT”. There was a shop selling bulk dry goods targeted at people interested in the Green Party (and veganism). A USB stick company was targeting ads at people who liked a right-wing political party. Weird perhaps, but a problem in itself?
Under the proposed ban (and under Facebook’s recent one), ads like these would no longer be possible. (You might argue they should never have been, though Article 9 of the GDPR doesn’t ban the use of such data when it “relates to personal data which are manifestly made public by the data subject”. It does seem arguable that posting and liking things on social media is “public”, though it’s an open question whether every user would agree.)
Are these ads so egregious that they need to be wiped off the face of the internet? Are the harms caused by their discriminatory evil twins so great that we should never risk the use of such data for targeting purposes?
We don’t think so. Based on this snapshot of data, the Parliament’s ban was a step too far.
Instead, there are two better ways of dealing with the risks posed.
The first thing to understand is that advertising is always a combination of message and targeting. Clearly a message can, in itself, be discriminatory. But it’s hard to conceive of standalone targeting that is. The EU Parliament seems to have got this mixed up. Rather than banning targeting methods, it should have focused on content moderation. Platforms certainly shouldn’t accept discriminatory ads, and to make sure they don’t, they should be required to assess the content of ads that use special category data more carefully. Though this would sometimes be difficult (there are always edge cases), it’s far from impossible. You just have to add an extra layer of content moderation**.
Second, many people are obviously concerned about the ways in which ads are targeted. Surveys frequently report this. The strange black box in which Facebook’s targeting interests are created isn’t something we can easily peer inside. Sometimes the box is eerily accurate and makes us feel uncomfortable. At other times, the box is so off the mark as to be a joke. Rather than banning targeting methods, the legislation should have focused on the way targeting options are generated in the first place. Do users have sufficient control over them? Can they easily choose not to be targeted on specific criteria? Do we know what data is being used to create the criteria in the first place? The answer to all three questions is no.
Bringing these two ideas together points to a smarter alternative regulation. This would require sufficient moderation of ads that use special category data to ensure no harmful discrimination occurs, while opening up the systems that profile us to independent audit and straightforward user control.
Unfortunately, it’s probably too late to point this out. The Parliament has something to celebrate and Facebook has complied before it was even asked to.
If the amendment is ultimately enacted, many discriminatory ads will never see the light of day. But at the same time, LGBT rights organisations will find it harder to campaign. Efforts to increase the number of people who vote will struggle to reach under-represented groups. Workers will find it more difficult to take on bad companies.
In their haste to land a blow on the “surveillance advertising business model”, parliamentarians missed the chance to do something more…targeted. They won the battle, but at what cost?
* Unfortunately, other data isn’t available. No platform currently makes ad targeting transparent. That said, this was just a quick fishing trip into Facebook data to see what we could, or couldn’t, see. Our sample is fairly small. Most people who use our software are quite left-wing. The dataset was from just one day. And we don’t know how common these techniques are on Google or YouTube, Twitter, TikTok, the thousands of different ad networks or anywhere else.
** It’s quite possible that Facebook’s decision to ban the use of sensitive categories came from a desire to avoid having to do further moderation, which would cost more money. It also lets them avoid PR risk and maybe even get a PR win, at little to no business cost.