How Facebook could publish targeting information and preserve user privacy

Ever since Facebook first started publishing political ad data in 2018, many organisations, including ourselves, have called for them to include information about ad targeting. So far, there’s been no movement on this, with Facebook arguing there’s a privacy risk if this data is combined with data from other parts of Facebook. 

Here’s what (we think) they’re referring to.

An advertiser runs an ad targeted at people who like “Politics”. The targeting information is published, along with the ad itself, in the Facebook Ad Library.

Someone sees the ad, and notices that it was commented on by a friend. By combining these two pieces of information – the targeting and the comment – they now know that Facebook thinks their friend likes Politics. Perhaps they knew that already. Perhaps they didn’t. 
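To make that inference concrete, here's a minimal sketch using entirely made-up data. The field names and values are our illustration, not Facebook's actual Ad Library format:

```python
# Hypothetical example of the inference risk; all data here is invented.

# What a per-ad entry might expose if targeting were published alongside the ad
ad_library_entry = {
    "ad_id": "ad_123",
    "targeting": {"interests": ["Politics"]},
}

# What any viewer of the ad can already see via its social features
visible_commenters = ["your_friend"]

# Combining the two links an inferred interest to an identifiable person
for person in visible_commenters:
    for interest in ad_library_entry["targeting"]["interests"]:
        print(f"Facebook appears to think {person} likes {interest}")
```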

It’s a fairly innocuous example, but it’s not hard to imagine ones that are more revealing (political party membership or some aspect of financial status). After all, social media platforms learn a lot about us from our use of them, some of which we might not want to reveal to strangers, let alone friends. 

That said, we think Facebook can, and should, publish targeting information for political ads. 

The first thing they would have to do, as we’ve argued several times over, is turn off the social features on political ads. If these ads didn’t include lists of people who had commented on or shared them, there’d be no risk of revealing those individuals’ preferences and behaviours.

The second thing would be to publish targeting information at the page or account level. Instead of saying “this ad used this targeting”, Facebook would say “this page used these targeting methods” and list them (demographic, geographic, behavioural, custom and lookalike audiences, use of any data brokers). It’d give users a general, useful sense of the approaches a political campaign is using.
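As a rough illustration (our own sketch, not a real Facebook schema), a page-level disclosure might collapse per-ad targeting into something like this:

```python
# A sketch of page-level disclosure: aggregate per-ad targeting into a single
# list of methods the page has used, with nothing tied to any specific ad.
# The categories and field names are our assumptions, not a real schema.

ads = [
    {"ad_id": "ad_1", "methods": ["demographic", "geographic"]},
    {"ad_id": "ad_2", "methods": ["custom_audience", "lookalike_audience"]},
    {"ad_id": "ad_3", "methods": ["geographic", "behavioural"]},
]

def page_level_summary(page_name, ads):
    """Return the page's targeting methods without any per-ad detail."""
    methods = sorted({method for ad in ads for method in ad["methods"]})
    return {"page": page_name, "targeting_methods_used": methods}

print(page_level_summary("Example Campaign Page", ads))
# -> lists the methods used by the page, without matching any of them to a specific ad
```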

Finally, they should release the targeting information in batches: for example, once a week, or once the advertiser has run more than five new ads. Batching would make it much harder to work out the exact targeting of a specific ad by watching for new ads and observing how the published targeting information changes.
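Something like the following is all that batching rule would need. It's a sketch under our own assumptions about thresholds and structure, not a description of any existing Facebook process:

```python
# A sketch of the batching rule: hold back page-level targeting updates until
# a week has passed or more than five new ads have accumulated, so a single
# new ad can't be matched to a single change in the published information.
from datetime import datetime, timedelta

BATCH_INTERVAL = timedelta(weeks=1)   # assumed release interval
NEW_AD_THRESHOLD = 5                  # assumed ad-count threshold

class TargetingReleaseQueue:
    def __init__(self):
        self.pending_ads = []
        self.last_release = datetime.now()

    def add_ad(self, ad):
        """Queue a new ad's targeting rather than publishing it immediately."""
        self.pending_ads.append(ad)

    def ready_to_release(self):
        waited_a_week = datetime.now() - self.last_release >= BATCH_INTERVAL
        enough_new_ads = len(self.pending_ads) > NEW_AD_THRESHOLD
        return waited_a_week or enough_new_ads

    def release(self):
        """Publish the whole pending batch at once and reset the clock."""
        batch, self.pending_ads = self.pending_ads, []
        self.last_release = datetime.now()
        return batch
```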

By publishing targeting information, platforms could improve the transparency of political campaigns (political targeting shouldn’t be a trade secret) and increase users’ confidence that their data isn’t being misused by campaigners. None of this should come at the expense of better data access for researchers, but it would make it easier for voters and journalists to hold campaigns to account for the way they use ad targeting.

(Caveat: We think this is a pretty simple solution. Perhaps it’s too simple and we’re missing something. We’d love to hear from you if you think there’s a loophole we’ve missed, particularly if you work at Facebook and have thought about this already.)