Disinformation ads, Systemic Risks and the Digital Services Act

In the middle of 2023, in our research of political ads in Germany, we started to come across pages and ads in Meta’s Ad Library with names like “Clever Music Bistro”, “Bold Health Garden” and “Adventurous Dogs Blog”. Each had run a single ad, usually a newspaper-style cartoon showing a European leader, or the US, making a mess of the war in Ukraine or, latterly, of the situation in Israel/Palestine.

At the end of April, the EU announced it would be investigating Meta for potential breaches of the Digital Services Act (DSA). Among a number of issues, the Commission said it would be looking at whether the company had failed to properly moderate political ads, by allowing – or failing to stop – Russia running an extensive disinformation campaign in Germany and France about the ongoing war in Ukraine. The campaign is well known, with a number of researchers uncovering and reporting on aspects of it since the war began.

Recently, AI Forensics published a report on this Russian effort, identifying over 3,000 pages that bought at least one ad on Meta’s services between mid-August 2023 and the end of April 2024, and stating that the campaign had reached up to 38 million accounts in Germany and France. (The report doesn’t provide a broken-out figure for Germany alone, though there are an estimated 45 million Facebook users in Germany and 29 million in France.)

The DSA frowns upon any failure of a Very Large Online Platform (VLOP) to mitigate “systemic risk”. Potential fines run into the billions. In the case of politics and elections, such harm takes the form of “wide dissemination” of illegal content such as threats of violence, extremist content, harassment of politicians or election workers, foreign interference and so on. 

That some sort of disinformation campaign has taken place, and is continuing to, is beyond question. For the purposes of further analysis, we’ll assume the campaign is foreign (we don’t have the investigative skills to follow this thread all the way to the end, but others, such as the German Foreign Ministry, do, and they say it’s Russian). We’ll also take it as written that the campaign is illegal (again, legal minds might have other opinions). These things are important. If the campaign were domestic, people would have questions about motivations and funding, and they’d have the right to argue with the positions being taken in the ads, but banning those positions, or their promotion, would breach free expression rights.

So this leaves the question we’re interested in, and the one that Meta will be judged on – does this campaign pose a “systemic risk” under the Digital Services Act? Is it big and important?

AI Forensics’ analysis finds that the campaign reached up to 38 million accounts with its ads – potentially as many as 51% of all German and French Meta users, if every reached account was unique. This sounds like a lot, but because of the way Meta makes data about ads available, it’s hard to assess the number of ‘unique’ people who were exposed to the campaign, or how often they saw it. One way of getting to a figure is to add together all of the accounts that saw each ad (data that Meta does now provide, under its DSA obligations). That gives you a big number.
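As a back-of-envelope check, the 51% figure can be reproduced from the user estimates cited earlier, under the (generous) assumption that every reached account is unique:

```python
# Upper-bound share of German + French Meta users reached by the campaign,
# assuming all 38M "reached accounts" are unique (no double counting).
reached_accounts = 38_000_000   # AI Forensics estimate, Germany + France
users_germany = 45_000_000      # estimated Facebook users in Germany
users_france = 29_000_000       # estimated Facebook users in France

share = reached_accounts / (users_germany + users_france)
print(f"{share:.0%}")  # → 51%
```

Any double counting (the same account seeing several ads) pushes the true unique-reach figure below this ceiling.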

Unfortunately for Meta’s case, because they don’t really provide any data access for the research community to understand campaigns like this more deeply, it’s also the only number you could arrive at based on the publicly available information.

But the number is too big. The ads mostly targeted the over 50s (of whom there are fewer than 38 million with Meta accounts), and we know that many users see the same ads repeatedly. What we don’t really know is whether some people see a lot of ads from campaigns like this, while others see few or none. Data from Meta’s Ad Library can’t tell us this, which leaves us in the dark about a number of important issues. We know that a disinformation campaign exists and is ongoing. But is it reaching a lot of people a few times, or a few people a lot of times? Who sees it most? Do they notice it, or is it lost in the deluge of other content they’re confronted with as they scroll through their feeds? Is it very persuasive or only minimally so?

Answering these questions would help us decide whether the campaign is significant and poses a systemic risk (and would therefore make Meta sanctionable under the DSA).

Looking at the problem a different way

Since the 2021 German Federal Election, thousands of people in Germany have installed the Who Targets Me browser extension. This software ‘sees’ the ads the user sees and anonymously donates them for research. By seeing what they see, we can use this data to make some estimates of the prevalence of the campaign and people’s exposure to it.

Between August 19th 2023 and April 30th 2024, the period covered by AI Forensics’ investigation, Who Targets Me’s German users (N = 3,849) saw and donated 3.365 million Facebook ads to our database – an average of 875 ads each.

To help users of the extension learn more about the ads they see, Who Targets Me also constantly researches and updates a list of over 9,000 pages that have run political or issue advertising in Germany. In maintaining this dataset, in the middle of 2023, we started to come across pages and ads in Meta’s Ad Library with the aforementioned unusual names. These pages were clearly created automatically, and would typically run a single ad without immediately triggering Meta’s moderation systems. The ads we saw contained cartoons about the war in Ukraine, usually showing Zelensky, Biden, Scholz, Macron or Sunak suffering some misfortune. The general thrust of the ads was that the war is too expensive, and that Ukrainian freedom isn’t worth the price tag for the western countries who support it. Later, the ads started to talk about American support for Israel. Eventually, the ads were caught by Meta (whether by a filter or by automated or human review), taken down and added to the Meta Ad Library as being “political”. This is how we found them – through Meta’s own transparency tools.

Over time, the campaign morphed, probably because Meta got better at detecting it. The old format for the campaign would disappear, and new pages would be created, with names like “Fable canoeing 43”, “Impose decide 07” and “Sophisticated decide 25”. Later, there was another group of pages, using apparently Russian names (albeit with no spaces) like “SaidaZhileikina”, “VeneraMikhailushkina” and “StanislavaNevdakhina”. 

Overall, across these variations, we found just under 350 pages in the Meta Ad Library that used these naming patterns, had run ads and appeared to be part of the same campaign. At the same time, using a different method, AI Forensics compiled a similar list, this time of over 1,400 advertisers. Comparing the two, 200 advertisers were common to both lists.

How prevalent were ads from these campaigns?

Of the 3.365 million ads donated by WTM browser extension users, we found just 185 ads (0.0055% of the total, or around 1 in 18,200) that had been placed by the campaign and were found in our search of Meta’s Ad Library. The list provided by AI Forensics returned 205 ads (0.0061% of the total, or around 1 in 16,400) for the same period.

When we looked at how many Who Targets Me users had been exposed to the campaign, we found that 50 of 3,849 (1.30%) had seen at least one of the ads in our dataset. 69 had seen ads from the AI Forensics dataset (1.79%). 

17 (34%) of browser extension users exposed to the pages we identified saw more than one ad from the campaign. 13 of those (26%) saw three or more. Overall, 0.33% of Who Targets Me users in Germany were exposed to the disinformation campaign three or more times.

For the AI Forensics list, 33 users (48%) were exposed to more than one ad, with 23 (33%) of them seeing three or more. This equates to 0.60% of German Who Targets Me users between August ‘23 and the end of April ‘24.

How does this look when compared to other political advertisers?

Among the largest spenders in Meta’s political ad library during this period were Greenpeace Germany (€640,000) and the European Parliament (€275,000). Who Targets Me users saw 3,375 ads (0.10%) from the former and 2,434 ads (0.072%) from the latter – a prevalence of around 1 in 1,000 ads for Greenpeace and around 1 in 1,400 for the Parliament. That makes their ads between roughly 12 and 18 times more common than those that formed the disinformation campaign.
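The comparison can be reproduced from the ad counts; using both campaign-ad counts (our list and AI Forensics’) gives a range of multiples rather than a single figure:

```python
# How much more common were Greenpeace / European Parliament ads than the
# campaign's ads in the WTM dataset? Both campaign-ad counts (our list: 185;
# AI Forensics' list: 205) are used, giving a range of multiples.
big_advertisers = {"Greenpeace Germany": 3_375, "European Parliament": 2_434}
campaign_ad_counts = (185, 205)

for name, count in big_advertisers.items():
    ratios = [count / c for c in campaign_ad_counts]
    print(f"{name}: {min(ratios):.0f}x to {max(ratios):.0f}x more common")
```

This prints roughly 16x to 18x for Greenpeace and 12x to 13x for the European Parliament.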

When compared to advertising from campaigns by mainstream political groups, users saw 1,146 ads (0.034%) from pages associated with the CDU (N=2055), 1,343 ads (0.04%) from SPD pages (N=2308) and 1,121 ads (0.033%) from those run by the AfD (N=734).

Overall, looking at all advertisers we have associated with political parties or electoral activity in Germany (N=9,377), we find 28,967 ads out of 3.365 million (0.86%) met this definition.

Summing up the numbers 

We find that the disinformation campaign:

  • Formed 0.0055 to 0.0061% of all ads seen (depending on which advertiser list is used)
  • Was seen by 1.30 to 1.79% of Who Targets Me browser extension users in Germany.
  • Extrapolating this across the 45 million Facebook accounts in Germany gives a figure of 580,000 to 800,000 unique German accounts that saw the campaign. 
  • Was around 15 times less prevalent than well-known large advertisers such as Greenpeace or the European Parliament.
  • Made up 0.64% of all recognisable political or electoral advertising (based on our definition and list of advertisers), which in turn was only 0.86% of all Facebook advertising during this period.
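The extrapolation in the bullets above can be sketched as follows (it assumes, unrealistically, that Who Targets Me users are representative of German Facebook users – see the methodological notes at the end):

```python
# Extrapolating WTM exposure rates to all German Facebook accounts.
facebook_accounts_de = 45_000_000
exposure_low = 50 / 3_849    # 1.30% exposed, our advertiser list
exposure_high = 69 / 3_849   # 1.79% exposed, AI Forensics' list

low = exposure_low * facebook_accounts_de    # ≈ 585,000
high = exposure_high * facebook_accounts_de  # ≈ 807,000
print(f"{low:,.0f} to {high:,.0f} accounts")
```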

Based on the data donated by Who Targets Me browser extension users, exposure to the disinformation campaign was rare. 

Obviously there’s a large discrepancy between the 38 million potential accounts figure (around 24 million of whom would be in Germany, with the rest in France) and the fact that only around 1.3-1.8% of Who Targets Me users in Germany were exposed to at least one ad from this disinformation campaign.

Based on the Who Targets Me browser extension data, users who saw the campaign were exposed to an average of 3.7 ads from it. If we take the idea of 24 million ‘exposures’ in Germany and divide by that average, we get a figure of around 6.5 million unique German accounts that saw at least one ad from this campaign.
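A minimal sketch of that estimate, resting on two assumptions: that around 24 million of the 38 million ‘reached accounts’ were German, and that the 3.7 ads-per-exposed-user average from WTM data generalises to all German accounts:

```python
# Rough unique-reach estimate for Germany: total exposures divided by the
# average number of campaign ads seen per exposed user.
exposures_de = 24_000_000     # assumed German share of the 38M figure
ads_per_exposed_user = 3.7    # mean among WTM users who saw the campaign

unique_accounts = exposures_de / ads_per_exposed_user
print(f"{unique_accounts / 1e6:.1f} million unique accounts")  # → 6.5 million
```

If either assumption is off – for example, if heavy users see far more than 3.7 ads – the estimate moves accordingly.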

To go further, we’d need to make further extrapolations based on how people use Meta services. The Who Targets Me extension only gathers data from Facebook.com on a desktop or laptop computer. It doesn’t account for Instagram usage (usually via the app) or for people accessing Facebook on their mobile phone. This could result in more people being exposed than our 1.3-1.8% estimate (i.e. they only saw it on Instagram or on mobile), or it could result in people being more exposed (i.e. seeing more than the average 3.7 ads from this campaign across all services and devices) or, more likely, some of each. Without knowing which of these effects is dominant, it’s hard to say whether the total number of exposed accounts is bigger, smaller or roughly the same.

Conclusion: Did Meta’s moderation (or lack of it) allow something systemically important to happen? 

As we’ve shown, there is more than one way to understand the extent and impact of this disinformation effort.

On one hand, we have a persistent campaign that repeatedly evolved to evade Meta’s attempts to moderate it, racking up millions of views of ads across two large, influential EU countries in the months leading up to a key election. Many will argue this constitutes a systemic failure, poses a risk to democratic integrity and that Meta was negligent in failing to prevent it happening. 

On the other hand, the disinformation campaign appears to be small compared to the number of people on Facebook and to the reach of more ‘normal’ advertisers on Meta’s services. The company’s defence might concentrate on this, arguing that the campaign had its reach and effectiveness limited because it was forced to take unusual and unsustainable measures to keep placing ads and avoid takedowns. Taking this point of view, Meta’s advertising system is exactly that – a system that has a certain tolerance for failure in order to ensure that it can continue to cope with being used by millions of people to run billions of ads each day. The Commission (and many others) would argue that Meta is still profiting from these ads. Meta would likely counter that it spends far more than it makes from messages like these to try and stop them getting through the system, even if that system isn’t always perfect. 

The final thing to consider is whether this campaign has been ‘successful’. Judging whether something poses a “systemic risk” to democracy, particularly during a sensitive period such as the current EU election campaign, implies that we are able to understand whether it has had an effect (or might at some point in the future). While it seems true that public support for the war in Ukraine in Germany has declined over time, attributing that to this campaign would be a stretch, given that the number of people exposed to it is so relatively small. Instead, the length of the war and its attritional nature, coupled with the fact that what is happening on the front lines in eastern Ukraine is no longer on the front pages every day, all make significant contributions to the shift in opinion. If this disinformation campaign has had an effect, it is a small piece of the total – a further nudge in a direction that events were already taking some people. Based on the data available, it’s impossible for us to know how big that nudge was.

The DSA creates new opportunities for regulating platforms. Exactly how those will play out in this early case will help set the tone for future investigations. On this occasion, our research suggests that, when it comes to the largest platforms, open-and-shut cases may be hard to come by, with different presentations of the same data leading to very different conclusions. As a result, it’s difficult to imagine the DSA ever delivering the types of heavy fines and punishments that some expect of it. Doubt will always remain, and that will likely be reflected in the regulatory outcomes.

That said, the new regulation is at the beginning of the journey, not the end.


Methodological considerations and comments:

  • To perform this analysis, we were forced to extrapolate from small numbers, with wide margins of error. As such, the figures are indicative, not accurate. 
  • Users of the Who Targets Me browser extension are self-selecting. Many of them learned about the software from media coverage during the 2021 German Federal Election campaign, particularly after watching Jan Böhmermann’s ZDF Magazin Royale show about a month before the election. They skew left, young, urban and to the west of Germany.
  • We know nothing about the users of the software beyond their age, gender, political leaning (left or right) and approximate geographical location. If we were to extend this research, we might look at whether some groups were more likely to see the campaign. 
  • The extension only looks at Facebook, but ads that formed part of this campaign also ran on Instagram. It does not work on mobile (though we have no reason to believe the ads would have had higher prevalence there).
  • A future, ideal version of this research would recruit a representative panel of users across Germany (and many other countries), and sustain its size and quality over time. We would also survey these users regularly, to understand more about their preferences, the information they see and what recall and persuasive effect campaigns like these have. We don’t have the resources to do this, but ongoing research of this type would be extremely valuable to the disinformation, regulatory, platform accountability and political research communities.
  • Because all of the adverts cost less than €100, Meta doesn’t publish more accurate information about the amounts spent (above €100, Meta provides the figure to the nearest euro). This makes it hard to work out how much the campaign cost, and the true price of the ads it bought. For example, 38 million ad impressions at €5 per thousand impressions would cost €190,000. 24 million impressions (for Germany alone) would cost €120,000. But if we extrapolate from the prices per impression paid by Greenpeace or the European Parliament, we end up with an advertising cost of just under €30,000. The true number will lie somewhere between these extremes.
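The spend bounds in the final note can be computed with a simple helper (the €5 CPM is purely illustrative, not an observed price):

```python
# Bounding the campaign's ad spend from impression counts. CPM (cost per
# 1,000 impressions) is unknown; €5 is an illustrative assumption.
def ad_cost(impressions: int, cpm_eur: float) -> float:
    """Total cost in euros for a given number of impressions at a given CPM."""
    return impressions / 1_000 * cpm_eur

print(ad_cost(38_000_000, 5.0))  # → 190000.0 (Germany + France, at €5 CPM)
print(ad_cost(24_000_000, 5.0))  # → 120000.0 (Germany only, at €5 CPM)
```

A lower effective CPM, as implied by the Greenpeace and European Parliament figures, would pull both estimates down sharply.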