Technology and data are changing the way elections work, but campaigns and their consultants spend too little time debating whether the new techniques they’re adopting are good for democracy.
I was at a conference for political consultants last week. Those present worked on polling and political messaging all over the world. This was before the Cambridge Analytica story had broken, but the topic of ‘polarisation’ was front and centre, and was seen as a bad thing by all present. However, only a couple of speakers suggested that those in the room might be partially to blame for it.
In a career as a consultant working on political campaigns, you win some, you lose some (my own record: 1 win, 1 loss). In the end, though, no-one wants to hire a company with a history of losing. And so consultants look high and low for advantages to help them win – for something they can have that their opponent doesn’t.
The primary goal (for a campaign at least) is to put together a constituency of voters that adds up to 51% (of the vote, in Parliament, in the Electoral College – whatever holds power). A secondary goal is to make the electorate the ‘right size’. If you need 51 out of 100 voters to win a majority, but only have 50 – or 30 – why not try and encourage some people to stay at home?
To put together 51%, a campaign might need this demographic group, plus this region, plus those who support a particular issue.
To make the electorate the right size, it tells some other groups (who would never support it) that their opponent is a bad person with terrible positions on the key issues of our time.
Historically, those coalitions were usually understood as being ‘left’ or ‘right’ or ‘liberal’ and ‘conservative’, but are now changing, with some theorising that today they resemble ‘forward’ and ‘back’ or ‘open’ and ‘closed’. What helps a campaign successfully build the coalition it needs?
There are two intertwined tendencies at play in the search for new techniques:
- How do I get more data about voters?
- How do I use technology to get the right message in front of the right people, accurately and cheaply?
If you’re unconcerned about polarisation, it doesn’t really matter what your 51% looks like. Some people might argue that it’s an ‘unstable coalition of voters’ and that it’s ‘riven with contradictions’, but such contradictions are simply a feature of our politically post-ideological age.
If, on the other hand, you are somewhat worried about living in a polarised and divided society, you really do need to consider adding a third tendency. It’s a more moral one: should we do this?
Recently, at another conference, I talked about an American political consultant who offers potential clients a combination of data and tech to target people with ads while they are queuing to vote. He identifies the polling places, puts a geo-fence around them, defines an audience and a relevant message, and places bids for ads so that people see them while looking at their phones, waiting their turn. It’s very similar to a product that Facebook offers retailers. (Google and Twitter also offer geotargeted ad products.)
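Neither the consultant nor the ad platforms publish their matching logic, but the core mechanism of a geo-fence is simple: a distance check between a device’s reported location and each fenced point. A minimal sketch in Python (the polling-place coordinates and the 150-metre radius are invented for illustration):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in metres."""
    r = 6_371_000  # mean Earth radius in metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def in_geofence(device, polling_places, radius_m=150):
    """True if the device's reported location falls within radius_m of
    any fenced polling place - i.e. the device is eligible for an ad bid."""
    lat, lon = device
    return any(haversine_m(lat, lon, p_lat, p_lon) <= radius_m
               for p_lat, p_lon in polling_places)
```

In a real ad exchange this check runs at bid time, in milliseconds, against every device that opens an app inside the fence – which is what makes it cheap to buy a queue of voters as an audience.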
I asked the room (of academics and citizen-led projects like ours) if this felt like a legitimate use of targeted political advertising. No-one thought it did (again, this was before the CA story broke, which makes the question feel like an innocent one from another time).
From a consultant’s perspective though, this sounds like a smart way to campaign. It marries together a commonly held view of voters (short attention spans, generally poor recall of your campaign’s message) with optimal timing (as they’re about to vote) and data (who wouldn’t run a ‘data-driven’ political campaign these days?).
Despite the conference’s attendees feeling the technique was ‘off’, in most cases it would be quite legal. And where it wasn’t (e.g. in one of the countries with a campaigning moratorium on polling day), proving it would take specific knowledge on the part of a concerned voter (of electoral law, of taking a screenshot of the ad, of reporting it and so on) and the ability to act on the part of a regulator or the police (‘What do I do with this screenshot? What does it mean?’).
Even if it were proven, to my knowledge, there are zero examples of election results being overturned on the basis of ‘just a few ads’ (as the campaign’s lawyers would surely argue).
A more extreme example of whether a campaign should use a specific technology is Alibaba’s “LuBan”. It’s an AI that can generate, place and optimise 8,000 different banner ads per second. A campaign could feed it some parameters, and off it would go, trying to win the election for you.
Alibaba claimed a 100% increase in ROI for one of its campaigns thanks to LuBan’s work. But should a political party use tools like it? It would save employing a legion of graphic designers to make ads. At the same time, though, it will produce ads you haven’t seen, let alone approved. Can you understand how it chooses to create and optimise ads? Should you trust it with your campaign, even if it increases your chances of victory?
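LuBan’s internals aren’t public, but the ‘optimise’ half of any generate-and-optimise system reduces to an explore/exploit loop over ad variants, run without human review of individual ads. An illustrative epsilon-greedy sketch (the ad IDs and click statistics are invented):

```python
import random

def pick_ad(stats, epsilon=0.1, rng=random):
    """Epsilon-greedy choice among ad variants.
    stats maps ad_id -> (impressions, clicks).
    Note: the system, not a human, decides which ad runs next."""
    if rng.random() < epsilon or all(n == 0 for n, _ in stats.values()):
        return rng.choice(list(stats))  # explore: try a random variant
    # exploit: serve the variant with the highest observed click-through rate
    return max(stats, key=lambda a: stats[a][1] / max(stats[a][0], 1))

def record(stats, ad_id, clicked):
    """Update the impression/click counts after serving an ad."""
    n, c = stats[ad_id]
    stats[ad_id] = (n + 1, c + int(clicked))
```

Run across thousands of machine-generated variants per second, a loop like this is precisely why no human ever sees, let alone approves, each ad.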
Microsoft’s AI-chatbot Tay turned viciously racist and sexist within a few hours because its developers didn’t foresee that people would try to break it. This was deeply embarrassing for the company, but what if tools like it find the perfect pitch for your dog whistle campaign? What would they do if you sent them out to find the 51% for your campaign? Do political campaigns get embarrassed by victory? Not often, despite the unparalleled polarisation this would likely cause.
Ultimately, we currently have no idea what the impact of such tools will be, because we don’t understand the fallibility of the technology (or, more accurately, the combination of human fallibilities behind the technology) and the dimensions it would affect. Ideally, we’ll never find out.
The central point is getting political campaigns and consultants to ask the question – should we do this? – and to judge their answers against some shared criteria – for example, what does a healthy democracy look like? – rather than racing to the ethical bottom in search of advantages.
A lot of writing on these issues seems to end here, pointing out things we should be fearful of, but with a “what are we supposed to do about it?” shrug.
So I won’t stop here. Instead, I’ll leave you with some ideas:
- The platforms can treat political content differently. For example, should political campaigns have access to the newest products designed for businesses? This involves grey areas and difficult decisions that the tech platforms have tried to avoid getting drawn into so far. It involves the companies thinking about democratic values, and genuinely holding these to be more important than their own bottom line. If they don’t, regulation is coming for them (it very likely already is). For all the companies’ wealth and power, states have more.
- Regulators could do a much better job of understanding the advantages political campaigns seek for themselves, make judgements as to what’s reasonable and issue guidance (“We advise against the targeting of ads at queues of people waiting to vote”). Technology is developing faster than law, so it would be interesting to see an independent electoral regulator offering practical advice guided by a principle of fairness and social responsibility. If campaigns choose not to follow their advice, traditional accountability mechanisms (e.g. getting asked tough questions by journalists) would kick in (as has happened, in the end, with Cambridge Analytica).
- Political campaigns and consultants can work together to behave ethically and transparently. “We are aware of this technique, but have chosen not to use it” they might say. In the US, campaigns sometimes propose a ‘clean campaign pledge’, whereby they stay positive in their messages, cap spending, reject corporate or outside donations and so on. What does the data and technology version of that look like? Accepting advertising transparency? Questioning the integrity of their vendors (again, we’re looking at you, Cambridge Analytica)? Rejecting assistance from AI? Not using deepfakes? Note that we’ve written to the UK political parties asking them for the first of these (and we await their response).
- Voters (you!) can install Who Targets Me and help find out what’s going on, at least with social media advertising. Anyone with a computer can play a role in increasing transparency, and there should be many more projects like ours to help create it. Any citizen can also start to talk to their representatives about these issues. It will be hard to persuade politicians that they should think carefully before accepting new uses of technology and data in their campaigns. Those who hold power have usually benefited from them. But significant public concern about the way in which democratic principles are being undermined by a headlong rush to adopt new and poorly understood campaign techniques could see a shift in that view.
- Everyone can think about how a 21st (or even a 22nd) century democracy should operate. Having this conversation regularly (as opposed to every 200 years, after a revolution) would mean we can update our view as to what’s good and useful, and what’s not, as technology and society evolve. This is big stuff, but there is plenty to suggest that we, as modern societies, would be a whole lot better at it than our forefathers.
None of this was on the agenda at the political consultants’ conference. Maybe next time.