
In the days leading up to the UK’s general election, young people looking for love online encountered a whole new kind of Tinder nightmare. A group of young activists built a Tinder chatbot to co-opt profiles and persuade swing voters to support Labour. The bot accounts sent between 30,000 and 40,000 messages to targeted 18- to 25-year-olds in battleground constituencies like Dudley North, which Labour ended up winning by only 22 votes.

The tactic was frankly ingenious. Tinder is a dating app on which users swipe right to indicate attraction and interest in a potential partner. If both people swipe right on each other’s profiles, a dialogue box opens so they can chat privately. After meeting their crowdfunding goal of just £500, the team built a tool that took over and operated the accounts of recruited Tinder users. By upgrading the profiles to Tinder Premium, the team was able to place bots in any contested constituency across the UK. Once planted, the bots swiped right on all users in an attempt to rack up the largest possible number of matches and ask about their voting intentions.
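The loop just described can be sketched in a few lines. To be clear, this is an illustrative outline only: Tinder has no official public API, and `TinderClient`, its methods, and the opening message are hypothetical stand-ins, not the campaigners’ actual code.

```python
# Illustrative sketch of the bot loop described above. All names and
# behaviour here are hypothetical: Tinder has no official public API,
# and this is not the campaigners' actual code.

class TinderClient:
    """Hypothetical client driving one recruited user's account."""

    def __init__(self, nearby_profiles):
        self.nearby_profiles = nearby_profiles  # users the account can swipe on
        self.matches = []                       # users who also swiped right

    def swipe_right(self, profile):
        # For the sketch, a profile marked 'swipes_back' becomes a match.
        if profile.get("swipes_back"):
            self.matches.append(profile)


def run_bot(client, opener):
    """Swipe right on everyone nearby, then message every match."""
    for profile in client.nearby_profiles:
        client.swipe_right(profile)
    # Each match gets the same opening question about voting intentions.
    return [(match["name"], opener) for match in client.matches]
```

The point of the sketch is how little machinery is required: indiscriminate swiping plus a canned opener is the entire acquisition funnel.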



Yara Rodrigues Fowler and Charlotte Goodman, the two campaigners leading the informal GE Tinder Bot team, explained in a recent opinion piece that if “the user was voting for a right-wing party or was unsure, the bot sent a list of Labour policies, or a criticism of Tory policies,” with the aim “of getting voters to help oust the Conservative government.”
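The reply logic Fowler and Goodman describe amounts to a single branch on the match’s stated intention. A minimal sketch, in which the message text and the keyword list are assumptions of mine rather than the bot’s actual wording:

```python
# Hypothetical message text and keyword list -- the article describes only
# the branching logic, not the exact wording or triggers the bots used.
LABOUR_PITCH = "Here's a list of Labour policies (or a criticism of Tory ones)..."
RIGHT_OR_UNSURE = {"conservative", "tory", "ukip", "unsure", "undecided"}

def choose_reply(stated_intention):
    """Pitch Labour to right-leaning or unsure voters; otherwise stay quiet."""
    if stated_intention.strip().lower() in RIGHT_OR_UNSURE:
        return LABOUR_PITCH
    return None  # matches already voting Labour got no persuasion message
```

Trivial as it is, this branch is the entire “persuasion” layer: everything else in the operation was acquisition and targeting.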


Pieces in major media outlets like the New York Times and BBC have applauded these digital canvassers for their ingenuity and civic service. But upon closer inspection, the project reveals itself to be ethically dubious and problematic on a number of levels. How would these same outlets respond if such tactics were used to support the Tories? And what does this mean for the use of bots and other political algorithms in the future?



The activists maintain that the project was meant to foster democratic engagement. But screenshots of the bots’ activity expose a harsher reality. Images of conversations between real users and these bots, posted on i-D, Mashable, and Fowler and Goodman’s public Twitter accounts, show that the bots did not identify themselves as automated accounts, instead posing as the user whose profile they had taken over. While conducting research for this story, we discovered that a number of our friends living in Oxford had interacted with the bot in the lead-up to the election and had no idea it was not a real person.



It should be obvious to anyone who has ever had to seek approval from an ethics board that this was an egregious ethical violation. Sending out automated reminders to vote would be one thing; actively trying to convince people to vote for a certain party under fraudulent pretenses is invasive and sets a disturbing precedent.

Because they are funded by advertising and personal data, social media platforms feature specific design elements built to monopolise the attention of their users. Tinder’s matching algorithm, for instance, is designed on the basis of classical gambling principles that increase emotional investment and draw users into the platform. As Goodman explains in i-D, their bot was built on the assumption that youth targeted over Tinder would be more likely to respond to notifications from matches, given that matches suggest high-value attraction or interest. This attention-grabbing ecosystem, combined with the intimate nature of the app, creates a dangerous space for automation and deception.

Political bots can have either beneficial or harmful applications: they can fulfil playful, artistic, and accountability functions, but they can also help spread hate speech or disinformation. Our team at the Oxford Internet Institute, which studies the impact of bots on public and political life, has in recent research suggested that a vital future policy issue will concern ways of promoting the positive effects of bots while limiting their manipulative capabilities.


One laudable aspect of the Tinder Bot stunt is that it exposes the growing capacity for young, diverse, tech-savvy communities to self-organise and achieve political change through code. However, for this movement to be sustainable, we need transparent, community-based processes for determining whether these tools can be used to strengthen democracy, and if so, how.


For inspiration, there are examples of algorithmic interventions that resemble Fowler and Goodman’s project, only with much more transparency and respect for users. An example is the Voices app, which provides users in the US with the contact information of all of their local representatives, enabling users to contact representatives by phone or email directly through the app.


Social media companies and politicians cannot write this case off as just another example of some rogue twenty-somethings playing with software. And we shouldn’t be distracted by their naïveté and good intentions without serious discussion about what this project means for the vulnerability of democracy.

Consider that a few campaigners managed to pull this off with only 500 crowd-sourced pounds. Any group in the world could similarly start using Tinder to target youth anywhere, for whatever purpose they wished. Consider what would happen if political consultancies, armed with bottomless advertising budgets, were to develop even more sophisticated Tinderbots.

As it stands, there is little to prevent political actors from deploying bots, not just in future elections but also in daily life. If you can believe it, it is not technically illegal to use bots to interfere with political processes. We already know through interviews detailed in our recent study of political bots in the US that leading political consultants view digital campaigning as a ‘wild west’ where anything goes. And our project’s research provides further evidence that bots have become an increasingly common tool used in elections around the world.


Most concerning is the fact that the Tinder Bot team is tacitly suggesting the use of such tactics in other countries, such as the United States, as a way to “take back the White House”. To be sure, there is a temptation on the Left to fight back against allegations of right-wing digital manipulation with equivalent algorithmic force. But whether these tactics are used by the Left or Right, let's not kid ourselves and pretend that their deceptive nature isn't fundamentally anti-democratic.

Online environments are fostering the growth of deceptive political practices, and it does not bode well for society if resorting to these kinds of tactics becomes the norm. We must develop solutions to the ways in which social media platforms wear down our social and psychological immune system, cultivating weaknesses that politicians and citizens can and do exploit. We are in the midst of a globally expanding bot war, and it’s time to get serious about it.

Robert Gorwa is a graduate student at the Oxford Internet Institute, University of Oxford. Douglas Guilbeault is a doctoral student at the Annenberg School for Communication, University of Pennsylvania. Both Rob and Doug conduct research with the ERC-funded Project on Computational Propaganda, based at the Oxford Internet Institute.