A new study finds 20 percent of election tweets were generated by bots.
It would hardly be a stretch to call this presidential race the Twitter election. The social media site has played a critical role in the campaigns of both Trump and Clinton as a medium for voter engagement and personal embarrassment, their feeds an endless stream of 140-character quotables that are endlessly debated by the Twittersphere's armchair analysts.
If Twitter were any indication, it would seem that the art of political debate is alive and well in the US. But according to a new study coming out of the University of Southern California, nearly one-fifth of all election-related tweets are being created by lines of code masquerading as human, otherwise known as bots.
As detailed in a paper published today in First Monday, the USC researchers analyzed 20 million election-related tweets created between September 16 and October 21. They found that 19 percent of the sample (3.8 million tweets) was produced by pro-Trump or pro-Clinton bots. Those tweets came from roughly 400,000 bot accounts, about 15 percent of the 2.8 million Twitter users in the sample.
Interestingly, the USC study found that Trump's robo-supporters almost always tweeted positive things about the candidate, whereas Clinton's robo-army tweeted a mixture of praise and criticism. Another study on election bots, released earlier this week by Oxford University's Project on Computational Propaganda, found that pro-Trump bots were generating seven times as many tweets as the pro-Clinton bots.
Moreover, the Oxford study found that between the first and second debate, a full one-third of pro-Trump tweets and one-fifth of pro-Clinton tweets were generated by bots.
According to USC computer scientists Emilio Ferrara and Alessandro Bessi, the intentional distortion of online election discussion could endanger the integrity of the election by swaying the opinions of real voters who might mistake the bots for people.
"The presence of these bots can affect the dynamics of the political discussion in three tangible ways," Ferrara and Bessi write in the paper. "First, influence can be redistributed across suspicious accounts that may be operated with malicious purposes. Second, the political conversation can become further polarized. Third, spreading of misinformation and unverified information can be enhanced."
The bots being studied are remarkably sophisticated, capable of generating tweets, retweeting, commenting on posts and even engaging in human-like conversations with real people (although some of this behavior may come from accounts run by humans who also use automation).
"The bots are not engaging or interacting with the candidates," Ferrara told Motherboard. "They are mostly producing pointers to external news on the web and they are also trying to interact with actual individuals who are posting about the election."
According to Ferrara, determining who is behind the creation of the bots is "the million dollar question." The candidates' campaigns are likely suspects, of course, and it wouldn't be the first time politicians have leveraged fake Twitter accounts to shape public perception of a candidate: notorious election hacker Andrés Sepúlveda used social media to rig elections in Latin America, and in places like China, armies of internet commenters are used to spam message boards with pro-party posts.
"I don't think anyone from the campaigns is paying for this," Ferrara said. "I think these people are smart enough to not be doing that. The second theory is the opposite, that someone from a candidate's campaign is paying for bots for the other campaign. If someone finds out, then it's not such good publicity for the other candidate—but what if people don't find out?"
Ferrara's personal theory is that the bots are created either by a third party with a stake in a candidate's success or by a state actor trying to disrupt the election. In either case, the use of bots in a US election exists in a legal grey area (although it definitely violates Twitter's terms of service), but that doesn't mean it's not harmful.
"People are getting pretty good at not interacting via replies with the bot, but what we found is that humans are not very good at discriminating between bots and humans when we retrieve information," Ferrara said. "People retweet a lot of bot generated content and that possibly contributes to misinformation."