In a modern world saturated by social media, we've grown accustomed to measuring popularity and influence by the number of followers a person has amassed on Twitter. That's why it came as a shock to some conservative political figures last week when they discovered that many of their followers had suddenly vanished. Other Twitter users complained of being locked out of their accounts while the social media network sought their phone numbers to verify that actual people were behind the accounts, according to the Washington Post and other news outlets.
As detailed in a Feb. 21, 2018 blog post by Twitter developer Yoel Roth, the social media network is also making changes to prevent users who control multiple Twitter accounts from using them in unison to post the same content, or to like, retweet or follow in lockstep.
"One of the most common spam violations we see is the use of multiple accounts and the Twitter developer platform to attempt to artificially amplify or inflate the prominence of certain Tweets," Roth explained. "To be clear: Twitter prohibits any attempt to use automation for the purposes of posting or disseminating spam, and such behavior may result in enforcement action."
The tumult was part of an apparent effort by Twitter to rein in automated accounts called bots, which have proliferated on the social media network. A study released in March 2017 by University of Southern California and Indiana University researchers estimated that bots account for between 9 and 15 percent of active Twitter accounts.
Distortion of Public Debate
Beyond just their sheer numbers, bots also pump out tweets, retweets, likes and follows in enormous quantities — amplifying messages and, in some cases, distorting public debate. As former FBI agent and cybersecurity expert Clint Watts told the Senate Intelligence Committee in March 2017, armies of Russian-controlled bots, disguised to appear as if they were Americans from Midwestern swing states, were deployed to spread disinformation during the 2016 presidential election campaign, according to a National Public Radio account of his testimony. More recently, after the Feb. 14, 2018 school shooting that claimed 17 lives in Parkland, Florida, Twitter accounts suspected of being controlled by Russia spewed out tweets in an apparent effort to further inflame the public debate about gun control, as reported by NBC News.
So what are Twitter bots, anyway? Basically, they're Twitter accounts that are run by easy-to-create software apps. "Today almost everyone can create a Twitter bot," Emilio Ferrara, an assistant professor in the computer science department at the University of Southern California, explains in an email. "There are open-source code repositories [that are] one Google search away from anyone interested."
While some bots are just simple scripts, other bots utilize artificial intelligence to perform more sophisticated actions — including acting in ways that can fool someone into thinking there are actual humans at a keyboard. "Advanced bots can even hold short credible exchanges or search for relevant web links to support their points of view," Ferrara says.
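At the simple end of the spectrum Ferrara describes, a bot really is just a short script: a loop that composes a status and hands it to a posting function. The sketch below is illustrative only — the message list is invented, and the `post_status` function is a hypothetical stand-in for a real API call (which a real bot would make through Twitter's API, often via an open-source library); nothing here touches the network.

```python
import random
import time

# A minimal sketch of a simple Twitter bot's structure (illustrative only).
# MESSAGES and post_status() are hypothetical; a real bot would authenticate
# with Twitter's API and call its status-update endpoint instead of printing.

MESSAGES = [
    "Check out this breaking story!",
    "You won't believe what happened today.",
    "Retweet if you agree!",
]

def compose_status(messages, tag="#news"):
    """Pick a canned message and append a hashtag, as a crude bot might."""
    return f"{random.choice(messages)} {tag}"

def post_status(text):
    # Placeholder for the actual API call.
    print(f"[would post] {text}")

def run_bot(iterations=3, delay=0.0):
    # The whole "bot": compose, post, wait, repeat.
    for _ in range(iterations):
        post_status(compose_status(MESSAGES))
        time.sleep(delay)

run_bot()
```

That a working skeleton fits in a couple of dozen lines is exactly why, as Ferrara notes, almost anyone can create one.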
Not All Bots Are Bad Bots
Despite the bad reputation that bots are getting, many of them are benign. There are creative bots such as MoMA Robot, which tweets random images from the collection of the Museum of Modern Art, and Dear Assistant, which answers questions in the style of Siri and Cortana. Humor bot We Didn't Start It grabs info from Google Trends and uses it to write new lyrics for a Billy Joel song. And many bots just provide useful information automatically. SF QuakeBot, for example, tweets U.S. Geological Survey seismic data for the San Francisco area.
"An automated account that is properly identified and labeled can provide many legitimate services and functions," explains David Carroll, an associate professor of media design at the New School in New York.
But bots can also be used to commit fraud and identity theft and to carry out misinformation campaigns. According to Carroll, an intermediate-level web programmer is capable of building a "botnet," in which a legion of bots can be controlled and coordinated. "Scripts are available that automate the creation of Twitter accounts to populate a botnet," he explains. "Labor is involved in creating imposter accounts, which can involve social media identity theft where innocent users have their names, photos, and personal details stolen for use on illegitimate accounts." For those who want to outsource the job, Carroll says that pre-built botnets are available on the dark web.
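The coordination Carroll describes — one controller fanning identical content out through many accounts — can be sketched in a few lines. Everything here is invented for illustration (the `BotAccount` class, the handles, the `amplify` helper); it models the lockstep behavior rather than any real botnet tooling.

```python
# Hypothetical sketch of botnet-style coordination: one controller pushes
# the same tweet through every account it controls. All names are invented
# for illustration; no real accounts or APIs are involved.

class BotAccount:
    def __init__(self, handle):
        self.handle = handle
        self.timeline = []  # tweets this account has "posted"

    def post(self, text):
        self.timeline.append(text)

def amplify(accounts, text):
    """Post identical content from every controlled account in lockstep."""
    for account in accounts:
        account.post(text)

# A small "botnet" of five imposter accounts under one controller.
botnet = [BotAccount(f"user{i:03d}") for i in range(5)]
amplify(botnet, "Everyone is talking about this!")
```

After `amplify` runs, every account carries an identical tweet — the duplicate-content-across-accounts pattern that, as the next section explains, is one of the telltale signals detectors look for.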
What's the solution to the bot problem? Detection programs such as Indiana University's Botometer can scrutinize a Twitter account's activity in an effort to determine whether it's an actual person or a bot.
"Because bots are automated, they generate a recognizable pattern in their meta-data, especially when compared against human accounts," Carroll says. "Researchers have discovered signals such as account creation date, volume and rate of activity, and analysis of duplicate content across accounts that provide clues toward detecting bots."
"Harmful bots should be banned," Ferrara says. "Bot accounts in general should be labeled as such, similarly to what already happens with sponsored content on Twitter."
Gregory Maus, an Indiana University graduate student who has studied bots extensively, said in an email that Twitter's countermeasures have proven surprisingly effective so far, wiping out 90 percent of the 450,000 mercenary bots that his research group was tracking.
"We're glad that they're taking these steps thus far and hope that they keep it up, but I'm uncertain whether they will sustain the focus," Maus says. "Twitter (and other platforms) has had crackdowns before with short-term effects, but some of the bots may creep back in if they don't maintain their efforts."