


Bots, bots, bots.

Bots have generated quite a bit of conversation lately. Publications including The Atlantic and The New York Times have documented the expansive spread and negative effects of bots. Even Twitter’s CEO, Jack Dorsey, acknowledges the problems they pose for the social media giant. But many people still wonder, “What actually is a bot?”

In general terms, we can define bots as bits of automated software that are designed to perform some task that would be tedious for a human being.

There are many different types of bots. Helper bots work under the hood and are barely noticeable. Even if you have never heard of them, you have most likely benefited from these friendly workers. They labor behind the scenes on Facebook to update your news feed, find a good match on Tinder, provide relevant search engine suggestions, and even play your favorite song. By automating mundane informational tasks, they improve the overall user experience. However, other types of bots are far from helpful.
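To make the idea concrete, here is a minimal sketch of the kind of helper bot described above. The posts, field names, and scoring rule are all invented for illustration; real feed-ranking systems are far more sophisticated, but the principle of automating a mundane informational task is the same.

```python
# A toy "helper bot" that automates one mundane task: ranking posts
# for a news feed. All data and the scoring rule are hypothetical.

def rank_feed(posts):
    """Order posts by a simple engagement score (likes + 2 * shares)."""
    return sorted(posts, key=lambda p: p["likes"] + 2 * p["shares"], reverse=True)

feed = [
    {"title": "Concert photos", "likes": 10, "shares": 1},
    {"title": "Local news",     "likes": 4,  "shares": 8},
    {"title": "Recipe video",   "likes": 7,  "shares": 2},
]

for post in rank_feed(feed):
    print(post["title"])
```

A script like this runs silently in the background, which is why most users never notice the helper bots working on their behalf.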

Deceptive bots also perform tedious tasks, but with one crucial difference: they pose as legitimate human users in order to convince the rest of us that they are authentic people simply making their way through the digital world.

These bots have emerged as highly effective tools for disseminating propaganda, artificially inflating popularity metrics, and steering the flow of online conversations. They can disrupt the spread of information, manipulate public discussion, and create the illusion of consensus. Actors, chefs, reality stars, musicians, and influencers purchase bots to promote their brands and inflate their follower counts. Unethical advertisers infect forums and social media feeds with computer-generated messages praising certain products. Politicians use bots to appear more popular, to spread political slogans, and to contain the spread of unflattering news.

In all of these instances, the fact that bots appear to be real human beings creates the illusion that a large number of people feel the same way about something. Elsewhere on this site, we discuss the “bandwagon effect,” in which the propagandist argues “everyone else is doing it, and so should you.” Bots are a terrific way of creating the illusion that many people are riding on the bandwagon at the same time. When certain topics or tags are trending online, we assume that many people are interested in those issues. If a political candidate appears to be followed by thousands of our peers, we might conclude that their ideas are more valid.

However, as computer scientist Emilio Ferrara points out, it is sometimes “nearly impossible or extremely hard to tell if a conversation is being driven by bots.”1 For this reason, we should always take online polls and lists of trending topics with an enormous grain of salt. Today, it might be useful to repurpose the popular phrase “Don’t judge a book by its cover” to “Don’t judge a person by their number of followers.”

Deceptive bots and fake accounts are often used to generate money and popularity for their controllers. In 2014, criminals used a small army of Twitter bots to pump up the share price of a shady company called Cynk Technology Corp. Despite the fact that the company had no product, no assets, and only one employee, the bots inflated the price of Cynk stock from $0.10 per share to $20 per share, bringing the total worth of the company to $5 billion.2

Budding vloggers on YouTube who are struggling to get views will purchase fake accounts rather than allow their followings to grow organically. Similarly, celebrities on Twitter, Facebook, and Instagram have purchased fake followers in order to amplify their brands and to earn advertising revenue. In a fascinating 2018 exposé of the fake-follower industry, The New York Times explained that “an influencer with 100,000 followers might earn an average of $2,000 for a promotional tweet, while an influencer with a million followers might earn $20,000.”3

These technologies are also common in the world of electoral politics. When a candidate runs for public office, she usually employs people to help spread her message and manage her campaign. Rather than sending volunteers door to door to reach voters, campaigns can let bots take on this role by sharing programmed messages with authentic users on social media platforms. Some members of campaign teams allow bots to automate messages from their accounts. By programming the bots to share scheduled messages, campaigners are able to spread messages to their existing followers.
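The scheduled-messaging idea described above can be sketched in a few lines. The schedule, the messages, and the lookup function are all hypothetical; a real campaign bot would hand the result to a platform's posting API.

```python
# A toy campaign bot that emits pre-programmed posts at set hours.
# The schedule and messages are invented for illustration.
from datetime import datetime

SCHEDULE = {
    9:  "Good morning! Early voting starts today.",
    12: "Join our lunchtime town hall livestream.",
    18: "Polls close at 8 PM -- make a plan to vote!",
}

def due_message(now):
    """Return the message scheduled for this hour, if any."""
    return SCHEDULE.get(now.hour)

msg = due_message(datetime(2018, 11, 6, 12, 0))
if msg:
    print(msg)  # in a real bot, this would be posted to a platform
```

Nothing here is deceptive in itself; the ethical line is crossed only when such an account pretends to be a human user.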

It’s important to note that there is a role for automated messaging in legitimate political campaigns. The crucial question is: Does this campaign bot pretend that it is a legitimate human user? If a campaign bot is deceiving users about its artificial nature, one has to wonder what other things the campaign is lying about.

Lastly, bots can be used to hide unflattering information about a candidate. Imagine a world where you could cover up any embarrassing thing that you have done… even that awkward interaction three years ago that still haunts you at night. As regular citizens, most of us do not have hordes of journalists analyzing our every move. But, because public figures are people too, it is likely that they have done some embarrassing things that have been documented by the press. Politicians cannot erase these blunders from everyone’s minds, but they can use bots to divert our attention elsewhere. By making certain stories trend, bots ensure that more users will see those stories, burying information that might be harmful to their candidate.

While malicious bots can be used in many noxious ways, we should remember that not all bots are bad. Bots are easy to create, and they can be used to automate many pesky tasks. Some bots clearly identify themselves as bots and do not try to manipulate people. For example, @nice_tips_bot tweets helpful messages and advice to its followers, while the @censusAmericans bot uses data from past censuses to tell short stories about anonymous Americans. On a subreddit devoted to the band the Grateful Dead, u/Herbibot posts links to set lists and archived recordings for shows mentioned in the forum.

There are even several subreddits, such as “BotsRights” and “BotsScrewingUp,” that document bot shenanigans and advocate for their rights.

How can you protect yourself from deceptive bots? Although it is not always easy to know whether an account is being operated by a legitimate human being, there are certain steps you can take when attempting to judge the authenticity of a source. Elsewhere on this site, you can learn how to recognize common bot characteristics. The Observatory on Social Media (OSoMe) has also created a web-based tool called Botometer that will help you figure out whether a particular Twitter account is behaving like a bot.
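Tools like Botometer score accounts using many machine-learning features. As a much simpler illustration of the underlying idea, here is a toy heuristic that flags accounts whose posting intervals are suspiciously regular, since humans rarely post like clockwork. The threshold and sample timestamps are invented for illustration and are not how Botometer actually works.

```python
# A toy bot-likeness check: humans post in irregular bursts, while
# naive bots often post on a fixed timer. Threshold is hypothetical.
from statistics import pstdev

def looks_automated(post_times, max_jitter=5.0):
    """Return True if gaps between posts (in seconds) barely vary."""
    gaps = [b - a for a, b in zip(post_times, post_times[1:])]
    return len(gaps) >= 2 and pstdev(gaps) < max_jitter

clockwork = [0, 600, 1200, 1800, 2400]   # exactly every 10 minutes
human     = [0, 840, 1100, 4000, 9200]   # irregular bursts

print(looks_automated(clockwork))  # True
print(looks_automated(human))      # False
```

A single heuristic like this is easy to evade, which is why real detection tools combine hundreds of signals, from posting cadence to follower networks to language patterns.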


1 Andrew Good (2016, July 8) “We’re in a digital world filled with lots of social bots,” University of Southern California News.

2 Seth Fiegerman (2014, July 10) “The curious case of Cynk, an abandoned tech company now worth $5 billion,” Mashable.

3 Nicholas Confessore, Gabriel J. X. Dance, Richard Harris and Mark Hansen (2018, January 27) “The follower factory,” The New York Times.
