How could an algorithm spot trolls on the Internet?

If you’ve never run into a troll online, count yourself lucky. (Of course, they don’t look like this … to the best of our knowledge.)
© morozena/iStockphoto

It can be fun to engage other users in conversation on the websites you frequent, but comment sections can get out of hand to an upsetting degree. For an increasingly hostile online environment, replete with vulgarity, insults and sometimes even threats, we often have a certain type of personality to thank: the Internet troll.

A troll is a person who baits online comment sections with posts designed to get a rise out of people or otherwise disrupt an online community. These extremely negative users turn what should be a fount of information and entertainment into something gloomier and more antagonistic.

The anonymity afforded by the Internet allows people (who might comport themselves civilly in person) to shed their inhibitions and engage in antisocial behavior [sources: Academic Earth, Breeze]. Researchers at the University of Manitoba, the University of Winnipeg and the University of British Columbia in Canada conducted surveys and found strong correlations between people who appeared to enjoy online trolling and higher scores on personality tests for sadism, Machiavellianism and psychopathy, with sadism the strongest of all [sources: Buckels, Golbeck, Mooney].

Researchers have even found that abusive language in the comment section of an article can alter readers' perceptions of the content, skewing them toward a more polarized view of the topic or making them doubt the article's quality [sources: Brossard, Mooney, Applebaum, Felder]. This prompted Popular Science to shut off commenting on most of its online articles in 2013 [source: LaBarre].

To minimize the effects of trolling, some sites hire moderators to keep an eye on comment threads and delete offending posts or ban offending users. But that takes time and money that not all sites can or will spend, and there are far more trolls than moderators. Others have suggested removing anonymity, though there is evidence that doing so keeps most users from commenting at all [source: Ingram].

But researchers at Stanford and Cornell have come up with another potential tool for combating trolls: early detection.

An Algorithm to Spot Trolls

Researchers at Stanford and Cornell (with funding from a Stanford Graduate Fellowship and a Google Faculty Research Award) conducted a study to see if they could use quantitative measures to detect antisocial users. They gained access to user comments hosted by Disqus for the sites Breitbart.com, CNN.com and IGN.com, spanning 18 months from March 2012 through August 2013. The data consisted of around 1.75 million users (nearly 49,000 of them banned), 1.26 million threads and 39 million posts (nearly 838,000 of them deleted and 1.35 million of them reported). They narrowed the banned user data down to around 12,000 users who joined the sites after March 2012, had at least five posts and were banned permanently for something other than spamming URLs [source: Cheng].
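The paper's data pipeline isn't published, but the filtering step is easy to picture. Here's a minimal sketch in Python of how such a cut might look; the record fields (join_date, post_count, ban_reason) are hypothetical stand-ins, not names from the actual dataset.

```python
from datetime import date

STUDY_START = date(2012, 3, 1)

# Hypothetical user records; the field names are illustrative only.
users = [
    {"id": 1, "join_date": date(2012, 6, 1), "post_count": 37,
     "banned_permanently": True, "ban_reason": "abuse"},
    {"id": 2, "join_date": date(2011, 1, 5), "post_count": 400,
     "banned_permanently": True, "ban_reason": "spam"},
    {"id": 3, "join_date": date(2012, 9, 9), "post_count": 3,
     "banned_permanently": True, "ban_reason": "abuse"},
]

# Keep permanently banned users who joined after the study window opened,
# wrote at least five posts, and weren't banned merely for spamming URLs.
banned_sample = [
    u for u in users
    if u["banned_permanently"]
    and u["join_date"] >= STUDY_START
    and u["post_count"] >= 5
    and u["ban_reason"] != "spam"
]

print([u["id"] for u in banned_sample])  # -> [1]
```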

The scientists captured data including post content, user activity, community response and moderator actions. They compared the messages of users who were never banned with those of users who were permanently banned, and looked at how the banned users' behavior changed over their time on the sites. The team found that the posts of future banned trolls tended to have the following traits (a rough sketch of computing a couple of these signals follows the list):

  • poor spelling and grammar
  • more profanity
  • more negative words
  • less conciliatory or tentative language
  • lower scores on several readability tests (including the Automated Readability Index), which worsened as the ban approached
  • use of different jargon and function words from non-banned community members
  • more digression from the topic
  • a much higher number of comment posts than the average user
  • a tendency to concentrate their replies in individual threads
  • a tendency to provoke more replies from others
  • worsening behavior over time, with more and more of their posts deleted in the run-up to the ban
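
The study's feature-extraction code isn't public, but a rough Python sketch of how two of these signals might be computed looks like this; the profanity list is a tiny placeholder, while automated_readability_index implements the standard ARI formula the researchers mention.

```python
import re

PROFANITY = {"idiot", "moron"}  # placeholder list, not from the study

def automated_readability_index(text):
    """Standard ARI: roughly the U.S. grade level needed to read the text."""
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    chars = sum(len(w) for w in words)
    return 4.71 * (chars / len(words)) + 0.5 * (len(words) / sentences) - 21.43

def post_features(text):
    """Summarize one post as a small feature dictionary."""
    words = [w.lower() for w in re.findall(r"[A-Za-z']+", text)]
    return {
        "profanity_rate": sum(w in PROFANITY for w in words) / max(1, len(words)),
        "readability": automated_readability_index(text),
        "word_count": len(words),
    }

print(post_features("You idiot. This article is garbage!"))
```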

On CNN.com, the average user wrote around 22 posts during the 18-month period, whereas the future banned users wrote around 264 posts before being banned [sources: Cheng, Collins]. The community also grew less tolerant of a troll over time.

Using the quantifiable results, the researchers were able to develop an algorithm (a set of steps used to solve a problem or perform a task) that could predict who would later be banned, with 80 percent accuracy, from as few as five comments. With 10 posts, accuracy rose to 82 percent, and performance peaked there; a user's earliest posts were also the most predictive of an eventual ban. The team achieved a similar level of accuracy across all three online communities. Post deletion by site moderators turned out to be the single most informative signal, but aggregating all the data improved accuracy further [source: Cheng].
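The published write-up doesn't come with runnable code, so the following is only a plausible reconstruction: a scikit-learn classifier trained on per-user summaries of early posts, with synthetic stand-in data in place of the real Disqus features.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-in data: each row summarizes a user's first five posts
# (e.g. mean profanity rate, mean readability, posts deleted, replies drawn).
# Label 1 means the user was later banned.
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.8, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

Logistic regression here is a stand-in for whatever classifier the researchers actually used; the point is the shape of the pipeline, not the specific model.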

Is an Algorithm Viable, or Enough?

The roadblock with using an algorithm to identify trolls is that it could target innocent users who exhibit some of the behaviors that are also common flags for troublemakers. For now, human moderation is still needed.
© galaxy67/iStockphoto

Using an algorithm like this to automate troll banning is unlikely in the immediate future. It's in the academic research phase and not available as a usable software package. And at 80 percent accuracy, roughly one in five of the users it flagged would be innocent. The researchers also found that being overly harsh or quick to ban or censor users tended to increase antisocial behavior, and even worsened the writing of posters whose text quality started out higher [source: Cheng]. There are also subtler styles of trolling, such as posting vexingly naive questions, that this algorithm wouldn't be able to detect.

Therefore it's still important for a human moderator to review the situation before anyone is banned. But a tool like this could help moderators spend their time more efficiently by flagging potentially disruptive users ahead of time. The findings can be used to develop subsequent studies and future troll-detection and moderation tools.

In the meantime, we can keep trying other measures. Some have suggested instituting user pseudonyms that stay the same across multiple sites [source: Schwartz]. Sites like Slashdot and Reddit have peer moderation systems in place. xkcd and 4chan have even experimented with a bot called ROBOT9000, which mutes users when they say something that's been said before, allowing only original content [source: Munroe].
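ROBOT9000's core rule, mute anyone who repeats something that's already been said, is simple enough to sketch. This toy Python version (not Munroe's actual IRC bot) normalizes each message and checks it against everything seen so far:

```python
import time

seen = set()        # every normalized message posted so far
muted_until = {}    # user -> timestamp when the mute expires
MUTE_SECONDS = 120  # toy fixed penalty; the real bot's mute policy differed

def normalize(message):
    # Collapse case and whitespace so trivial variations count as repeats.
    return " ".join(message.lower().split())

def handle_message(user, message, now=None):
    now = time.time() if now is None else now
    if muted_until.get(user, 0) > now:
        return "blocked: still muted"
    key = normalize(message)
    if key in seen:
        muted_until[user] = now + MUTE_SECONDS
        return "muted for repetition"
    seen.add(key)
    return "allowed"

print(handle_message("alice", "First!"))  # allowed
print(handle_message("bob", "first!  "))  # muted for repetition
```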

Google has made changes to YouTube to try to get a handle on its well-known trolling problem, including giving users more leeway and better tools to moderate their videos' comment sections, moving more relevant comments and comments from users' Google+ friends to the top, and adding public and private commenting options [sources: Dredge, YouTube].

We, as online users, can also do things to diminish trolling, like making use of the voting and reporting features on online communities. The Stanford and Cornell researchers also found that other users engaging trolls with harsh feedback exacerbated the behavior. This reinforces a commonly repeated Internet maxim: "Don't feed the trolls!"

Author's Note: How could an algorithm spot trolls on the Internet?

Trolling is a major problem, at least for people who like to keep their spirits up and their blood pressure down when they're surfing the web. The very first time I watched a friend interact with other people in a virtual world online, a vile, unprovoked threat was hurled at her within minutes. This is probably what led me to eschew online conversations and stick to lurking for years. As with anything involving human interaction, there's no easy fix, and there will always be jerks making their own fun by trying to spoil ours. But I was happy to learn that research is being done into potential solutions. Any decrease in the mass of negativity online will be a plus. Until then, though, I may give up on article comment sections. I enjoy my sanity too much.

Sources

  • Academic Earth. "The Psychology of the Internet Troll." (June 2, 2015) http://academicearth.org/electives/psychology-internet-troll/
  • Amazon. "Amazon Mechanical Turk." (May 31, 2015) https://www.mturk.com/mturk/welcome
  • Applebaum, Anne. "The Trolls Among Us." Slate. Nov. 28, 2014. (June 2, 2015) http://www.slate.com/articles/news_and_politics/foreigners/2014/11/internet_trolls_pose_a_threat_internet_commentators_shouldn_t_be_anonymous.html
  • BBC. "What is an algorithm?" BBC Bitesize. http://www.bbc.co.uk/guides/z3whpv4
  • Breeze, Mez. "The Problems With Anonymous Trolls and Accountability in the Digital Age." Next Web. Oct. 27, 2012. (June 2, 2015) http://thenextweb.com/insider/2012/10/27/the-problems-with-anonymous-trolls-and-accountability-in-the-digital-age/
  • Brossard, Dominique and Dietram A. Scheufele. "This Story Stinks." New York Times. March 2, 2013. (June 2, 2015) http://www.nytimes.com/2013/03/03/opinion/sunday/this-story-stinks.html?_r=1
  • Buckels, Erin E. et al. "Trolls Just Want to Have Fun." Personality and Individual Differences. September 2014, Volume 67, Pages 97-102. (June 1, 2015)
  • Cheng, Justin et al. "Antisocial Behavior in Online Discussion Communities." Cornell University Library. April 2, 2015. (May 25, 2015) http://arxiv.org/pdf/1504.00680v1.pdf
  • Collins, Katie. "'Troll Hunting' Algorithm Could Make Web a Better Place." Wired. April 14, 2015. (May 25, 2015) http://www.wired.co.uk/news/archive/2015-04/14/google-algorithm-predicts-trolls-antisocial-behaviour
  • Dredge, Stuart. "YouTube Aims to Tame the Trolls With Changes to Its Comments Section." Guardian. Nov. 7, 2013. (June 2, 2015) http://www.theguardian.com/technology/2013/nov/07/youtube-comments-trolls-moderation-google
  • Felder, Adam. "How Comments Shape Perceptions of Sites' Quality — and Affect Traffic." Atlantic. June 5, 2014. (June 2, 2015) http://www.theatlantic.com/technology/archive/2014/06/internet-comments-and-perceptions-of-quality/371862/
  • Fobar, Rachel. "Researchers Develop a Troll-hunting Algorithm." Popular Science. April 13, 2015. (June 2, 2015) http://www.popsci.com/researchers-develop-troll-hunting-algorithm
  • Golbeck, Jennifer. "Internet Trolls Are Narcissists, Psychopaths, and Sadists." Psychology Today. Sept. 18, 2014. (June 1, 2015) https://www.psychologytoday.com/blog/your-online-secrets/201409/internet-trolls-are-narcissists-psychopaths-and-sadists
  • Hern, Alex. "Algorithm 'Identifies Future Trolls From Just Five Posts.'" Guardian. April 17, 2015. (June 2, 2015) http://www.theguardian.com/technology/2015/apr/17/algorithm-identifies-future-trolls-from-just-five-posts
  • Ingram, Mathew. "Research Shows That if You Remove Anonymity, You Won't Hear From Most of Your Readers." GigaOm. Aug. 27, 2014. (June 2, 2015) https://gigaom.com/2014/08/27/research-shows-that-if-you-remove-anonymity-you-wont-hear-from-most-of-your-readers/
  • Khan Academy. "Intro to Algorithms — What is an algorithm and why should I care?" (June 1, 2015) https://www.khanacademy.org/computing/computer-science/algorithms/intro-to-algorithms/v/what-are-algorithms
  • LaBarre, Suzanne. "Why We're Shutting Off Our Comments." Popular Science. Sept. 24, 2013. (June 2, 2015) http://www.popsci.com/science/article/2013-09/why-were-shutting-our-comments
  • Mooney, Chris. "Internet Trolls Really Are Horrible People." Slate. Feb. 14, 2014. (June 2, 2015) http://www.slate.com/articles/health_and_science/climate_desk/2014/02/internet_troll_personality_study_machiavellianism_narcissism_psychopathy.html
  • Mooney, Chris. "The Science of Why Comment Trolls Suck." Mother Jones. Jan. 10, 2013. (June 2, 2015) http://www.motherjones.com/environment/2013/01/you-idiot-course-trolls-comments-make-you-believe-science-less
  • Munroe, Randall P. "ROBOT9000 and #xkcd-signal: Attacking Noise in Chat." xkcd. Jan. 14, 2008. (June 2, 2015) http://blog.xkcd.com/2008/01/14/robot9000-and-xkcd-signal-attacking-noise-in-chat/
  • Pullen, John Patrick. "Science Says You Should Ignore Internet Trolls." Time. April 20, 2015. (May 31, 2015) http://time.com/3827683/internet-troll-research/
  • Schwartz, Mattathias. "The Trolls Among Us." New York Times. Aug. 3, 2008. (June 1, 2015) http://www.nytimes.com/2008/08/03/magazine/03trolls-t.html
  • Vaas, Lisa. "New Algorithm Could Auto-squash Trolls." Naked Security. April 15, 2015. (May 25, 2015) https://nakedsecurity.sophos.com/2015/04/15/new-algorithm-could-auto-squash-trolls/
  • YouTube Official Blog. "We Hear You: Better Commenting Coming to YouTube." Sept. 24, 2013. (June 2, 2015) http://youtube-global.blogspot.co.uk/2013/09/youtube-new-comments.html