Researchers at Stanford and Cornell (with funding from a Stanford Graduate Fellowship and a Google Faculty Research Award) conducted a study to see if they could use quantitative measures to detect antisocial users. They gained access to user comments hosted by Disqus for the sites Breitbart.com, CNN.com and IGN.com, spanning 18 months from March 2012 through August 2013. The data consisted of around 1.75 million users (nearly 49,000 of them banned), 1.26 million threads and 39 million posts (nearly 838,000 of them deleted and 1.35 million of them reported). They narrowed the banned user data down to around 12,000 users who joined the sites after March 2012, had at least five posts and were banned permanently for something other than spamming URLs [source: Cheng].
The scientists captured data including post content, user activity, community response and moderator actions. They compared the messages of users who were never banned with those of users who were permanently banned, and tracked how the banned users' behavior changed over their time in the community. The team found that the posts of future banned trolls tended to have the following traits:
- poor spelling and grammar
- more profanity
- more negative words
- less conciliatory or tentative language
- lower scores on several readability tests (including the Automated Readability Index), which worsened as the time of banning approached
- use of different jargon and function words from non-banned community members
- more digression from the topic
- a much higher number of comment posts than the average user
- a tendency to concentrate their replies in individual threads
- a tendency to provoke more replies from others
- worse behavior over time resulting in their posts being increasingly deleted before banning
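One of the signals above, the Automated Readability Index, has a simple published formula: 4.71 × (characters/words) + 0.5 × (words/sentences) − 21.43, which approximates the U.S. grade level needed to understand a text. A minimal Python sketch follows; the word and sentence splitting here is deliberately naive, and real readability tools handle edge cases (abbreviations, punctuation, numerals) more carefully.

```python
import re

def automated_readability_index(text):
    """Approximate ARI: 4.71*(chars/words) + 0.5*(words/sentences) - 21.43.

    Characters count only letters and digits; sentences are naively
    counted as runs of '.', '!' or '?' terminators (minimum of 1).
    """
    words = text.split()
    chars = sum(1 for ch in text if ch.isalnum())
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    return 4.71 * (chars / len(words)) + 0.5 * (len(words) / sentences) - 21.43

print(round(automated_readability_index(
    "The quick brown fox jumps over the lazy dog."), 2))  # 1.39
```

Lower scores indicate simpler text; the study observed trolls' scores dropping further as banning approached.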
On CNN.com, the average user posted around 22 times during the 18-month period, whereas the future banned users posted around 264 times before being banned [sources: Cheng, Collins]. The community also grew less tolerant of a troll over time.
Using the quantifiable results, the researchers developed an algorithm (a set of steps used to solve a problem or perform a task) that needed as few as five comments to predict, with 80 percent accuracy, who would later be banned. Accuracy rose to 82 percent with 10 posts, where performance peaked. A user's earlier posts were better predictors of a later ban than more recent ones. The team achieved similar accuracy across all three online communities. Of the signals studied, post deletion by site moderators turned out to be the most informative, but combining all the data yielded better accuracy [source: Cheng].
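The paper does not prescribe a specific classifier here, but the idea of scoring a user's first few posts on features like deletion rate, posting volume and readability can be sketched with a plain logistic regression. Everything below is illustrative: the feature ranges and synthetic "users" are invented for the example, not drawn from the study's data.

```python
import math
import random

random.seed(0)

# Hypothetical per-user feature vector, loosely mirroring the study's
# signals: (fraction of posts deleted, posts per day, readability score).
# The generated data is synthetic and for illustration only.
def make_user(troll):
    if troll:
        return [random.uniform(0.2, 0.6), random.uniform(5, 20), random.uniform(0, 5)], 1
    return [random.uniform(0.0, 0.05), random.uniform(0.1, 2), random.uniform(6, 14)], 0

data = [make_user(i % 2 == 0) for i in range(200)]

def sigmoid(z):
    z = max(-60.0, min(60.0, z))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-z))

# Plain logistic regression trained by stochastic gradient descent
# (a stand-in, not the paper's exact model).
weights = [0.0, 0.0, 0.0]
bias = 0.0
lr = 0.01
for _ in range(500):
    for x, y in data:
        p = sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)
        err = p - y
        weights = [w - lr * err * xi for w, xi in zip(weights, x)]
        bias -= lr * err

def predict(x):
    """Probability that a user with these features will be banned."""
    return sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)

# A high deletion rate and posting volume should yield a high probability.
print(round(predict([0.4, 12, 2]), 2))    # troll-like profile
print(round(predict([0.01, 0.5, 10]), 2)) # typical-user profile
```

With well-separated feature distributions like these, even a simple linear model ranks the troll-like profile far above the typical one, which is consistent with the study's finding that deletion activity alone was highly informative.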