
How could an algorithm spot trolls on the Internet?


Is an algorithm viable, or enough?
The roadblock to using an algorithm to identify trolls is that it could target innocent users who exhibit some of the same behaviors that flag troublemakers. For now, human moderation is still needed.
© galaxy67/iStockphoto

Using an algorithm like this to automate troll banning is unlikely in the immediate future. It's still in the academic research phase and isn't available as a usable software package. And at 80 percent accuracy, one in five of the users it flagged would be innocent. The researchers found that being overly harsh or quick to ban or censor users tended to increase antisocial behavior, and it even worsened the writing of posters who initially had higher text quality [source: Cheng]. There are also other, subtler types of trolling, such as posting vexingly naive questions, that this algorithm wouldn't be able to detect.
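To see why a detector like this is a screening tool rather than a ban button, consider a toy version. This is a minimal sketch with made-up features loosely inspired by the kinds of signals the study examined (post deletions, text quality); the feature names, weights and threshold are invented for illustration, not the researchers' actual model:

```python
def troll_score(user):
    """Combine a few warning signs into a single score between 0 and 1.
    The weights here are illustrative assumptions."""
    score = 0.0
    score += 0.4 * user["deleted_post_ratio"]    # share of posts removed by moderators
    score += 0.3 * (1.0 - user["text_quality"])  # low readability of posts
    score += 0.3 * user["provoked_reply_rate"]   # posts that draw heated responses
    return score

def flag_users(users, threshold=0.6):
    """Return candidates for *human* review, not an automatic ban list."""
    return [u for u in users if troll_score(u) >= threshold]

users = [
    {"name": "alice",   "deleted_post_ratio": 0.05, "text_quality": 0.9, "provoked_reply_rate": 0.1},
    {"name": "troll42", "deleted_post_ratio": 0.60, "text_quality": 0.3, "provoked_reply_rate": 0.7},
]
print([u["name"] for u in flag_users(users)])  # -> ['troll42']
```

Even a well-tuned cutoff like this will misflag some ordinary users, which is why the output belongs in a moderation queue rather than wired to a ban command.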

Therefore it's still important for a human moderator to review the situation before anyone is banned. But a tool like this could help moderators spend their time more efficiently by flagging potentially disruptive users ahead of time, as sketched below. The findings can also inform follow-up studies and future troll-detection and moderation tools.
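One way to picture that division of labor: the tool ranks its candidates, and humans work through the list from the top. This continues the hypothetical sketch above (troll_score and flag_users are the invented helpers from that example):

```python
def review_queue(users):
    """Order flagged accounts so moderators see the most suspicious
    first; a human still makes the final call on every one."""
    return sorted(flag_users(users), key=troll_score, reverse=True)

for user in review_queue(users):
    print(f"review {user['name']} (score {troll_score(user):.2f})")
```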

In the meantime, we can keep trying other measures. Some have suggested instituting user pseudonyms that stay the same across multiple sites [source: Schwartz]. Sites like Slashdot and Reddit have peer moderation systems in place. The sites xkcd and 4chan have even experimented with a bot called ROBOT9000 that mutes users when they say something that's already been said, allowing only original content [source: Munroe].
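The core of the ROBOT9000 idea fits in a few lines: remember everything that has been said, and mute anyone who repeats it. The normalization rule and mute handling below are illustrative guesses, not the actual bot's logic:

```python
seen_messages = set()  # every message anyone has sent, normalized
muted_users = set()

def normalize(text):
    """Collapse trivial variation so 'Hello!!' and 'hello' count as repeats."""
    return "".join(ch for ch in text.lower() if ch.isalnum() or ch.isspace()).strip()

def handle_message(user, text):
    """Allow a message through only if nobody has said it before."""
    key = normalize(text)
    if key in seen_messages:
        muted_users.add(user)  # the real bot used escalating timeouts
        return False           # blocked: not original content
    seen_messages.add(key)
    return True

print(handle_message("a", "Hello, world!"))  # True  -- first occurrence
print(handle_message("b", "hello world"))    # False -- repeat, so 'b' is muted
```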

Google has made changes to YouTube to try to get a handle on its well-known trolling problem: giving users more leeway and tools to moderate their videos' comment sections, moving more relevant comments and comments from users' Google+ friends to the top, and adding public and private commenting options [sources: Dredge, YouTube].

We, as online users, can also do our part to diminish trolling, like using the voting and reporting features in online communities. The Stanford and Cornell researchers also found that other users engaging trolls with harsh feedback exacerbated the behavior, which reinforces a commonly repeated Internet maxim: "Don't feed the trolls!"

