New Algorithm Designed to Help Spot and Deal with Internet Trolls

May 07, 2015

A team of researchers in the United States has developed a means of predicting whether certain online commentators will end up being banned from a discussion community.

Aggressive, doctrinaire, and exhibiting poor spelling and grammar, obnoxious ‘trolls’ are a well-known phenomenon on many discussion websites. These people, who are adept at turning a constructive debate into a war of words and sending the more courteous commentators fleeing from the forum, are a real scourge for any news or blogging site or online discussion platform. Many such sites are forced to make substantial efforts to stop trolls from annoying community members and disrupting the dialogue. Some, such as the Popular Science site, have opted for a radical solution: they simply no longer accept comments. Others rely on the skills of conversation moderators, whose lives might be about to become somewhat easier following the publication of a study by Stanford and Cornell universities.

Funded by Internet giant Google, the report, entitled ‘Antisocial Behavior in Online Discussion Communities’, compares antisocial users, or ‘Future Banned Users’ (FBUs), with more cooperative commenters dubbed ‘Never Banned Users’ (NBUs). "We were hoping on the one hand to help moderators with their work, and on the other to find out what drives some Internet users to behave in this way," explained Cristian Danescu-Niculescu-Mizil, an Assistant Professor in the Department of Information Science at Cornell University, who co-authored the report.


Analysing comments helps identify Internet trolls before they become a threat to the existence of the community

80% accuracy in spotting trolls 

Over a period of eighteen months the research team dissected data gathered from 40 million comments posted by 1.7 million online visitors to three leading US websites: major news site CNN, news and opinion platform Breitbart, and entertainment website IGN. This lengthy study allowed them to identify some characteristics of Future Banned Users, i.e. trolls-in-the-making. FBUs stood out first of all in their use of language: poor grammar and odd spelling, which made their comments harder to read than those of NBUs, plus the frequently profane or insulting formulations that are the troll trademark. Trolls also tend to take a different approach to discussion. Whereas mainstream users’ posts tend to address the general topic, trolls often home in on one or more discussion participants and target their remarks directly at them, clearly with a view to confrontation. Taking these signals as a starting point, examining around ten comments posted by a user “enables us to predict with 80% probability whether s/he will end up being banned,” claims Cristian Danescu-Niculescu-Mizil.
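To make the idea concrete, the sketch below shows, in very rough outline, how such a prediction could be set up: a handful of per-user features of the kind the study highlights (text readability, profanity, how often posts target other users, how often moderators delete posts) fed to a simple logistic-regression classifier. The feature names, the synthetic data and the model choice are all assumptions made for illustration; this is not the researchers’ actual pipeline.

```python
# A minimal, hypothetical sketch of the kind of classifier described above --
# not the Stanford-Cornell model itself. Feature names, weights and the
# synthetic data are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_users = 2000

# Per-user features aggregated over each user's first ~10 comments,
# echoing the signals the study highlights:
readability = rng.normal(60.0, 10.0, n_users)   # how readable the text is
profanity_rate = rng.beta(1, 20, n_users)       # share of profane/insulting words
reply_fraction = rng.beta(2, 5, n_users)        # share of posts aimed at other users
deleted_fraction = rng.beta(1, 15, n_users)     # share of posts removed by moderators

# Invented ground truth: less readable, more profane, more confrontational
# and more frequently moderated users are more likely to end up banned (FBUs).
logit = (-0.05 * (readability - 60.0) + 8.0 * profanity_rate
         + 2.0 * reply_fraction + 10.0 * deleted_fraction - 2.0)
banned = rng.random(n_users) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([readability, profanity_rate, reply_fraction, deleted_fraction])
X_train, X_test, y_train, y_test = train_test_split(X, banned, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

The point of the toy example is only that a user’s earliest posts already carry measurable signal; the study itself reports that roughly the first ten comments are enough to reach the 80% figure quoted above.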


An overly harsh moderator may provoke rather than discourage undesirable behaviour

Over-zealous ‘moderation’ can make people less moderate

The Stanford-Cornell algorithm could help to limit the harm done by Internet trolls without entirely usurping the role played by forum moderators: "We hope our work will lead to the development of a system that will enable moderators to target online troublemakers. However, as our system is not 100% accurate and probably never will be, we’re not looking to replace human intervention entirely; we just want to make it more responsive and efficient," explains the Cornell professor.

However, repression isn’t always the best solution. "We’ve observed that many users who start off with a bad attitude end up steadily taking on board the community’s norms and behaving more courteously and constructively. We’ve also noticed that those who are harshly sanctioned are less likely to redeem themselves than those who aren’t," he points out. Discussion moderators might therefore do well to take these findings on board and give trolls a chance to mend their ways, as calling them to order straightaway could antagonize them and push them to persist with their confrontational approach.


1 Comment

I am currently researching trolling as it pertains to political, LGBTQ and feminist sites. We are hoping to work with sites in intervening with trolls (the ones we are studying do not intervene and the sites are toxic). Can you give me the information on this research so I can look at it in its academic form?


Submitted by Michelle F Davis on May 19, 2015 at 05:58 pm
