A study [PDF] based on observations from 36,000 subreddit communities has found that online dust-ups can be predicted, and the people most likely to cause them can be identified.
“Our analysis revealed a number of important trends related to conflict on Reddit, with general implications for intercommunity conflict on the web,” the researchers wrote.
Among the takeaways was that a small group of bad actors is indeed stirring up most of the conflict: around 75 per cent of the raids were triggered by just 1 per cent of users.
The study also noted that ignoring the trolls doesn’t always work – conflicts grow worse when users stay within ‘echo chambers’ on their own threads, and long-term traffic losses were lessened when the ‘defending’ users directly confronted the forum intruders rather than keeping to themselves.
Perhaps the most important takeaway, however, was that forum conflicts could actually be predicted. The Stanford group say they developed a long short-term memory (LSTM) deep-learning model that, when trained on the set of Reddit posts and user information gathered over the 40-month period, was able to reliably flag when a conflict or raid was likely to flare up on a subreddit.
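To make the idea concrete, here is a minimal sketch of how an LSTM-based conflict predictor of this general shape works: a sequence of feature vectors (one per post in a thread) is fed through an LSTM cell, and the final hidden state is mapped to a probability that a raid is brewing. This is an illustrative toy with random, untrained weights – the feature encoding, dimensions, and read-out here are assumptions, not the Stanford group’s actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyLSTM:
    """Single-layer LSTM with a logistic read-out.

    Illustrative only: weights are random and untrained. A sequence of
    post/user feature vectors goes in; a 'conflict likely' score comes out.
    """

    def __init__(self, n_in, n_hidden):
        s = 1.0 / np.sqrt(n_hidden)
        # One stacked weight matrix covering the input, forget,
        # candidate, and output gates (4 * n_hidden rows).
        self.W = rng.uniform(-s, s, (4 * n_hidden, n_in + n_hidden))
        self.b = np.zeros(4 * n_hidden)
        self.w_out = rng.uniform(-s, s, n_hidden)
        self.n_hidden = n_hidden

    def forward(self, xs):
        n = self.n_hidden
        h = np.zeros(n)  # hidden state
        c = np.zeros(n)  # cell state
        for x in xs:  # one step per post in the thread
            z = self.W @ np.concatenate([x, h]) + self.b
            i = sigmoid(z[:n])          # input gate
            f = sigmoid(z[n:2 * n])     # forget gate
            g = np.tanh(z[2 * n:3 * n]) # candidate cell update
            o = sigmoid(z[3 * n:])      # output gate
            c = f * c + i * g
            h = o * np.tanh(c)
        return sigmoid(self.w_out @ h)  # P(conflict), between 0 and 1

model = TinyLSTM(n_in=8, n_hidden=16)
thread = [rng.normal(size=8) for _ in range(5)]  # 5 fake post embeddings
p = model.forward(thread)
print(f"predicted conflict probability: {p:.3f}")
```

In a real system the input vectors would encode things like post text embeddings and user history, and the weights would be learned from labelled raid/non-raid examples rather than drawn at random.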
Now, the Stanford group says it would like to extend the research to other platforms (such as Facebook and Twitter) and look at areas not addressed in the first report, including forums that restrict negative content.