A map of the conflicts (below) helps illustrate the subjects that tend to spark the most activity.

The data also sheds light on who picks these fights and what their long-term effects are. Conflicts are typically started by "highly active" users, but it's the quieter users who do most of the participating. The two groups don't usually interact, though, and tend to stay within their own bubbles. Conflicts can damage a community in the long run. Despite conventional wisdom about not engaging with trolls, fighting back does appear to help -- a subreddit can mitigate the effect of a raid by engaging with the attackers instead of hoping they'll go away.

This data doesn't exist just to satisfy curiosity. The team has developed a deep learning model that can predict conflicts based on those linked posts. Eventually, it could serve as an "early-warning system" for moderators on Reddit and other internet forums, giving them a heads-up when a harassment campaign is imminent. Site operators could benefit, too: if Reddit could easily detect and track intercommunity raids, it could quickly clamp down on hostile communities instead of waiting for their next attack.