New York (CNN Business) Among the many tragedies of the massacre at two New Zealand mosques on Friday is a bitter irony: The terrorist who killed at least 50 people in an Islamophobic attack resembled in many ways a member of ISIS. If his life had gone differently in some way, he might well have ended up as one, killing people somewhere else in its name. The type of extremism and hatred is of course different. But they have at least one thing in common: the internet as a tool of radicalization.

There is still much we don't know about the suspect and his background. But before anything at all was known about him, anyone who has studied or covered extremism and these kinds of attacks could have given you an educated guess about what kind of person he was: Male. Probably in his 20s. Decent chance of at least a minor criminal record. More than likely a history of hatred toward or violence against women. Oh, and one more thing — probably spent a fair amount of time on the internet.

People could easily become radicalized before social media. Many are still radicalized without it. But social media, often in combination with other factors, has proven itself an efficient radicalizer, in part because it allows for the easy formation of communities and in part because of its algorithms, used to convince people to stay just a little longer, watch one more video, click one more thing, generate a little more advertising revenue.

The recommendations that YouTube provides, for instance, have been shown to push users toward extreme content. Someone who comes to the site to watch a video about something in the news could quickly find themselves watching a conspiracy theory clip instead. (In January, YouTube said it was taking steps to remedy this.) A few years ago, someone looking for information about Islam could soon find themselves listening to a radical preacher.

Combine those algorithms with men who are disaffected, who may feel that the world owes them more, and you have a recipe for creating extremism of any stripe.

"They're picking up an ideology that helps them justify their rage, their disappointment, and it's something available," Jessica Stern, a research professor at Boston University's Pardee School of Global Studies and the co-author of "ISIS: The State of Terror," told CNN Business Friday. "Terrorism runs in fads. We noticed that people were picking up the ISIS ideology who weren't even Muslim, they were converting to Islam. The ISIS ideology was an attractive way for some of these men to express their rage and disappointment. This is another ideology that is becoming very popular, it's another fad."

For all the much-deserved criticism they've received recently over the things they've failed to act upon, the social networks did step up and take real, impressive action when faced with a deluge of ISIS supporters and content.

"The issue on mainstream sites is for the most part there's been an aggressive takedown" of ISIS-related content, Seamus Hughes, the deputy director of the Program on Extremism at George Washington University, said. "That same dynamic hasn't happened when it comes to white supremacy."

The companies could take action against white supremacists now. Indeed, they could go on forever like that, playing whack-a-mole with each new movement that pops up and begins radicalizing their users, moving against it only after enough people have been killed. That would be easier than dealing with the underlying problem: the algorithms designed to keep people around.

"It makes sense from a marketing perspective; if you like Pepsi then you're going to watch more Pepsi videos... but you take that to the logical extreme with white supremacy videos," Hughes said. "They're going to have to figure out how to not completely scrap a system that has brought them hundreds of millions of dollars of ad revenue while not also furthering someone's radicalization or recruitment."

Perhaps the most disheartening aspect of this is that the companies have been told, over and over again, that they have a problem. Ben Collins, a reporter with NBC News, tweeted Friday, "Extremism researchers and journalists (including me) warned the company in emails, on the phone, and to employees' faces after the last terror attack that the next one would show signs of YouTube radicalization again, but the outcome would be worse. I was literally scoffed at."

So what should the platforms do now?

Asked that question, Bill Braniff, the director of the National Consortium for the Study of Terrorism and Responses to Terrorism (START) and a professor of the practice at the University of Maryland, said, "What I believe we should be asking them to do is to continue to minimize the salience or the reach of violent extremist propaganda, that calls for violence... but not to limit themselves to just content takedowns as the way to do that. What happens when [a] large platform takes down this content or these views is that the content just shifts to smaller platforms. ... Maybe fewer people will be exposed over time, and that's a good thing, but that's not the same as a comprehensive solution."

Content takedowns alone can both contribute to a persecution narrative and drive people to smaller, more radical sites, Braniff noted. Takedowns, he argues, also squander an opportunity to use the algorithms to redirect, rather than reinforce.

"We know that people... can actually be addressed through counseling [and] mentorship," he said. "If instead of directing people who might be flirting with extremism to support, if you censor them and remove them from these platforms you lose... the ability to provide them with an off-ramp."

While noting that platforms should still take down content that explicitly calls for violence, which also violates their terms of service, Braniff said, "There's some content that doesn't violate the terms of use, and so the question is, can you make sure that information is contextualized with videos before and after it on the feed?"

The comprehensive solution he sees is a change to the algorithms, so that they could point people to differing views or even in some cases to support such as counseling.

"Algorithms can either foster groupthink and reinforcement or they can drive discussion," he said. "Right now the tailored content tends to be, 'I think you're going to like more of the same,' and unfortunately that's an ideal scenario for not just violent extremism but polarization ... We're only sharing subsets of information and removing the middle ground, the place where we come together to discuss different ideas... [a] massive part of violent extremism is polarization, and it's really dangerous."