In 2014, Asher Abid Khan, a 19-year-old US citizen, was planning to go to Syria to fight for ISIS. Media reports said he had been exposed to propaganda videos on Facebook that encourage people from across the world to fight in Syria. In India too, there were reports of a Twitter handle, @ShammiWitness, which belonged to an alleged ISIS recruiter with a million followers who was nabbed by the Bengaluru police. Or take the case of Anees Ansari, who used Facebook to talk to his handler while trying to make a thermite bomb.

Groups such as ISIS, and even individuals bent on driving propaganda, have used social media platforms like Facebook and Twitter, and even tools from Google and Microsoft, to spread their message and recruit people. Though governments and technology companies have systems in place to curb the use of technology for propagating extreme views, these have largely been unsuccessful.

"Although Facebook tried blocking Anees Ansari, he came back with different avatars and discussed bombing schools with friends in the US. Also, Areeb Majeed, an engineering student from Kalyan, was recruited and radicalised online to become an ISIS fighter before returning home into the hands of the Indian police," says Prashant Mali, Bombay High Court lawyer and cyber policy expert.

Governments and technology companies would want to block such explosive content, but the problem lies in the sheer scale at which it is generated and amplified in the digital world. "Law enforcement agencies globally do not have adequate capacity to proactively intervene in the amplification of malicious content, considering millions of posts and videos are uploaded every day," says Gunjan Trivedi, head of global communications at Aranca, a research and advisory organisation.

When it comes to tackling extremist content online, India has been toeing the global line of making social media companies ban such content.
"There are provisions within the IT Act that hold the intermediary responsible for the content, thereby keeping the social media companies on their toes. However, our policies and laws, much like our global counterparts', are overly dependent on the social media companies for execution," says Trivedi. Mali says India needs a better content removal policy and harsher punishment.

Mishi Choudhary, legal director of the New Delhi-based Software Freedom Law Center, says companies should work with other stakeholders on effective information exchange and maintain a shared database of violent terrorist imagery and recruitment videos and images found on their respective services. In this backdrop, Google, Facebook, Microsoft and Twitter have committed to establishing an international forum to share and develop technology and to support smaller companies in a joint effort to tackle terrorism online.

From the technology perspective, artificial intelligence and machine learning are perhaps the best tools to identify content at the scale at which it is being spewed online. However, Trivedi cautions that by the time these tools learn to identify and map certain images or keywords, the perpetrators may have moved to another level or mode of communication. "AI algorithms haven't yet evolved to develop context. It would be hard for self-learning technologies to distinguish between actual malicious content and satire. Facebook, Twitter, Google etc understand this conundrum." He says they actually employ a combination of volunteers, or human intelligence, and machine learning tools to scan and sweep extremist content off the online space.

In response to ET's queries, a spokesperson from Twitter said the company had suspended 935,897 accounts for the promotion of terrorism between August 1, 2015 and June 30, 2017, with government requests accounting for less than 1% of those suspensions.
"Instead, 95% of these account suspensions were the result of our internal efforts to combat this content with proprietary tools, up from 74% in our last Transparency Report."

When ET reached out to Facebook, a spokesperson said: "There's no place on Facebook for terrorism. We remove terrorists and posts that support terrorism whenever we become aware of them. And in the rare cases when we uncover evidence of imminent harm, we promptly inform authorities. We also work with policymakers, civil society, academia and companies to identify and slow the spread of terrorist content online, as well as support counter-speech initiatives."

Google's spokesperson directed us to an article by Kent Walker, general counsel of Google, highlighting four steps the company is taking to fight terrorism. The search giant is working with Jigsaw to implement the "Redirect Method" more broadly across Europe; it harnesses the power of targeted online advertising to reach potential ISIS recruits and redirects them towards anti-terrorist videos. "Our engineers have developed technology to prevent re-uploads of known terrorist content using image-matching technology. We have used video analysis models to find and assess more than 50 percent of the terrorism-related content we have removed over the past six months."

A mail to Microsoft did not elicit a response before the article went to print.

Choudhary, however, says that although terrorism is a major issue, politicians often use it to justify over-regulation or to make impossible demands, such as asking that encryption work differently for good and bad people, as we have seen in the UK. "Such demands are a waste of time and are mere posturing that prevents the formulation of an effective, efficient strategy to address the real problems, keeping in mind that over-regulation may lead to curbing of legal expression of speech."
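The image-matching approach Walker describes, and the shared database Choudhary calls for, rest on the same idea: platforms exchange compact fingerprints (hashes) of known extremist images rather than the images themselves, and each new upload is hashed and checked against that shared list. The sketch below is purely illustrative, assuming a toy "average hash"; production systems use far more robust perceptual hashes (such as Microsoft's PhotoDNA), and the function names here are ours, not any platform's actual API.

```python
# Illustrative sketch of hash-based re-upload detection, NOT any platform's
# real system. A toy "average hash": each bit records whether a pixel is
# brighter than the image's mean, so re-encoding or mild noise flips few bits.

def average_hash(pixels):
    """Hash a grayscale image (list of rows of 0-255 ints) to a bit string."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming(h1, h2):
    """Count differing bits between two equal-length hashes."""
    return sum(a != b for a, b in zip(h1, h2))

def is_known_content(upload_hash, shared_blocklist, threshold=3):
    """Flag an upload whose hash is near any entry in the shared database."""
    return any(hamming(upload_hash, h) <= threshold for h in shared_blocklist)

# A known bad image (4x4 grayscale), a slightly altered re-upload of it,
# and an unrelated image.
known = [[200, 200, 10, 10],
         [200, 200, 10, 10],
         [10, 10, 200, 200],
         [10, 10, 200, 200]]
reupload = [[p + 1 for p in row] for row in known]  # mild brightness shift
unrelated = [[120, 5, 240, 60],
             [15, 220, 35, 180],
             [90, 45, 210, 25],
             [170, 80, 5, 140]]

blocklist = {average_hash(known)}  # the "shared database" of hashes
print(is_known_content(average_hash(reupload), blocklist))   # True
print(is_known_content(average_hash(unrelated), blocklist))  # False
```

The design point is that the database holds only hashes, so companies can cooperate on detection without redistributing the offending material itself, and a small Hamming-distance threshold catches lightly modified re-uploads that an exact cryptographic hash would miss.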