When jihadist content is permitted to spread unchecked across the globe via cyberspace, it is a matter of national and international security. Tragically for Western civilization, its tech and media icons have been colluding -- even if unwittingly -- with those working actively to destroy it.

A separate lawsuit claims that Twitter not only benefits indirectly by seeing its user base swell through the increase of ISIS-linked accounts, but directly profits by placing targeted advertisements on them.

According to the legal complaint, the names and symbols of Palestinian Arab terrorist groups and individuals were known to authorities, and "Facebook has the data and capability to cease providing services to [such] terrorists, but... has chosen not to do so."

That major technology companies are openly stifling the free speech of people trying to counter jihad is bad enough; what is beyond unconscionable is that they simultaneously enable Islamic supremacists to spread the very content that the counter-jihadists have been exposing.

For the past few years, large social media and other online companies have been seeking to restrict or even criminalize content that could be construed as critical of Islam or Muslims, including when the material simply exposes the words and actions of radical Islamists.

The recent attempt by the digital payment platform, PayPal, to forbid two conservative organizations -- Jihad Watch and the American Freedom Defense Initiative -- from continuing to use the service to receive donations, is a perfect case in point. Although PayPal reversed the ban, its initial move was part of an ongoing war against the free speech of counter-jihadists -- those working to expose the ideology, goals, tactics and strategies of Islamic supremacists, and who are trying to defeat or at least to deter the Islamic supremacist global agenda.

Examples of this kind of censorship abound. In October 2016, for instance, conservative radio host and author Dennis Prager's "PragerU" -- which produces five-minute clips presented by leading experts in the fields of economics, politics, national security and culture -- announced that more than a dozen of its videos were facing restricted access on YouTube, a subsidiary of Google. In theory, this meant that users who employed the filter for sexually explicit or violent content would be blocked from viewing them.

Among these restricted videos, however, were six relating to Islam: "What ISIS Wants," presented by Tom Joscelyn, Senior Fellow at the Foundation for Defense of Democracies; "Why Don't Feminists Fight for Muslim Women?" presented by Ayaan Hirsi Ali, fellow at Stanford's Hoover Institution and Harvard's Belfer Center; "Islamic Terror: What Muslim Americans Can Do," presented by Khurram Dara, a Muslim American activist, author and attorney; "Pakistan: Can Sharia and Freedom Coexist?" and "Why Do People Become Islamic Extremists?" presented by Haroon Ullah, a foreign policy professor at Georgetown University; and "Radical Islam: The Most Dangerous Ideology," presented by Raymond Ibrahim, author of The Al Qaeda Reader.

PragerU is now pursuing legal action against Google/YouTube, having filed a potentially precedent-setting suit against the internet giant in U.S. District Court in California on the grounds that Google/YouTube allegedly discriminates against and censors PragerU's videos based on the organization's conservative political identity and viewpoint.

PragerU is not alone in having its content -- presented by reputable thinkers -- treated by social media companies as comparable to pornography, or similarly inappropriate or offensive material. For instance:

In January 2015, a mere two weeks after Facebook CEO Mark Zuckerberg penned a #JeSuisCharlie statement in defense of free speech -- in the wake of the Islamist terrorist attack on the Paris-based satirical journal Charlie Hebdo -- Facebook censored images of the prophet Muhammad in Turkey.

In January 2016, the Facebook page "Justin Trudeau Not," which contained content critical of the Canadian prime minister's views on Islamic supremacism, was deleted by Facebook as a "violation of community standards." The offense? The page's authors "contrasted Trudeau's immediate condemnation of a pepper spray attack against Muslims in Vancouver with his complete refusal to address a firearm attack by Muslims in Calgary."

In May 2016, the administrator of a pro-Trump Facebook group was banned from Facebook for posting: "Donald Trump is not anti-Muslim. He is anti ISIS. What Trump is trying to say is that Homeland Security cannot differentiate which Muslim is [a] radical wanting to cause harm and which is a harmless refugee. Who is willing to sacrifice their family's safety for the sake of political correctness? Are you?"

In June 2016, YouTube removed a video -- "Killing for a Cause: Sharia Law & Civilization Jihad" -- elucidating the aim of Islamic supremacists to subvert the West from within.

Also in June 2016, Facebook suspended the account of Swedish writer Ingrid Carlqvist for posting a video, produced by Gatestone Institute, on "Sweden's Migrant Rape Epidemic." After Gatestone readers responded critically to the censorship, the Swedish media started reporting on the case, and Facebook reinstated the video, without any explanation or apology.

In May 2017, Jayda Fransen, the deputy leader of Britain First, a party "committed to the maintenance of British national sovereignty, independence and freedom," was banned from Facebook for 30 days for "repeatedly posting things that aren't allowed on Facebook." The post that reportedly triggered the temporary ban was a meme quoting the passage from the Koran: "O you who believe! do not take the Jews and the Christians for friends...Allah does not guide the evildoers."

Also in May 2017, Facebook blocked and then shut down the pages of two popular moderate Muslim groups -- managed and followed by Arabs across the world who reject not only violence and terrorism, but Islam as a religion -- on the grounds that their content was "in violation of community standards."

In August 2017, a YouTube channel containing a playlist of videos featuring best-selling author and scholar Robert Spencer, the director of Jihad Watch, was removed for a supposed violation of the platform's "Community Guidelines."

Later in August 2017, the Independent reported that Instagram, Twitter and YouTube allegedly had been cooperating with the Iranian regime to block or censor "immoral" content.

In the past year, social media companies have been editing their user guidelines to broaden the scope of the type of content that may be flagged for removal. These broadened rules necessarily end up targeting content and users that counter jihad -- war in the service of Islam. Examples of this procedure include the following:

In September 2016, YouTube released new "Advertiser-friendly content guidelines," according to which: "Video content that features or focuses on sensitive topics or events including, but not limited to, war, political conflicts, terrorism or extremism, death and tragedies, sexual abuse, even if graphic imagery is not shown, is generally not eligible for ads. For example, videos about recent tragedies, even if presented for news or documentary purposes, may not be eligible for advertising given the subject matter." It is easy to see how such rules could be used against people trying to counter jihad.

In March 2017, Google revealed that it was seeking to improve its search function by having its 10,000 "quality raters" flag "upsetting-offensive" content. The data generating the quality ratings will then be incorporated into Google's algorithms for monitoring and forbidding content. Two months later, Google updated the guidelines for "non-English-language web pages." One example cited by Google as "upsetting-offensive" is a post titled "Proof that Islam is Evil, Violent, and Intolerant – Straight from the Koran..." In contrast, Google calls a PBS Teachers Guide on Islam a "high-quality article...with an accurate summary of the major beliefs and practices of Islam."

In August 2017, YouTube posted "An update on our commitment to fight terror content online," which is sure to put counter-jihadist content in its crosshairs: "...[W]e have begun working with more than 15 additional expert NGOs and institutions through our Trusted Flagger program, including the Anti-Defamation League, the No Hate Speech Movement, and the Institute for Strategic Dialogue. These organizations bring expert knowledge of complex issues like hate speech, radicalization, and terrorism that will help us better identify content that is being used to radicalize and recruit extremists. We will also regularly consult these experts as we update our policies to reflect new trends. And we'll continue to add more organizations to our network of advisors over time...We'll soon be applying tougher treatment to videos that aren't illegal but have been flagged by users as potential violations of our policies on hate speech and violent extremism. If we find that these videos don't violate our policies, but contain controversial religious or supremacist content, they will have some features removed. The videos will remain on YouTube behind an interstitial, won't be recommended, won't be monetized, and won't have key features including comments, suggested videos, and likes."

It bears noting here that one group cited above -- the ADL -- has previously flagged and profiled various counter-jihadist individuals and organizations. This is in keeping with the political slant of its new CEO, Jonathan Greenblatt, who has taken the organization in a decidedly left-leaning direction.

That major technology companies are openly stifling the free speech of people trying to counter jihad is bad enough; what is beyond unconscionable is that they simultaneously enable Islamic supremacists to spread the very content that the counter-jihadists have been exposing. It is a practice that the Shurat HaDin-Israel Law Center is actively engaged in battling through litigation. The following four lawsuits against key platforms shed light on the way in which incitement to terrorism is able to flourish unfettered on the Internet, while those trying to combat it are targeted for "hate speech."

Lakin v. Facebook: The lawsuit, representing 20,000 Israeli plaintiffs, was brought to stop Facebook from "allowing Palestinian terrorists to incite violent attacks against Israeli citizens and Jews on its internet platform." The plaintiffs attributed the surge in Palestinian terrorism that began on October 1, 2015 -- during which "more than 200 stabbings, more than 80 shootings, and more than 40 attacks using vehicles" were perpetrated against Israelis -- in part to a "campaign driven by Palestinian terrorists using Facebook to incite, enlist, organize, and dispatch would-be killers to 'stab' and 'slaughter Jews.'" According to the complaint, the names and symbols of Palestinian Arab terrorist groups and individuals were known to authorities, and "Facebook has the data and capability to cease providing services to [such] terrorists, but...has chosen not to do so."

Force v. Facebook: The lawsuit, representing five American victims of Hamas terrorist attacks and their families, sought monetary damages against Facebook under the U.S. Antiterrorism Act (ATA) for providing material support and resources to a designated foreign terrorist organization. The suit alleged that known members of Hamas, including "leaders, spokesmen, and members," had "openly maintained and used official Facebook accounts" to "communicate, recruit members, plan and carry out attacks, and strike fear in its enemies," as well as to "issue terroristic threats, attract attention to its terror attacks, instill and intensify fear from terror attacks, intimidate and coerce civilian populations, take credit for terror attacks, communicate its desired messages about the terror attacks, reach its desired audiences, demand and attempt to obtain results from the terror attacks, and influence and affect government policies and conduct." In spite of these activities, the suit claims, Facebook has knowingly allowed Hamas and related individuals and entities to use its platform -- determining in several instances that the group's Facebook pages did not violate company policies, or deleting only certain content while allowing the pages to remain active.

Cain v. Twitter: The case, filed in federal court on behalf of two victims/families of the Islamic State (ISIS) terror attacks in Paris in November 2015 and in Brussels in March 2016, sought damages under the ATA by alleging that Twitter has provided material support for ISIS. The suit alleges that Twitter has been used by ISIS in the way that Facebook has been used by Hamas, among other things to: recruit, connect and communicate with members; plan and carry out attacks; inflate its image through the use of Twitter bots and hashtags; and distribute videos, images and magazines that contain violent messages intended to incite, while making ISIS appear more legitimate. The suit claims that Twitter has facilitated such uses by providing resources and services to the Islamic State and its affiliates -- many of whom openly maintained accounts -- while refusing to identify Islamic State Twitter accounts, and only reviewing them when reported by Twitter users or third parties. The plaintiffs further argued that Twitter had protected ISIS by: notifying users if it suspects government surveillance of Twitter accounts; suing the U.S. Department of Justice to defy orders requiring Twitter to keep details of investigative subpoenas secret, even if disclosure might harm national security; barring U.S. intelligence agencies from purchasing Twitter's Dataminr analytics tool, which could be used to identify terrorist activities and threats; and using its anti-harassment policies to ban Twitter accounts of users reporting Islamic State accounts to Twitter. Last but not least, the lawsuit claims that Twitter not only benefits indirectly by seeing its user base swell through the increase of ISIS-linked accounts, but directly profits by placing targeted advertisements on them.
One example cited: "[O]n May 17, 2016, Twitter placed an advertisement for a digital marketing company, OneNorth Interactive, on the Twitter account of 'DJ Nasheed' (@djnasheedis), an ISIS Twitter account used to post jihadi music videos produced by ISIS's al-Hayat Media."

Gonzalez v. Google: The case, filed in federal court on behalf of the family of a young American woman murdered in the November 2015 ISIS terror attacks in Paris, seeks damages under the ATA, based on Google's provision of YouTube access to ISIS. The suit alleges that ISIS has used YouTube to distribute violent videos, images and recordings to instill terror and bolster its image as all-powerful. It claims that YouTube facilitated these activities by refusing to identify ISIS-linked accounts known to Google -- reviewing only those accounts reported by other YouTube users.

Regardless of the legal merits of these cases, it is clear that jihadists reap significant benefits from social-media platforms, and that there are, at best, serious lapses in the platforms' policing of jihadist accounts. At worst, there is "willful blindness" in relation to jihadist material, and the application of a double standard to posts that counter jihad. A Middle East Media Research Institute (MEMRI) report from June 2017 reveals the extent to which jihadist content that is flagged by YouTube users is left alone, in spite of assurances that such material would be removed. In fact, of the 115 videos that MEMRI flagged on YouTube in 2015, 69 remained active as of February 27, 2017. Many are still online to this day. Some are so gruesome that the MEMRI report includes a warning to readers about "graphic images."

This is not merely a free-speech issue. On the contrary, there is evidence to suggest a direct correlation between jihadist incitement and terrorism. After the London Bridge attack in June 2017, for example, it emerged that one of the perpetrators had been inspired by videos posted online by a Michigan-based imam named Ahmad Musa Jibril. The International Centre for the Study of Radicalization found that many of Jibril's followers had joined al-Qaeda or ISIS. As early as 2005, federal prosecutors described Jibril as someone who "encouraged his students to spread Islam by the sword, to wage a holy war," and "to hate and kill non-Muslims." In spite of Jibril's background, his YouTube channel is still accessible. When asked by Conservative Review's Jordan Schachtel to comment on this, a Google spokesman did not indicate that Jibril had violated YouTube's content guidelines. A Facebook fan page and Twitter accounts dedicated to Jibril's sermons also remain online today.

A related manifestation of this bias -- against counter-jihadist material and in favor of jihadist posts -- is the promotion of the Palestinian Arab cause on Internet platforms and the simultaneous discrimination against Israel. Among other examples of this disparate treatment:

A July 2017 piece in Tablet Magazine sheds light on the way in which algorithms can be and are used to perpetuate pro-Islamic and anti-Israel or anti-Semitic narratives. Writing about Google's new "Perspective API" (Application Programming Interface), which employs "advanced machine learning to help moderators track down comments that are likely to be 'toxic,'" Liel Leibovitz recounts:

"I asked Perspective to rate the following sentiment: 'Jews control the banks and the media.' This old chestnut, Perspective reported, had a 10 percent chance of being perceived as toxic...I tried again, this time with another group of people, typing 'Many terrorists are radical Islamists.' The comment, Perspective informed me, was 92 percent likely to be seen as toxic."

The same, he said, applied to straight news, as in the statement of fact: "Three Israelis were murdered last night by a knife-wielding Palestinian terrorist who yelled 'Allah hu Akbar.'" That, Leibovitz wrote, was also "92 percent likely to be seen as toxic."

The reason for this, he explained, is that

"machines learn from what they read, and when what they read are the Guardian and the Times, they're going to inherit the inherent biases of these publications as well. Like most people who read the Paper of Record [The New York Times], the machine, too, has come to believe that statements about Jews being slaughtered are controversial, that addressing radical Islamism is verboten, and that casual anti-Semitism is utterly forgivable... No words are toxic, but the idea that we now have an algorithm replicating, amplifying, and automatizing the bigotry of the anti-Jewish left may very well be."
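Leibovitz's experiment can be reproduced programmatically. The following is a minimal sketch, assuming the publicly documented v1alpha1 "comments:analyze" endpoint of the Perspective API and a valid API key (the endpoint URL, payload shape, and response shape are drawn from Google's public documentation, not from the article itself):

```python
# Hypothetical sketch of a Perspective API toxicity query. The URL and
# JSON shapes below are assumptions based on Google's public v1alpha1
# documentation; "YOUR_API_KEY" is a placeholder.
ANALYZE_URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/"
    "comments:analyze?key=YOUR_API_KEY"
)

def build_analyze_request(text: str) -> dict:
    """Build the JSON payload asking Perspective to score TOXICITY."""
    return {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }

def extract_toxicity(response_json: dict) -> float:
    """Pull the 0..1 summary toxicity score out of an API response."""
    return response_json["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# Actually sending the request needs a real key and a network call,
# e.g. with the third-party `requests` package (not executed here):
#
#   import requests
#   resp = requests.post(ANALYZE_URL, json=build_analyze_request(
#       "Many terrorists are radical Islamists."))
#   print(extract_toxicity(resp.json()))

# A response like the 92-percent example Leibovitz describes would
# parse as follows:
sample_response = {
    "attributeScores": {
        "TOXICITY": {"summaryScore": {"value": 0.92, "type": "PROBABILITY"}}
    }
}
print(extract_toxicity(sample_response))  # prints 0.92
```

The score is a probability-style value between 0 and 1, which is how a sentence can be reported as "92 percent likely to be seen as toxic."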

Private technology companies are within their rights to make all manner of decisions as to how they operate and whom they allow to make use of their services. In a free-market system, it is the consumers -- and competitors -- who ostensibly have the power to affect the popularity of a product. It is for this very reason that detrimental activity must be exposed -- so that user and market pressure forces such pivotal firms to reform. Yet one cannot deny the global reach and scope of Facebook, Google and the other Internet giants, which make it extremely difficult for dissatisfied customers to find or create an alternative. The fact is that in today's world, individuals and businesses barely seem to exist without a presence on these platforms. If such platforms wish, they can cripple those who dissent from their ideological orthodoxy.

This is problematic not only for political conservatives and counter-jihadists who are treated negatively by the major media firms. It is also worrisome from the point of view of freedom of expression. When jihadist content is permitted to spread unchecked across the globe via cyberspace, it is a matter of national and international security. Tragically for Western civilization, its tech and media icons have been colluding -- even if unwittingly -- with those working actively to destroy it.