The Pentagon has unveiled an initiative to fight ‘large-scale, automated disinformation attacks’ by unearthing deep-fakes and other polarizing content – with the eventual goal of rooting out so-called ‘malicious intent’ entirely.

The Defense Advanced Research Projects Agency (DARPA) is seeking software capable of churning through a test set of half a million news stories, photos, and audio/video clips to target and neutralize potentially viral information before it spreads. In DARPA jargon, the aim is to “automatically detect, attribute, and characterize falsified multi-modal media to defend against large-scale, automated disinformation attacks.” “Polarizing viral content,” however, includes inflammatory truths, and the program’s ultimate goal seems to be to stamp out dissent.

The Semantic Forensics program will scan news stories and social media posts with a barrage of algorithms in the hope of identifying inconsistencies that could mark a story as fake. The desired program will not just identify a meme as inauthentic – it will identify the source of that meme, the alleged intent behind it, and predict the impact of its spread.

To hear them tell it, the Pentagon just wants to even the playing field between the ‘good guys’ – the fake-hunters pursuing the cause of truth in media – and the ‘bad guys’ sowing discord one slowed-down Nancy Pelosi speech at a time. But the Pentagon’s targets aren’t limited to deepfakes, the bogeyman-of-the-month being used to justify this unprecedented military intrusion into the social media and news realm, or fake news at all. If the program is successful after four years of trials, it will be expanded to target all “malicious intent” – a possibility that should send chills down the spine of any journalist who’s ever disagreed with the establishment narrative.


To adequately test the program, the Pentagon has to spike its array of 500,000 test stories with 5,000 convincing fakes, some of which could conceivably make their way into the “live” news stream – although the mainstream media has not exactly had trouble generating false stories on its own in recent weeks. MSNBC’s wholly unverified and still incompletely retracted “Russian cosigners” fiction and the scare story that the Trump administration would end birthright citizenship for the children of US service members born overseas both took social media by storm before the fact-checkers could boot up their computers.

And the government itself, including the Pentagon, has an extensive history of running fake social media profiles to collect data on persons of interest, including through GCHQ's JTRIG information-war program revealed in the Snowden documents. Agents regularly deploy reputational attacks against dissidents using false information. Fake identities are used to cajole unsuspecting individuals into collaborating in fake FBI "terror" plots, a phenomenon which might once have been called entrapment but is merely business as usual in post-9/11 America.

Which raises the question – how will DARPA handle the malicious falsehoods generated by “friendly” media? This, it would seem, is where the “impact” and “intent” fields come in – fakes from “trusted sources” will be let through, while fakes (and real stories) designed to “undermine key individuals and organizations” will be terminated before they have an impact. When “disinformation” is redefined to include all potentially polarizing stories that don’t conform to the establishment narrative, reality is discarded as so much fake news and replaced with Pentagon-approved pablum.

Helen Buyniski, RT

@velocirapture23