WASHINGTON — U.S. Special Operations Command is seeking prototype software that can detect misinformation campaigns in near-real time in direct support of the command’s information operations, according to a Dec. 12 request for information.

The prototype software should analyze social media and web data, identify viral and trending content online for threat assessment, and estimate the likelihood that the information is fake.

The prototype is to use a combination of deep learning, natural-language processing and dynamic network analysis to examine the spread of disinformation across all platforms regardless of its form, according to the RFI.

In their responses, vendors should address anomaly detection, deepfakes and foreign influence using proven machine learning capabilities.
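The RFI does not specify how anomaly detection would be implemented; as an illustration only, one simple signal a vendor might combine with deep-learning classifiers is flagging accounts whose posting volume is a statistical outlier for a group. The function name, account names and cutoff below are hypothetical, not drawn from the RFI.

```python
# Illustrative sketch only, not SOCOM's method: flag accounts whose post
# count is a robust outlier via the modified z-score, which uses the median
# absolute deviation so one huge amplifier can't mask itself by inflating
# the mean the way an ordinary z-score would.
from statistics import median

def flag_anomalies(post_counts, cutoff=3.5):
    """Return accounts whose post count is a robust high outlier.

    post_counts: dict mapping account name -> number of posts observed.
    cutoff: modified z-score threshold (3.5 is a common convention).
    """
    counts = list(post_counts.values())
    med = median(counts)
    mad = median(abs(c - med) for c in counts)
    if mad == 0:  # all accounts behave identically; nothing stands out
        return []
    return [acct for acct, n in post_counts.items()
            if 0.6745 * (n - med) / mad > cutoff]

# Hypothetical example: four organic accounts and one high-volume amplifier.
activity = {"user_a": 12, "user_b": 9, "user_c": 14, "user_d": 11, "bot_x": 480}
print(flag_anomalies(activity))  # prints ['bot_x']
```

A real system would fuse many such signals (content, network structure, timing) rather than rely on volume alone; this merely shows the shape of the anomaly-detection piece.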

The Defense Department has lately been responding to the proliferation of deepfakes, which are machine-manipulated media that depict events that never happened. In 2018, a blog post designed to look like a Lithuanian news outlet claimed that four U.S. Army combat vehicles had killed a local child in a collision during a training exercise in the Baltics. The post about the fabricated incident included a manipulated image showing indifferent soldiers near a child’s lifeless body and crushed bicycle.

Existing U.S. programs created to fight the spread of misinformation include the Semantic Forensics and Media Forensics programs, which, respectively, aim to develop technologies for analyzing media and to provide detailed information about media manipulation.