Darpa has a well-earned rep for some of the most ambitious, over-the-top research programs of all time. But this might be the most over-the-toppest of all. The very first step? Create a unified mathematical language for everything the military sees or hears.

The armed forces are overwhelmed by all the data their various sensors are sniffing out. They want a single data stream that combines drone video feeds, cell phone intercepts, and targeting radar. Darpa's solution, found in the brand-new Mathematics of Sensing, Exploitation, and Execution program, is to design an algorithm that teaches the sensors how to interpret the world – how to think, how to learn and, accordingly, what data to collect.

Sensors "process their signals as if they were seeing the world anew at every instant," Darpa laments in its call for algorithms. To put it in Philosophy 101 terms, existence is, to a sensor, what William James called a "blooming, buzzing confusion": an unmediated series of events to be vacuumed up, leaving an analyst overloaded with unsorted data. Wouldn't it be better if a sensor could be taught how to filter the world through a perceptual prism, anticipating what the analyst needs to know?

That's the specific military application of MSEE. But to get there, Darpa takes a rather unconventional path. To get the "economy and efficiency that derives from an intrinsic, objective-driven unification of sensing and exploitation," it wants to create an "intrinsically integrated" algorithm for the machines to interpret reality. "All proposed research must describe a unifying mathematical formalism that incorporates stochasticity fundamentally," Darpa tells would-be designers.

In other words, one mathematical formula has to teach machines how to create order out of the chaos of the world around them, and to use that common ontology to develop a "learning capacity and expected rate-of-learning." Naturally, human interaction is to be limited: the sensors should "learn in unsupervised or semi-supervised fashion" instead.
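To make "unsupervised" concrete: the textbook example is clustering, where an algorithm sorts raw measurements into groups without ever being told what the groups are or seeing a single labeled example. The sketch below (plain Python, with hypothetical 1-D signal-strength readings standing in for sensor data) is a minimal k-means clusterer – an illustration of the general idea, not anything Darpa has specified.

```python
def kmeans_1d(readings, k=2, iterations=20):
    """Group raw 1-D readings into k clusters -- no labels required."""
    data = sorted(readings)
    # Initialize centers spread evenly across the sorted data.
    if k > 1:
        centers = [data[i * (len(data) - 1) // (k - 1)] for i in range(k)]
    else:
        centers = [data[0]]
    for _ in range(iterations):
        # Assignment step: each reading joins its nearest center's cluster.
        clusters = [[] for _ in range(k)]
        for x in data:
            nearest = min(range(k), key=lambda i: abs(x - centers[i]))
            clusters[nearest].append(x)
        # Update step: each center moves to the mean of its cluster
        # (empty clusters keep their old center).
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Toy unlabeled "intercepts": two bands of signal strength.
readings = [1.0, 1.2, 0.9, 1.1, 5.0, 5.2, 4.8, 5.1]
centers, clusters = kmeans_1d(readings, k=2)
```

The machine discovers on its own that the readings fall into two bands – low around 1.05, high around 5.0 – which is the whole point: structure pulled out of unlabeled data, with a human only looking at the result.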

"Specifically excluded is research that results primarily in evolutionary improvements to the existing state of practice," Darpa writes. You think? It even italicizes that passage in its bid. If you're going to teach an infrared sensor pod how to make sense of the shapes it observes, there's no half-stepping allowed.

By the time MSEE produces a prototype – in about three and a half years (!) – multiple types of sensors ought to be able to orient themselves using the algorithm. Specifically, Darpa says an MSEE prototype has to "furnish sensor output products" from imagery and video, communications intercepts and the tracking of a moving target. If your algorithm can teach those very distinct sensors to determine for themselves what data is relevant, you'll have gone a long way toward draining oceans of data into a customizable kiddie pool for military analysts.

Oh, and you also may have introduced a new kind of artificial intelligence to machines used to track people and deadly weapons of war. At the least, you'll have designed a mark-one Cylon, one that might recognize other sensors as its kin. Darpa – notably! – is silent on the most critical question of all: what will reality look like to a sensor?
