The U.S. military is fast running out of human analysts to process the vast amounts of video footage collected by the robotic planes and aerial sensors that blanket Afghanistan and other fronts in the war on terrorism.

Speaking at an intelligence conference last week, Marine Corps Gen. James E. Cartwright, vice chairman of the Joint Chiefs of Staff, said he would need 2,000 analysts to process the video feeds collected by a single Predator drone aircraft fitted with next-generation sensors.

“If we do scores of targets off of a single [sensor], I now have run into a problem of generating analysts that I can’t solve,” he said, adding that he already needs 19 analysts to process video feeds from a single Predator using current sensor technology.

The unmanned Predator and Reaper drones are the primary weapons the military and CIA deploy against al Qaeda’s leadership in Afghanistan, Pakistan and, more recently, Yemen.

It’s a classic conundrum for U.S. intelligence: Information-gathering technology has far outpaced the ability of computer programs — much less humans — to make sense of the data.

Just as the National Security Agency needed to develop computer programs to data-mine the vast volumes of telephone calls, Web traffic and e-mails it began intercepting on fiber-optic networks after Sept. 11, 2001, the military today is seeking computer programs to help it sort through hours of uneventful video footage recorded by cameras on the underside of pilotless aircraft to find the telltale signs of a terrorist the drone is targeting.

The military’s surveillance technology includes the newest generation of sensors and cameras fitted on the bottom of spy planes, pilotless drones and blimps, or placed on telephone poles. These ball-shaped sensors give military and intelligence agencies the capacity to monitor cell-phone calls and e-mails, to illuminate landscapes at night with infrared cameras and to record high-definition video of targets from the sky.

With names like Gorgon Stare and Constant Hawk, the newest generation of these “dense data sensors” also can mesh together thousands of video feeds to cover a geographic area the size of a city.

But the great advantage in surveillance comes at a price: It is boring intelligence analysts to tears.

Forced to watch what Gen. Cartwright called “Death TV,” bleary-eyed analysts at ground stations and other outposts spend hours wading through useless data until they spot signs of a target and recommend that the drone fire its missile.

“Today an analyst sits there and stares at Death TV for hours on end trying to find the single target or see something move or see something do something that makes it a valid target. It is just a waste of manpower. It is inefficient,” Gen. Cartwright said.

His remarks came Thursday at the annual conference for the U.S. Geospatial Intelligence Foundation, an organization that serves in some ways as a trade association for the burgeoning intelligence, surveillance and reconnaissance (ISR) market.

While intelligence procurement is shrouded in secrecy, one industry insider told The Washington Times that he thinks ISR is a $5 billion to $10 billion annual industry.

And while it’s likely many other weapons programs will be cut in the coming years as the Pentagon looks to trim costs, the intelligence and military budgets for ISR appear to be expanding. Last year, Defense Secretary Robert M. Gates created a special ISR task force with the goal of getting the latest sensors into Afghanistan without the usual delays associated with military contracting.

Gen. Cartwright said he gets “love notes” from Army Gen. David H. Petraeus, the commander of coalition forces in Afghanistan, saying he has an 800 percent shortfall of ISR assets.

Air Force Lt. Gen. John C. Koziol, director of the ISR task force, told the intelligence conference last week that he was looking to get new products out to the battlefield within a year of issuing contracts. He also stressed that he needed computer filters to help his analysts sift through all the video feeds.

“We don’t have the time to try to discover data anymore,” Gen. Koziol said. “Especially when we are putting wide-area surveillance into country rapidly right now. We have to increase discoverability. We have to allow that analyst sitting on the ground looking at this massive amount of data, he or she does not have time to [sift] through all this stuff.”

John Delay, director of architecture for the motion imagery division of the Harris Corp., said he and his team were working on how to bring technologies used in broadcast television, such as computer-generated tags for different kinds of video, to the military and intelligence agencies.

The Harris Corp., which has several defense contracts, also has made transmitters for broadcast television since 1969.

Mr. Delay works on the Full-Motion Asset Management Engine, a set of computer systems designed to archive, store and help process video feeds from remote sensors used by the government.

He said the technology to have computers sort through video feeds in the way Gen. Cartwright described is three years away.

“Within three years, it will be technically feasible to run these sophisticated algorithms and extract relevant essence data from the content. It will be feasible to deal with the motion-imagery data problems in three years,” Mr. Delay said.
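Neither Gen. Cartwright nor Mr. Delay described the algorithms themselves, but the simplest member of the family they are describing is frame differencing: flag only the moments when the scene changes, so an analyst skips the dead air of "Death TV." A minimal, hypothetical sketch (the function name and threshold are illustrative, not drawn from any military system):

```python
import numpy as np

def flag_active_frames(frames, threshold=5.0):
    """Return indices of frames whose mean absolute pixel change from
    the previous frame exceeds `threshold` -- a crude stand-in for the
    motion filters the article describes."""
    flagged = []
    prev = frames[0].astype(np.int16)  # widen dtype so subtraction can't wrap
    for i, frame in enumerate(frames[1:], start=1):
        cur = frame.astype(np.int16)
        if np.abs(cur - prev).mean() > threshold:
            flagged.append(i)
        prev = cur
    return flagged

# Synthetic demo: 100 static 64x64 frames, with a bright object
# appearing at frame 60 and then holding still.
frames = [np.zeros((64, 64), dtype=np.uint8) for _ in range(100)]
for f in frames[60:]:
    f[20:30, 20:30] = 255

print(flag_active_frames(frames))  # -> [60]: only the frame where the scene changed
```

Real wide-area motion imagery needs far more than this (camera stabilization, tracking, target classification), which is one reason Mr. Delay put the timeline at three years rather than tomorrow.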

Noah Shachtman, a nonresident fellow at the Brookings Institution and editor of Wired’s Danger Room, said the problem facing the military is similar to that faced by large cities that have installed cameras on traffic lights and telephone poles to prevent crime.

“The problem is that the more sensors you put up there, the more analysts you need,” he said. “Ninety percent of the footage will be useless. The problem is, you don’t know which 10 percent will be important. Just having someone stare at it, bleary-eyed and slack-jawed, is a huge waste of time.”

Copyright © 2020 The Washington Times, LLC.