More robots, fewer people. That's where the US military is headed. But what kind of robots?

Army Gen. Robert Cone, four-star commander of the powerful Training and Doctrine Command (aka TRADOC), said that the service is studying how robots could help replace 25 percent of the soldiers in each of its 4,000-strong combat brigades. That’s because the current budget crunch is pushing the military to replace expensive human beings — and the expensive hardware required to keep them alive — with cheaper and expendable robots. The Army is under particular pressure because it has the most people, spending almost half its budget on pay and benefits, and those people take the heaviest casualties.

What’s hotly debated, however, is what jobs robots should do, under what level of human control. Should they do the drudge work of war, sparing humans the “dirty, dull, and dangerous” jobs like clearing roadside bombs? Or should we trust robots to kill on their own initiative?

The Army basically wants R2-D2s and mechanized mules, helpful bots that haul supplies, scout ahead, and provide technical support to the human heroes who do the actual fighting. They want small robots that trundle alongside the foot troops, loaded with sophisticated sensors so they can point out potential dangers, “robots that respond, if you will, like a bird dog,” said TRADOC’s Maj. Gen. William Hix in a conference call with journalists this morning. They want mid-size robots that carry extra supplies for infantrymen on long patrols, a concept once officially called MULE. They want big trucks that drive themselves, entire supply convoys where a long line of robots plays “follow the leader” behind a single human-driven vehicle at the front. They want scout drones that fly ahead of manned helicopters and report back what they find.

But, as TRADOC Col. Kevin Felix once told me, “No Terminators.”

Not so outside the Army. In a thinktank report released today, 20YY: Preparing for War in the Robotic Age, former Navy Under Secretary Robert Work and co-author Shawn Brimley call for developing “autonomous attack systems” cheap and numerous enough to form “reconnaissance-strike swarms.” Think big, robotic killer bees that attack with smart bombs instead of stingers and that coordinate their maneuvers using wi-fi instead of pheromones.

[Click here to read Brimley’s response to this article]

Both sides agree there’ll be more robots in the future military — and not just our military. In fact, when you put together cuts to military research and development, the spread of high technology worldwide, and our unease about letting computers decide when to pull the trigger, there’s a real fear that more agile and less ethical enemies may field killer robots first.

These are questions of life, death, and taxes — that is, the tens of billions of your taxpayer dollars that the military will have to invest in whatever it decides to do.

The Army is already experimenting with armed robots like the MADDS prototype pictured above, but they always have a human being pulling the trigger, albeit by remote control. TRADOC doesn’t anticipate the actual fielding of an “unmanned combat platform” until around 2035 — and the military programs its unmanned systems not to fire without direct orders from a human. That’s not a restriction Army leaders are eager to relax.

Work and Brimley, by contrast, are much more confident that robots can make the call themselves already — at least in some circumstances. “For some types of target sets in relatively uncluttered environments, it is already possible to build systems that can identify, target and engage enemy forces,” they write, “although current DOD guidelines direct that a human be in the loop for offensive lethal force decisions.”

The defensive value of robotics is equally important: Work and Brimley argue that if you replace human beings with robots wherever possible, you don’t have to worry about bringing your troops back alive, because they were never living beings in the first place. Currently, the US invests in a small number of very expensive, very capable, and very well-protected systems — stealth fighters, Navy destroyers, Army tanks — that have human beings inside them. Take the humans out and you can take out the air supply, armor protection, food, and a host of other systems that humans need to survive. Then you replace a few precious, large, manned vehicles with a swarm of expendable, small, unmanned ones.

“Human controllers, safely removed from harm’s way, would provide mission-level control over the swarm,” Work and Brimley write, “but the leading edge of the battlefront across all domains” — that is, air, land, sea, outer space, and cyberspace — “would be unmanned, networked, intelligent and autonomous.”

Killer robots aren’t the only things Work and Brimley are unsettlingly sanguine about. How are the “human controllers” who are “safely removed” going to “control” anything? The answer involves even more reliance on the kind of long-range wireless networks that the military has invested in massively since the 1990s.

Computer networks transmit orders and reports far faster and in far more detail than human voices, and they give GPS-precise coordinates to both lost soldiers and smart bombs, but they are vulnerable to enemies more technologically sophisticated than the Taliban. Work and Brimley do briefly discuss “the need for robust and reliable communications” to link robots and humans, and they acknowledge that “cyber is likely to be the new ‘high ground’ in future warfare.” (“Cyber” is, by the way, a vague and debated term that boils down to “stuff that has to do with computers”). But on the whole they seem to just assume we can buy new technologies to stop enemy radio jamming and computer hacking: “many of today’s concerns about being able to communicate reliably with unmanned systems over long ranges seem likely to be ameliorated,” they write.

By contrast, the Chief of Naval Operations, Adm. Jonathan Greenert, thinks protecting our networks is going to be a major battle. Or as TRADOC Col. Christopher Cross said this morning: “All of our technologies [today] rely on a reliable, redundant, and secure network….If we lose that, we lose all the advantages.”

Where the Army and the thinktankers agree, however, is that we cannot take our current technological superiority for granted. Work and Brimley don’t give an exact date for this vision — that’s why their title begins with 20YY — but they say we need to get started now. As advanced technology spreads around the world, they write, “the dominance enjoyed by the United States in the late 1990s/early 2000s in the areas of high-end sensors, guided weaponry, battle networking, space and cyberspace systems, and stealth technology has started to erode. Moreover, this erosion is now occurring at an accelerated rate.”

The Army’s own analysis showed that the US was already losing its lead in areas such as long-range artillery, that our dominance in unmanned aircraft (drones) would be in danger by the mid-2020s, but that our advantage in robotics and other key areas would remain secure until the 2030s.

“We thought we were too pessimistic,” said Col. Cross. “We thought we’d given the enemy too much credit.” But when the Army shared its analysis with a conclave of academics and engineers held recently at the College of William and Mary, Cross said, their response was that, if anything, “we’ve been too optimistic.”

The issue isn’t just technological: It’s ethical. How smart does a robot have to be before we let it make its own decision whether to shoot? American standards for distinguishing combatants from civilians are higher than those of organizations that regularly blow up combatants and civilians alike with roadside bombs, like the Taliban, or inaccurate rockets, like Hamas and Hezbollah.

“Adversaries won’t play by the rules we play by in terms of the law of war,” warned Col. Felix.

We may not deploy autonomous lethal robots ourselves any time soon, Col. Cross added. But, he warned, “we will fight against robotic platforms in the future.”