By Matthew Rosenberg and John Markoff, New York Times

CAMP EDWARDS, Mass. — The small drone, with its six whirring rotors, swept past the replica of a Middle Eastern village and closed in on a mosque-like structure, its camera scanning for targets.

No humans were remotely piloting the drone, which was nothing more than a machine that could be bought on Amazon. But armed with advanced artificial intelligence software, it had been transformed into a robot that could find and identify the half-dozen men carrying replicas of AK-47s around the village and pretending to be insurgents.

As the drone descended slightly, a purple rectangle flickered on a video feed that was being relayed to engineers monitoring the test. The drone had locked onto a man obscured in the shadows, a display of hunting prowess that offered an eerie preview of how the Pentagon plans to transform warfare.

Almost unnoticed outside defense circles, the Pentagon has put artificial intelligence at the center of its strategy to maintain the United States’ position as the world’s dominant military power. It is spending billions of dollars to develop what it calls autonomous and semiautonomous weapons and to build an arsenal stocked with the kind of weaponry that until now has existed only in Hollywood movies and science fiction. The effort has raised alarm among scientists and activists concerned by the implications of a robot arms race.

The Defense Department is designing robotic fighter jets that would fly into combat alongside manned aircraft. It has tested missiles that can decide what to attack, and it has built ships that can hunt for enemy submarines, stalking those it finds over thousands of miles, without any help from humans.

“If Stanley Kubrick directed ‘Dr. Strangelove’ again, it would be about the issue of autonomous weapons,” said Michael Schrage, a research fellow at the Massachusetts Institute of Technology Sloan School of Management.

Defense officials say the weapons are needed for the United States to maintain its military edge over China, Russia and other rivals, which are also pouring money into similar research (as are allies such as Britain and Israel). The Pentagon’s latest budget outlined $18 billion to be spent over three years on technologies that included those needed for autonomous weapons.

“China and Russia are developing battle networks that are as good as our own. They can see as far as ours can see; they can throw guided munitions as far as we can,” said Robert O. Work, the deputy defense secretary, who has been a driving force for the development of autonomous weapons. “What we want to do is just make sure that we would be able to win as quickly as we have been able to do in the past.”

Just as the Industrial Revolution spurred the creation of powerful and destructive machines like airplanes and tanks that diminished the role of individual soldiers, artificial intelligence technology is enabling the Pentagon to reorder the places of man and machine on the battlefield, much as it is transforming ordinary life with computers that can see, hear and speak and with cars that can drive themselves.

The new weapons would offer speed and precision unmatched by any human while reducing the number — and cost — of soldiers and pilots exposed to potential death and dismemberment in battle. The challenge for the Pentagon is to ensure that the weapons are reliable partners for humans and not potential threats to them.

At the core of the strategic shift envisioned by the Pentagon is a concept that officials call centaur warfighting. Named for the half-man, half-horse creature of Greek mythology, the strategy emphasizes human control, treating autonomous weapons as ways to augment and magnify the creativity and problem-solving skills of soldiers, pilots and sailors, not replace them.

The weapons, in the Pentagon’s vision, would be less like the Terminator and more like the comic-book superhero Iron Man, Work said in an interview.

“There’s so much fear out there about killer robots and Skynet,” the murderous artificial intelligence network of the “Terminator” movies, Work said. “That’s not the way we envision it at all.”

When it comes to decisions over life and death, “there will always be a man in the loop,” he said.

Beyond the Pentagon, though, there is deep skepticism that such limits will remain in place once the technologies to create thinking weapons are perfected. Hundreds of scientists and experts warned in an open letter last year that developing even the dumbest of intelligent weapons risked setting off a global arms race. The result, the letter warned, would be fully independent robots that can kill, and that are cheap and as readily available to rogue states and violent extremists as they are to great powers.

“Autonomous weapons will become the Kalashnikovs of tomorrow,” the letter said.

The debate within the military is no longer about whether to build autonomous weapons but how much independence to give them. Gen. Paul J. Selva of the Air Force, the vice chairman of the Joint Chiefs of Staff, said recently that the United States was about a decade away from having the technology to build a fully independent robot that could decide on its own whom and when to kill, though it had no intention of building one.

Other countries were not far behind, and it was very likely that someone would eventually try to unleash “something like a Terminator,” Selva said, invoking what seems to be a common reference in any discussion on autonomous weapons.

Yet U.S. officials are only just beginning to contend with the implications of weapons that could someday operate independently, beyond the control of their developers. Inside the Pentagon, the quandary is known as the Terminator conundrum, and there is no consensus about whether the United States should seek international treaties to try to ban the creation of those weapons, or build its own to match those its enemies might create.

For now, though, the current state of the art is decidedly less frightening. Exhibit A: the small, unarmed drone tested this summer on Cape Cod.

It could not turn itself on and just fly off. It had to be told by humans where to go and what to look for. But once aloft, it decided on its own how to execute its orders.

The software powering the drone has been in development for about a year, and it was far from flawless during the day of trials. In one pass over the mosque, the drone struggled to decide whether a minaret was an architectural feature or an armed man, living up to its namesake, Bender, the bumbling robot in the animated television series “Futurama.”

At other moments, though, the drone showed a spooky ability to discern soldier from civilian, and to fluidly shift course and move in on objects it could not quickly identify.

Armed with a variation of human and facial recognition software used by U.S. intelligence agencies, the drone adroitly tracked moving cars and picked out enemies hiding along walls. It even correctly figured out that no threat was posed by a photographer who was crouching, camera raised to eye level and pointed at the drone, a situation that has confused human soldiers with fatal results.

The project is run by the Defense Advanced Research Projects Agency, known as DARPA, which is developing the software needed for machines that could work with small units of soldiers or Marines as scouts or in other roles.

Unlike the drones currently used by the military, all of which require someone at a remote control, “this one doesn’t,” said Maj. Christopher Orlowski of the Army, a program manager at DARPA. “It works with you. It’s like having another head in the fight.”

It could also easily be armed. The tricky part is developing machines whose behavior is predictable enough that they can be safely deployed, yet flexible enough that they can handle fluid situations. Once that is mastered, telling them whom or what to shoot is easy; weapons programmed to hit only certain kinds of targets already exist.

Yet the behavioral technology, if successfully developed, is unlikely to remain solely in American hands. Technologies developed at DARPA do not typically remain secret, and many are now ubiquitous, powering everything from self-driving cars to the internet.

Since the 1950s, U.S. military strategy has been based on overwhelming technological advantages. A superior nuclear arsenal provided the American edge in the early days of the Cold War, and guided munitions — the so-called smart bombs of the late 20th century — did the same in the conflict’s final decade.

Those advantages have now evaporated, and of all the new technologies that have emerged in recent decades, such as genomics or miniaturization, “the one thing that has the widest application to the widest number of DOD missions is artificial intelligence and autonomy,” Work said.

Today’s software has its limits, though. Computers spot patterns far faster than any human can. But the ability to handle uncertainty and unpredictability remains, for now, a uniquely human virtue.

Bringing the two complementary skill sets together is the Pentagon’s goal with centaur warfighting.

Work, 63, first proposed the concept when he led a Washington think tank, the Center for a New American Security. His inspiration, he said, was not found in typical sources of military strategy — Sun Tzu or Clausewitz, for instance — but in the work of Tyler Cowen, a blogger and economist at George Mason University.

In his 2013 book, “Average Is Over,” Cowen briefly mentioned how two average human chess players, working with three regular computers, were able to beat both human chess champions and chess-playing supercomputers.

It was a revelation for Work. You could “use the tactical ingenuity of the computer to improve the strategic ingenuity of the human,” he said.

Work believes a lesson learned in chess can be applied to the battlefield, and he envisions a military supercharged by artificial intelligence. Brilliant computers would transform ordinary commanders into master tacticians. American soldiers would effectively become superhuman, fighting alongside — or even inside — robots.

Of the $18 billion the Pentagon is spending on new technologies, $3 billion has been set aside specifically for “human-machine combat teaming” over the next five years. It is a relatively small sum by Pentagon standards — its annual budget is more than $500 billion — but still a significant bet on technologies and a strategic concept that have yet to be proved in battle.

At the same time, Pentagon officials say that the United States is unlikely to gain an absolute technological advantage over its competitors.

“A lot of the AI and autonomy is happening in the commercial world, so all sorts of competitors are going to be able to use it in ways that surprise us,” Work said.

The American advantage, he said, will ultimately come from a mix of technological prowess and the critical thinking and decision-making powers that the U.S. military prioritizes. The U.S. military delegates significant decisions down its chain of command, in contrast to the more centralized Chinese and Russian armed forces, though that is changing.

“We’re pretty confident that we have an advantage as we start the competition,” Work said. “But how it goes over time, we’re not going to make any assumptions.”

Experts outside the Pentagon are far less convinced that the United States will be able to maintain its dominance by using artificial intelligence. The defense industry no longer drives research the way it did during the Cold War, and the Pentagon does not have a monopoly on the cutting-edge machine-learning technologies coming from startups in Silicon Valley, and in Europe and Asia.

Unlike the technologies and material needed for nuclear weapons or guided missiles, artificial intelligence as powerful as what the Pentagon seeks to harness is already deeply woven into everyday life. Military technology is often years behind what can be picked up at Best Buy.

“Let’s be honest, American defense contractors can be really cutting edge on some things and really behind the curve on others,” said Maj. Brian Healy, 38, an F-35 pilot. The F-35, America’s newest and most technologically advanced fighter jet, is equipped with a voice command system that is good for changing channels on the radio, and not much else.

“It would be great to get Apple or Google on board with some of the software development,” he added.

Beyond the practical concerns, the pairing of increasingly capable automation with weapons has prompted an intensifying debate among legal scholars and ethicists. The questions are numerous, and the answers contentious: Can a machine be trusted with lethal force? Who is at fault if a robot attacks a hospital or a school? Is being killed by a machine a greater violation of human dignity than if the fatal blow is delivered by a human?

A Pentagon directive says that autonomous weapons must employ “appropriate levels of human judgment.” Scientists and human rights experts say the standard is far too broad and have urged that such weapons be subject to “meaningful human control.”

But would any standard hold up if the United States was faced with an adversary of near or equal might that was using fully autonomous weapons? Peter Singer, a specialist on the future of war at New America, a think tank in Washington, suggested there was an instructive parallel in the history of submarine warfare.

Like autonomous weapons, submarines jumped from the pages of science fiction to reality. During World War I, Germany’s use of submarines to sink civilian ships without first ensuring the safety of the crew and passengers was seen as barbaric. The practice quickly became known as unrestricted submarine warfare, and it helped draw the United States into the war.

After the war, the United States helped negotiate an international treaty that sought to ban unrestricted submarine warfare.

Then came the Japanese attack on Pearl Harbor on Dec. 7, 1941. That day, it took just six hours for the U.S. military to disregard decades of legal and ethical norms and order unrestricted submarine warfare against Japan. American submarines went on to devastate Japan’s civilian merchant fleet during World War II, in a campaign that was later acknowledged to be tantamount to a war crime.

“The point is, what happens once submarines are no longer a new technology, and we’re losing?” Singer said. He added: “Think about robots, things we say we wouldn’t do now, in a different kind of war.”
