The maker of the famous AK-47 rifle is building “a range of products based on neural networks,” including a “fully automated combat module” that can identify and shoot at its targets. That’s what Kalashnikov spokeswoman Sofiya Ivanova told TASS, a Russian government information agency, last week. It’s the latest illustration of how the U.S. and Russia differ as they develop artificial intelligence and robotics for warfare.

The Kalashnikov “combat module” will consist of a gun connected to a console that constantly crunches image data “to identify targets and make decisions,” Ivanova told TASS. A Kalashnikov photo that ran with the TASS piece showed a turret-mounted weapon that appeared to fire rounds of roughly 25mm.

Kalashnikov did not respond to a request for comment before press time.

Kalashnikov’s new gun isn’t the first reported Russian-made lethal robot. In 2014, officials with the Russian Strategic Missile Force said they would begin deploying armed sentry robots that could autonomously spot and shoot at intruders.

Russian weapons makers see robotics (and the artificial intelligences driving them) as key to future sales, according to Sergey Denisentsev, a visiting fellow at the Center for Strategic and International Studies. “There is a need to look for new market niches such as electronic warfare systems, small submarines and robots, but that will require strong promotional effort because a new technology sometimes finds it hard to find a buyer and to convince the buyer that he really needs it,” Denisentsev said in April.

Already, Russian government information agencies have claimed that Russian-made ground battle robots are securing victories in Syria. Last year, Sputnik reported that Syrian government forces used a combination of Russian small drones and heavily armed tank robots to kill 70 enemy combatants in Latakia province. But Bellingcat’s Aric Toler revealed the claim as likely false, and definitely poorly sourced. “Sputnik simply rephrased and reposted a crude, fake blog entry from a Russian social network,” he wrote.

Russia’s willingness to embrace lethal autonomy stands in stark contrast to U.S. policy. In 2012, then-Deputy Defense Secretary Ash Carter (later defense secretary) signed a directive forbidding the U.S. military from allowing any robot or machine to take lethal action without the supervision of a human operator.

In 2015, then-Deputy Defense Secretary Bob Work said fully automated killing machines were un-American. “I will make a hypothesis: that authoritarian regimes who believe people are weaknesses,” he said, “that they cannot be trusted, they will naturally gravitate toward totally automated solutions. Why do I know that? Because that is exactly the way the Soviets conceived of their reconnaissance strike complex. It was going to be completely automated. We believe that the advantage we have as we start this competition is our people.”

Work is the brains behind the so-called Third Offset strategy, the Pentagon’s grand scheme for securing military advantage over China and Russia by developing artificial intelligence and autonomy, among other technologies. But Work is slated to be replaced by former Boeing executive Patrick Shanahan.

Still, Dale Ormond, who directs research at the Office of the Assistant Secretary of Defense for Research and Engineering, said at Thursday’s Defense One Tech Summit that he did “not foresee that Department of Defense would give AI the ability to make decisions on executing lethal force.”

Instead, the U.S. military wants its AI to focus first on helping intelligence analysts sift through data and make faster decisions. Said Ardisson Lyons, the science and technology director at the Defense Intelligence Agency: “Some of the best breakthroughs we have on North Korea are AI-derived.”

After that? “The Department of Defense is starting with computer vision because there’s been immense breakthroughs from the research into self-driving cars, but everything will be touched,” said Marine Col. Drew Cukor, chief of the Algorithmic Warfare Cross Function Team in the office of the defense undersecretary for intelligence. “We have an acquisition community that is larger than the entire Marine Corps. We have a large maintenance corps that could be helped by the kind of predictive AI already in use by commercial aviation companies.”

Ultimately, he said at the Tech Summit, “We hope the flame-front of AI burns through the entire department.”

Brad Peniston contributed to this report.