Ever since the United States dropped atomic bombs on Hiroshima and Nagasaki in 1945, the most powerful militaries in the world have been those armed with nuclear weapons. After all, a massive army means little compared with the ability to level an entire city in a single explosion. But technologies change over time, and a new report suggests artificial intelligence could soon surpass nuclear weapons as the world’s greatest military threat.

The report, which comes from Harvard University’s Belfer Center for Science and International Affairs at the request of the research arm of the Office of the Director of National Intelligence, suggests A.I. could give smaller countries unable to amass big armies or develop nuclear weapons a way to rival even superpowers militarily. While the United States stands to benefit disproportionately from its huge investments in A.I., it doesn’t take overwhelming resources for a country to use machine-learning systems to create dangerous cyberwarfare applications. A small, malicious hack could be enough to cripple huge swaths of another country’s weapons capabilities — including its ability to launch nukes.

Here’s another example: Many U.S. military operations rely heavily on drones to target hostile parties and eliminate them. If another country wanted to stop those operations, it wouldn’t necessarily need to build and launch air-defense systems — it might simply need to find a way through the military’s cyber defenses in order to neutralize those drones. Smaller nations that cannot invest in high-powered missiles and conventional weapons may only need to find some wunderkind hackers who can bypass digital firewalls.

Part of the report states:

“Since cyber capabilities were far cheaper than their non-cyber equivalents, smaller states with less powerful militaries also made use of cyber. Ethiopia and many other governments, for example, used cyber tools to monitor political dissidents abroad. Likewise, hostile non-state actors, including both criminals and terrorists, have made effective use of cyber tools for geographically dispersed activities that would be much more difficult to execute in the physical domain. In the near term, the Cambrian explosion of robotics and autonomy is likely to have similar impacts for power diffusion as the rise of national security operations in the cyber domain did.”

The other major takeaway digs into the implications of automated weapons systems. The report argues the National Security Council, the Department of Defense, the State Department, and other federal institutions need to start working now on an international framework for regulating A.I.-based weapons systems. Cyberwarfare won’t be going away, obviously, but the idea of automated weapons, like, say, killer robots, makes more than a few people incredibly nervous.

Figuring out how to prepare for A.I. weaponry won’t be easy. There is no established definition of an automated weapon, and any attempt to impose a specific standard on what is and isn’t one will spark a very contentious debate. Nevertheless, the report says it behooves the U.S. and its partners to figure out how best to limit the proliferation of A.I.-based weapons, the same way they have for biological and chemical weapons.

Nuclear weapons might have the potential to end the species, but A.I. has just as much of a potential to irrevocably transform how humans do their fighting. Without a better handle on how we should use these systems, things could get out of hand pretty fast.