Could the United States ever go to war to defend our access to Facebook? It sounds crazy, but in an effort to tame the wild west of cyberwarfare, this is the kind of question that ethicists and policymakers are currently scrambling to address.

Cyberwarfare will dramatically alter the face of war, ushering in an era in which warriors battle online and with code instead of on the battlefield with bullets. Political tensions boiled to the surface two weeks ago, for example, with the United States’ surprising indictment of five hackers allegedly affiliated with China’s People’s Liberation Army. The high-profile Stuxnet attack, discovered in 2010, is another example: a computer worm, introduced via a flash drive, crippled Iran’s nuclear program. Centuries of thought have been devoted to the ethics of war, but can that body of thought accommodate the unique issues posed by cyberwarfare?

A recent report by the National Academy of Sciences suggests a code of ethics for cyberwarfare, as well as for other existing and emerging technologies. A conference at Notre Dame brought together experts to discuss various facets of the report, and that conversation continued at an event we organized at the International Committee of the Red Cross in Geneva, Switzerland. What makes the conversation both interesting and challenging is that it draws on a wide range of disciplinary backgrounds: moral philosophy, international humanitarian law, military operations, and national security.

Just to see what sorts of issues are in play, consider a comment made by a U.S. official in 2011: “If you shut down our power grid, maybe we will put a missile down one of your smokestacks.” One thing that makes cyberwarfare so profound is exactly the sentiment expressed in this (now infamous) comment. How do the cyber and the conventional—or “kinetic”—relate? Can cyberattacks justify kinetic responses?

Article 51 of the U.N. Charter grants a state the right of self-defense in the light of an “armed attack” against it. This notion of an armed attack made sense at the time of the charter’s entry into force in 1945. Back then, the world was reeling from the German invasion and occupation of most of Europe. Certainly the international community wanted to prevent acts like these, and so drafted Article 51 to prevent that kind of aggression in the future.

But could an attack in cyberspace be an “armed attack” that triggers the right of self-defense? It is not yet clear how international humanitarian law would answer that question, but suffice it to say the specifics are going to matter. Even setting aside the legal considerations and turning to the ethical ones, is retaliatory force on the table?

One thing worth keeping in mind is that an “armed attack” need not be limited to direct violence against individuals. Of course, direct violence would count; individuals can kill in self-defense when they are confronted with lethal force. But suppose terrorists were going to burn down the crops on which some isolated village depended for subsistence. There is still clearly a threat to life, given the prospect of mass starvation. And most people think that these would-be terrorists would be fair game for lethal attack.

But the cases can get muddy quickly. What if the enemy force were going to disrupt the water supply to a town, causing serious inconvenience short of death? Or do something with a substantial and adverse effect on the economy, like seizing natural resources? Or knock the milk supply offline? Or fiddle with the stock market? The transition to cyberattacks can look seamless, but has anything substantial changed?

One plausible answer is that serious meddling with the economy, or with state sovereignty, can trigger the right of self-defense, even if that interference is nonlethal. Crashing the stock market, for example, would have catastrophic effects throughout society. Some people might think that tanking the Dow Jones would warrant lethal force—though Occupy Wall Street veterans would likely disagree.

But what about when the threat gets even more minor? Imagine, not a serious threat to the economy, but some sort of persistent, nagging inconvenience. For example, imagine that hackers coordinate an extensive denial-of-service attack, knocking millions of Americans offline for days. Or say that those hackers get into the computers of millions of Americans, slowing the machines down substantially, though not rendering them inoperable. Imagine they could pull Facebook down for a few hours, a few days, or a few weeks.

Setting aside the technical feasibility of such attacks, they raise the question of how a state may respond. Should states be able to defend themselves with lethal force against nonlethal but inconvenient deprivations? There are two starkly different ways to answer this question.

One way to look at it is that a bunch of minor inconveniences can add up to justify the use of lethal force. This is where Facebook comes in: If millions of Americans are going to be knocked off the website for a week, we might think that their collective inconvenience could, at least in theory, make the attackers liable to lethal attack. Sure, being knocked off Facebook isn’t that important, but if enough people are knocked off and suffer enough frustration for long enough, we might think that is just as morally bad as killing a few people. And killing a few people would definitely be enough to justify a lethal response.

The other way to look at this is that the inconvenience, however extensive, just can’t justify the use of lethal force; to use the technical term, these are “incommensurable” harms. It doesn’t matter whether a thousand people are knocked off Facebook, or a million, or a billion. It doesn’t matter whether they are knocked off Facebook for an hour, for a week, or forever. Missing enough of your distant relatives’ updates about Dancing With the Stars or photo albums of drunken escapades could never add up to justify a killing.

When we attended the International Committee of the Red Cross conference in Geneva recently, this proved a divisive issue. Of course it doesn’t really have anything to do with Facebook—the issue is a structural one, going to the very basis of the legitimate use of force. David Rodin, a moral philosopher at Oxford, argued for this second view: Killing and inconvenience are different kinds of threats, and no number of inconveniences could ever be as morally bad as a single death. But we’re more inclined—albeit cautiously—to the other view, namely that enough of one sort of harm could outweigh the other.

In one sense, this is a deeply theoretical question about the ethics of war. In another, though, it is an eminently practical issue, and one that will almost surely be confronted under the aegis of cyberwarfare. The kind of cyberwarfare the world has seen so far is a persistent low boil of economic espionage and theft of intellectual property: witness the recent Justice Department indictment of five members of China’s People’s Liberation Army for stealing corporate secrets. If this kind of persistent annoyance continues long enough, it could eventually have significant economic consequences for America, consequences perhaps even more dire than a loss of life. And the international community will soon have to decide whether states are allowed to resort to war to defend against it.

The authors would like to thank the National Science Foundation for supporting this project, as well as conference participants at both the University of Notre Dame and the International Committee of the Red Cross.

This article is part of Future Tense, a collaboration among Arizona State University, the New America Foundation, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page. You can also follow us on Twitter.