And Russia is not alone. Indeed, Moscow and other governments are learning key disinformation tactics from non-state actors. In 2014, ISIS had its Blitzkrieg moment in Mosul. Deftly deploying bots, Twitter accounts, hashtags, and smartphone apps, the terrorist group deceived and overcame U.S.-trained Iraqi Security Forces. The violent campaign went viral in unprecedented ways, recruiting and galvanizing jihadist groups around the world. The United States was hardly immune; many American social-media users saw a doctored screenshot purporting to show an ISIS sniper on a rooftop in Colorado. Today, ISIS’s social media output has fallen off as the group copes with the loss of its so-called physical caliphate in Iraq and Syria, but its weaponization of social media has deeply impressed America’s adversaries and rivals.

These developments suggest a future in which both non-state and state actors will contest the United States through online disinformation campaigns, even while more traditional global power competition tied to geography continues to play out. Moreover, it seems inevitable that the Chinese, Iranians, and others will escalate their malign social media efforts much as the Russians have done. FBI Director Christopher Wray recently acknowledged that other countries have been exploring such influence efforts.

Only time will tell what form these escalating campaigns will take, but they will doubtless continue to harness new technologies. Artificial intelligence is already being used to create deepfakes: fabricated still images and videos so realistic that they can foment fear and confusion at a level that affects national security.

U.S. counterintelligence must go on the offensive to expose disinformation. Today’s influence operations target Americans who are polarized and susceptible to believing that which confirms their predispositions. We need an intelligence community that, in key respects, works very much in the open, identifying disinformation spread by foreign adversaries and swiftly debunking it before it can “go viral” and entrench itself in American minds. Social media, then, can be a potent double-edged sword: a place where disinformation begins to spread but is quickly identified and rebutted.

Getting this right demands a number of key steps. First is better, faster understanding by the U.S. government of what disinformation American adversaries are spreading, or, ideally, anticipation of that spread before it actually happens. That, in turn, requires penetrating foreign governments through traditional intelligence-gathering, but it also means dramatically increasing our investment in “open-source” intelligence: scouring and analyzing what is available to anyone, such as material on the public internet.

Second is, in appropriate circumstances, the swift, clear, and direct intervention of U.S. government spokespersons to expose falsities and provide the truth. That includes unmasking and debunking publicly the disinformation U.S. adversaries are spreading and the sources peddling it—a public-facing role for U.S. intelligence agencies that is unfamiliar but vital in the modern era of influence campaigns.

Third is an expanded set of U.S. government partnerships with technology companies to help them identify disinformation poised to spread across their platforms so that they can craft appropriate responses. While tech companies have unique insights into their own platforms, the government has unique insights across platforms and, moreover, in connecting developments in the virtual and physical worlds. The government’s focus has long been on safeguarding what it knows, but it has become at least as important for the government to share certain information quickly with private-sector actors who can act on it.