The Democratic National Committee sent an urgent alert on Monday to every presidential campaign aimed at avoiding a repeat of the cybersecurity fiasco the party suffered at the hands of Russia and WikiLeaks in 2016.

The subject of the email was “Counter-Disinformation Update,” and it was part of a regular series of communications by DNC Tech, the party’s in-house group responsible for internal security and monitoring the spread of fake news about Democrats.


POLITICO obtained the full archive of DNC Tech’s missives to the presidential campaigns. They reveal a party struggling to combat the continued onslaught of the twin threats faced by the Democratic Party: cyber penetration from state actors abroad and the spread of disinformation about its top presidential candidates by Donald Trump and his allies at home.

Democrats are entering a critical stretch of the campaign when voters are paying more attention, the top candidates and their desperate single-digit rivals are more likely to begin attacking one another, and Trump, facing both impeachment and a slew of general election polls that show him losing to most Democrats, is pillorying Joe Biden in an attempt to shape the nomination contest to his benefit.

It was a moment, the DNC warned, to be hypervigilant about fake news.

“[A]ll campaigns should expect to see heightened disinformation and discourse manipulation activity leading up to, during, and after the debates with the goal of polarizing opposing Democratic supporters,” Monday’s predebate email said. Tech, as the DNC Geek Squad is known inside Washington headquarters, asked each presidential campaign to report “inauthentic or suspicious activity” to the DNC as well as to the major social media platforms (Twitter, Facebook/Instagram, Google/YouTube).


Since 1992, Democratic Party war rooms haven’t changed all that much from those scenes in The War Room showing James Carville and George Stephanopoulos sitting around watching the news and yelling at reporters. But today denizens of Democratic war rooms are more likely to have a computer science background than experience on a big Senate campaign, and the main battlefront after a big debate isn’t Johnny Apple’s front-page interpretation of the event but whether fringe disinformation penetrates the mainstream social media conversation.

In its note to campaigns, the DNC explained that as with previous debates, on Tuesday night Twitter would also have its own cyber war room set up to monitor “debate-related discourse for coordinated manipulation.” Campaigns were instructed to “forward any hashtags your campaign expects to use or expects to be used against you (‘hatetags’) during the debates” to the DNC for monitoring.

During Tuesday’s debate, the DNC will use a software tool called Trendolizer to track trending disinformation. Trendolizer — think TweetDeck but with lots of fancy graphs — scrapes data from social media and across the internet to surface hidden content that is on the cusp of going viral. Its “Hotness” filter alerts users when something new is about to take off. If it’s potentially damaging misinformation about a Democratic candidate, DNC Tech flags it for its “counter-disinformation contact” on the affected campaign and recommends that they reach out to one of the third-party fact-checkers that Facebook relies upon—such as The Associated Press, Poynter, or FactCheck.org.

A fact-check is crucial: Once it’s attached to the news, Facebook will adjust its algorithm and diminish the spread of the story. (In guidance to the campaigns, the DNC is careful to note that “Checkyourfact.com, which is owned in part by Tucker Carlson, is another Facebook third-party fact-checker that the DNC is not in communication with.”)


The DNC flag-and-respond operation is more art than science. Campaigns can ask the DNC to turn the dials up and down on Trendolizer to figure out how much a "misinfo" threat has to spread before they are alerted. It’s up to the individual campaigns to figure out whether it’s worth worrying about the fake story or crafting a response.
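The “dials” campaigns can tune amount to a sensitivity threshold on how fast a story is spreading before an alert fires. A minimal sketch of that kind of velocity check, assuming a Trendolizer-like alert; every name, number, and function below is illustrative, not Trendolizer’s actual interface:

```python
# Hypothetical sketch of "hotness"-style alerting: flag a story when its
# latest share velocity jumps well above its recent baseline rate.
# The threshold parameter is the "dial" a campaign would turn up or down.
from statistics import mean

def is_spiking(share_counts, threshold=3.0):
    """Return True if the latest growth far exceeds the recent average.

    share_counts: cumulative share totals sampled at regular intervals.
    threshold: how many times the baseline growth rate the newest jump
    must reach before an alert fires.
    """
    if len(share_counts) < 3:
        return False  # not enough history to establish a baseline
    deltas = [b - a for a, b in zip(share_counts, share_counts[1:])]
    baseline = mean(deltas[:-1]) or 1  # guard against a zero baseline
    return deltas[-1] / baseline >= threshold

# A story idling at ~50 new shares per interval, then jumping to 400:
samples = [100, 150, 200, 250, 650]
print(is_spiking(samples))  # True: the final jump is 8x the baseline rate
```

Turning the dial down (a lower threshold) alerts a campaign earlier, at the cost of more false alarms; turning it up waits for clearer evidence that a story is actually taking off.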

“When contemplating a response to disinformation narratives, campaigns should consider whether misinformation has reached a tipping point where the costs of ignoring the issue are higher than the costs of the amplification that a response might generate,” the DNC privately instructed presidential campaigns on Monday.

In a previous email the party argued that a good indication of when to respond is when “the information has left closed communities,” such as 4chan or a fringe blog or social media account. (The DNC views the president’s son, Donald Trump Jr., as a frequent conveyor belt for disinformation, pulling it out of closed communities and dumping it into the mainstream media, and so they watch his account carefully.)

“It is up to every campaign to figure out where that inflection point is,” said a senior member of the DNC Tech team. “When and how do you push back?”

In the age of political disinformation that may be the most important question facing any Democratic presidential campaign.

A CNN sign hangs on the side of a building near an athletic field outside the Clements Recreation Center, where CNN and The New York Times will host the Democratic presidential primary debate at Otterbein University in Westerville, Ohio. | John Minchillo/AP Photo

It’s a cliché to say that Democrats care, above all else, about electability this year. That fuzzy concept has been defined in different ways. For some Democratic voters, the most electable candidate is the one with the best policies to contrast with Trump. For others, the concept of electability is about identity, with some voters arguing that only an old white guy can beat Trump, the candidate of white grievance, and others countering that only a woman or person of color can reassemble the diverse — and winning — Obama coalition. Sometimes electability is defined as the absence of some glaring personal or political flaw — a candidate’s age, voting record, or perceived lack of authenticity.

But what isn’t talked about as much is what might make a Democratic nominee “electable” in an era dominated by disinformation warfare: whether the candidate has a strategy and team to combat the kind of false attacks that Trump used to nuke Hillary Clinton and is now beta-testing against Biden. That capability is undoubtedly more difficult for Democratic voters to assess than a candidate's typical attributes and flaws.

“If there’s some Democrat watching the field saying, ‘Oh I think X would be better to take on Trump because X doesn’t have to worry about Y,’ they are naive,” said Philippe Reines, Clinton’s longtime confidant who has advised several 2020 campaigns about the lessons he learned — and is psychologically scarred by — running against Trump in 2016. “Because Y is either something that marginally exists that they’re going to blow up or Y is something that they’re going to make up. So everyone is going to end up in the same boat. So it really comes down to who is best to handle it, not whose background is freest of vulnerabilities.”

'Dogpiling' and the spotlight effect

Combating disinformation is a collective action problem. The DNC is modest about what its early detection system can accomplish and is quick to direct reporters to the campaigns. The campaigns often send you back to the DNC. “If you’re already talking to the DNC I think that’s your best bet,” a spokesperson for Elizabeth Warren said. “They track it better than most campaigns have the capacity to at this point.”


While the campaigns and the DNC often browbeat the mainstream media into handling disinformation more responsibly, they can’t do much more than that. Social media platforms, Democrats universally argue, are the most responsible for spreading disinformation — and the least responsive to correcting it. A campaign can call any editor at The New York Times. But when a campaign wants to complain about a viral YouTube video accusing a top candidate of hiding a meth lab in his basement, it is instructed to send an email to an anonymous address (civics-outreach@google.com).

Despite the limitations, DNC Tech’s archive of confidential correspondence to the Democratic presidential campaigns shows the party has done quite a bit. “I’m not one to say the DNC is pitching a perfect game but they have been really forward thinking about misinformation,” said one top adviser to a presidential candidate. “It’s the only thing I’ve seen that works.”

By mid-September, the DNC had reported more than 40 “misinformation incidents,” leading to the removal of over 4,000 social media accounts. It helped campaigns connect with an assigned FBI field agent and a contact at the Department of Homeland Security. The DNC also regularly warns of sophisticated new phishing attacks: recent alerts included an attempt to compromise accounts via a fake calendar invite and a warning about the Iranian hacking group Phosphorus, which Microsoft informed the DNC was “attacking accounts of journalists, politicians, and at least one presidential campaign,” reportedly Trump’s.

After the September debate, DNC Tech sent out a summary of suspicious activity it spotted by monitoring Twitter. The party found that a Twitter thread by Beto O’Rourke was the target of “dogpiling,” the technical term for when trolls coordinate on one thread to dominate the candidate’s mentions. The Beto dogpile concerned a popular meme that says Beto is a furry.

Not everything deserves a response. When the DNC reached out and reported the furry dogpile to O’Rourke’s campaign, the DNC staffer noted, “I haven't flagged this activity to Twitter as I think the backlash to any action taken by them might be worse than the current activity.”

In addition to the dogpile, the DNC reported after the September debate “[a]t least three new right-wing narratives targeting candidates,” and that there was a “direct threat to one candidate, which Twitter removed in the early morning following the debate.”

The O’Rourke campaign recently faced a false story more concerning to them than the (mostly) harmless furry meme. Last month, a story spread online that the Odessa shooter, who killed seven people, had a Beto sticker on his car. “It’s the perfect kind of misinfo because it seems possible,” said Rob Flaherty, O’Rourke’s digital director. “It would undercut the whole narrative. If it were a reporter, we could just call them and they would issue a correction.”

As Trendolizer showed the story spiking in fringe corners of social media, the DNC alerted the O’Rourke campaign, which watched as the story seemed to metastasize. Or did it?


In psychology there’s something called the spotlight effect: the human tendency to overestimate how much other people notice about you. In reality most people probably don’t notice or care about the mole on your chin that you’re anxious about or that you’re having a bad hair day.

Presidential campaigns spend every day debating the spotlight effect. Flaherty and his team had to decide whether everyone else might be paying close attention to the same thing they were obsessively monitoring.

“We had internal conversation about what to do,” he said. “There’s no playbook. We debated: ‘Do you say anything to fuel it or not? Are you at a tipping point where it’s worth having that conversation or do you let it die out?’”

Richard Stengel, in his new book, Information Wars: How We Lost the Global Battle Against Disinformation and What We Can Do About It, notes that a body of research suggests that repeating a lie, even in a fact-check, just spreads it. “We don’t have a fake news problem, we have a media literacy problem,” he told me, citing a study about how repeating falsehoods creates what one academic calls a “belief echo.”

“The solution is to not state the false belief,” he said. “If you put the problem in a marketing context, you kill a brand by ignoring it. The head of McDonald’s never talks about Wendy’s. It varies case by case, but if I were a political strategist I would contest Trump on content reasons and policy reasons, but not attack him on his own lies.”

This is an age-old debate in politics. But most campaigns and Democratic strategists are moving away from the ignore-the-brand view.

“For 15 years, at least half the time I, or we, would say to Hillary, ‘Let’s not give that oxygen,’” Reines said. “And it was a mistake! You can’t let anything go. We will all go to our deaths never hearing Donald Trump say, ‘I won’t dignify that with an answer.’”

Inside the O’Rourke campaign, his aides watched with increasing nervousness as versions of the false Odessa story gained likes and retweets. The main version of the story grew to 34,000 shares on Facebook (tipping point!). Then again it only had a suspicious 46 comments (spotlight effect!). Eventually O’Rourke’s campaign decided it should respond to the fake news in a thread by campaign manager Jen O’Malley Dillon.


Flaherty noted several lessons about the episode. One was to err on the side of responding rather than ignoring.

“There’s value in being more aggressive about this stuff,” he said. “Traditional political communications tells you that if there’s something in the grocery store tabloid it’s not worth talking about it because it just raises it up. The lesson for us is to be confrontational.”

The second lesson is that there is a generational divide among Democratic operatives. Older comms people who come out of the grocery store tabloid era are still more likely to argue for trying to deny a story oxygen, while younger staffers, who came of age entirely on social media and are unimpressed by the spotlight effect theory, want to respond to everything.

The final lesson Flaherty took, which is widely shared by almost everyone I talked to about combating political disinformation, is that Twitter, Facebook and Google are the problem.

“The platforms aren’t up to it,” he said. “Why is the responsibility on me as the campaign’s digital director? It’s ludicrous to me that my place in the ecosystem is to flag this stuff for them rather than them coming to me and saying, ‘Hey, you might have a problem!’”

While the disinformation crisis will be among the greatest challenges for whomever the Democrats nominate, only a few campaigns have had to grapple with serious threats so far. An aide to Cory Booker — who has been vocal in defending Biden from Trump’s false attack linking Biden’s vice presidential diplomacy in Ukraine to his son’s private-sector work advising a Ukraine energy company — pointed out that disinformation has been the presidential campaign equivalent of a first world problem: Only the longtime front-runner has been seriously affected. But the adviser, like Booker, was sympathetic to the bind that Biden is in, especially when it comes to Facebook’s much-criticized decision to exempt political advertising from its fact-checking regime.

“Say the Trump campaign started running an ad that said Cory ran over a dog and started calling him 'Dog Killer,'” the aide said, adding that Booker has not, in fact, ever killed a dog. “It’s not true. We all know it’s not true. But Facebook would promote that. Then you might be forcing groups like PETA to weigh in, because they would say, ‘Oh this is obviously terrible even though it’s untrue.’ Eventually you’re building it up into this whirlwind that actually shouldn’t exist anyway because Facebook should do the responsible thing and not run that ad.”

Facebook has repeatedly responded that it won’t police the speech of politicians. “We don’t believe,” Nick Clegg, the top communications exec at the company, said recently, “that it’s an appropriate role for us to referee political debates and prevent a politician’s speech from reaching its audience and being subject to public debate and scrutiny. That’s why Facebook exempts politicians from our third-party fact-checking program.”


Biden’s response to Trump’s false attack about Ukraine has received mixed reviews. Most senior advisers at rival campaigns did not want to criticize Biden’s team either because it seemed like blaming the victim or because it’s too early to know how to assess his strategy.

But several things stand out. The first is that Biden's campaign waited longer to respond than the most aggressive outside agitators, like Reines, insist is necessary in 2019. The increasingly popular hawkish view in Democratic circles is that if Trump attacks with a false charge, the target candidate — not his or her surrogates — needs to immediately and forcefully respond.

Biden is not generally considered a candidate with strong social media skills, and his long delay in confronting Trump on the issue directly and on camera suggests that the campaign was paralyzed with the traditional debate about the dangers of repeating the lie. (Two Biden aides confirm this was the case.)

The campaign was more effective in working the refs. Biden’s team sent a flurry of letters to news organizations and social media platforms. The news side of the mainstream media has overwhelmingly responded by carefully reporting that there’s no evidence that Biden tried to help his son. The advertising side of networks, like MSNBC and CNN, responded by refusing to run a Trump ad that included the false allegation.

But the final lesson, not surprisingly, is that Twitter, Facebook and Google have been unmoved by the Biden campaign’s appeal to halt the spread of false information. A consensus is emerging in Democratic politics that these platforms are the greatest threat to the party’s eventual nominee.

Several advisers to presidential candidates noted that in this new front against the social media giants, it was Warren’s clever effort to shame Mark Zuckerberg that stood out. The Warren campaign ran an ad on Facebook that included a false statement about the platform’s founder and CEO — that he endorsed Trump for reelection — that was perhaps more effective than the Biden campaign’s stern letter to the company.

There is so much anger at Facebook’s role in spreading disinformation that the Booker adviser has a fantasy for Tuesday’s debate that would be an even more high-profile version of Warren’s ploy.

“What if every candidate on the debate stage just got up there and started spewing absolute lies about Mark Zuckerberg?” the aide said. “I’m sure he wouldn’t like that.”