There's a race to "inoculate" consumers against false and inflammatory content ahead of the 2020 elections, according to the head of R&D for an Alphabet subsidiary that monitors online disinformation. But two trends are going to be particularly hard to stop: "deepfake" videos, which are altered to show speakers saying things they never said, and propagandists using real videos out of context.

Yasmin Green is the director of research and development at Jigsaw, an Alphabet subsidiary created to monitor abuse, harassment and disinformation online. She was speaking on a panel of experts in disinformation at the Aspen Institute Cyber Summit in New York on Wednesday.

Election influence is likely to be pushed through different channels, on different websites and using different techniques than in 2016, Green said. Social media companies and researchers such as those at Jigsaw are working both to pinpoint these new or expanded techniques and to find "interventions" that protect free speech while alerting consumers to the authenticity of what they're consuming.

"I'm not as worried about faked accounts at this time," Green said, referring to the fake social media accounts on Twitter and Facebook, some created years before the 2016 election, that were used to sow discord among voters. Social media companies are doing a better job of removing those accounts, and would-be trolls now have to "start from scratch."

"I do commend Facebook, and I see them doing a lot," she said.

Instead, consumers should expect trolls to use a far wider variety of platforms in the upcoming elections, especially platforms that don't have a strong advertising business like the social media giants do.