Industrial-scale creators of fake news are becoming increasingly savvy in their efforts to avoid new web platform rules, defensive AI and readers on guard for propaganda.

Why it matters: The tactics used by bad actors during the last election cycle have been modified to avoid more sophisticated detection and to take advantage of new technologies, making some of them harder to identify and stop in real time.

The big picture: In 2016, many commonly used tactics focused on spreading fake news widely, loudly and clumsily. Many of these efforts have been weeded out today by updated and more sophisticated technologies like artificial intelligence (AI).

Bad actors created domains such as abc.com.co or usatoday.com.co. A Google Doc from media professor Melissa Zimdars was spread widely in late 2016 warning users about these types of domains. But a similar tactic is still used by politicians and political groups.

Platforms have cracked down on cloaking, a tactic that gets people to click something, often a video, by misleading them about its content.

Between the lines: Bad actors are looking to mimic more normal communications, instead of spewing inflammatory commentary that could get them flagged for spreading hate or violence.

"The days of Twitter rage are gone," says Padraic Ryan, senior journalist at Storyful, a social media intelligence company. "Language and behaviors are becoming a lot more sophisticated and human-like to avoid detection."

What's next:

In 2018 and beyond, it's all about avoiding detection.

Fake accounts and botnets remain widely used to spread false news and information.

But now that platforms are prioritizing the removal of millions of fake accounts, bad actors are looking to hijack real accounts to avoid detection.

Bad actors are looking to appear more human-like on Twitter and Facebook. This means more sophisticated tactics, like using Tweetdeck to schedule pre-written tweets to mimic the posting behaviors of humans.

Much of the nefarious activity today focuses on avoiding looking like a bot or fake account. The basic premise behind this tactic is to operate more like an influence operation as opposed to an automated operation, says Renée DiResta, Director of Research at New Knowledge, Policy Lead at Data for Democracy and Media Fellow at Mozilla.

"The new trend is bad actors taking advantage of existing polarization to manipulate groups of real people, as opposed to creating or pretending to be groups of people."

DiResta calls the tactic noted above "commandeering," where bad actors use hijacked accounts to target real people that have already mobilized into partisan groups on social networks or messaging apps since 2016.

Malware attacks on everyday social media users are increasing as bad actors look to hijack real identities to avoid detection.

"We’ve seen the trend increase, with bad actors using malware botnets more than anything else. A few years ago, 60% of non-human activity was malware, but now roughly 75% of bot attacks come from compromised devices (botnets)."

— Tamer Hassan, co-founder & CTO at White Ops, a fraud detection firm

Encrypted messaging: Facebook-owned WhatsApp and Messenger get the bulk of scrutiny for fake news on private messaging platforms, but experts say smaller one-to-one communication platforms are also being weaponized, like Twitter direct messaging and Telegram.

Dark texts: Anonymous "peer to peer" texts could be another source of untraceable communication, often asking people for money to support ads, Vice reports. New text platforms are enabling political campaigns to text tens of millions of people without asking permission first.

Deepfakes: Technology has become so sophisticated that publicly available software is making it easy for bad actors to create "deepfakes," altered photos or videos. As Axios' Kaveh Waddell reports, most software swaps one person’s face onto another person’s body, or makes it look like someone is saying something they didn’t.

Many platforms are investing in technologies to weed out deepfakes, but some are becoming so sophisticated that even journalists are being stumped by them. In March, Facebook said it's beginning to test fact-checking of photos and videos.

More sophisticated bot tools: New tools are being created to manipulate information at the blog or comment level on everyday websites, says Mike Marriott, researcher at Digital Shadows, a digital security firm.

Go deeper: In a new report, Marriott explains that tools such as BotMasterLabs and ZennoStore claim to promote content across hundreds of thousands of platforms, including forums, blogs, and bulletin boards, but in reality, they control large numbers of bots that are programmed to post on specific types of forums on different topics.

New stomping grounds for A/B testing: As the Trump campaign has boasted, Facebook was a tremendous platform for quickly testing thousands of messages for effectiveness. While it's still a breeding ground for misinformation, bad actors are relying more heavily on closed network groups to A/B test messaging, memes and stories.

"There’s an element of coordination on the fringe platforms like Reddit, 4chan and Gab, where people are trying to band together with the objective of bringing disinformation to bear on more mainstream platforms, like Facebook and Twitter, to influence more mainstream discussion ahead of the elections."

— Padraic Ryan, Senior Journalist at Storyful

The bottom line: Most platforms in the digital ecosystem have taken more action to remove the financial incentives for those creating fake political news. Unfortunately, a lot of the work done by bad actors in 2016 has laid the foundation for even more sophisticated attacks in 2018.

"One of the biggest shifts in the past few years is that people have now become so predisposed to violate what used to be common security principles by clicking things they are unfamiliar with. Bad actors are taking advantage of this, as well as the further susceptibility of people who have been exposed to fake news before."

— Tony Lauro, manager of cybersecurity architecture at Akamai Technologies

Go deeper: Meet the troll-fighters tackling fake news ahead of the midterms