AI-enabled malware could soon be the newest weapon in threat actors' arsenals, a recent report from Malwarebytes warned.

Malwarebytes described AI-enabled malware and cyberattacks as threats that use machine learning and AI to find vulnerable systems, evade detection by security products and enhance social engineering techniques. While there are currently no confirmed examples of AI-enabled malware in the wild, the report said such malware "would be better equipped to familiarize itself with its environment before it strikes."

"We are talking about how AI-enabled malware can be harder to detect," said Adam Kujawa, director of Malwarebytes Labs. "It could deliver more targeted malware, create better spearfishing campaigns, because it's able to collect big data from social media, and [create more convincing] fake news and clickbait."

While cybersecurity companies are developing and using AI and machine learning to help detect threats and make security tasks more efficient, Kujawa warned that companies can expect to see more use of AI by cybercriminals in the next one to three years.

"The same platforms, frameworks and tools -- Google's got their own open source AI project , for example -- are going to be out there and available for cybercriminals to develop on and create their own [malicious] AI that work against us," he said.

For example, the report said malware authors could use available AI tech to beat CAPTCHA challenges. In such a scenario, Google's AI could potentially be used to solve its own CAPTCHA technology.

Malicious AI could also be used to trick automated detection systems and carry out serious privacy violations, he added. Advances in AI will also make it far easier to create convincing fake video and audio -- dubbed "deepfakes" -- which could be used to mount more believable social engineering attacks, the report said.

"Right now we mainly see AI and machine learning, as far as being used by the cyber criminals, as a means to produce malware faster to make it more effective and to make the campaigns used to go after particular targets even more effective because of the data that the AI can collect," Kujawa said.