December 21, 2018 8 min read

Opinions expressed by Entrepreneur contributors are their own.

If you think hacks are bad now, just wait a few more years -- because "the machines" are coming.

Related: 3 Ways To Protect Your Company's Website From Cyber Threats

In the next few years, artificial intelligence, machine learning and advanced software processes will enable cyber attacks to reach an unprecedented scale, wreaking untold damage on companies, critical systems and individuals. As dramatic as Atlanta’s March 2018 cyber “hijacking” by ransomware was, it was nothing compared to what is coming down the pike once ransomware and other malware can essentially "think" on their own.

This is not a theoretical risk, either. It is already happening. Recent incidents involving Dunkin' Donuts' DD Perks program, CheapAir and even the security firm Cybereason's honeypot test showed just a few of the ways automated attacks are emerging “in the wild” and affecting businesses. (A honeypot, according to Wikipedia, is a security mechanism designed to detect, deflect or in some manner counteract attempts at unauthorized use of information systems.)

In November, three top antivirus companies also sounded similar alarms. Malwarebytes, Symantec and McAfee all predicted that AI-based cyber attacks would emerge in 2019 and grow into an increasingly significant threat over the next few years.

What this means is that we are on the verge of a new age in cybersecurity, where hackers will be able to unleash formidable new attacks using self-directed software tools and processes. These automated attacks on their own will be able to find and breach even well-protected companies, and in vastly shorter time frames than can human hackers. Automated attacks will also reproduce, multiply and spread in order to massively elevate the damage potential of any single breach.

Feeling nervous? You should be. Here are a few ways that automated attacks are evolving:

Password guessing

Crack a password, and you own the account. For years, hackers have been developing better tools to do just that.

One new innovation is an automated cyber attack called “credential stuffing,” which uses previously stolen passwords to break into online accounts. This attack is extremely effective -- and dangerous -- because so many people reuse their passwords across multiple accounts. This creates a major blind spot for businesses, because even if their security is up to par, all it takes is one sloppy employee, and the whole company can unravel.
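Mechanically, credential stuffing is almost embarrassingly simple, which is part of what makes it so dangerous. The Python sketch below is illustrative only: the credentials are invented and `check_login` stands in for a second website's login endpoint. Real attacks replay millions of leaked pairs against live services, usually through botnets to dodge rate limiting.

```python
# A (fictional) credential dump stolen from Site A.
leaked_credentials = [
    ("alice@example.com", "hunter2"),
    ("bob@example.com", "correct-horse"),
    ("carol@example.com", "p@ssw0rd!"),
]

# Site B's accounts. Alice reused her Site A password here;
# Carol chose a unique one.
site_b_accounts = {
    "alice@example.com": "hunter2",
    "carol@example.com": "something-unique",
}

def check_login(email, password):
    """Stand-in for Site B's login endpoint."""
    return site_b_accounts.get(email) == password

def credential_stuff(leak):
    """Replay every leaked pair against Site B.
    Any reused password falls straight through."""
    return [email for email, pw in leak if check_login(email, pw)]

compromised = credential_stuff(leaked_credentials)
# Alice is compromised on Site B purely because she reused a password.
```

The defense is equally simple to state: unique passwords per site (realistically, via a password manager) and multi-factor authentication, which breaks the replay even when the stolen password is correct.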

Expect these attacks to increase significantly next year, especially since there is now a glut of stolen password databases for sale on the Dark Web. Hackers recently used credential stuffing to target Dunkin' Donuts’ DD Perks rewards program. More businesses will fall victim to it in 2019.

Related: The Growing Menace of Cyber Attacks in the Asia-Pacific region

However, credential stuffing is just the tip of the iceberg.

Researchers have discovered that machine learning programs can be used to predict the passwords a person will create in the future based on what he or she has used in the past. Think about that for a second. This means that if a person loses a couple of passwords to data breaches over the years (and we all know how easily that can happen), that person could -- in theory at least -- be forever vulnerable to password attacks in the future by malicious AI systems scanning the web. This could lead to continual password breaches, which will be very hard to stop.
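The research systems behind this finding use neural networks trained on millions of leaked accounts, but the underlying intuition can be shown with a much simpler rule-based sketch. The Python below is a toy illustration, not the researchers' method: it encodes two mutation habits people commonly fall back on when forced to pick a "new" password.

```python
import re

def predict_next_passwords(past_passwords):
    """Toy illustration: derive likely future passwords from old ones
    by applying mutations people commonly use when rotating passwords.
    Real attacks learn these patterns from massive breach corpora."""
    guesses = []
    for pw in past_passwords:
        # Habit 1: increment a trailing number ("Summer2018" -> "Summer2019").
        m = re.search(r"^(.*?)(\d+)$", pw)
        if m:
            base, num = m.groups()
            guesses.append(base + str(int(num) + 1).zfill(len(num)))
        # Habit 2: tack a digit or punctuation onto the old password.
        guesses.append(pw + "1")
        guesses.append(pw + "!")
    return guesses

print(predict_next_passwords(["Summer2018"]))
```

Once "Summer2018" leaks, "Summer2019" stops being a secret -- which is why a breached password should be treated as burning down the whole pattern, not just the single string.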

Hacker bots

New research shows that hackers are beginning to use fully automated “bots” which can carry out extensive cyber attacks all on their own.

Bots are nothing new: Hackers have been using rudimentary versions of them for years to send spam and scan the web. However, a recent honeypot experiment shows just how far this technology has evolved: When security researchers set up a fake online financial firm, they were shocked to see what a single bot could do. In just 15 seconds, the bot was able to hack into the fake company, gain complete control of its network, scan for employee workstations and steal all the data it could. Again: This all took only 15 seconds.

At that rate of speed, it would be exceedingly difficult for an IT team to respond. And these attacks will become increasingly common over the next few years.

Malicious chatbots

Commercial chatbots are widely used, and they are expected to save companies up to $11 billion by 2023, according to a Juniper estimate. But what happens when a chatbot goes rogue?

We’ve already seen how easily a benign chatbot can be corrupted by “input manipulation” on the web, as in the case of Microsoft’s Tay.

But cybercriminals can go much further, by hacking the bot or infecting it with malware in order to turn it into an information stealer. Ticketmaster’s Inbenta chatbot fell victim to this type of attack. Hackers could also target the back-end network supporting the chatbot, as in the [24]7.ai breach, which affected Delta and Sears.

It is also possible for hackers to create and launch their own chatbots, designed for the sole purpose of tricking people into sharing sensitive information or clicking on malicious links. This is happening already in some dating websites and apps, but it’s likely to spread to other businesses in the next few years. Such malicious chatbots could be used to impersonate the legitimate chatbots used by real businesses in order to target those customers.

Bot extortion

A few bad posts on the web can undermine a company’s reputation, and cybercriminals are realizing that this is a huge market opportunity for them. With bots, such “brand extortion” is extremely easy -- and cheap -- to accomplish.

The recent attack on CheapAir, a flight price comparison website, is the perfect example: Cybercriminals threatened to launch an SEO attack on the company unless it paid them off. When CheapAir refused, the criminals followed through on their threat -- unleashing a torrent of negative reviews via bots.

“Review bombing” by bots will gain momentum next year and into the future, since this capability already exists and the attack is easy to carry out. Hackers have been extorting businesses for many years with denial-of-service and ransomware, so brand extortion is a logical next step.

Shapeshifter malware

AI is on the verge of transforming malware and attack toolkits into something far more dangerous than what we have ever seen, and many businesses will be caught off-guard.

Hackers are already tweaking traditional malware to make them stealthier and harder to root out of a network, but in the next few years we will see a new evolution in which AI “nerve centers” control and direct malware, turning them into lethal weapons with immense capabilities.

This isn’t the beginning of Skynet, but it will have serious repercussions for businesses. Thanks to its advanced capabilities, intelligence assets, mutability and increased speed, AI-based malware will be better able to hunt down specific targets inside a company, hide from detection tools like antivirus software and spread rapidly and uncontrollably across a network. It will also mutate itself at will in order to unleash multiple attacks at the same time. These attacks could be crippling to business networks that aren’t prepared -- especially those of smaller companies.

For a better idea of the scary potential of AI-based malware, just look at IBM’s DeepLocker. This proof-of-concept malware uses facial- and voice-recognition inputs to hunt down a specific human target. Almost like a guided missile.

What smaller businesses can do

The bottom line for businesses, especially smaller businesses, is that AI will dramatically increase the potential costs of a cyber attack.

Businesses today still struggle with basic attacks like phishing, but in the years ahead, companies will be far outmatched by intelligent, organized, high-speed automated attacks that take no prisoners.

Related: Is Your Business Prepared for a Cyber Attack? (Infographic)

For this reason, it is imperative that companies, and smaller ones in particular, begin to take steps now to limit their risk exposure to AI attacks: