Inherently biased artificial intelligence programs can pose serious problems for cybersecurity at a time when hackers are becoming more sophisticated in their attacks, experts told CNBC.

Bias can occur in three areas — the program, the data and the people who design those AI systems, according to Aarti Borkar, a vice president at IBM Security.

"One is the algorithm itself," she told CNBC, referring to the lines of codes that teach an AI program to carry out specific tasks. "Is it biased in the way it's approached, and the outcome it's trying to solve?"

A biased program may end up focusing on the wrong priorities and could miss the real threats, she explained.

"If you're trying to solve the wrong outcome, and the outcome is biased, then your algorithm is biased," Borkar said.

The role of AI is expanding in cybersecurity. Many CEOs see cyber attacks as the biggest threat to the global economy over the next decade.

Firewalls and antivirus software are increasingly viewed as relics as the digital threat landscape constantly evolves, and hackers are now using more advanced technologies, including AI, to launch complex attacks against businesses.

Once they breach a system, many attackers maintain a low profile, which makes it harder for IT teams to detect their presence. Some quietly sniff around the network for sensitive data, while others slowly alter important information without anyone noticing — a scenario that experts say can have serious implications over time.