“AI systems that learn are organic and continue to learn … but with lack of governance and guidance they can actually start to inherit and build biases,” he said.

Unlike automation, where systems are simply told to do specific repetitive tasks, the nature of AI is to find patterns, learn from experience and make decisions based on a large volume of data.

AI, however, is often only as good as the systems and training data behind it, both of which are designed by humans.

Microsoft Corp. recently self-published a 150-page book, The Future Computed: Artificial Intelligence and its Role in Society, which outlines the many advantages of the technology while also suggesting steps that must be taken to better protect society from its potential misuses.

For example, AI could be used to make employment decisions, but if the training data reflects past hiring records, the system may become biased toward white, male candidates in industries historically dominated by that group.

Similarly, an algorithm weighted toward profitability could be used to unfairly decide who gets a mortgage, or whether someone receives medical treatment.

There are pitfalls for business as well, such as the possibility of inadvertent price fixing in an industry where a few major players all rely on the same available data.

“There are always people involved and the role as humans, to a large degree, is going to be judgement and accountability,” Judah said, adding that, given AI’s rapid growth, the ability to understand information and data will have to be a core competency for people within 10 years.