Written by James Orme Mon 8 Jul 2019

Trust, bias and ethics have become a central component of AI strategy, according to new IDC report

A new IDC report has shed light on the importance of ethics to the AI strategies of organisations as they ramp up use of the technology.

According to the report, which surveyed almost 2,500 organisations worldwide, two-thirds of organisations are prioritising an AI-first culture in which considerations of trust, bias and ethics are gaining importance.

Nearly 50 percent of respondents said they have established a framework to encourage ethical AI use and address potential bias risks and trust implications. Close to a quarter have carved out a senior role dedicated to ensuring adherence to these frameworks.

The report also reveals that while half of all organisations view AI as a priority, only 25 percent have developed an organisation-wide strategy.

“For many organisations, the rapid rise of digital transformation has pushed AI to the top of the corporate agenda. However, as AI accelerates toward the mainstream, organisations will need to have an effective AI strategy aligned with business goals and innovative business models to thrive in the digital era,” said Ritu Jyoti, program vice president, Artificial Intelligence Strategies, at IDC.

Organisations cite improved productivity, agility, customer satisfaction, and faster time to market as the main drivers behind their AI initiatives. IT operations is the top use case, followed by customer service and fraud/risk management.

Alongside the introduction of ethical frameworks, more than 60 percent of organisations report adjusting their business model to align with their AI strategy.

Many are struggling, however, to ensure AI projects produce the desired results, with a quarter of respondents reporting failure rates of up to 50 percent. Organisations are contending with high costs, a lack of skilled personnel, the prevalence of data bias and unrealistic expectations, IDC said.