After half a dozen proofs of concept, ANZ Bank has found a way to better predict which of its customers will default, but said it will not rush the model into production.


Head of Retail Risk at ANZ, Jason Humphrey, said that before the bank moves forward with its neural network-based model, a number of health checks need to be cleared.

While other neural network users may be content not knowing the ins and outs of their models, the bank said it needs to know which factors most influence its own.

"It's actually very hard to do, because you've got 70-plus models sitting behind your network, so which attributes are the ones that are the most effective given that one attribute could be across 30 models, another could be across 20 -- it's quite difficult, but we managed to do that," Humphrey told ZDNet at an Nvidia AI conference in Sydney on Tuesday.

Humphrey said that in the United States, banks need to be able to explain the most statistically important attributes that relate to a certain decision being made.

"In a deep-learning environment, it becomes very difficult to work out the factors that were the most predictive for this instance, or for this customer," he said. "Before we roll out any deep-learning models, we need to solve for that -- even though it's not legislated here. I think it is good practice to be able to know why decisions are being made."
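ANZ has not published how it surfaces the most predictive attributes across its models, but one standard, model-agnostic way to estimate a feature's influence is permutation importance: shuffle one input column and measure how much the model's score degrades. The sketch below is a generic illustration of that technique, not ANZ's method; the function and parameter names are assumptions for the example.

```python
import numpy as np

def permutation_importance(model, X, y, metric, n_repeats=5, seed=0):
    """Estimate each feature's influence on a fitted model.

    For each column of X, repeatedly shuffle that column (breaking its
    relationship to y) and record how much the model's score drops.
    A larger average drop means the model leans harder on that feature.
    `model` is any callable mapping X to predictions; `metric` is a
    score where higher is better.
    """
    rng = np.random.default_rng(seed)
    baseline = metric(y, model(X))
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # permute feature j in place on the copy
            drops.append(baseline - metric(y, model(Xp)))
        importances.append(float(np.mean(drops)))
    return np.array(importances)
```

Because it treats the model as a black box, the same procedure works whether the predictions come from one network or, as in ANZ's case, an ensemble of many models, though it reports influence on the combined output rather than per sub-model.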

The system, developed in partnership with Monash University and Nvidia, was constructed to provide a more accurate model for the bank's risk department that could potentially be applied to areas such as customers taking on new loans, restructuring existing finance, or authorising individual transactions.

In the past, building a new risk model has taken three to six months; the neural network-based system and its infrastructure took six weeks to build, and the model itself only five days.

Humphrey said the model not only needs to meet legal requirements; the bank also needs to be sure no inadvertent biases have been introduced.

"The biggest danger in deep learning is that, because it is bringing in new attributes and new correlations we've never seen, it could be creating bias from things we have traditionally never seen -- things we wouldn't know to look for to say, 'that's something we shouldn't do'," Humphrey explained.

"My fear is: What is the thing that you don't know that creates an unintentional bias, that isn't a red flag and doesn't actually stand out?

"There's a lot of health checks that we need to tick off before we roll out a deep-learning model, but operationally we know it works."
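ANZ has not described its health checks, but a common first screen for the kind of unintentional bias Humphrey describes is to compare a model's favourable-decision rates across groups defined by an attribute the model should not discriminate on. The sketch below shows one such screen, a demographic parity ratio; it is a generic fairness heuristic assumed for illustration, not ANZ's procedure, and a low ratio flags a disparity to investigate rather than proving bias.

```python
import numpy as np

def demographic_parity_ratio(decisions, groups):
    """Compare favourable-outcome rates across groups.

    `decisions` is a 0/1 array (1 = favourable, e.g. loan approved);
    `groups` assigns each decision to a group (e.g. an attribute the
    model should be neutral on). Returns the ratio of the lowest
    group rate to the highest (1.0 = perfectly even) plus the
    per-group rates, so an analyst can see where the gap is.
    """
    rates = {g: float(decisions[groups == g].mean())
             for g in np.unique(groups)}
    ratio = min(rates.values()) / max(rates.values())
    return ratio, rates
```

Screens like this are deliberately simple: they catch the disparities that "stand out" in aggregate, which is why detecting bias driven by attributes nobody thought to check remains the hard part Humphrey describes.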

In its first-half results delivered in May, ANZ reported AU$3.5 billion in after-tax profit, while losing over 3,000 staff members across its business.

Monash University and Nvidia have had a relationship for many years, with the pair working together on GPU-accelerated research.

For ANZ's purposes, it also helped that the university had a DGX-1 server available for the project.

In 2016, Monash professor Tom Drummond said artificial intelligence systems needed to be able to handle rich feedback so they could learn why answers were incorrect, rather than relying on the binary yes/no feedback currently used in neural network training.

"Rich feedback is important in human education, I think probably we're going to see the rise of machine teaching as an important field -- how do we design systems so that they can take rich feedback and we can have a dialogue about what the system has learned?" Drummond said at the time.

To illustrate his point, Drummond used the example of a system that was able to recognise and caption images of fire hydrants, and appeared to be working well until one of the images was modified.

"You take Photoshop, and you colour it green, and it says: 'A red fire hydrant'.

"It's not pulling that caption out of a database of captions; there is a recurrent network generating that one word at a time, and there were other images in the database that were green hydrants, but they were physically differently shaped because they were from a different jurisdiction, and in that part of the world they paint them green or yellow," he said.

"So it wasn't learning what the word green or red meant, it was learning that that adjective applies when they are this shape.

"That's the problem when you demand yes or no as feedback."
