Be prepared for a heavy dose of deep learning if you’re on the floor of the Supercomputing 2018 show this week in Dallas, Texas, where the scientific community will demonstrate how neural network-powered AI applications are bolstering research in areas like climatology, geology, genomics, and even particle physics. Deep learning is also driving innovation in the business world, but the applications are decidedly different.

Supercomputers have long been at the cutting edge of scientific research, and the current explosion of interest in deep learning is changing how computer scientists design and build these systems. GPU maker NVIDIA is at the forefront of this deep learning revolution, and it’s using the SC’18 conference to showcase a range of solutions for the scientific computing community.

The largest supercomputer in the world, the Summit system at the Oak Ridge Leadership Computing Facility, has nearly 28,000 GPUs to go along with more than 9,000 Power9 processors and an abundance of Mellanox switches. As HPCwire’s John Russell explains, the staggering computational power of that $200 million machine will drive all sorts of applications that have deep learning and AI aspects to them, like the University of Tokyo’s earthquake simulation and Oak Ridge National Laboratory’s materials science application, to name just two.

The HPC community has been chasing gargantuan data with humongous compute power for decades. That’s nothing new. But the HPC community is currently in the throes of a major architectural shift toward GPU-powered deep learning approaches, which is definitely noteworthy. It doesn’t spell the end of traditional HPC approaches based on understanding the laws of physics and building models to run complex simulations. But researchers are now exploring the use of deep learning for many types of problems.

A similar phenomenon around deep learning is taking place in the wider industry, albeit with different types of applications at a slightly different scale. Depending on the business case and the industry, deep learning is driving better results than what traditional rules-based systems and even classical machine learning systems can offer.

Applied Deep Learning

AI has a long history of ups and downs, or “summers and winters” as the industry likes to call them. Teradata CTO Stephen Brobst caught AI during one of its ebb cycles while studying for his computer science PhD at MIT decades ago.

“I was on the applied side, building MPP systems,” Brobst said during the Teradata Analytics Universe conference last month. “[I thought] ‘Oh those AI guys, they’re a bunch of theoretical people. They’re not going to do anything real. It’s just a bunch of math. They’ve got lots of thesis published and books published, but they never did anything real because the cost at scale was just too much.'”

But thanks in part to the huge boost in parallel processing power brought by GPUs, not to mention the scads of training data generated by the continuing information explosion set in motion by the Internet, those costs have come down to the point where the neural network approach is feasible.

“Some pretty important things happened in just the last five years,” Brobst said. “So now you can actually at scale do this deep learning and we can do it for real, not just in a mathematical paper you publish in a PhD thesis. And that’s how you see this excitement happening now and why five years ago we wouldn’t be having this conversation.”

While traditional machine learning is based on linear math, the non-linear math hidden inside a multi-layer neural network is essentially what gives deep learning its logical boost.
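That distinction can be made concrete with a small sketch. The example below (a hypothetical illustration, not drawn from the article) shows why the non-linearity matters: two stacked linear layers collapse into a single linear map, so without a non-linear activation a "deep" network is no more expressive than classical linear modeling.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two stacked *linear* layers: y = W2 @ (W1 @ x).
# Without a nonlinearity between them, they collapse into one
# linear map (W2 @ W1), so the extra layer adds no expressive power.
W1 = rng.standard_normal((4, 3))
W2 = rng.standard_normal((2, 4))
x = rng.standard_normal(3)

stacked_linear = W2 @ (W1 @ x)
collapsed = (W2 @ W1) @ x
assert np.allclose(stacked_linear, collapsed)

# Insert a non-linear activation (here ReLU) between the layers and
# the composition can no longer be written as one matrix multiply.
# This is the "non-linear math" hidden inside a multi-layer network.
def relu(z):
    return np.maximum(z, 0.0)

deep_output = W2 @ relu(W1 @ x)
print(deep_output.shape)  # a 2-dimensional output vector
```

Stacking many such layer-plus-activation steps is what lets deep networks fit the highly non-linear patterns that linear models miss.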

“What it means is my models will be more robust in light of outlier data, missing data, dirty data and the linear models [i.e. classical machine learning models] aren’t quite as friendly in those environment due to those situations,” Brobst said. “So it allows us to detect patterns in highly dimensional data, time-series data for example, that the traditional mathematics aren’t quite as powerful.”

That doesn’t mean you can’t achieve the same level of accuracy with traditional machine learning. “Of course you can do it,” he said. “Data scientists do all kinds of magical things and they can brute-force it. But these algorithms that are now possible with deep learning with these multi-layer neural networks give us additional tools in our toolbox.”

Three Applications for Business

Brobst offered a rule of thumb for when deep learning makes sense. “Decisions that a human makes in two seconds or less, deep learning will make better, faster, and cheaper today,” he said.

Deep learning is gaining traction in three main applications, according to Brobst, starting with predicting demand. Retailers and banks that hold a customer’s attention in a physical branch or on a website or mobile app can use deep learning techniques to deliver better recommendations in that moment.

“Deep learning is not the complete, full-credit answer. There are some other pieces to that for doing it really effectively,” Brobst said. Still, he said, “It’s very effective.”

Fraud detection is the second area where deep learning is making an impact. Fraud is a multi-trillion-dollar-a-year business that touches multiple industries, from retail and financial services to healthcare and government services. With bigger data and better algorithms, companies can reclaim some of the money lost to fraud, which benefits consumers too.

“Fraudsters are getting smarter and smarter, because the dumb ones are already in jail,” Brobst quipped. “The ones that are left, they’re smarter, and we need smarter algorithms, smarter ways of using the data to catch the fraudster. And deep learning has demonstrated the ability to increase the efficiency of doing that.”

Teradata has helped Danske Bank, a Danish bank with the equivalent of $7 billion in annual revenue, use deep learning to tamp down fraud. According to Brobst, the bank had a system that could stop 40% of the fraud, but at the cost of a 99% false positive rate. With Teradata’s assistance, the company built a deep learning-based fraud detection system that stopped 50% of the fraud while dropping the false positive rate to 60%.

Preventive maintenance of vehicles and other large pieces of equipment is the third big area for adoption of deep learning. “With condition-based maintenance, we can target the interventions or maintenance on an individual vehicle basis because we know how that vehicle is performing, how that car is being driven,” Brobst said. “We can save a lot of money by doing this in a smart way.”

Like traditional machine learning, deep learning is based on well-understood math that has been around for more than 100 years. What makes deep learning stand out is that it can exceed the accuracy of classical machine learning, and even of humans, in some areas.

“So yes there’s a lot of hype,” he said. “But the reality is this is happening in the enterprise. It’s happening.”

Related Items:

Deep Learning Is Great, But Use Cases Remain Narrow

Why Deep Learning May Not Be So ‘Deep’ After All