Wringing optimum performance from hardware to accelerate deep learning applications is a challenge that often depends on the specific application in use. A benchmark report released today by Xcelerit suggests Nvidia’s latest V100 GPU delivers less speedup than expected over its predecessor, the P100, on some finance applications.

Specifically, the V100’s new Tensor Cores are not well suited to recurrent neural networks (RNNs) broadly, or to a specialized variant of them, long short-term memory (LSTM) models, according to Xcelerit; both are widely used in finance applications for handling time-series inputs.

“For the tested RNN and LSTM deep learning applications, we notice that the relative performance of V100 vs. P100 increases with network size (128 to 1024 hidden units) and complexity (RNN to LSTM). We record a maximum speedup in FP16 precision mode of 2.05x for V100 compared to the P100 in training mode – and 1.72x in inference mode. Those figures are many-fold below the expected performance for the V100 based on its hardware specifications (specs below),” reports Xcelerit, an Ireland-based provider of software tools for quantitative finance, engineering, and research.
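A back-of-the-envelope comparison makes the "many-fold below expected" claim concrete. The peak-throughput figures below are commonly cited Nvidia datasheet values for the SXM2 parts, assumed here rather than taken from the Xcelerit report:

```python
# Assumed peak-throughput figures (Nvidia datasheet values for SXM2 parts;
# not from the Xcelerit report -- verify against current spec sheets):
p100_fp16_tflops = 21.2        # P100 half-precision peak
v100_tensor_tflops = 125.0     # V100 Tensor Core FP16 peak

# Speedup the raw specs would suggest if Tensor Cores were fully utilized
theoretical_speedup = v100_tensor_tflops / p100_fp16_tflops   # ~5.9x

# Speedups Xcelerit actually measured
observed_training = 2.05
observed_inference = 1.72

print(f"theoretical: ~{theoretical_speedup:.1f}x")
print(f"training achieves {observed_training / theoretical_speedup:.0%} of theoretical")
```

On these assumed figures, the measured 2.05x training speedup is roughly a third of what the hardware specs alone would suggest.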

The reason for this lower-than-expected performance, according to Xcelerit, is that the V100’s powerful Tensor Cores are used only for matrix multiplications in half-precision (FP16) or mixed-precision mode. “Profiling the tested applications showed that matrix multiplications only account for around 20% of the overall training time in the LSTM case, and even lower in the other configurations. The other operations (e.g. softmax, scalar products, etc.) cannot use the powerful Tensor Cores. This is in contrast to the convolutional networks used for image recognition for example, where the runtime is dominated by large matrix multiplications and hence they can optimally leverage the Tensor Cores,” reports Xcelerit (training and inference comparisons below).
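A minimal NumPy sketch of a single LSTM step (illustrative only, not Xcelerit's benchmark code; the function name and shapes are assumptions) shows the split: only the two input-to-hidden and hidden-to-hidden matrix products are candidates for Tensor Core acceleration, while the gate activations and state updates are elementwise.

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step. Shapes: x (batch, n_in), h_prev/c_prev (batch, n_hid),
    W (n_in, 4*n_hid), U (n_hid, 4*n_hid), b (4*n_hid,)."""
    # Tensor-Core-eligible work: the two large matrix multiplications
    z = x @ W + h_prev @ U + b
    i, f, o, g = np.split(z, 4, axis=1)
    # Everything below is elementwise (sigmoids, tanh, Hadamard products)
    # and cannot run on the Tensor Cores
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)   # input/forget/output gates
    g = np.tanh(g)                                 # candidate cell state
    c = f * c_prev + i * g                         # cell-state update
    h = o * np.tanh(c)                             # hidden-state output
    return h, c
```

This also explains the ceiling: by Amdahl's law, if the matrix products are roughly 20% of runtime, accelerating them alone — even infinitely — caps the overall gain at 1/0.8 = 1.25x, so the rest of the observed 2.05x must come from the V100's general throughput and bandwidth improvements.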

It’s worth noting that both the P100 and V100 have been wildly successful and there has been a flood of systems featuring the newer V100 introduced since mid-summer (see Nvidia, Partners Announce Several V100 Servers). Xcelerit reports, “While V100 displays impressive hardware improvements compared to P100, some deep learning applications, such as RNNs dealing with financial time series, might not be able to exploit the very specialized hardware in the V100, and hence will only get a limited performance boost.”

Link to Xcelerit report (Benchmarks: Deep Learning Nvidia P100 vs. V100 GPU): https://www.xcelerit.com/computing-benchmarks/insights/benchmarks-deep-learning-nvidia-p100-vs-v100-gpu/

Charts and V100/P100 specs: Xcelerit