When it comes to the future of GPU computing, NVIDIA CEO Jen-Hsun Huang said during this week’s GTC keynote that the emphasis will be on blending general computing, physics and simulation into one high-performance yet energy-efficient package.

According to Huang, the next generation of both data- and compute-intensive applications will require higher performance to meet real-time demands. At the same time, this all has to happen within an energy budget that doesn’t obliterate the ROI of spicing up a datacenter with GPU acceleration.

Outside of these higher-level focal points, other issues, including fast access to high memory bandwidth, were cited as critical to growing the GPU user ranks. Memory and power will become even more relevant as data volume and velocity requirements expand into new application areas that depend on maximizing memory use without breaking the power consumption bank. These are among the identified “big data” problems NVIDIA is seeking to address for both its research and enterprise users.

While “Maxwell” is still sitting on the sidelines until later this year, Huang said its unified virtual memory approach extends Kepler’s three-way focus on power, performance and programmability for the present. The newest addition to the GPU roadmap is called “Volta,” which, when released in the expected 2016 timeframe, will take the three “P” aspects one step further by stepping up to a stacked memory approach.

In essence, with Volta, they’re removing the power cost of getting off the chip and out to DRAM – instead, as the name implies, they’re going to literally stack the DRAM onto the substrate and pierce the dies from top to bottom with through-silicon vias to connect the stacked memories. While the notion of stacked memory isn’t necessarily new, it is still maturing – and NVIDIA sees serious potential.

The promise of Volta is two-fold. On the one hand, it represents a big step toward practical stacked memory – something that former Cray wizard and current CTO of NVIDIA’s Tesla business, Steve Scott, thinks is not yet ready for primetime. During our chat following the keynote, he said that while there are some noteworthy attempts at bringing stacked memory to market from companies like Micron, serious engineering hurdles remain (packaging, capacities, the degree of routing needed, and so on).

On the other hand, data movement is one of the real costs of both performance and energy, so anything that minimizes it pays off. Scott noted that Volta – and Maxwell to a great degree as well – derive their energy efficiency from tightening up how far data travels. It’s not hard to see how stacking the package could enhance this efficiency focus – not to mention produce some rather stunning bandwidth.

While current-generation GPU bandwidth is higher than a CPU’s, it’s still never quite at the level users will want. But NVIDIA claims that once Volta rolls out it will be able to boast 1 terabyte per second – the equivalent, as Huang described it, of loading an entire Blu-ray into memory and running it through the chip in 1/50 of a second.
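Huang’s figure is easy to sanity-check. A quick back-of-the-envelope calculation (the 25 GB single-layer Blu-ray capacity is our assumption, not a number from the keynote):

```python
# Sanity check of the keynote's bandwidth claim.
bandwidth_bytes_per_s = 1e12   # claimed 1 TB/s of stacked-memory bandwidth
time_s = 1 / 50                # "1/50 of a second" from the keynote

# Total data moved at that rate in that time, in gigabytes.
data_moved_gb = bandwidth_bytes_per_s * time_s / 1e9
print(f"Data moved in 1/50 s: {data_moved_gb:.0f} GB")  # prints "Data moved in 1/50 s: 20 GB"

# A single-layer Blu-ray holds roughly 25 GB (assumed capacity),
# so the claim is in the right ballpark.
```

At 20 GB in 1/50 of a second, the claim lines up with a single-layer disc; a dual-layer disc (around 50 GB) would take closer to 1/20 of a second at the same rate.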

As Scott described, NVIDIA sees the need to “find ways to make more of our memory accesses – to work toward memory structures that take less energy to access.” As he noted, that’s where stacked memory comes in: “you can take 3D stacked memory technology and get much better bandwidth at much lower energy per bit to access memory from these 3D stacks than to go to main memory.” The idea is simple enough, he says, but the technology has not yet rounded the bend to readiness.

There are still tradeoffs to weigh. While latency isn’t much different in this projected new sibling of the NVIDIA GPU family, the on-package memory will be smaller and more expensive, though with much higher bandwidth and lower energy per access. The bandwidth and energy gains are welcome; the smaller capacity and higher cost are the tough part. Scott says they’re looking to address that, but these are considerations for the coming decade.

Almost all HPC applications will be able to take advantage of a stacked memory offering, just as they can with cache today. As Scott described, “you still have your main memory where most of your data sits but you have some kernel that’s operating on some working set and you can usually block it so that you can put a chunk of data in the near memory and access it multiple times.” A few problems won’t fit nicely in this paradigm, but those are already ones that have a hard time making good use of the memory hierarchy – graph problems are a prime example, since they lack much locality.
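The blocking Scott describes is the same idea as classic cache blocking (tiling). A minimal sketch, using matrix multiply as the stand-in kernel – here the “near memory” is purely notional, represented by choosing a tile small enough that its working set stays resident in fast memory while it is reused:

```python
# Illustrative blocking (tiling): each tile of the inputs is a "chunk of
# data in the near memory" that gets accessed multiple times before the
# loop moves on, which is exactly the reuse pattern Scott describes.

def blocked_matmul(a, b, n, tile=4):
    """Multiply two n x n matrices (lists of lists), tile by tile."""
    c = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, tile):
        for kk in range(0, n, tile):
            for jj in range(0, n, tile):
                # Work entirely within one tile of a, b and c; these small
                # blocks are touched repeatedly while they are "near".
                for i in range(ii, min(ii + tile, n)):
                    for k in range(kk, min(kk + tile, n)):
                        aik = a[i][k]
                        for j in range(jj, min(jj + tile, n)):
                            c[i][j] += aik * b[k][j]
    return c
```

The arithmetic is identical to a naive triple loop; only the traversal order changes, so each tile is reused many times before being evicted. A graph traversal, by contrast, offers no such blockable working set, which is why it benefits far less.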

The section where Huang talks about the roadmap and provides some visual details is below.