For those who read here often, there are clear signs that the FPGA is set to become a compelling acceleration story over the next few years.

The signals range from the relatively recent acquisition of Altera by chip giant Intel, to less talked-about advances on the programming front (OpenCL progress, as well as hardware and software strides from Xilinx, the chief FPGA competitor to Intel/Altera), and of course the consistent competition for the compute acceleration market from GPUs, which dominate the coprocessor space for now.

Last week at the Open Compute Summit we finally got a glimpse of one of the many ways FPGAs might fit into the hyperscale ecosystem (along with other future hardware insight) with an announcement that Intel will be working on future OCP designs featuring an FPGA and Xeon chip integrated in a single package. Contrary to what many expected, the CPU mate will not be a Xeon D, but rather a proper Broadwell EP. As seen below, this appears to be a 15-core part (Intel did not confirm, but its diagram makes counting rather easy) matched with an Altera Arria 10 GX FPGA.

This is not a first look at what many expect in the future, which is an FPGA and CPU on a single die. These are two chips sitting side by side in the same package, sharing a single socket, which raises the question of how the Xeon and FPGA are connected. A reasonable guess is the same Quick Path Interconnect (QPI) links that tie multiple CPUs together so they can share memory and work. Intel is not prepared to comment on that yet, but it is pulling together one of the most important pieces of this story now: the software and programmability front.

According to Intel’s lead for accelerated computing, Eoin McConnell, this configuration strikes the best balance between CPU and FPGA performance, but what is really needed now are the requisite libraries and programming tools to begin building out a richer ecosystem. Although the hardware story is compelling, the more important bit here is that Intel is boosting its own library prowess for the future of its FPGA hybrids. For instance, as Jason Waxman described at the Open Compute Summit, there is a set of RTL libraries the company is working on now and sending to the community for input. The goal is to create this library collection so that users can, in theory anyway, take their FPGA and suddenly have an SSL encryption accelerator or a machine learning accelerator, all on the fly and with the ability to tweak and tune.
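To make the "load a library, get an accelerator" idea concrete, here is a minimal, purely hypothetical Python sketch of the reconfiguration model: a registry maps workload names to bitstream "personalities" that can be swapped onto the FPGA on the fly. Every name here (the registry, the `.rbf` filenames, the class) is invented for illustration and does not reflect Intel's or Altera's actual tooling or APIs.

```python
# Hypothetical sketch of an FPGA "personality" registry. Each entry pairs a
# workload with a bitstream that would reconfigure the device on the fly.
# All names are invented for illustration; this is not a real Intel API.

class FPGA:
    """Stand-in for a reconfigurable device that holds one loaded personality."""
    def __init__(self):
        self.loaded = None

    def load_bitstream(self, name):
        # A real system would program the device here; we just record the state.
        self.loaded = name


# Hypothetical library suite: workload name -> accelerator bitstream.
PERSONALITY_LIBRARY = {
    "ssl_encrypt":  "ssl_accel.rbf",      # SSL encryption accelerator
    "compression":  "gzip_accel.rbf",     # compression accelerator
    "ml_inference": "convnet_accel.rbf",  # machine learning accelerator
}


def use_accelerator(fpga, workload):
    """Reconfigure the FPGA for a workload unless its personality is loaded."""
    bitstream = PERSONALITY_LIBRARY[workload]
    if fpga.loaded != bitstream:
        fpga.load_bitstream(bitstream)  # the "on the fly" retargeting step
    return fpga.loaded


fpga = FPGA()
use_accelerator(fpga, "ssl_encrypt")   # device now behaves as an SSL accelerator
use_accelerator(fpga, "ml_inference")  # retargeted to machine learning
```

The point of the sketch is the dispatch pattern, not the plumbing: the same silicon takes on different roles depending on which library-supplied personality is resident, which is exactly what makes a curated library suite the linchpin of the ecosystem.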

This capability is nothing new, of course. Some companies have taken the same tack before; Convey Computer, a few years ago, gave its systems “personalities” that were tunable through libraries, selling the result as a system rather than as standalone libraries. It is possible that Intel may, in the future, package such libraries up for distribution and build a business around this to complement its FPGA-CPU chips, although, like everything in FPGA land, that remains to be seen.

As for the FPGA libraries Intel is developing into a suite for its forthcoming FPGA push, they “will help users accelerate their workloads and give developers a platform to start with. With these libraries, we’ve looked at a range of different acceleration demands and end user demands to put together a suite. We can’t say what all is in it yet, but expect a range of standard acceleration for a number of segments, including cloud, networking and traditional enterprise,” McConnell says. “The goal is to provide the right suites to help people use the FPGAs now for things like compression, encryption, and visualization and we’re continuing to work with Altera and what we now call the Programmable Solutions Group to look at other use cases.”

As we head into the 2017 timeframe, when the Broadwell EP/FPGA hybrids become available, there will be other work going on to bolster the programmability and tooling for similar devices, including (our guess) a Xeon D-matched part. McConnell says Intel is seeing a great deal of interest in what it might be able to do for a range of applications where FPGAs already appear, as well as some new areas, including machine learning.