We’ve known for quite some time that Intel builds special versions of certain chips for its large data center customers, but the chip giant has kept the details of the implementation secret — even while negotiating contracts with companies like Google, Facebook, and Amazon.

Now, some details of Intel’s deal with Oracle have come to light, and they illustrate the kinds of optimizations the company is baking in. Specifically, Intel’s new Xeon E7-8895 V2 is identical to the Xeon E7-8890 V2 released last winter — but the new 15-core chip (pictured above) has received some validation cycles that the other Xeon hasn’t had.

The enhancements that Intel is discussing all relate to the server version of Turbo Mode. When Intel sells a consumer chip — say, a Core i7-4960X — it verifies that the chip can run at certain increased frequencies with a given number of active cores. Two cores might hit 4GHz, four cores might top out at 3.9GHz, and all six cores might be limited to 3.7GHz.
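That binning scheme amounts to a lookup from active core count to the highest validated frequency. A minimal sketch, using the illustrative figures from the paragraph above (not official Intel specifications):

```python
# Illustrative turbo bin table for a hypothetical six-core consumer chip.
# The frequencies are the example numbers from the text, not real specs.
TURBO_BINS_GHZ = {2: 4.0, 4: 3.9, 6: 3.7}

def max_turbo_ghz(active_cores: int) -> float:
    """Return the highest validated frequency for a given active-core count."""
    # Use the smallest bin that covers the number of active cores.
    for cores in sorted(TURBO_BINS_GHZ):
        if active_cores <= cores:
            return TURBO_BINS_GHZ[cores]
    raise ValueError("more active cores than the chip supports")

print(max_turbo_ghz(2))  # 4.0
print(max_turbo_ghz(3))  # 3.9 — three active cores fall into the four-core bin
print(max_turbo_ghz(6))  # 3.7
```

The key point is that each bin is individually validated: the chip is only guaranteed to hit a frequency at the core count Intel tested it against.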

What Intel has done with the new E7-8895 V2 is to validate the chip at the clock speeds and core counts that other SKUs in the stack would normally target — and to allow it to switch to these alternate operating modes on the fly. Do you want an E7-8895 V2 (15 cores, 3.4GHz maximum clock) to reconfigure itself as an E7-8893 (6 cores, 3.7GHz maximum clock)? The new chip can do that without even requiring a reboot.
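Conceptually, that runtime reconfiguration is a switch between validated core-count/frequency profiles. A rough sketch of the idea, with the profile names and numbers taken from the paragraph above (the real mechanism lives in firmware and the OS, not a userspace API like this):

```python
# Hypothetical sketch of switching between validated operating profiles
# at runtime. Core counts and max clocks are the figures quoted in the text.
PROFILES = {
    "E7-8895-V2": {"cores": 15, "max_ghz": 3.4},
    "E7-8893":    {"cores": 6,  "max_ghz": 3.7},
}

class ReconfigurableXeon:
    """Toy model of a chip that can assume different validated SKU profiles."""

    def __init__(self) -> None:
        self.active = "E7-8895-V2"

    def switch_profile(self, name: str) -> dict:
        # No reboot required: the chip simply moves to another operating
        # point that Intel has already validated (fewer active cores in
        # exchange for a higher maximum clock, or vice versa).
        if name not in PROFILES:
            raise KeyError(f"unknown profile: {name}")
        self.active = name
        return PROFILES[name]

cpu = ReconfigurableXeon()
cfg = cpu.switch_profile("E7-8893")
print(cfg["cores"], cfg["max_ghz"])  # 6 3.7
```

The novelty isn’t the profiles themselves — those correspond to existing SKUs — but that one physical chip is validated for all of them and can move between them live.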

Intel has also released a taped interview with the data center group’s general manager, Diane Bryant, in which she states that this kind of collaboration is central to Intel’s long-term plans. “As our collaboration continues, we are actually co-defining — with the Oracle engineers [and] with the Intel engineers — next-generation instructions that will further accelerate the Oracle database solution, and those will be coming in future processor generations. Things such as memory enhancements, vector manipulation acceleration, and cluster interconnect performance acceleration.”

Intel also stated that the new Xeons can hit lower power targets than typical chips and possibly spin the cores up more quickly, but the company hasn’t released any details. The implication is that Intel may be power gating some cores almost completely, in order to free up as much thermal headroom as possible for ramping up frequency on the remaining cores. Presumably there’s additional validation associated with this — if Intel wants to hit higher Turbo frequencies with a given number of cores, it needs to know which CPU cores can hit those frequencies and which it should power gate.
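The thermal-headroom trade-off implied here can be illustrated with a toy model: hold the chip’s power budget fixed, gate some cores off entirely, and let the remaining cores spend the freed budget on clock speed. The numbers and the rough cubic power/frequency relationship below are illustrative assumptions, not Intel data:

```python
# Toy model of the power-gating idea: a fixed thermal budget split across
# fewer active cores allows higher clocks. All figures are assumptions
# for illustration only (155W TDP, 15 cores, 2.8GHz all-core baseline).
TDP_WATTS = 155.0
TOTAL_CORES = 15
BASE_GHZ = 2.8
WATTS_PER_CORE_AT_BASE = TDP_WATTS / TOTAL_CORES

def attainable_ghz(active_cores: int) -> float:
    """Frequency if the whole budget is split over the active cores,
    assuming dynamic power scales roughly with frequency cubed."""
    budget_per_core = TDP_WATTS / active_cores
    scale = budget_per_core / WATTS_PER_CORE_AT_BASE
    return BASE_GHZ * scale ** (1 / 3)

print(round(attainable_ghz(15), 2))  # 2.8 — all cores, baseline clock
print(round(attainable_ghz(6), 2))   # noticeably higher with 9 cores gated
```

Even this crude model shows why per-core validation matters: the gains only materialize if the cores left running can actually sustain the higher frequency.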

A sign of things to come

Building a few extra Turbo Mode-esque capabilities into a CPU might not seem like much of a feature, but it’s a sign of how seriously Intel is working to contain threats to its data center empire. Chipzilla has already discussed long-term plans to integrate an FPGA on-die — offering customers on-the-fly core count adjustments is just another way of ensuring that Intel chips are flexible enough to compete with the data center offerings being prepared by AMD, IBM, and the various ARM vendors.

I think it’s going to take several years, at least, before we know how much of a threat ARM will be to Intel’s data center business. Proponents of ARM servers like to point to the low cost of foundry SoCs compared to x86 chips, new support for features like HSA, and AMD’s own entry into the market — possibly with a long-term combined architecture. Intel, for its part, would stress its decades of compatibility, the considerable expense of certifying chips for data center use, and the fact that any company entering the business would still need to make high margins while facing an uphill climb in server market share.

Between the FPGA news and this latest announcement, I think Intel is planning to create an increasingly flexible line of x86 cores that can pivot to address customer needs and reduce the need to switch to different silicon — but whether or not that’s sufficient to the task at hand, I can’t say.