If you’re buying or building a new gaming PC for Windows 10 and DirectX 12, your priority should be as many “real” CPU cores as you can afford, running at high clock speeds.

At least, that’s the conclusion after playing with the first DirectX 12-based test that uses an actual game engine. After giving a quick whirl to Oxide Games' new Ashes of the Singularity DirectX 12 pre-beta benchmark, it’s clear that having more CPU cores will matter most when it comes to a potential DX12 performance boost, but clock speed contributes too.

That validates—but also slightly contradicts—my findings from March using preview copies of 3DMark and Windows 10. That testing indicated that core count, including Hyper-Threading, was the biggest factor in a potential DirectX 12 performance increase, while sheer clock speed mattered less.

You can see Oxide’s Ashes of the Singularity loading up all cores of a Core i7-4770K chip.

But while 3DMark is a synthetic benchmark and will never be a game, Stardock and Oxide’s upcoming Ashes of the Singularity will indeed ship sometime next year, which makes it more “real world.”

How I tested

For my tests, I used the same Core i7-4770K with 16GB of DDR3/1333 that I used as a baseline last time, but rather than bringing back the GeForce Titan X from my previous test, this time I used an Nvidia GeForce GTX 980 Ti card that was recommended by Oxide and Nvidia.

Ashes of the Singularity shows a nice frames-per-second bump moving from DirectX 11 to DirectX 12. That performance increase varies depending on the system you test on.

The Ashes benchmark is powerful and flexible, designed specifically to let the user tailor it to a scenario. Rather than spit out a single score, it provides granular details on each load. The idea, Oxide told me, is to avoid emphasizing a single score that can be taken out of context. The game can be tailored to test a GPU’s DX12 performance or the CPU’s DX12 performance, depending on how you load up the test.

First, to get it out of the way: DirectX 12 performance can indeed be significantly better than DirectX 11 performance in Ashes of the Singularity. Running the same video card and setup mentioned above in DirectX 11 mode, I found DirectX 12 to be about 30 percent faster. Others have reported an even larger gulf, depending on PC component configurations.
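For clarity on how a "percent faster" figure like that is computed, here's a quick sketch with hypothetical frame rates (not my actual benchmark numbers):

```python
def pct_faster(fps_new, fps_old):
    """Percentage improvement of one frame rate over another."""
    return (fps_new / fps_old - 1.0) * 100.0

# Hypothetical example: if DX11 rendered 40 fps and DX12 rendered 52 fps
# on the same hardware, DX12 would be 30 percent faster.
print(f"{pct_faster(52.0, 40.0):.0f}% faster")
```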

Nvidia’s lab ran the new benchmark on a six-core Intel Core i7-5820K at both stock and lowered clock speeds, paired with a Titan X, and saw even heftier improvements going from DirectX 11 to DirectX 12.

Nvidia’s supplied results show that DirectX 11's inability to exploit more cores hurts its performance at low clock speeds on a six-core CPU.

Nvidia saw very hefty performance increases of up to 82 percent for DirectX 12 over DirectX 11 on a multi-core chip running at low clock speeds. The reason? DirectX 11 is mostly single-threaded, so at lower clock speeds its inability to use more of the Core i7-5820K’s cores held performance back. DirectX 12 spreads the load across those cores, so even at low clock speeds you see a significant performance increase.
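The cores-versus-clocks tradeoff can be modeled with a simple Amdahl's-law-style estimate. This is an illustrative sketch with assumed numbers (the `parallel_fraction` values are hypothetical, not measurements from Oxide or Nvidia), but it shows why a mostly single-threaded renderer suffers most at low clock speeds:

```python
def frame_rate(clock_ghz, cores, parallel_fraction, work_per_frame=1.0):
    """Estimated relative frame rate for a workload where only
    `parallel_fraction` of the per-frame work can use extra cores
    (a simple Amdahl's-law model)."""
    serial = work_per_frame * (1.0 - parallel_fraction)
    parallel = work_per_frame * parallel_fraction / cores
    return clock_ghz / (serial + parallel)

# Hypothetical six-core CPU underclocked to 1.7GHz:
dx11 = frame_rate(1.7, cores=1, parallel_fraction=0.0)   # mostly single-threaded API
dx12 = frame_rate(1.7, cores=6, parallel_fraction=0.55)  # spreads work across cores
print(f"DX12 vs DX11 at 1.7GHz: {dx12 / dx11:.2f}x")     # → 1.85x with these inputs
```

With these assumed inputs, the model lands in the same ballpark as the up-to-82-percent gains Nvidia reported, which is the intuition: the slower each core runs, the more it pays to use all of them.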

My own tests reflected that, but also show a little bit more about the CPU’s impact.

After talking with Stardock and Oxide, I determined the best benchmark to run would be the Heavy batch load with graphics set to the default “low” value, so as not to make the GPU the bottleneck. My rationale was not to test the GPU specifically, but to replicate my previous 3DMark tests and find out the impact of core counts on gaming in DX12.

I ran the benchmark at the CPU’s default clock speed of 3.5GHz (boosting to 3.9GHz), with all CPU cores and Hyper-Threading enabled. I then twisted knobs in the BIOS to run the test with fewer cores active, with Hyper-Threading toggled on and off, and with the clock speed dropped to 1.7GHz.

Early testing with Oxide’s beta DirectX 12 test again confirms that more cores matter more than pure clock speed.

Next page: Analysis of results, good news for AMD, and a strong rebuke from Nvidia.