One of the most exciting parts of Microsoft's DirectX 12 API is the ability to pair graphics cards of varying generations, performance, or even manufacturers together in a single PC, pooling their resources to make games and applications run better. Unfortunately, testing "Explicit Multi Adapter" (EMA) support under real-world conditions (i.e. not synthetic benchmarks) has so far proven difficult. There's only been one game designed to take advantage of DX12's numerous low-level improvements—including asynchronous compute, which allows GPUs to execute multiple command queues simultaneously—and the early builds of that game didn't feature support for multiple GPUs.

As you might have guessed from the headline of this story, it does now. The latest beta version of Stardock's real-time strategy game Ashes of the Singularity includes full support for EMA, meaning that for the first time we can observe what performance boost (if any) we get by doing the previously unthinkable and sticking an AMD and Nvidia card into the same PC. That's not to mention seeing how EMA stacks up against SLI or Crossfire—which have to be turned off in order to use DX12's multi-GPU features—and whether AMD can repeat the ridiculous performance gains seen in the older Ashes benchmark.

Benchmarks conducted by a variety of sites, including Anandtech, Techspot, PC World, and Maximum PC, all point to the same thing: EMA works, scaling can reach as high as 70 percent when adding a second GPU, and yes, AMD and Nvidia cards play nicely together.

That EMA works at all is something of an achievement for developer Stardock. Not only is it the first developer to implement the technology in an actual game, but doing so is hard going. Unlike with older APIs like DX11 and OpenGL, where multi-GPU support was handled by the proprietary systems developed by Nvidia (SLI) and AMD (Crossfire), you have to be a tenacious developer indeed to work with EMA under DX12. Under DX12, work that was previously handled by the driver has to be done manually. That's a double-edged sword: if the developer knows what they're doing, DX12 could provide a big performance uplift; but if they don't, performance could actually decrease.

That said, developers do have a few options for implementing multiple GPUs under DX12. Implicit Multi Adapter (IMA) is the easiest, and is essentially a DX12 version of Crossfire or SLI, with the driver doing most of the work to distribute tasks between GPUs (a feature not part of the Ashes benchmark). Then there's EMA, which has two modes: linked and unlinked. Linked mode requires GPUs with closely matched hardware, while unlinked—which is what Ashes uses—allows any mix of GPUs to be used. The whole point of this, and why it works at all under DX12, is to make use of Split Frame Rendering (SFR). This breaks down each frame of a game into several tiles, which are then rendered in parallel by the GPUs. This is different from the Alternate Frame Rendering (AFR) used under DX11, where each GPU renders an entire frame, duplicating data across every GPU.
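To make the SFR/AFR distinction concrete, here's a small conceptual sketch—hypothetical Python, not actual DirectX 12 code—showing how SFR splits a single frame into per-GPU tiles, while AFR hands each GPU a whole frame in turn:

```python
# Conceptual illustration only -- not DirectX 12 API code. The GPU names
# and frame dimensions are hypothetical stand-ins.

def split_frame_rendering(frame_width, frame_height, gpus):
    """SFR: break one frame into horizontal tiles, one per GPU,
    so all GPUs work on the *same* frame in parallel."""
    tile_height = frame_height // len(gpus)
    tiles = []
    for i, gpu in enumerate(gpus):
        top = i * tile_height
        # The last GPU picks up any leftover rows.
        bottom = frame_height if i == len(gpus) - 1 else top + tile_height
        tiles.append((gpu, (0, top, frame_width, bottom)))
    return tiles

def alternate_frame_rendering(frame_numbers, gpus):
    """AFR: each GPU renders an entire frame, round-robin, which is
    why scene data must be duplicated on every GPU."""
    return [(gpus[n % len(gpus)], n) for n in frame_numbers]

# Two mismatched GPUs splitting a 1920x1080 frame into tiles (SFR),
# versus taking turns on whole frames (AFR):
print(split_frame_rendering(1920, 1080, ["AMD", "Nvidia"]))
print(alternate_frame_rendering(range(4), ["AMD", "Nvidia"]))
```

In the SFR case, each card only touches its own tile of the frame, which is what lets two very different GPUs contribute to the same image at the same time.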

In theory, with EMA and SFR, performance should go way up. Plus, users should benefit from pooled graphics memory (i.e. using two 4GB GPUs would actually result in 8GB of usable graphics memory). The one bad thing about the Ashes benchmark? It currently only supports AFR.