According to a report published by Bitsandchips.it, Nvidia’s upcoming Pascal architecture will not be significantly better at asynchronous compute than its predecessor, Maxwell. Needless to say, this is purely a rumor and should be taken with a grain of salt. The Pascal architecture is expected to land later this year and improves upon Maxwell with far better FP64 support, among a plethora of other things. The report further mentions that Nvidia is hoping to win this round on raw performance numbers alone.

Pascal architecture allegedly facing difficulty with asynchronous compute

Asynchronous compute has been a deal sweetener for Radeon buyers ever since the DirectX 12 API hit the stage. AMD currently leads in Hitman and Ashes of the Singularity, both of which utilize the company's asynchronous shader technology built around the DirectX 12 API. Interestingly, Nvidia GPUs perform much better with async compute turned off. This is likely because Nvidia has disabled async compute in its driver suite: its GPUs cannot process graphics and compute workloads concurrently at the hardware level and instead require context switching, which is expensive in terms of frame rate. GeForce GPUs rely on a technique called pre-emption instead, which is why frame rates become unpredictable when async compute is forced on.
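The frame-time cost described above can be sketched with a toy timing model. This is not real GPU code, and all of the numbers below are made up for illustration; it simply shows why overlapping compute work into a GPU's idle gaps is cheaper than serializing the two workloads with context switches, as the article describes.

```python
# Toy model (illustrative numbers only, not measured from any real GPU):
# compare per-frame time under the two scheduling strategies described above.

GRAPHICS_MS = 10.0    # time the graphics workload needs
COMPUTE_MS = 4.0      # time the compute workload needs
IDLE_GAP_MS = 3.0     # idle shader time during graphics work that async compute can fill
SWITCH_COST_MS = 0.5  # hypothetical cost of one context switch
SWITCHES = 4          # context switches needed to interleave via pre-emption

# True async compute: compute overlaps with graphics, filling the idle gaps;
# only the overflow that doesn't fit in the gaps adds to the frame time.
async_ms = GRAPHICS_MS + max(0.0, COMPUTE_MS - IDLE_GAP_MS)

# Pre-emption / context switching: the workloads serialize,
# and each switch adds overhead on top.
switching_ms = GRAPHICS_MS + COMPUTE_MS + SWITCHES * SWITCH_COST_MS

print(f"async compute:     {async_ms:.1f} ms/frame")
print(f"context switching: {switching_ms:.1f} ms/frame")
```

With these made-up figures the concurrent path finishes the frame in 11 ms versus 16 ms for the serialized path, which is the gap (and the unpredictability, since switch counts vary per frame) that forcing async compute exposes on hardware that lacks concurrent execution.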

On the other hand, Maxwell remains the only architecture on the discrete GPU market right now that supports DirectX 12 Feature Level 12_1 (Radeon cards only extend up to Feature Level 12_0). This allows existing GeForce cards to use advanced rendering techniques made available by Direct3D 12 that are not available to AMD users; VXGI/VXAO and Hybrid Ray Traced Shadows are examples of such features. So on both sides of the fence, whether you are in the red camp or the green camp, there is an upside and a downside. Pascal and Polaris, however, were supposed to bridge this gap and deliver full DirectX 12 compatibility. Before we let that train of thought run away, though, there is one thing we must keep in mind: the entire point of features like async compute is to maximize the use of a GPU’s resources and extract the highest possible performance.

One reason we think this rumor might be true is that chip design doesn't happen overnight. In fact, it takes an architecture many years to go from the drawing board to the shelves. Asynchronous compute was hyped and became a major point of interest only in the last year, which is most definitely not enough time for Nvidia to do anything about it. If asynchronous compute wasn't a focus when the Pascal chips were initially designed, then there is nothing Nvidia can do about it this late in the game. The report further mentions that Nvidia also recently made its entire GameWorks SDK publicly available via GitHub, which could be seen as a move to make sure that all games that utilize its technology are fully optimized to leverage Nvidia GPU capabilities (no async compute + bad game optimization = bad combo).