Test System & Setup



Main Test System



Processor: Intel Core i7-4930K @ 4.7GHz

Memory: G.Skill Trident 16GB @ 2133MHz 10-10-12-29-1T

Motherboard: ASUS P9X79-E WS

Cooling: Noctua NH-U14S

SSD: 2x Kingston HyperX 3K 480GB

Power Supply: Corsair AX1200

Monitor: Dell U2713HM (1440P) / ASUS PQ321Q (4K)

OS: Windows 8.1 Professional





Drivers:

AMD 14.7 Beta

NVIDIA 344.07 Beta





Notes:



- All games tested have been patched to their latest version



- The OS has had all the latest hotfixes and updates installed



- All scores you see are the averages of 2 benchmark runs



- All IQ settings were adjusted in-game and all GPU control panels were set to use application settings





The Methodology of Frame Testing, Distilled

How do you benchmark an onscreen experience? That question has plagued graphics card evaluations for years. While framerates give an accurate measurement of raw performance, there’s a lot more going on behind the scenes that a basic frames per second measurement from FRAPS or a similar application just can’t show. A good example of this is how “stuttering” can occur yet not be picked up by typical min/max/average benchmarking.
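To make that concrete, consider two hypothetical one-second slices of gameplay that both average exactly 60 FPS. The frame times below are invented purely for illustration; the point is simply that an identical average can hide a very different onscreen experience.

```python
# Two invented one-second slices of gameplay, both totalling exactly 1,000 ms.
# Frame times are in milliseconds; these numbers are illustrative only.
smooth = [1000.0 / 60] * 60        # 60 evenly paced frames (~16.7 ms each)
stutter = [10.0] * 59 + [410.0]    # 59 quick frames plus one 410 ms hitch

for name, times in (("smooth", smooth), ("stutter", stutter)):
    avg_fps = len(times) / (sum(times) / 1000.0)
    print(f"{name}: {avg_fps:.1f} FPS average, worst frame {max(times):.0f} ms")
```

Both slices report an identical 60 FPS average, yet the second one contains a 410 ms hitch that any player would notice.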



Before we go on, a basic explanation of FRAPS’ frames per second benchmarking method is in order. FRAPS determines FPS rates by simply logging how many frames are rendered within each single second. The average framerate is then taken by dividing the total number of rendered frames by the length of the benchmark run. For example, if a 60 second sequence is used and the GPU renders 4,000 frames over the course of that time, the average result will be 66.67 FPS. The minimum and maximum values, meanwhile, are simply two data points: the single one-second intervals during which the fewest and the most frames were rendered. Combining these values gives an accurate, albeit very narrow, snapshot of graphics subsystem performance, but it isn’t quite representative of what you’ll actually see on the screen.
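As a rough sketch of the arithmetic described above (not FRAPS’ actual implementation), the per-second frame counts below are randomly generated stand-ins; only the calculation itself mirrors the paragraph.

```python
import random

# Hypothetical per-second frame counts for a 60 second benchmark run;
# the values are random stand-ins, not real measurements.
random.seed(1)
frames_per_second = [random.randint(55, 80) for _ in range(60)]

total_frames = sum(frames_per_second)
benchmark_length_s = len(frames_per_second)

average_fps = total_frames / benchmark_length_s  # e.g. 4,000 frames / 60 s = 66.67 FPS
minimum_fps = min(frames_per_second)             # the single slowest one-second interval
maximum_fps = max(frames_per_second)             # the single fastest one-second interval

print(f"avg {average_fps:.2f} FPS, min {minimum_fps} FPS, max {maximum_fps} FPS")
```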



FCAT, on the other hand, has the capability to log onscreen average framerates for each second of a benchmark sequence, resulting in the “FPS over time” graphs. It does this by simply logging the reported framerate result once per second. However, in real world terms a single second is actually a long period of time, and the human eye can pick up on onscreen deviations much quicker than this method can report them. So what can actually happen within each second? A whole lot, since each second of gameplay can consist of dozens or even hundreds of frames (if your graphics card is fast enough). This brings us to frame time testing and where the Frame Time Analysis Tool factors into the equation.
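Below is a minimal sketch of how an “FPS over time” series can be built from raw frame completion timestamps, assuming a simple log of when each frame finished. The timestamps here are generated stand-ins, not data captured from FCAT.

```python
from collections import Counter
from itertools import accumulate
import random

# Invent a few seconds' worth of frame durations (8-30 ms each), then turn
# them into cumulative completion timestamps. These are stand-in values.
random.seed(2)
frame_durations_s = [random.uniform(0.008, 0.030) for _ in range(200)]
timestamps_s = list(accumulate(frame_durations_s))

# Bucket frames by the whole second in which they completed; each bucket's
# count is the framerate reported for that second of the run.
# (The final, partial second will naturally under-report in this sketch.)
per_second = Counter(int(t) for t in timestamps_s)
for second in sorted(per_second):
    print(f"second {second}: {per_second[second]} FPS")
```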



Frame times simply represent the length of time (in milliseconds) it takes the graphics card to render and display each individual frame. Measuring the interval between consecutive frames allows for a detailed millisecond-by-millisecond evaluation rather than averaging things out over a full second. The larger that interval, the longer a given frame took to render and display, and the more likely it is to register as a visible hitch. This detailed reporting just isn’t possible with standard benchmark methods.
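Here is a minimal sketch of that interval measurement, using a handful of made-up frame completion timestamps; the 25 ms stutter threshold is an assumption chosen for illustration, not a value taken from the Frame Time Analysis Tool.

```python
# Made-up frame completion timestamps, in milliseconds since the run started.
sample_timestamps_ms = [0.0, 16.5, 33.1, 50.0, 95.2, 111.8, 128.4]

# Frame time = interval between consecutive frame completions.
frame_times_ms = [b - a for a, b in zip(sample_timestamps_ms, sample_timestamps_ms[1:])]

for i, ft in enumerate(frame_times_ms, start=1):
    flag = "  <-- potential stutter" if ft > 25.0 else ""
    print(f"frame {i}: {ft:5.1f} ms{flag}")
```

In this toy data the fifth frame takes roughly 45 ms, a spike that a one-second average would smooth over entirely but that frame time analysis flags immediately.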



We are now using FCAT for ALL benchmark results, other than 4K.