In this Titanfall PC video card benchmark, we look at the FPS of the GTX 760, GTX 650 Ti Boost, GTX 750, R9 270X, R7 260X, 7850, the A10-5800K 7660D APU, and Intel's HD4000. I threw a GTX 580 in there for fun. Our thanks to MSI for providing the 750, 260X, and 270X for these tests.

Titanfall's official launch brings us back to the topic of video card performance in the Source Engine-based game. When we originally benchmarked how various video cards performed in Titanfall, we clearly noted that the pre-release state of the game and lack of official driver support likely contributed to SLI microstuttering, CrossFire catastrophic failure, and overall odd performance. We're now back with a full report using the latest beta drivers (with Titanfall profiles and support) and the full version of the game.

We encountered some interesting performance variance when compared against our original test (some cards performed marginally worse than in the beta), but I've done some analysis and will offer my hypotheses on these differences below.

Note: Titanfall has some official driver support now and has been patched on the game-side to be more supportive of hardware. That said, it's still new, so it is very likely that performance-improving drivers will be released in the coming days and weeks. For these tests, we used the latest (as of 3/10/14) beta drivers for AMD & NVIDIA devices (released today); we used Intel's latest IGP (HD4000) driver for the integrated test, which was released in late February.

As I expand upon in the conclusion, this is a cross-section test meant to sample a variety of common cards in the market right now. We are providing enough information for you to determine the relative position of cards we may have left out.

I will be pasting a large portion of the previous article below as much of the information remains the same. If you already read our first benchmark, you can skip a lot of this.

Titanfall Benchmark - Video Overview & Graphics Showcased

Titanfall's Graphics Settings Explained

I'll get into our test methodology and applied settings in a moment, but first, a look at what Titanfall offers:

CPU-Intensive Settings:

Impact Marks: How many visible landing marks are simultaneously tracked and displayed (from Titans and other falling objects). The CPU handles this task almost exclusively.

Ragdoll physics: The accuracy with which dead body physics are presented is contingent upon the CPU's processing abilities.

GPU-Intensive Settings:

Lighting Quality: The diffusion effects, light count, and types of lights displayed on the screen. This is primarily a GPU-intensive task.

Model Detail: The Level of Detail (LOD) for character models in-game. Lower LOD settings will significantly improve framerate.

Anti-Aliasing: Higher anti-aliasing settings (and certain types of AA) will push GPUs harder. Lowering AA settings and selecting the correct type for your GPU will improve framerate.

Texture Filtering: The perceived depth, grit, and 'pop' of surface details. Higher anisotropic filtering settings will give the illusion of added surface depth (actual texture, feel) to the painted objects on screen.

RAM-Intensive Setting:

Texture Resolution: As we've discussed countless times before on this website, high-resolution textures are the number one consumer of on-card video memory. Amping-up the texture resolution will eat into video RAM if using a dedicated card and system RAM if using an APU/IGP. System RAM will still be consumed for textures even with a discrete card, but to a lesser degree.

Effects details and Shadow details are a fair mix between CPU- and GPU-intensive processing.

Some items to note:

Triple buffering used to be required to make use of 120+Hz refresh rate displays. Since beta, triple buffering has been removed and double buffering now takes on this role.

Titanfall appears to lock the FPS at the monitor's refresh rate (60Hz = 60FPS locked framerate; 120Hz = 120FPS locked framerate, etc.). Ensure your display is set to its maximum refresh rate and that V-Sync is configured properly. You can learn about V-Sync in our previous post, found here; you'll want it either disabled or, if you're using a 120Hz display, double-buffered. There are ways to 'cheat' for a higher FPS with V-Sync off, as we discuss in our crash fixes post.

With all of these items outlined, it's worth offering some optimization tips for weaker or lopsided hardware configurations. If your GPU is underpowered relative to its accompanying CPU, the best graphics settings to reduce in Titanfall are model detail, texture detail (if you're maxing out your RAM), and texture filtering. The opposite case -- a strong GPU and weak CPU -- would do well to minimize ragdoll physics and impact marks, then effects details, then shadow details.

Differences from Last Time: Less Stuttering

It seems that Respawn has dropped triple buffering from the settings, so we've changed V-Sync appropriately to reflect this. In Titanfall's beta, the game exhibited serious frame tearing, micro-stuttering (SLI), and overall "choppiness" during gameplay. Most of this has been resolved with the updated drivers and game. In light of this, we can now run using 'insane' textures without introducing unplayable amounts of screen tearing.

The screen overall was a lot more cohesive in its movements. I'll discuss GPU-specific issues I encountered between AMD and NVIDIA below, after the benchmark results.

Test Methodology

Titanfall appears to lock the framerate to the display, so if you've got a 60Hz display, your GPU will never report more than 60 frames per second. This would obviously have a massive negative impact on relative performance results, so we began investigating both software and hardware solutions. Because Titanfall runs on the Source engine, I spent a bit of time playing around in configuration files in an attempt to force com_maxfps to a greater value, but to no avail. This is not aided by the fact that Respawn has restricted access to the command console within Titanfall. Finally, we stopped being lazy and substituted in a 27" 120Hz display to counteract the issue. Done. Solved.
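For reference, a Source-style override of this type conventionally lives in an autoexec.cfg; below is a sketch of the sort of change we attempted. The exact file location within Titanfall's install is an assumption on our part, and as described above, the game ignored the override either way:

```
// autoexec.cfg -- Source-engine convention; exact path in Titanfall's install is an assumption
// A value of 0 conventionally means "uncapped" in Source-style framerate cvars
// (whether Respawn's fork honors that convention is also an assumption)
com_maxfps 0
```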

The test bench used was our standard hardware review platform, detailed in all HW reviews.

The system was kept in a constant thermal environment (21C - 22C at all times) while under test. We kept the CPU overclocked at 4.4GHz (44x multiplier) running a 1.265V vCore. 4x4GB memory modules were kept overclocked at 2133MHz. All case fans were set to 100% speed and automated fan control settings were disabled for test consistency.

A 120Hz display was connected for purposes of testing Titanfall without a frame-lock limiter. The native resolution of the display is 1920x1080, which is what we used throughout the test for best real-world spread. Double buffering was enabled to allow a 120Hz framerate.

We ran the following settings for all discrete devices under test (DUT):

When testing the A10-5800K Trinity APU with Titanfall, we had to lower settings due to compatibility and performance concerns. For these reasons, the A10 (7660D)'s results on the benchmark are not tested on the same maxed settings as all other devices. See the 7660D's settings below:

The video cards tested included:

We ran each test 5 times for 60 seconds using FRAPS for benchmarking frametimes and framerate. All tests were conducted on the same map in relatively the same area. All tests were conducted on full servers. We did our best to keep a similar approach to play throughout the tests. All five tests for each card were averaged for maximum FPS, minimum FPS, average FPS, and 1% Low FPS (a better indicator than 'minimum' as it eliminates outliers).

Test Objective: Eliminate the CPU as a limiter and test GPU-limited gameplay for framerate performance (exception: APU test, wherein only the APU was used with no discrete component).

Titanfall PC Benchmarks: GTX 750 vs. 260X, R9 270X, GTX 760; HD4000 vs. 7660D

As with our original beta test, I kept loose track of other performance variables during testing. As you may know, the game's preload storage requirement sits at around 48GB (~50GB after patching); a large portion of this (>30GB) is dedicated to audio that, questionably, lacks any noteworthy compression. I won't get into the optimization concerns here, but the key takeaway is that the I/O calls for textures aren't as load-intensive as I originally predicted, having assumed that such a large installation would likely consist of ultra high-resolution textures. System memory still sees utilization ranging from 1.5-3.5GB, depending largely on whether you're using an IGP and other factors I haven't fully dissected (maps, probably).

Note that framerates are more variable on some of the newer maps. We detected regular framedrops on Lagoon (presumably due to the water and some mist effects), and thus eliminated Lagoon from our spread of tests to normalize the results.

Let's get to the results:

From the previous post:

Average FPS: This is the number you care most about. This is the most realistic representation of what you'll experience with this video card in Titanfall.

Minimum FPS: The lowest FPS ever reported (<0.01% of the total frame count). This is an outlier by nature and a less realistic measurement than the next item.

1% Time Low FPS: The (low) FPS displayed 1% of the time; a representation of how low your 'lag spikes' will go when limited by video hardware. This has a serious impact on streaming and video capture.
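For those curious how these metrics fall out of the raw data, here's a minimal sketch of deriving average, minimum, and 1% low FPS from a FRAPS-style frametime log. This is our own illustration, not FRAPS code; it assumes one frametime in milliseconds per frame:

```python
def fps_metrics(frametimes_ms):
    """Compute average, minimum, and 1% low FPS from per-frame times (ms)."""
    # Instantaneous FPS for each frame, sorted ascending (slowest frames first)
    fps = sorted(1000.0 / t for t in frametimes_ms)
    # Time-weighted average: total frames over total elapsed seconds
    avg_fps = len(frametimes_ms) * 1000.0 / sum(frametimes_ms)
    # Single worst frame -- outlier-prone by nature
    min_fps = fps[0]
    # Average FPS across the slowest 1% of frames (at least one frame)
    slowest = fps[: max(1, len(fps) // 100)]
    low_1pct = sum(slowest) / len(slowest)
    return avg_fps, min_fps, low_1pct

# Example: 99 frames at a steady 16 ms, plus one 50 ms spike
avg, mn, low = fps_metrics([16.0] * 99 + [50.0])
```

A single spike barely moves the average but fully defines the minimum, which is why the 1% low is the more honest "worst case" number.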

You'll notice that this time, unlike last time, we have no glaring catastrophic failures during testing. During beta, the HD 4000 failed to run the game at all (it exhibited severe "artifacting," resembling "mesh-tearing," as below). We at least saw a few frames this time, but it's still nowhere near playable. We were unfortunately unable to validate CrossFire this time due to GPU availability on the bench.

Tearing exhibited by the HD4000 during beta is now resolved.

AMD's A10-5800K Trinity APU really impressed me with Titanfall. It's equipped with the 7660D IGP, for the unfamiliar, and is a couple generations behind the current Kaveri APUs (and significantly less powerful). Running on our above-pasted medium-low settings configuration at 1080p, the 5800K was able to output a stable framerate of around 33FPS. Dropping another GPU-heavy setting or texture resolution (from medium to low) would put you closer to 40FPS, but 33FPS is pretty playable for a lot of people running on this sort of budget-focused build. I know it's unacceptable for a lot of you, but we have to look at things relative to their surroundings.

There was a surprising lack of noteworthy tearing with the APU, making it a fairly smooth experience given the mediocre framerate. I'd point to the fact that AMD APU architecture is found in both modern consoles; it would not be surprising if desktop APUs benefit from some of Respawn's Xbox One optimization efforts. This is why we recommended a Kaveri build for ultra-budget Titanfall gaming PCs.

The GTX 580 tanked in these tests. Badly. I'm not sure if it's an optimization issue with Fermi or that particular card, but the device was tested in multiple other games shortly after this (out of concern for the health of the card) and it performed as expected. If anyone else has a GTX 580 and can run their own test iteration, I'd like to see your results -- something seems odd here, and our card looks to be in good health and high performance elsewhere. It also exhibited fairly severe frame stuttering at times, something consistent with the GTX 760 as well (more on that).

NVIDIA's new GTX 750 (which we detailed here) was another shocker. This card, for whatever reason, was one of the 'smoothest' in its visual frame delivery of all the DUT GPUs -- even better than the 760, which should outclass it in all categories. The GTX 750 shipped alongside the 750 Ti, both hosting the 28nm process version of NVIDIA's Maxwell architecture. Our tested 750 was outfitted with the stock 1GB of GDDR5 and handled Titanfall with an average of 49FPS, dipping to 26.8 1% of the time. These are very playable numbers, and considering there's a ton of room to grow (dropping AA/AF, shaders), upwards of 60FPS is achievable with some settings tweaks. It just depends on what you value more -- graphics or a higher framerate. Either way, both choices are reasonable with the 750.

The GTX 650 Ti Boost slightly outclasses the GTX 750, fulfilling its advertised position in NVIDIA's stack. Interestingly, the GTX 650 Ti Boost exhibited the most consistent performance in testing; its average framerate hovered at 51FPS -- the best of what we've covered so far -- and its 1% time FPS sat around 32, with the minimum FPS at 27.5.

We did not notice any severe tearing with this configuration, as noted with the 580 and 760. These results are consistent with our beta benchmark, wherein the GTX 650 Ti Boost also showcased some of the most predictable, consistent frame delivery performance. The card did perform slightly worse in this iteration of the test, but it should be strongly noted that the beta tests and release tests can't necessarily be directly compared due to an entirely new set of variables (like the maps and full-sized assets).

Having ascended the ladder from the HD 4000 to an APU to three NVIDIA units, we now find ourselves back with an AMD device -- the 7850. We tested a 1GB model of the 7850 to provide a bit more of a cross-section of the budget-class GPUs that users would likely have. The Radeon 7850 1GB graphics solution performed admirably in this test, stepping up a rung against its beta performance (driver optimization helps). This is impressive performance for a card that originally retailed in the ~$150 range (and could be had for around $100 before the cryptocurrency boom).

The 260X and 270X performed as expected -- both were smooth and delivered a playable FPS, with the 270X approaching territory that'd let you get away with video capture while retaining higher settings. There was no noteworthy visual artifacting or tearing with either card.

And still, the GTX 760 reigns king of Titanfall, at least as far as our bench goes. I know we've omitted some flagship cards, but quite honestly, it doesn't take an expert to know that everything after the 760 is complete overkill for a single-monitor / 1080p setup. If you're introducing video capture or livestreaming, things could change a bit, but that also hinges heavily upon your software used (ShadowPlay is lighter-weight than FRAPS) and can call on the CPU (streaming).

Conclusion: What's the best video card for my uses in Titanfall?

At the end of the day, frankly, almost any modern budget-or-better video card will run Titanfall well enough to play on medium/medium-high graphics settings. Maxing the game out doesn't take much, either; given the 750's availability and price accessibility, it's reasonable to expect smooth, high-FPS play right around the ~$130-$140 range.

Titanfall feels sub-optimized to me -- it isn't a particularly stunning game, visually, yet it taxes some of these cards harder than more graphically-demanding games do. That said, I am pretty impressed with how well it runs on AMD hardware. A higher-end A10-7850K Kaveri APU (or similar) could run Titanfall smoothly after a few settings tweaks, judging by these results, which is great news for HTPC gamers. The NVIDIA stuttering and micro-stuttering issues, however, are another story -- and it's disappointing to see them carry over from beta into launch.

These tests were structured to cross-section a large portion of the video card market. We've obviously left out many available cards, but you can make choices based on the performance of the above devices by ranking them relative to one another. Please let us know in the comments (or, preferably, on our forums) if you need help building a Titanfall PC or troubleshooting Titanfall!

- Steve "Lelldorianx" Burke.