Another graphics launch so soon? No, you aren’t seeing double. AMD and Nvidia have managed to roll out new mid-range graphics cards literally within days of each other. The two companies definitely aren’t pulling their punches lately.

AMD threw a mean left hook last Friday with the Radeon HD 7790, which features a brand-new graphics processor and a price tag in the $149-159 range. The card isn’t actually due out until early next month, but according to our testing, it trounces even the highest-clocked variants of Nvidia’s GeForce GTX 650 Ti—which also happen to cost more. As icing on the cake, the 7790 will also come bundled with a free copy of BioShock Infinite when it hits stores. Not bad, huh?

Well, insert boxing metaphor here, folks, because here comes Nvidia’s counterpunch. As of this morning, the GTX 650 Ti is yesterday’s news. The new hotness is the GeForce GTX 650 Ti Boost, which promises better performance at a price tag only slightly higher than the Radeon HD 7790’s. As you’re about to discover, there’s more to this card than the name suggests—and it’s good news for gamers on a sub-$200 budget.

A wolf in sheep’s clothing

The name “GeForce GTX 650 Ti Boost” is pretty evocative for the technically inclined. One pictures a card very much like the GTX 650 Ti, still with a partially hobbled GK106 graphics processor, only this time with the same GPU Boost functionality as higher-end members of the GTX 600 series. Thanks to GPU Boost, one goes on to assume, the GTX 650 Ti Boost simply achieves a higher core clock speed than the GTX 650 Ti when thermal headroom allows. This offers a slight performance increase, perhaps just enough to even the contest with the Radeon HD 7790, which is faster than the standard 650 Ti.

That’s partially true. But it’s not the whole story.

In reality, the GeForce GTX 650 Ti Boost is much closer to a full-blown GeForce GTX 660. It has the same number of active ROP clusters, the same memory interface width, and the same two gigabytes of GDDR5 RAM. It also features the same clock speeds—980MHz base, 1033MHz Boost, and 1502MHz (or an effective 6008MT/s) for the GDDR5 memory. Even the reference board design is identical: 9.5″ in length, with one PCI Express power connector and a dual-slot, single-fan cooler that stretches past a stubbier circuit board.

The only difference between the GTX 660 and the GTX 650 Ti Boost is that, in the latter, one of the GK106 graphics chip’s five SMX units is disabled. As a result, the number of ALUs is cut from 960 to 768, and the number of texels filtered per clock is reduced from 80 to 64. The same goes for the vanilla GTX 650 Ti—but in that instance, Nvidia also lops off one of the 64-bit memory controllers and one of the ROP clusters. This leaves the GTX 650 Ti with a 128-bit memory interface and the ability to process only 16 pixels per clock. The 650 Ti Boost has the full 192-bit interface and can process 24 pixels per clock.
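Those unit counts square neatly with the peak rates Nvidia quotes. Here's a rough sanity-check sketch against the 1033MHz Boost clock and 6008MT/s memory transfer rate; the two-flops-per-ALU-per-clock figure (one fused multiply-add) is the usual convention for quoting shader throughput, not something Nvidia spells out here:

```python
# Back-of-the-envelope peak rates for the GTX 650 Ti Boost, from the unit
# counts above and the 1033MHz Boost clock. Assumes 2 flops per ALU per
# clock (one fused multiply-add), the usual convention for such figures.
boost_ghz = 1.033
alus, texels_per_clk, pixels_per_clk = 768, 64, 24
bus_bits, transfer_gts = 192, 6.008

shader_tflops = alus * 2 * boost_ghz / 1000   # peak shader throughput
tex_rate = texels_per_clk * boost_ghz         # Gtexels/s (int8)
rop_rate = pixels_per_clk * boost_ghz         # Gpixels/s
bandwidth_gbs = bus_bits / 8 * transfer_gts   # bytes per transfer * GT/s

print(round(shader_tflops, 1), round(tex_rate), round(rop_rate), round(bandwidth_gbs))
# → 1.6 66 25 144
```

Those four results match the 1.6 tflops, 66 Gtex/s, 25 Gpix/s, and 144 GB/s in the table below.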

Nvidia uses a similar technique to pare down the GK106 chip for the GTX 650 Ti and GTX 650 Ti Boost. In both cases, the company can disable half of one of the two full-width GPCs, or it can prune the third, half-width GPC. Both methods result in the same number of units being disabled, and Nvidia claims there’s no performance difference between the two. As we noted in our GTX 650 Ti review, this approach gives Nvidia flexibility when repurposing defective GK106 chips, whose flaws might be in different regions.

| Card | Base clock (MHz) | Boost clock (MHz) | Peak ROP rate (Gpix/s) | Texture filtering int8/fp16 (Gtex/s) | Peak shader tflops | Rasterization rate (Gtris/s) | Memory transfer rate | Memory bandwidth (GB/s) | Price |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GTX 650 | 1058 | N/A | 8 | 34/34 | 0.8 | 1.1 | 5.0 GT/s | 80 | $99.99 |
| GTX 650 Ti | 925 | N/A | 15 | 59/59 | 1.4 | 1.9 | 5.4 GT/s | 86 | $144.99 |
| GTX 650 Ti Boost | 980 | 1033 | 25 | 66/66 | 1.6 | 2.1 | 6.0 GT/s | 144 | $169.00 |
| GTX 660 | 980 | 1033 | 25 | 83/83 | 2.0 | 3.1 | 6.0 GT/s | 144 | $214.99 |

Here’s how the GTX 650 Ti Boost compares to its compatriots. The $169 price tag is the official suggested e-tail price for the 2GB version of the card; the other prices were pulled from Newegg, where we sought the cheapest representative of each product.

As you can see, the 650 Ti Boost’s peak rates come awfully close to those of the GTX 660. The new card’s only handicaps are reduced shader performance, reduced texturing performance, and lower polygon throughput, which aren’t huge compromises considering the wide price disparity.

Okay, so there is another slight compromise: the free-to-play credit Nvidia bundles with the 650 Ti Boost is worth only $75, or half of what you get with the GeForce GTX 660. The credit is split evenly between World of Tanks, Hawken, and PlanetSide 2, allowing you to buy items and add-ons in each game. It’s not a bad deal for free-to-play junkies, but the credit does feel a little like a second-rate consolation prize compared to AMD’s Never Settle Reloaded bundles. For only $10 more, the Radeon HD 7850 2GB includes free copies of Tomb Raider and BioShock Infinite. And for $20 less, the Radeon HD 7790 ships with BioShock Infinite in the box—a far more exciting offer.

We’re told you can expect to find GeForce GTX 650 Ti Boost cards in stores starting today, which means Nvidia’s counterpunch will actually beat the 7790 to the, uh, punch. Reference-clocked variants of the 650 Ti Boost will sell for $169, and so-called “superclocked” flavors should be available for a little more. If you don’t mind waiting until next month, Nvidia says its partners will also sell 1GB versions of the GeForce GTX 650 Ti Boost for only $149—the exact same price as the stock 7790. Methinks I smell a price war…

EVGA’s GeForce GTX 650 Ti Boost Superclocked

Nvidia sent us a reference version of the GTX 650 Ti Boost, which looks just like the reference GTX 660—and, according to the company, won’t actually be available in stores. The official line is that shipping products will “differ greatly” from it.

Luckily, we have one of those shipping cards in our labs: EVGA’s GeForce GTX 650 Ti Boost Superclocked, which is slated to cost $179.99 when it hits stores tomorrow. At least from the outside, the EVGA card isn’t a drastic departure from the reference design:

It, too, is 9.5″ long. It has the same style of cooler and an identical assortment of display outputs: dual DVI, one DisplayPort, and one HDMI. The cooling shroud looks slightly different, however, and the GPU under it has had a few extra cups of coffee. Instead of using the default 980MHz base clock and 1033MHz Boost clock, EVGA cranks this puppy to 1072MHz and 1137MHz, respectively. You can see the effects of this increase on the card’s peak theoretical rates below:

| Card | Base clock (MHz) | Boost clock (MHz) | Peak ROP rate (Gpix/s) | Texture filtering int8/fp16 (Gtex/s) | Polygon throughput (Mtris/s) | Peak shader tflops | Memory transfer rate (GT/s) | Memory bandwidth (GB/s) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Radeon HD 7770 | 1000 | N/A | 16 | 40/20 | 1000 | 1.3 | 4.5 | 72 |
| Radeon HD 7790 | 1000 | N/A | 16 | 56/28 | 2000 | 1.8 | 6.0 | 96 |
| Sapphire Radeon HD 7790 | 1075 | N/A | 17 | 60/30 | 2150 | 1.9 | 6.4 | 102 |
| Radeon HD 7850 1GB | 860 | N/A | 28 | 55/28 | 1720 | 1.8 | 4.8 | 154 |
| Radeon HD 7850 2GB | 860 | N/A | 28 | 55/28 | 1720 | 1.8 | 4.8 | 154 |
| GeForce GTX 650 Ti | 928 | N/A | 15 | 59/59 | 1856 | 1.4 | 5.4 | 86 |
| Zotac GeForce GTX 650 Ti 2GB AMP! | 1033 | N/A | 17 | 66/66 | 2066 | 1.6 | 6.2 | 99 |
| GeForce GTX 650 Ti Boost | 980 | 1033 | 25 | 66/66 | 2066 | 1.6 | 6.0 | 144 |
| EVGA GeForce GTX 650 Ti Boost SC | 1072 | 1137 | 27 | 73/73 | 2274 | 1.7 | 6.0 | 144 |
| GeForce GTX 560 | 810 | N/A | 26 | 45/45 | 1620 | 1.1 | 4.0 | 128 |
| MSI GeForce GTX 560 Twin Frozr II | 870 | N/A | 28 | 49/49 | 1760 | 1.2 | 4.2 | 134 |

On paper, the 650 Ti Boost Superclocked looks like a suitable competitor for the similarly priced Radeon HD 7850 2GB. Even the vanilla 650 Ti Boost is no slouch, however; it compares quite favorably to the Radeon HD 7790, whose only theoretical advantage seems to be its higher shader throughput. This is all theory, of course—for the practice, turn to the next page.

Our testing methods

I had very little time to put together this review. A GeForce GTX 650 Ti Boost sample arrived at my apartment last Tuesday, but at the time, I was busy benchmarking the Radeon HD 7790 and its rivals. After working multiple 12- to 16-hour days and finally posting the 7790 review at midnight on Friday morning, I was left with exactly four days—including the weekend—to tackle the GeForce GTX 650 Ti Boost.

A few compromises had to be made.

I wound up passing over two cards I had planned to test: the GeForce GTX 660 and the GTX 650 Ti 1GB, since I wasn’t able to obtain samples in time. I also had to re-use results from the 7790 review, since I didn’t have time to benchmark everything again at different settings. You might therefore see the GTX 650 Ti Boost overachieve in some of the tests on the next few pages. Just keep in mind that, if you see frame rates well above 60 FPS (or frame times well below 16.7 ms), chances are the card could happily handle higher detail settings while still staying close to the monitor’s refresh rate.

Anyhow, despite the tight deadline, I was able to supplement the new GeForces with a couple of extra cards in order to provide added context. I underclocked Zotac’s GeForce GTX 650 Ti 2GB AMP! Edition to simulate the company’s non-AMP! model, which retails for $164.99. That’s smack-dab between the 7790 and the stock 650 Ti Boost—a useful reference point. Also, since I didn’t have a vanilla Radeon HD 7850 2GB on hand, I tested an XFX Black Edition model underclocked to match the reference speeds. In terms of performance, this underclocked Black Edition card should be comparable to retail offerings like this one, which sell for $179.99—the same price as EVGA’s GeForce GTX 650 Ti Boost Superclocked.

Oh, and all the Radeons except for the 7790 were re-tested using AMD’s Catalyst 13.3 beta drivers, which include all of the company’s latest frame latency optimizations. The driver AMD sent us for the 7790 review last week also included recent optimizations, but for some reason, they only seem to apply to the 7790—the Radeon HD 7850 and 7770 behave as they do with older driver releases.

You’ll find exact clock speeds and driver version numbers for the aforementioned cards in the last table on this page.

As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and we reported the median results. Our test systems were configured like so:

| Component | Details |
| --- | --- |
| Processor | Intel Core i7-3770K |
| Motherboard | Gigabyte Z77X-UD3H |
| North bridge | Intel Z77 Express |
| South bridge | |
| Memory size | 4GB (2 DIMMs) |
| Memory type | AMD Memory DDR3 SDRAM at 1600MHz |
| Memory timings | 9-9-9-28 |
| Chipset drivers | INF update 9.3.0.1021, Rapid Storage Technology 11.6 |
| Audio | Integrated Via audio with 6.0.01.10800 drivers |
| Hard drive | Crucial m4 256GB |
| Power supply | Corsair HX750W 750W |
| OS | Windows 8 Professional x64 Edition |

| Card | Driver revision | GPU base clock (MHz) | Memory clock (MHz) | Memory size |
| --- | --- | --- | --- | --- |
| Diamond Radeon HD 7770 | Catalyst 13.3 beta | 1000 | 4500 | 1GB |
| Sapphire Radeon HD 7790 | Catalyst 12.101.2.1000 beta | 1075 | 6000 | 1GB |
| XFX Radeon HD 7850 1GB Core Edition | Catalyst 13.3 beta | 860 | 1200 | 1GB |
| XFX Radeon HD 7850 2GB Black Edition (underclocked) | Catalyst 13.3 beta | 860 | 1200 | 2GB |
| MSI GeForce GTX 560 Twin Frozr II | GeForce 314.21 beta | 880 | 1050 | 1GB |
| Zotac GeForce GTX 650 Ti AMP! (underclocked) | GeForce 314.21 beta | 941 | 1350 | 2GB |
| Zotac GeForce GTX 650 Ti AMP! | GeForce 314.21 beta | 1033 | 1550 | 2GB |
| GeForce GTX 650 Ti Boost | GeForce 314.21 beta | 980 | 1502 | 2GB |
| EVGA GeForce GTX 650 Ti Boost Super OC | GeForce 314.21 beta | 1072 | 1502 | 2GB |

Thanks to AMD, Corsair, and Crucial for helping to outfit our test rig. Asus, EVGA, Diamond, MSI, Nvidia, Sapphire, XFX, and Zotac have our gratitude, as well, for supplying the various graphics cards we tested.

Image quality settings for the graphics cards were left at the control panel defaults, except on the Radeon cards, where surface format optimizations were disabled and the tessellation mode was set to “use application settings.” Vertical refresh sync (vsync) was disabled for all tests.

We used the following test applications:

Some further notes on our methods:

We used the Fraps utility to record frame rates while playing a 90-second sequence from the game. Although capturing frame rates while playing isn’t precisely repeatable, we tried to make each run as similar as possible to all of the others. We tested each Fraps sequence five times per video card in order to counteract any variability. We’ve included frame-by-frame results from Fraps for each game, and in those plots, you’re seeing the results from a single, representative pass through the test sequence.

We measured total system power consumption at the wall socket using a P3 Kill A Watt digital power meter. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement. The cards were plugged into a motherboard on an open test bench. The idle measurements were taken at the Windows desktop with the Aero theme enabled. The cards were tested under load running Skyrim at its High quality preset.

We measured noise levels on our test system, sitting on an open test bench, using a TES-52 digital sound level meter. The meter was held approximately 8″ from the test system at a height even with the top of the video card. You can think of these noise level measurements much like our system power consumption tests, because the entire system's noise level was measured. Of course, noise levels will vary greatly in the real world depending on the acoustic properties of the PC enclosure used, whether the enclosure provides adequate cooling to avoid a card's highest fan speeds, the enclosure's placement in the room, and a whole range of other variables. These results should give a reasonably good picture of comparative fan noise, though.

We used GPU-Z to log GPU temperatures during our load testing.

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Tomb Raider

Developed by Crystal Dynamics, this reboot of the famous franchise features a more believable Lara Croft who, as the game progresses, sheds her fear and vulnerability to become a formidable killing machine. I tested Tomb Raider by running around a small mountain area, which is roughly 10% of the way into the single-player campaign.

This is a rather impressive-looking game that’s clearly designed to take full advantage of high-end gaming PCs. The Ultra and Ultimate detail presets were too hard on these cards, so I had to settle for the High preset and leave the game’s TressFX hair physics disabled. Testing was done at 1080p.

| Frame time (ms) | FPS rate |
| --- | --- |
| 8.3 | 120 |
| 16.7 | 60 |
| 20 | 50 |
| 25 | 40 |
| 33.3 | 30 |
| 50 | 20 |

Let’s preface the results below with a little primer on our testing methodology. Along with measuring average frames per second, we delve inside the second to look at frame rendering times. Studying the time taken to render each frame gives us a better sense of playability, because it highlights issues like stuttering that can occur—and be felt by the player—within the span of one second. Charting frame times shows these issues clear as day, while charting average frames per second obscures them.

To get a sense of how frame times correspond to FPS rates, check the table above.
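The two units are simple reciprocals of one another, which a quick sketch makes plain:

```python
# Frame time and FPS are reciprocals: fps = 1000 / frame_time_ms. A steady
# 60Hz refresh therefore demands that every frame finish in about 16.7 ms.
def to_fps(frame_time_ms):
    return 1000.0 / frame_time_ms

print(round(to_fps(16.7), 1))  # → 59.9
print(round(to_fps(33.3), 1))  # → 30.0
```

The crucial difference is that FPS averages over a whole second, while frame times expose each individual frame.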

We’re going to start by charting frame times over the totality of a representative run for each system. (That run is usually the middle one out of the five we ran for each card.) These plots should give us an at-a-glance impression of overall playability, warts and all. You can click the buttons below the graph to compare different cards.





We can slice and dice our raw frame-time data in several ways to show different facets of the performance picture. Let's start with something we're all familiar with: average frames per second. Average FPS is widely used, but it has some serious limitations, since a handful of long frames can hide inside an otherwise healthy average. Another way to summarize performance is to consider the threshold below which 99% of frames are rendered, which offers a sense of overall frame latency while excluding fringe cases. (The lower the threshold, the more fluid the game.)
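A minimal sketch of both summaries, using made-up frame times, shows why the 99th percentile is more revealing. The lone 48.3-ms spike barely dents the FPS average, but it dominates the percentile figure:

```python
import numpy as np

# Hypothetical frame times (ms) from one benchmark run. Seven smooth
# frames plus one nasty 48.3 ms hitch.
frame_times = np.array([15.2, 14.9, 16.0, 15.5, 48.3, 15.8, 15.1, 14.2])

avg_fps = 1000 / frame_times.mean()   # the average hides the spike
p99 = np.percentile(frame_times, 99)  # 99% of frames render faster than this

print(round(avg_fps, 1), round(p99, 1))  # → 51.6 46.0
```

An average of nearly 52 FPS looks respectable, yet the 46-ms 99th-percentile result flags the hitch immediately.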

The 99th percentile result only captures a single point along the latency curve, but we can show you that whole curve, as well. With single-GPU configs like these, the right-hand side of the graph—and especially the last 5% or so—is where you'll want to look. That section tends to be where the best and worst solutions diverge.





Finally, we can rank the cards based on how long they spent working on frames that took longer than a certain number of milliseconds to render. Simply put, this metric is a measure of “badness.” It tells us about the scope of delays in frame delivery during the test scenario. Here, you can click the buttons below the graph to switch between different millisecond thresholds.
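One common formulation of this metric, sketched here with hypothetical frame times, sums the time by which each slow frame overshoots the threshold:

```python
# A sketch of the "time spent beyond X" badness metric: for every frame
# that takes longer than the threshold, accumulate the excess time.
def time_beyond(frame_times_ms, threshold_ms):
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

# Two frames blow past a 33.3 ms threshold, by 16.7 ms and 1.7 ms.
print(round(time_beyond([15.0, 20.1, 50.0, 35.0], 33.3), 1))  # → 18.4
```

The result is expressed in milliseconds of "badness," so it can be weighed directly against the length of the test run.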





Nvidia clearly has the upper hand in Tomb Raider. The $169, reference version of the GeForce GTX 650 Ti Boost manages to outpace even the Radeon HD 7850 2GB, which sells for at least $180 at Newegg right now.

That said, I should point out a couple of caveats.

First, AMD tells us Crystal Dynamics' latest Tomb Raider patch improves performance by up to 25% on Radeons. We were told to expect this update last Friday, but it didn't come out until Monday, too late for me to re-test. Compounding the problem, Nvidia claims the patch may negatively affect performance with GeForce GPUs. Nvidia has previously complained about Crystal Dynamics not giving it sufficient time to optimize its drivers, and from what I hear, the two firms' coordination issues are ongoing.

Second, the talents of our higher-end contenders are clearly wasted at these settings. With a card like the GTX 650 Ti Boost or the Radeon HD 7850 2GB, you’d want to crank up the detail to at least the “Ultra” preset. Doing so would enable tessellation and DirectCompute-accelerated hair physics, which could change the competitive picture to some degree. We’ll have to do more testing to find out.

Crysis 3

Yep. This is the new Crysis game. There’s not much else to say, except that this title has truly spectacular graphics. To test it, I ran from weapon cache to weapon cache at the beginning of the Welcome to the Jungle level for 60 seconds per run.

I tested at 1080p using the medium detail preset with high textures and medium SMAA antialiasing.





Both versions of the GTX 650 Ti Boost have problems with uneven frame delivery—and frequent latency spikes—in Crysis 3. Those problems don’t seem to hurt the cards’ FPS rankings:

However, our 99th-percentile frame time metric shows the GTX 650 Ti Boost cards at the back of the pack. Shockingly, the reference model even falls below the Radeon HD 7770. (The results are all awfully close, though.)





Our percentile curves nicely illustrate the problem. While our two GTX 650 Ti Boost variants have lower 50th-percentile frame times than the competition, those times start to rise around the 80th percentile. The Radeons—and even the other GeForces—don’t really begin to ramp up dramatically until above the 95th percentile, which suggests that they maintain both lower and more consistent frame times throughout a longer stretch of the run.





The early rise of their latency curves might look ugly, but the GTX 650 Ti Boost cards don’t look so bad in our “time spent beyond” metric. They fare poorly in the beyond-33.3-ms rankings, but not dramatically so. (493 ms is less than 1% of the 60-second run time.) What gives?

Well, look back up at the frame-by-frame graphs. Most of the see-saw pattern is sandwiched between 5 ms and 30 ms or so, with very few spikes above that threshold. The problem in this case isn’t really a propensity for long spikes that make the gameplay stutter (which our “time spent beyond” graphs isolate), but a very rapid oscillation between long and short frame times. If small, such an oscillation isn’t a problem. In this case, however, the oscillation is large enough to disrupt gameplay. It feels a little like wading through water: animation seems to speed up and slow down randomly, and input ranges from very responsive to noticeably laggy.

Considering the GTX 650 Ti 2GB doesn’t seem affected despite using the same GPU with fewer units at a lower frequency, I’m tentatively going to chalk up this problem to a driver optimization issue. I’m not sure why the GTX 650 Ti Boost would behave so differently otherwise.

Borderlands 2

For this test, I shamelessly stole Scott’s Borderlands 2 character and aped the gameplay session he used to benchmark the Radeon HD 7950 and GeForce GTX 660 Ti. The session takes place at the start of the “Opportunity” level. As Scott noted, this section isn’t precisely repeatable, because enemies don’t always spawn in the same spots or attack in the same way. We tested five times per GPU and tried to keep to the same path through the level, however, which should help compensate for variability.

I tested at 1920×1080. All other graphics settings were maxed out except for hardware-accelerated PhysX, which isn’t supported on the Radeons.













With no bizarre latency inconsistencies to spoil the fun, the GeForce GTX 650 Ti Boost (and its superclocked sibling) returns to the top of the scoreboard in Borderlands 2. This is another example of a game where the new $170-180 cards are so fast that higher detail settings would be called for. In this case, I'd need a 2560×1440 display, since I'm already running Borderlands with the detail maxed out at 1080p.

By the way, the Radeon HD 7770 and 7850 1GB fare much better here than they did in our Radeon HD 7790 review. That’s all thanks to the new Catalyst 13.3 beta driver. Somehow, the previous driver we used—the one AMD sent us last week—included latency optimizations for the 7790 and not other Radeons. Strange.

Sleeping Dogs

I haven’t had a chance to get very far into Sleeping Dogs myself, but TR’s Geoff Gasior did, and he got hooked. From the small glimpse I’ve received of the game’s open-world environment and martial-arts-style combat, I think I can see why.

The game’s version of Hong Kong seems to be its most demanding area from a performance standpoint, so that’s what I benchmarked. I took Wei Shen on a motorcycle joyride through the city, trying my best to remember I was supposed to ride on the left side of the street.

I benchmarked Sleeping Dogs at 1920×1080 using a tweaked version of the “High” quality preset, with vsync disabled and SSAO bumped down to “Normal.” The high-resolution texture pack was installed, too.













The see-saw pattern is back, except this time, all the GTX 600-series cards seem to be affected. The Radeons do a better job of maintaining consistently low frame times throughout the test run, as our 99th-percentile graph and latency curves illustrate. Subjectively, I didn’t notice any of the weird slowing and speeding up that I saw in Crysis 3. However, the game did feel a little less responsive to input on the GeForces. That made driving a tad more difficult.

Again, the Radeon 7770 and 7850 cards all put together a stronger showing than they did last week. With the previous driver release we tested, those offerings felt even worse than the GeForces—sluggish and noticeably choppy.

The Elder Scrolls V: Skyrim

Here, too, I borrowed Scott’s test run, which involves a walk through the moor not far from the town of Whiterun—and perilously close to a camp of Giants.

The game was run at 1920×1080 using the “Ultra” detail preset. The high-resolution texture pack was installed, as well.













The new GeForces return to the top of the standings in Skyrim.

The Radeon HD 7850 1GB and 2GB aren’t far behind according to our 99th-percentile frame time metric, but their occasional latency spikes seem to be taller and more frequent. Our “time spent beyond” graphs bear that out. At the 16.7-ms and 33.3-ms thresholds, the Radeons are clearly behind the GeForces—and that’s despite using the new Catalyst 13.3 betas.

Battlefield 3

I tested Battlefield 3 by playing through the start of the Kaffarov mission, right after the player lands. Our 90-second runs involved walking through the woods and getting into a firefight with a group of hostiles, who fired and lobbed grenades at us.

I kept things simple, using the game’s “High” detail preset at 1080p.













Chalk up one last win for the 650 Ti Boost and 650 Ti Boost Superclocked. All the current-gen cards have similar latency plots and curves, and subjectively, there’s no appreciable difference between how the game plays on each of them. However, the new GeForces do crank out more frames—and lower-latency frames—than their peers.

Power consumption

Under load, the GeForce GTX 650 Ti Boost draws substantially more power than other cards in its price range. The differences at idle only amount to a few watts, though.

(I should reiterate that I underclocked faster cards to emulate both the Radeon HD 7850 2GB and the GeForce GTX 650 Ti 2GB. As a result, the data for those cards may not be exactly representative.)

Noise levels and GPU temperatures

Both the reference 650 Ti Boost and the EVGA variant have hamster wheel-style blowers, and they’re not particularly quiet, as the numbers above demonstrate. I could hear a distinct growling or grumbling coming from the fans on both models, even at idle. When running a game, the growling turned into a sort of buzzing, not unlike the sound of an old fluorescent tube—and there was white noise added to the mix. The noise might be forgivable if the cards stayed particularly cool, but they don’t appear to.

The Radeon HD 7850 1GB was also a little loud under load. The XFX Black Edition we underclocked to simulate a reference-clocked 7850 2GB was practically silent, no doubt thanks to its dual-fan cooler, though it did run a little hot. For what it’s worth, the cheapest card XFX offers with the same cooler costs $230. Other card vendors make cheaper dual-fan versions of the 7850 2GB—this Gigabyte model is available for $195.

Conclusions

We’ll once again wrap things up with a couple of value scatter plots. In both plots, the performance numbers are geometric means of data points from all the games we tested. The first plot shows 99th-percentile frame times converted into FPS for easier reading; the second plot shows simple FPS averages. Prices were fetched from Newegg, the GPU vendors, and the card makers, depending on what was appropriate. The best deals should reside near the top left of each plot, where performance is high and pricing is low. Conversely, the least desirable offerings should be near the bottom right.
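The summary behind each point on the first plot can be sketched like so, using hypothetical per-game numbers (a geometric mean keeps one outlier game from dominating the way an arithmetic mean would):

```python
import math

# Per-game 99th-percentile frame times in ms (hypothetical numbers),
# summarized with a geometric mean and converted to FPS for plotting.
frame_times_99th = [21.5, 28.0, 16.9, 18.2, 24.4, 19.6]

geomean_ms = math.prod(frame_times_99th) ** (1 / len(frame_times_99th))
fps_99th = 1000 / geomean_ms  # the y-axis value on the scatter plot

print(round(geomean_ms, 1), round(fps_99th, 1))  # → 21.1 47.4
```

Pair that y-axis value with the card's price on the x-axis and you have one dot on the chart.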



That pile-up at the top of the 99th-percentile plot is caused by the Radeon HD 7850 1GB, the Radeon HD 7850 2GB, and the GeForce GTX 650 Ti Boost SC overlapping.

And that tells you much of what you need to know.

In a strange reversal of roles, it’s Nvidia who has suffered from unruly frame latencies this time. Both versions of the GTX 650 Ti Boost fared poorly in Crysis 3 and Sleeping Dogs. They scored easy victories in the other games, but their 99th-percentile frame latencies weren’t substantially lower than those of the Radeons. Averaging the scores with a geometric mean gives us a virtual tie between the like-priced AMD and Nvidia cards. (The average FPS rankings tell a different story, but we think the 99th percentile metric paints a more accurate picture of in-game performance.)

Now, to be fair, it seems highly probable that Nvidia’s issues in Crysis 3 are due to a temporary driver bug rather than a deep-seated problem. The standard 650 Ti is unaffected, even though it’s based on the same GPU. A fix may not be forthcoming for Sleeping Dogs, which has been out for seven months and exhibited problems on all the GeForce cards. AMD has gotten cozy with the studios behind many of the latest triple-A PC releases, and the recent troubles with Crystal Dynamics suggest this may have been done at Nvidia’s expense. GeForce owners may encounter lackluster optimizations in some games.

But let’s not get bogged down in speculation. Right now, the Radeon HD 7850 2GB offers equivalent performance per dollar to the GeForce GTX 650 Ti Boost according to our 99th-percentile metric. It consumes less power, as well, which makes it easier to cool quietly. Last, but not least, it comes with a much more tantalizing game bundle: Tomb Raider and BioShock Infinite. Given those factors alone, the Radeon looks like the better choice—even if you have to pay a $10 premium over the vanilla GTX 650 Ti Boost.

The 650 Ti Boost has definitely made the 7790 less appealing, though. For $20 more, the Nvidia card opens the door to higher resolutions and detail settings. And there’s a good chance the 1GB, $149 version of the 650 Ti Boost due next month will also be quicker than its Radeon competition. Our scatter plots show a negligible difference between the 1GB and 2GB versions of the Radeon HD 7850 at the 1080p resolution and detail settings we used. If there’s a similarly small gap between the 1GB and 2GB flavors of the 650 Ti Boost, then Nvidia could very well knock the 7790 out cold.

In any event, it’s nice to see all this activity in the $150-200 price range. That’s the sweet spot for folks who game on 1080p monitors, and the more choices there are, the better.