The GeForce GTX 1650 has potential. It needs to be less expensive, for starters, and we'd like to see a card that draws all of its power from the PCIe slot. For the time being, AMD's Radeon RX 570 8GB is faster, less expensive, and better able to handle games with big memory requirements.


Nvidia GeForce GTX 1650 4GB Review

AMD's Radeon RX 570 launched almost exactly two years ago. Back then, nobody could have anticipated that the card, based on an even older Ellesmere GPU, would score a fresh win in 2019. But here we are, benchmarking Nvidia’s new GeForce GTX 1650 4GB against the Radeon RX 570 8GB and finding AMD’s board to be not only faster, but in some cases less expensive as well.

Surely, Nvidia has some advantage in this competition. Right?

Well, the GeForce GTX 1650 and its TU117 processor are technically rated for 75W of power consumption, putting them in that rare category of gaming graphics cards capable of pulling all the current they need from a PCI Express slot. Except the sample we’re testing has a six-pin auxiliary connector along its top edge. And if you don’t use it, “PLEASE POWER DOWN AND CONNECT THE PCIe POWER CABLE(S) FOR THIS GRAPHICS CARD” appears as soon as you boot up.

Of course, it’s not all doom and gloom for the GeForce GTX 1650. We were able to confirm the existence of multiple models that don’t require external power. Even those that do should use about half the power of AMD’s Radeon RX 570 under load, making them far more efficient. How does Nvidia achieve such an advantage? It’s all in the Turing architecture…

TU117: A New GPU With Familiar Tricks

The GPU at the heart of GeForce GTX 1650 is called TU117-300-A1, and it’s trimmed down even more than GeForce GTX 1660’s TU116 processor. Not surprisingly, TU117 is quite a bit smaller than TU116: it comprises 4.7 billion transistors in a 200 mm² die. The chip is still manufactured using TSMC’s 12nm FinFET process and naturally lacks the RT and Tensor cores so commonly associated with Turing.

Some of the architecture’s other features do rub off on TU117, though. Like the higher-end GeForce RTX 20-series cards, GeForce GTX 1650 supports simultaneous execution of FP32 arithmetic instructions, which constitute most shader workloads, and INT32 operations (for addressing/fetching data, floating-point min/max, compare, etc.).

Turing’s Streaming Multiprocessors are composed of fewer CUDA cores than Pascal’s, but the design compensates in part by spreading more SMs across each GPU. The newer architecture assigns one scheduler to each set of 16 CUDA cores (2x Pascal), along with one dispatch unit per 16 CUDA cores (same as Pascal). Four of those 16-core groupings comprise the SM, along with 96KB of cache that can be configured as 64KB L1/32KB shared memory or vice versa, and four texture units. Because Turing doubles up on schedulers, it only needs to issue an instruction to the CUDA cores every other clock cycle to keep them full. In between, it's free to issue a different instruction to any other unit, including the INT32 cores.
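As a quick sanity check, the per-SM numbers above work out as follows (a Python sketch of the article's figures; the constant names are ours, not Nvidia's):

```python
# Back-of-the-envelope tally of Turing SM resources as described above
# (our arithmetic from the article's figures, not an official breakdown).
CORES_PER_PARTITION = 16      # each partition pairs 16 CUDA cores with
PARTITIONS_PER_SM = 4         # one scheduler and one dispatch unit

fp32_cores_per_sm = CORES_PER_PARTITION * PARTITIONS_PER_SM
schedulers_per_sm = PARTITIONS_PER_SM   # twice Pascal's scheduler-to-core ratio

print(fp32_cores_per_sm)  # 64 FP32 cores per SM
print(schedulers_per_sm)  # 4 schedulers per SM
```

With four schedulers feeding 64 cores, each scheduler only has to issue to its CUDA cores every other cycle, which is what frees the alternate cycles for INT32 work.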

In TU117, Nvidia replaces Turing’s Tensor cores with 128 dedicated FP16 cores per SM, which allow GeForce GTX 1650 to process half-precision operations at 2x the rate of FP32. TU106, TU104, and TU102 boast double-rate FP16 as well through their Tensor cores, so TU117’s configuration serves to maintain that standard through hardware put in place specifically for this GPU. The following chart is an updated version of the one published in our GeForce GTX 1660 review, which illustrates TU117’s massive improvement to half-precision throughput compared to GeForce GTX 1060 and its Pascal-based GP106 chip.
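The double-rate claim is easy to translate into peak throughput (our estimate, using the card's official Boost clock and counting a fused multiply-add as two operations):

```python
# Rough peak-throughput math for GTX 1650's dedicated FP16 cores
# (our estimate: an FMA counts as two floating-point operations).
cuda_cores = 896
boost_clock_hz = 1665e6          # official GPU Boost specification

fp32_tflops = cuda_cores * 2 * boost_clock_hz / 1e12
fp16_tflops = 2 * fp32_tflops    # FP16 runs at twice the FP32 rate on TU117

print(round(fp32_tflops, 2))  # ≈ 2.98
print(round(fp16_tflops, 2))  # ≈ 5.97
```

Real-world rates depend on the clocks a given board sustains, so treat these as ceilings rather than measurements.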

In addition to the Turing architecture’s shaders and unified cache, TU117 also supports a pair of algorithms called Content Adaptive Shading and Motion Adaptive Shading, together referred to as Variable Rate Shading. We covered this technology in Nvidia’s Turing Architecture Explored: Inside the GeForce RTX 2080. That story also introduced Turing's accelerated video encode capabilities, which carried over to GeForce GTX 1660 but did not make it into GeForce GTX 1650. A screen capture from Nvidia’s website refers to the 1650’s NVENC engine as Volta-class, which, for encoding purposes, makes it equivalent to Pascal’s.

That means support for H.265 8K encode at 30 FPS is gone, along with the 25% bitrate savings for HEVC and up to 15% bitrate savings for H.264 that Nvidia touted when Turing launched.

Putting It All Together…

Whereas GeForce GTX 1660 is armed with 22 Streaming Multiprocessors, the 1650 features just 14 SMs spread across two Graphics Processing Clusters. One GPC hosts four Texture Processing Clusters and the other has three. With 64 FP32 cores per SM, we end up with 896 active CUDA cores and 56 usable texture units.
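The shader counts fall straight out of that configuration (our arithmetic; Turing places two SMs in each Texture Processing Cluster):

```python
# How GTX 1650's shader counts follow from its SM configuration
# (our arithmetic; Turing pairs two SMs per Texture Processing Cluster).
tpcs = 4 + 3                 # one GPC with four TPCs, one with three
sms = tpcs * 2               # 14 Streaming Multiprocessors
cuda_cores = sms * 64        # 64 FP32 cores per SM
texture_units = sms * 4      # four texture units per SM

print(sms, cuda_cores, texture_units)  # 14 896 56
```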

Board partners will undoubtedly target a range of frequencies to differentiate their cards. However, the official base clock rate is 1,485 MHz with a GPU Boost specification of 1,665 MHz. Both of those numbers trail GeForce GTX 1660’s clocks, so in addition to losing on-die resources, GeForce GTX 1650 operates at lower frequencies, too.

Since Gigabyte doesn’t seem entirely content with those specs, we’re testing a GeForce GTX 1650 Gaming OC 4G with its GPU Boost clock set to 1,815 MHz. The card had no trouble maintaining a range between 1,890 and 1,920 MHz through three runs of Metro: Last Light.

Four 32-bit memory controllers give TU117 an aggregate 128-bit bus, which is populated by 8 Gb/s GDDR5 modules pushing up to 128 GB/s. That’s only about 14% more than the 112 GB/s available to GeForce GTX 1050/1050 Ti.
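Peak bandwidth here is just bus width times data rate (our arithmetic, with the GTX 1050/1050 Ti's 7 Gb/s GDDR5 as the comparison point):

```python
# Memory bandwidth from bus width and data rate (our arithmetic).
bus_bits = 128
data_rate_gbps = 8.0                            # 8 Gb/s GDDR5

bandwidth_gbs = bus_bits / 8 * data_rate_gbps   # 128 GB/s peak

# GTX 1050/1050 Ti run 7 Gb/s GDDR5 on the same 128-bit bus.
old_bandwidth_gbs = bus_bits / 8 * 7.0          # 112 GB/s
# Note: 128 vs. 112 GB/s is ~14% more; equivalently, 112 is 12.5% below 128.
gain = bandwidth_gbs / old_bandwidth_gbs - 1

print(bandwidth_gbs)          # 128.0
print(round(gain * 100, 1))   # 14.3
```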

Each memory controller is associated with eight ROPs and a 256KB slice of L2 cache, totaling 32 ROPs and 1MB of L2 across TU117. Similar to TU116, this chip’s L2 cache slices are half as large compared to TU106.
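Those back-end totals follow the same per-controller pattern (our arithmetic from the article's figures):

```python
# ROP and L2 totals per the four-controller layout described above
# (our arithmetic from the article's figures).
memory_controllers = 4
rops = memory_controllers * 8            # eight ROPs per controller
l2_kb = memory_controllers * 256         # one 256KB L2 slice per controller

print(rops)           # 32
print(l2_kb // 1024)  # 1 (MB)
```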

All of the cutting is good for a 45W reduction in power consumption compared to GeForce GTX 1660 and 1660 Ti. By hitting the magic 75W threshold, Nvidia can claim that GeForce GTX 1650 doesn’t need an auxiliary power connector. But pay close attention as you shop—some implementations lack a six-pin connector, while others (like ours) require one. If your card needs external power, attaching that connector won’t be optional.

| | GeForce GTX 1650 | GeForce GTX 1660 | GeForce GTX 1660 Ti | GeForce GTX 1060 FE |
| --- | --- | --- | --- | --- |
| Architecture (GPU) | Turing (TU117) | Turing (TU116) | Turing (TU116) | Pascal (GP106) |
| CUDA Cores | 896 | 1408 | 1536 | 1280 |
| Peak FP32 Compute | 3 TFLOPS | 5 TFLOPS | 5.4 TFLOPS | 4.4 TFLOPS |
| Tensor Cores | N/A | N/A | N/A | N/A |
| RT Cores | N/A | N/A | N/A | N/A |
| Texture Units | 56 | 88 | 96 | 80 |
| Base Clock Rate | 1485 MHz | 1530 MHz | 1500 MHz | 1506 MHz |
| GPU Boost Rate | 1665 MHz | 1785 MHz | 1770 MHz | 1708 MHz |
| Memory Capacity | 4GB GDDR5 | 6GB GDDR5 | 6GB GDDR6 | 6GB GDDR5 |
| Memory Bus | 128-bit | 192-bit | 192-bit | 192-bit |
| Memory Bandwidth | 128 GB/s | 192 GB/s | 288 GB/s | 192 GB/s |
| ROPs | 32 | 48 | 48 | 48 |
| L2 Cache | 1MB | 1.5MB | 1.5MB | 1.5MB |
| TDP | 75W | 120W | 120W | 120W |
| Transistor Count | 4.7 billion | 6.6 billion | 6.6 billion | 4.4 billion |
| Die Size | 200 mm² | 284 mm² | 284 mm² | 200 mm² |
| SLI Support | No | No | No | No |
