After Effects CC 2018: NVIDIA GeForce vs AMD Radeon Vega
Written on August 2, 2018 by Matt Bach
Read this article at https://www.pugetsystems.com/guides/1210

Always look at the date when you read a hardware article. Some of the content in this article is most likely out of date, as it was written on August 2, 2018. For newer information, see our more recent articles.

Introduction

While GPU acceleration has become fairly common in Adobe applications, in most situations it is much more important to have a powerful CPU, plenty of RAM, and fast enough storage. Despite this, a popular request we get is to compare AMD's Radeon Vega video cards to NVIDIA's GeForce cards. In previous articles we have compared these cards in Premiere Pro, Photoshop, and Media Encoder, and today we will be rounding out our Adobe testing with After Effects.

It is worth noting that while we will be focusing on After Effects performance in this article, choosing a specific GPU is a much more complicated topic. Many other factors, including current pricing, reliability, power draw, noise level, and available cooler designs, all need to be considered. If you would like to skip over our test setup and benchmark result/analysis sections, feel free to jump right to the Conclusion section.

RAM Preview - Benchmark Analysis

Since AE version 2015, we have seen a very sharp split in the type of CPU that works best for After Effects. While in the past a CPU with lots of cores was great for everything, most effects and tasks in AE now perform better with a CPU that has fewer cores but a higher operating frequency. The exception is if you utilize the Cinema 4D CPU renderer, where a high number of CPU cores can still make a massive difference in performance. Due to this, we have separated our testing results between "standard" projects and those utilizing the Cinema 4D CPU renderer.

Before we get into the results themselves, we want to explain the scoring system we used to represent the average performance we saw with each GPU. In essence, a score of "20" means that on average the card was able to play our projects at 20% of each project's defined FPS. A perfect score would be "100", which would mean the system was able to play everything back in real time, although with the difficult projects we use for testing this should never actually occur.

Starting with the projects that use the C4D renderer, we saw almost no difference in performance between any of the GPUs we tested. Since rendering with C4D is so heavy on the CPU, this is to be expected, as the CPU will almost always be the performance bottleneck. For the standard projects, however, we saw noticeably higher performance from the NVIDIA GeForce cards.

While pricing varies widely based on numerous factors like current sales or the popularity of bitcoin mining, in general you can think of the following cards as costing roughly the same amount:

AMD Radeon RX 580 8GB ~ NVIDIA GeForce GTX 1060 6GB
AMD Radeon Vega 56 8GB ~ NVIDIA GeForce GTX 1070 Ti 8GB
AMD Radeon Vega 64 8GB ~ NVIDIA GeForce GTX 1080 8GB

Using this as our primary point of comparison, we saw roughly 15% higher performance with the NVIDIA GeForce cards over their AMD Radeon equivalents when playing our standard projects.
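The scoring math described above is simple enough to sketch in a few lines of Python. This is a minimal illustration of how a RAM Preview score could be computed; the project frame rates and measured playback rates below are hypothetical placeholders, not our actual test data.

```python
# Hypothetical measurements: each project's defined composition FPS
# and the playback FPS a given GPU achieved during RAM Preview.
# These numbers are placeholders for illustration only.
projects = [
    {"target_fps": 29.97, "playback_fps": 6.2},
    {"target_fps": 59.94, "playback_fps": 11.5},
    {"target_fps": 29.97, "playback_fps": 5.1},
]

def ram_preview_score(projects):
    """Average playback speed as a percentage of real time.

    A score of 100 would mean every project played back at its full
    composition frame rate; a score of 20 means 20% of real time.
    """
    ratios = [p["playback_fps"] / p["target_fps"] * 100 for p in projects]
    return sum(ratios) / len(ratios)

print(round(ram_preview_score(projects), 1))
```

With these placeholder numbers the card would score roughly 19, i.e. it played the test projects back at about 19% of real time on average.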

Final Render - Benchmark Analysis

Once again, since the results vary widely depending on whether or not you utilize the Cinema 4D CPU renderer, we have separated our testing results between "standard" projects and those utilizing the C4D renderer. In case you missed the explanation in the previous section, the score shown in the chart above represents the average performance of each GPU in this test: a score of "10" means that on average the card was able to export or render our projects at 10% of each project's set FPS.

Similar to the RAM Preview tests, the GPU really doesn't have an impact for the projects that use the C4D renderer. For the standard projects, however, the AMD Radeon cards were slightly faster than their NVIDIA GeForce counterparts. The margin was only about 2%, however, so in the real world you likely wouldn't notice much of a difference.

Conclusion

The overall score in the chart above is a weighted average of our testing results based on what our customers tend to be the most concerned about. RAM Preview of standard projects makes up 40% of the score, while RAM Preview (C4D Renderer), Final Render (Standard), and Final Render (C4D Renderer) each contribute 20%. Weighted in this manner, the results give a pretty good estimation of what most users can expect in After Effects with each of these cards.

Using the same rough pricing equivalents we used earlier (RX 580 ~ GTX 1060, Vega 56 ~ GTX 1070 Ti, and Vega 64 ~ GTX 1080), it is pretty clear that NVIDIA is going to give you higher performance in After Effects for your dollar. The exact performance you would see in your own projects will obviously vary based on exactly what you are doing, but in our testing the NVIDIA GeForce cards scored about 7% higher than their AMD Radeon equivalents.

Keep in mind that this comparison pits factory overclocked AMD Radeon Vega cards against stock NVIDIA GeForce cards. While this probably didn't affect the results by a large amount, we would estimate that stock AMD Radeon Vega cards would score 1-2% lower than what we saw in our testing.
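The weighting described above can be sketched in Python for anyone who wants to combine the four per-test scores themselves. Only the weights come from our methodology; the per-test scores below are hypothetical placeholders, not our measured results.

```python
# Weights from our methodology: RAM Preview (standard) counts for 40%
# of the overall score, and the other three tests 20% each.
WEIGHTS = {
    "ram_preview_standard": 0.40,
    "ram_preview_c4d": 0.20,
    "final_render_standard": 0.20,
    "final_render_c4d": 0.20,
}

def overall_score(scores):
    """Weighted average of the four per-test scores (0-100 scale)."""
    return sum(scores[test] * weight for test, weight in WEIGHTS.items())

# Hypothetical per-test scores for a single GPU (placeholders only):
gpu_scores = {
    "ram_preview_standard": 21.0,
    "ram_preview_c4d": 14.0,
    "final_render_standard": 33.0,
    "final_render_c4d": 18.0,
}

print(overall_score(gpu_scores))
```

Because the standard RAM Preview result carries double weight, a card that leads in that test will pull ahead in the overall score even if the other three tests are close.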

Tags: After Effects, Radeon, Vega, RX 580, GeForce, 1060, 1070, 1070 Ti, 1080, 1080Ti