AMD Radeon 6700 XT Review: A Great GPU at a Tough Price
Earlier this week, we examined the Radeon RX 6700 XT’s IPC and power consumption improvements against its predecessor, the RDNA-based 5700 XT. Our tests revealed that the Radeon 6700 XT is significantly more power-efficient than the Radeon 5700 XT when both cards are measured at 1.85GHz. Now we’re taking a fuller look at the RX 6700 XT as compared with the RTX 3070, as well as Nvidia’s previous-generation RTX 2080 and the 5700 XT.
The 6700 XT is based on the Navi 22 GPU core. Its performance against the 5700 XT has been of particular interest, as it’s a near-identical replacement for that GPU as far as core resource allocation. Our tests earlier this week showed that RDNA2 is only slightly faster than RDNA when measured clock-for-clock, but that AMD’s L3 cache and smaller memory bus have paid huge dividends in power efficiency. Here’s how the 6700 XT stacks up against the 5700 XT, as well as the competition from Team Green:
The relationship between the RTX 2080 and RTX 3070 (mostly) mirrors the relationship between the RX 5700 XT and 6700 XT. The RTX 3070 has far more cores than the RTX 2080, and its tensor core and ray tracing performance are higher overall. But the two GPUs share the same number of texture mapping units and ray tracing cores. Memory bandwidth on both GPUs is the same at 448GB/s, they run at nearly identical clock speeds, and they both have 8GB frame buffers.
AMD’s $479 positioning on the RX 6700 XT looks pretty optimistic at first glance. GPUs from different families can’t be directly compared on the basis of core counts, ROPs, or TMUs, but more of these things still tends to be better, and the RTX 3070 packs more of everything the RX 6700 XT has to offer (except VRAM and clock). The 6700 XT’s base clock of 2325MHz is no less than 1.54x the base clock of the RTX 3070, and the 6700 XT offers 12GB of RAM, compared with 8GB on every other card in this comparison. We’ll run some tests today to gauge how much this additional VRAM matters.
Our RTX 3070 GPU is an MSI Gaming X Trio — we reviewed this card last year if you’re looking for more model-specific information.
Test Setup and Configuration
We’re switching to a new graphing engine here at ExtremeTech, so let us know what you think of the new design when you check it out below. The graph below shows our results for four video cards. You can select or de-select which cards you want to see by clicking on the color buttons next to each card.
Game results were combined for the three Total War: Troy benchmark maps (Battle, Campaign, and Siege), leading to the “Combined” score. Similarly, results from Hitman 2’s Miami and Mumbai maps were averaged to produce a single result. The gaps between the cards were proportional across these maps, so this averaging does not distort the overall comparison in those titles.
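The per-title combining described above is plain averaging. Here’s a minimal sketch: the map names come from the article, but the frame rates are hypothetical placeholders, not our test data.

```python
# Sketch of the per-title averaging described above. Map names match the
# article; the fps numbers below are made-up placeholders, not test results.

def combined_score(per_map_fps):
    """Average per-map frame rates into a single 'Combined' score."""
    return sum(per_map_fps.values()) / len(per_map_fps)

# Hypothetical results for one card:
troy = {"Battle": 62.0, "Campaign": 71.0, "Siege": 54.0}   # Total War: Troy
hitman2 = {"Miami": 118.0, "Mumbai": 104.0}                # Hitman 2

print(round(combined_score(troy), 1))      # one combined number per title
print(round(combined_score(hitman2), 1))
```

Because the gaps between cards were proportional from map to map, a plain average like this preserves each card’s relative standing.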
This presentation method prevents us from giving per-game detail settings in the graph body, so we’ll cover those below:
Ashes of the Singularity: Escalation: Crazy Detail, DX12.
Assassin’s Creed: Origins: Ultra Detail, DX11.
Borderlands 3: Ultra detail, DX12.
Deus Ex: Mankind Divided: Very High Detail, 4x MSAA, DX12.
Far Cry 5: Ultra Detail, High Detail Textures enabled, DX11.
Godfall (RT-only): We only tested Godfall with ray tracing enabled, in Epic Detail. Grats to the Godfall developers for coming up with a credible name for a preset above “Ultra” that isn’t “Extreme.”
Hitman 2 Combined: Ultra detail, but performance measured by “GPU” frame rate reported via the benchmarking tool. This maintains continuity with the older Hitman results, which were reported the same way. Miami and Mumbai test results combined. Tested in DX12.
Metro Exodus: Tested at Extreme Detail, with Hairworks and Advanced Physics disabled. Extreme Detail activates 2xSSAA, effectively rendering the game at 4K, 5K, and 8K when testing 1080p, 1440p, and 4K. Tested in DX12.
Metro Exodus (RT): Ultra Detail, with Ultra ray tracing enabled. The only difference between Ultra and Extreme Detail in Metro Exodus is that Extreme enables 2x SSAA, effectively rendering the game at double the resolution. Hairworks and Advanced physics disabled.
Shadow of the Tomb Raider: Tested at High Detail, with SMAAT2x enabled. Uses DX12.
Strange Brigade: Ultra Detail, Vulkan.
Total War: Troy Combined: Ultra detail, DX12.
Total War: Warhammer II: Ultra detail, Skaven benchmark, DX12.
Watch Dogs Legion (RT-Only): Tested at Ultra detail with ultra ray tracing enabled.
Our test settings are aggressive and put a heavy load on GPUs, especially Metro Exodus and Deus Ex: Mankind Divided. Testing these GPUs at non-playable speeds can help expose differences in the underlying architectures.
All games were tested using an AMD Ryzen 9 5900X on an MSI X570 Godlike equipped with 32GB of DDR4-3200 RAM. AMD’s Radeon RX 6700 XT launch driver was used for all AMD GPUs, and Nvidia’s 461.92 driver handled the NV cards. Smart Access Memory / Resizable BAR was enabled for the Radeon 6700 XT but disabled for the 5700 XT, RTX 2080, and RTX 3070.
Performance Results and Analysis
We’ll talk about rasterization results first, then move on to ray tracing.
In 1080p, in aggregate, the RTX 2080 is 15 percent faster than the RX 5700 XT and the 6700 XT is 7 percent faster than the RTX 2080. The RTX 3070, in turn, is 11 percent faster than the RX 6700 XT. AMD refers to the 6700 XT as an enthusiast’s 1440p GPU and the data once again bears out this positioning — in 1440p the 6700 XT widens its lead over Turing to 14 percent. The RTX 3070 is still faster overall, but by just 6 percent. The gaps widen again in 4K, with the RTX 2080 beating the 5700 XT by 20 percent, the 6700 XT once again 7 percent faster than the RTX 2080, and the RTX 3070 beating the 6700 XT by 16 percent.
There are some benchmarks where the 6700 XT pulls ahead of the 5700 XT by a larger-than-expected margin in 1440p, including Ashes of the Singularity: Escalation, Shadow of the Tomb Raider, Strange Brigade, and Total War: Troy, particularly TWT. Total War: Troy is an interesting example of a game that responds extremely well to AMD’s L3 cache in one specific resolution. Performance craters in 4K, but it craters for both AMD GPUs.
Our ray tracing results look much as we’d expect, but the 4K data, specifically, is worth your attention:
There’s some evidence to suggest that Nvidia’s decision to equip the RTX 3070 with just 8GB of VRAM really could be a limiting factor in games going forward. There’s evidence of this in both Godfall and Watch Dogs Legion, particularly WDL.
1080p and 1440p show similar patterns of performance between the three cards. Ultra detail is extraordinarily hard on both the RTX 2080 and the 6700 XT, with or without ray tracing enabled. Once we hit 4K, however, things change. Both the RTX 2080 and 3070 fall off a cliff in Godfall, where the RX 6700 XT outperforms them by over 3x.
In Watch Dogs Legion, the RTX 3070 is no less than 2.91x as fast as the 6700 XT in 1440p, but loses to it in 4K. While none of the GPUs turns in a playable frame rate, the wholesale collapse of the Nvidia cards at high resolution is indicative of one thing: an insufficient VRAM buffer.
This is concerning in the case of the RTX 3070, which really ought to have enough horsepower to step up to 4K with ray tracing enabled, but can’t do so in titles you can already buy today. While overall ray tracing performance on the RTX 3070 is higher than the 6700 XT in both 1080p and 1440p (and by a significant margin), the RX 6700 XT makes a potent argument in favor of its own 12GB VRAM buffer at 4K and scores a few points in the process.
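To illustrate why an 8GB buffer gets tight at 4K with ray tracing, here’s a hypothetical back-of-envelope calculation. Every size below is an illustrative assumption, not a measured figure from Godfall or Watch Dogs Legion.

```python
# Rough, hypothetical sketch of 4K render-target memory. All sizes here are
# illustrative assumptions, not measurements from any game in this review.

def mib(n_bytes):
    """Convert a byte count to mebibytes."""
    return n_bytes / (1024 ** 2)

width, height = 3840, 2160        # 4K render resolution
bytes_per_pixel = 8               # e.g. an RGBA16F render target

one_target = width * height * bytes_per_pixel
gbuffer = 6 * one_target          # assume a six-target G-buffer pass

print(f"one RGBA16F target: {mib(one_target):.0f} MiB")   # ~63 MiB
print(f"six-target G-buffer: {mib(gbuffer):.0f} MiB")     # ~380 MiB
```

Render targets alone are only a slice of 8GB; it’s the high-resolution textures, ray tracing acceleration structures, and denoiser history buffers stacked on top that can push total residency past the buffer at 4K.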
Power consumption was measured in Metro Exodus and Metro Last Light Redux on the third loop of a three-loop benchmark run. I threw Last Light Redux back into the mix when I noticed Exodus stressed the 5700 XT a bit differently. I’ve also shown the low-power result from running the Radeon 6700 XT at 1.85GHz (to match the 5700 XT). We discussed these results more in our IPC comparison earlier this week.
There’s a much more efficient chip hiding inside the 6700 XT. Matched clock for clock against the 5700 XT, the 6700 XT is quite power efficient. At 1.85GHz, it’s actually more efficient in terms of power consumption per frame drawn than the RTX 3070. Cranking up the clock to compete with Nvidia reduces the 6700 XT’s efficiency, and the RTX 3070 is more efficient when both GPUs are run at full speed.
Compared with the 5700 XT, the 6700 XT offers a 1.20x increase in performance in Exodus at a slight increase in power consumption. This is a bit below its 1440p uplift overall, where it delivers 1.34x the 5700 XT’s performance. That’s a very significant degree of uplift for a GPU that’s a near-mirror of its predecessor, and it speaks to AMD’s work optimizing RDNA2’s clock scheme and overall efficiency.
Here’s one more tidbit. The Radeon VII (not shown) hits around 1.8GHz maximum and draws about 402W in Metro Last Light. The 5700 XT pulls about 350W in this test at a 1.85GHz clock, while the 6700 XT draws 267W at 1.85GHz. Performance between all three GPUs is similar at this clock, with the 6700 XT leading modestly. This means AMD has drawn down its 7nm power consumption from 402W at the launch of Radeon VII to a hypothetical 267W today, at least in this specific test. That data point doesn’t have any direct bearing on our review, since the Radeon 6700 XT is not a 1.85GHz card, but it helps illustrate the long arc of AMD’s RDNA2 efficiency gains.
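Running the article’s power figures through the percentage math makes the scale of that drawdown explicit; the wattages are the ones quoted above, and the arithmetic is the only thing added here.

```python
# Power draw in Metro Last Light at a matched ~1.85GHz clock, per the
# figures quoted in the article. Only the percentage math is new here.

radeon_vii_w = 402   # Radeon VII
rx_5700xt_w = 350    # Radeon RX 5700 XT
rx_6700xt_w = 267    # Radeon RX 6700 XT (clocked down to 1.85GHz)

def reduction(old_w, new_w):
    """Percent reduction in power draw going from old_w to new_w."""
    return 100 * (old_w - new_w) / old_w

print(f"Radeon VII -> 6700 XT: {reduction(radeon_vii_w, rx_6700xt_w):.0f}% less power")
print(f"5700 XT -> 6700 XT: {reduction(rx_5700xt_w, rx_6700xt_w):.0f}% less power")
```

At similar performance, that works out to roughly a third less power than the Radeon VII needed at launch, in this one test.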
The 6700 XT has some solid strong points. AMD’s efforts to bring some Ryzen DNA to RDNA2 have clearly paid off. There is no sign of untoward memory bandwidth pressure on the 6700 XT except in Metro Exodus and Deus Ex: Mankind Divided — two titles we benchmark in configurations that put egregious pressure on memory bandwidth. There’s no sign of a problem in any title at settings that yield playable frame rates. AMD may be marketing this GPU as a 1440p solution, but it’s perfectly capable of driving 4K frame rates.
If you read other reviews around the net, you’ll find that the relative gap between the 6700 XT and RTX 3070 ranges from 0 to 10 percent depending on which titles reviewers tested. In our test suite, the 6700 XT never quite matches the performance of the RTX 3070, even at its target 1440p resolution. The RTX 3070 theoretically costs just 4 percent more than the 6700 XT, and it’s more than 4 percent faster at every resolution.
Normally, this would be an open-and-shut case, but the high-end ray tracing hit we saw from both the RTX 2080 and RTX 3070 gives us pause. AMD recommends the 6700 XT be used for 1080p ray tracing, so it’s not clear how much ray tracing fans will be doing in 4K anyway. But in some circumstances, the 8GB buffers on the RTX 3070 and RTX 2080 are not large enough for ultra detail settings with RT ladled on top.
The strongest argument in favor of the 6700 XT is the GPU’s 12GB VRAM buffer. The additional VRAM capacity clearly benefits the 6700 XT in Godfall at 4K in a way that isn’t explained by it being an AMD-friendly title, and it allows the 6700 XT to eke out a narrow win in Watch Dogs Legion after losing the first two resolutions. It is possible that the 6700 XT is a rare example of a GPU whose larger VRAM capacity will deliver meaningfully better scaling down the line when cards with less VRAM are forced to disable features like ray tracing at lower resolutions.
I’m really hoping that the next Big Navi GPUs AMD announces find a way to take advantage of the card’s power efficiency rather than relying so heavily on clock. If the Radeon 6700 follows the 5700’s lead, it’ll feature 36 CUs instead of 40 and a reduced clock. It would be nice to see AMD keep more CUs at lower clocks to play up the power consumption angle; wider, slower GPUs tend to be more power-efficient than narrower, higher-clocked GPUs.
If the market were currently normal, I’d argue that the RTX 3070 is the better value if you replace your GPU fairly often or game at 1440p and below, while the 6700 XT might be a better option if you’re concerned about the VRAM issue longer-term. I always tend to weight the here-and-now more heavily than the long-term future of any feature, and the RTX 3070’s generally faster performance in both ray tracing and rasterization more than justifies its small additional cost.
A $30-$50 price cut would do the 6700 XT a world of good. This is closer to where the card ought to be priced, given its overall competitive performance against the RTX 3070. At $429, the 6700 XT would be an easy recommendation for anyone wanting to save a bit of cash over the RTX 3070 or to step up from an older AMD or Nvidia GPU.
But market prices aren’t normal and they aren’t expected to be normal until 2022, making this talk of hypothetical price comparisons a bit silly. The actual best GPU you can buy right now is the GPU you can get for something approaching MSRP, the Radeon 6700 XT very much included. With six-year-old cards like the R9 390X going for over $400 on eBay, a $479 price tag on this latest GPU, should you ever see one, is an absolute steal.