AMD has slowly eaten away at Intel's position in the CPU market, with the 5000 series looking to offer more performance for less money. GPUs, however, have always been more of a grey area, with AMD having no products to match the big, expensive 80 or Titan series from Nvidia. But with the RX 6000 series, AMD has decided to engage in full-on, bloody trench warfare, and they made sure to pack more artillery and ammunition than the Death Korps of Krieg.
The base card is called the RX 6800. It's a lot slimmer, with a more nimble design than its competitor, although the PCI power connectors are reversed (the tab is on the other side). The cards look very much the same, with the number of Compute Units and Ray Accelerators rising as you move up the range, starting at 60 of each on the review sample. All three cards are fitted with 16GB of GDDR6 memory and a 128MB Infinity Cache, and PCIe 4.0 is of course supported. It's a subtle card with a very subtly lit logo. I like it a lot: no RGB, no massive presence in your system, just sleek and modern.
The Infinity Cache acts as a bandwidth amplifier: despite the RX 6800 "only" having a 256-bit memory interface, which normally maxes out at 512 GB/s, the Infinity Cache in reality makes for 1664 GB/s of effective bandwidth. That puts the 256-bit bus well ahead of a 384-bit bus delivering 936 GB/s, while keeping latency and power use to an absolute minimum.
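The quoted figures are easy to sanity-check. A minimal back-of-envelope sketch, using the bus widths and AMD's own effective-bandwidth claim from above (the function name and the 16/19.5 Gbps memory data rates are my assumptions, typical for GDDR6/GDDR6X):

```python
def raw_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Peak memory bandwidth in GB/s: bytes transferred per pin-clock times data rate."""
    return bus_width_bits / 8 * data_rate_gbps

# RX 6800: 256-bit bus with 16 Gbps GDDR6
raw = raw_bandwidth_gbs(256, 16)        # 512 GB/s
# A 384-bit bus with 19.5 Gbps GDDR6X for comparison
wide = raw_bandwidth_gbs(384, 19.5)     # 936 GB/s

# AMD quotes 1664 GB/s effective once the Infinity Cache is in play:
amplification = 1664 / raw              # 3.25x over the raw bus
print(raw, wide, amplification)
```

In other words, the cache would have to serve roughly three out of every four memory requests for the claim to hold, which is the whole point of making it 128MB.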
As AMD has brought a lot of new things to the table, this review has been cut down, as it would otherwise be a two-hour read. But in essence, AMD has managed to get more performance per watt, and has improved the chip structure at the same time.
And while many may think that Nvidia invented ray tracing, it was invented and in use before Nvidia was even founded. AMD has another take on it: rather than using cores that do nothing else, they have implemented Ray Accelerators, one per Compute Unit. But it amounts to the same thing: the card has dedicated hardware to do the calculations, instead of relying on software solutions.
It has triple axial fans and a very sturdy metal outer shroud. The fans are impressively quiet: while SPL can be measured at up to 40 dB, the frequency of the noise is very low, and so is the perceived loudness. It's kept to a standard 2-slot design, with a die-cast aluminium backplate. The fans do their job, staying off entirely at idle, and kept the card at 77 degrees under load; a more aggressive fan curve could have done even better.
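Why 40 dB SPL can still read as quiet: by the textbook rule of thumb, perceived loudness roughly doubles per +10 dB, so small SPL gaps matter less than they look on paper. A tiny sketch of that rule (my own illustration, not a measurement from this review, and it ignores the frequency effect mentioned above):

```python
def loudness_ratio(delta_db):
    """Approximate perceived-loudness ratio for a given SPL difference,
    using the common rule of thumb: +10 dB sounds about twice as loud."""
    return 2 ** (delta_db / 10)

# A 40 dB card versus a 30 dB card: roughly twice as loud.
print(loudness_ratio(10))           # 2.0
# But versus a 37 dB card, only about 23% louder.
print(round(loudness_ratio(3), 2))  # 1.23
```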
The review sample is the most basic card, the RX 6800, at £529.99, while the Nvidia RTX 3070 is priced at £469. But good luck finding one. The bigger cards, on the other hand, carry a progressively bigger price difference to their directly competing Nvidia cards.
The RX 6800 is, like the other cards, built on the RDNA 2 architecture, which is also behind a lot of new games and the next-gen consoles. This personally fills me with hope of vastly better console ports, and a much shorter wait for PC versions of console games.
Amongst the more interesting things, the RX 6000 series supports DirectX 12 Ultimate, which makes ray tracing and Variable Rate Shading possible. It also supports new features like Radeon Anti-Lag, which ensures synchronisation between GPU and CPU.
Smart Access Memory is one of the tools you might be interested in if going full AMD: the CPU can access all of the VRAM at once, where it is normally severely limited. It requires PCIe 4.0 and might be copied by others in the future. It does, however, require a 500-series chipset motherboard, a 5000 series CPU and a 6000 series graphics card, and as we didn't have the CPU needed in time to test and conclude on this, along with time for manual overclocking, there are performance gains to be had beyond the numbers presented in this review.
Oh, and then there is AMD FidelityFX, which is basically every other trick in the book: contrast-adaptive sharpening with built-in upscaling, ambient occlusion, variable rate shading (downgrading rendering in areas that are not in focus to improve performance, part of DirectX 12) and screen space reflections, as well as denoising and HDR optimisation.
The driver was provided by AMD. Radeon Adrenalin has been revamped, and it seems the Ryzen Master template was taken into consideration in terms of options: it has been made a lot more visual, with plenty of graphs and numbers, not to mention a lot of tweaks and features. We didn't have time to play with all of them, but things like one-click overclocking and the OSD worked really well and were extremely easy to use. It's a black-and-red blend of the Steam in-game overlay and Nvidia ShadowPlay, but with an enormous amount of information, options and usability. It also monitors individual games, as well as overall performance. It doesn't have the same sleek look as a modern mouse or keyboard driver, but it does ooze custom options and a lot of potential for tinkering. It is now also a tool for livestreaming and recording, making it a one-stop piece of software.
Overclocking was a mixed bag of goodies. The test system was our standard 3900X on an X570 motherboard with 16 GB of 3600 MHz CL16 RAM; the 4400 MHz RAM and Ryzen 5000 series CPU we needed were not available, and we will update the review when those numbers are in. We did get one more FPS in 4K, and 7035 in Time Spy Extreme instead of 6935. Not huge improvements, but we will have to re-run those tests once Smart Access Memory is installed and enabled, because that is the intended use. Godfall did provide a 4.2% increase.
We compared it with the Asus TUF RTX 3070, bearing in mind that some titles are AMD-optimised, and that is borderline cheating, as optimised actually means something this time.
Godfall in 4K gave 187 FPS on the RX 6800, while the RTX 3070 managed 111 FPS. The same goes for Dirt 5 at 1440p: 76.9 FPS for the RTX 3070 against 96.0 for the RX 6800. Battlefield V, however, gave a 75 to 73 FPS victory to the RTX 3070 in 4K, although something was off, and that needs to be retested. The same goes for Control in a pure 4K render, where the RTX scored 33 FPS against 26 FPS for the RX 6800.
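For readers who want to turn these raw frame rates into the percentage gaps quoted later in the review, a minimal helper (the function name is my own; the FPS figures come from the benchmarks above):

```python
def uplift_pct(fps_a, fps_b):
    """Percentage advantage of fps_a over fps_b."""
    return (fps_a - fps_b) / fps_b * 100

# Godfall, 4K: RX 6800 at 187 FPS vs RTX 3070 at 111 FPS
godfall_4k = uplift_pct(187, 111)     # ~68.5% in AMD's favour
# Dirt 5, 1440p: 96.0 FPS vs 76.9 FPS
dirt5_1440p = uplift_pct(96.0, 76.9)  # ~24.8%
print(round(godfall_4k, 1), round(dirt5_1440p, 1))
```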
The RX 6800 fared a lot better in Doom Eternal with a 145 FPS score, also maintaining a much more stable FPS, as the RTX 3070 only got 124 in the best-case scenario. 145 FPS also puts the RX 6800 beside the RTX 3080, which had dips to 143 when overclocked, and even lower with only the factory overclock. The synthetic benchmarks are where the 16GB of VRAM really got to stretch its legs.
Time Spy: 13873
Time Spy Extreme: 6875
Fire Strike: 28868
Fire Strike Extreme: 18726
Fire Strike Ultra: 10323
While Fire Strike showed a 12% increase, the others were around 5%, some less. The real surprise was the Fire Strike Ultra score of 10323, which not only beat the RTX 3070's 8706 but got really close to the 10970 scored by the RTX 3080.
Heaven had much the same story to tell:
1080p: 4542 (+4.5%)
Some games are still just better suited to Nvidia. Total War: Warhammer II was a small loss at both 1080p and 1440p, though double the amount of RAM would normally help in 4K, where the RX 6800 won by a whole 2 FPS. The same goes for Metro Exodus, which relies heavily on DLSS to deliver a playable frame rate: here the RTX 3070 beat the RX 6800 by 61.76 to 54.94 FPS in 1440p, and 34.19 to 30.67 FPS in 4K.
Assassin's Creed Odyssey was worse, with the RX 6800 scoring 65/64 FPS in 1080p/1440p and the RTX 3070 scoring 78/70. For some reason, both cards refused to do 4K. Far Cry 5 was the same, with the RTX 3070 taking frame rates over 100 FPS and the RX 6800 losing out by a few FPS.
The story is the opposite in the AMD-beloved The Division 2, where we got 150/105/55 FPS in 1080p/1440p/4K, an increase of 11% / 10.5% / 14.5%. This pattern repeated in Middle-earth: Shadow of War with 144/120/76 FPS, performance increases of 8.2% / 7.1% / 2.7%, and Shadow of the Tomb Raider, giving 116/108/66 FPS, increases of 12.6% / 11.3% / 17.85%.
Hitman 2 gave 83.15 FPS in 1440p and 63.85 FPS in 4K, a 5.4% and 6.6% increase over the RTX 3070.
Red Dead Redemption 2 gave 101.35/83.70/53.84 FPS in 1080p/1440p/4K, increasing frame rates over the RTX 3070 by 7.5% / 15% / 5.3%. The 1440p performance increase was not expected.
As a consumer, this is fantastic. While the world is still sorely lacking stock, and buying any next-gen GPU or even CPU is extremely difficult, AMD has now upped their game immensely and is giving Nvidia a run for their money. This also means that all three companies will do their best to break the stalemate, bringing even better and cheaper products to market in the years to follow, while trying to assert their dominance in the eyes of consumers. Total PC world war has broken out, and it's a fantastic time to be a gamer. But for now, we have an "entry-level" AMD graphics card that does 4K, rivals and beats the direct Nvidia competition, and is actually possible to buy without having to sell a kidney and at least one lung.