There’s a new set of rumors around AMD’s upcoming Navi GPUs, though based on their contents and structure, we’d advise you take them with a substantial grain of salt. While they’re eye-opening, it’s not at all clear they’re accurate.
Hot Hardware reports on rumors that the Radeon RX 3080 XT will match the performance of the GeForce RTX 2070 but undercut that GPU on price, coming in at $330. This, of course, would match the old price on the GTX 1070, and might be read as AMD “restoring” the GPU market to its original, pre-RTX configuration. The GPU will reportedly be based on Navi 10 and ship with 8GB of GDDR6 memory. The RX 3080 XT is supposedly a 56 CU card; higher-end models with 60 CUs and 64 CUs will be reserved for the Navi 20 GPU family, which isn’t expected until 2020. TDP for the 56 CU Navi GPU is supposedly 195W.
I’m skeptical of this claim for several reasons. First, it implies that AMD made a decision to build two different GPUs around a very narrow difference in core count. A 56 CU Navi 10/RX 3080 XT would have 3,584 GPU cores. A 60 CU Navi 20 (hypothetically branded as the RX 3090) would have 3,840 cores. That’s just a 7 percent difference in core count. Even if AMD goes with a higher number of cores per CU (say, 128 instead of 64), the percentage gap between the two core counts won’t change. Nvidia did use separate physical GPUs for the RTX 2070 and RTX 2080, but the RTX 2080 has 1.27x as many GPU cores as the RTX 2070. It seems unlikely that AMD would build two completely different GPU designs solely on the basis of a 7 percent core difference.
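As a quick sanity check on that arithmetic (a sketch, not anything AMD has confirmed; GCN's traditional 64 stream processors per CU is the assumption here):

```python
# Core-count math behind the rumor. GCN has historically shipped
# 64 stream processors per CU; 128 is the hypothetical alternative
# mentioned above.

def core_count(cus: int, cores_per_cu: int = 64) -> int:
    """Total stream processors for a GPU with `cus` compute units."""
    return cus * cores_per_cu

navi10_cores = core_count(56)   # rumored RX 3080 XT
navi20_cores = core_count(60)   # hypothetical RX 3090

print(navi10_cores)                  # 3584
print(navi20_cores)                  # 3840
print(navi20_cores / navi10_cores)   # ~1.071 -- a 7 percent gap

# The ratio is independent of cores-per-CU, so doubling to 128 per CU
# changes the totals but not the percentage gap:
print(core_count(60, 128) / core_count(56, 128))  # still ~1.071
```

Whatever the per-CU organization, a 56-vs-60 CU split is a 7 percent gap, which is the crux of the problem.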
Next, there’s the question of TDP. Except for the Radeon Nano, which offered the same physical configuration as the Radeon Fury X but at a substantially lower TDP, GPU TDP typically holds constant or increases as you step up the stack. The Radeon RX 3090 is supposedly a 180W TDP card, whereas this year’s Radeon RX 3080 XT is a 195W TDP card. This could reflect the fact that Navi 20 might be built with EUV, but Navi 20’s TDPs probably aren’t even known yet, and neither is the question of whether it’s built on 7FF+ at TSMC. Even assuming the chip is built on TSMC’s 7nm EUV process, it isn’t clear that AMD would have silicon back to characterize the TDP range at this point in the development process. Assuming Navi 20’s 2020 target is accurate, it’s early to be hearing about formal TDPs.
The price banding on this proposed stack is also odd. If the RX 3080 XT is a $330 card, but the RX 3090 has only 7 percent more cores than the RX 3080 XT, then the only way to justify the roughly 1.3x price increase is going to be with a substantial clock leap. GPU pricing is typically commensurate with performance in these ranges, as this chart from Anandtech’s Turing coverage makes clear.
If AMD is going to slap a roughly 1.3x price increase on the RX 3090 (from $330 to $430), it’s going to have to deliver commensurately increased performance. A 7 percent core count increase isn’t going to cut it, which means a 1.2x – 1.3x clock jump (and clock speed increases may not deliver a perfectly linear performance improvement). Given that we know Navi is still based on GCN, taking bets on high core clocks is a risky proposition. One of the defining traits of GCN is that it is not a high-clock architecture.
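The back-of-the-envelope version of that argument, using the rumored $330 and $430 price points and assuming performance scales linearly with cores × clock (real-world scaling is usually somewhat worse, which only strengthens the point):

```python
# Rough sketch of the pricing argument: if price is supposed to track
# performance, how much extra clock speed would the RX 3090 need over
# the RX 3080 XT? Assumes perf ~ cores * clock, an optimistic model.

price_3080xt = 330.0
price_3090 = 430.0
core_ratio = 3840 / 3584          # 60 CUs vs. 56 CUs, ~1.07x

price_ratio = price_3090 / price_3080xt
required_clock_ratio = price_ratio / core_ratio

print(round(price_ratio, 2))           # 1.3
print(round(required_clock_ratio, 2))  # 1.22
```

A ~1.2x clock jump on a single generation of GCN would be unprecedented for the architecture, which is why the price banding looks suspect.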
Could AMD have finally managed to solve this problem with Navi and 7nm? Sure. But given that we’ve seen GCN fail to match Nvidia clocks from the Maxwell era (when the gap was relatively small), straight through Fury X, Polaris, Vega, and Vega 20 on 7nm, we’re going to have to see the gains to believe them. It’s much easier to imagine that AMD went wide with Navi, taking advantage of the die shrink to further increase core counts, than to picture the architecture suddenly gaining another 400-500MHz of clock. We know that AMD made gains on die size: Radeon VII is 331mm², compared with a 487mm² die for Radeon Vega 64. In previous conversations with the company, AMD engineers have indicated that the 4,096-core count on Fury X and Vega 64 was not an absolute, intractable limit, but partly a function of a desire to constrain die size and continue to fit HBM2 easily on a desired package. This doesn’t mean AMD automatically built a larger chip, but the 7nm die shrink theoretically gives it room to do so.
The issue of mismatched pricing and performance compounds with the supposed Radeon RX 3090 XT, which is 1.16x more expensive than the RX 3090 but offers only 10 percent more performance. This means the RX 3090 XT would be roughly 10 percent faster than the GeForce RTX 2080 at $500, but it also means the RX 3090 wouldn’t bring much in the way of additional performance to the table, though it would represent a substantial price cut over the Radeon VII. Our Radeon VII review benchmarks are shown below:
Last year, there was a persistent rumor going around that AMD would bring Navi 10 to market at $250, breaking the back of the $500 GeForce RTX 2070. Now, the price has jumped back to $330. Price is always the last thing set before a launch, which is why we knew that rumor was wrong back in December. Could $330 be the right target? Yes. But given that AMD will supposedly launch Navi 10 at E3, it’s also possible the company is still finalizing its prices. The wild rumors around AMD’s supposed plan to inflate core counts and slash per-core pricing on its third-generation Ryzen CPUs are inaccurate, as we’ve explained before. And these new rumors don’t agree with previous rumors on TDP or price. The “old” rumor around the RX 3080 (no XT) suggested a $250 card at a 150W TDP competing against the GTX 1080 / RTX 2070, not a $330, 195W GPU competing against the same cards. That doesn’t mean the new rumors are wrong, but clearly someone is. And these rumors don’t make great sense either.
The Question of Price
It also isn’t clear exactly how AMD will respond to Nvidia’s attempts to raise GPU prices in 2018-2019. On the one hand, enthusiasts would obviously love to see AMD restore the old stack and gut Nvidia’s pricing model. AMD has pulled this type of trick on Nvidia before: back when Team Green launched the GT200 family, AMD’s HD 4000 family was such a strong counter that Nvidia had to slash its launch pricing and introduce a new, faster variant of its second-highest-end GPU.
But there are risks to this strategy that AMD will be closely considering. Back in the GT200 era, the main feature separating AMD and Nvidia GPUs was PhysX. Nvidia is putting a much heavier push behind ray tracing than it ever did behind PhysX, and it is actively attempting to position the capability as the future of GPU rendering.
If AMD undercuts Nvidia’s GPU pricing and does so with GPUs that lack ray tracing, it could be read as a tacit admission that Nvidia has established ray tracing as a feature customers will pay more for. When AMD introduced Radeon VII, it deliberately didn’t price that GPU any lower than the RTX 2080, despite the fact that the Radeon VII completely lacks ray tracing. It is possible that the company will do something similar here, or choose to split the difference by pricing below the equivalent RTX GPU it intends to compete with, but not so low as to imply that Nvidia has properly priced in the value of ray tracing. Despite reports that the PS5 will feature ray tracing, we’ve heard nothing about Navi 10 supporting this feature. And AMD has said it wants to wait to introduce ray tracing until it can offer the capability from the top to the bottom of its stack. That could mean AMD is keeping quiet about ray tracing support on 7nm, or it could mean the company doesn’t intend to introduce the feature in 2019.
But as it stands, this rumor is, at best, incomplete. It implies an odd pricing structure that would require AMD to hit much higher clocks on GCN than it has ever demonstrated a capability to hit. The core counts also imply that AMD is relying heavily on efficiency gains to hit its performance targets, but efficiency gains in GPUs have been hard to come by of late. Vega was not, generally speaking, a large efficiency gain over previous versions of GCN. Could Navi change that? Yes. But historically, we’ve seen GPUs gain the most performance either by clock boosts (which GCN hasn’t been very good at) or core count increases (which this rumor implies have not occurred).
If this rumor is accurate, AMD has either substantially improved Navi’s innate efficiency compared with previous iterations of GCN, or it will content itself with slashing prices rather than driving performance higher, with top-end performance at $500 that would still fall below the RTX 2080 Ti (albeit at a vastly lower cost). The proposed price structure makes limited sense without massive clock increases to drive performance in the upper-tier products. And finally, it’s not clear why AMD would build two completely different chips for Navi 10 and Navi 20 if the difference between the two is just a 7 percent core count increase. That’s a much smaller gap than exists between the various Nvidia GPUs in their respective brackets.