Micron is planning an aggressive ramp of GDDR6 technology across the entire GPU market. The introduction of GDDR6 has significant implications for the mainstream PC graphics business, where GDDR5 remains the dominant memory architecture.
GDDR5 has had an exceptional run. First introduced in 2008, it still dominates the PC graphics industry below the $300 price point, where the bulk of GPUs are sold. All good things, however, must come to an end. With VR and 4K both pushing into the mainstream, it's time to boost memory bandwidth and capacity beyond what GDDR5 can provide. GDDR5X was a short-term solution for increasing bandwidth at the high end, but both AMD and Nvidia need a solution that scales up in bandwidth while keeping total costs down.
The diagram above (PDF) compares GDDR5X and GDDR6. While the data rate is the same between the two designs, GDDR6 splits the interface into two completely independent memory channels that can each read or write data as needed, which should improve overall memory efficiency. Access granularity has also improved, from 64 bytes to 32 bytes.
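To see why finer access granularity helps, consider how many fetched bytes go unused when a GPU requests less than a full access unit. Here's a minimal Python sketch of that effect; the 64-byte and 32-byte granularity figures come from the comparison above, while the request sizes are hypothetical:

```python
# Illustrative sketch (not from Micron's materials): smaller access
# granularity means less wasted bandwidth on small reads and writes.

def wasted_bytes(request_size: int, granularity: int) -> int:
    """Bytes fetched beyond what was requested, given a fixed access unit."""
    bursts = -(-request_size // granularity)  # ceiling division
    return bursts * granularity - request_size

for request in (8, 32, 48, 96):  # hypothetical GPU access sizes, in bytes
    g5x = wasted_bytes(request, 64)  # GDDR5X: 64-byte access granularity
    g6 = wasted_bytes(request, 32)   # GDDR6: 32-byte per-channel granularity
    print(f"{request:3d}-byte request: GDDR5X wastes {g5x:2d} B, GDDR6 wastes {g6:2d} B")
```

For small, scattered accesses, the coarser design can fetch more unwanted data than wanted data, so halving the access unit translates directly into better effective bandwidth.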
Just as GDDR5 delivered a substantial memory bandwidth improvement at every level compared with older GDDR3 cards, GDDR6 should boost performance of lower-end and midrange cards as well.
“As GPUs crank out more bandwidth over time, the memory needs to keep up or it’s going to get stuck,” Kristopher Kido, Micron’s global graphics memory director, told VentureBeat. “Our partners will decide how fast to run it. But it’s clear that performance has to keep increasing for deep learning, autonomous vehicles, and other workloads.”
Currently, GDDR6 is expected to launch with transfer rates between 12Gb/s and 16Gb/s per pin. That's significantly faster than GDDR5, though we don't know when AMD and Nvidia will adopt the new memory standard. AMD's Polaris family is GDDR5-based with no word of a successor, and Nvidia has been mum about its own 2018 refresh plans. As for HBM2, it appears stuck at the high end of the market. Neither AMD nor Nvidia has said whether it will use the technology for future high-end cards, and Nvidia has never deployed it in mainstream or high-end consumer GPUs either (high-end, in this case, meaning GPUs in the $400 to $600 range).
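To put those figures in perspective, peak memory bandwidth is roughly the bus width times the per-pin rate. The sketch below assumes a hypothetical 256-bit midrange bus and an 8Gb/s GDDR5 baseline; only the 12-16Gb/s GDDR6 range comes from the paragraph above:

```python
# Back-of-the-envelope peak bandwidth:
#   bus width (bits) x per-pin rate (Gb/s) / 8 bits-per-byte = GB/s

def peak_bandwidth_gbs(bus_width_bits: int, gbps_per_pin: float) -> float:
    """Theoretical peak memory bandwidth in GB/s."""
    return bus_width_bits * gbps_per_pin / 8

BUS = 256  # hypothetical midrange bus width, in bits
print(f"GDDR5  @  8 Gb/s: {peak_bandwidth_gbs(BUS, 8):4.0f} GB/s")
for rate in (12, 14, 16):
    print(f"GDDR6  @ {rate} Gb/s: {peak_bandwidth_gbs(BUS, rate):4.0f} GB/s")
```

On those assumptions, the same 256-bit bus jumps from 256GB/s with GDDR5 to 384-512GB/s with GDDR6, which is why the standard is attractive well below the flagship tier.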
When AMD switched to HBM, it justified the move by pointing to the difficulty of scaling GDDR5 to higher clock speeds and the increased power consumption that resulted. This paid off with Fury and Vega; by all accounts, both GPUs consume much less power than they would have if they'd used GDDR5 or GDDR5X. At the same time, however, GDDR6 may present a better overall profile for future designs, especially if HBM2 costs can't be brought under control.