You don’t need 16GB of VRAM. (9060 XT 8GB vs 16GB)
Here is the RX 9060 XT, and the RX 9060 XT, and in this video I want to answer the most important question – which is better, the 9060 XT, or the 9060 XT? In all seriousness, we are comparing these two cards, but besides this little sticker on the back, you’d really struggle to know that these were in any way different. The single difference between these cards is that this one has 16GB of VRAM, and this one has 8GB. Wait, no, this one is 8GB and this one is 16GB. Anyway, what we’re doing here is testing the cards out at 1080p, 1440p and 4K, and monitoring the FPS and VRAM usage, so let’s get into it. Let’s start with the FPS data.
I’ll kick things off with CS2 on low settings. At 1080p, man this thing rips: 532 and 540 FPS for the two models. That’s excellent, and with such a small difference between the two it’s hard to argue the 16GB card is truly better here. At 1440p that ever-so-slight difference flips, with the 8GB card a couple of FPS ahead. Again, we’re looking at over 500 FPS here, so both are more than good enough. At 4K there is, again, a very slight difference between the two. The 16GB card actually finds the extra performance here, although again it’s only a couple of FPS out of 350. Interestingly, it’s the 1% lows that take a more significant (although far from catastrophic) hit on the 8GB card. They’re 30 FPS lower though, which is the first notable difference we’ve seen so far.
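Quick aside: if you ever want to crunch average FPS and 1% lows from your own frame-time captures, here’s a rough sketch. It uses one common definition of 1% lows – the average FPS over the slowest 1% of frames – which may not match exactly how my capture tool (or yours) calculates it.

```python
# Rough sketch: average FPS and 1% lows from a list of frame times in ms.
# Assumes the "average FPS over the slowest 1% of frames" definition of 1% lows;
# benchmarking tools differ slightly, so treat this as illustrative.

def fps_stats(frame_times_ms):
    n = len(frame_times_ms)
    avg_fps = 1000 * n / sum(frame_times_ms)         # overall average FPS
    slowest = sorted(frame_times_ms, reverse=True)   # longest frames first
    worst = slowest[:max(1, n // 100)]               # the slowest 1% of frames
    low_1pct_fps = 1000 * len(worst) / sum(worst)
    return avg_fps, low_1pct_fps

# Made-up example: mostly ~2 ms frames (~500 FPS) with a few 5 ms stutters.
frames = [2.0] * 990 + [5.0] * 10
print(fps_stats(frames))  # high average, noticeably lower 1% lows
```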
Cyberpunk 2077 at 1080p on medium settings is another tie. The 8GB card ekes out a 2 FPS lead, but at 200 FPS that’s literally a 1% change, so I’d call that well within the margin of error. At 1440p there is, again, a 2 FPS difference, this time in the 16GB card’s favour. Again, this is basically identical. And at 4K, you guessed it, it’s the same! There’s just 2 FPS, or just under a 2% difference. I’m sure if you ran this test a thousand times per card you’d get functionally the same average on both. There just isn’t much performance difference between them, at least right now.
Shadow of the Tomb Raider has a slightly bigger difference at 1080p on high settings, but it still isn’t exactly amazing. It’s a 3.3% improvement from the 8 gig to the 16 gig card, but that isn’t going to be anywhere near noticeable. It’s 216 FPS vs 223 FPS. Hardly game changing. At 1440p again there’s just a 2 FPS gap, or 1.2 percent. Even the 1% lows are within 1 percent of each other. And at 4K… yeahhhhhh, just 0.6 FPS average between ’em. These cards offer effectively the same performance.
Rainbow Six Siege on medium? Yep. 2 FPS. I don’t know about you, but I’m getting bored of this, so I’m gonna whizz through the results until we get to an interesting one. 1440p? Literally a 0.2 percent difference, or 2.1 percent in the 1% lows. 4K? A 0.4 percent difference on average. The same. Hitman 3 on medium? Well, at 1080p it’s A TWO FPS GAP. Ahhhhhhhhhhhhhhh. 1440p is even closer, with a slight advantage to the 8 gig card in the 1 percent lows. At 4K it’s the same, although it’s worth noting that we’re only getting a 24 FPS average here. That’ll be important later.
Starfield is the next genuinely interesting set of results. At 1080p it isn’t interesting, just, well, 2 FPS between them, but at 1440p there is a genuinely noticeable difference: 128 FPS versus 145 FPS average, and a 10 FPS gap in the 1 percent lows. That’s a 12.6 percent improvement from 8 to 16 gigs, or 15.2 percent if you look at the 1 percent lows. And that continues at 4K, where the gap is a little smaller, but still potentially noticeable at 92.5 FPS versus 99.5 FPS, or 7.5 percent. It sure looks like Starfield, on low settings anyway, is CPU limited at 1080p; at 1440p and 4K the bottleneck shifts back to the GPU, and the difference shows.
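If you want to sanity-check those percentages, the maths is simple – here’s the 4K case worked through. The 1440p figures were presumably calculated from the unrounded averages, so they won’t reproduce exactly from the rounded numbers quoted above.

```python
# Percentage improvement from the 8GB to the 16GB card, using the rounded 4K
# Starfield averages quoted above.
fps_8gb = 92.5
fps_16gb = 99.5
improvement = (fps_16gb - fps_8gb) / fps_8gb * 100
print(f"{improvement:.1f}% faster")  # ~7.6% with these rounded inputs, quoted as 7.5% above
```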
Helldivers 2 on medium is also somewhat interesting, showing a 5 percent difference even at 1080p – that’s just shy of 10 FPS more on the 16GB card, which is probably a worthwhile difference. The gap actually shrinks at 1440p to, you guessed it, 2 FPS. Amazing. At 4K there’s at least a slightly more significant 4.4 percent difference – that’s 3.5 FPS on average – so, as I said, slight.
For the sake of completeness, here’s the 1080p average performance across both cards and all games. The 16 gig card is 1.5 percent faster across the board, or in other words functionally identical. At 1440p it’s just 1.2 percent faster across all the games, which, again, means it’s identical. At 4K it’s a 3 FPS gap, or 2.2 percent. Interestingly it’s the 1 percent lows that take the biggest hit, at 5.9 percent, or 6 FPS. So yeah, these cards, at these settings, regardless of resolution, offer functionally the same performance. Now let’s look at VRAM usage.
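One quick caveat on those overall numbers: exactly how you average “X percent faster across all games” depends on the method. For the curious, a common, sensible approach is to take the geometric mean of the per-game FPS ratios – here’s a quick sketch with placeholder numbers just to show the mechanics (these are not the measured results above).

```python
# One way to aggregate "X% faster overall": the geometric mean of per-game FPS
# ratios (16GB / 8GB). The numbers below are placeholders to show the mechanics,
# not the measured results above.
import math

fps_8gb  = {"game_a": 200.0, "game_b": 120.0, "game_c": 95.0}   # hypothetical
fps_16gb = {"game_a": 202.0, "game_b": 123.0, "game_c": 97.0}   # hypothetical

ratios = [fps_16gb[g] / fps_8gb[g] for g in fps_8gb]
geo_mean = math.prod(ratios) ** (1 / len(ratios))
print(f"16GB card is ~{(geo_mean - 1) * 100:.1f}% faster on average")
```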
I’m going to condense things here, showing all three resolutions at the same time, with the 16 gig card in orange and the 8 gig in green, and showing both the average and peak VRAM usage. This is from HWiNFO’s D3D Dedicated Memory usage stat, which looks to be the most accurate measurement on AMD cards. So, starting with CS2, as you’d expect we see a steady progression through the resolutions, with 1080p peaking at just shy of 6 GB, 1440p only a touch higher at 6.2 GB, and only 4K being noticeably higher at 7 GB. None of these are at or over the 8GB limit, although this is at low settings. I’ve tested this way because these are the settings I actually use when playing these games.
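On the measurement side, if you want to log VRAM yourself and you happen to be on Linux with an AMD card, the amdgpu driver exposes a similar counter through sysfs – here’s a rough polling sketch. The card0 path is an assumption for a typical single-GPU setup, and this is the driver’s own counter, so don’t expect it to match HWiNFO’s D3D stat exactly.

```python
# Rough sketch: poll VRAM usage on a Linux/amdgpu system and report average/peak.
# Assumes a single GPU at card0; paths differ on other setups, and this is the
# driver's counter, not HWiNFO's "D3D Dedicated Memory" stat.
import time

VRAM_USED = "/sys/class/drm/card0/device/mem_info_vram_used"  # value in bytes

def log_vram(duration_s=60, interval_s=0.5):
    samples = []
    end = time.time() + duration_s
    while time.time() < end:
        with open(VRAM_USED) as f:
            samples.append(int(f.read()))
        time.sleep(interval_s)
    to_gb = 1024 ** 3
    print(f"avg: {sum(samples) / len(samples) / to_gb:.2f} GB, "
          f"peak: {max(samples) / to_gb:.2f} GB")

if __name__ == "__main__":
    log_vram()
```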
Surprisingly, Cyberpunk used LESS VRAM than CS2, with 1080p using around 5.2 to 5.5 GB, 1440p between 5.5 and 5.7 GB, and 4K using up to 6.4 GB. One thing you’ll notice is that the 16GB card actually, consistently, uses slightly less VRAM across the board. I can’t really explain this one, but in basically every game, in every test, it’s lower. If anyone has an explanation for that, please do let me know in the comments below. Helldivers 2 has a similar progression, although at 4K it spikes higher to around 7 GB; on average though it was more like 6 GB even at 4K, so again not exactly near the limit of the 8GB card.
Hitman though, this is interesting. See, at 4K we get within a hair of the 8 gig limit, at 7.9 GB on the 16 gig card and 7.8 GB on the 8 gig card. Clearly we aren’t OVER the limit here, but at anything higher than medium you certainly would be. Except, remember how we were only getting 24 FPS average on either card? Yeah, so while you COULD change the settings to find a reason to need 16GB of VRAM right now, you’re already at unplayable performance levels, so why would you? You’d turn the settings DOWN here, at 4K anyway, which should decrease VRAM usage. Interesting…
Moving on to Siege, as expected this uses a lot less VRAM – just 6 GB or so at 4K. Again, that’s at medium settings, so you could likely pump things up a bit more, but you’d struggle to find a need for more VRAM here, and for a game like Siege, latency and FPS trump visual quality for sure. As for Shadow of the Tomb Raider, that’s another one that brushes the limit at 7.8 GB on the 8 gig card, although VRAM usage is actually pretty high across the board, with 6.7 GB of usage at 1080p. That’s likely thanks to running at high settings, but the fact that even at 4K on high settings we don’t actually exceed the 8 gig cap – even on the 16 gig card – is again interesting.
Finally for Starfield, I was expecting much higher VRAM utilisation, but surprisingly, no. Even at 4K, admittedly on low settings to get any amount of playable performance, we’re using just 6.7 GB of VRAM on the 16 gig card, or 6.3 GB on the 8 gig card. The gap between the average and maximum is also really small on this one – likely because I’m only on New Atlantis during the benchmarking. I’m sure if I was playing loading simulator properly it’d bounce around a bit more – but then again, Helldivers was all on the same mission, planet and area too, so who knows.
So, is the 16GB 9060 XT better? Well, kinda. On performance alone it’s hard to argue it’s a worthwhile investment, as you’re looking at no more than a percent or two of difference. In terms of VRAM usage, sure, at 4K on higher settings you are likely to run close to the 8 gig VRAM limit, but… these aren’t 4K gaming cards. You’ll struggle to get decent performance at 4K on anything above medium settings. Hitman is a particularly bad experience at 4K on medium here, but in most of these games, without upscaling tech, you’ll be limited to 100 FPS or lower on medium to low settings, and you’ll dip below that as you go up in settings. The argument for the 16GB card gets even harder when you factor in price, because you can get the 8GB card for a full £100 less than the 16GB card – in fact these Gigabyte ones are currently £110 apart – which is about 40 percent more money for, at most, two percent more performance. Damn, that’s a hard pill to swallow, and that’s why I don’t think you need 16GB of VRAM.
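Just to spell that value maths out, here’s the back-of-the-envelope version. The prices below are placeholders – they’ll have moved by the time you watch this – so plug in whatever the two cards actually cost in your region.

```python
# Back-of-the-envelope value check. Prices are hypothetical placeholders; the
# gap quoted above is roughly 100-110 GBP, so substitute current prices.
price_8gb = 270.0   # hypothetical GBP
price_16gb = 380.0  # hypothetical GBP, about 110 more
perf_gain = 0.02    # ~2% faster at most in these tests

extra_cost = (price_16gb - price_8gb) / price_8gb * 100
print(f"~{extra_cost:.0f}% more money for ~{perf_gain * 100:.0f}% more performance")
# With these numbers: ~41% more money for ~2% more performance.
```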
Now I know that most people will stop watching there because that opinion is UTTERLY INSANE, because both Steves say you very much do, and it might surprise you to hear that I agree with them. The problem here is the context, which is immediately stripped from the conversation into dumb soundbites like “I don’t think you need 16GB of VRAM”, so let me add that context back in.

For THESE CARDS, at this point in time, with usable settings, you do not need the 16GB version. You don’t – the benchmarks I’ve shown here prove that. But for a higher end GPU, a 70 class for example, yeah, 8GB would be a limitation, and this is why context is so important. The GPU that the VRAM is tied to matters more than the VRAM amount, because that determines what most people will use it for. If you’re buying a 60 class card, I sincerely hope you aren’t gaming at 4K, so who cares if a handful of games at 4K on ultra settings need 9 or 10 gig of VRAM? No one is actually playing like that – and hey, if you are, there’s a 16GB card sitting right here for you.

At 1440p, which is where I’d expect most people who buy these cards to actually play, none of the games, even the higher settings tests, got close to the 8 gig limit. Of course, this will change over time, and I wouldn’t be surprised if in a couple of years 8GB isn’t enough even for 1440p, but that’s kind of the point of saving £100 now, isn’t it? Buy the cheaper one that performs identically, and by the time you’re actually running out of VRAM you can get a two or three generation newer card – especially because with this class of card you’re much more likely to run out of GPU horsepower than VRAM. If the price difference were considerably smaller, I would be much more open to suggesting everyone buy the 16GB card – because, duh, it’s the better card – but for 40-odd percent more money? God no. The futureproofing aspect of 16GB of VRAM is in no way worth that much. But, to be clear, for the right card, yes, absolutely, you need 16GB of VRAM (or more!), but for these midrange cards? Nah.
