14600K RAM SPEED TESTING FOR GAMING – 6000MT/s vs 5600 vs 5200 vs 4800

This is Crucial’s new DDR5 Pro RAM, built specifically for overclocking. There’s no RGB – the focus is all on performance. The kit I have here runs at 6000MT/s, and offers impressive 36-38-38-80 primary timings. It’ll also work with both the latest Intel and AMD chips, and supports both XMP 3.0 and AMD EXPO. Crucial were kind enough to send over two 32GB kits for me to test with here, so I should probably explain what we’re testing! The speed your RAM runs at is a pretty important factor in how games perform. How quickly your CPU can access data stored in RAM can be the difference between a smooth gaming experience and a more stilted one. Some chips are much more sensitive to memory speed than others, so let’s see how this i5-14600K does with this high-speed RAM! It is worth noting that Intel considers enabling XMP here – or just manually setting your RAM speed above 5600MT/s – a warranty-voiding action. Whether they’d actually enforce that is up for debate, but by the letter of the law, as it were, running memory faster than 5600MT/s is something Intel considers overclocking, and will therefore void your warranty.

With that said, this is a review sample chip, so I’ve already voided any warranty it may have come with – let’s get testing! To keep things simple, I’ll be using the same kit between the tests, and making changes in the BIOS to set speeds. I’ll be using the primary XMP profile for the 6000MT/s test and the secondary 5600MT/s profile for the rest, changing only the transfer speed down to 5200 and 4800 for the slower runs. The timings are staying the same – and specifically I’m using XMP mode two, which applies the full XMP profile, rather than letting the Asus Z790 Strix E board I’m using decide some of the timings. I know this isn’t a perfect methodology, but it’s the best I can do, so let’s take a look! Oh, and I’m also testing with a 3070 because that’s the top-end card I’ve got, and testing at 1440p as that’s what I expect most gamers with a 14600K and 3070 would be gaming at.
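
One thing worth spelling out about the “same timings, lower speed” approach: with the CAS latency pinned, the absolute latency in nanoseconds actually gets worse as the transfer rate drops. Here’s a back-of-the-envelope Python sketch – my own illustration, not part of the test setup, and it assumes the 6000MT/s profile’s CL36 carries across all four speeds:

```python
# Back-of-the-envelope: first-word latency in nanoseconds for a fixed
# CAS latency across the four transfer rates tested.
# Formula: latency_ns = 2000 * CL / (transfer rate in MT/s)
CL = 36  # CAS latency from this kit's 6000MT/s XMP profile

for mts in (6000, 5600, 5200, 4800):
    latency_ns = 2000 * CL / mts
    print(f"{mts} MT/s @ CL{CL}: {latency_ns:.1f} ns")

# 6000 MT/s @ CL36: 12.0 ns
# 5600 MT/s @ CL36: 12.9 ns
# 5200 MT/s @ CL36: 13.8 ns
# 4800 MT/s @ CL36: 15.0 ns
```

So the slower configurations here aren’t just moving less data per second – with the timings held constant, each access also takes longer in absolute terms.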

Right, with all that said, what are the results? Well, in CS2 on low settings, there is a fairly marked difference in performance. While the differences in the averages are pretty significant – upwards of 35 FPS between 4800 and 6000 – the 1% and 0.1% lows are what interest me the most. Going from 178 FPS to 191 FPS in the 1% lows is actually pretty significant, and going from 132 to 146 FPS in the 0.1% lows is even more so. As I somewhat expected, in a more CPU-limited game like this, memory speed is more likely to be a factor in performance, and one of the big catches is smoothness – slower RAM can bring more hitching and stuttering. It seems like, if you want to play CS on a modern i5, higher-speed RAM is well worth it.
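
A quick aside on those 1% and 0.1% low figures, for anyone unfamiliar: they’re derived from the captured frame times, not reported directly by the game. Here’s a minimal Python sketch of one common percentile-based definition – capture tools differ slightly in how they compute this, and the frame times below are randomly generated stand-ins, not my captures:

```python
import numpy as np

def low_fps(frame_times_ms, pct):
    """Turn the slowest `pct` percent of frame times into an FPS figure.

    One common definition: take the (100 - pct)th percentile of the
    frame times (e.g. the 99th percentile for '1% lows') and invert it.
    """
    worst = np.percentile(frame_times_ms, 100 - pct)
    return 1000.0 / worst

# Randomly generated frame times (ms) standing in for a real capture
times = np.random.default_rng(0).gamma(shape=20, scale=0.2, size=10_000)
print(f"average FPS: {1000.0 / times.mean():.0f}")
print(f"1% low:      {low_fps(times, 1):.0f} FPS")
print(f"0.1% low:    {low_fps(times, 0.1):.0f} FPS")
```

The key takeaway is that the lows describe the worst moments of a run – which is exactly why they track that hitching and stuttering better than the average does.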

Cyberpunk, however, is a different kettle of fish entirely. All four of the setups performed functionally the same – at least in the averages. Every result is under 1 FPS apart, making for one of the tightest spreads in my testing. The 1% lows stray slightly – by a whole 1 FPS – and there isn’t much in it in the 0.1% lows either. My suspicion here is that Cyberpunk is a rather GPU-intensive game, and while I’m sure there are some niche circumstances where you’d see a bigger difference, in this setup the CPU – and specifically the memory speed – just doesn’t have that much of an impact. It turns out that’s the same for Shadow of the Tomb Raider too, where every result was within HALF an FPS on average, and just as close in the 1% and 0.1% lows. This game really doesn’t care about CPU or RAM speed.

Fortnite does provide an admittedly slight difference, going from 152 FPS average to an astonishing… 155 FPS average. The 1% lows do seem to suggest the 6000MT/s setup offers a smoother experience, though there isn’t all that much in it between them. Microsoft Flight Simulator, on the other hand, shows a bit more of a significant difference, where running at 4800MT/s nets just shy of 100 FPS, versus the full 6000MT/s giving 105. It is only a 6 FPS spread, but that isn’t bad for a touch faster RAM! The 1% lows are equally improved, although I should note that there is functionally no difference between 5600 and 6000 here – in the averages, 1% lows, or 0.1% lows.

Hitman is by far the most interesting test here though, as its built-in benchmark provides both CPU and GPU frame time data, and I’m showing you the CPU side here. When we focus exclusively on the CPU data, we can see how big of a difference is possible. You’re looking at an 8 FPS gain on average, going from 156 FPS to 164 FPS, and more importantly there seems to be a pretty linear relationship between RAM speed and performance. Of course you can go higher – there are 7600MT/s kits available already – but within this spread the scaling looks linear. The same goes for the 1% lows, where at 6000MT/s you get over 100 FPS, versus 95, 94 or 93 in descending order. Even the 0.1% lows show the same trend, albeit with a smaller overall spread of around 6 FPS.
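
Since that scaling looks roughly linear, here’s a quick Python sketch of checking it with a least-squares fit over the four 1% low figures quoted above – treating “over 100” as simply 100, so the slope and the cheeky 7600MT/s extrapolation are illustrative only:

```python
import numpy as np

speeds = np.array([4800, 5200, 5600, 6000])  # MT/s
lows_1pct = np.array([93, 94, 95, 100])      # Hitman CPU 1% lows (FPS)

# Least-squares line: FPS ~ slope * MT/s + intercept
slope, intercept = np.polyfit(speeds, lows_1pct, 1)
print(f"~{slope * 1000:.1f} FPS gained per extra 1000 MT/s")

# Naive extrapolation to a 7600MT/s kit - assumes the trend holds,
# which it may well not outside the tested range.
print(f"predicted 1% low at 7600 MT/s: {slope * 7600 + intercept:.0f} FPS")
```

That works out to roughly 5-6 FPS per 1000 MT/s over this range – small, but consistent, which is what makes Hitman the tidiest dataset of the bunch.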

Rainbow Six Siege, much like CS2, is an esports title that runs at hundreds of frames per second. Because it’s less intensive on the graphics card, changes to the CPU – and in this case the memory – can offer more sizable differences. That’s clear from the data, where at 6000MT/s you get 418 FPS average, versus 405 FPS at 4800MT/s. You aren’t likely to notice that difference, and honestly, looking at the 1% and 0.1% numbers, you aren’t all that likely to notice the differences there either. Still, it illustrates the point – in high-framerate titles like this, higher RAM speed can buy you more performance.

Starfield brings us back to the “not much going on” camp, where at least in the average and 1% low results, there really isn’t all that much of a difference. The only noticeable jump is in the 0.1% lows, where we’ve gained almost 10 FPS going from 4800 to 6000MT/s, making for a smoother experience for sure. But really, there isn’t that much in it.

If we look at the average of all of these results, you can see there is a slight advantage to having faster RAM, although there isn’t much in it. The average does improve – by almost 10 FPS going from 4800 to 6000 – although there’s only 2 FPS in it between 5600 and 6000, so not much there. The 1% lows also improve, to the tune of 6 FPS across these eight games, but really that isn’t much. The same goes for the 0.1% lows too. To view this another way, here is the percentage improvement over the 4800 results – specifically for the average FPS. The best case here is around 4% more performance, which, while nothing to sniff at, isn’t the realistic comparison – you aren’t choosing between 4800 and 6000MT/s RAM, you’re choosing between 5600 and 6000, and there you get functionally no usable improvement. On paper there is a difference, but it isn’t one you’d ever notice in practice.
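
For anyone who wants to reproduce that percentage view from their own benchmarks, it’s just each average divided by the 4800MT/s baseline. A trivial Python sketch – the FPS values here are placeholders shaped like the overall averages described above (roughly 10 FPS and 4% between the extremes), not my raw data:

```python
# Percentage uplift over the 4800MT/s baseline. The FPS values below are
# placeholders consistent with the overall averages described above
# (~10 FPS and ~4% between 4800 and 6000), not the raw benchmark data.
avg_fps = {4800: 240, 5200: 244, 5600: 248, 6000: 250}

baseline = avg_fps[4800]
for speed in sorted(avg_fps):
    uplift = (avg_fps[speed] / baseline - 1) * 100
    print(f"{speed} MT/s: {avg_fps[speed]} FPS avg ({uplift:+.1f}% vs 4800)")
```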

Rounding up then, it looks like, at least with the configuration I’ve been using here, there isn’t all that much of a difference. It is worth getting higher-speed RAM, although it seems the benefits are somewhat limited once you go over Intel’s officially supported 5600MT/s – although if I’m being honest, this doesn’t surprise me much. Your CPU matters an awful lot less than your graphics card when it comes to gaming – sure, a faster CPU will often give you more performance (even out of the same GPU), but it’s quite rare that a single CPU swap would net you a truly massive performance swing. Swapping your GPU up a tier or two, on the other hand, will almost certainly net you a bunch more performance in a really noticeable way. So it stands to reason that RAM speed won’t be a seismic shift in performance either. That doesn’t mean you shouldn’t get some fast RAM like this – it generally doesn’t cost much more, if anything, to get this sort of speed and latency, and there is performance on the table here – so yeah, get some fast RAM, but know that it isn’t going to rock your world.