Intel Core Ultra 7 265K Memory Testing – CUDIMM vs UDIMM, 6400 MT/s vs 5600 MT/s
If you want to know more about these new Arrow Lake chips, check out my other video linked in the cards, otherwise let's get straight into this. This is Intel's new Core Ultra 7 265K, a 20 core, 20 thread chip that lacks hyperthreading, but gains a tiled layout, an AI accelerator, and most importantly, newly designed performance AND efficiency cores all made with the help of TSMC – a first for Intel's mainline desktop chips. In this video I'm doing something a little different: since all the big channels you'll likely get notifications for first will be thoroughly testing these chips against all their rivals, I thought I'd do some practical testing with memory speeds, especially since the new CUDIMM standard is here – video in the cards for that one too. I don't have any crazy high clock speed kits just yet, although get subscribed if you want to see me test those here too, but I do have a 6400 MT/s CL38 kit AND the Crucial CUDIMM 6400 CL52, with the 6400 kit also having a secondary XMP profile that runs at 6000 MT/s but with a lower CL36 latency, so it should be pretty interesting to see how both clock speed and latency affect performance. I'm also using an Acer RX 7800 XT for testing, partly because I think that's a realistic card to pair with an i7 or U7 class chip, and partly because it's pretty much the highest end card I have access to right now. Right, that's the preamble, let's look at some results.
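Before the numbers, a quick back-of-the-envelope way to compare these profiles is first-word latency: the CAS latency in cycles converted to nanoseconds. DDR5 transfers data twice per clock, so one cycle lasts 2000 / (MT/s) nanoseconds. Here's a minimal sketch in Python (the helper name is mine, not from any benchmarking tool):

```python
# First-word latency in nanoseconds: CL cycles times the cycle time.
# DDR5 transfers twice per clock, so clock (MHz) = MT/s / 2, and one
# cycle therefore lasts 2000 / MT/s nanoseconds.
def first_word_latency_ns(mt_s: int, cas_latency: int) -> float:
    return cas_latency * 2000 / mt_s

# The four configurations tested in this video:
kits = {
    "6400 CL38 (XMP)":    (6400, 38),
    "6400 CL52 (CUDIMM)": (6400, 52),
    "6000 CL36 (XMP)":    (6000, 36),
    "5600 CL36":          (5600, 36),
}

for name, (speed, cl) in kits.items():
    print(f"{name}: {first_word_latency_ns(speed, cl):.2f} ns")
```

Run that and 6400 CL38 lands at about 11.9 ns, 6000 CL36 at 12.0 ns, 5600 CL36 at about 12.9 ns, and the CUDIMM's CL52 at about 16.3 ns – which goes a long way toward explaining why the first two trade blows while the CUDIMM kit lags in latency-sensitive tests despite matching the 6400 MT/s transfer rate.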
I won't spend long on the productivity results, but there are some interesting things here, so let's take a look, starting with Cinebench R23. In single threaded work it seems like latency is the biggest factor: the slowest result came from the CUDIMM CL52 sticks, followed by the 6400 CL38 profile, which tied with the 5600 CL36 mode, showing the tradeoff between latency and frequency nicely. In multithreaded work it's a pretty clear victory for frequency, with the catch that latency still has an effect, as the 6400 CL38 kit is fastest, then there's a drop to the CUDIMM modules with their considerably looser latency, then 6000 CL36. In Blender there's a massive difference between 5600 and even 6000, which I found surprising – at least in the Gooseberry render anyway. Only the 6400 CL38 kit dropped a second off the BMW render, and 10 seconds off the Gooseberry render. Interestingly, the 6000 CL36 and 6400 CL52 runs are basically tied, again showing the frequency-latency tradeoff nicely. Oh, and you'll be happy to know that these things aren't nuclear reactors when it comes to heat anymore. With a 360 mm AIO and the default 250 watt power limit in place, even under full load the chip ran at around 70°C, with a peak of 75°C. If this were a 14700K it would be on fire by now. Sorry, I mean at 90°C or higher.
As for gaming, I tested at both 1080p and 1440p, but 1440p really just squashes all the interesting effects, so I'll stick with 1080p for now. CS2 seems to really like frequency, because both 6400 kits lead the rest despite the dreadful timings on the CUDIMM kit. The difference isn't massive – save for the 5600 runs – and even then we're only talking about 6.4% from best to worst, so it's not a big deal in the grand scheme of things, especially when we're talking about right around 400 FPS, where 20 FPS isn't exactly much.
Cyberpunk, on the other hand, seems a bit more conventional, with the balance of timings and frequency being the best mix. The 6400 CL38 kit did the best – only by 1 FPS in both average and 1% lows, mind you – with the CUDIMM kit trailing 6 FPS behind the lead. There isn't much in this, just 3.6% from best to worst, so realistically you can use any of these kits and be pretty happy with the performance you'll get, although if you do want every last frame, a low latency, high speed kit is your best bet.
Shadow of the Tomb Raider shows the same trend, although this time with a touch more spread. The CUDIMM kit comes in last with 219 FPS, versus the OC kit's 230 FPS, a 4.9% gap. The 1% lows suffer just as much, with 154 FPS on the CUDIMM kit but 168 FPS with the OC kit – an appreciable difference that you might actually feel in game. The difference between 6400 and 6000 – especially with those tighter timings – isn't anywhere near as big, at just a 4 FPS gap, and interestingly even at 5600 there's very little performance drop, only really showing up in slower 1% lows, and again that's only a 3 FPS difference, so not a big deal.
Microsoft Flight Simulator is pretty interesting. As you'd probably expect by now, the spread isn't massive – only 6% top to bottom and under 10 FPS – and while frequency helps, the 6000 CL36 profile hit the sweet spot, reliably outperforming the rest by 1 to 2 FPS on each run. In the grand scheme of things it doesn't seem to matter too much which kit you get, considering the CUDIMM kit is right up there, but running a balanced profile does seem to matter at least a little.
As for Rainbow Six Siege – which now uses Vulkan by default – it's a really tight race here too. There's only 2.5% between top and bottom, with the clock-driver CUDIMM coming in last, but only by around 10 FPS – which, when you're talking about 400 FPS, really isn't significant. It seems even for Siege your RAM kit doesn't matter too much.
Hitman 3's built in benchmark lets me split out the CPU and GPU data, and of course it's the CPU data you're seeing here. As expected, the balance of frequency and timings makes the biggest difference, with the CUDIMM kit 8.1% behind the 6400 CL38 kit – a heftier chunk of performance missing. You're looking at a touch over 200 FPS and 97 FPS in the 1% lows, compared to 186 FPS on average and 88 FPS in the 1% lows. There's also a decent little jump from 6000 to 6400 even with the looser timings on the 6400 profile, meaning at least for Hitman 3, you'll get the best performance from a high speed, relatively low latency kit.
As for Starfield, that one is incredibly close, with 6400, 6000 and 5600 all producing essentially the same result, and only the CUDIMM kit running a tiny bit behind. It's 4.2% slower than the 6000 run, or 6 FPS slower on average and 4 FPS slower in the 1% lows. There really isn't much in it, and it seems that at least for running around New Atlantis, as long as you get a reasonable kit of RAM you won't see much of a difference.
As you might expect, the CPU in general doesn't have the biggest effect on gaming performance anyway, so swapping RAM kits isn't likely to give you tens-of-percent performance shifts, but the fact we saw high single digit swings in some games is definitely interesting. There isn't much in it between faster speeds and lower latency – just 1.1% across all games on average between 6400 CL38 and 6000 CL36 – so at least at non-warranty-voiding speeds you're looking at very little difference there. When you drop to slower speeds you start trading away a little more performance, and especially with the CUDIMM kit and its very loose timings, that's where you'll generally see a little less performance despite the matching high frequency. So, if you're buying Arrow Lake, any reasonably high speed, low latency kit is a good choice.
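For anyone curious how a "1.1% across all games on average" figure comes together, here's a minimal sketch. I'm assuming a geometric mean of per-game FPS ratios, which may not be the exact method used here, and the numbers below are made-up placeholders rather than my measured data:

```python
from math import prod

# Hypothetical per-game average FPS for two kits (kit A, kit B).
# These values are placeholders for illustration, NOT measured results.
results = {
    "CS2":       (410, 405),
    "Cyberpunk": (170, 168),
    "Hitman 3":  (202, 196),
}

# Geometric mean of the per-game ratios, expressed as a percentage gap.
ratios = [a / b for a, b in results.values()]
geomean = prod(ratios) ** (1 / len(ratios))
print(f"Kit A leads kit B by {(geomean - 1) * 100:.1f}% on average")
```

A geometric mean keeps one outlier game from dominating the summary, which is why it's a common choice for aggregating benchmark results like these.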