Intel Core Ultra 7 265K vs 14700K vs 7800X3D vs 9700X – IT’S A BLOODBATH – Gaming Benchmarks

I have no other words to describe my results other than – it’s a bloodbath. I have checked and rechecked these figures. I retested with a fresh install of Windows 11 to be sure. Nothing I did made it better – in fact some things made it worse – so to the best of my knowledge, the figures you are about to see are accurate. I should thank Cyberpower for sending over this beautiful system, complete with a Ryzen 9700X and RX 7900 XTX – which is the GPU I’ve done all this testing with, at 1080p – I’ll have a full writeup of this system on my website soon if you want to know more, or there will be a link in the description you can check out. Actually I need to double thank Cyberpower, because they were also the ones to provide the 14700K and 7800X3D for me to test here, all at short notice, so thank you to them – definitely go check them out. Right, without further ado, let the bloodbath commence. 

Right out of the gate we start with CS2. This is at low settings, which is where you'll be if you want to play competitively, and somewhat as expected the 7800X3D absolutely runs away with it. It's over 100 FPS faster than the 14700K, and effectively 200 FPS higher than the 265K. While dropping 200 FPS sounds horrific, the 265K's 544 FPS average is still perfectly fine, but my god the AMD chips run away with it. The 7800X3D is 34.4% faster than the 265K here, and hell, even the 14700K is a full 16% faster than its newer counterpart. This is not a good sign.

Cyberpunk on medium settings is even worse. The Ultra 7 can't even break 200 FPS, when the 14700K, 9700X and 7800X3D all can. The 14700K is 12.4% faster than the 265K here, and the 7800X3D is a whopping 40% ahead. That's 70 FPS faster. 70! When you're starting from a 180 FPS average, adding 70 more is no small feat. It's not unnoticeable – especially with a high refresh rate monitor – and neither is the 45 FPS difference in 1% lows. That is a shocking difference to see. I'm honestly gobsmacked at this one.

Sadly for Intel, Shadow of the Tomb Raider isn’t any better. The 265K nets 241 FPS average, down from 270 FPS on the 14700K, and 331 FPS on the 7800X3D. That puts the 14700K 11.6% ahead of the 265K, and the 7800X3D is 37.5% ahead. At least at 240 FPS you’re unlikely to notice quite as much of a difference, but when even the 9700X is trouncing your brand new chip, and the 9800X3D is looming large, it’s safe to say you’re not in a good place.

Microsoft Flight Simulator on medium settings at least shows the Intel chips weren't hamstrung by my test setup, as the 14700K leads the 9700X by 7 FPS, but the 265K is still squarely at the back of the pack, and by a good margin. The 14700K is 20 FPS faster, or 17.8%, and the 7800X3D – which by now is looking like the clearest choice for a new mid-to-high-end gaming CPU – ran almost 40 FPS faster, or 33.5%. At least for this configuration, the 265K isn't looking like a great option.

Rainbow Six Siege was honestly the biggest surprise for me. This was at 1080p on medium settings, using DirectX 12 rather than DirectX 11, as DX12 is now the default (after Vulkan support was removed earlier this year, something I missed in my last video) and the more feature-rich option anyway. Both Intel chips struggled by comparison, offering around 400 FPS compared to 600+ for the AMD chips. With that said, the 7800X3D did its usual party trick of running away with the lead, netting 665 FPS on average, compared to just 402 FPS on the 265K. Now, much like CS2, this likely doesn't matter all that much in the real world, but my god this isn't good. The 14700K is "only" 7.3% ahead, but the 7800X3D? That's 65.4% ahead. Yeah, nearly two-thirds faster. That is just insane.

Hitman 3's built-in benchmark lets me break out the CPU and GPU data separately, and it's the CPU data you're seeing here, meaning this is as close to an isolated in-game CPU test as I can give you. It does show the 9700X's weakness relative to the rest, and the 7800X3D is at least a little closer, but it's still 16% faster than the 265K, and even the 14700K is 6% faster – the smallest margin I collected here. So even in what is essentially a best-case scenario for the new Ultra 7 part, it's still slower than the last-gen Intel part, and a decent bit behind AMD's soon-to-be last-generation 3D V-Cache part. Oh boy.

To round things out, Starfield is possibly the most promising result for the 265K, as it does just about beat the 9700X, and the spread is considerably tighter too: the 14700K is only 6% faster, and the 7800X3D is only 12% faster. Only. We are still talking about a pretty appreciable difference though – 144 FPS on the 265K versus 162 FPS on the 7800X3D is something you might actually feel – and of course it's an indication of how the part will age, too.

At least in my admittedly limited testing, the 14700K ended up 11% faster on average, and the 7800X3D averaged a whopping 34% more performance across these games. I honestly didn't believe these results, and I tested and retested to make sure. I did a BIOS update and retested, and only lost a bit of performance. This was all with APO disabled – none of Intel's pre-launch materials covered the 265K, so I had no idea what to expect or any recommendations, but others have only seen a couple of percent improvement in the supported titles, so that doesn't seem like a big deal. While I don't think it's exactly realistic for someone with a 7900 XTX to be playing something like Cyberpunk at 1080p, I opted for that as it works better for exposing the differences between CPUs – testing at 1440p or 4K will yield less dramatic results for sure, as the GPU becomes more of a limiting factor. I should also note that the 265K was tested with DDR5-6400 CL38 RAM – check out my RAM testing video to see why – but since 6400 is the maximum frequency you can run before you void the CPU's warranty, I think that's a fair choice.
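If you want to sanity-check the percentages I've been quoting, here's a quick sketch of the maths using three of the per-game averages from above (Shadow of the Tomb Raider, Siege and Starfield, 265K vs 7800X3D). The three-game selection and the simple arithmetic mean are just my illustration, not the exact method behind the 34% figure:

```python
# Per-game average FPS pulled from the results above: (265K, 7800X3D).
results = {
    "Shadow of the Tomb Raider": (241, 331),
    "Rainbow Six Siege": (402, 665),
    "Starfield": (144, 162),
}

def uplift_pct(base, other):
    """Percentage by which `other` is faster than `base`."""
    return (other / base - 1) * 100

for game, (u265k, x3d) in results.items():
    print(f"{game}: 7800X3D is {uplift_pct(u265k, x3d):.1f}% faster")

# Simple arithmetic mean of per-game uplifts; a geometric mean of the
# FPS ratios is arguably the fairer summary statistic for benchmarks.
mean_uplift = sum(uplift_pct(b, o) for b, o in results.values()) / len(results)
print(f"Mean uplift across these three games: {mean_uplift:.1f}%")
```

Averaging percentage uplifts like this weights big outliers (Siege's 65%) heavily, which is worth keeping in mind when comparing my 34% headline number to other reviewers' summaries.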

If you are wondering about power efficiency: being a one-very-broken-man-band and only having access to these chips for a couple of days, I wasn't able to test that yet, but if you want to see it, let me know in the comments and I'll see if I can get it done. At least on the productivity benchmarks side of things, I saw a pretty drastic drop in sustained power, from 264 watts on the 14700K (power limits are disabled by default, so that's what I tested with) down to just 184 watts on the 265K – roughly a 30% drop – with a corresponding temperature drop from 89°C to just 70°C, all for similar or slightly slower performance. Cinebench shows the 265K slightly ahead, but Blender has the 14700K ahead, especially in the longer render. Interestingly, the AMD chips are considerably behind here, thanks to their paltry 8 cores compared to the 20 in these Core i7 and Ultra 7 chips – sure, 12 of those are efficiency cores, but they are cores, so that's something.

In short then, this is a bloodbath for Intel. In my testing, at least, the 14700K is consistently the better choice, and that's really saying something. The 7800X3D is the clear victor here – only slightly overshadowed by the recently "announced" 9800X3D, and I say "announced" in quotes because AMD basically just said 'hey, this is gonna exist soon' to upstage Intel's launch even more. And that isn't even considering the platform: if you buy an AMD motherboard, a nice B650 for example, it should continue to get new CPUs for years at this point, whereas these new Z890 boards for Arrow Lake have no guaranteed support lifespan. AMD has committed to supporting AM5 until 2027, I believe, whereas Intel wouldn't say if this brand new socket will even last more than one chip cycle – and even if it does, Intel isn't known for long-term support.

And speaking of support, this all ignores Intel's recent crisis with 13th and 14th gen chips destroying themselves. The problem itself is one thing, but it took Intel months to figure it out, and they spent most of that time blaming everyone but themselves. They've only very recently agreed to extend warranties and make things even remotely right, which, compared to AMD's response during their own chip-melting issues, makes it clear which company I'd rather buy from.

Honestly, it looks like Intel is in a pretty sorry state right now – and that isn't a good thing. Their brand new chips seem dead on arrival – chips that Intel isn't even making themselves now, as all of the logic tiles are made by TSMC, with only the base tile being Intel-made – so they must be struggling on the manufacturing front too. For us consumers this is not good – a lack of real competition is what led Intel to this very state in the first place – so I really hope Intel can pull it out of the bag for the next generation. With that said, this sure looks like one to skip if you are building or upgrading your gaming PC.