Unfinished Genius or Fundamentally Flawed? Intel i9-12900K Alder Lake Review
They finally did it! Intel moved off of their 14nm++++++++++ process node, redesigned their cores AND EVEN ADDED A WHOLE OTHER CPU for good measure. This is such a significant moment for Intel – and for the industry in general – so expect this video to be a long one, and expect countless follow-up videos testing everything I can think of, including the i5 that I wasn’t able to test for this video. That one will be up in the coming days, so make sure you are subscribed with notifications on because you don’t want to miss any of this!
This is the i9-12900K, Intel’s first mainstream chip built on the Intel 7 process node – the one that used to be called 10nm Enhanced SuperFin but was renamed both to differentiate it from their other 10nm nodes and to align with TSMC and Samsung’s “7nm” nodes, despite them being very different. A shrink in the process node means more transistors in the same area and lower power consumption, so naturally Intel’s approach is to shove in essentially a whole other CPU.
That’s what the “hybrid design” is: you get two CPUs for the price of one. OK, technically the price of one plus about 25%, but it’s not double. What you get is actually nicely displayed on this plaque they sent us reviewers, so let me show you on there. These large blocks are the performance cores – “P cores” as they call them. These are like the ones you got in an 11900K, although they’ve been redesigned with a fair few changes – if you want to know more about that, the E cores, and this whole hybrid design stuff, check out the video I did recently in the cards above. So that bit is normal, but now look down here. See these clusters of four smaller blocks? Those are the efficiency cores – or “E cores”. Those don’t support hyperthreading, run at lower clock speeds and are designed to be, well, efficient.
So with all these new cores, unsurprisingly there’s a new socket these fit into. The chips are now rectangular, with 1700 pads on the back, up from 1200 last gen. That means a new chipset too, called Z690. I’ll have a video on those boards soon as well, but it’s safe to say they’ve got a load of new features. The biggest change is support for DDR5 RAM – the newest iteration of memory which, as always, runs at higher frequencies but looser timings, and costs an arm and a leg. Interestingly, Alder Lake CPUs like this actually support both DDR4 and DDR5 fully, so you will find some motherboards on sale that take DDR4, like Asus’ TUF D4. To be clear, you can’t run DDR4 and DDR5 at the same time in the same board, or put the wrong type in the wrong slot. The slots are physically different, so DDR4 won’t fit in a DDR5 slot, and vice versa.
You’ve also got support for PCIe Gen 5 – I know, we only just got Gen 4 and we’re already on Gen 5! Well, hold your horses. Most motherboards, including this premium Z690 Hero I’ve got, only run PCIe Gen 5 to the top x16 slot. None of the M.2 slots support it, none of the other PCIe slots do, just the top one. It’s also worth pointing out that Gen 4 is already pointless for the vast majority of people for both GPUs and storage, so technically supporting Gen 5 isn’t a big selling point, and I’d expect that by the time it is a selling point these CPUs will be irrelevant anyway.
Let me give you a quick spec rundown on this i9 – it has 8 P cores which can boost up to 5.2GHz, two clusters of 4 E cores that can hit up to 3.9GHz, 24 total threads, 14MB of total L2 cache and 30MB of L3, DDR5-4800 or DDR4-3200 RAM support (officially anyway, faster works but is considered overclocking) and a 125W TDP – although Intel now quote the “Maximum Turbo Power” which is 241W.
All of these new features – the hybrid cores, the new architecture, the new version of Windows you really do want to use with these – add complexity, and with complexity comes bugs. Lots of them. Issues with Windows 11 randomly tanking performance, BIOS issues like I’ve had which mean you can’t use XMP or even set memory timings without reducing performance, and if you do use Windows 10, get ready for some programs getting locked to the E cores unless you use Task Manager (or a tool like Process Lasso) to set the process priority higher. Take all the results you see today with a pinch of salt, because I can almost guarantee they’ll be wildly different in a month’s time. With that said, I actually had the most consistent results in Windows 10, so that’s what all my results are based on.
So, how does it perform? In short, it’s a beast! The new core design means that in Cinebench R23 this new 12900K is 23% faster in single threaded work than both the last gen i9-11900K and AMD’s Ryzen 5900X. That’s a significant leap, and it carries over to all-core workloads too: thanks to that and the 8 extra E cores, it offers 93% more performance than the 11900K, or about 27.5% more than the 5900X. That’s pretty incredible – almost double the multithreaded performance over last gen! While I don’t have a 5950X to test against, looking at publicly available results it seems like the 12900K would pretty much tie it in all-core workloads.
The story is the same in the BMW scene in Blender: it’s almost exactly twice as fast as the 11900K, or about 25% faster than the 5900X – and in the Adobe suite… it runs away with it. In Premiere it scores just shy of 1000 points, 960 specifically, and the closest to that is the 5900X at 874, almost 100 points less. The last gen i9? That’s the slowest of the lot, beaten even by the 10th gen i9 with its two extra cores. In After Effects there’s no question which is the fastest – the new chip is 13% faster – and it’s the same story in Photoshop, where it’s 19% better than the 11900K and 17% better than the 5900X.
Of course, the bit you are all here for – we can’t skip over the gaming results. Testing at 1080p with an RTX 3080, generally speaking the new i9 comes out on top. Watch Dogs: Legion shows the biggest lead, running 22% faster than the 5900X at medium settings. The last gen i9 matches the 5900X, with only the 10th gen i9 running ever-so-slightly slower.
Cyberpunk also shows a healthy lead on medium settings, running at 144FPS versus around 120FPS for the rest of the pack, or 20% faster, and a sizeable improvement in the 1% low figures too going from around 70FPS to more like 95FPS.
Some games weren’t too bothered, like Fortnite at high settings: the 12900K is at the top of the chart, but its lead is just 3%, and at 260-270FPS that’s a paper victory for sure. Some games actually swing in Ryzen’s favour, like CSGO, where again it’s very much a paper victory for Ryzen – although it’s worth noting the improvement from 11th gen to 12th here, around 27%, is nothing to turn your nose up at. Then there is Microsoft Flight Simulator. For some reason, the 5900X comes out on top here with an 18% lead over the Intel chips. This is something I verified, and it happened on both Windows 10 and 11.
I did also test at 1440p, but as you’d expect any differences that were present at 1080p are smoothed out by the GPU having to take up more of the load than the CPU, so while I’ll include the results in the written article I’ll leave them out here.
One of the interesting things here is that you can disable some or all of the E cores in the BIOS, meaning you can see the performance offered exclusively by the performance cores – so of course I did just that! Compared to the 11900K, the P cores are 23% faster in Cinebench R20 single threaded. That’s a significant advantage, and again it carries over to multi-threaded, where the P cores are 40% faster than the last gen chip. A 40% all-core improvement would already be incredible to see, but when you add the E cores on top it’s just crazy.
Interestingly, in the Adobe suite the P cores on their own score lower in Premiere than with all cores enabled. That makes sense – fewer cores means rendering or playing back footage has less to work with – and it’s still 14% better than the 11900K. What surprised me is the After Effects and Photoshop scores. In After Effects the 12900K improved its score by 3%, presumably because the P cores had more power budget to work with and could turbo harder for longer, and the same happened in Photoshop, with a 3.6% higher score with the E cores disabled.
In gaming, CSGO saw a significant improvement of 7%, meaning it outperformed the reigning champ, the 5900X. Cyberpunk also saw an improvement – not by much, but extending its lead over the rest of the pack by a further 5FPS – and Microsoft Flight also saw a significant rise, bringing it to almost the same level as the 5900X. Watch Dogs saw a slight uptick of only 3FPS, and Fortnite didn’t notice, offering the same performance as stock.
While you are able to disable all the E cores, you can’t disable all the P cores – at least one needs to stay running regardless of how many E cores are active. That would be a problem for measuring the E cores on their own, if not for Windows’ processor affinity settings. In Task Manager you can select which cores a process is allowed to use, so for every benchmark I stopped core 0 (the one remaining P core) from being used by the program or game. And the result… it’s a 10400F!
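If you’d rather script that than click around Task Manager for every run, here’s a rough sketch in Python using the psutil library. The assumption that core 0 is the one remaining P core matches my setup with the other P cores disabled in the BIOS; the numbering may well differ on yours.

```python
# Rough sketch: pin a running benchmark to every logical core except core 0,
# mirroring the Task Manager affinity trick described above.
# Requires the third-party psutil package (pip install psutil).
import psutil

def exclude_core_zero(pid):
    proc = psutil.Process(pid)
    all_cores = range(psutil.cpu_count())        # logical cores 0..N-1
    allowed = [c for c in all_cores if c != 0]   # drop core 0 (the P core, on my setup)
    proc.cpu_affinity(allowed)                   # apply the new affinity mask
    return proc.cpu_affinity()                   # read it back to confirm

# Usage: exclude_core_zero(12345), where 12345 is the benchmark's process ID.
```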
Seriously, all 8 E cores offer roughly the same performance as an i5-10400F does. In Cinebench R20 the 10400F only runs about 6% faster single threaded, or about 8% faster all core, which is remarkable considering these are the cores tacked onto an i9! Unfortunately the E cores actually draw more power than a 10400F while in use, peaking at 95.5W, or about 80W stable, versus the 10400F which peaks at just 75W in my previous testing.
What’s interesting is the gaming performance though. CSGO tanked by comparison, netting a 366FPS average versus the P cores on their own sitting pretty at 641FPS. Cyberpunk also dropped pretty hard, from 149FPS on the P cores to 99FPS on the E cores – still playable of course, but it shows the performance delta well. Microsoft Flight also took a hit, although not by much, going from 136 to 121FPS, so not bad at all. And Fortnite dips too, going from a 273FPS average down to 202FPS – again, not bad and certainly still playable!
There is a catch to this mental performance though, and that’s mental power consumption. Intel’s new trick with this generation is the spec I mentioned earlier, “Maximum Turbo Power”. Instead of listing two different power level values, PL1 and PL2, where PL1 is the long term power and PL2 is the short, much higher boost power, these new chips run at PL2 constantly. That means this 12900K runs endlessly at 241W; it never drops down to its “base power” spec of 125W unless it’s thermally or electrically throttling. To put that figure in perspective, AMD’s Ryzen 5900X (and by extension the 5950X) is capped at 142W of total socket power as standard. So, at stock, the 12900K will draw 70% more power than the 5900X or 5950X.
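If you want to sanity check that 70% figure, the maths is as simple as it sounds – here’s the back-of-the-envelope version using the stock limits quoted above:

```python
# Back-of-the-envelope: extra socket power the 12900K draws at stock versus
# a stock Ryzen 5900X/5950X, using the limits quoted above.
intel_pl2_watts = 241   # 12900K "Maximum Turbo Power", sustained at stock
amd_ppt_watts = 142     # Ryzen 5900X/5950X stock package power limit

extra_percent = (intel_pl2_watts / amd_ppt_watts - 1) * 100
print(f"12900K draws ~{extra_percent:.0f}% more power at stock")   # ~70%
```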
Unsurprisingly, running 241W through what are now absolutely tiny chips isn’t great for thermals. In my testing, and confirmed by Hardware Unboxed and Dr Cutress from AnandTech, this thing pings almost immediately to 100°C regardless of what cooler you’ve got on it – although every PR person I’ve spoken to has recommended a 360mm AIO or bigger for this. It sits at 100°C pretty much until the workload is finished, then eventually comes back down. Again, for comparison, with the same cooler even last gen’s i9 runs at 88°C at most, and the 5900X peaked at just 71°C and was stable at 62°C.
Taking a closer look at the HWiNFO logs, you’ll notice that the CPU Package temperature reading doesn’t match the average core temperature reading, and looking at the cores themselves you can see why. The three or four P cores in the middle of the chip are the ones running at or around 100°C, but the ones at the top and the 8 E cores at the bottom? They are all a fair bit cooler. The E cores are all at 83°C, and the upper P cores are in the low 90s. That, I suspect, is another advantage Ryzen has: its cores are split across two physical chunks of silicon under the IHS, rather than the one large block here, which concentrates the heat.
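If you want to poke through your own logs, HWiNFO can write them out as a CSV, and a few lines of Python will pull out the per-core peaks. A quick sketch below – note that the column naming here is an assumption, HWiNFO labels its sensors differently between versions, so check your log’s header row and adjust the filter to match:

```python
# Rough sketch: find the hottest logged temperature for each core column in a
# HWiNFO CSV log. Column names and encoding are assumptions - check your file.
import pandas as pd

log = pd.read_csv("hwinfo_log.csv", encoding="latin-1")

# Keep only columns that look like per-core temperature readings.
core_cols = [c for c in log.columns if "Core" in c and "°C" in c]

# Coerce to numbers (HWiNFO logs can contain stray text rows) and take the max.
peaks = log[core_cols].apply(pd.to_numeric, errors="coerce").max()
print(peaks.sort_values(ascending=False))   # hottest cores first
```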
There are also other issues with this new platform. Resizable BAR support is technically there, but Intel recommends disabling it until they can improve it with firmware and “future product updates”. That means any gains you’ve seen from enabling it – or from AMD’s equivalent, “Smart Access Memory” – are lost for now.
But it’s not all bad news: on certain motherboards (specifically Asus, Gigabyte and ASRock), there’s a sneaky little option in the BIOS. If you disable all 8 E cores, you can enable AVX-512 support. I expect AnandTech’s review will include more about this, as I played host to a remote benchmarking session which verified not only that it works just fine, but that it’s actually faster than last gen and includes the new Sapphire Rapids instructions as well – make sure to give Ian’s article a read to learn more about that!
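If you want to check the flag is actually exposed to the OS after flipping that BIOS switch, here’s a quick sketch using the third-party py-cpuinfo package; on Linux, grepping /proc/cpuinfo for avx512 gets you the same answer.

```python
# Rough check for AVX-512 support as seen by the OS.
# Requires the third-party py-cpuinfo package (pip install py-cpuinfo).
import cpuinfo

flags = cpuinfo.get_cpu_info().get("flags", [])
avx512_flags = sorted(f for f in flags if f.startswith("avx512"))

if avx512_flags:
    print("AVX-512 exposed:", ", ".join(avx512_flags))
else:
    print("No AVX-512 flags found - E cores still on, or the BIOS option is off?")
```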
I think this video is already long enough, so let’s start wrapping up. All in all, the P cores showed excellent performance. They are a healthy improvement over last gen, while drawing slightly less power at peak – although technically more once the 11900K drops off boost. The IPC improvement alone should be enough to see generally stronger performance in games, so if they were on their own I’d call that a welcome improvement.
When you add the E cores though, it gets interesting. Once you get past the hiccups and bugs of an entirely new platform, new OS and scheduler, and new paradigm, you get an impressive chip. I mean, it keeps up with a ‘true’ 16 core Ryzen despite being an 8P + 8E design, so that’s impressive, but I’d be lying if I said I wasn’t at least a little confused. Intel calls these their “efficiency cores”, and yet they are drawing about 10W each under load. Sure, that’s down from about 28W per P core, but my napkin maths says Ryzen’s “full fat” cores – the ones in the 5950X anyway – draw around 8W per core, and even the 12 core 5900X sits at about 10W. Apple’s new M1 Max draws 60W at peak, around 6W per core, while delivering 30% more multi-threaded performance.
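For transparency, here’s that napkin maths written out. The Ryzen figures assume roughly 120W of the 142W socket limit actually reaches the cores, with the rest going to the IO die and SoC – that split is an estimate on my part, not a measured value.

```python
# Napkin maths for the per-core power comparison above. The 120W core-power
# figure for Ryzen is an estimate (142W socket limit minus IO die/SoC power).
e_cluster_watts = 80   # measured sustained draw of the 8 E cores (earlier section)
p_core_watts = 28      # rough per-P-core draw under all-core load

print(f"12900K E core: ~{e_cluster_watts / 8:.0f}W each")                # ~10W
print(f"12900K P core: ~{p_core_watts}W each")                           # ~28W
print(f"5950X core:    ~{120 / 16:.0f}W each (estimated)")               # ~8W
print(f"5900X core:    ~{120 / 12:.0f}W each (estimated)")               # ~10W
print(f"M1 Max core:   ~{60 / 10:.0f}W each (60W peak, 10 CPU cores)")   # ~6W
```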
Sure, the E cores are more efficient than the P cores, but that’s like saying a new 5L V8 Mustang gets good fuel economy just because you are comparing it to a new COPO Camaro with a 9.4L V8 – especially when we could all be driving a Koenigsegg Gemera, which is more efficient, faster, and more practical.
To me, this sort of hybrid, big.LITTLE design works really well in phones and mobile devices – stuff with batteries – where it makes sense. Use smaller, low power cores for the basic stuff and have some high power muscle on standby until you need it. It saves battery and saves heat output. But in a desktop – or even a high-powered laptop – it just doesn’t make much sense to me, especially when AMD can do so much more with so much less power – less even than these efficiency cores.
To top it all off, while the pricing for the chips themselves is actually pretty competitive, the platform cost isn’t. This i9 is currently on preorder for around £600 here in the UK, where the 5950X is generally more like £700, the 5900X is around £480 and the last gen i9-11900K is about the same. But then you realise you need a new Z690 motherboard, which seem to START at around £200 – and for one that’ll handle this i9 with any level of dignity you are probably into the £300 range. Then you look at the eye-watering price of DDR5, which for this Corsair Vengeance 32GB 5200 kit is a cool £300, and a PHAT 360mm cooler, which for the fancy new Corsair one I’ve used for some of my testing is an insane £250, or a much more reasonable £110 for the ARCTIC Liquid Freezer one I reviewed recently. Add it all up and the cost of entry is a lot higher than it seems. A 5950X, by contrast, will run perfectly fine on a £125-150 B550 motherboard, can use £130 DDR4 instead, and could get by with a fairly basic air tower if you were desperate, or a nice £70 240mm AIO from ARCTIC. Suddenly the Ryzen option is between £300 and £500 cheaper AND you’ll have lower energy bills – admittedly you won’t have a space heater for the winter, but maybe you can spend the extra on a higher end GPU and make up the difference in gaming performance and heat output that way.
To make it clear: if you buy any of these chips at launch, i9 or not, you will be a beta tester. Windows 11 scheduler updates will be rolling out non-stop for, I’d guess, a good 6 months before things calm down, BIOS updates will likely come thick and fast too, features will break, and you’ll find bugs and just have to deal with them, so be warned.
To answer the question in the title, I’m honestly not sure. I think for mobile and ultra-mobile this design could work well – in a generation or two’s time – but for desktops? I’m not sure this makes all that much sense. It adds a lot of complexity to solve a problem I feel like AMD has already solved: making the cores more efficient. If AMD added two ultra low power cores to their chips, I could maybe understand it as a way to offload background tasks to crazy low power cores leaving more power and thermal budget for the main cores to chew through, but even then on desktop I’d still argue they should probably go back to the drawing board.