Intel Arrow Lake IS HERE! Intel Core Ultra 200 Series Launch

Intel’s newest generation of CPUs is (almost) here, and an awful lot has changed, including the damn names, so let’s start there. These new chips aren’t the 15th generation of Intel’s Core i series; they’re the Core Ultra 200 series. Specifically, the Core Ultra 9 285K, Core Ultra 7 265K, and Core Ultra 5 245K – along with KF SKUs of the 5 and 7 chips. Don’t ask why these are canonically the second generation of Core Ultra chips, although if you’ll permit me to be a pedant for just a moment… Intel has had a pretty consistent naming scheme for almost two decades now. The digits after the generation denote the class – a 14600K is a 600-class chip, a 14900K is a 900-class chip. That class structure has been pretty consistent too, so to have it shifted down a step – an 85-class chip is an i9, sorry, Ultra 9, and a 65 is an Ultra 7? Nah mate.

Anyway, that’s the least of our concerns. The chips themselves are new too. Here’s the spec list. The 285K is a 24-core part, made up of 8 Lion Cove performance cores and 16 Skymont efficiency cores – and if you want to know more about those cores, or why these chips no longer have hyperthreading, check out my full explainer video already live on the channel; that’ll be in the cards above for you. The 265K keeps the same 8 P cores but loses 4 E cores, making it a 20-core part, and the 245K drops 2 P cores and another 4 E cores for a total of 6 + 8, or 14 all in. Interestingly there’s no mention of the LPE cores found in Lunar Lake, so this is quite a different SoC tile from that one.
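Just to keep the core maths straight, here’s a quick sketch of the lineup exactly as described above – purely a summary of those counts, nothing beyond what Intel has listed.

```python
# Core Ultra 200K lineup as described above (P = Lion Cove, E = Skymont).
lineup = {
    "Core Ultra 9 285K": {"P": 8, "E": 16},
    "Core Ultra 7 265K": {"P": 8, "E": 12},
    "Core Ultra 5 245K": {"P": 6, "E": 8},
}

for name, cores in lineup.items():
    total = cores["P"] + cores["E"]  # no hyperthreading, so cores == threads
    print(f"{name}: {cores['P']}P + {cores['E']}E = {total} cores / {total} threads")
```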

The big story Intel is pushing here is all about power – and not the performance kind, the from-the-wall kind. These Arrow Lake CPUs are meant to be significantly more efficient – as in, the same performance at half the power of Raptor Lake. They do say they should be faster at full power, but that seems to be merely a byproduct of the increased efficiency. They aren’t even claiming gaming performance leadership – even compared to their own last-gen part, they say you’ll get a fraction LESS performance, albeit at a considerable power drop: 264 FPS on the 14900K versus 261 FPS on the 285K (about 1 percent slower), but at 80W less power. In fact, looking at the 14-game chart, the majority of games are listed as “PAR” – with the real takeaway being that some games draw considerably less power on the newer part.

They even claim that, at least on what I have to imagine are the 7 games that best make this point, you can drop the power limit from the stock 250W all the way down to 125W and not lose any performance. Of course, for some titles that’s not saying much – something like Rainbow Six Siege, which is included in their 7-game list, really isn’t an all-core load, so dropping the power limit even to half just means you have less headroom for other tasks in the background; the game itself won’t be affected. Compare that to something like Cyberpunk, which is a fair bit more intensive, and you might find that a lower power limit does hurt performance. Of course, with that lower power draw you get a corresponding temperature drop – Intel is claiming a whopping 13°C drop on average compared to the 14900K, at least while gaming. For all-core workloads, both chips can still suck back 250W, so there isn’t likely to be much thermal difference there.
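If you fancy recreating that experiment yourself, this is roughly how you’d cap package power from software on a Linux box using the kernel’s intel_rapl powercap interface – a minimal sketch of a generic mechanism, not anything Intel demoed, and the exact sysfs path can vary from system to system.

```python
# Minimal sketch: cap the CPU package long-term power limit (PL1) via Linux's
# intel_rapl powercap interface. Needs root; the RAPL path can differ per system.
RAPL = "/sys/class/powercap/intel-rapl:0"        # CPU package 0
PL1 = f"{RAPL}/constraint_0_power_limit_uw"      # long-term limit, in microwatts

def set_pl1(watts: float) -> None:
    with open(PL1, "w") as f:
        f.write(str(int(watts * 1_000_000)))     # powercap expects microwatts

if __name__ == "__main__":
    set_pl1(125)  # e.g. drop from a 250W stock limit to 125W, as in Intel's claim
```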

The efficiency isn’t the only new thing though, and there are some really interesting gems hidden here, so let me start digging. The chips themselves use the tiled design first launched with Meteor Lake last year, but this is the first time we’ve seen it come to the desktop market. Long story short – and check out my Meteor Lake video if you do want the long story – these chips are made up of smaller “tiles” (similar to AMD’s chiplet design) that, unlike AMD’s, are bonded to a mother die – the base tile. The tiles are the compute tile with all your cores and cache, the IO tile where things like PCIe connect, the SoC tile where you’ll find the memory controller, the NPU and more, and in this case the GPU tile too. Those tiles – well, all but the base tile – are actually made by TSMC. The compute tile is on TSMC’s N3B process node, the GPU tile on N5P, and the SoC and IO tiles on N6 – which is a really big deal. Intel not using its own process nodes, even for the compute tile, is huge.

Something that isn’t huge though is the chips themselves. They are now 33 percent smaller, which might explain why we need a whole new socket and chipset to go along with these things. The socket is LGA1851 – which is kind of interesting because that means more pins, up from the 1700 used by the 12th, 13th and 14th gen chips – paired with new Z890 motherboards. That chipset is actually pretty huge: it has WiFi 6E built in, meaning all motherboards should be WiFi boards, with the option of upgraded WiFi 7, plus built-in Bluetooth too. It even has a 1G NIC, which is fantastic, and of course the option for 2.5G or higher NICs too. You’ve got 8 SATA ports, 14 USB 2.0 ports, 10 USB 3.2 ports, and 24 PCIe lanes from the chipset. The CPU itself has 20 PCIe Gen5 lanes onboard, with the usual 16/4 split for a GPU and an SSD, plus a Gen4 link to the chipset.

I should mention that there are actually some new details about the cores themselves too. The layout of the compute tile has changed, as the E cores are now intermingled with the P cores, which Intel says helps with heat dissipation – basically, by spreading the hot P cores out you should get more even temperatures and fewer hot spots. Sweet. The big change to the architecture though is that the L3 cache is now shared not only between all available P cores, but the E cores too. This should help speed up the core-to-core task switching we know these chips do an awful lot of. The P cores also get a cache bump, with L2 going from 2MB to 3MB per core – nowhere near AMD’s stacked 3D V-Cache levels of L3 of course, but more cache rarely hurts.

Earlier I mentioned that the SoC tile contains an “NPU” – that’s a Neural Processing Unit, aka an AI accelerator – although it’s worth noting this isn’t the newest version of Intel’s NPU, the one in the mobile-only Lunar Lake chips. This is the now ‘old’ NPU3 from Meteor Lake, a 13 TOPS design, meaning it does not meet Microsoft’s 40 TOPS requirement for Copilot+. They said in the Q&A that this was a deliberate choice, purely based on available die space. They could have fit the newer NPU4 in – which I think offered 48 TOPS – but then they might have had to cut, say, the integrated GPU, which on these non-F-SKU parts is a quad-core part. They were pretty proud of the built-in media engine and Quick Sync for creators, so they opted for the older, and considerably less powerful, NPU3 instead.

They did also talk about overclocking these things. The biggest topic was memory, which actually gets a pretty major change that you’ll be hearing more about from me pretty soon: CUDIMMs, or clock driver unbuffered DIMMs. Basically, those modules have a clock driver built onto them that re-drives the clock signal coming from the memory controller. That makes higher speeds considerably more stable, and according to Intel anyway, 8000 MT/s is the new sweet spot for overclocked memory. The new base standard – as in the max speed that won’t void your warranty – is now DDR5-6400, although for anything much higher you’ll likely want a CUDIMM to get that stability benefit. Oh, and before you get your hopes up, Intel mentioned that ECC support is for the workstation parts, not the gaming parts. Shame.
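For a sense of what those transfer rates mean in raw bandwidth terms, here’s the usual back-of-the-envelope calculation – assuming a standard dual-channel DDR5 setup with a 64-bit bus per channel, which is my assumption rather than an Intel figure.

```python
# Theoretical peak bandwidth = transfers/s * bus width in bytes * channels.
def peak_bandwidth_gbs(mt_per_s: int, channels: int = 2, bus_bits: int = 64) -> float:
    return mt_per_s * 1e6 * (bus_bits / 8) * channels / 1e9

for speed in (6400, 8000):
    print(f"DDR5-{speed}: ~{peak_bandwidth_gbs(speed):.0f} GB/s peak, dual channel")
# DDR5-6400 works out to ~102 GB/s, DDR5-8000 to ~128 GB/s
```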

As for overclocking the chips themselves, as mentioned in the Lunar Lake video, they’ve made the core clock more granular, with 16.67MHz steps down from 100MHz, meaning you can more finely control the frequency when eking out the last little bit of performance. The chips now also have dual base clocks – one for the SoC tile and one for the compute tile – so you can overclock each independently. They were pretty clear that memory overclocking is likely where it’s at, along with E core overclocking, but that the P cores are already running right at the limit, so you aren’t likely to get much out of them.
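To show why those finer steps matter, here’s a quick illustrative sketch of which clock points you can actually land on near a given target with each step size – the step values are the ones Intel quoted, the target range is just an example.

```python
# Frequencies reachable in a window with coarse (100 MHz) vs fine (16.67 MHz) steps.
def reachable(step_mhz: float, lo_mhz: float, hi_mhz: float) -> list[float]:
    n = int(lo_mhz // step_mhz)
    points = []
    while n * step_mhz <= hi_mhz:
        if n * step_mhz >= lo_mhz:
            points.append(round(n * step_mhz / 1000, 3))  # in GHz
        n += 1
    return points

print("100 MHz steps:  ", reachable(100.0, 5600, 5800))   # just 5.6, 5.7, 5.8 GHz
print("16.67 MHz steps:", reachable(16.67, 5600, 5800))   # a dozen points in the same window
```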

Finally, we have pricing. I’ll stick the slide on screen so you can see them all, but in short they are pretty much the same price as the 14th gen parts, at least at MSRP. As of writing, all the 14th gen chips seem to be on sale – often at considerable discounts compared to MSRP. For example, the 14900K is over $100 off MSRP at $469, and the 14600K is $259, making it a decent bit cheaper than the 245K. We will have to wait until later this month to run our own tests and see how they stack up, and whether it’ll be worth splashing out over the 14th gen – and over Ryzen, especially with the new X3D chips on the horizon too.