Intel i9-11900K Review – Don’t Buy This.

It’s here! Intel’s long-awaited 11th generation i9, sporting a 19% IPC improvement, all-new Cypress Cove cores, higher boost clocks, faster 3200MHz memory support out of the box, and PCIe Gen 4 too! That 19% number isn’t just marketing either: in my testing with single-threaded tests like Cinebench R20, it’s 24% faster than the last-gen 10850K I bought to test against. That translates to multi-core loads too, again in Cinebench, where… oh. It’s slower. Yeah. Why? Well, there’s a dirty little secret. Last year’s i9 offered 10 cores and 20 threads. This year… 8.

The reason for the lower core count on this newer chip, in the end, comes down to power draw. While rendering in Blender, using the pre-configured PL1 and PL2 figures Cyberpower set (and which many motherboards will use by default), this CPU choked down 267W – and enabling the new boost option spiked that to a shocking 310W. From an 8-core chip. AnandTech reckons a 32-core Threadripper draws less power than this desktop 8-core. You might be asking why, and how? Let me explain.
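For context on what those PL1 and PL2 figures actually control: PL1 is the sustained power limit, PL2 is the short-term boost limit, and a third value, Tau, sets how long the chip can average above PL1 before it has to back off. Here’s a minimal Python sketch of how those knobs interact – the EWMA behaviour is a simplification of Intel’s turbo algorithm, and the “uncapped” 4095W limits are my stand-in for the effectively unlimited values many Z590 boards apply out of the box:

```python
# Minimal sketch of Intel's PL1/PL2/Tau turbo budget, using an
# exponentially weighted moving average (EWMA) of package power.
# A simplified model with illustrative numbers, not board measurements.

def allowed_power(ewma_watts, pl1, pl2):
    """The CPU may draw up to PL2 as long as its recent average
    power (the EWMA) stays at or below PL1; otherwise it clamps
    down to PL1."""
    return pl2 if ewma_watts <= pl1 else pl1

def simulate(load_watts, pl1, pl2, tau=56, seconds=120):
    """Step a constant heavy load one second at a time and report
    how long the chip sustains PL2 before dropping to PL1."""
    alpha = 1 / tau          # EWMA decay rate set by Tau
    ewma = 0.0
    for t in range(seconds):
        draw = min(load_watts, allowed_power(ewma, pl1, pl2))
        ewma += alpha * (draw - ewma)
        if draw == pl1:
            return t         # seconds of boost before throttling
    return seconds           # never throttled within the window

# Intel's spec defaults for the 11900K (125W/251W/56s) vs. the
# effectively uncapped limits many enthusiast boards ship with.
print(simulate(300, pl1=125, pl2=251))     # ~39s, then clamps to 125W
print(simulate(300, pl1=4095, pl2=4095))   # 120s: never throttles
```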

Intel has been, for lack of a better word, stuck on their “14nm” process for the last six years – technically since Broadwell (5th generation, although those chips barely saw a public release, so Skylake was the first mass-market design). They’ve been trying over, and over, and over to get their “10nm” process off the ground, and while they are shipping some 10nm CPUs, those are mostly low-power mobile chips – reports suggest their main issue is running any more than a few watts through that new silicon. A far cry from the 300W this chip needs, for sure. So, they did the next best thing. Every CPU from 6th to 10th generation is based on the same basic core design (you could argue it’s been even longer than that), but since Intel has been doing so much work designing CPUs for 10nm, they decided to take those new designs and “backport” them to work on 14nm.

At this point I should make it clear that “14nm”, “10nm” and “7nm” don’t really mean anything anymore – they used to refer to the size of the transistors, but these days they normally mean “we shrunk them a bit, and made enough other changes to call it something different”. In practice, Intel’s “14nm” process has a similar density to TSMC’s “10nm”, and Intel’s “10nm” matches (technically exceeds) TSMC’s “7nm” – what AMD is using currently. But that’s kind of the point. AMD has been using TSMC’s “7nm” since Ryzen 3000, and is due to move to TSMC’s “5nm” next year with Zen 4. Meanwhile Intel are playing catch-up: their current process offers less than half the estimated peak density of the one AMD’s current (and past) chips are built on, and even if 12th gen ships on Intel’s “10nm”, AMD will leapfrog to around 71% more dense at the same time.
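If you’re wondering where figures like “less than half the density” and “71% more dense” come from, here’s the back-of-the-envelope version. The densities below are commonly cited public estimates in millions of transistors per mm², not vendor specs, so treat them as ballpark:

```python
# Rough, publicly estimated peak densities in million transistors
# per mm^2 (MTr/mm^2). Figures vary by source; these are ballpark.
density = {
    "Intel 14nm": 37.5,
    "Intel 10nm": 100.8,
    "TSMC 7nm (Zen 2/3)": 91.2,
    "TSMC 5nm (Zen 4, expected)": 171.3,
}

# Intel's current process vs. what AMD is shipping on today:
print(f"{density['Intel 14nm'] / density['TSMC 7nm (Zen 2/3)']:.0%}")
# -> ~41%: less than half the density

# Even if 12th gen lands on Intel "10nm", TSMC 5nm pulls ahead again:
ratio = density["TSMC 5nm (Zen 4, expected)"] / density["Intel 10nm"]
print(f"{ratio - 1:.0%} more dense")
# -> ~70%; depending on which estimates you use you get the ~71% above
```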

All of that puts Intel on the back foot here. Having to backport their designs to run on the larger process node means less efficiency than intended, and more power draw. Combine that with the higher boost clocks of the new Adaptive Boost Technology – up to 5.1GHz all-core – and it’s a recipe for disaster (or a fire hazard).

Now, you’ve likely already seen the pre-launch reviews of the 11700K, and know that the i7 is ALSO an 8-core, 16-thread chip, so what makes the i9 special? Well, as is the Intel way, they’ve hamstrung the i7’s boost characteristics, disabling “Thermal Velocity Boost” and not allowing you to use the new “Adaptive Boost Technology” either, so you are stuck with the standard Turbo Boost 2.0 and 3.0 numbers. What does that mean? Well, the maximum the i7 will hit is 5GHz, and 4.6GHz all-core, whereas the i9 can hit a peak of 5.3GHz, and up to 5.1GHz all-core when using ABT. In theory that means the i9 can be around 10% faster all-core and 6% faster single-core, which bears out in Cinebench (again quoting against AnandTech’s review): single-threaded it’s around 6% faster, while multi-threaded doesn’t get quite as close to theoretical, at around 7% faster.
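Those theoretical figures are nothing more exotic than the ratio of the boost clocks, if you want to check the maths yourself:

```python
# Quick sanity check on the theoretical i9-vs-i7 clock advantage.
i7 = {"single": 5.0, "all_core": 4.6}   # GHz, Turbo Boost 2.0/3.0 caps
i9 = {"single": 5.3, "all_core": 5.1}   # GHz, with TVB and ABT enabled

for kind in ("single", "all_core"):
    gain = i9[kind] / i7[kind] - 1
    print(f"{kind}: {gain:.0%} faster in theory")
# single: 6% faster in theory
# all_core: 11% faster in theory -- the "around 10%" quoted above
```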

Unfortunately, when you start to compare it to its peers – namely the Ryzen 9 5900X and Ryzen 7 5800X I also bought for this test – it’s not the prettiest picture. Again in Cinebench R20, single-threaded, the new 11900K pretty much matches both Ryzen chips – and that’s with every boost setting turned on. All-core, without ABT enabled, it’s actually slower than the last-gen i9 thanks to its two fewer cores and four fewer threads, and slower than the 5800X too; with ABT enabled it does pull ahead of both the 10850K and the 5800X. But those aren’t its price competitors – the 5900X is – and even with ABT enabled the 5900X is 34% faster.

In a more “real world” application of that, in Blender, shorter ‘bursty’ workloads can show a reasonable advantage over the 5800X: with ABT enabled the new i9 is 11 seconds faster to render the scene. Then again, the old i9 is a further 12 seconds faster than that, and the 5900X… well, it’s a bloodbath. It’s 32 seconds faster, or 26%. Ouch.

In longer workloads it’s even tougher. Even with this massive 360mm AIO running full blast, it peaked at 101°C without ABT enabled, and with ABT it hit 101°C and stayed there for the duration of every CPU-intensive test. In the Gooseberry render, the new i9 is the slowest in the field, even with ABT enabled. The old i9 beats it by 5% – and remember, that i9 wasn’t exactly power efficient either, peaking at 203W in the same test and 84°C with the same cooler. The 5800X also beats it, although by a much slimmer margin than the old i9 did, but the real winner is the 12-core 5900X, which was an astounding 40% faster – or around 4 minutes faster. If you wanted to render 30 seconds of that scene at just 30fps, the i9 would take 60 HOURS longer than the 5900X. While that isn’t exactly a realistic workload, it gives you an idea of the performance delta.
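In case the 60-hour figure sounds made up, it’s just the per-frame Gooseberry delta scaled up to a short animation:

```python
# Extrapolating the per-frame Gooseberry delta (~4 minutes) across
# a 30-second, 30 fps animation.
frames = 30 * 30            # 900 frames in 30 s at 30 fps
delta_min = 4               # ~4 minutes slower per frame
extra_hours = frames * delta_min / 60
print(f"{extra_hours:.0f} extra hours on the i9")  # 60 extra hours
```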

Intel does claw back an advantage in the Adobe CC suite – Premiere Pro, After Effects and Photoshop – trading blows well with the 5900X in the Puget Bench suite. It struggles a little more in Premiere thanks to slower rendering performance compared to the Ryzen 9, but gains an advantage in After Effects, where the effects plugins can be more single-thread oriented – and potentially still very Intel-optimised. Photoshop shows the same result, again thanks to the filters and plugins working better on the i9.

But! “But Andrew, it’s a gaming CPU! And ‘Gaming Happens With Intel’!” Sure, but it also happens with ARM and AMD. Either way, I wouldn’t be doing my job if I didn’t test some games on this, so let’s take a look. I’m using the RTX 3080 provided in this Cyberpower system – which you can check out via the link below, by the way – for all the CPUs. I’m testing at 1080p, as that’s where you are most likely to see performance differences, but also at 1440p, because if you are spending, what, £500 on a CPU, you sure as hell aren’t playing at 1080p. Unless you’re using a 360Hz monitor, in which case fair enough.

Starting with Watch Dogs: Legion on Ultra settings at 1080p, they are all remarkably close – all within a couple of FPS of each other, close enough to call it ‘within margin of error’. From slowest to fastest there’s only a 4 FPS gap, so while technically the new i9 is on top of the chart, it’s not what I’d call a ‘convincing lead’. At 1440p the story is the same; this time the old i9 was the one to get that 1 FPS extra, pushing it to the top of the graph, but again, in reality these will all play exactly the same, and you won’t notice a difference.

Cyberpunk 2077 – a game that in my testing can be pretty CPU-bound on lower-end hardware – looks much the same as Watch Dogs here. The gap from slowest to fastest is slightly wider at 8 FPS, with the new i9 taking a more reasonable lead, although still only by a couple of frames. Interestingly, the 1% lows tell a slightly different story, indicating the playing experience might not be quite as good on the older i9 or the Ryzens, although I can’t say I noticed that while playing. At 1440p the gap remains pretty similar, with the new and old i9s and the 5900X all well within the run-to-run variation I’d expect – and honestly, the 5800X would fit into that too. Although again, the 1% lows tell a slightly different story, with the new i9 coming out ahead.

In COD: Modern Warfare at 1080p, it’s back to anyone’s guess. The performance gap between them is close enough that I couldn’t tell you which would ultimately be faster. There’s a 6 FPS gap all in, and technically the new i9 is the slowest – but I wouldn’t put much weight on that. And the same goes for 1440p. There isn’t much more I can say, so I’ll leave this chart up a little longer so you can take a look before I move on.

And lastly for me, Fortnite. This one does show a bit more of a gap between the older 10850K and the new 11900K, with the newer chip being 15 FPS faster on average. Interestingly though, both the 5800X and 5900X are a hair faster than the 11900K, although Fortnite is a game with incredibly wide performance swings, so your in-game performance would end up being incredibly similar between the three. At 1440p it all smooths out, with an absolute maximum of 6 FPS between the slowest and fastest, which is reasonably covered by run-to-run variance.

So, it’s perhaps a hair faster than the old chip in gaming – at 1080p Ultra settings, anyway. It does technically pip the Ryzen chips to the post most of the time, although the margins are so slim you’d never know it.

So to wrap things up: the 11900K is just about as fast as last year’s i9, yet draws anywhere from 30% to 50% more power while doing so, runs at 101°C even with a massive 360mm AIO water cooler, and is only marginally faster in gaming. On average it’s 14% slower across the board than the 5900X, all while drawing 118% more power, all for the same price. My conclusion? Don’t buy this. If you do want to go 8-core Intel, the i7 looks to be a much better option, offering pretty comparable performance (including in games) while being a little less power hungry and a fair bit lighter on your wallet. If you just want to game, the 11600K – which I’ll be doing a full review of later this week – seems to be a much better value proposition, as does the 11400, which I will hopefully be checking out soon. But if you are willing to spend £500 or so on a CPU, the 5900X performs better across the board and is much, much less of a space heater too.
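To put that conclusion in performance-per-watt terms, here’s a rough sketch combining this review’s own headline averages:

```python
# Rough performance-per-watt comparison (11900K vs 5900X), using
# the averaged figures quoted above.
perf_ratio = 1 - 0.14       # 11900K averages ~14% slower
power_ratio = 1 + 1.18      # while drawing ~118% more power
ppw = perf_ratio / power_ratio
print(f"11900K delivers ~{ppw:.0%} of the 5900X's perf-per-watt")
# -> ~39%: well under half the efficiency, for the same price
```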

A few final bits of information that didn’t quite fit anywhere else: I tested this on three different BIOS versions, each with different microcode updates. When using Adaptive Boost I experienced a lot of instability and blue screens – it’s definitely not stable yet, at least on the Z590 Carbon board that came with this Cyberpower system. I also ran some fully stock numbers with no power limit modifications; that was a fair bit slower, down around 100 points in Cinebench R20 multi-threaded, although thanks to the overheating issue it actually offered pretty much the same performance in anything longer than a minute.

The Cyberpower system is well built and comes with all the accessories and extra cables, which I really like, although they only used one of the two 8-pin CPU power connectors – which, for a CPU that can choke back 314W at peak while running “in-spec” and still covered under warranty, isn’t great. They need to use both in future. Also, as configured, 16GB of RAM isn’t enough for this level of system. You aren’t buying an i9 just for gaming, so 32GB would be welcome.

• TechteamGB Score: 3.5