THE BEST GAMING CPU?? AMD Ryzen 9000 & Zen 5 Explained – Everything we know about Ryzen 9950X 9900X 9700X 9600X

AMD has come out of the gate swinging, claiming their new Ryzen 9950X is the fastest gaming CPU on the market – or will be when it launches in July. This Zen 5-based chip, along with the three other Ryzen 9000 series chips announced at Computex this year, forms the lineup that will challenge Intel's Arrow Lake desktop chips when those launch later this year. If you want to know all about Arrow Lake, by the way, I did a full deep-dive video that I'll link in the cards above for you to check out – but this video is all about AMD, and boy, there's a lot to get through, so strap in.

Starting with the chips themselves, the 9950X is still a 16 core, 32 thread monster, although interestingly it has a slightly lower base clock of 4.3GHz, down from 4.5GHz on the 7950X, while keeping the same 5.7GHz boost clock and the same cache and TDP values. They're also announcing the 9900X, a familiar 12 core, 24 thread chip, again with a lower base clock at 4.4GHz, down from 4.7GHz, but the same 5.6GHz boost clock. The 9700X is the 8 core part with a much lower 3.8GHz base clock but a slightly higher boost clock at 5.5GHz, up 100MHz from last gen. And the 9600X is the 6 core, with the biggest base clock drop of the lot, from 4.7GHz down to just 3.9GHz, although again a 100MHz higher boost clock at 5.4GHz. The speculation is that the lower base clocks are all to do with AVX-512 workloads, so I wouldn't be too worried there.

Those chips are all based on AMD's new Zen 5 architecture, and while AMD didn't go into too much depth on what's new with Zen 5, we do have some idea of what's different. We still have functionally the same design and layout – up to two core complex dies on a given desktop chip, with up to eight cores per die, meaning 16 cores total. They've apparently improved the memory controller, so 5600 MT/s RAM is now the officially supported standard – matching Intel, actually – and part of that might be thanks to AMD using TSMC's N4 node for the core complexes. Architecture-wise, AMD says they've improved branch prediction, widened the instruction pipelines for higher throughput, and deepened the instruction windows for more parallelism. What that means in practice is that front-end instruction bandwidth is doubled, cache bandwidth – specifically to the floating point units – is doubled, and AVX-512 throughput is doubled too.

Rather impressively, AMD is claiming a 16% IPC uplift over Zen 4, and what surprised me most here is that, unlike Intel, AMD includes gaming in that figure – they say League of Legends sees a 21% uplift on the new cores, and even Far Cry 6 sees 10% more performance. Plus, in more compute-heavy workloads like Blender there's apparently 23% more performance on tap. That's pretty cool to see! Of course, the real elephant in the room is the lack of any 3D V-Cache CPUs in this lineup. While that isn't exactly a surprise, I'd imagine AMD is waiting to see what Intel has to offer with Arrow Lake before dropping the X3D variants to quash any lead Intel may gain.

With these new chips come new motherboards. X870 and X870E boards will be hitting shelves at the same time, although frankly, much like the CPUs, there isn't all that much new here. They actually use the same chipset dies, although USB4 is now mandatory, as is WiFi 7 (if the board includes WiFi at all). Everything else is pretty similar: the same expected PCIe layouts, the same sort of VRMs, and for the most part the same IO too. Since these are still AM5 boards, I'd expect the older AM5 chips to work just fine in them, and older X670 and X670E boards should, after a BIOS update, support the new chips too – although I'm yet to see that 100% confirmed. Naturally there will be some B series chipsets coming along soon enough as well – although not quite at launch.

One rather nice note to add is that AMD is now planning on supporting the AM5 platform until at least 2027, and possibly beyond that, meaning if you buy a CPU and motherboard today, so long as you keep getting BIOS updates, you should be able to keep upgrading CPUs for another 3-4 years, and therefore 3-4 generations, without too much trouble. That’s fantastic to see, especially considering team blue has an annoying habit of killing platforms just to force you to buy the newer one!

And now we get to the point where we all collectively cringe: AI. There's actually a lot here though, including a new set of laptop CPUs, a new workstation GPU, new EPYC server CPUs, and new data centre GPUs, so hold your nose because we're gonna dive straight in, starting with… the "Ryzen AI 9 HX 370". This is part of AMD's new "Strix Point" CPUs – a codename I didn't think I'd see as an AMD internal name, given Asus kinda claimed it like a decade ago… Anyway, they're announcing two new chips here: the Ryzen AI 9 HX 370, a 12 core, 24 thread part with 36MB of cache – that's around half what the desktop 12 core has, weirdly – Radeon 890M graphics (that's a 16 CU RDNA 3.5 iGPU) and a 5.1GHz boost clock; and the Ryzen AI 9 365, a 10 core part with an even more confusing 34MB of cache – just 2MB less than the 12 core – and Radeon 880M graphics. Both chips feature a neural processing unit, or NPU, and that's what AMD mostly talked about.

The NPU, much like Intel's NPU4 in Lunar Lake, is made up of AI tiles, although AMD – much as they do with graphics cores compared to NVIDIA – has gone with smaller tiles, but more of them. There are 32 tiles, which combine to offer 50 TOPS – that's trillions of operations per second – beating Intel's 48 TOPS, although AMD had something to say about that too. See, there are two main data types generative AI tools use: 8-bit integers and 16-bit floating point. 8-bit integers are faster but less accurate; 16-bit floating point, inversely, is more accurate but normally half as fast. AMD has opted to support block floating point, which means you don't need to quantize models down to INT8 – you get INT8 levels of performance while retaining close to FP16 accuracy. Intel's TOPS numbers are INT8, meaning AMD is claiming not only more operations per second, but more accurate operations too.
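To make the block floating point idea a bit more concrete, here's a toy Python sketch – the block size, mantissa width and test numbers are purely illustrative assumptions on my part, not AMD's actual XDNA 2 format. The point it shows is that each block of values shares a single exponent, so small-magnitude weights keep their precision instead of getting crushed to zero the way they can with a single global INT8 scale.

```python
import numpy as np

def quantize_blocks(x, block=32, mantissa_bits=8):
    """Toy block floating point: each block shares one exponent,
    and each value keeps only a small signed integer mantissa."""
    out = np.empty_like(x)
    for i in range(0, len(x), block):
        chunk = x[i:i + block]
        max_abs = np.max(np.abs(chunk))
        if max_abs == 0:
            out[i:i + block] = 0
            continue
        # Shared scale: smallest power of two >= the block's largest magnitude,
        # spread across the signed mantissa range
        scale = 2.0 ** np.ceil(np.log2(max_abs)) / 2 ** (mantissa_bits - 1)
        m = np.clip(np.round(chunk / scale),
                    -(2 ** (mantissa_bits - 1)), 2 ** (mantissa_bits - 1) - 1)
        out[i:i + block] = m * scale  # dequantised back to floats
    return out

def quantize_int8_global(x):
    """Plain INT8 with a single scale for the whole tensor, for comparison."""
    scale = np.max(np.abs(x)) / 127
    return np.clip(np.round(x / scale), -128, 127) * scale

# Weights whose magnitudes differ wildly between regions - exactly the case
# where a per-block exponent helps and one global INT8 scale hurts
rng = np.random.default_rng(0)
w = np.concatenate([rng.normal(scale=1.0, size=64),
                    rng.normal(scale=0.001, size=64)]).astype(np.float32)

small = slice(64, 128)  # the tiny weights
print("block FP error on small weights  :", np.max(np.abs(w - quantize_blocks(w))[small]))
print("global INT8 error on small weights:", np.max(np.abs(w - quantize_int8_global(w))[small]))
```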

On the GPU front, AMD also announced a new workstation card, the W7900 Dual Slot. It's $3,499, which is frankly a steal for 48GB of ECC GDDR6 VRAM! It comes with 192 AI accelerators built in and 123 teraflops of peak FP16 compute, oh, and an impressive 295W total board power. That's kind of insane, and AMD is specifically marketing this as a dual-slot card so that you can stick four in a case and let 'em rip.
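Just as a back-of-the-envelope on why that four-card setup is appealing – these are my own rough numbers from the specs above, not AMD's marketing figures:

```python
# Rough numbers for a hypothetical four-card W7900 Dual Slot workstation
cards = 4
vram_per_card_gb = 48       # ECC GDDR6 per card
tbp_per_card_w = 295        # total board power per card
fp16_tflops_per_card = 123  # peak FP16 per card

total_vram_gb = cards * vram_per_card_gb          # 192 GB
total_tbp_w = cards * tbp_per_card_w              # 1180 W for the GPUs alone
total_fp16_tflops = cards * fp16_tflops_per_card  # 492 TFLOPS peak FP16

# Very rough capacity estimate: FP16 weights take 2 bytes per parameter,
# ignoring activations, KV cache and framework overhead entirely
max_params_billion = total_vram_gb / 2
print(f"{total_vram_gb} GB VRAM, ~{total_tbp_w} W of GPU power, "
      f"~{total_fp16_tflops} TFLOPS FP16, roughly {max_params_billion:.0f}B FP16 parameters of headroom")
```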

On the software stack front, AMD's ROCm tool suite has improved a fair bit, with now over 700,000 models on Hugging Face supporting AMD and ROCm straight out of the box. OpenAI is supporting ROCm for their Triton compiler, which is a pretty big deal, as are PyTorch, TensorFlow and JAX, all of which now fully support ROCm.
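In practice, "out of the box" mostly means existing PyTorch code just runs, since ROCm builds of PyTorch expose the GPU through the familiar torch.cuda API. Here's a minimal sketch, assuming a ROCm build of PyTorch and the Hugging Face transformers library – the model name is just an example I picked, not one AMD specifically cited:

```python
import torch
from transformers import pipeline

# ROCm builds of PyTorch expose the GPU through the familiar torch.cuda API,
# so most CUDA-targeted code runs unmodified on Radeon / Instinct hardware
print("GPU available:", torch.cuda.is_available())
print("ROCm/HIP version:", torch.version.hip)  # None on CUDA builds of PyTorch

# Example model, not one AMD cited - runs on the GPU if one is visible
generator = pipeline("text-generation", model="gpt2",
                     device=0 if torch.cuda.is_available() else -1)
print(generator("AMD's ROCm stack", max_new_tokens=20)[0]["generated_text"])
```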

In the data centre GPU world, AMD announced the new Instinct MI325X accelerator, with a staggering 288GB of HBM3E memory and an astonishing 6TB/s of memory bandwidth. That's roughly twice the VRAM NVIDIA offers on the H200, meaning you can load models twice the size – 1 trillion parameter models are on the cards here, which is frankly insane. That's coming in Q4 this year – although they also teased what's coming in 2025: the CDNA 4 architecture, which they claim is a staggering 35 times faster at inference than the CDNA 3 architecture the MI325X and MI300 are based on.
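As a quick sanity check on that trillion-parameter talk – these are my own rough numbers, and the eight-GPU node is a common configuration I'm assuming rather than anything AMD confirmed in this announcement:

```python
# Back-of-the-envelope: how much memory do 1 trillion parameters need,
# and how many 288 GB MI325X-class GPUs does that imply?
params = 1e12
hbm_per_gpu_gb = 288

for name, bytes_per_param in [("FP16", 2), ("FP8/INT8", 1)]:
    weights_gb = params * bytes_per_param / 1e9
    gpus_needed = weights_gb / hbm_per_gpu_gb
    print(f"{name}: ~{weights_gb:.0f} GB of weights -> ~{gpus_needed:.1f} GPUs "
          f"(before activations and KV cache)")

# Assumed eight-GPU node, just for scale
print("8-GPU node capacity:", 8 * hbm_per_gpu_gb, "GB")
```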

And lastly we get to some more interesting stuff, which is unfortunately still infected with AI… AMD announced a new EPYC server CPU lineup, codenamed "Turin", which will feature up to 192 core chips made up of 12 core complex dies – presumably 16 cores per die – built on TSMC's N3 process node, and which is actually a drop-in replacement for their current generation Genoa and Bergamo chips. They're coming in the second half of 2024, and while I'll skip over most of the AI stuff they talked about, I'll note that compared to two of Intel's 64 core Xeon 8592+ chips, two of AMD's upcoming 128 core Turin chips offer not just the double tokens per second you'd expect from the core count alone, but 4x the tokens per second in generative AI workloads. That is frankly insane.

They also said that compared to Intel Xeon 8180 based servers – that's a chip from late 2017, for context – you can replace FIVE dual socket servers with ONE dual socket server using AMD's 96 core EPYC 9654, and not only save 80% of the rack space but drop the power consumption from 4.4kW to just 1.56kW. That's frankly insane, and it's even crazier that you'll be able to double that core count to 192 cores per chip, or nearly 400 CPU cores per server! Oh, and the last little tidbit I found interesting was that AMD claims they now hold 33% of the data centre market. That's absolutely huge for AMD, and I'm sure with these new Turin chips, and their GPU arm going hard on AI, we'll see that keep going up.
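For what it's worth, the headline percentages do check out from the numbers they gave – here's the quick maths:

```python
# Consolidation claim: 5 dual-socket Xeon 8180 servers -> 1 dual-socket EPYC 9654 server
old_servers, new_servers = 5, 1
old_power_kw, new_power_kw = 4.4, 1.56

rack_space_saved = 1 - new_servers / old_servers  # 0.80 -> 80% fewer boxes
power_saved = 1 - new_power_kw / old_power_kw     # ~0.65 -> roughly 65% less power

print(f"Rack space saved: {rack_space_saved:.0%}")
print(f"Power saved: {power_saved:.0%} ({old_power_kw} kW -> {new_power_kw} kW)")

# And the core counts: today's 9654 vs a future 192-core Turin part, dual socket
print("Cores per dual-socket server today:", 2 * 96)
print("Cores per dual-socket Turin server:", 2 * 192)
```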

So that's about it for what's new with AMD. The new desktop CPUs – which, it's just occurred to me, won't have AI accelerators on board, while Arrow Lake will… huh, that's going to be interesting. Anyway, those are due out really soon, so we'll see how they actually perform in the real world shortly, and I'm interested to see when AMD pulls out the X3D trap card too.