Latency in Games – GUIDE – Rainbow 6 Siege Latency Guide

Latency can make a big difference to whether you win a game or not. Being the one to shoot first can be the difference between a clutch and a cave. But odds are you don’t know which settings improve latency and which make it worse. That’s where I come in – along with my open source latency testing tool, which you can pick up at OSRTT.com, by the way. In this video I want to take a big picture look at what settings and tweaks affect latency in games, and specifically in Rainbow Six Siege. I’ll have a complete rundown of what works best for Siege in particular at the end too, so stick around for that. Let’s start with the big picture though – what actually affects latency in games?

Starting from the outside and working in, the first thing to think about is your peripherals – namely your mouse and monitor. Your mouse takes some amount of time between you clicking the button and your system registering that click event. Luckily OSLTT can test this with pretty great accuracy, so if we use my Logitech G703 as an example, this takes just shy of 5 milliseconds on average. That’s pretty good for a wireless mouse, although a higher polling rate mouse like the Razer Viper 8KHz would take under 1 millisecond even in the real world, and wired mice generally run a touch faster than wireless ones. Most of that delay is actually debounce time though, so something like Corsair’s solution of applying the debounce delay after registering the click negates it. Or use optical switches, which don’t bounce in the first place. Both work.
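To illustrate what “debounce after the click” means, here’s a minimal sketch of the concept in Python – not anyone’s actual firmware, and the 4 millisecond window is a made-up number:

```python
import time

DEBOUNCE_MS = 4.0  # hypothetical switch settle time


class DebouncedButton:
    """Report a click on the first edge, then ignore contact bounce.

    This is the 'debounce after registering' idea: the click is passed
    through immediately, and the debounce window only suppresses the
    spurious edges that follow, so it adds zero delay to the click itself.
    """

    def __init__(self) -> None:
        self._ignore_until = 0.0

    def on_press_edge(self) -> bool:
        now = time.monotonic() * 1000.0  # milliseconds
        if now < self._ignore_until:
            return False  # bounce inside the window: swallow it
        self._ignore_until = now + DEBOUNCE_MS
        return True       # genuine press, reported with no added delay
```

The traditional approach does the opposite – it waits out the settle window first and only then reports the click – which is where that few milliseconds of mouse delay mostly comes from.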

The monitor itself is also something to keep in mind. Most monitors are pretty good on this front, offering under one frame’s worth of latency on average. This AOC Q24G2A, as an example, averaged 2.7 milliseconds of on-display latency, which is less than half its refresh window, meaning it’s plenty fast. Some displays have “low input lag” modes – if yours does, definitely turn that on. TVs normally suck for input lag, so again search for a low input lag mode, or at least a game picture mode there.

If you are gaming on a laptop, there are a few extra steps to consider. This XMG Core 16 performs well with just 2.7 milliseconds of on-display latency – a touch high for a 240Hz panel mind you, but still decent. The thing is, gaming laptops generally have two GPUs: the low power integrated GPU built into the CPU package, and the dedicated GPU, normally from NVIDIA or AMD. How those GPUs are connected varies – either the dGPU’s video signal has to get routed through the iGPU, which adds latency, or, if the laptop has a MUX switch, the dGPU can be connected directly to the display. Even with a MUX switch, you might still find benefit in setting it to run on the dGPU only. On this laptop that’s a really marginal improvement, but on the Asus STRIX Scar 16 it’s a much bigger deal – that one performs poorly regardless, but dGPU-only mode basically halves the input lag, from 14.5 milliseconds to 7.5 milliseconds. Lastly, if your laptop doesn’t have a MUX switch, you might find that connecting an external monitor improves things. In my case it doesn’t – it actually makes things a fair bit worse at nearly 9 milliseconds, up from 2.7 milliseconds on the laptop’s display – but it’s something to try.

Moving on to the game itself, it turns out that Siege is a fantastic choice to demo how big a difference all these settings can make. To give you an idea, the worst result I got was over three times slower than the best – and that’s just looking at the averages! Let’s start with running the game as normal on the Ultra preset – I mean, Siege isn’t exactly a super intensive game and ultra still runs at decent FPS, so why not, right? Well, that comes out to 16 milliseconds. Hard to judge with no context, of course, so let’s add some. Swapping to the Low preset – nothing else changed, still 2560×1600 and all – drops that to 12.4 milliseconds. That’s kind of insane, right? You get almost a 4 millisecond advantage just by dropping the settings. Interestingly, medium doesn’t add much on top of low, coming out to 13.5 milliseconds – 1.1 milliseconds slower than low, but 2.5 milliseconds faster than ultra.

Something you might not realise is that in Siege, the render scale setting is set to 50% for all of the presets – even ultra. Siege’s render scale applies to the total pixel count rather than to each axis, so each dimension gets scaled by roughly the square root of that percentage. That means we’ve been rendering at 1812×1132, not 2560×1600 as we’d expect, and we’ve been getting significantly better performance than you might think. So what happens when we crank it up to 100% render scale? Well, it gets slower. A lot slower, actually: 23.66 milliseconds, up from 16 milliseconds on the same ultra preset with 50% render scale. That’s huge!

The thing is, framerate alone plays a huge role in latency. Rainbow Six Siege handily has an FPS limiter built in, so let’s run it again, this time on the low preset but with a 60 FPS cap. Low with no cap ran at 12.4 milliseconds, but adding the 60 FPS cap makes this a frankly insane 35.8 milliseconds average! 12.4 to 35.8 – nearly triple, all from an FPS cap! In short, that comes from the fact that 60 FPS means a new frame every 16.7 milliseconds, so events that miss the current frame have to wait up to an extra 16.7 milliseconds before they can show up on screen. Interestingly, with a 144 FPS cap we get back down to 18 milliseconds – still considerably higher than the 12.4 we’d have with no cap, but pretty good by comparison. Lastly for this set, I thought it’d be interesting to test with VSYNC on – no other FPS caps, so this is effectively a 240 FPS cap since this is a 240Hz display – and somewhat predictably we get a faster result than the 144 FPS cap, but still not quite as good as fully uncapped: 16.4 milliseconds, or 4 milliseconds slower than low on its own.
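The cap maths is easy to sanity-check yourself – here’s a quick sketch using the caps from the tests above:

```python
# Frame interval for each FPS cap, plus the worst-case extra wait for an
# input event that just misses the current frame.
for cap in (60, 144, 240):
    interval_ms = 1000 / cap
    print(f"{cap:>3} FPS cap: a new frame every {interval_ms:.1f} ms, "
          f"so a just-missed input waits up to {interval_ms:.1f} ms extra")
```

At 60 FPS that worst case is 16.7 milliseconds; at 240 FPS it shrinks to about 4.2 milliseconds, which lines up with why the higher caps hurt so much less.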

Now, for a lot of games that’d be about it for settings to tweak, but Siege… Siege has so much more. See, in 2020 Ubisoft added a launch option to Siege – the option to use Vulkan instead of DirectX 11. That means we not only need to retest everything we’ve done so far, but it gives us access to a bunch of fancy new features like NVIDIA Reflex and DLSS, and even AMD’s FSR upscaler. So, first things first, what is native low and ultra performance like? Well, low is faster at 11.6 milliseconds, down from 12.4 with DirectX 11, but ultra is actually 0.5 milliseconds higher than before. Not a big difference, but something to consider. Now, sticking with the low preset, let’s turn on NVIDIA Reflex… Oh. It’s… worse! What about On plus Boost? Worse again! What?? Well, let me show you the same tests, but this time using the ultra preset. Ahhh, there we go! It drops over 1 millisecond when you turn on Reflex and Boost. That makes sense. What about adding DLSS into the mix? On Ultra Performance, no less. OK, that’s a decent improvement – just under 1 millisecond better… Progress is progress! Hmm, what about swapping to FSR instead? Wow! That makes such a big difference – ANOTHER TWO milliseconds lower! That’s incredible! Here’s something that might surprise you: when using the low preset, DLSS only ADDS latency, but FSR 2.0 manages to drop it by almost 2 milliseconds from stock low performance. That’s some special sauce AMD has going on there…

One thing I want to make clear is the difference between pure FPS-related performance gains and changes to how the game engine handles drawing frames. Some settings just give you more frames per second, but some settings help the engine actively skip steps in the render pipeline, or at least organise it so it flows more efficiently. The ultra, Reflex Boost and FSR 2.0 results show this perfectly. If I add in the frames per second benchmark results here, you can see that by turning on Reflex and Boost you don’t gain any FPS, but thanks to the optimisations in the background you get nearly two milliseconds less latency. Contrast that with the FSR result, which runs nearly 100 FPS faster, and you can expect those gains come from the FPS improvement.
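A rough way to separate the two effects, sketched in Python with made-up FPS numbers (the method is the point here, not the values):

```python
def frametime_ms(fps: float) -> float:
    """Time between frames at a given framerate."""
    return 1000.0 / fps


def fps_explained_gain(fps_before: float, fps_after: float) -> float:
    """Latency gain that the framerate change alone would explain.

    End-to-end latency usually contains one to a few frame times, so
    treat this as a per-frame-time lower bound on the FPS-driven gain.
    """
    return frametime_ms(fps_before) - frametime_ms(fps_after)


# Reflex-style result: same FPS before and after, so 0 ms is explained
# by framerate - any measured drop must come from pipeline changes.
print(fps_explained_gain(300, 300))  # 0.0
# FSR-style result: ~100 FPS more explains a real chunk of the gain.
print(fps_explained_gain(300, 400))  # ~0.83 ms per frame time in the pipeline
```

If the measured latency drop is bigger than the frame-time change accounts for, the extra is coming from the pipeline, not the framerate.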

The final thing I want to test is resolution. We’re running at a reasonably high resolution here – although I suppose we’re actually rendering at basically 1080p, so maybe it isn’t that high – but I want to see what happens when you turn the res down further. Running at 1920×1200, as this is a 16:10 display, at 50% render scale means we’re rendering at just 1360×848 (see the sketch below if you want to check the maths). Testing with everything we know works – Reflex Boost, FSR 2.0 on Ultra Performance, 50% render scale and the low preset – we get our lowest result yet: 9 milliseconds flat. That’s phenomenal, but there’s one problem: the game looks like crap. It isn’t playable like this, so let’s turn FSR off and try again. Without FSR it looks a lot better – perfectly playable – and remarkably we end up with an ever-so-slightly lower average at 8.9 milliseconds. Fantastic – we’ve found the sweet spot for Siege, at least on this hardware anyway.
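That render-scale maths, for reference – a little Python sketch based on the resolutions above (Siege seems to round the final values slightly differently than plain rounding does):

```python
from math import sqrt


def render_resolution(width: int, height: int, scale: float) -> tuple[int, int]:
    """Siege's render scale applies to the pixel count, so each axis
    is multiplied by the square root of the scale."""
    factor = sqrt(scale)
    return round(width * factor), round(height * factor)


print(render_resolution(2560, 1600, 0.5))  # (1810, 1131) - Siege reports 1812x1132
print(render_resolution(1920, 1200, 0.5))  # (1358, 849) - Siege reports 1360x848
```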

That’s the thing with latency: it’s so dependent on what exactly the bottleneck in the system is. A PC with a faster CPU might not respond anywhere near as well as this laptop did to these changes, and vice versa – a lower power CPU might respond even more dramatically. The general idea is clear though: increase FPS, decrease settings that slow the pipeline, and definitely enable Reflex if you’ve got the option. I haven’t mentioned driver-level features like AMD’s Anti-Lag and Anti-Lag Plus – save for CS2, where you’ll get VAC banned if you use Anti-Lag Plus, you should pretty much always leave those on.

So, that’s a general look at latency in games, and a pretty deep dive into what settings work best for Rainbow Six Siege. Let me know what games you’d like to see me test next in the comments below!