This can RUIN your gaming experience – Input Lag Explained
If you play any kind of fast-paced game where reaction times affect your performance, one of the biggest factors you likely haven’t considered – but which could be making being a 1337 Pr0 more difficult than necessary – is input lag. It’s a term you might have heard in monitor reviews in particular, but it’s not exclusive to monitors, and there are actually two different measurements you should know about – so let me explain.
First, I want to make an important distinction. Input lag, input latency, whatever you want to call it, is not the same as response time. A monitor’s response time is how long the actual pixels take to change colour once the newest frame is already in the monitor’s controller and ready to be drawn. Input lag is how long it takes for that frame to start being displayed in the first place. Where you measure the start point from, though, is the key difference between what most reviewers call “input lag” and what they’d call “total system” or “click to photon” latency. Let’s start with the former.
When a reviewer is talking about a display and they say “input lag”, they generally mean the time from the new frame arriving at the HDMI or DisplayPort input to the time the change starts being displayed on screen. The reason that isn’t instant is because the digital image your GPU transmits to your monitor is just a whole bunch of 1s and 0s, but the panel needs analogue voltages set for each pixel and subpixel, so the monitor’s scaler has to do that conversion.
Additionally, features like adaptive sync, overdrive and even the input resolution can alter how long the scaler takes to do that conversion. Overdrive in particular can add processing time, as the controller needs to look up what overdriven state each pixel should be set to, and for how long, then modify the frame before finally starting to draw it all out. Providing a non-native input resolution can also cause delays, especially with the wrong aspect ratio – i.e. a 16:9 frame on an ultrawide panel. This isn’t the same as a game’s cutscenes only drawing a 16:9 image though, as your GPU is still sending the monitor a full 21:9 frame, just with black pixels on the sides. Instead, if you actively set your system resolution to, say, 1920×1080 on a 3440×1440 panel, the monitor has to take the frame, stretch it, and work out which colours land on which physical pixels, all before it can even think about drawing the new image.
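To picture the kind of extra work that scaling involves, here’s a toy nearest-neighbour mapping in Python – purely illustrative, since a real scaler does this in dedicated hardware with far better filtering, and the resolutions are just the ones from my example above:

```python
# Toy nearest-neighbour mapping from a 1920x1080 frame onto a 3440x1440 panel,
# just to show the per-pixel bookkeeping a scaler has to do for a non-native input.
# Real monitor scalers do this in hardware, with much better filtering and
# aspect-ratio handling than this simple stretch.

SRC_W, SRC_H = 1920, 1080
DST_W, DST_H = 3440, 1440

def source_pixel_for(dst_x, dst_y):
    """Which input pixel feeds this physical panel pixel (ignoring aspect correction)."""
    src_x = dst_x * SRC_W // DST_W
    src_y = dst_y * SRC_H // DST_H
    return src_x, src_y

# Every one of the roughly 5 million panel pixels needs an answer before drawing starts.
print(source_pixel_for(0, 0))        # (0, 0)
print(source_pixel_for(3439, 1439))  # (1919, 1079)
print(f"{DST_W * DST_H:,} lookups per frame")
```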
The way you measure that type of input lag is fairly simple: either with a device like this Time Sleuth, or with a CRT. The CRT method is one I know from Simon at TFT Central – because CRT displays draw the new frame almost instantly, you can put a fast clock on both displays, take a picture, and the difference between the CRT and test panel’s clocks is the input lag. This method isn’t perfect, and keeping a CRT around just for this one test can be a little cumbersome.
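The maths behind the CRT method really is just one subtraction: photograph both screens showing the same millisecond clock, then take the difference. The readings below are made up, purely to show the idea:

```python
# CRT comparison method: the CRT is treated as a near-zero-lag reference, so the
# monitor under test shows an older timestamp. The readings here are invented.

crt_clock_ms = 41_237          # time shown on the CRT in the photo
test_panel_clock_ms = 41_228   # time shown on the monitor under test in the same photo

input_lag_ms = crt_clock_ms - test_panel_clock_ms
print(f"on-monitor input lag ≈ {input_lag_ms} ms")
```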
The Time Sleuth on the other hand is pretty easy – it plugs in via HDMI, it can output up to 1080p60, and it has a small light sensor on the bottom which captures the flashing bar on screen and lets it report the input lag to two decimal places. Pretty sweet! Sadly, the FPGA that powers it is a little underpowered for anything more than 1080p60 – it just can’t output much more than that – meaning results for higher resolutions, and especially different aspect ratios, can be a little off.
The trouble with this measurement is that while it’s a true test of the monitor – and therefore a great metric for comparing displays, especially in reviews – it’s not a real-world figure. Your monitor’s native input lag isn’t all that significant to your gaming experience – it’s only a fraction of the total input lag you will have while gaming.
That’s where total system latency comes in – that is, as its other name “click to photon latency” suggests, the time between you providing an input, like a mouse click, and the photons depicting that action hitting your eyes. That measures not only how long the monitor takes to process the frame, but also how long it takes your mouse to transmit the click (normally over USB), how long your CPU takes to update the game and tell the GPU to render it, and how long your GPU takes to render the new frame.
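If it helps to see the chain written out, here’s a rough sketch of how those stages stack up – the numbers are invented ballpark figures for illustration, not measurements:

```python
# Illustrative breakdown of click-to-photon latency into its main stages.
# These values are made-up ballpark figures, not real measurements.

latency_stages_ms = {
    "mouse click over USB":           2.0,  # debounce plus waiting for the next poll
    "CPU game update + draw calls":   4.0,  # simulation and submitting work to the GPU
    "GPU render + present":           8.0,  # rendering the frame and any queued frames
    "monitor processing (input lag)": 3.0,  # scaler work before the frame starts drawing
}

total = sum(latency_stages_ms.values())
for stage, ms in latency_stages_ms.items():
    print(f"{stage:32s} {ms:5.1f} ms")
print(f"{'total click-to-photon':32s} {total:5.1f} ms")
```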
Since that is what you actually experience, total system latency is a much more useful metric for you, the end user, to know – except it kind of isn’t. See, unlike the ‘on-monitor’ input lag, total system latency has so many contributing factors that you really can’t compare the results you see on your system with the results someone else sees – even on a similar system with an identical monitor.
Because USB polling delay and mouse debounce delay are involved, you have (on a standard mouse) anywhere between 1 and 8 ms of variance straight away, just waiting for the click signal to leave the mouse. Then you have the processing time, which depends on everything from the CPU and GPU you are using, to the in-game settings, where you are in the game, what you are doing and who is around you – all of that factors in for substantially different results.
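That 1 to 8 ms figure falls straight out of the USB polling rate – here’s a quick sketch, assuming the only wait is for the next poll:

```python
# Worst-case wait for the next USB poll at common mouse polling rates.
# This is just the polling interval; switch debounce and firmware delays add on top.

for polling_rate_hz in (125, 500, 1000):
    interval_ms = 1000 / polling_rate_hz
    print(f"{polling_rate_hz:>4} Hz polling -> up to {interval_ms:.1f} ms before the click even leaves the mouse")
```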
Take this ultrawide – playing at the full 3440×1440 in CSGO at low settings and uncapped FPS, and using NVIDIA’s LDAT tool on the centre of the display with no bots on Dust II, you get a rough average of around 22 ms. Now add in bots and run at high settings, and you get anywhere from a 28 ms average to more like 40 or 50 ms in heavy action. That’s a considerable difference, despite being the exact same monitor, PC and even the same game. That’s why this isn’t a great comparative metric.
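For what it’s worth, boiling a run of captures down to an average and a spread is about this simple – the sample values here are invented for illustration, not real LDAT data:

```python
# Sketch: summarising a batch of click-to-photon samples the way a reviewer might.
# The sample values are hypothetical, not captures from the run described above.
from statistics import mean, stdev

samples_ms = [21.4, 22.9, 20.8, 23.5, 22.1, 25.0, 21.7, 22.6]  # one hypothetical run

print(f"average: {mean(samples_ms):.1f} ms")
print(f"spread (stdev): {stdev(samples_ms):.1f} ms")
print(f"min/max: {min(samples_ms):.1f} / {max(samples_ms):.1f} ms")
```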
That’s not to say total system latency shouldn’t be measured or can’t be compared – when you use a consistent platform and test, as we reviewers do, it’s perfectly fine to use and compare results. The key is repeatability and consistency. I test with the same system, in the same position in game, with the same settings, at the native resolution and refresh rate – basically keeping all variables as close to identical as possible between tests, so I can take the numbers I get from this panel and compare them to its competitors.
How you test this is much the same as the ‘on-monitor’ input lag: you still need a device to measure the light output of the panel, as that is still the end point, but now you also need it to output a mouse click so it can time between sending that click and seeing the image change. A device like NVIDIA’s LDAT – Latency Display Analysis Tool – does just that. It can be configured to output automatic mouse clicks and capture the light level changes, so it can measure the time between those events. My own open source response time tool (OSRTT) does the same thing, and in general the two provide the same sort of figures.
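Conceptually, the loop these tools run looks something like the sketch below. To be clear, send_click() and read_light_level() are hypothetical stand-ins for whatever the actual device firmware exposes – this is the idea, not LDAT’s or OSRTT’s real API:

```python
# Minimal sketch of the click-to-photon measurement loop these tools perform.
# send_click() and read_light_level() are hypothetical callables standing in for
# the real device interface; LIGHT_THRESHOLD is an assumed normalised trigger level.
import time

LIGHT_THRESHOLD = 0.5   # sensor level treated as "the frame has changed"
NUM_SAMPLES = 20

def measure_once(send_click, read_light_level, timeout_s=0.5):
    """Fire one click and time how long until the sensor sees the screen change."""
    start = time.perf_counter()
    send_click()                                   # device injects a mouse click
    while time.perf_counter() - start < timeout_s:
        if read_light_level() > LIGHT_THRESHOLD:   # the panel's light output changed
            return (time.perf_counter() - start) * 1000.0  # click-to-photon, in ms
    return None                                    # no change detected; discard sample

def measure_many(send_click, read_light_level, n=NUM_SAMPLES):
    """Collect n samples and drop any that timed out."""
    results = (measure_once(send_click, read_light_level) for _ in range(n))
    return [r for r in results if r is not None]
```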
You can go more DIY though, and just solder an LED to a mouse button, then use a high-speed camera – 1000 FPS is great – to record yourself pressing the mouse button, the LED turning on, and the screen starting to change. That gives you much less granular results, with a much wider margin for error, but it’s easy and cheap, so I wouldn’t admonish anyone for doing it that way.
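Turning those camera frames into a number is then just counting frames between the LED lighting up and the screen changing – the frame numbers below are hypothetical:

```python
# Converting high-speed camera frame numbers into a latency figure.
# Frame numbers are hypothetical; at 1000 FPS each frame is 1 ms of uncertainty.

camera_fps = 1000
led_on_frame = 152         # frame where the LED wired to the mouse button lights up
screen_change_frame = 187  # first frame where the monitor visibly starts changing

latency_ms = (screen_change_frame - led_on_frame) * (1000 / camera_fps)
error_ms = 1000 / camera_fps
print(f"click-to-photon ≈ {latency_ms:.0f} ms (±{error_ms:.0f} ms per endpoint)")
```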
So, that’s input lag. It’s fairly self-explanatory, and just to reiterate, it isn’t the same as response time. Lower input lag – both kinds – is always better, although for most people the minor differences between their peripherals, systems and monitors likely aren’t that big a deal. If you are trying to go pro though, well, maybe start tweaking and testing to reduce that latency.