Response Time vs Input Lag – What’s the difference?

In short, the difference between response times and input lag is that response times measure how quickly the liquid crystals in your display’s panel can change colour, and input lag is how long it takes for the monitor to process and start displaying a new frame. That’s it, if that’s all you wanted to know, thanks for watching my YouTube short. If you’d like to know a bit more though, stick around because this is really interesting stuff.

I’ll start with response times, which is a field I’ve come to know fairly well thanks to developing my open source response time tool – which you can pick up at osrtt.com now. The basic idea here is that your display, if it’s an LCD or liquid crystal display, is made up of absolutely tiny light gates formed from – you guessed it – liquid crystals. Each pixel generally has three of these – a red, a green and a blue sub-pixel – and those are basically variable blinds that block some or all of the light coming from the backlight behind the panel. Taking a closer look at one of those sub-pixels, it looks something like this. Actually it looks nothing like this but this is as good as my art skills can provide so I’m sticking with it. The backlight can be shining as much as it likes, but you’ll still see black because the liquid crystals are aligned to block the light coming through. Once the crystals start opening though, light starts coming through. A response time measurement is measuring how quickly the crystals can go from one state to another, such as fully closed to fully open – that’d be RGB 0 to RGB 255.
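To make that concrete, here’s a minimal sketch – not the actual OSRTT code – of how a rise time could be pulled out of a sampled light-level trace. The trace values, sample rate and 5% thresholds here are all illustrative assumptions.

```python
def rise_time_ms(samples, sample_rate_khz, start_level, end_level, tolerance=0.05):
    """Time for the light level to go from near start_level to within
    `tolerance` (as a fraction of the span) of end_level, in milliseconds."""
    span = end_level - start_level
    begin = None
    for i, level in enumerate(samples):
        if begin is None and level > start_level + abs(span) * tolerance:
            begin = i  # the crystals have started moving
        elif begin is not None and abs(level - end_level) <= abs(span) * tolerance:
            # reached (close enough to) the target light level
            return (i - begin) / sample_rate_khz
    return None  # transition never completed in this trace

# Toy black-to-white transition (RGB 0 -> 255) sampled at 10 kHz (0.1 ms/sample)
trace = [0, 0, 5, 40, 120, 200, 240, 252, 255, 255]
print(rise_time_ms(trace, 10, 0, 255))  # -> 0.4
```

Real capture hardware samples a photodiode far faster than this, but the principle – find where the light starts moving, find where it settles, subtract – is the same.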

One of the interesting things that can happen is something called overshoot. Let’s say you were asking the display to go from slightly open, say RGB 51, or 20% open, to RGB 153, or 60% open. If you use a feature your monitor offers called overdrive, what that does is set the target higher than what you actually asked for – in this case it might set it to RGB 204, or 80% open – which makes the crystals react more quickly. Then when the next frame comes around it sets the target back to what you actually requested, and if it timed it right, the panel won’t have actually gone past that target. Most of the time though, it does, and any light that makes it through past the target is what we call overshoot.
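Here’s a quick illustrative sketch of how that overshoot could be quantified: the peak light level past the requested target, expressed in RGB terms. The trace values are made up for the example.

```python
def overshoot_rgb(samples, start_level, target_level):
    """How far past the target the panel went, in the direction of travel."""
    if target_level >= start_level:   # rising transition: look for a peak above target
        return max(0, max(samples) - target_level)
    else:                             # falling transition: look for a trough below target
        return max(0, target_level - min(samples))

# Asked for RGB 51 -> RGB 153, but overdrive pushed the panel past the target
trace = [51, 70, 120, 180, 196, 170, 155, 153, 153]
print(overshoot_rgb(trace, 51, 153))  # -> 43 (peaked at 196, 43 past the target)
```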

These figures are often represented as heatmaps like this – this is from my last video on the AOC AG275QX and QXN, which you can check out after this video! You can see the QX, the IPS model, is relatively fast. The majority of its results are within the refresh rate window, meaning they took less time than the time between new frames, which is great. Looking at the maximum overdrive mode though reveals a bit of a problem. Almost every result went well past the target value, sometimes ludicrously so.

For some context, here is a graph of the light level output during one of those transitions. See the flat line at the end? That’s the target light level. See how high the spike goes above that line? That’s the overshoot. While we have this graph up, now is a good time to mention that different people measure response times in different ways. Tim from Hardware Unboxed trims 3% of the RGB values off the top and bottom, then measures only the rising time – as soon as the light level crosses the target, that’s it. Personally I prefer using a fixed RGB 5 tolerance and measuring what I call the “perceived” response time, which includes the overshoot time. That means I keep counting until the change has fully completed (minus the RGB 5 tolerance). The original VESA standard trims a full 10% of the light level off each end and counts the initial rise time only. That would be here. Yeah, basically none of the transition. Now you can understand why manufacturers can claim every monitor is a “1ms” monitor.
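To show how much the methodology matters, here’s a hedged sketch of those three measurement styles applied to the same made-up transition trace, sampled every 0.1 ms. The thresholds follow the text – a 3% trim with rising time only, a fixed RGB 5 tolerance that includes overshoot, and a 10% trim – but the exact implementations each reviewer uses will differ.

```python
def initial_time_ms(samples, start, target, trim, dt_ms=0.1):
    """Rising time only: from leaving the trimmed start level to first
    crossing the trimmed target level."""
    span = target - start
    s = next(i for i, v in enumerate(samples) if v > start + span * trim)
    e = next(i for i, v in enumerate(samples) if v >= target - span * trim)
    return (e - s) * dt_ms

def perceived_time_ms(samples, start, target, tol=5, dt_ms=0.1):
    """Keep counting until the level settles within `tol` RGB of the target,
    so any overshoot time is included."""
    s = next(i for i, v in enumerate(samples) if abs(v - start) > tol)
    e = max(i for i, v in enumerate(samples) if abs(v - target) > tol)
    return (e + 1 - s) * dt_ms

# RGB 0 -> 255 transition that overshoots, one sample per 0.1 ms
trace = [0, 0, 20, 90, 180, 250, 270, 268, 262, 257, 254, 255, 255]
print(initial_time_ms(trace, 0, 255, 0.03))   # 3% trim, rising time only
print(initial_time_ms(trace, 0, 255, 0.10))   # 10% trim, rising time only
print(perceived_time_ms(trace, 0, 255))       # includes the overshoot settling time
```

Same trace, three different headline numbers – the looser the trim and the earlier you stop counting, the smaller (and more marketable) the result.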

Moving on to the input lag figures, this is generally pretty simple too. There are two different kinds of input latency – on-display and total system. Total system is the one you might be more familiar with: how long it takes from you doing an action, like left-clicking your mouse, to that action starting to show up on screen. This is fine for an end user to test with, or for direct comparative testing, but isn’t great for just quoting figures in a monitor review. On-display latency, however, is much better, as that measures how long it takes for the monitor to take a new frame in from the graphics card, process it, then start drawing it on screen. The biggest task there is converting the digital stream of RGB values into a matrix of voltages to set each pixel to – which includes any overdrive settings, any colour and brightness settings, and a load of other stuff too. Most monitors only take 1 or 2 ms to do this per frame though, which is pretty remarkable.
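As a toy illustration of the on-display definition – not real measurement code, and with invented timestamps – the idea boils down to: note when the frame was sent, then find the first detectable change in light output.

```python
def on_display_latency_ms(frame_sent_ms, light_samples, dt_ms, threshold=5):
    """Latency from the frame being sent to the display to the first
    light-level change greater than `threshold` above the baseline."""
    baseline = light_samples[0]
    for i, level in enumerate(light_samples):
        if abs(level - baseline) > threshold:
            return i * dt_ms - frame_sent_ms  # first visible reaction
    return None  # screen never changed in this capture window

# Frame leaves the GPU at t = 0 ms; photodiode sampled every 0.5 ms
samples = [10, 10, 10, 11, 30, 80, 150, 220, 250]
print(on_display_latency_ms(0.0, samples, 0.5))  # -> 2.0 (first change at sample 4)
```

In practice the hard part is timestamping when the frame actually left the graphics card, which is exactly what dedicated hardware like a latency tester handles for you.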

I think for this video I’m going to leave it there, but I’ve already done much deeper dives into response times and input lag in videos already on the channel, so if you want to know more definitely go check those out. If you happen to be a reviewer and want to be able to test both of these metrics, head over to OSRTT.com to pick up an OSRTT Pro unit. They are hand built, validated and shipped by me, and I’ve spent over a year and a half working on making this the best tool for reviewers like me. I use it in every monitor or laptop review, and I’m constantly adding new features, so definitely go check it out.