Monitor Response Times ARE BROKEN – Response Times Explained


Monitor response times are pretty broken. Of the few reviewers who actually test and measure them, almost all use different methodologies, meaning you can’t compare results between major outlets. You can’t look at Hardware Unboxed’s review of the Corsair Xeneon, see they say the average response time is 8.6ms, then look at TFT Central’s review of the same monitor, which says it’s 5.9ms, then RTINGS’, which lists the rise/fall time at just 4ms, and expect them to line up. Which one of those is right? Well, that’s the thing… all of them.

Before you can understand why that’s the case, you need to understand what a “response time” even is. In short, a monitor’s “response time” is actually short for its “pixel response time” – how long a pixel takes to change colours. This is typically measured with shades of grey, hence why it is often listed as GtG, or grey-to-grey, response time. What you are measuring is how long it takes for the panel to go from, say, this grey to this one. Those are RGB values – that was RGB 51 to RGB 102 – and because you are generally using 8-bit colour, there are 256 total shades to choose from: 0 to 255.

The way you measure those changes is by monitoring how much light the panel is outputting. The more light, the higher the RGB value, and vice versa. Now, it’s not a straight line – between darker shades the difference in light output is much smaller than between brighter ones. That’s called the gamma curve, and it’s one of the main settings you can tweak on most monitors. The typical curve is called gamma 2.2 and looks something like this – less difference at the bottom, much more up top. This will be important later – but anyway, by measuring the light output over time we get a nice graph like this. This is the same RGB 51 to 102 transition, but viewed as light output over just 50ms.
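If you want to play with that relationship yourself, a simple gamma 2.2 power law is a reasonable sketch – real panels deviate from a pure power law, especially near black, so treat the numbers as illustrative:

```python
# Sketch of the gamma 2.2 relationship between RGB value and relative
# light output. Real panels are only approximately a pure power law.

GAMMA = 2.2

def light_output(rgb: int, max_rgb: int = 255) -> float:
    """Relative light output (0.0-1.0) for an 8-bit RGB value."""
    return (rgb / max_rgb) ** GAMMA

# Darker shades sit close together in light output...
low_step = light_output(102) - light_output(51)    # ~0.10 of full range
# ...while the same 51-step gap up top covers far more of the range.
high_step = light_output(255) - light_output(204)  # ~0.39 of full range

print(f"RGB 51 -> 102 light change:  {low_step:.3f}")
print(f"RGB 204 -> 255 light change: {high_step:.3f}")
```

The same 51-step RGB change produces roughly four times the light change at the top of the curve as it does near the bottom, which is exactly the “less difference at the bottom, much more up top” shape described above.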

Hardware-wise I’m using my open source response time tool, OSRTT, which I’ve got a number of videos on already if you are interested, but any photodetector that is suitably linear, sensitive and fast will do fine – plus a way to convert its output to a digital value a whole load of times per second. At least 20,000 samples per second gets you good single-decimal-place results, although I’m running at more like 55,000 with this.

Once you’ve got your data, you can graph it and measure how long the transition took. Easy, right? It turns out it’s about as easy as finding out how long a piece of string is – it depends where you measure from. You might look at this graph and say, “it’s obvious! It starts there and ends there!”, and you’d be right. If we draw a line out from the average of the start and end, those would be where, somewhat empirically, the transition starts and ends. But what about all that noise? How do you know it ends here and not 0.2ms earlier? What if it’s even noisier, like this result? And is it actually worth capturing this really slow trail-off, when it’s only a couple of RGB steps and virtually imperceptible? That’s why you use a tolerance level.

Instead of trying to find the exact point it first starts changing you can add some bounds and find where it crosses them and mark that as the start and end instead. The VESA standard says 10-90%, which puts those bounds here. That gives us the benefit of both easily ignoring most noise, and snipping off the slow, imperceptible part of the transition that isn’t overly useful to include from a buyer’s perspective. Perfect! Except not quite.

The VESA standard bases its tolerance on the light level: 10% of the range is added to the start and subtracted from the end, which is a pretty crazy amount. That snips off a significant portion of the transition, making it really easy for manufacturers to abuse and get insanely low figures with little if any bearing on reality. On top of that, if the panel overshoots badly – something I’ll cover in detail later in this video – you only measure when the light level first reaches the tolerance level, not when it comes to rest, making it even easier for manufacturers to make pointless claims. If you think about it from an RGB perspective, 10% off either end of 0-255 would be 25.5-229.5 – except that isn’t true either. Remember the gamma curve I mentioned, where at lower RGB values the light level changes a whole lot less than at higher RGB values? Well, that applies here.

Looking at this 0-255 transition, 10% of the change in light level comes to around 6000, meaning our starting tolerance is just shy of 8000 and our end level is around 57000. Comparing to our gamma table, well that’s more like RGB 93 to RGB 243. That looks like this – that’s hardly full black to full white, and not what I’d call a representative test.
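As a rough sanity check of those figures, here’s the same conversion assuming an idealised gamma 2.2 panel with no black-level offset – the real panel’s offset and non-ideal gamma are why it lands at roughly RGB 90 rather than the measured 93:

```python
# Converting VESA's 10-90% LIGHT-level bounds back into RGB values,
# assuming an idealised gamma 2.2 panel with zero black level.
# Real measurements (like the 93/243 quoted above) differ slightly.

GAMMA = 2.2

def rgb_from_light(fraction: float) -> float:
    """RGB value whose relative light output equals `fraction` (0.0-1.0)."""
    return 255 * fraction ** (1 / GAMMA)

start_rgb = rgb_from_light(0.10)  # 10% light level
end_rgb = rgb_from_light(0.90)    # 90% light level

# On this idealised curve: roughly RGB 90 to RGB 243.
print(f"10% light ≈ RGB {start_rgb:.0f}, 90% light ≈ RGB {end_rgb:.0f}")
```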

So, what do we do about that? Well this is where the different reviewers have come up with their own solutions to solve those problems. The first thing that most people have agreed on is to gamma correct the tolerance levels, as in if you were to stick with that 10% tolerance, it would be based on the RGB values and not the light level. So it would be more like RGB 26-230, rather than the 93-243 you get looking at the light level. The main point of difference comes from what tolerance level to pick. Even with 10% of the RGB values, 26 to 230 is hardly full black to full white, although it’s definitely closer.

Tim from Hardware Unboxed uses a 3% tolerance instead, meaning on this 0-255 transition it would be roughly RGB 8-247, which is much, much closer to what we are meant to be measuring. That would look like this on the graph – it’s so close to the start it’s actually clipping into the noise level, although it does a pretty good job of snipping the very slow trail at the end, so you end up with a representative sample of what you’d actually see on screen.

The only catch with this method is that the tolerance changes with the transition size. Sure, at 0-255 the tolerance is 8 RGB, but at 0-51 it’s only 1.53 RGB – call it 2 RGB for ease – so the tolerance levels sit here: basically identical to the start and end, with much less wiggle room than the 8 RGB tolerance the 0-255 transition offered (which, for reference, would sit here if it were a fixed value).

Simon from TFT Central, on the other hand, prefers a fixed RGB 10 tolerance. That means whether it’s 0-255 or 0-51, he’s lopping 10 RGB values off either end. So for 0-255 he is measuring 10-245, which looks like this on the same graph, and for 0-51 that’s 10-41, which looks like this on that graph. This has the benefit of keeping the tolerance consistent across each result, which can especially help with high noise levels and, in theory, keeps the measurements consistent across the range.

The catch is that you are simply trading one kind of consistency for another: Tim’s 3% method takes the same relative amount off every transition, whereas Simon’s fixed RGB 10 takes a much smaller share of small transitions than it does of large ones. That’s not to say either of them is right or wrong – both are perfectly valid options, they just have slightly different priorities.

The other thing to mention with a fixed RGB 10 tolerance is that, at least for me personally, steps of RGB 10 are visible. Shown RGB 0 and RGB 10 side by side, I can just about discern a difference, which I think somewhat defeats the point of these tolerance levels. The whole point is to trim off the imperceptible trail towards the end, but if I can perceive it, it’s no longer imperceptible. Personally, I’m a fan of a fixed RGB 5 tolerance. I think it strikes a balance between generally being clear of noise, still being actually imperceptible to me, and being tough on manufacturers. For context, here are all three of those options on the 0-255 graph – light blue being Tim’s 3%, dark blue being Simon’s fixed RGB 10 and orange being my fixed RGB 5 – and the same again on the 0-51 graph.
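To make the three schemes concrete, here’s a small sketch (with a hypothetical `bounds` helper of my own naming) that works out the measurement bounds each one would use on the 0-255 and 0-51 transitions:

```python
# Hypothetical comparison of the three tolerance schemes discussed above,
# applied to rising transitions. All values are in RGB steps.

def bounds(start: int, end: int, scheme: str) -> tuple[float, float]:
    """Measurement bounds after trimming the tolerance off each end."""
    if scheme == "percent_3":    # Tim's 3% of the transition size
        tol = 0.03 * abs(end - start)
    elif scheme == "fixed_10":   # Simon's fixed RGB 10
        tol = 10
    elif scheme == "fixed_5":    # my fixed RGB 5
        tol = 5
    else:
        raise ValueError(f"unknown scheme: {scheme}")
    return (start + tol, end - tol)

for s, e in [(0, 255), (0, 51)]:
    for scheme in ("percent_3", "fixed_10", "fixed_5"):
        lo, hi = bounds(s, e, scheme)
        print(f"{s}->{e} {scheme}: measure RGB {lo:.2f} to {hi:.2f}")
```

Running it shows the trade-off directly: 3% gives roughly 8-247 on the big transition but only about 1.5 RGB of headroom on 0-51, while the fixed tolerances stay put regardless of transition size.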

There is one problem with this whole methodology I’m yet to mention, and that is what happens when the panel overshoots. But to understand that problem, you need to understand overshoot, so let me explain. Overshooting is when the panel misses its target and ends up going further than it’s meant to. On the graphs that looks like this: on this one in particular the target value is RGB 153, but it peaks at more like RGB 167 – about 18% higher if you base it on the light level alone. The same happens on falling transitions; this 153-102 dips all the way to RGB 83, or 26% lower than the target light level.

The reason this happens is mostly thanks to a feature monitors offer called ‘Overdrive’. In short, the monitor increases or decreases the target to make the liquid crystals in each pixel respond faster. Say it’s aiming for RGB 153, coming from the darker RGB 102: it might start driving the pixel as if it wanted RGB 200 instead, then after a millisecond or two drop the target back down to RGB 153, where it’s meant to be. Because the pixels were rushing to make a bigger change, they respond faster than if they were only driven towards the expected target.

It’s kind of like if you are driving, you can either very gently accelerate up to the speed limit making sure you never go over at the cost of taking longer to get there, or you can mash the throttle and try and lift off at the right point to not go over, but most of the time you’ll mess up the timing and go over. Alternatively you can think of it like some vertical blinds, where you can carefully, gently open them up, or quickly twist them open but have them swing past where you set them and bounce around for a few seconds before coming to rest in place.

Generally, the different overdrive settings are a tradeoff between fast initial response times and overshoot, with the really top-quality options reducing the weight of that tradeoff by getting the timings as close to correct as possible, so even if it does overshoot a little it doesn’t end up being visible. But when it is visible, what you end up with is often called “inverse ghosting” or “coronas” – no, not that kind. Like this. See the copy of the UFO that looks like a literal white ghost? That’s overshoot: the panel tried to get to the green background but missed, and ended up showing the UFO again with much brighter and more obvious colours.

So, going back to the graph: to measure the traditional response time, even with a new tolerance level, you only take the time the light level first crosses the boundary, not when it finally comes to rest. For particularly bad overshoot, that’s a bit of a problem – claiming this transition is ‘complete’ here, at around 2ms, just clearly isn’t the case. If you put the same tolerance level above the end as well as below it, and pick the point where the light level finally settles within the tolerance on either side, you’ll find the transition takes more like 8ms – a considerable difference for sure.

Measuring that way – including the overshoot but still using a tolerance level – is what I’m calling the “Perceived Response Time”, and the traditional first-past-the-post style is the “Initial Response Time”. The way I see it, you have three options: the OG “Initial”, the more encompassing “Perceived” and the statistically accurate “Complete”. Many reviewers already report more than one style in one way or another – RTINGS lists both the “rise/fall time” (initial response time) and the “total response time” (more like complete) in full, and Tim lists initial fully plus an average complete response time stat in the table below the heatmaps. If nothing else, I’m hoping to standardise the naming so prospective buyers know what they are looking at, even if no one changes how they test or what figures they show.
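As a sketch of the difference, here’s how you might pull the initial and perceived times out of a trace of (time, level) samples – the trace itself is made up for illustration, and real tools work on light levels at far higher sample rates:

```python
# Synthetic overshooting transition: (time in ms, level in RGB-equivalent).
# Target is 250; the panel peaks at 300 before settling back down.
trace = [(0, 0), (1, 120), (2, 250), (3, 300), (4, 290), (5, 270),
         (6, 258), (7, 252), (8, 251), (9, 250), (10, 250)]
end_level, tol = 250, 5

# Initial response time: first moment the level crosses within
# tolerance of the target, ignoring any overshoot afterwards.
initial = next(t for t, v in trace if v >= end_level - tol)

# Perceived response time: first moment from which the level STAYS
# within the tolerance band around the target, i.e. it has come to rest.
perceived = next(t for i, (t, _) in enumerate(trace)
                 if all(abs(v - end_level) <= tol for _, v in trace[i:]))

print(f"initial: {initial} ms, perceived: {perceived} ms")
```

On this made-up trace the first-past-the-post measurement says 2ms, but the pixel doesn’t actually settle until 7ms – the same kind of gap as the 2ms-versus-8ms example above.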

One new metric you might have seen in my OSRTT results is the third heatmap, what I’m calling the “Visual Response Rating”. This one is still somewhat in development, so I’ll go into detail about how it’s calculated once that’s sorted, but for now I can explain its aim. In short, it aims to describe how a response actually looks to your eyes. One that’s fast with little overshoot will score highly; responses that are outright slow, or fast but with horrendous overshoot, will score poorly; and responses that are fairly fast with some overshoot will land somewhere in the middle. That’s the idea anyway – it’s also a calculated metric rather than a specific measurement.

One final thing you should know about is the overshoot results – I’ve already covered what overshoot is, but not how it gets reported. It’s a metric that doesn’t normally get included in a manufacturer’s spec sheet, but it’s a really important piece of context for how a monitor is performing. Most reviewers traditionally quote overshoot as a percentage based on the light level. Some quote it as a percentage above the end light level – so for this 102-153 transition that would be 22000 minus 19000, divided by 19000, times 100, or around 16% – and some quote it based on the transition range: 22000 minus 19000, divided by 19000 minus 10000, times 100, or 33%.
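In code, those two calculations on the light levels quoted above look like this:

```python
# The two common light-level overshoot calculations, using the example
# above: a 102->153 transition starting at light level 10000, targeting
# 19000, but peaking at 22000.

start_light, end_light, peak_light = 10_000, 19_000, 22_000

# Relative to the end light level alone: ~16%.
vs_end = (peak_light - end_light) / end_light * 100

# Relative to the size of the whole transition: ~33%.
vs_range = (peak_light - end_light) / (end_light - start_light) * 100

print(f"vs end level: {vs_end:.0f}%, vs transition range: {vs_range:.0f}%")
```

The same 3000 units of excess light reads as either 16% or 33% depending purely on the denominator, which is part of why quoting overshoot percentages without stating the method is so confusing.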

The trouble with those options of course is the same as the response time, the light level change doesn’t correspond to the gamma curve which means it’s not a good representation of what you will see as a viewer. Luckily, most reviewers have switched to using gamma corrected measurements, although Tim still uses a percentage whereas Simon reports how many RGB values above or below the target it went. While a percentage is likely easier for anyone to understand, because of its variability it can potentially distort how noticeable that overshoot is.

Phew, that’s a lot. If you are still watching this far along, thank you. I hope this helps explain a bit about monitor response times and how other reviewers test for them. I still have a whole load of work to do on my response time tool, but I’m slowly shipping out units as I build them between complete mental breakdowns, so if you’ve asked for one already, my apologies for the delays, I’m hoping to get them out soon. Sadly being a very non-functioning human with many responsibilities slows things down a whole lot.

If you’d like to register your interest in getting a kit from me, feel free to ping me an email via the link below. If you’d like to support me and the project, you can check out the links in the description below, become a YouTube member or just hit the subscribe button.