Open Source Response Time Tool – Monitor Response Times are Complicated…


Monitor response times are a complete mess right now. Manufacturers claim every single monitor is a "1ms monitor", reviewers all use entirely different methodologies so you can't directly compare results between them, and most reviewers aren't testing response times at all. That means when you are searching for your next multi-hundred pound/dollar monitor, you are less equipped to make the best decision possible, and can end up spending a significant chunk of change on a product you aren't too happy with. I'm hoping this can help change that.

This is my Open Source Response Time Tool, or OSRTT for short. As I mentioned in the last video on this – I'll link that in the cards above if you haven't seen it – I've spent the last 6 months (going on 7 now…) building this thing so that other reviewers and enthusiasts can use it to actually test monitors with some level of consistency, accuracy and detail. It's open source – the hardware, firmware and software – so if you want to build one yourself, modify the test or processing, or just see exactly what it's doing, you can. I am also going to be making pre-built kits available – I've already had a number of pre-orders, but if you want one, use the link below to send me an email, or hit me up on Twitter, and I'll add you to the list.

This video is all about explaining the challenges I faced in building this and explaining what I did to solve those problems – both in hardware and software. To understand the solutions, you first need to understand the problem – what is a monitor response time and how do you measure it? In short, the response time is how long it takes a panel to change colour. This is often measured as “grey to grey”, because the test basically involves swapping different shades of grey – not 50 though, sadly – on screen, and using a photo sensor to effectively “watch” and record the light level changing as the colours get brighter or darker. An example graph looks like this – it starts off low because the screen is on black (or as black as an LCD can get anyway), then steeply climbs until it reaches the end colour, in this case full white.

So, what this little box is, is basically that photo sensor I mentioned – sort of. At its core, it's the same basic design A5hun from Aperture Grille detailed in his excellent video, which I highly recommend you go and watch. The actual sensor itself is a Melexis MLX75305, a "light-to-voltage sensor" which takes power in, then varies its output voltage based on how much light is hitting the sensor. It's a great fit as it's decently fast at responding to changes in light level – between 10us and 22us. It's also impressively linear, meaning the more light you shine on it, the higher the output voltage, and that graph is a nice straight line. And finally, it's also pretty sensitive – no, not about its size, it knows it's how you use it that counts – I mean it can detect incredibly small changes in light level. Now it's no photodetector – those are incredibly expensive, but also incredibly fast, often with nanosecond rise times – but it's good enough for this type of measurement.

Turning that analogue voltage into a digital signal is the job of the microcontroller, a SAMD51 32-bit ARM Cortex-M4 chip with two 1 MSPS 12-bit ADCs. That's onboard the Adafruit ItsyBitsy M4 Express, and it's an incredible bit of kit. It's also basically the same as A5hun's, just a smaller version. I am using both ADCs here: one for the sensor signal, and one measuring the 5V USB power rail for both noise and level. That's done through a simple potential divider to drop the 5V from the USB port down to 2.5V, which is under the 3.3V maximum this chip can handle. That lets me account for the USB voltage and flag if it's too low, as well as check how noisy the power going to the sensor is going to be.
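To put numbers on that, here's a rough C# sketch of how a raw 12-bit ADC reading maps back to the actual rail voltage – the constants match the values described above, but the code itself is purely illustrative, not the actual OSRTT source:

```csharp
using System;

// Illustrative only: converting a raw 12-bit ADC count of the divided
// USB rail back into the real rail voltage.
static class UsbRail
{
    const double AdcReference = 3.3;  // ADC full-scale voltage
    const int AdcMax = 4095;          // 12-bit ADC full-scale count
    const double DividerRatio = 0.5;  // two equal resistors halve 5V to 2.5V

    public static double Voltage(int adcCount)
    {
        double atPin = adcCount * AdcReference / AdcMax; // voltage at the ADC pin
        return atPin / DividerRatio;                     // undo the divider
    }
}

// e.g. 2978 counts is about 2.40V at the pin, so about 4.80V on the rail -
// low enough that the program could flag a weak USB port.
```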

Now, I said this chip's maximum input voltage is 3.3V, but this is a 5V sensor… how does that work? Basically, it's the same trick as the USB measurement: the signal goes through a potential divider, although in this case it's a potentiometer – a digital potentiometer, no less. This lets me account for a lot of stuff, including USB voltage differences, and the monitor's brightness level not being absolutely spot on.

Just to make sure I don't damage the microcontroller, I also added a voltage limiter. I tried a whole load of solutions, but none worked how I wanted, or they needed tens of components and I wanted to keep it simple. So, here's my solution, and I'm incredibly pleased with it. It's basically an "active clamp", but with a potentiometer feeding the transistor's base 2.89V. Why 2.89V? Well, I found that this PNP transistor, when fed the "standard" 3.3V, didn't start clamping until something like 3.6V and still ended up peaking at around 4.5V. At 2.89V it starts clamping at exactly 3.3V and only peaks at 3.6V when the "signal" line hits 5.3V. I'd call that a win!

Noise is also a problem, both from the USB 5V rail and from the sensor itself. Both can be helped with capacitors! Specifically, the sensor's datasheet suggests 1uF and 100nF capacitors be connected across its power and ground pins, and recommends a smoothing capacitor on the output, for which I've gone with absolutely tiny 10pF and 4.7pF ceramic capacitors. They smooth the output without affecting the timing – anything larger starts to slow the capture of the transition. I've also got a 10mH inductor filtering high frequency noise from the 5V rail, and that is fed from the "USB" pin on the ItsyBitsy M4, not the VHi pin, as I think that one has a switching regulator which introduces noise of its own.

Finally on the hardware side, there is the case. The button is on its own PCB that is well reinforced so it won't fatigue and wear over time. The case is something you can, in theory, dismantle after assembly. There is no glue – I designed it to just slide together – but I should make it clear that if you do try to dismantle it and break it (the plastic is pretty thin and can be brittle), I'm not going to be liable for fixing it. I can print replacement parts though, which will be available for purchase should you need them. I've also designed a pointer into the lid so you know where the sensor is on the bottom. This isn't so important right now, but when I get round to doing input lag testing with this too, it will be helpful.

Ok so that’s the hardware, but what about the software? Honestly, that’s been the most complicated bit. I’ve tried to account for as many edge cases, irregularities and even potential user errors as possible, but it’s important I make it clear: this isn’t perfect. I will have missed some issues. I’m doing an entire company’s worth of design, development, testing and manufacturing in my spare time while producing 3 tech videos a week, a car video a week, and managing Locally, while being pretty severely mentally and physically disabled. If you find a bug, please report it, ideally on GitHub, but please understand I can’t guarantee I can get it fixed instantly. It might take days, weeks or months depending on how much of a challenge it is to get sorted, and, you know, if I can walk, eat or function that week.

With that said, the main aim for me on this project has been transparency. I want you to not just trust me and my work, but to actively trust the results you are getting. That's why all of the processing is done in the desktop program, and the raw data from the board is saved to file after every run. That means you can import the raw data again at any point to re-process the results – with different settings, to just regenerate the final files, or after an update, for example if I were to include a new metric like the cumulative deviation stat Tim from Hardware Unboxed uses. It also means you can stick that raw data in the graph view template and manually check over what the monitor is doing – which should help you catch things like the recent Samsung Odyssey G9 variable refresh rate issues.

I know many who will use this haven't tested response times before, so I've set it up to run with a default set of settings that I think are the right mix of easy to understand and technically accurate, and that will help push the industry forward – and make it easier for the average buyer to actually know what they are buying. I'm planning a full video explaining that side of the topic, but in short, I've currently settled on what I'm calling "Perceived Response Time": the response time, factoring in any overshoot, but with a tolerance removed – specifically RGB 5 – so as not to capture the completely imperceptible trails towards the end of the transition. Then there's the overshoot amount, specifically how many RGB values higher the panel goes than the target end level. And finally, what I'm calling the "Visual Response Rating", which describes the relationship between how quickly the panel can get away from the previous colour and how long it takes to actually settle at the correct colour. This separates, say, a slow VA panel that doesn't overshoot but just takes its sweet time, from an IPS panel with strong overshoot that is much faster to get away from the previous colour, albeit missing the target and taking time to come back down.
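To make that a bit more concrete, here's a hypothetical C# sketch of a perceived response time measurement over a trace that's already been converted to RGB-equivalent values – the names and exact bounds logic are my own guesses at the approach, not the shipped code:

```csharp
using System;

// Hypothetical sketch, not the shipped code: measure a transition's
// "perceived" duration, ignoring the final imperceptible RGB 5 of the tail.
static class Metrics
{
    public static double PerceivedResponseTimeMs(double[] rgbTrace, double sampleRateHz,
                                                 double startLevel, double endLevel,
                                                 double tolerance = 5.0)
    {
        bool rising = endLevel > startLevel;
        int first = -1, last = -1;

        for (int i = 0; i < rgbTrace.Length; i++)
        {
            // Start: first sample that leaves the start level by more than the tolerance.
            bool started = rising ? rgbTrace[i] > startLevel + tolerance
                                  : rgbTrace[i] < startLevel - tolerance;
            if (first < 0 && started) first = i;

            // End: the last sample still outside the tolerance band around the
            // end level - this naturally includes overshoot settling back down.
            if (Math.Abs(rgbTrace[i] - endLevel) > tolerance) last = i;
        }

        if (first < 0 || last < first) return 0; // no measurable transition
        return (last - first) * 1000.0 / sampleRateHz;
    }
}
```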

Of course, if you would rather report different metrics – like the more traditional "initial response time" – or use a different tolerance – like the traditional 10-90% based on light level, or Tim's 3-97% setup based on RGB values – or even have the overshoot reported as a percentage, or based on light level instead of RGB value, all of that is there under the "advanced settings" option.

Either way, that data gets spit out into an Excel file with the heatmaps pre-generated for you – it even captures what refresh rate the monitor was set to and pre-sets that in the file. But this process isn't perfect: the monitor can have the odd off result, or the processing might mess up one transition. That's why I have multiple-run averaging set up. By default the test runs 5 times, but you can currently have it run up to 10 times if you want the added accuracy. The program will even do outlier rejection before averaging to discard any erroneous results – on a result-by-result basis, meaning it won't throw out an entire run, it just won't include that one value.
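As a rough illustration of that idea (the median-based threshold and names are mine, not necessarily what the tool uses), pooling one transition's results across runs might look like this:

```csharp
using System;
using System.Linq;

// Hypothetical per-transition outlier rejection: the same transition's
// result from each run is pooled, values far from the median are dropped,
// and the remainder is averaged - so one bad sample never discards a run.
static class RunAveraging
{
    public static double AverageWithoutOutliers(double[] resultsAcrossRuns,
                                                double maxDeviationFraction = 0.3)
    {
        double[] sorted = resultsAcrossRuns.OrderBy(v => v).ToArray();
        double median = sorted[sorted.Length / 2];

        // Keep values within 30% (by default) of this transition's median;
        // reject the rest as erroneous.
        double[] kept = resultsAcrossRuns
            .Where(v => Math.Abs(v - median) <= median * maxDeviationFraction)
            .ToArray();

        return kept.Length > 0 ? kept.Average() : median;
    }
}
```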

If you do choose to use the gamma correction, or if you pick the "save gamma table" option, you'll be greeted with the dedicated gamma test, which I'm still tweaking. Basically, it captures 50ms of each measured RGB value, then uses cubic interpolation to generate every in-between RGB value from 0 to 255, following a nice curve. It's a natural spline, using the excellent ScottPlot library – well, a couple of files from it, although I think I'll be making use of it in full for the live view I want to add in.
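If you're curious what a natural spline actually does, here's a self-contained C# sketch of the classic algorithm – a stand-in for the ScottPlot code, assuming the measured points are sorted and span the full 0-255 range:

```csharp
using System;

// Natural cubic spline (the classic tridiagonal algorithm): expand a
// handful of measured RGB -> light-level points to all 256 values.
static class GammaSpline
{
    // Second derivatives for a natural spline (zero curvature at the ends).
    static double[] SecondDerivatives(double[] x, double[] y)
    {
        int n = x.Length;
        double[] y2 = new double[n], u = new double[n];
        for (int i = 1; i < n - 1; i++)
        {
            double sig = (x[i] - x[i - 1]) / (x[i + 1] - x[i - 1]);
            double p = sig * y2[i - 1] + 2.0;
            y2[i] = (sig - 1.0) / p;
            double d = (y[i + 1] - y[i]) / (x[i + 1] - x[i])
                     - (y[i] - y[i - 1]) / (x[i] - x[i - 1]);
            u[i] = (6.0 * d / (x[i + 1] - x[i - 1]) - sig * u[i - 1]) / p;
        }
        for (int k = n - 2; k >= 0; k--)
            y2[k] = y2[k] * y2[k + 1] + u[k]; // back-substitution
        return y2;
    }

    public static double[] InterpolateAll256(double[] rgbKnots, double[] lightKnots)
    {
        double[] y2 = SecondDerivatives(rgbKnots, lightKnots);
        double[] table = new double[256];
        int j = 0;
        for (int rgb = 0; rgb < 256; rgb++)
        {
            // Advance to the knot interval containing this RGB value.
            while (j < rgbKnots.Length - 2 && rgb > rgbKnots[j + 1]) j++;
            double h = rgbKnots[j + 1] - rgbKnots[j];
            double a = (rgbKnots[j + 1] - rgb) / h, b = (rgb - rgbKnots[j]) / h;
            table[rgb] = a * lightKnots[j] + b * lightKnots[j + 1]
                       + ((a * a * a - a) * y2[j] + (b * b * b - b) * y2[j + 1]) * h * h / 6.0;
        }
        return table;
    }
}
```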

One thing I've added that isn't quite what I wanted, but will do for now, is the system uptime checker. Why, you ask? Well, a liquid crystal, much like your car's engine, takes some time to heat up and doesn't perform all that well when it's cold. Everything is stiffer, and in the LCD's case, slower. You really need to let the monitor warm up for 20-30 minutes before doing any testing. The uptime checker looks at whether your system has been on for more than 30 minutes, and if not, gives you a little warning. Ideally this would look at when the monitor you've selected to test was initialised, but I couldn't figure out how to find that from C#, so that's a battle for another day.
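For reference, reading system uptime from C# is a one-liner via Environment.TickCount64 (the milliseconds since boot) – something like this:

```csharp
using System;

// If the machine has been up for less than 30 minutes, the panel
// probably hasn't warmed up yet, so the program can show a warning.
static class WarmupCheck
{
    public static bool LikelyWarmedUp()
    {
        TimeSpan uptime = TimeSpan.FromMilliseconds(Environment.TickCount64);
        return uptime >= TimeSpan.FromMinutes(30);
    }
}
```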

When it comes to actually processing the captured data, the first thing you need to work out is when the transition starts and ends. It's pretty easy for a human to look at this and go, "uh, there and there. Done.", but for a computer… it's less obvious. I ended up taking a couple of hundred samples at the start and end and building a min/max noise band, then looping through the values looking for where the data spikes outside of those bounds. I've got some additional checks to make sure it won't return a false positive and pick a point too early, and if a candidate fails those checks, the min/max gets updated to include it. I do that from the start of the list and from the end, which gives me a very accurate "complete response time" figure.
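A simplified version of that detection logic might look like the following – the sample counts and the "sustained escape" check are my own approximation of what's described, not the exact OSRTT code:

```csharp
using System;
using System.Linq;

// Hypothetical bounds-based start detection: build a noise band from the
// first few hundred samples, then scan for the first sample that escapes
// it and stays out. Run the same thing from the end of the trace to find
// where the transition finishes.
static class TransitionDetection
{
    public static int FindTransitionStart(double[] trace,
                                          int baselineSamples = 200,
                                          int confirmSamples = 5)
    {
        double min = trace.Take(baselineSamples).Min();
        double max = trace.Take(baselineSamples).Max();

        for (int i = baselineSamples; i < trace.Length - confirmSamples; i++)
        {
            if (trace[i] > max || trace[i] < min)
            {
                // Require the next few samples to also sit outside the band,
                // otherwise treat the excursion as noise and widen the band.
                bool sustained = true;
                for (int k = 1; k <= confirmSamples; k++)
                {
                    if (trace[i + k] <= max && trace[i + k] >= min)
                    {
                        sustained = false;
                        break;
                    }
                }

                if (sustained) return i;
                min = Math.Min(min, trace[i]);
                max = Math.Max(max, trace[i]);
            }
        }
        return -1; // no transition found
    }
}
```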

To get the trimmed measurements, I basically do the same thing, but look for where the trace crosses the threshold rather than the min/max bounds. Overshoot is just a case of taking the min or max within the complete response time window and comparing that to the settled end average; if the panel over- or undershoots, the value can be calculated either by RGB value or by the light level itself.
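In code, the overshoot part could look something like this sketch (again, illustrative rather than the shipped implementation):

```csharp
using System;
using System.Linq;

// Sketch of the overshoot measurement: within the complete-transition
// window, take the extreme value and compare it to the settled end level.
// This version reports in RGB values; a light-level version would simply
// skip the RGB conversion step.
static class Overshoot
{
    public static double OvershootRgb(double[] rgbTrace, int start, int end,
                                      double endLevel, bool rising)
    {
        var window = rgbTrace.Skip(start).Take(end - start + 1);
        double extreme = rising ? window.Max() : window.Min();
        // Positive means the panel went past the target before settling.
        return rising ? extreme - endLevel : endLevel - extreme;
    }
}
```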

Some monitors have backlight strobing or dithering permanently on – not quite the same as black frame insertion (ULMB), but a massive pain when trying to get a usable measurement out of the madness – which is why I'm using a filter to smooth out the noise without affecting the actual response time curve. At the time of writing and filming this, I'm using a standard moving average function, with a variable window size ranging from 10 samples by default up to 50 for really bad noise. I say "at the time of writing" because I'm planning on testing out a Savitzky–Golay filter very soon, which may be even more accurate, so I may end up using that – but since I haven't implemented or tested it yet, I can't say.
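For completeness, a centred moving average of the kind described is only a few lines:

```csharp
using System;

// Minimal centred moving average: 10-sample window by default, widened
// up to around 50 for heavy strobing or dithering noise.
static class Smoothing
{
    public static double[] MovingAverage(double[] trace, int window = 10)
    {
        double[] smoothed = new double[trace.Length];
        int half = window / 2;
        for (int i = 0; i < trace.Length; i++)
        {
            // Clamp the window at the edges so every sample gets a value.
            int from = Math.Max(0, i - half);
            int to = Math.Min(trace.Length - 1, i + half);
            double sum = 0;
            for (int k = from; k <= to; k++) sum += trace[k];
            smoothed[i] = sum / (to - from + 1);
        }
        return smoothed;
    }
}
```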

Finally, we have the problem of keeping this thing up to date. I'm using AutoUpdater.NET to push automatic updates, so once you have it installed, it will prompt you when a new version is available and automatically download, unpack and install that version with a single click. I've also bundled the firmware updates in with that, and the program itself, using the arduino-cli I've bundled in the installer, can update the board on its own – again with a single click.
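For anyone wiring up something similar, the basic AutoUpdater.NET call is tiny – you point it at an appcast XML describing the latest version (the URL below is a placeholder, not my real update feed):

```csharp
using AutoUpdaterDotNET;

// Check the appcast on launch; AutoUpdater.NET handles prompting the
// user and downloading/installing the new version from there.
AutoUpdater.Start("https://example.com/osrtt/update.xml");
```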

Ok, that's what I've got done – so what's left to do? Why haven't I shipped these to all the people who have pre-ordered? There are a couple of reasons. I've only just got the adaptive smoothing working – it was a big challenge to test and calibrate the settings and confirm it wasn't affecting the results – and I'm still tweaking things like the detection point code to fix outlier results. I'm also fixing bugs so that it's as easy and smooth to use as possible, and as accurate as possible too.

I’ve also been working with Simon from TFT Central who has been testing this against his equipment and has helped get the results accurate and find a bunch of bugs – some of which I’m still working on.

And, if I’m being honest, I’ve also been having a pretty rough time over the last month or two. Mental and physical health… challenges… have made things pretty difficult, and this has had to take a back seat while I struggle through my “main” work making these videos and honestly just surviving at some points.

With that said, I'm feeling pretty confident this is actually close to being ready. I think I'll be starting to build some units late next week, and in theory, all things being well, I'll have units in people's hands before the end of January – ideally much sooner than that. For those already on my list, I'll get in touch when I have a unit ready to ship to you; for those who want one and haven't reached out, please do and I'll add you in.