OSRTT User Guide & Explainer – Open Source Response Time Tool
Intro & Disclaimer
This video is intended to be an explanation of how to use the open source response time tool and a brief look at results options – but first I need to make a couple of things clear. This is still very much a tool for professionals. While in most cases I expect it to be a pretty simple “one click” type device, the results it provides should not be taken as gospel. I’ve done my best to account for edge cases and the wacky and wonderful things monitors do, but there is no way I have been able to fix everything, and it’s still your responsibility to verify everything is accurate before you publish any data. To give the legalese version, I am providing this kit and software “AS IS” and offer no warranties or guarantees of their accuracy or reliability.
Hardware
I’m providing two things in the box with the prebuilt kits – the unit itself and a 5m USB cable so you can easily have it connected directly to your system and attached to the monitor. The unit has elastic to secure it to the display, along with a tensioner for use with smaller displays. Generally speaking, I attach it horizontally with the USB cable coming out of the bottom and the button facing upwards, although for larger displays you may find that attaching the elastic vertically is required.
When connecting the USB cable, of course connect it to the device itself, but on the host side I would recommend connecting it directly to your system. That gives the most accurate input lag measurements, and generally avoids the out-of-spec USB voltage issues you can find on some hubs, especially non-powered ones.
Finally, the button on top is what you’ll use to run any tests, both input lag and response time measurements. It won’t activate until the test program is up and selected, and currently is only used to trigger the start of the test.
If you run into any issues, unplugging and reconnecting the USB cable at either end will reset the device and should allow it to work normally again. If you do need to reset it, please leave a bug report explaining what steps triggered the problem.
Software
The software interface itself is fairly basic. At the top you have a number of menu options I’ll explain in a second. Below that, in the main window, you have a board connection status indicator, a drop-down loaded with any monitors you have connected (with display 1 pre-selected), the response time testing section on the left and the input lag section on the right, plus a button to open the results folder.
Hitting the “Analyse Results” item in the menu up top reveals a new section which lets you either import a single “RAW” file, or import an entire folder – the latter re-processes every “RAW” file, then averages them to produce an updated heatmap file.
The “Results Settings” options are the main bulk of your settings. The “Output Settings” options include: saving the final results file as pre-generated heatmaps in an Excel file; saving the raw data to an Excel file with premade graphs and a manual results calculator (though this slows the test down considerably and may make the program unstable at the moment); a ‘Verbose output’ option to include a load more data in the individual “FULL” files; saving the full calculated gamma table; saving the smoothed raw data to make it easier to manually verify noisy data; and saving raw input lag data for manual verification.
By default, the “Recommended Settings” option is set for the response time measurements. I outlined what those settings are and why in the last video on the tool, and I’m still planning a full video explaining all the different options soon, but if you would rather use your own collection of settings you can check the “Advanced Settings” option, which will enable the menus for both response time and overshoot settings.
Under the response time settings you can pick from, effectively, six different options: either an RGB 5 or RGB 10 fixed offset, or a 3% or 10% tolerance instead, and for the tolerances you have the option to base that percentage on the light level or, what is arguably more accurate, the RGB value. 10% non-gamma corrected is the very traditional method, Simon from TFT Central uses an RGB 10 offset, and Tim from Hardware Unboxed uses a gamma corrected 3% tolerance.
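To make those methods concrete, here’s a rough Python sketch of how the measurement thresholds could be derived for a single transition. The simple gamma 2.2 model and all the names here are my own illustration, not the tool’s actual code:

    # Illustrative only: deriving measurement thresholds for one transition.
    def rgb_to_light(rgb, gamma=2.2):
        """Approximate relative light output for an RGB value (0-255)."""
        return (rgb / 255) ** gamma

    start_rgb, end_rgb = 0, 51

    # Fixed RGB offset (e.g. RGB 10): measure between start+10 and end-10.
    fixed_lo = rgb_to_light(start_rgb + 10)
    fixed_hi = rgb_to_light(end_rgb - 10)

    # Gamma corrected tolerance (RGB based): e.g. 3% of the RGB range.
    margin = 0.03 * (end_rgb - start_rgb)
    rgb_lo = rgb_to_light(start_rgb + margin)
    rgb_hi = rgb_to_light(end_rgb - margin)

    # Non gamma corrected tolerance (light based): the very traditional method,
    # measuring between 10% and 90% of the light level change.
    l0, l1 = rgb_to_light(start_rgb), rgb_to_light(end_rgb)
    light_lo = l0 + 0.10 * (l1 - l0)
    light_hi = l1 - 0.10 * (l1 - l0)

    print(fixed_lo, fixed_hi, rgb_lo, rgb_hi, light_lo, light_hi)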
As for overshoot, this too can be gamma corrected, measuring the RGB value difference rather than the raw light level. You can also report it as a percentage, either based on the final light level or RGB value alone, or based on the transition range – i.e. RGB 102 to RGB 153 is a range of 51 RGB values, so if a panel overshoots to RGB 170, that would be 17 RGB values too high and would be reported as 17 / 51 * 100, rather than being based on the end level alone, which would be 17 / 153 * 100.
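That worked example translates to something like this – again just a sketch, with made-up names:

    # Sketch of the two overshoot percentage bases using the example above.
    start_rgb, end_rgb, peak_rgb = 102, 153, 170

    over = peak_rgb - end_rgb                          # 17 RGB values too high
    range_based = over / (end_rgb - start_rgb) * 100   # 17 / 51 * 100 ≈ 33.3%
    end_based = over / end_rgb * 100                   # 17 / 153 * 100 ≈ 11.1%

    print(f"range based: {range_based:.1f}%, end based: {end_based:.1f}%")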
Moving over a tab to “Program Settings”, you’ll find the About Program option, which displays the software version and (if the board is connected) the firmware version the board is running. You can also report a bug, set the program to minimise to the system tray instead of staying on the taskbar, suppress any error boxes that may pop up, and ignore any mid-run errors that would normally cancel the test. The latter is enabled by default as I personally value getting a copy of the data to understand what went wrong.
You do also have some debugging options, like the debug mode and the option to save some data on the USB power stability, which is something I’d likely ask for when troubleshooting.
Response Time Testing
When it comes to the response time test, you have three settings. The framerate limit is what FPS cap you want the test program (a UE4 project) to run at. I’ve set up as many options as I can for now, although I’m sure this could be improved or altered at a later date if needed. The default is 360 FPS, the maximum is 1000 FPS (although not many systems I’ve tried can hit that very often), and it goes down to 60 FPS. You might want to set this lower than the monitor’s refresh rate to test adaptive sync, as just changing the refresh rate isn’t quite the same.
The number of cycles is how many times the test will run before averaging everything together. The default is 5 and the current maximum is 10, although I’ve made changes that mean, in theory, this can be as high as you like, so that may have changed by the time you use this. The minimum is of course 1, although personally I can’t recommend anything less than 3 for even remotely accurate data. The test only takes around a minute or so, so you aren’t exactly adding hours to your testing by running it a couple more times.
Finally you have the capture time setting. This is how long the sensor captures data for – by default it’s set to 50 ms, but if you have a slower display, such as a TV or something with a slow panel or long input lag, you have the option to increase that up to 250 ms in 50 ms increments. The longer you set it, the more data it saves and the longer that data can take to process, so it’s up to you what you need for a given display.
All of those settings – in fact all the ones I’ve described – are saved, so when you close the program and relaunch it they’ll still be there for you. They should persist through an update too.
Running the test
To actually test response times, pick your settings, select which display you are testing on, then attach the device to the monitor so its base is flush with the screen. That may mean tilting it backwards. Make sure any backlight strobing modes are turned off – these are often called something like “motion blur reduction”, “ULMB”, “ELMB”, “MBR” or Gigabyte’s “Aim Stabilizer”. Make sure the display is warmed up too: it should have been on for at least 30 minutes before testing, and ideally been somewhat active during that time. Once that’s all sorted, hit “Start Response Time Testing”. That will launch the brightness calibration. Position the window under the sensor, then adjust the display’s brightness until it reads “perfect”. The “Raw Result” it’s looking for is between 61500 and 64000 – once it’s in that range the continue button will be enabled and you can press it.
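For reference, the check the calibration is waiting for boils down to something like this sketch – the function name is mine, but the target window is the one above:

    def calibration_ok(raw_result):
        # "Perfect" means the sensor's Raw Result sits in the 61500-64000 window.
        return 61500 <= raw_result <= 64000

    print(calibration_ok(62750))  # True once the brightness is adjusted into range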
That will launch the test program, the UE4 project, which should open on your selected display. You can then press the button on the top of the device: it should flash the screen to white to do its fine-tuning calibration, then cycle through the 6 test RGB values to build the gamma table, then cycle through the full test pattern as many times as you’ve set in the cycles counter.
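To give a rough idea of what that gamma table step produces, here’s a sketch that interpolates a full 0-255 curve from a handful of measured points – the specific RGB values and sensor readings are made-up examples, not necessarily what the tool uses:

    # Illustration only: building a full gamma lookup table by interpolating
    # between a few measured test values. All numbers below are made up.
    import numpy as np

    test_rgb    = [0, 51, 102, 153, 204, 255]             # assumed test values
    light_level = [120, 1800, 8500, 21000, 40000, 63500]  # example sensor readings

    gamma_table = np.interp(np.arange(256), test_rgb, light_level)
    print(gamma_table[128])  # interpolated light level for RGB 128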
When it’s done it’ll close the UE4 project and launch an explorer window to that run’s results folder. The final processed file is the Excel file that includes the monitor’s name, refresh rate and connection type. You’ll also find the “RAW” csv files, which include all the raw data captured for each run (including the gamma test at the top of each file), and the “FULL” csv files, which are the processed output for each run.
Opening the final Excel file should bring you to the “recommended” presentation page, which includes three heatmaps, stats tables for each, and a couple of extra stats. On the left is the “Perceived Response Time”, as in the complete start-to-finish time with the tolerance subtracted; in the middle is the overshoot; and on the right is what I’m calling the “Visual Response Rating”, which scores how quickly the panel can get to the desired colour and how long it then takes to come to rest there. You also get an averaged input lag figure, and an “OSRTT Score”, which is, if I’m being honest, a blatant copy of RTINGS’ system, using the same weights as well. If you want the more ‘traditional’ initial response time you can see that on the “View Heatmaps Here” sheet, or see all the processed data on the “Paste Data Here” sheet.
Finally, I just want to mention that you can customise these templates – make your changes to the “Results Template.xlsx” file in the results folder, just make sure the first sheet remains the data sheet and the refresh rate figure on the second sheet remains in cell L4, as both of those are hard-coded into the program for now. You can also make use of the “Graph View Template”: open a raw file, press CTRL + A then CTRL + C, open the graph view template, enable the macros, delete any data already there and paste your new data in, then look at the second sheet for the graphs. Use the dropdown to select the graph you want, and if you want it to calculate the response time manually, enter the start and end positions as you see them and it’ll tell you that along with the over/undershoot too.
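If you’d rather do that last calculation entirely by hand, it’s roughly this – note the sample rate here is an assumed example value, not necessarily what the sensor actually runs at:

    # Sketch of the manual calculation: read the start and end sample positions
    # off the graph, then convert samples to milliseconds. Numbers are examples.
    SAMPLE_RATE_HZ = 20000            # assumption: sensor samples per second
    start_pos, end_pos = 412, 938     # positions read off the graph
    peak, end_level = 43000, 39000    # example raw light readings

    response_time_ms = (end_pos - start_pos) / SAMPLE_RATE_HZ * 1000
    overshoot_pct = (peak - end_level) / end_level * 100
    print(f"{response_time_ms:.2f} ms, {overshoot_pct:.1f}% overshoot")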
Input lag testing
When it comes to testing input lag, you currently have two settings: how many clicks, and how long between them. The default is 20 clicks with 0.3 seconds between. Hit “Start Input Lag Testing” to launch the test program – currently I’d recommend pressing “Q” a couple of times to get the game to register key presses properly (that’s something I’m working on) – then hit the button on the device and let it run. That will close the game, process the data and again open an explorer window to the results, where you should see a RAW and a processed file. The processed file contains results for each shot: the time taken for the click to be registered by the USB controller, the time between then and the display reacting at all, and those two figures combined. It also has an average, minimum and maximum at the bottom for each of those values. In future I’d like to include things like processing time too, but I’m not there yet.
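As a sketch of how those per-shot figures fit together – the numbers and names below are illustrative only:

    # Illustration of the per-click breakdown described above, in milliseconds.
    clicks = [
        # (click-to-USB-controller time, then-to-display-reaction time)
        (0.8, 14.2),
        (0.7, 15.1),
        (0.9, 13.8),
    ]

    totals = [usb + display for usb, display in clicks]
    print(f"average {sum(totals) / len(totals):.1f} ms, "
          f"min {min(totals):.1f} ms, max {max(totals):.1f} ms")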
Outro
So that’s it, for now. I want to add a live view and a load more stuff, so stay tuned. Any questions, please do leave them below and I’ll do my best to get back to you – and if you find a bug please do submit a bug report on GitHub with plenty of data, pictures and descriptions.