EyeLab: An Automated LED Display Analyzer

Abstract

A system to autonomously monitor test equipment was designed, developed, and tested. First, a digital video camera is used to “watch” the test equipment, typically while an experiment is in progress. Next, the video is processed by one of three algorithms to obtain a numeric output corresponding to the digits that appeared on the screen while the equipment was being “watched.” The first algorithm, dubbed the “smart-strokes” approach, “blindly” segments the screen into seven regions (corresponding to the standard segments of a seven-segment display) and tests which segments are filled; the resulting pattern is fed into a simple nearest-neighbor classifier to determine which digit was shown. The second algorithm cross-correlates the discrete cosine transform (DCT) of a normalized version of the image with the DCT of a normalized training pattern, which is generated in situ from the data. The third algorithm extracts a set of standard image features and feeds them into a neural network, again with training patterns drawn from the data. The three algorithms were tested on computer-simulated digits (rendered at different simulated camera angles) as well as on actual data recorded from test equipment, and different video resolutions were tested to gauge the robustness of each approach. Algorithm performance was quantified by the mean squared error (MSE) of digit identification on the simulated data, supplemented by a qualitative examination of each algorithm’s performance on the real data. The results indicate that the cross-correlation method achieves the lowest MSE, while the “smart-strokes” algorithm attains reasonable accuracy at a much lower compute time.
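As an illustration of the “smart-strokes” idea described above, the following Python sketch tests seven fixed regions of a cropped, binarized digit image for lit pixels and performs a nearest-neighbor lookup against the canonical segment patterns of the digits 0–9. The region boxes, the fill threshold, and the assumption of a pre-cropped binary image are illustrative choices, not details taken from the system itself.

```python
import numpy as np

# Canonical on/off patterns (segments a, b, c, d, e, f, g) for the
# digits 0-9 on a standard seven-segment display.
SEGMENT_PATTERNS = {
    0: (1, 1, 1, 1, 1, 1, 0),
    1: (0, 1, 1, 0, 0, 0, 0),
    2: (1, 1, 0, 1, 1, 0, 1),
    3: (1, 1, 1, 1, 0, 0, 1),
    4: (0, 1, 1, 0, 0, 1, 1),
    5: (1, 0, 1, 1, 0, 1, 1),
    6: (1, 0, 1, 1, 1, 1, 1),
    7: (1, 1, 1, 0, 0, 0, 0),
    8: (1, 1, 1, 1, 1, 1, 1),
    9: (1, 1, 1, 1, 0, 1, 1),
}

# Hypothetical segment regions as fractional (row0, row1, col0, col1)
# boxes within a cropped digit image; the real system would derive
# these from the display geometry.
SEGMENT_BOXES = {
    "a": (0.00, 0.15, 0.15, 0.85),
    "b": (0.10, 0.50, 0.80, 1.00),
    "c": (0.50, 0.90, 0.80, 1.00),
    "d": (0.85, 1.00, 0.15, 0.85),
    "e": (0.50, 0.90, 0.00, 0.20),
    "f": (0.10, 0.50, 0.00, 0.20),
    "g": (0.42, 0.58, 0.15, 0.85),
}

def classify_digit(binary_img: np.ndarray, fill_threshold: float = 0.5) -> int:
    """Test each segment region for lit pixels, then return the digit
    whose canonical pattern is nearest in Hamming distance."""
    h, w = binary_img.shape
    observed = []
    for r0, r1, c0, c1 in SEGMENT_BOXES.values():
        patch = binary_img[int(r0 * h):int(r1 * h), int(c0 * w):int(c1 * w)]
        observed.append(1 if patch.mean() > fill_threshold else 0)
    observed = np.array(observed)
    # Nearest-neighbor lookup over the ten canonical patterns.
    return min(SEGMENT_PATTERNS,
               key=lambda d: np.sum(observed != np.array(SEGMENT_PATTERNS[d])))
```

Since the observed segment states form a 7-bit binary vector, Hamming distance is the natural nearest-neighbor metric here; the classifier degrades gracefully when a single segment is misread.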
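Similarly, the DCT cross-correlation method might be sketched as follows, assuming equal-sized grayscale crops for the test image and the in-situ training templates. The zero-mean, unit-energy normalization and the use of the correlation peak as a matching score are assumptions; the abstract does not specify these details.

```python
import numpy as np
from scipy.fft import dctn
from scipy.signal import correlate2d

def normalize(img: np.ndarray) -> np.ndarray:
    """Zero-mean, unit-energy normalization (one plausible reading of
    'normalized' in the abstract)."""
    img = img.astype(float)
    img -= img.mean()
    norm = np.linalg.norm(img)
    return img / norm if norm > 0 else img

def dct_correlation_score(test_img: np.ndarray, template: np.ndarray) -> float:
    """Peak of the 2-D cross-correlation between the DCTs of the
    normalized test image and a normalized training template."""
    a = dctn(normalize(test_img), norm="ortho")
    b = dctn(normalize(template), norm="ortho")
    return correlate2d(a, b, mode="full").max()

def classify(test_img: np.ndarray, templates: dict) -> int:
    """Pick the digit whose in-situ training template correlates best.
    `templates` maps digit -> template image of the same shape."""
    return max(templates,
               key=lambda d: dct_correlation_score(test_img, templates[d]))
```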