EE 362 Final Project
Color Balancing: The Battle of the Algorithms
Laura Diaz, Jessica Heyman, Gustav Rydbeck
Introduction
Comparison Methods

Subjective Comparison

In order to see how well the algorithms performed, we devised a couple of performance measures. Though not very scientific, given that only the three of us took part in the comparison test, the resulting color-balanced images were compared subjectively on a calibrated monitor. All of the resulting images were lined up side by side; this was done for each scene under each illuminant, for a total of 27 panels of images. An example is shown in Figure 3. Each image panel was shown one at a time, and each of us independently ranked our top three choices. The rankings were recorded and then weighted by placement: 3 points for 1st, 2 points for 2nd, and 1 point for 3rd, giving a final score for that image panel. The final scores were then summed over all the illuminants for a particular scene, so that we got one winning algorithm for each sample scene. This winning algorithm was designated the optimal image and was used as the reference against which the other algorithms were compared in the mathematical measures. This may seem like an unconventional method; however, when taking a picture you have some idea of what the result should look like, so it makes sense to compare all other algorithms against the image that was perceived as the best.
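The ranking-and-weighting scheme above can be sketched as follows. This is an illustrative reimplementation, not the project's actual code; the algorithm abbreviations and the `tally` helper are ours.

```python
from collections import Counter

# 1st place earns 3 points, 2nd earns 2, 3rd earns 1
POINTS = {1: 3, 2: 2, 3: 1}

def tally(rankings):
    """Sum weighted scores for one image panel.

    rankings: one top-3 list of algorithm names per judge,
    ordered best-first.
    """
    scores = Counter()
    for top3 in rankings:
        for place, algo in enumerate(top3, start=1):
            scores[algo] += POINTS[place]
    return scores
```

Summing the per-panel scores of `tally` across all illuminants of a scene then yields the scene's winning algorithm.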
Top: Input Image, Gray World (GW), White World (WW), Scale By Max (SBM). Middle: Mean/Std (MS), GW/Std, Standard Deviation (Std), SBM/MS. Bottom: MS/SBM, WW/MS, MS/WW, WW/Std.
Figure 3. Sample of images resulting from processing
Mathematical Comparison

Mean Square Error in CIELAB Color Space

Our subjectively optimal image was compared with each of the others for a given illuminant and scene using a mean square error in CIELAB color space: both images were converted to CIELAB, the per-pixel color difference deltaE against the optimal image was computed, and the squared deltaE values were averaged over the image.
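A rough numpy sketch of this comparison is given below. It is not the project's code: it assumes sRGB inputs scaled to [0, 1] and a D65 white point, and the function names `srgb_to_lab` and `mse_lab` are ours.

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert an (H, W, 3) sRGB image in [0, 1] to CIELAB (D65)."""
    # undo the sRGB gamma to get linear RGB
    lin = np.where(rgb <= 0.04045, rgb / 12.92,
                   ((rgb + 0.055) / 1.055) ** 2.4)
    # linear RGB -> XYZ (sRGB primaries, D65)
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = lin @ M.T
    # normalize by the D65 reference white
    xyz = xyz / np.array([0.95047, 1.0, 1.08883])
    # CIELAB nonlinearity
    f = np.where(xyz > 0.008856, np.cbrt(xyz), 7.787 * xyz + 16.0 / 116.0)
    L = 116.0 * f[..., 1] - 16.0
    a = 500.0 * (f[..., 0] - f[..., 1])
    b = 200.0 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def mse_lab(img, ref):
    """Mean of squared per-pixel deltaE between two sRGB images."""
    diff = srgb_to_lab(img) - srgb_to_lab(ref)
    delta_e_sq = (diff ** 2).sum(axis=-1)  # squared CIE76 deltaE
    return delta_e_sq.mean()
```

Squaring the CIE76 deltaE and averaging over pixels makes this directly comparable to an ordinary MSE, just measured in a perceptually more uniform space.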
Weighted Mean Square Error in CIELAB Color Space

Taking into account the fact that our eyes detect differences at high spatial frequencies less readily, we also implemented a weighted mean square error. This was done in the following way:
* If the image width and/or height was not divisible by 8, the image was truncated, since a few pixels at the edges were not going to make a big difference in the MSE comparison.
* A weighted MSE was then calculated in CIELAB color space, in the same way as specified above, but giving different weights to different 8x8 blocks of the deltaE matrix.
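The steps above can be sketched as follows. The report does not specify the block weights, so the inverse-variance weighting here, which down-weights high-variance (high-frequency) blocks, is our assumption purely for illustration, as are the names `weighted_mse` and `block`.

```python
import numpy as np

def weighted_mse(delta_e, block=8):
    """Blockwise-weighted mean of squared deltaE values.

    delta_e: 2-D per-pixel deltaE matrix.
    """
    h, w = delta_e.shape
    # truncate so both dimensions are multiples of the block size
    h, w = h - h % block, w - w % block
    de = delta_e[:h, :w]
    # view as a grid of (block x block) tiles
    tiles = de.reshape(h // block, block, w // block, block).swapaxes(1, 2)
    # ASSUMED weighting: high-variance blocks (fine detail, where the
    # eye is less sensitive to color error) count for less
    var = tiles.var(axis=(2, 3))
    weights = 1.0 / (1.0 + var)
    block_mse = (tiles ** 2).mean(axis=(2, 3))
    return (weights * block_mse).sum() / weights.sum()
```

With uniform weights this reduces exactly to the unweighted MSE of the deltaE matrix.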
Mean Square Error in RGB Color Space

Similar to the MSE in CIELAB color space, we implemented the same steps in RGB color space, comparing the optimal image with each of the others for a given scene. Though in practice it is not preferable to compare two colors in RGB space, since a constant shift in chromaticity does not correspond to a constant shift in perceived color, we were interested in how much these numbers would differ from the other measurements. Hence, the MSE in RGB color space was implemented only to see whether the conversion to CIELAB made a difference in the result.
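For completeness, the RGB baseline is just the plain per-channel MSE; a minimal sketch (the name `mse_rgb` is ours, and images are assumed to be equally sized float arrays):

```python
import numpy as np

def mse_rgb(img, ref):
    """Mean squared difference over all pixels and RGB channels."""
    diff = img.astype(float) - ref.astype(float)
    return (diff ** 2).mean()
```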