EE 362 Final Project
Color Balancing: The Battle of the Algorithms
Laura Diaz, Jessica Heyman, Gustav Rydbeck
Results

Subjective Comparison

Surprisingly, we agreed to a high extent on the same images during the subjective ranking of the algorithms' performance. For the colorful, dark, and specular images, the algorithms that adjusted both the mean and the standard deviation tended to do best, whereas algorithms that did not adjust the standard deviation did better for bright images and images with large contrasts. All of the results are shown in Appendix I.
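The mean/standard-deviation adjustment described above can be sketched as follows. This is a minimal NumPy illustration, not our exact implementation; the target mean and standard deviation values are placeholder assumptions, not the values used in our experiments.

```python
import numpy as np

def mean_std_adjust(img, target_mean=0.5, target_std=0.25):
    """Shift and scale each RGB channel so that its mean and standard
    deviation match the targets, then clip back into [0, 1].
    (target_mean and target_std are illustrative defaults.)"""
    out = np.empty_like(img, dtype=float)
    for c in range(3):
        ch = img[..., c].astype(float)
        std = ch.std()
        if std > 0:
            out[..., c] = (ch - ch.mean()) * (target_std / std) + target_mean
        else:
            # a perfectly flat channel just moves to the target mean
            out[..., c] = target_mean
    return np.clip(out, 0.0, 1.0)
```

Note that the clipping step can itself perturb the final mean and standard deviation when the scaled values spill outside [0, 1].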
Mathematical Comparison

Mean Square Error in CIELAB Color Space

The CIELAB comparison was very consistent with the results of our subjective comparison. One interesting result was that of the Macbeth color checker (results in Figure 4), for which the optimal image was produced by the Gray World/Standard Deviation algorithm. It turned out that all of the algorithms that modified both the mean and the standard deviation matched the optimal image very closely. The algorithm that adjusted only the standard deviation, however, performed much worse under all lightings except the flat PSD. This was due to the fact that the image acquired a yellowish cast under the fluorescent and tungsten lighting: the means of the red and green channels were too high, and simply spreading the values over a wider range of pixels did not improve the image quality. Consistent with the idea that adjusting the mean was key, Gray World and the other basic algorithms that shift the mean in some way also performed fairly well.
Figure 4. Macbeth MSE comparison
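The mean-shifting idea behind Gray World can be sketched in a few lines. This is a minimal NumPy version of the standard algorithm, not our exact implementation: each channel is scaled so that all three channel means coincide, under the assumption that the scene averages to gray.

```python
import numpy as np

def gray_world(img):
    """Gray World color balance: scale each RGB channel so its mean
    equals the overall mean of the three channel means."""
    img = img.astype(float)
    means = img.reshape(-1, 3).mean(axis=0)
    gray = means.mean()
    # guard against an all-zero channel to avoid division by zero
    gains = gray / np.where(means > 0, means, 1.0)
    return np.clip(img * gains, 0.0, 1.0)
```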
The images on which the Gray World and Scale By Max algorithms seemed to work best were those with a wide dynamic range or with objects of nearly the same color. This can be seen, for example, in the color-balanced images of the white flowers shown in Figure 5. The results in Figure 6 indicate that the basic algorithms, such as Gray World and Scale By Max, perform the best here. This is because most of the image is of a single color: the algorithms that adjust the standard deviation spread the intensities over a wider range of values, introducing dark colors and producing an unnatural-looking image.
Top: Input Image, Gray World (GW), White World (WW), Scale By Max (SBM). Middle: Mean/Std (MS), GW/Std, Standard Deviation (Std), SBM/MS. Bottom: MS/SBM, WW/MS, MS/WW, WW/Std.
Figure 5. White flowers with fluorescent illumination
Figure 6. White Flowers MSE comparison
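Scale By Max follows the same pattern with a different normalization target. A minimal sketch, again in NumPy and not our exact implementation: each channel is stretched so that its brightest value maps to 1.0, under the assumption that the channel maxima come from a white (or specular) surface.

```python
import numpy as np

def scale_by_max(img):
    """Scale By Max color balance: scale each RGB channel so that its
    maximum value becomes 1.0."""
    img = img.astype(float)
    maxes = img.reshape(-1, 3).max(axis=0)
    # guard against an all-zero channel to avoid division by zero
    gains = 1.0 / np.where(maxes > 0, maxes, 1.0)
    return np.clip(img * gains, 0.0, 1.0)
```

Because only the per-channel maximum is used, a single clipped or specular pixel can dominate the gains, which is one reason this algorithm behaves differently on images with large highlights.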
Frequency Weighted CIELAB

The resulting MSEs from the frequency-weighted CIELAB algorithm closely followed the un-weighted CIELAB results. This was a surprising result, since we had anticipated that taking the spatial frequency content into account would yield noticeably different rankings. A complete listing of all the MSE graphs and associated images can be found in Appendix I.
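The structure of a frequency-weighted comparison can be illustrated as follows. This sketch stands in for the real metric: it uses a simple box blur as the low-pass weighting rather than the proper contrast-sensitivity filters of a spatial CIELAB metric, so the filter choice here is purely an assumption for illustration.

```python
import numpy as np

def box_blur(ch, k=3):
    """Separable k-by-k box blur with edge padding: a crude low-pass
    stand-in for contrast-sensitivity filtering."""
    pad = k // 2
    p = np.pad(ch, pad, mode="edge")
    out = np.zeros_like(ch, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + ch.shape[0], dx:dx + ch.shape[1]]
    return out / (k * k)

def weighted_mse(lab_a, lab_b, k=3):
    """Blur each CIELAB channel before the squared-error comparison,
    de-emphasizing high-spatial-frequency differences."""
    err = 0.0
    for c in range(3):
        diff = box_blur(lab_a[..., c], k) - box_blur(lab_b[..., c], k)
        err += np.mean(diff ** 2)
    return err / 3.0
```

Because blurring mostly suppresses fine-grained differences, two images that differ by a smooth color cast score almost identically under the weighted and un-weighted metrics, which is consistent with the similarity we observed.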
Mean Square Error in RGB Color Space

Although the RGB MSE comparison was expected to yield the least useful information about performance, it was surprisingly consistent with many of the results from the more reliable CIELAB MSE. One example of this was the image of the stuffed animals shown in Figure 7.
Top: Input Image, Gray World (GW), White World (WW), Scale By Max (SBM). Middle: Mean/Std (MS), GW/Std, Standard Deviation (Std), SBM/MS. Bottom: MS/SBM, WW/MS, MS/WW, WW/Std.
Figure 7. Stuffed animals with fluorescent illumination

The resulting CIELAB and RGB space MSE comparisons for this image are shown in Figures 8 and 9. It is apparent that the RGB comparison has the same general shape as the CIELAB one, a surprising result. The same kind of consistency was observed in 5 out of 9 of the comparisons, so the RGB comparison matched the CIELAB one in more than 50% of the cases. In the remaining 4 out of 9 cases, however, the RGB MSE values differed considerably from the CIELAB MSE values.
Figure 8. Stuffed Animals RGB MSE
Figure 9. Stuffed Animals CIELAB MSE
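For reference, the RGB-space metric used in these comparisons is the plain pixel-wise mean squared error; a one-line NumPy sketch:

```python
import numpy as np

def rgb_mse(a, b):
    """Mean squared error over all pixels and channels in RGB space."""
    return np.mean((a.astype(float) - b.astype(float)) ** 2)
```

Unlike the CIELAB version, this treats an error of fixed size as equally visible everywhere in color space, which is why it was expected to track perceived quality less reliably.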