EE 362 Final Project

Color Balancing:

The Battle of the Algorithms


Laura Diaz

Jessica Heyman

Gustav Rydbeck

Introduction

Background

Algorithms

Test Images

Testing Interface

Image Comparison Methods

Results

Conclusion

Possible Extensions

References

Appendix I

Appendix II

Background


Light
The light that comes into our eyes is a combination of the reflectance properties of the objects we are looking at and the spectral power distribution (SPD) of the light that illuminates the object of interest. This light has been scattered by an object in one of two ways: either it has been reflected at the surface, in what is called interface reflection, or it has entered the object and bounced back and forth between its particles until it eventually exited at some angle, which is called body reflection. Taking all this into account, the SPD c(λ) coming from an object into our eyes can be described as:

c(λ) = [b(λ) + i(λ)] · e(λ)

where b(λ) is the body reflectance, i(λ) is the interface reflectance and e(λ) is the SPD of the illuminant. The interface reflection is almost uniform across incident wavelengths, meaning that the light reflected at an object's surface has the same wavelength distribution as the illuminant. Hence, i(λ) is approximately constant, call it i, and the formula becomes:

c(λ) = [b(λ) + i] · e(λ)
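As a concrete numerical illustration, the short Python sketch below evaluates this last formula; the spectra here are made-up placeholders standing in for measured data, not values used elsewhere in this report.

import numpy as np

# Wavelength samples across the visible spectrum, in nm.
wavelengths = np.arange(400, 701, 10)

# Placeholder spectra: a body reflectance b(lambda) that rises toward
# long wavelengths (an orange-ish surface), a small constant interface
# term i, and an idealized equal-energy illuminant e(lambda).
b = np.clip((wavelengths - 450) / 250.0, 0.05, 0.9)
i = 0.05
e = np.ones_like(wavelengths, dtype=float)

# The formula above: c(lambda) = [b(lambda) + i] * e(lambda)
c = (b + i) * e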
Human color vision
The SPD of the light from an object, as shown above, is clearly a function of the illuminant: the light coming from an object under one illuminant will not have the same wavelength distribution as the light coming from the same object under a different illuminant. Still, we say that an orange is orange, whether we are looking at it indoors under incandescent light or outside in bright daylight.
The explanation for why we always perceive orange as orange is that our visual pathways have a remarkable ability to correct for the illuminant. Two different SPDs cause different amounts of isomerizations in our three types of cones and thereby give rise to different L, M and S color signals. These signals can nevertheless be interpreted as the same thanks to a process called approximate color constancy: our brain estimates the illuminant from all the light that comes into our eyes and uses this estimate as a decoding scheme for the color signals, a scheme that compensates for the illuminant.
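To make the L, M and S signals concrete, the sketch below computes them from a sampled SPD: to a first approximation, each signal is the integral of the incoming SPD weighted by that cone class's spectral sensitivity. The sensitivity curves are assumed inputs here; in practice they would come from tabulated cone fundamentals.

import numpy as np

def cone_signals(c, cone_sensitivities, d_lambda=10.0):
    # c:                  sampled SPD, shape (n_wavelengths,)
    # cone_sensitivities: L, M and S sensitivity curves as rows,
    #                     shape (3, n_wavelengths)
    # Riemann-sum approximation of the integral of c(lambda)
    # weighted by each cone sensitivity curve.
    return cone_sensitivities @ c * d_lambda

Passing the SPD of the same object under two different illuminants through this function shows directly how the raw L, M and S triplet shifts with the illuminant; this shift is what approximate color constancy compensates for.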



Digital images
The sensor of a camera, or of any other digital image acquisition device, is reached by essentially the same light as an eye in the same position. The photons reaching the sensor induce a response that carries information about their energy (or wavelength) and their number. Hence, the sensor's response to light is similar to that of the human cones. However, as stated above, this color information is a combination of the illuminant and the color of the objects the light is coming from, and the sensor itself, like our eyes, has no inherent way of correcting for the illuminant. Therefore, the raw data from camera sensors often gives images a color cast or includes colors that do not look natural.

To correct for this misrepresentation, we apply post-processing algorithms that use the information available in the image to remove the color cast and make the colors appear natural to a human eye. Some of these algorithms estimate the illuminant and then apply a color transformation that approximately divides it out (e.g., white balancing); others work more indirectly by making assumptions about the color distribution the image should have (e.g., gray world, scale by max). This process of correcting for different illuminants and various camera color distortions is called color balancing.
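To make these two approaches concrete, below are minimal Python sketches of gray world and scale by max for an 8-bit RGB image. These are simplified illustrations, not the exact implementations evaluated later in this report, which may differ in details such as clipping behavior and working color space.

import numpy as np

def gray_world(img):
    # Gray world: assume the scene averages out to gray, so scale
    # each channel until all three channel means are equal.
    img = img.astype(float)
    means = img.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / means
    return np.clip(img * gains, 0, 255).astype(np.uint8)

def scale_by_max(img):
    # Scale by max: assume the brightest value in each channel comes
    # from an illuminant-colored (white) region, and stretch it to 255.
    img = img.astype(float)
    maxima = img.reshape(-1, 3).max(axis=0)
    return np.clip(img * (255.0 / maxima), 0, 255).astype(np.uint8)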