Reading the Data

The first question to be answered when beginning this project was how to read the raw sensor data from each of the two cameras being studied.

Nikon D70: Nikon stores raw data in its own proprietary NEF format, which contains four color channels (RGBG). Its sensor outputs 12-bit data, but this is compressed to 11 bits before the NEF is written. The Stanford Center for Image Systems Engineering (SCIEN) already had an established protocol for reading data from this NEF format into MATLAB using a Windows DLL, rawCamFileRead.dll. In addition, Moh, Low, and Wientjes (PSYCH 221 students from Winter 2005) published a MATLAB function called nefRead that performs the processing needed to get the data into standard RGB format at the expected 3008x2000 image size. This existing code was used to read all data from the D70 into MATLAB for analysis.
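For illustration, a minimal usage sketch follows; the exact nefRead interface (argument list and return value) is an assumption here, since only the function's purpose is documented above.

% Hypothetical usage of nefRead; the actual interface may differ
raw = nefRead('capture.nef');   % assumed to return a 3008x2000x3 RGB array
raw = double(raw);              % convert to double for subsequent arithmetic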

Canon 300D: Canon stores its 12-bit raw data in the CRW format, which is also proprietary. This data was read using Dave Coffin's dcraw, an open-source command-line utility that reads 'raw' image file formats from a variety of manufacturers and camera models. Per Mr. Coffin's instructions, the following call to dcraw returned the raw image data in PPM (portable pixmap) format, scaled to 16 bits but without gamma correction, black subtraction, color balance, conversion to sRGB, or any other alteration of the data (besides Bayer interpolation):
dcraw -4 -m -k 0 -r 1 1 1 1
The *.ppm files output by this utility were then read into MATLAB using the native imread function. See Mr. Coffin's website for detailed documentation of dcraw's functionality and a link to a source code download.
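A minimal read-and-scale sketch follows; the filename is hypothetical.

% Read dcraw's 16-bit linear PPM output into MATLAB (filename is hypothetical)
img = imread('capture.ppm');    % uint16, linear, demosaicked RGB
img = double(img) / 65535;      % scale to [0,1] for analysis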

Once the data was in MATLAB, processing followed standard techniques and was identical for both cameras.

Verification of Sensor Linearity

Before moving on to more sophisticated questions regarding color, sharpness, and noise, it was first necessary to verify that the pixel values being read from both cameras were linear with respect to illumination level. It is worth noting that, in practice, this does not really mean verifying the linearity of the sensor alone -- the likelihood of significant nonlinearities in the sensor's behavior is small. Rather, looking for the expected linear relationship between sensor output and illumination level was a way to confirm that the data was being read correctly, without being transformed in some unexpected way. Indeed, significant time was devoted to exploring this issue before it was finally resolved.

Previous work by Moh et al. had already verified the linearity of the data being read from the Nikon D70. Therefore, this study only needed to confirm the linearity of the Canon 300D. The method adopted for this was simple: illuminate the room with a tungsten lamp, point the camera (without lens) toward a white wall, and capture a series of images at increasing exposure times at the same ISO setting (ISO 100). The mean R, G, and B channel values from these images were then plotted against exposure time to confirm the system's linearity.
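The sketch below illustrates this check; the filenames and exposure series are assumptions, not the actual capture settings.

% Hypothetical linearity check: mean channel value vs. exposure time
texp = [1/500 1/250 1/125 1/60 1/30];            % assumed exposure series (s)
means = zeros(numel(texp), 3);
for i = 1:numel(texp)
    img = double(imread(sprintf('wall_%d.ppm', i)));
    means(i,:) = squeeze(mean(mean(img, 1), 2))'; % mean R, G, B values
end
plot(texp, means(:,1), 'r-o', texp, means(:,2), 'g-o', texp, means(:,3), 'b-o');
xlabel('Exposure time (s)'); ylabel('Mean pixel value');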

Spectral Sensitivity

The standard way to characterize the spectral sensitivity of a sensor is to expose that sensor to a series of narrowband illuminations ranging over the spectrum of interest (in this case, the visible region) and record the sensor's response to each of these inputs. With the RGB output of the sensor and the spectral power distributions of the narrowband sources, singular value decomposition can be used to calculate the sensitivity of each pixel type to each wavelength of light.
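The computation amounts to a linear least-squares problem. A minimal sketch is given below, with random placeholder data standing in for the measurements; all variable names and dimensions are illustrative.

% Linear model: resp = spd' * sens, solved in the least-squares sense.
% MATLAB's pinv computes the pseudoinverse via SVD.
spd  = rand(361, 35);      % placeholder: column j is the SPD of band j (400-740nm, 1nm steps)
resp = rand(35, 3);        % placeholder: row j is the mean RGB response to band j
sens = pinv(spd') * resp;  % 361x3 estimated spectral sensitivities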

The Wandell Group's lab at Stanford has a software-controlled monochromator with a high-power tungsten light source; this was used as a narrowband source. The spectrum emitted by the monochromator for each of these bands was recorded using a spectroradiometer. Then, both cameras were used to image a series of wavelengths ranging from 400nm to 740nm. For each wavelength, care was taken to expose the sensor for the longest time possible without causing saturation, and the pixel values were then normalized by exposure time; this allowed for the maximum possible signal-to-noise ratio (SNR). For the Nikon D70, this best-exposure search was automated using software created specifically for this camera by the Wandell lab. For the Canon 300D, for which no such software was available, a 'best guess' was made for each wavelength, using the image histogram and trial and error to get the best data possible.
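The per-band normalization might look like the sketch below; the wavelength list, exposure times, and filenames are assumptions.

% Hypothetical response matrix with exposure-time normalization
wl   = 400:10:740;                 % assumed wavelength bands (nm)
texp = ones(1, numel(wl));         % assumed per-band exposure times (s)
resp = zeros(numel(wl), 3);
for i = 1:numel(wl)
    img = double(imread(sprintf('band_%03dnm.ppm', wl(i))));
    resp(i,:) = squeeze(mean(mean(img, 1), 2))' / texp(i);  % mean RGB per unit time
end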

Once this data was collected, it was read into MATLAB and processed using a function called rgb_D70, written by Moh and modified by the author of this study. That code calls a function called estimateSensorSpectralResponse, written by Feng Xiao et al., which applies singular value decomposition to determine the spectral sensitivity of each pixel type from the image data and the spectral measurements from the radiometer.

Resolution

The typical method of characterizing the resolution of an imaging system is to measure its modulation transfer function (MTF). In this study, this was done by using the two cameras to image a standard ISO resolution target under tungsten illumination and then calculating contrast ratios in MATLAB at a number of spatial frequencies from the patterns on the target. To avoid any possible confusion caused by chromatic aberration (which was isolated and studied independently, as described below), only the G channel of each photo was used for these calculations.
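One concrete form such a contrast calculation can take is the Michelson contrast over a cropped region of the target; the sketch below is illustrative, and the filename and crop coordinates are assumptions.

% Hypothetical Michelson contrast for one bar pattern on the target
img = double(imread('iso_target.ppm'));
g   = img(:,:,2);                  % green channel only
roi = g(1000:1100, 1200:1400);     % assumed crop around one frequency pattern
profile  = mean(roi, 1);           % average along the direction of the bars
contrast = (max(profile) - min(profile)) / (max(profile) + min(profile));
% Dividing by the contrast at a low reference frequency gives the MTF value.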

Chromatic Aberration

One way in which chromatic aberration (which occurs because a lens system has different powers at different wavelengths of light) can be observed is as variation of the MTF with wavelength. An ideal, full characterization of the chromatic aberration in these cameras' lenses would require finding the MTF throughout the visible spectrum for each lens. This would, however, be extremely time-consuming, and finding narrowband sources bright enough to permit such an experiment is nontrivial. Instead, it is reasonable to measure the MTF in a few representative regions of the spectrum -- in this case, one red, one green, and one blue -- and compare the lenses using this information. This gives a good idea of which lens performs best in which regions of the spectrum, and perhaps which is most likely to display color fringes when imaging highly saturated, high-contrast scenes.

The method for collecting this data was essentially identical to the method used to characterize resolution as described above, except that the target was illuminated with three bright LED flashlights -- one red, one green, and one blue. Care was taken not to allow the camera's autofocus to refocus the image under the colored lights, which might have partially compensated for the aberrations; focus was set under white light, and the images were then captured under each colored light. Although it is unlikely that these LEDs (especially the blue one) are particularly narrowband in their spectral power distributions, for the purposes of a rough comparison of these two camera systems, this approach was more than adequate.