EE362 Final Project Write-up
Motion Blur Reduction Techniques for Liquid Crystal Displays
Moshe Malkin
Introduction
In recent years, Liquid Crystal Displays (LCDs) have become ubiquitous consumer products and are slowly but surely replacing CRT-based monitors. While LCDs have many advantages, in one important aspect they are inferior to CRT-based systems: video motion blur.
There are two main causes of this motion blur problem. One is the nature of the liquid crystal itself, which takes time to change its state to the desired brightness. The other is the hold-type rendering of LCD systems. CRTs can be classified as impulse-based displays and as such do not suffer from this motion blur degradation. Fundamental advances in device technology will slowly eliminate the problem, but in the meanwhile innovative solutions must be found. In this short report I will review LCD technology, discuss the motion blur problem, and examine in detail the Motion Compensated Inverse Filtering solution.
LCD Technology
Liquid crystal was discovered by the Austrian botanist Friedrich Reinitzer in 1888. "Liquid crystal" is a phase of matter that is neither solid nor liquid (e.g., soapy water). In the mid-1960s, scientists showed that liquid crystals, when stimulated by an external electrical charge, could change the properties of light propagating through them. The early prototypes were too unstable for mass production, and a stable liquid crystal (biphenyl) was found by British researchers in the late 1960s.
A liquid crystal display (LCD) is a thin, flat display device made up of any number of color or monochrome pixels arrayed in front of a light source. Each pixel consists of a column of liquid crystal molecules suspended between two transparent electrodes and two polarizing filters whose axes of polarity are perpendicular to each other. Without the liquid crystal between them, or with the liquid crystal in a "relaxed" state, light passing through one filter would be blocked by the other. The liquid crystal twists the polarization of light entering one filter to allow it to pass through the other. The twisting is possible because the molecules of the liquid crystal carry electric charges. By applying small electrical charges across each pixel, the molecules are twisted by electrostatic forces and the polarization is changed.

The color filter of a TFT LCD consists of the three primary colors - red (R), green (G), and blue (B) - placed on the color-filter substrate; see the next figure. The elements of this color filter line up with the TFT pixels. Each pixel in a color LCD is subdivided into three subpixels, where one set of RGB subpixels makes up one pixel.

LCD Refresh Rate & Response Time
Response time is an attribute of LCD monitors. It is the amount of time it takes for a liquid crystal cell to go from active (black) to inactive (white) and back to active (black) again. It is measured in milliseconds (ms), where lower numbers mean faster transitions and therefore less visible image artifacts. LCD monitors with long response times create a smear or blur pattern around moving objects, making them unacceptable for moving video, though current LCD monitors have improved to the point that this is rarely seen. The viscosity of the liquid-crystal material means it takes a finite time to reorient in response to a changed electric field. Another effect is that the capacitance of the liquid crystal changes with the molecule alignment: as the liquid crystal changes, the voltage also changes, and we do not get the brightness we were hoping for.
The typical response time for a CRT display is in microseconds. For an LCD display, a figure of 35-50 ms for rise and fall times is typical. Response times significantly greater than 50 ms can be annoying to a viewer, depending on the type of data being displayed and how rapidly the image is changing or moving. Because the liquid crystal molecules respond slowly to image changes, a smearing/blurring effect takes place, with gray-to-gray ("mild") transitions being the worst in this regard.
Refresh rate is the rate at which the electronics in the monitor update the pixels on the screen (typically 60 to 75 Hz). For each pixel, the LCD monitor maintains a constant light output from one refresh cycle to the next.
Motion Blur
The visual effect of motion blur is a smearing/local distortion of the image. A slow pixel response time will obviously cause this problem. However, motion blur also comes from the "sample-and-hold" effect: an image held on the screen for the duration of a frame time blurs on the retina as the eye tracks the (average) motion from one frame to the next. This results in a "point spreading" and loss of sharpness for fast-moving images. It is important to emphasize that hold-type blur does NOT take place on the LCD screen but in the human eye - and so in fact cannot be "captured" by a still image; it is a physiological phenomenon. By comparison, as the CRT electron beam sweeps the surface of a cathode ray tube, it lights any given part of the screen only for a small fraction of the frame time. More generally, CRTs are "impulse"-type displays whose response time is on the order of microseconds, so these motion blur problems are not present for CRT TVs.
Also, the motion blur problem only exists for "fast"-moving images. For a relatively static, slowly changing video sequence, there will be no noticeable motion blurring. The faster the image moves, the more motion blur degradation it suffers. For slow images, the sample-and-hold and response times are "fast" enough to capture the changes of the slowly moving object in the video, and no distortion in the retinal reconstruction takes place.
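To make the hold-type mechanism concrete, the perceived blur for an object tracked by the eye can be modeled as a moving average over the distance the object covers in one frame time. The following is a minimal one-dimensional sketch of that model (my own Python/NumPy illustration, not code from this project; the function name and the integer-speed assumption are mine):

```python
# Hold-type blur model: while the eye tracks an object moving v pixels per
# frame, the held (constant) frame smears into a length-v box filter on the
# retina, applied along the motion direction.
import numpy as np

def hold_blur_1d(line, v):
    """Simulate perceived blur of one image row moving horizontally.

    line : 1-D intensity profile
    v    : motion speed in pixels per frame (assumed a positive integer)
    """
    v = int(abs(v))
    if v <= 1:
        return line.astype(float)        # slow motion: no visible hold blur
    kernel = np.ones(v) / v              # box = eye integrating the held frame
    return np.convolve(line, kernel, mode="same")

edge = np.r_[np.zeros(20), np.ones(20)]  # a sharp black-to-white edge
blurred = hold_blur_1d(edge, 8)          # 8 pixels/frame of tracked motion
# the sharp edge becomes a ramp roughly 8 pixels wide
```

Note how the blur width scales directly with speed, which is why only fast-moving content is affected.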
So, we see that there are two causes of motion blurring. The first is the response time - during which the pixel does not change - and the other is the hold-type temporal rendering. According to studies [Pan I], the first factor contributes about 30% of the motion blur problem, while the hold-type nature of the device contributes the other 70%. The first can mostly be eliminated nowadays using "overdrive", where the device tries to pre-compensate for this effect by applying maximum voltage for a short period of time, even for mild ("gray-to-gray") transitions, to boost response speed. The next section details methods to eliminate the second cause.
Possible Solutions
In general, improvements in basic material technology will make the problem slowly go away. However, even better materials will not alleviate all problems; some have to do with the active-matrix principle itself and the sample-and-hold characteristic. Some "low-tech" fixes are:
Data Insertion
Backlight Flashing
Frame Rate Doubling
MCIF - Motion Compensated Inverse Filtering
The data insertion method inserts a black frame between every two actual frames, so real data and black data each occupy 50% of the frame. The "hold" time is thus effectively reduced, and this technique is predicted [Pan I] to reduce motion blur by ~50%. However, it requires the LCD to finish two transitions within a frame, which is currently impractical as it would require an LCD with twice the refresh rate. It might also cause a "ghosting" artifact, where a shadow image lags behind the truly moving image: since the inserted black data's transition depends on the previously displayed value, the panel cannot change states quickly enough within half a frame, and so another "shadow" image appears on screen, trailing the original image.
Backlight flashing turns off the backlight every frame cycle, which is much more feasible, since switching the backlight on and off (e.g., with LEDs) is much easier than making the LCD twice as fast. Also, one can play with the percentage of on/off time to optimize performance given the LCD characteristics. This method requires that writing data to the LCD be synchronized with the backlight duty cycle. Once again, it needs a fast LCD to avoid a ghosting effect, which is caused by the luminance slope during backlight-on periods.
Frame rate doubling is similar to the previous solutions in that it keeps the same hold-type rendering but raises the frame rate. It uses motion estimation to interpolate frames and can only reduce motion blur by ~50% [Pan I]. It requires fast LCDs and also accurate motion estimation so that new frames can be interpolated. However, it does not require a fast backlight and has none of the detriments the other methods suffer from, such as ghosting.
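As a rough illustration of the interpolation step, an in-between frame can be synthesized by shifting each neighboring frame halfway along the estimated motion and averaging. This sketch is my own Python/NumPy toy with a single global, even-integer motion vector; real frame-rate doublers use per-block motion and sub-pixel interpolation:

```python
# Motion-compensated mid-frame interpolation with one global motion vector.
import numpy as np

def interpolate_midframe(f0, f1, v):
    """f0, f1 : consecutive grayscale frames (2-D arrays)
    v        : (dy, dx) global motion in pixels per frame (even integers here)
    """
    half = (v[0] // 2, v[1] // 2)
    fwd = np.roll(f0, half, axis=(0, 1))                   # f0 advanced half a step
    bwd = np.roll(f1, (-half[0], -half[1]), axis=(0, 1))   # f1 pulled back half a step
    return 0.5 * (fwd + bwd)

# toy example: a bright dot moving 4 pixels to the right per frame
f0 = np.zeros((8, 8)); f0[4, 2] = 1.0
f1 = np.zeros((8, 8)); f1[4, 6] = 1.0
mid = interpolate_midframe(f0, f1, (0, 4))
# the interpolated dot lands at column 4, halfway between columns 2 and 6
```

Averaging the two motion-compensated predictions is the simplest choice; when the motion estimate is wrong, the two shifted copies disagree and the interpolated frame shows doubled edges, which is why accurate motion estimation is essential here.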
MCIF
Finally, the method I will detail is Motion Compensated Inverse Filtering (MCIF). The derivation of this method uses frequency-domain analysis and, unlike the other approaches, it tries to take into account the Human Visual System (HVS) by considering the eye tracking of the viewer. The mathematical derivation is presented in the attached appendix. The surprising conclusion is that the end distortion, taking into account the video signal sampling, the LCD device, and the HVS, is simply a directional brick-wall low-pass filter. The filtering direction is the direction of motion. The filter is purely a function of the velocity vector, so one needs both the motion direction and the speed in order to calculate the distortion (and compensating filter) correctly. As such, to undo the motion blur distortion, we must filter with an inverse (or approximate inverse, for an MMSE criterion) along the direction of motion, taking the speed into account as well.
This algorithm results in a relatively low-complexity operation, as opposed to some other proposed deblurring methods, which require an expensive deconvolution operation. The algorithm heavily depends on good motion estimation methods and good device characterization.
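To sketch the compensation idea in one dimension (my own simplified Python/NumPy illustration, not the project's Matlab code): if the hold-time distortion along the motion direction is approximated as a length-v box filter, the compensating filter is a regularized, MMSE-style (Wiener) approximate inverse applied in the frequency domain. The function name and the noise constant below are my assumptions:

```python
# MCIF sketch in 1-D: pre-filter a row with the approximate inverse of the
# box blur caused by sample-and-hold, along the (horizontal) motion direction.
import numpy as np

def mcif_row(row, v, noise=1e-2):
    """Pre-compensate one row for horizontal motion of v pixels/frame."""
    n = len(row)
    h = np.zeros(n)
    h[: int(abs(v))] = 1.0 / int(abs(v))        # hold-type box blur
    H = np.fft.fft(h)
    G = np.conj(H) / (np.abs(H) ** 2 + noise)   # Wiener-style approximate inverse
    return np.real(np.fft.ifft(np.fft.fft(row) * G))

# sanity check: pre-filtering and then blurring roughly restores the signal
row = np.sin(np.linspace(0, 4 * np.pi, 64, endpoint=False))
pre = mcif_row(row, 8)
box = np.r_[np.ones(8) / 8, np.zeros(56)]
shown = np.real(np.fft.ifft(np.fft.fft(pre) * np.fft.fft(box)))
# `shown` stays close to `row` except at frequencies the box filter nulls out
```

The regularization term is what makes this an approximate rather than exact inverse: a plain inverse 1/H blows up at the box filter's spectral nulls, so those frequencies cannot be recovered and are left attenuated instead.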
Here are some results that illustrate what the algorithm can do. This image represents an approximation of the distortion caused by motion blur:

The next image represents what could be done if the motion and LCD device filters are known accurately.

I will discuss the simulation and algorithms in more detail in the following sections.
Summary
LCDs have come a long way in terms of offering a complete solution, but the sample-and-hold mechanism of the device can still result in motion blur for fast-moving objects. These problems will be solved by fundamental advances, but in the meanwhile a few practical solutions were presented; one in particular, MCIF, was derived in detail and simulated in Matlab. The other solutions all seem to require extensive processing (frame rate doubling) or really stretch the current limits of technology. The MCIF method, however, offers good tradeoffs in terms of complexity and performance, although it does make some very broad generalizations, and I am not certain these will really hold in practice.
I learned from this project the importance of modeling. In our little labs, we can come up with many sorts of structures and algorithms to solve a specific problem, but because of the physiological aspect of human vision and imaging, it is very difficult to figure out which algorithms will actually do the job and which just look good in theory. There is no doubt, in my opinion, that extensive human testing is required to lend credence to any of these algorithms.
Methods
In my Matlab simulations, I "cheated" by knowing the exact details of the motion and device. The MCIF approach must have a good model of the device and the HVS mechanisms in order to work well. To implement it in practice, the filter would have to be adapted/tuned to the specific device (assuming the general HVS modeling is good enough); in practice, this is not going to be the case.
As mentioned before, I used Matlab to carry out the simulation of MCIF and worked with YUV files (a slight variant of YCbCr). My code goes through the video signal frame by frame, estimates the motion vector, estimates the motion blur resulting from that motion, and filters along the motion direction to simulate the distortion.
I used a simple algorithm to find the motion velocity vector: assuming a constant motion vector for all pixels in a frame, the phase difference in the frequency domain between two consecutive frames is an exponential function with linear phase over the bandwidth. In the time domain this translates to a discrete delta at the position indicating the velocity vector. In practice, however, things are not perfect, so I choose the velocity vector corresponding to the position with the maximum value in the time domain. I skipped some frames to make the results more noticeable. The mathematics of this algorithm are left to the appendix. The Matlab files are also attached.
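The global-motion estimator described above is essentially phase correlation. A minimal sketch of it (my own Python/NumPy reconstruction, not the attached Matlab code) follows:

```python
# Phase correlation: for two frames related by a pure shift, the normalized
# cross-power spectrum has linear phase, and its inverse FFT is a delta
# (a sharp peak) at the displacement; we pick the argmax as the velocity.
import numpy as np

def phase_correlate(f0, f1):
    """Estimate the global (dy, dx) shift taking frame f0 to frame f1."""
    F0, F1 = np.fft.fft2(f0), np.fft.fft2(f1)
    cross = F1 * np.conj(F0)
    cross /= np.abs(cross) + 1e-12                # keep only the phase difference
    corr = np.real(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = f0.shape
    if dy > h // 2: dy -= h                       # wrap large shifts to negative
    if dx > w // 2: dx -= w
    return int(dy), int(dx)

# toy check: a random pattern circularly shifted by (3, -5)
rng = np.random.default_rng(0)
f0 = rng.random((32, 32))
f1 = np.roll(f0, (3, -5), axis=(0, 1))
# phase_correlate(f0, f1) recovers the shift (3, -5)
```

Normalizing away the magnitude is what sharpens the peak; without it, the result is ordinary cross-correlation, whose peak is much broader when the motion estimate must be precise.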
References
Pan H., Feng X., Daly S., "Quantitative Analysis of LCD Motion Blur and Performance of Existing Approaches", SID Symposium Digest of Technical Papers, May 2005, Volume 36, Issue 1, pp. 1590-1593.
Pan H., Feng X., Daly S., "LCD Motion Blur Modeling and Analysis", Image Processing, 2005. ICIP 2005. IEEE International Conference on, Volume 2, 11-14 Sept. 2005, pp. 21-24.
Wandell B., Foundations of Vision, Sinauer Associates Inc., 1995.
Brown L. G., "A Survey of Image Registration Techniques", Computing Surveys, vol. 24(4), December 1992, pp. 325-376.
Websites:
http://en.wikipedia.org/wiki/LCD
http://en.wikipedia.org/wiki/Motion_compensation
http://scien.stanford.edu/labsite/scien_test_images_videos.html
[Video Source]
http://www.netbored.com/classroom/what_is_tft_lcd.htm