4. Implementation


    This section describes the simulation environment (object image generation) and how the algorithm was implemented in MATLAB. Some implementation issues are also listed.
 

    The algorithm was conceived with high-speed imaging in mind, so that it alleviates the correspondence problem and the constant-brightness constraint. To implement and demonstrate the proposed algorithm, synthetic images were created instead of real high-speed images, which are not easy to obtain.

    First, a checkerboard pattern background was created. The checkerboard pattern was chosen to account for the fact that a real background does not have uniform brightness. After that, objects such as a square, a circle, and a diamond were added to the image. Occlusions are handled by the order in which the objects are added: later objects overwrite earlier ones.
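
    The original MATLAB scripts (generation.m, background.m, etc.) are not shown here, so the scene-generation steps above can only be sketched. The following Python/NumPy sketch is illustrative: the sizes, brightness levels, and shape positions are assumptions, not values from the original code.

```python
import numpy as np

def make_scene(h=256, w=256, tile=32):
    """Checkerboard background: avoids a uniform-brightness background."""
    ys, xs = np.mgrid[0:h, 0:w]
    scene = np.where(((ys // tile) + (xs // tile)) % 2 == 0, 96.0, 160.0)

    # Objects are painted in order; later objects overwrite (occlude)
    # earlier ones, which is how occlusion is handled in the text above.
    def square(img, cy, cx, half, val):
        img[cy - half:cy + half, cx - half:cx + half] = val

    def circle(img, cy, cx, r, val):
        img[(ys - cy) ** 2 + (xs - cx) ** 2 <= r * r] = val

    def diamond(img, cy, cx, r, val):
        img[np.abs(ys - cy) + np.abs(xs - cx) <= r] = val

    square(scene, 80, 80, 30, 220.0)
    circle(scene, 100, 110, 25, 40.0)    # occludes part of the square
    diamond(scene, 170, 170, 35, 200.0)
    return scene

scene = make_scene()
```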

    At this point we have generated a scene. To simulate the blurring caused by area integration over each pixel in a camera, the scene was locally averaged and subsampled, yielding the image captured by the camera. The capture area and size were made flexible to simulate camera translation and zooming. Because of this subsampling, object motion can be simulated with sub-pixel accuracy. In the image created for this implementation, the subsampling ratio was 4, so object movement can be simulated down to quarter-pixel accuracy.
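
    The averaging-and-subsampling step (capture.m in the original MATLAB code, which is not shown) can be sketched as follows in Python/NumPy. The function name and wrap-around shift via `np.roll` are illustrative assumptions; the key idea is that a shift of one scene pixel corresponds to 1/ratio of a captured pixel.

```python
import numpy as np

def capture(scene, ratio=4, dy=0, dx=0):
    """Simulate a camera: shift the high-resolution scene by (dy, dx)
    scene pixels (i.e. dy/ratio of a captured pixel), then average each
    ratio x ratio block to model area integration, and subsample.
    np.roll wraps around at the borders; a real capture would crop."""
    shifted = np.roll(scene, (dy, dx), axis=(0, 1))
    h, w = shifted.shape
    h, w = h - h % ratio, w - w % ratio
    blocks = shifted[:h, :w].reshape(h // ratio, ratio, w // ratio, ratio)
    return blocks.mean(axis=(1, 3))

scene = np.arange(64, dtype=float).reshape(8, 8)
img = capture(scene, ratio=4)            # 2x2 captured image
img_q = capture(scene, ratio=4, dx=1)    # quarter-pixel shift in the captured image
```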

    MATLAB code: generation.m | background.m | capture.m | circle.m | ob-square.m | square.m
 



< Figure: Before zooming >                                                             < Figure: After zooming >

    One of the motivations for the algorithm was low complexity. Naively counting operations, the computational complexity is very low: averages and variances are calculated only where edges occur, and the number of operations for this is much lower than in most optical-flow/motion-estimation algorithms.
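
    The "compute only at edges" idea can be sketched as follows; this is a Python/NumPy illustration rather than the original MATLAB code, and the nearest-neighbor difference threshold used to mark edges is an assumed detail.

```python
import numpy as np

def edge_features(img, thresh=20.0):
    """Mark edges with nearest-neighbor differences, then compute the
    mean and variance of the image values at edge pixels only, so the
    arithmetic cost scales with the number of edge pixels, not with
    the total image size."""
    gy = np.abs(np.diff(img, axis=0, prepend=img[:1]))
    gx = np.abs(np.diff(img, axis=1, prepend=img[:, :1]))
    edges = (gy > thresh) | (gx > thresh)
    vals = img[edges]
    return edges, vals.mean(), vals.var()

img = np.zeros((16, 16))
img[:, 8:] = 100.0                       # a single vertical edge
edges, mu, var = edge_features(img)
```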

    However, once I started coding this algorithm in MATLAB, it was a nightmare. Even though the number of arithmetic operations is small, the addressing turned out to be a major problem, so the overall complexity of the algorithm is not as low as I expected. Since objects have arbitrary shapes, it is very hard to link all the edges so that MATLAB understands a set of edge points as one object.


    < top.m & region.m >                 < seg.m >                   < marking.m & link.m >           < featuremap.m >

 

    To run this algorithm, instead of calculating 3-bit images from the 8-bit image, obtaining 3-bit images directly from the image sensor would be more desirable. This is especially attractive if the ADC architecture is bit-serial. For example, several 3-bit images can be sampled in between the 8-bit images to obtain motion information: although the motion between two 8-bit images could be large, sampling several 3-bit images in between lets us derive the motion between the 8-bit images.
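
    For reference, deriving a 3-bit image from an 8-bit one amounts to keeping the three most significant bits, which is exactly what a bit-serial ADC delivers first before the full 8-bit conversion completes. A minimal Python/NumPy sketch (the function name is illustrative):

```python
import numpy as np

def to_3bit(img8):
    """Map an 8-bit image (0..255) to 3 bits (0..7) by keeping only the
    three most significant bits -- the first bits produced by a
    bit-serial ADC conversion."""
    return np.asarray(img8, dtype=np.uint8) >> 5

img8 = np.array([[0, 31, 32, 128, 255]], dtype=np.uint8)
img3 = to_3bit(img8)   # -> [[0, 0, 1, 4, 7]]
```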

    This algorithm is also very well suited for pixel-level processing. Since communication between pixels is limited to the nearest neighbors, the communication overhead is very small. Pixel-level processing makes the computation highly parallel, so it can be applied to high-speed imaging. One problem occurs, however, when we want to extract feature values from the edge pixels.