
A fast and robust framework for image fusion and enhancement

Posted on: 2006-10-04
Degree: Ph.D
Type: Thesis
University: University of California, Santa Cruz
Candidate: Farsiu, Sina
Full Text: PDF
GTID: 2458390008956682
Subject: Engineering
Abstract/Summary:
Theoretical and practical limitations usually constrain the achievable resolution of any imaging device. The limited number of sensors in many commercial digital cameras results in aliased images. In such systems, CCD readout noise, blur from the aperture and the optical lens, and color artifacts due to the use of color-filter arrays further degrade the quality of the captured images.

Super-resolution methods were developed to go beyond a camera's resolution limit by acquiring and fusing several non-redundant low-resolution images of the same scene to produce a high-resolution image. Early work on super-resolution (often designed for grayscale images), although occasionally mathematically optimal for particular models of data and noise, produced poor results when applied to real images. On another front, single-frame demosaicing methods developed to reduce color artifacts often fail to remove such errors completely.

In this thesis, we take a statistical signal processing approach and propose an effective framework for fusing low-quality images into higher-quality ones. Our framework addresses the main issues in designing a practical image fusion system, namely reconstruction accuracy and computational efficiency. Reconstruction accuracy refers to the problem of designing a robust image fusion method applicable to images from different imaging systems. Advocating the use of the robust L1 norm, our general framework applies to optimal reconstruction of images from grayscale, color, or color-filtered (CFA) cameras. The performance of the proposed method is boosted by powerful priors and is robust to both measurement noise (e.g., CCD readout noise) and system noise (e.g., motion-estimation error).
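The robustness property of the L1 norm mentioned above can be illustrated with a minimal sketch (not the thesis's full algorithm): for purely translational motion, the pixelwise median of the registered low-resolution frames is the closed-form minimizer of an L1-norm data-fidelity term, so a single badly misregistered frame barely perturbs the fused result. The function name `robust_fuse` and the toy data are illustrative assumptions.

```python
import numpy as np

def robust_fuse(frames):
    """Fuse pre-registered low-resolution frames with a pixelwise median.

    The pixelwise median minimizes the L1-norm data-fidelity term for
    translational motion, making the estimate robust to outlier frames
    (e.g. frames corrupted by motion-estimation errors). Illustrative
    sketch only; the full framework also handles blur and CFA data.
    """
    stack = np.stack(frames, axis=0)   # shape: (num_frames, H, W)
    return np.median(stack, axis=0)

# Toy example: three noisy copies of a flat scene plus one outlier frame.
rng = np.random.default_rng(0)
scene = np.ones((4, 4))
frames = [scene + 0.01 * rng.standard_normal((4, 4)) for _ in range(3)]
frames.append(np.zeros((4, 4)))        # hypothetical misregistered frame
fused = robust_fuse(frames)
```

An L2 (mean) fusion of the same stack would be pulled toward the zero-valued outlier frame; the median leaves the estimate near the true scene.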
Noting that motion estimation is often considered a bottleneck for super-resolution performance, we utilize the concept of "constrained motions" to enhance the quality of super-resolved images. We show that using such constraints improves the quality of the motion estimation and therefore results in more accurate reconstruction of the high-resolution images. We also justify some practical assumptions that greatly reduce the computational complexity and memory requirements of the proposed methods. We use an efficient approximation of the Kalman filter and adopt a dynamic point of view of the super-resolution problem. Novel methods addressing these issues are accompanied by experimental results on simulated and real data.
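The dynamic (Kalman-filter) point of view can be illustrated with a drastically simplified per-pixel sketch: each pixel of the high-resolution estimate is treated as a scalar state that is refined recursively as new noisy frames arrive, rather than re-solving a batch problem. The function `kalman_update`, the random-walk process model, and the noise variances are assumptions for illustration, not the thesis's actual approximation.

```python
import numpy as np

def kalman_update(x, p, z, r, q=1e-4):
    """One scalar Kalman step: predict under a random-walk model with
    process variance q, then correct with measurement z of variance r."""
    p = p + q                 # predict: state uncertainty grows by q
    k = p / (p + r)           # Kalman gain
    x = x + k * (z - x)       # correct the estimate toward the measurement
    p = (1.0 - k) * p         # reduced uncertainty after the update
    return x, p

# Toy example: refine one pixel's estimate as noisy frames arrive.
rng = np.random.default_rng(1)
truth = 0.7
x, p = 0.0, 1.0               # initial guess and its (large) variance
for _ in range(50):
    z = truth + 0.1 * rng.standard_normal()   # noisy pixel measurement
    x, p = kalman_update(x, p, z, r=0.01)
```

The recursive form needs only the current estimate and its variance per pixel, which is the kind of memory saving a dynamic formulation buys over storing and re-fusing all past frames.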
Keywords/Search Tags:Image fusion, Resolution, Robust, Framework, Methods