
Trans-Scale Registration And Fusion For Motion Images From Multiple Sources

Posted on: 2016-07-31    Degree: Doctor    Type: Dissertation
Country: China    Candidate: Q P Li    Full Text: PDF
GTID: 1108330482957708    Subject: Computer Science and Technology
Abstract/Summary:
As an important carrier of redundant and complementary information, multi-source motion images play an important role in image processing and analysis for satellite remote sensing, aerospace, robot vision, and other fields. Because the source sensors in a multi-sensor system differ in spectral band, imaging mode, collection location, spatial resolution, and contrast, motion images obtained from multiple sensors monitoring the same target or scene exhibit relative position shifts, rotations, and scale changes. The images, and the target information they contain, therefore differ substantially, which seriously degrades precise pose estimation, recognition, and tracking of moving targets. Effective registration and fusion algorithms are thus needed to comprehensively exploit the redundant information from multiple sensors so that the fused images contain more detail. This dissertation addresses the multi-scale nature of multi-source motion images and studies trans-scale registration and fusion algorithms for them. The major contributions and innovations are:

(1) To address the problem that existing image registration algorithms incur a sharp increase in computational complexity when the accuracy of feature-point matching is improved, a trans-scale registration algorithm combining SIFT features with a local ternary pattern descriptor (SIFT-LTP) is proposed. The algorithm keeps the advantage of SIFT in accurately extracting feature points from motion images, while local ternary patterns make the feature description both robust and efficient.
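To make the descriptor concrete, the following is a minimal sketch of the classic local ternary pattern computation (Tan–Triggs style) for a single 3x3 neighbourhood; the threshold value and neighbour ordering here are illustrative choices, not necessarily those used in the dissertation.

```python
import numpy as np

def ltp_codes(patch, t=5):
    """Local Ternary Pattern of the centre pixel of a 3x3 patch.

    Neighbours brighter than centre+t map to +1, darker than centre-t
    map to -1, the rest to 0; the ternary code is then split into an
    'upper' and a 'lower' binary pattern, each read as an 8-bit code."""
    c = int(patch[1, 1])
    # 8 neighbours in clockwise order starting at the top-left corner
    nbrs = patch[[0, 0, 0, 1, 2, 2, 2, 1], [0, 1, 2, 2, 2, 1, 0, 0]].astype(int)
    tern = np.where(nbrs >= c + t, 1, np.where(nbrs <= c - t, -1, 0))
    weights = 1 << np.arange(8)                 # bit weights 1, 2, ..., 128
    upper = int(np.sum(weights * (tern == 1)))  # where the ternary code is +1
    lower = int(np.sum(weights * (tern == -1))) # where the ternary code is -1
    return upper, lower
```

Because the in-between zone maps to 0, small intensity fluctuations around the centre value do not flip bits, which is what gives LTP its noise robustness relative to plain LBP.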
In the algorithm, coarse matching first eliminates mismatched points; the RANSAC method then performs refined matching and estimates the transform matrix; finally, bilinear interpolation resamples the registered images. In this way, SIFT-LTP improves the efficiency of feature-point matching during registration. Considering rotation, scale, brightness, and noise changes between the images to be registered, as well as runtime, the proposed SIFT-LTP algorithm retains the feature-extraction strength of SIFT while the LTP descriptor provides accurate and efficient feature matching.

(2) To address the insufficient treatment of texture detail in motion-image fusion, a novel fusion algorithm based on the local fractal dimension and the discrete wavelet frame transform (LFD-DWFT) is proposed. The algorithm makes full use of image texture features and the inherent strengths of fractal theory: the local fractal dimension captures texture characteristics, so image information is considered more comprehensively during fusion. Moreover, the DWFT is used to decompose the source images; by omitting the downsampling step it avoids shift variance and aliasing. Experimental results show that LFD-DWFT works well on multi-focus, multi-exposure, and visible-infrared images.
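The local fractal dimension feature that drives this fusion rule is commonly computed by differential box counting over a small window. The sketch below shows that idea under illustrative assumptions (scales 2/4/8, 256 grey levels); the dissertation's exact parameters are not specified in the abstract.

```python
import numpy as np

def local_fractal_dimension(win, grey_levels=256):
    """Differential box-counting estimate of the fractal dimension of a
    square grey-level window: count the boxes needed to cover the
    intensity surface at several scales, then fit the log-log slope."""
    M = win.shape[0]
    sizes = [s for s in (2, 4, 8) if s < M]
    counts = []
    for s in sizes:
        h = max(1, grey_levels * s // M)      # box height at this scale
        n = 0
        for i in range(0, M - M % s, s):
            for j in range(0, M - M % s, s):
                blk = win[i:i + s, j:j + s]
                # boxes spanned by the grey-level range of this block
                n += int(blk.max()) // h - int(blk.min()) // h + 1
        counts.append(n)
    # the dimension estimate is the slope of log N(s) versus log(1/s)
    x = np.log(1.0 / np.array(sizes, dtype=float))
    return float(np.polyfit(x, np.log(counts), 1)[0])
```

A perfectly flat window yields a dimension of 2 (a smooth surface), while rough, highly textured windows push the estimate toward 3, which is why the measure is a useful texture-salience weight in a fusion rule.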
In addition, owing to the local fractal dimension, the proposed algorithm extracts more of the important information from the source images and transfers it to the fused result, improving on traditional DWFT-based fusion algorithms.

(3) Traditional motion-image fusion algorithms consider motion information only during target detection and still apply individual-frame rules in the fusion stage, so temporal motion information cannot be fully utilized. To address this, a fusion algorithm based on the uniform discrete curvelet transform and spatio-temporal information (UDCT-ST) is proposed. When the UDCT is used to decompose the source images and reconstruct the composite fused image, more of the important information is transferred and the effects of imperfections in the source images are suppressed. Moreover, unlike traditional methods that consider only the current frame, the proposed method also considers its adjacent frames: local spatio-temporal information guides the fusion, extending it from two dimensions to three. This makes full use of the temporal dimension and yields fusion results with better temporal stability and consistency. Experiments show that UDCT-ST works well; visual and objective evaluations confirm that it outperforms individual-frame-based comparison methods in temporal stability and consistency as well as in spatio-temporal information extraction.
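The spatio-temporal guidance described above can be illustrated by a choose-max rule driven by local 3-D energy. This is a simplified sketch of the general idea, assuming two registered coefficient volumes and a cubic neighbourhood; it is not the dissertation's exact fusion rule.

```python
import numpy as np

def fuse_by_st_energy(a, b, r=1):
    """Choose-max fusion guided by local spatio-temporal energy.

    `a` and `b` are (T, H, W) coefficient volumes from two registered
    sources. For each position, squared coefficients are summed over an
    r-neighbourhood in time *and* space, and the coefficient from the
    source with the larger local energy is kept."""
    def local_energy(v):
        T, H, W = v.shape
        e = np.zeros((T, H, W), dtype=float)
        # edge padding keeps the window valid at volume borders
        p = np.pad(v.astype(float) ** 2, r, mode="edge")
        for t in range(T):
            for i in range(H):
                for j in range(W):
                    e[t, i, j] = p[t:t + 2 * r + 1,
                                   i:i + 2 * r + 1,
                                   j:j + 2 * r + 1].sum()
        return e
    return np.where(local_energy(a) >= local_energy(b), a, b)
```

Because the energy window spans adjacent frames, a coefficient that is strong in only one noisy frame carries less weight than one supported by its temporal neighbours, which is the source of the temporal-stability benefit.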
The good performance of UDCT-ST derives from both the UDCT itself and the spatio-temporal fusion rules.

(4) Existing motion-image fusion algorithms cannot be applied when the source video sensors have very different view angles, distances, and illumination conditions, and existing sensor-confidence functions cannot be employed in most practical video-surveillance situations. To address this, a feature-level adaptive video-sensor fusion method based on the decentralized Kalman filter (ADKFF) is proposed. During fusion, a sensor-confidence function evaluates each sensor's target-detection performance, and the confidence value automatically adjusts the measurement-noise covariance matrix of the local filters, thereby weighting each video sensor adaptively and more correctly. Moreover, the decentralized Kalman filter makes full use of the redundant tracking data from multiple video sensors and gives more accurate fusion results, reducing the position errors caused by inaccurate target tracking and position projection. Visual and objective experiments show that ADKFF works well on real-world video sequences and outperforms single sensors as well as comparison non-adaptive and adaptive algorithms.

(5) Synthesizing the proposed SIFT-LTP, LFD-DWFT, UDCT-ST, and ADKFF algorithms, a trans-scale registration and fusion system for multi-source motion images is designed and implemented. The system has three logical layers: a data-acquisition layer, a logic layer, and a user layer. The logic layer, the core of the system, contains four functional modules: preprocessing, registration, pixel-level fusion, and feature-level fusion.
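The confidence-adaptive measurement noise at the heart of (4) can be sketched with a scalar Kalman update, where a confidence value in (0, 1] scales the effective measurement-noise variance. This is an illustrative one-dimensional sketch of the adaptive idea, not the dissertation's actual multi-sensor filter.

```python
def kf_update(x, P, z, R_base, conf):
    """One Kalman measurement update whose noise variance is scaled by
    a sensor-confidence value: a confident sensor gets a smaller
    effective R and hence a larger weight in the corrected state."""
    R = R_base / max(conf, 1e-6)     # confidence-adjusted measurement noise
    K = P / (P + R)                  # Kalman gain
    x_new = x + K * (z - x)          # corrected state estimate
    P_new = (1.0 - K) * P            # corrected estimate covariance
    return x_new, P_new
```

In a decentralized arrangement, each local filter runs an update of this form on its own sensor's measurement, and a fusion centre combines the local estimates; an unreliable sensor's inflated R automatically shrinks its contribution without any hard gating.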
Among them, the preprocessing module prepares the source motion images, for example by rotating, resizing, blurring, or adding noise; the preprocessed images are used for functional verification of the other modules. The registration module registers source motion images captured under differing conditions such as rotation, resizing, blurring, and noise. The pixel-level fusion module provides two functions, fusion based on the local fractal dimension and fusion based on spatio-temporal information; both operate at the pixel level, so the input source images must be accurately registered. The feature-level fusion module fuses video sensors with different view angles, distances, and illumination. Test results show that the system provides a good user interface with which users can easily verify the proposed algorithms and the corresponding comparison algorithms, and that it delivers fused motion images with higher visual quality, richer information, and more comprehensive content for subsequent tasks.
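The layered flow through these modules can be summarised as a simple pipeline skeleton. The module functions below are hypothetical placeholders standing in for the system's actual preprocessing, registration, and fusion modules.

```python
def run_pipeline(frames_a, frames_b, preprocess, register, fuse):
    """Minimal sketch of the logic-layer flow: preprocess each source,
    register source B to source A frame by frame, then fuse the
    registered frame pairs into the output sequence."""
    a = [preprocess(f) for f in frames_a]
    b = [preprocess(f) for f in frames_b]
    b = [register(fa, fb) for fa, fb in zip(a, b)]   # align B onto A
    return [fuse(fa, fb) for fa, fb in zip(a, b)]
```

Keeping registration and fusion as swappable callables mirrors the modular design described above: the pixel-level and feature-level fusion functions can be exchanged without touching the rest of the pipeline.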
Keywords/Search Tags: motion image registration, motion image fusion, local fractal dimension, local energy of spatio-temporal information, video sensor fusion