
Pixel-Level Multi-Source Image Fusion Methods and Applications

Posted on: 2022-01-23    Degree: Doctor    Type: Dissertation
Country: China    Candidate: Y Y Li    Full Text: PDF
GTID: 1488306533465164    Subject: Information and Communication Engineering
Abstract/Summary
In recent years, the widespread use of digital imaging equipment has caused explosive growth in digital image data. A single imaging device, constrained by its imaging mechanism, exposure time, focal length, and other factors, can rarely satisfy the needs of production and daily life directly. How to fuse multi-source images effectively, and thereby enrich the information carried by a single image, has become an important research topic. Pixel-level image fusion uses image processing algorithms to generate a new fused image that describes the scene more fully and in finer detail than any single source image, enhancing both its visual quality and its value for subsequent processing. The technique is widely applied in consumer electronics, medical imaging, national defense, remote sensing, and other fields. This dissertation focuses on several core problems of pixel-level image fusion: multi-modality medical image fusion, multi-focus image fusion, infrared-visible image fusion, and multi-exposure image fusion. It proposes four fusion methods spanning the transform and spatial domains, hoping to contribute to the development of image fusion.

First, an image fusion framework based on multi-scale transformation and sparse representation is proposed. The framework applies the nonsubsampled contourlet transform (NSCT) to perform multi-level, multi-scale filtering of the source images, dividing them into high- and low-frequency parts. The low-frequency part is fused with a sparse-representation method: the images to be fused are divided into 8×8 blocks, a sparse-representation dictionary is learned with a principal component analysis (PCA) self-learning scheme, and the sparse vectors are fused with the L1-maximum rule to obtain the low-frequency fusion result. The high-frequency part is fused by maximizing the Laplacian energy sum. Applying the inverse NSCT to the low- and high-frequency fusion results yields the fused image; a sketch of the two fusion rules follows below. Compared with traditional fusion methods based on multi-scale transformation and sparse representation, the proposed method achieves a clear improvement on multi-modality and multi-focus image fusion.

Second, a multi-modality medical image fusion framework based on a deep convolutional neural network and multi-pyramid transformation is proposed, with high-quality fused medical images as its core goal. A Siamese convolutional neural network, trained on positive and negative sample images taken before and after Gaussian blurring, provides a weight-map generation mechanism that serves as the backbone of the fusion. During fusion, the trained network constructs the weight map, a Gaussian pyramid decomposes the weight map into high- and low-frequency parts, and a contrast pyramid decomposes the input images into their corresponding high- and low-frequency parts; the final fusion result is obtained by recombining the two sets of parts, as sketched below. The experimental results confirm that the proposed method improves both detail and contrast relative to the comparative medical image fusion methods.
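As an illustration of the first framework's two fusion rules, here is a minimal Python sketch, not the dissertation's implementation. The NSCT decomposition and the PCA-learned dictionary are assumed to be computed elsewhere; `dictionary`, the 8-atom sparsity budget, and the use of scikit-learn's orthogonal matching pursuit are all assumptions, and a pointwise Laplacian energy stands in for the exact Laplacian-energy-sum rule.

```python
import numpy as np
from scipy.ndimage import laplace
from sklearn.decomposition import sparse_encode

def fuse_lowfreq_sparse(low_a, low_b, dictionary, patch=8):
    """Fuse two low-frequency bands block-by-block with the L1-maximum rule.

    `dictionary` is a hypothetical (n_atoms, patch*patch) matrix, e.g. learned
    by PCA as in the dissertation; OMP stands in for the exact sparse coder.
    """
    fused = np.zeros_like(low_a, dtype=np.float64)
    h, w = low_a.shape
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            va = low_a[i:i + patch, j:j + patch].reshape(1, -1)
            vb = low_b[i:i + patch, j:j + patch].reshape(1, -1)
            ca = sparse_encode(va, dictionary, algorithm='omp', n_nonzero_coefs=8)
            cb = sparse_encode(vb, dictionary, algorithm='omp', n_nonzero_coefs=8)
            # L1-maximum rule: keep the sparse vector with the larger L1 norm.
            c = ca if np.abs(ca).sum() >= np.abs(cb).sum() else cb
            fused[i:i + patch, j:j + patch] = (c @ dictionary).reshape(patch, patch)
    return fused

def fuse_highfreq_laplacian(high_a, high_b):
    """Per pixel, keep the band whose Laplacian energy is larger (a pointwise
    stand-in for the Laplacian-energy-sum rule)."""
    ea, eb = laplace(high_a) ** 2, laplace(high_b) ** 2
    return np.where(ea >= eb, high_a, high_b)
```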
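The pyramid recombination at the heart of the second method can be sketched as follows. The weight map `w` produced by the trained Siamese network is assumed to be given, and an ordinary Laplacian pyramid built with OpenCV stands in for the contrast pyramid, which standard libraries do not provide; this shows the blending scheme only, not the dissertation's implementation.

```python
import cv2
import numpy as np

def gaussian_pyr(img, levels):
    pyr = [img.astype(np.float32)]
    for _ in range(levels - 1):
        pyr.append(cv2.pyrDown(pyr[-1]))
    return pyr

def laplacian_pyr(img, levels):
    g = gaussian_pyr(img, levels)
    pyr = [g[i] - cv2.pyrUp(g[i + 1], dstsize=g[i].shape[1::-1])
           for i in range(levels - 1)]
    pyr.append(g[-1])  # coarsest level keeps the low-frequency residual
    return pyr

def fuse_with_weight_map(a, b, w, levels=4):
    """Blend the pyramids of a and b using a Gaussian pyramid of weight w."""
    la, lb = laplacian_pyr(a, levels), laplacian_pyr(b, levels)
    gw = gaussian_pyr(w, levels)
    fused = [gw[i] * la[i] + (1 - gw[i]) * lb[i] for i in range(levels)]
    out = fused[-1]
    for i in range(levels - 2, -1, -1):  # collapse the pyramid, coarse to fine
        out = cv2.pyrUp(out, dstsize=fused[i].shape[1::-1]) + fused[i]
    return out
```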
Third, a multi-exposure image fusion method based on adaptive image-block-size selection and the optimization of structure decomposition is proposed, together with an application. The method evaluates local image information with the image texture entropy, analyzes the coupling between texture entropy and image block size by means of the corresponding fusion-quality indexes, and thereby selects the block size adaptively, so that different block sizes are used when fusing different types of multi-exposure source images (the selection step is sketched below). After the source images are divided into blocks, each block is first decomposed into its structural components, the decomposed components are merged into a preliminary fused image, and the preliminary image is then optimized under the structural similarity (SSIM) index to yield the final result. Multiple groups of comparative experiments show that the proposed method produces high-quality HDR images with better visual quality and richer detail. In addition, this chapter verifies the method's applicability to image defogging via fusion: the experiments confirm that it can effectively defog real-world foggy images.

Fourth, a spatial-domain multi-exposure image fusion method based on image decomposition and a color prior is proposed, together with an application. The method first applies fast guided filtering to perform a two-scale decomposition, dividing the source images into base and detail layers. According to the color prior, the difference between brightness and saturation indicates the degree of exposure, and this difference is combined with image contrast to compute the fusion weights (sketched below). Guided filtering then refines the weight maps of the base and detail layers, each layer is fused by weighted averaging, and the fused base layer and an enhanced detail layer are combined into the final result. In summary, the proposed methods deliver substantive improvements in image fusion performance, and each embodies sufficient novelty in its fusion design.
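The texture-entropy-driven block-size selection of the third method might look like the following sketch. Reading the texture entropy as the Shannon entropy of the gradient-magnitude histogram, and the specific entropy thresholds, are assumptions made for illustration rather than the dissertation's tuned values.

```python
import cv2
import numpy as np

def texture_entropy(gray):
    """Shannon entropy of the gradient-magnitude histogram (one plausible
    reading of the dissertation's image texture entropy)."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    mag = cv2.magnitude(gx, gy)
    hist, _ = np.histogram(mag, bins=256)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def select_block_size(grays):
    """Map mean entropy to a block size: more texture -> smaller blocks."""
    e = np.mean([texture_entropy(g) for g in grays])
    if e > 5.0:   # highly textured exposure stacks
        return 8
    if e > 3.0:
        return 16
    return 32      # smooth scenes tolerate coarse blocks
```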
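Finally, a hedged sketch of the fourth method's weight construction and two-scale fusion. Here cv2.ximgproc.guidedFilter (from opencv-contrib-python) stands in for the fast guided filter, a Laplacian magnitude serves as the contrast term, the Gaussian sigma is illustrative, and a single refined weight map is shared by the base and detail layers, whereas the dissertation refines the two layers' weights separately.

```python
import cv2
import numpy as np

def exposure_weight(bgr):
    """Color prior: well-exposed pixels show a small brightness-saturation
    gap, so weight them by a Gaussian of (V - S), times a contrast term."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype(np.float32) / 255.0
    s, v = hsv[..., 1], hsv[..., 2]
    w_color = np.exp(-((v - s) ** 2) / (2 * 0.2 ** 2))
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    w_contrast = np.abs(cv2.Laplacian(gray, cv2.CV_32F))
    return w_color * (w_contrast + 1e-6)

def fuse_two_scale(images, radius=8, eps=1e-3):
    """Two-scale fusion on the luminance channel (grayscale for brevity;
    the dissertation fuses full-color images)."""
    grays = [cv2.cvtColor(im, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
             for im in images]
    bases = [cv2.ximgproc.guidedFilter(g, g, radius, eps) for g in grays]
    details = [g - b for g, b in zip(grays, bases)]
    weights = [exposure_weight(im) for im in images]
    # Refine each weight map with the guided filter, then normalize.
    weights = [cv2.ximgproc.guidedFilter(g, w, radius, eps)
               for g, w in zip(grays, weights)]
    wsum = np.sum(weights, axis=0) + 1e-12
    weights = [w / wsum for w in weights]
    base = sum(w * b for w, b in zip(weights, bases))
    detail = sum(w * d for w, d in zip(weights, details))
    return np.clip(base + detail, 0.0, 1.0)
```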
Keywords/Search Tags: image fusion, pixel-level fusion, multi-scale transformation, spatial-domain fusion, fusion defogging