
The Fusion Model And Algorithm Based On Variational And Multiscale Decomposition For Multi-modality Images

Posted on: 2022-10-07
Degree: Doctor
Type: Dissertation
Country: China
Candidate: Q X Wang
GTID: 1488306755959739
Subject: Mathematics
Abstract/Summary:
Image fusion research draws on sensor technology, image processing, computer science and artificial intelligence. The concept of image fusion was first introduced in the late 1970s. Pohl et al. define image fusion as the process of combining two or more input images to generate a new image by means of a specific algorithm. Because multisensor information is redundant and complementary, image fusion techniques can combine multisensor images captured from one scene at the same time or at different times, in order to obtain a fused image that describes the scene comprehensively and accurately. For example, in the medical field, the fusion of multi-modality images provides more information than any single-modality image and helps increase the clinical applicability of medical images for diagnosis and assessment. In recent years, image fusion technology has been applied in several areas, including medicine, aerospace, the military and topography. However, because of the limitations of imaging conditions, source images still suffer from problems such as noise and motion blur. In addition, multi-modality images carry different grey-level and texture information owing to their different imaging mechanisms. All of these problems pose great challenges to image fusion, and improving fusion quality and reducing computational complexity are expected to remain the focus of research in the coming years. The main work of this thesis is as follows:

In the first part, we propose a variational image fusion approach based on the total generalized variation (TGV) and local information. In the proposed model, the total generalized variation is used as the regularization term; as a generalization of the total variation, TGV incorporates higher-order information and can reduce the staircase effect caused by the total variation. We use the L2 norm as the fidelity term and extract local gradient information to calculate weight maps, which are used to obtain a fused image with stronger and clearer detail structures. The proposed model is solved by the primal-dual algorithm. In the numerical experiments, the proposed method and the comparison methods are evaluated on multi-focus images as well as medical images. The objective and subjective assessment results show that the proposed method preserves clearer edge and texture information.

In the second part, we propose a variational fusion model based on feature extraction and anisotropic diffusion for medical CT and MR images. In the proposed model, we use the L2 norm as the fidelity term to retain the salient intensity of the CT image. The regularization term is an L1 norm; it constrains the gradient of the fused image to approximate that of the MR image while keeping the fused image smooth. We then discuss the convexity of the proposed model and prove the existence and uniqueness of its solution, and the first-order primal-dual algorithm is used to solve the variational model. The experimental comparisons show that the proposed method preserves the high-intensity bone structures of CT images and the high-resolution soft tissues of MR images.
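To make the structure of such a model concrete, the following display shows one plausible (hypothetical) form of an L2-fidelity/L1-gradient fusion energy; the exact functional used in the thesis, including its anisotropic diffusion and feature-extraction components, is not reproduced here, and the symbols f_CT, f_MR and the weight lambda are illustrative assumptions:

\min_{u}\ \frac{1}{2}\,\| u - f_{\mathrm{CT}} \|_{2}^{2} \;+\; \lambda\,\| \nabla u - \nabla f_{\mathrm{MR}} \|_{1}

Here u is the fused image: the first term retains the salient intensities of the CT image f_CT, while the second term drives the gradients of u toward those of the MR image f_MR and promotes smoothness. Rewriting the nonsmooth L1 term through its dual variable yields the saddle-point form to which a first-order primal-dual algorithm can be applied.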
In order to preserve the salient intensity information from both CT and MR images, we further propose a two-stage fusion framework. Firstly, a saliency detection method and the structure tensor are used to extract the salient intensity information and the geometric structures of the medical images, respectively. The extracted intensity and structure features are used to construct weight maps, and an initial fused image is obtained from this prior information. A variational model is then proposed to optimize the initial fused image. In the numerical experiments, compared with seven state-of-the-art fusion methods, the proposed method shows a comprehensive advantage in preserving salient intensity features as well as texture and structure information, both in visual effect and in objective assessment.

In the third part, we discuss multiscale image analysis based on the nonsubsampled shearlet transform (NSST) and propose a fusion framework based on feature extraction and nonsubsampled shearlet decomposition. Firstly, contrast features are extracted from the input images by a saliency detection method in order to calculate weight maps, and an approximation image that preserves the intensity information of the source images is obtained through a weighted fusion process. Then, the nonsubsampled shearlet transform is applied to decompose the input images into a series of high-frequency sub-images at different scales and directions, and these sub-images are used to generate a detail image. The final fused image is obtained by fusing the approximation image and the detail image. Experimental results show that the proposed method produces higher-quality fused images than the comparison methods.
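As an illustration of the two-branch structure described above, the following Python sketch outlines a minimal version of such a framework. It is an assumption-laden outline rather than the thesis implementation: nsst_decompose, nsst_reconstruct and saliency_map are hypothetical placeholders supplied by the caller, the high-frequency sub-images are merged with a simple maximum-absolute-value rule chosen only for illustration, and the approximation and detail images are combined by addition; the fusion rules actually used in the thesis may differ.

import numpy as np

def fuse_two_branch(img_a, img_b, nsst_decompose, nsst_reconstruct, saliency_map):
    """Hypothetical two-branch fusion: saliency-weighted approximation image
    plus a detail image built from high-frequency shearlet sub-images."""
    # Approximation branch: saliency-based weight maps and a weighted average.
    s_a, s_b = saliency_map(img_a), saliency_map(img_b)
    w_a = s_a / (s_a + s_b + 1e-12)              # normalised weight map
    approx = w_a * img_a + (1.0 - w_a) * img_b

    # Detail branch: fuse the high-frequency sub-images scale by scale
    # (maximum-absolute-value rule, used here only for illustration).
    subs_a = nsst_decompose(img_a)               # list of high-frequency sub-images
    subs_b = nsst_decompose(img_b)
    fused_subs = [np.where(np.abs(ha) >= np.abs(hb), ha, hb)
                  for ha, hb in zip(subs_a, subs_b)]
    detail = nsst_reconstruct(fused_subs)        # detail image from fused sub-images

    # Final fusion of the approximation and detail images (additive assumption).
    return approx + detail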
Keywords/Search Tags: Image fusion, image analysis, variational problem, primal-dual algorithm, feature extraction, multiscale analysis, nonsubsampled shearlet transform