
Research On Multi-modal Brain Image Fusion Method Based On Sparse Representation

Posted on: 2019-07-21    Degree: Master    Type: Thesis
Country: China    Candidate: X Dong    Full Text: PDF
GTID: 2348330545991875    Subject: Engineering
Abstract/Summary:
With the rapid development of medical imaging and computer science, multi-modal medical imaging for computer-assisted diagnosis and treatment plays an indispensable role in modern clinical diagnosis. Because each type of imaging device has its own characteristics, the medical images acquired by different devices differ in kind and are complementary in content; integrating medical images of different modalities and applying them to clinical diagnosis and minimally invasive surgery is therefore an urgent task. By registering and integrating multiple images from one or more imaging modalities, medical image fusion improves imaging quality, reduces randomness and redundancy, and thereby greatly assists the accurate diagnosis of disease. This dissertation studies multi-modal brain image fusion and sparse representation theory in depth, aiming to advance the theory of medical image fusion and promote its wide application in clinical medicine. The primary contributions are as follows:

(1) Dictionary training is time-consuming, and a single dictionary can hardly provide an accurate sparse representation of brain medical images, so existing methods do not produce satisfactory results. To address this, a CT/MR brain image fusion method based on improved coupled dictionary learning is proposed. First, registered CT and MR image pairs are used as the training set, and coupled CT and MR dictionaries are obtained by joint dictionary training with an improved K-SVD algorithm. The atoms of the two dictionaries are treated as features of the training images, and a feature indicator is computed for each atom using information entropy: atom pairs whose indicators differ little are regarded as common features, and the remaining atoms as innovative features. A fusion dictionary is then built by fusing the common and innovative features of the CT and MR dictionaries with the "mean" and "choose-max" rules, respectively. Second, the registered source images are arranged into column vectors and their mean values subtracted; accurate sparse representation coefficients are computed with the CoefROMP algorithm over the fusion dictionary, and the coefficients and mean vectors are fused with the "2-norm max" and "weighted average" rules, respectively. Finally, the fused image is obtained by reconstruction. Experimental results demonstrate that the proposed method effectively improves both the quality of brain medical image fusion and the time efficiency of dictionary training.

(2) Globally trained dictionaries adapt poorly to brain medical images, and the "max-L1" fusion rule can cause gray-level inconsistency in the fused image, so existing methods again fall short. A multi-modal brain image fusion method based on adaptive joint dictionary learning is proposed to solve this problem. First, an adaptive joint dictionary is obtained by combining sub-dictionaries learned adaptively from the registered source images using an improved K-means-based Singular Value Decomposition (K-SVD) algorithm, and sparse representation coefficients are computed with the Coefficient Reuse Orthogonal Matching Pursuit (CoefROMP) algorithm over this dictionary. The activity level of each source image patch is then measured by the "multi-norm" of its sparse representation coefficients, and the coefficients are fused with an unbiased rule that combines "adaptive weighted average" and "choose-max". Finally, the fused image is reconstructed from the fused coefficients and the adaptive joint dictionary. Experimental results show that the fused images of the proposed method contain more detail, have better contrast and sharpness, preserve clear lesion edges, and score higher on objective evaluation indices, showing consistency between the subjective and objective evaluations.

(3) Based on the study of pixel-level image fusion and the characteristics of multi-modal brain images, a multi-modal medical image fusion system was developed on the Matlab platform. Its main functions are: loading registered source images, selecting a fusion method, displaying fusion results, and saving the fused image. The available fusion methods include the discrete wavelet transform, the Laplacian pyramid transform, the non-subsampled contourlet transform, K-SVD-based global dictionary learning, the proposed CT/MR image fusion method based on improved coupled dictionary learning, and the proposed multi-modal brain image fusion method based on adaptive joint dictionary learning. The system is stable, reliable, and runs in real time, and can be applied to clinical diagnosis and adjuvant therapy.
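As a rough illustration of the coefficient-fusion step in the coupled-dictionary method, the "2-norm max" rule for sparse coefficients and the "weighted average" rule for patch means might be sketched in NumPy as follows. The function name, the array shapes, and the energy-based choice of averaging weights are assumptions for illustration, not the thesis's exact implementation.

```python
import numpy as np

def fuse_sparse_patches(alpha_ct, alpha_mr, mean_ct, mean_mr):
    # alpha_*: (n_atoms, n_patches) sparse coefficient matrices over the
    # fusion dictionary; mean_*: (n_patches,) per-patch mean intensities.
    norm_ct = np.linalg.norm(alpha_ct, axis=0)  # l2 norm per patch column
    norm_mr = np.linalg.norm(alpha_mr, axis=0)
    # "2-norm max": for each patch, keep the coefficient column with the
    # larger Euclidean norm (the more active representation).
    alpha_f = np.where(norm_ct >= norm_mr, alpha_ct, alpha_mr)
    # "weighted average" of the subtracted means, weighted here by
    # coefficient energy (an assumed choice of weights).
    w = norm_ct / (norm_ct + norm_mr + 1e-12)
    mean_f = w * mean_ct + (1.0 - w) * mean_mr
    return alpha_f, mean_f
```

Reconstruction would then compute `D_fused @ alpha_f + mean_f` for each patch and reassemble the patches into the fused image.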
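The combined "adaptive weighted average" / "choose-max" rule of the adaptive joint dictionary method can likewise be sketched. The abstract does not specify which norm the "multi-norm" activity measure uses or the exact switching criterion, so the l1 norm and the threshold `tau` below are illustrative assumptions.

```python
import numpy as np

def fuse_coeffs_activity(alpha_a, alpha_b, tau=0.3):
    # Activity level of each patch, here taken as the l1 norm of its
    # sparse coefficient column (an assumed instance of the thesis's
    # "multi-norm" measure).
    act_a = np.abs(alpha_a).sum(axis=0)
    act_b = np.abs(alpha_b).sum(axis=0)
    w_a = act_a / (act_a + act_b + 1e-12)
    # "choose-max" where one source clearly dominates the patch ...
    dominant = np.abs(w_a - 0.5) > tau
    maxed = np.where(act_a >= act_b, alpha_a, alpha_b)
    # ... and "adaptive weighted average" where activity levels are close.
    averaged = w_a * alpha_a + (1.0 - w_a) * alpha_b
    return np.where(dominant, maxed, averaged)
```

Blending the two rules this way avoids biasing the fused coefficients toward either modality when both patches carry comparable information, which is one reading of the "unbiased rule" described above.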
Keywords/Search Tags: medical image fusion, K-SVD, CoefROMP, sparse representation, dictionary learning