The diversity of medical imaging mechanisms has promoted the development of medical research and clinical applications, and imaging has become an important part of modern medical systems. At the same time, function-limited single-modality imaging increasingly fails to meet the needs of complex disease diagnosis. Multimodal medical image fusion can significantly reduce data redundancy by fusing complementary information about the same scene into a single image, and it effectively improves the stereoscopic perception of pathological information in body tissue, with the potential to improve the efficiency and precision of complex disease diagnosis. Medical image fusion has therefore become an important auxiliary method for clinical research and disease diagnosis, and it has significant theoretical research value and broad application prospects.

This dissertation focuses on sparse representation (SR)-based multimodal medical image fusion. When applied to multimodal medical image fusion, conventional SR easily introduces visual artifacts, has weak local patch consistency, and carries high algorithmic complexity. To address these problems, two novel sparse representations, separable dictionary learning and convolutional SR, are used in the fusion research. The main innovative contributions are summarized as follows:

1. Multimodal medical image fusion based on separable dictionary learning

Compared with conventional SR, separable dictionary learning strikes a compromise between the computational efficiency of analytical dictionaries and the flexibility of learned dictionaries. Its advantages are as follows: without increasing dictionary redundancy, the sub-dictionaries perform sparse representation along multiple dimensions in matrix form rather than atom by atom, and the resulting sparse matrix can represent richer structural and textural features. This not only preserves more of the inherent structure and texture correlation information of the source
image, but also increases the flexibility of dictionary application. However, because of the characteristics of multimodal medical images, and because separable dictionary learning is mainly designed for processing small targets, directly applying the separable dictionary learning algorithm to image fusion easily causes texture information loss and spatial inconsistency in the fusion result.

To address the texture information loss caused by using a single activity-level measure when separable dictionary learning is applied to multimodal medical image fusion, a fusion method based on texture contrast and the sum of sparse salient features (SSSF) is proposed, combining spatial saliency and transform saliency. The texture contrast characterizes spatial saliency through a combination of luminance contrast and directional contrast, while the SSSF lets more significant sparse coefficients participate in the construction of the activity measure through the differences between adjacent regions. Experimental results show that the proposed fusion method retains more complete texture detail.

To address the spatial inconsistency of the fusion result caused by the sliding-window technique in separable dictionary learning, a fusion method based on separable dictionary learning and energy guidance is proposed. Gabor energy is used to construct the fusion weights of the low-frequency sub-bands, which increases the proportion of insignificant textures in flat areas. Experimental results show that the proposed fusion method effectively reduces spatial inconsistency.

2. Multimodal medical image fusion based on separable dictionary learning and guided filtering

Edge-preserving filtering is a fast implementation technique for image fusion. Because separable dictionary learning does not focus on edge feature extraction, and therefore easily blurs the edges of the fused image, we combine it with guided filtering, whose good edge-retention performance can
improve the edge feature extraction of separable dictionary learning; its good noise-suppression performance also improves the sharpness of the fused image. However, because of how the saliency map is constructed, and because of the inherent limitations of the filter's local linear model, guided-filtering-based image fusion suffers from shortcomings such as spatial inconsistency and edge halo artifacts in the fusion results.

Given that the construction of the saliency map in guided-filtering-based image fusion tends to reduce the accuracy of the base-layer fusion weights, causing spatial inconsistency in the fusion results, a fusion method based on separable dictionary learning and energy-guided guided filtering is proposed, in which a Gabor filter is used to construct the initial weight map of the guided filter. Experimental results show that the proposed fusion method not only maintains clear edge features but also improves the spatial consistency of flat areas.

To address the halo artifacts that the local linear model of guided filtering easily produces at the edges of the fused image, we propose a fusion method based on optimized separable dictionary learning and gradient guided filtering. Here, the Gabor energy map combined with the first-order edge-aware constraint of gradient guided filtering extracts edge features more accurately, and the separable dictionary learning uses the FISTA algorithm instead of OMP for sparse approximation, which improves the efficiency of sparse coding. Experimental results show that the proposed fusion method has advantages in texture sharpness and edge preservation.

3. Multimodal medical image fusion based on adaptive convolutional sparse representation

Compared with the multi-valued reconstruction of patch-based SR methods such as separable dictionary learning, in which overlapping patches yield multiple estimates per pixel, the fusion strategy of convolutional sparse representation (CSR) is single-valued and globally optimized, which is more conducive to the
preservation of texture information in the fusion result. On this basis, we propose an adaptive CSR-based fusion method. In it, the non-subsampled contourlet transform (NSCT) performs feature classification of the convolutional sparse coefficient maps, which enhances the correlation of the corresponding sparse sub-bands and helps the subsequent sub-band correlation measurement perform better. The sub-band correlation uses adjacent global sub-bands instead of the local-window-based measure of conventional CSR-based fusion, which further improves the robustness of the fusion result to mis-registration. In addition, the fusion target in the above framework serves simultaneously as training sample and test sample, which reduces the uncertainty of the fusion performance. Experimental results show that the proposed fusion method has advantages in preserving structure and texture details.
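As an illustrative sketch only, not the dissertation's actual implementation, the FISTA-based sparse approximation mentioned in contribution 2 solves a standard l1-regularized least-squares problem. The dictionary shape, signal, and parameter values below are placeholder assumptions:

```python
import numpy as np

def fista_sparse_code(D, y, lam=0.1, n_iter=100):
    """Solve min_x 0.5*||D x - y||^2 + lam*||x||_1 with FISTA.

    D: dictionary (m x k), y: signal (m,). Returns sparse code x (k,).
    """
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the smooth term's gradient
    x = np.zeros(D.shape[1])
    z = x.copy()                   # extrapolated (momentum) point
    t = 1.0
    for _ in range(n_iter):
        g = D.T @ (D @ z - y)      # gradient of 0.5*||Dz - y||^2 at z
        w = z - g / L              # gradient step
        # soft-thresholding (proximal operator of the l1 term)
        x_new = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)  # Nesterov extrapolation
        x, t = x_new, t_new
    return x

# Toy demonstration with a random dictionary and a 3-sparse signal.
rng = np.random.default_rng(0)
D = rng.standard_normal((30, 60))
D /= np.linalg.norm(D, axis=0)     # unit-norm dictionary columns
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [1.0, -0.8, 0.5]
y = D @ x_true
x_hat = fista_sparse_code(D, y, lam=0.01, n_iter=500)
```

Unlike OMP's greedy atom selection, each FISTA iteration costs only two matrix-vector products, which is why it is the usual choice when many patches must be coded at once.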
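The guided filter used in contribution 2 can likewise be sketched from its published local linear model (He et al.); the window radius and regularization `eps` below are illustrative choices, not values from the dissertation:

```python
import numpy as np

def box_filter(img, r):
    """Mean over a (2r+1)x(2r+1) window via integral images, edge-padded."""
    pad = np.pad(img, r, mode='edge')
    c = np.cumsum(np.cumsum(pad, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))        # zero row/col so window sums index cleanly
    d = 2 * r + 1
    h, w = img.shape
    return (c[d:d+h, d:d+w] - c[:h, d:d+w] - c[d:d+h, :w] + c[:h, :w]) / d**2

def guided_filter(I, p, r=4, eps=1e-2):
    """Edge-preserving filtering of p guided by I, via the local linear model
    q = a*I + b fitted per window (He et al.)."""
    mean_I, mean_p = box_filter(I, r), box_filter(p, r)
    var_I = box_filter(I * I, r) - mean_I ** 2
    cov_Ip = box_filter(I * p, r) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)             # linear coefficient per window
    b = mean_p - a * mean_I
    # average the coefficients of all windows covering each pixel
    return box_filter(a, r) * I + box_filter(b, r)
```

The `eps` term controls the edge/flat trade-off: windows whose variance is well above `eps` keep `a` near 1 (edges pass through), while flat windows get `a` near 0 and are smoothed, which is the behavior the fusion weight maps in contribution 2 rely on.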