With the rapid development of medical imaging, medical images have attracted wide attention, and the use of medical images for lesion localization and analysis has gradually become an important and effective means of medical diagnosis. Multimodal image fusion combines complementary features from different input images to generate a composite image that accords with the human visual system, so that medical professionals no longer need to analyze images from each imaging device separately, which improves the accuracy of lesion analysis and clinical diagnosis. Research on multimodal medical image fusion therefore has high practical value; image fusion techniques are also widely applied in remote sensing, video surveillance, and photography. Although multimodal image fusion has reached a fairly mature level, several problems in this field remain unsolved. This paper focuses on medical images, and its main contributions can be summarized as follows:

(1) To address the abnormal artifacts and loss of detail produced by general sparse representation-based methods, a multimodal medical image fusion method using multiscale edge-preserving decomposition and sparse representation is proposed. First, the source images are decomposed by an edge-preserving filter into smooth and detail layers. Then, an improved sparse representation strategy fuses the smooth layers: a block selection-based scheme constructs the dataset used to train the joint dictionary, and a novel multi-norm activity level measurement selects the sparse coefficients. Meanwhile, the detail layers are merged by an adaptive weighted local regional energy rule. Finally, the fused smooth and detail layers are recombined to obtain the fused image. Comparison experiments on medical images from three imaging modalities demonstrate that the proposed framework preserves more salient edge features, improves contrast, and achieves better fusion performance than competing algorithms in terms of both visual effect and objective evaluation.

(2) Multiscale transform-based fusion methods have difficulty extracting all the salient features of the source images simultaneously. To overcome this limitation, a two-scale fusion framework for multimodal medical images is presented. In this framework, a guided filter roughly decomposes the source images into base and detail layers, separating their two main characteristics, namely structural information and texture details. To preserve most of the structural information, the base layers are fused by a combined Laplacian pyramid and sparse representation rule; the detail layers are then merged by a guided filtering-based approach that enhances contrast while filtering noise as much as possible. The fused base and detail layers are recombined to generate the fused image. Comparisons in terms of visual effect and objective assessment demonstrate that the proposed method provides a better visual effect with improved objective measurements, because it effectively preserves meaningful salient edge features and image energy without producing abnormal details.

(3) Multiscale transform-based methods also have difficulty separating structural and functional information. This paper therefore proposes a novel two-scale framework that utilizes the interval gradient and a robust Retinex model for the fusion of multimodal medical images. First, the two basic characteristics of the source images, structural information and texture details, are completely separated by a pre-designed interval gradient model that performs structure-texture decomposition. Then, the structural information is merged by an improved sparse representation rule, in which a novel global dictionary is trained on a dataset of patches extracted from multiple medical and natural images. Next, a detail enhancement scheme based on a robust Retinex model highlights the remaining texture details. Finally, the fused image is obtained by integrating the merged structural information and the enhanced texture details. Experimental results indicate that the proposed method achieves more competitive fusion performance on two pseudo-color medical image fusion problems.
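The two-scale (base/detail) decomposition underlying contributions (1) and (2) can be sketched with a self-guided guided filter. The snippet below is a minimal illustration using the classic box-filter formulation of the guided filter; the function names and parameter values (`radius`, `eps`) are assumptions, not the thesis implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=4, eps=1e-3):
    """Classic guided filter; local linear model src ~ a*guide + b."""
    size = 2 * radius + 1
    mean_g = uniform_filter(guide, size)
    mean_s = uniform_filter(src, size)
    corr_gs = uniform_filter(guide * src, size)
    corr_gg = uniform_filter(guide * guide, size)
    var_g = corr_gg - mean_g * mean_g
    cov_gs = corr_gs - mean_g * mean_s
    a = cov_gs / (var_g + eps)          # eps regularizes flat regions
    b = mean_s - a * mean_g
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def two_scale_decompose(img, radius=4, eps=1e-3):
    """Split an image into a smooth base layer and a detail residual."""
    base = guided_filter(img, img, radius, eps)   # self-guided smoothing
    detail = img - base                           # exact residual
    return base, detail
```

Because the detail layer is defined as the residual, adding the fused base and detail layers back together reconstructs the image range exactly, which is what makes this decomposition convenient for fusion pipelines.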
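Contributions (1) and (3) fuse patches in the sparse domain by selecting the coefficient vector with the higher activity level. The toy sketch below replaces the thesis's trained joint dictionary and multi-norm measurement with an illustrative dictionary and a plain max-L1 rule; `omp` and `fuse_patches` are hypothetical helper names introduced here for illustration only.

```python
import numpy as np

def omp(D, x, n_nonzero=5):
    """Orthogonal matching pursuit: greedy sparse coding of x over dictionary D."""
    residual = x.copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        idx = int(np.argmax(np.abs(D.T @ residual)))  # most correlated atom
        if idx not in support:
            support.append(idx)
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ sol            # re-fit on the support
    coef[support] = sol
    return coef

def fuse_patches(D, x1, x2, n_nonzero=5):
    """Max-L1 activity rule: keep the sparse code with the larger L1 norm."""
    a1 = omp(D, x1, n_nonzero)
    a2 = omp(D, x2, n_nonzero)
    fused = a1 if np.abs(a1).sum() >= np.abs(a2).sum() else a2
    return D @ fused
```

In a full pipeline this rule is applied patch-by-patch over the base layers, and the fused patches are averaged back into an image; the L1 norm here stands in for the richer multi-norm activity measurement described in contribution (1).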
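The Retinex-based detail enhancement in contribution (3) can be illustrated, very loosely, with a single-scale Retinex boost: estimate illumination with a blur, then amplify the log-domain reflectance (texture) component. The thesis uses an optimization-based robust Retinex decomposition, so this is only a conceptual sketch with assumed parameters (`sigma`, `gain`).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def retinex_detail_boost(img, sigma=3.0, gain=1.2, eps=1e-6):
    """Single-scale Retinex-style enhancement of an intensity image in [0, 1]."""
    illum = gaussian_filter(img, sigma)                     # illumination estimate
    reflect = np.log(img + eps) - np.log(illum + eps)       # log-domain texture
    boosted = np.exp(gain * reflect) * (illum + eps)        # amplify texture only
    return np.clip(boosted, 0.0, 1.0)
```

Because only the reflectance term is scaled, smooth regions (where reflectance is near zero) are left essentially unchanged while local texture contrast is raised.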