With the continuous development of medical imaging, medical images have become an effective tool for disease diagnosis, and each imaging modality has its own advantages and disadvantages. For example, Computed Tomography (CT) images display skeletal information well but cannot clearly characterize structural information such as soft tissue. Magnetic Resonance (MR) images display soft-tissue information adequately but represent skeletal information poorly. Positron Emission Tomography (PET) images provide rich clinical information on human metabolism, but their resolution is low. Integrating image information from multiple modalities into one image through multi-modal image fusion can therefore provide complementary information from different modalities. Multi-modal fused images retain the features of the original images while compensating for the limitations of single-modal images, which helps doctors diagnose diseases accurately and efficiently. Based on research into multi-modal medical image fusion and deep learning theory, this paper analyzes the problems of existing multi-modal image fusion methods and improves on them as follows.

Existing multi-modal image fusion methods take multi-modal images as input, which requires imaging patients multiple times, harming patients' bodies and incurring large costs; moreover, image fusion needs a large number of registered images, which are time-consuming and difficult to obtain, and the resulting fused images often have unclear texture and structure. Therefore, a weakly supervised medical image fusion method with modal synthesis and enhancement based on Cycle GAN and Octopus Net is proposed. In modal synthesis, a weakly supervised approach is used to train the model, reducing the requirement for registered images; MR images are used as input to synthesize CT images through a deep-structure and shallow-detail generator, which reduces the number of required input modalities and makes the texture and structure clearer. In image enhancement, MR images are passed through a trained generator to produce enhanced MR images with sharper texture and structure. The synthesized CT images and enhanced MR images, together with the original PET images, are then used as input to achieve tri-modal image fusion. Compared with 13 state-of-the-art modal synthesis and image fusion methods on the same datasets, the proposed method achieves significantly better performance on 7 objective evaluation metrics, and its subjective visual effect also surpasses that of the compared image fusion methods.

To address the loss of semantic information in the fused images produced by existing multi-modal medical image fusion methods, and to make the fused images retain richer semantic information, we propose a multi-modal medical image fusion method based on CDDFuse and GNN. The method builds on the dual-branch encoder-decoder framework of Correlation-Driven Dual-Branch Feature Decomposition for Multi-Modality Image Fusion (CDDFuse). It trains the model in two stages and obtains multi-modal fused images through a GNN Dual-branch Semantic Encoder, a Fusion Layer, and a Decoder. The GNN Dual-branch Semantic Encoder contains a GNN O-Semantic Encoder, a GNN L-Semantic Encoder, and a Shared Encoder. The GNN O-Semantic Encoder captures image context semantics using the relational modeling capability of graph neural networks and extracts richer overall semantic information from the images. The GNN L-Semantic Encoder enhances the semantic information of lesion regions using the relational modeling and lesion-region identification capabilities of graph neural networks. The Shared Encoder refines the global information of the images. Compared with 6 state-of-the-art image fusion methods on the same datasets, the proposed method achieves significantly better performance on 4 objective evaluation metrics.

For the two improved image fusion methods proposed in this paper, a multi-modal medical image fusion system is developed based on the front-end VUE framework. The system contains three modules: Login, Weakly Supervised Medical Image Fusion with Modal Synthesis and Enhancement based on Cycle GAN and Octopus Net, and Multi-modal Medical Image Fusion based on CDDFuse and GNN. The Home page shows the overall structure of the multi-modal medical image fusion system and an overview of the image fusion methods. Both the Weakly Supervised Medical Image Fusion with Modal Synthesis and Enhancement based on Cycle GAN and Octopus Net module and the Multi-modal Medical Image Fusion based on CDDFuse and GNN module display the image fusion results of the corresponding method and of different comparison methods on the same source images, calculate the evaluation metrics, and compare the image fusion results of each method. The system aims to assist the medical clinical field by comparing and selecting high-quality image fusion results in terms of both subjective visual quality and objective evaluation metric values.
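The weakly supervised modal-synthesis training described above rests on Cycle GAN's cycle-consistency idea: unpaired MR and CT images must survive a round trip through the two generators, so no pixel-registered MR/CT pairs are required. The sketch below illustrates only that loss, using toy linear "generators" on flattened images; the function names (`g_mr2ct`, `f_ct2mr`), the linear maps, and the random data are illustrative assumptions, not the thesis's actual networks.

```python
import numpy as np

# Toy stand-ins for the two generators: the real Cycle GAN generators are
# deep CNNs; here each "generator" is a fixed linear map on flattened images.
rng = np.random.default_rng(0)
W_g = rng.normal(scale=0.1, size=(16, 16))   # g: MR -> synthesized CT
W_f = np.linalg.pinv(W_g)                    # f: CT -> reconstructed MR (near-inverse)

def g_mr2ct(mr):
    return mr @ W_g

def f_ct2mr(ct):
    return ct @ W_f

def cycle_consistency_loss(mr_batch, ct_batch):
    """L1 cycle loss on UNPAIRED batches: each image should be recovered
    after a round trip, which is what removes the need for registered pairs."""
    mr_cycle = f_ct2mr(g_mr2ct(mr_batch))    # MR -> CT -> MR
    ct_cycle = g_mr2ct(f_ct2mr(ct_batch))    # CT -> MR -> CT
    return np.abs(mr_cycle - mr_batch).mean() + np.abs(ct_cycle - ct_batch).mean()

mr = rng.random((4, 16))   # 4 flattened "MR" images
ct = rng.random((4, 16))   # 4 unpaired "CT" images
loss = cycle_consistency_loss(mr, ct)
print(loss)
```

Because the toy `f` is constructed as the pseudo-inverse of `g`, the round trip is nearly exact and the loss is close to zero; during real training, the generators are optimized so that this loss (plus adversarial terms) drives them toward the same property.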
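The GNN semantic encoders above exploit the relational modeling capability of graph neural networks over image regions. A minimal sketch of that mechanism is one round of mean-aggregation message passing over patch features on a grid graph; the grid connectivity, feature sizes, and single ReLU layer are simplifying assumptions for illustration, not the thesis's actual encoder architecture.

```python
import numpy as np

def grid_adjacency(h, w):
    """4-neighbour adjacency matrix for an h x w grid of image patches."""
    n = h * w
    adj = np.zeros((n, n))
    for r in range(h):
        for c in range(w):
            i = r * w + c
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < h and 0 <= cc < w:
                    adj[i, rr * w + cc] = 1.0
    return adj

def gnn_layer(feats, adj, weight):
    """One message-passing step: average each patch's neighbour features
    (plus itself), then apply a linear transform and ReLU."""
    adj_hat = adj + np.eye(adj.shape[0])      # add self-loops
    deg = adj_hat.sum(axis=1, keepdims=True)
    msgs = (adj_hat @ feats) / deg            # mean aggregation over neighbours
    return np.maximum(msgs @ weight, 0.0)     # ReLU

rng = np.random.default_rng(0)
feats = rng.random((9, 8))    # 3x3 grid of patches, 8-dim feature per patch
adj = grid_adjacency(3, 3)
weight = rng.normal(scale=0.5, size=(8, 8))
out = gnn_layer(feats, adj, weight)
print(out.shape)
```

After such a step, each patch's feature encodes its neighbourhood context, which is the sense in which the O-Semantic and L-Semantic encoders can capture overall context and relate lesion regions to their surroundings.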