Multimodal medical images can compensate for the limitations of individual imaging modalities and thus provide a reliable basis for doctors to diagnose diseases quickly and treat them appropriately. However, acquiring medical images of different modalities simultaneously in clinical practice takes considerable time and money. Medical image synthesis and fusion is a feasible way to obtain multimodal medical images, and it has therefore attracted the attention of many researchers. Existing medical image synthesis algorithms need to build separate models for different input modalities and lack versatility and flexibility. To address this problem, an invertible and variable augmented network (iVAN) is proposed, which is suitable not only for synthesis from single-modal inputs but also for synthesis from multi-modal inputs. To better learn the bidirectional mapping between source and target images, and to effectively combine information from multiple input modalities, the iVAN model is built using a variable augmentation technique and affine coupling layers. In addition, bidirectional training with a forward loss and a reverse loss is designed to optimize the model and avoid blurred target images. The proposed model is also suitable for medical image fusion: the stack of images to be fused is used as the network input, and the fused image is obtained from the trained model. The proposed model is evaluated on two medical image datasets. Its synthesis performance improves further when multiple input modalities are used, demonstrating that the model can effectively exploit the complementary information of different modalities to improve synthesis quality. In addition, compared with several classical medical image fusion methods, the model outperforms them in both objective image quality metrics and subjective visual quality.
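As an illustration only: the abstract mentions affine coupling layers, variable augmentation, and bidirectional (forward/reverse) training, but gives no implementation details. The PyTorch sketch below shows a generic affine coupling layer with closed-form forward and inverse passes, a zero-padding style of channel augmentation, and an L1 bidirectional loss; the layer sizes, padding scheme, and loss choice are assumptions for the sketch, not the authors' actual architecture.

```python
import torch
import torch.nn as nn


class AffineCoupling(nn.Module):
    """Generic affine coupling layer: one half of the channels predicts a
    scale and shift for the other half, so both the forward mapping and its
    exact inverse are available in closed form."""

    def __init__(self, channels, hidden=64):
        super().__init__()
        self.half = channels // 2
        self.rest = channels - self.half
        self.net = nn.Sequential(
            nn.Conv2d(self.half, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, 2 * self.rest, 3, padding=1),
        )

    def forward(self, x):
        x1, x2 = x.split([self.half, self.rest], dim=1)
        log_s, t = self.net(x1).chunk(2, dim=1)
        s = torch.sigmoid(log_s + 2.0)          # keep scales positive and bounded
        return torch.cat([x1, x2 * s + t], dim=1)

    def inverse(self, y):
        y1, y2 = y.split([self.half, self.rest], dim=1)
        log_s, t = self.net(y1).chunk(2, dim=1)
        s = torch.sigmoid(log_s + 2.0)
        return torch.cat([y1, (y2 - t) / s], dim=1)


def augment(x, target_channels):
    """One possible form of variable augmentation (assumed here): pad the
    input with extra zero channels so source and target stacks have the same
    dimensionality, which an invertible mapping requires."""
    b, c, h, w = x.shape
    pad = torch.zeros(b, target_channels - c, h, w, device=x.device)
    return torch.cat([x, pad], dim=1)


if __name__ == "__main__":
    # Hypothetical single-modality example: 1-channel source, 1-channel target,
    # both augmented to 2 channels so the coupling layer can split them.
    layer = AffineCoupling(channels=2)
    src = augment(torch.randn(4, 1, 64, 64), target_channels=2)
    tgt = augment(torch.randn(4, 1, 64, 64), target_channels=2)

    # Bidirectional training signal: forward loss on the synthesized target,
    # reverse loss on the reconstructed source (L1 is an assumption).
    l1 = nn.L1Loss()
    loss = l1(layer(src), tgt) + l1(layer.inverse(tgt), src)
    loss.backward()
```

For a multi-modal input, the same pattern would stack several source modalities as channels before augmentation; in practice a full invertible network would chain many such coupling layers with channel permutations between them.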