The fusion of multi-modality medical images, such as computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET), can provide complementary disease information. However, in clinical analysis and computer-aided diagnosis, essential modalities are often missing because of high imaging costs, radiation hazards, a lack of imaging facilities, and other practical constraints. Because it offers a low-cost source of complete multi-modality data and provides great value for clinical research and computer-aided diagnosis, medical imaging cross-modality synthesis has become one of the mainstream research directions in medical image analysis, although it still faces many challenges. Traditional machine learning methods depend heavily on the quality of hand-crafted features and on the prior knowledge of domain experts, which limits their applicability, whereas the rise of deep learning provides new ways to address these challenges. Cross-modality synthesis of medical images with deep learning therefore has strong research prospects and value.

In this paper, an adversarial U-shaped neural network is designed for medical imaging cross-modality synthesis. It has strong adaptive feature extraction and learning ability. The skip connections between the expanding (decoder) subnetwork and the contracting (encoder) subnetwork give the network a larger effective receptive field, and the adversarial training strategy effectively improves the quality of the synthesized target images. This paper also examines the behavior of different normalization methods in image synthesis and adopts instance normalization instead of batch normalization, which yields better results.

Compared with other convolutional neural networks, the generative adversarial network has superior generative ability. However, a conventional generative adversarial network tends to ignore the latent-vector input in the absence of prior guidance, and a two-dimensional synthesis model is limited when applied to three-dimensional medical images. This paper therefore proposes a three-dimensional bidirectional mapping generative adversarial network to further study the cross-modality synthesis of medical images. First, a bidirectional mapping mechanism between the latent space and the target images is introduced into the generative adversarial network to improve the accuracy and feature-matching quality of the cross-modality mapping. Second, a generator built from a three-dimensional densely connected U-shaped network is constructed to preserve the spatial structural features of three-dimensional medical images. Third, a hybrid loss function composed of an adversarial loss, a Kullback-Leibler divergence constraint, a reconstruction loss, and a perceptual loss is designed to improve the training efficiency of the proposed three-dimensional model. Finally, extensive ablation and comparison experiments demonstrate the strong performance of the proposed method, and the usefulness of the synthetic images in an Alzheimer's disease classification experiment and a data augmentation experiment further confirms its effectiveness for medical imaging cross-modality synthesis.
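
To make the U-shaped generator design concrete, the sketch below is a minimal PyTorch illustration of a one-level encoder-decoder with a skip connection and instance normalization in place of batch normalization. The class names, channel widths, depth, and 2D formulation are illustrative assumptions, not the exact configuration reported in the paper.

```python
import torch
import torch.nn as nn


class ConvBlock(nn.Module):
    """Two 3x3 convolutions, each followed by instance normalization and ReLU.
    Instance normalization normalizes each sample and channel independently,
    the choice adopted here instead of batch normalization."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.InstanceNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.InstanceNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)


class TinyUNetGenerator(nn.Module):
    """One-level U-shaped generator: a contracting path, an expanding path,
    and a skip connection that concatenates encoder features with the
    upsampled decoder features, enlarging the decoder's receptive field."""

    def __init__(self, in_ch=1, out_ch=1, base=32):
        super().__init__()
        self.enc = ConvBlock(in_ch, base)                # contracting subnetwork
        self.down = nn.MaxPool2d(2)
        self.bottleneck = ConvBlock(base, base * 2)
        self.up = nn.ConvTranspose2d(base * 2, base, kernel_size=2, stride=2)
        self.dec = ConvBlock(base * 2, base)             # expanding subnetwork
        self.head = nn.Conv2d(base, out_ch, kernel_size=1)

    def forward(self, x):
        e = self.enc(x)
        b = self.bottleneck(self.down(e))
        d = self.dec(torch.cat([self.up(b), e], dim=1))  # skip connection
        return torch.tanh(self.head(d))                  # synthesized target-modality image


if __name__ == "__main__":
    src = torch.randn(1, 1, 128, 128)                    # dummy source-modality slice
    print(TinyUNetGenerator()(src).shape)                # torch.Size([1, 1, 128, 128])
```

In an adversarial setup, this generator would be trained jointly with a discriminator that scores synthesized versus real target-modality images.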
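
The hybrid objective of the three-dimensional model combines an adversarial term, a Kullback-Leibler divergence constraint on the latent code from the bidirectional mapping, a reconstruction term, and a perceptual term. The following is a hedged sketch of how such a combination might be expressed in PyTorch, assuming an L1 reconstruction term, a least-squares adversarial term, a standard-normal KL prior, and a generic fixed feature extractor for the perceptual term; the weighting factors and exact term definitions are assumptions, not the paper's reported settings.

```python
import torch
import torch.nn.functional as F


def kl_standard_normal(mu, logvar):
    """KL divergence between N(mu, sigma^2) and N(0, I), averaged over the batch."""
    return -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())


def hybrid_generator_loss(d_fake, fake, real, mu, logvar, feat_fn,
                          w_kl=0.1, w_rec=10.0, w_perc=1.0):
    """Sum of the four terms in the hybrid objective (weights are illustrative).

    d_fake     : discriminator scores for the synthesized volumes
    fake, real : synthesized and ground-truth target volumes
    mu, logvar : parameters of the encoded latent distribution
    feat_fn    : a fixed feature extractor used for the perceptual term
    """
    adv = F.mse_loss(d_fake, torch.ones_like(d_fake))  # least-squares adversarial term
    kl = kl_standard_normal(mu, logvar)                # KL-divergence constraint
    rec = F.l1_loss(fake, real)                        # voxel-wise reconstruction loss
    perc = F.l1_loss(feat_fn(fake), feat_fn(real))     # perceptual (feature-space) loss
    return adv + w_kl * kl + w_rec * rec + w_perc * perc


if __name__ == "__main__":
    # Toy usage with random tensors and an identity "feature extractor".
    d_fake = torch.rand(2, 1)
    fake, real = torch.rand(2, 1, 16, 16, 16), torch.rand(2, 1, 16, 16, 16)
    mu, logvar = torch.zeros(2, 64), torch.zeros(2, 64)
    print(hybrid_generator_loss(d_fake, fake, real, mu, logvar, lambda v: v))
```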