Alzheimer’s disease (AD) is one of the most common neurodegenerative diseases and seriously endangers patients’ lives and health. Magnetic resonance imaging (MRI) and positron emission tomography (PET) provide structural and functional information about the brain, respectively, and recent studies in China and abroad show that combining the MRI and PET images of the same subject helps improve the accuracy of auxiliary AD diagnosis. In routine acquisition, however, PET imaging is expensive and involves high radiation exposure, so most subjects have only MRI images and few or no PET images, leaving a large number of PET images missing from AD data sets. The data available for training auxiliary AD diagnosis models are therefore insufficient, and diagnostic performance suffers. This paper thus first uses deep learning to complete the missing PET images from the MRI data, and then uses multimodal learning to fuse the MRI and PET images of the same subject for auxiliary AD diagnosis. Prior studies have confirmed the feasibility of generating a subject’s missing PET images from MRI images acquired in the same period: relevant work has appeared at the MICCAI conference and in journals such as IEEE Transactions on Medical Imaging, and institutions including the Institute of Computing Technology of the Chinese Academy of Sciences and the University of North Carolina are conducting related research and have published results. The specific contributions of this paper are as follows.

(1) To address the large number of missing PET images in current AD-related data sets, and the consequently small training samples available to traditional auxiliary diagnosis methods, this paper proposes a deep-learning-based PET image generation method that uses MRI images acquired in the same period to generate the subject’s missing PET images for that stage. Specifically, the method improves the CycleGAN generator by introducing the U-Net architecture and adding an attention mechanism. With attention, the generator can focus on the positional correspondence between regions of the MRI and PET images and on regions that are difficult to synthesize, and thereby generate high-quality PET images (a minimal sketch of such a generator follows below).

(2) At present, the feature fusion of MRI and PET images and the diagnosis itself are treated as two independent stages, which degrades the performance of AD diagnosis models, and traditional multimodal fusion methods cannot learn the latent complementary features shared by MRI and PET images. This paper therefore proposes a multimodal-learning-based AD diagnosis method that uses the MRI and PET images of the same subject jointly. Specifically, the method adopts a hierarchical decomposition strategy and integrates the feature fusion of the two modalities and auxiliary AD diagnosis into a unified model through deep non-negative matrix factorization, so as to learn their latent complementary features (a layer-wise sketch follows below). In addition, to address the missing labels of generated PET images, the model assigns each generated PET image the diagnostic label of the same subject’s real MRI image, which further improves diagnostic accuracy.
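The abstract names a U-Net-style CycleGAN generator with an attention mechanism for contribution (1) but does not give the concrete architecture. The following PyTorch sketch shows one common realization of that idea, with additive attention gates on the U-Net skip connections; the 2D slice-level formulation, layer widths, and the `AttentionGate` design are illustrative assumptions, not the paper’s actual network.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with instance norm, common in CycleGAN-style generators
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class AttentionGate(nn.Module):
    """Additive attention gate in the style of Attention U-Net (Oktay et al.):
    the decoder signal g re-weights the encoder skip features x before
    concatenation, so hard-to-generate regions can receive more emphasis."""
    def __init__(self, g_ch, x_ch, inter_ch):
        super().__init__()
        self.wg = nn.Conv2d(g_ch, inter_ch, 1)
        self.wx = nn.Conv2d(x_ch, inter_ch, 1)
        self.psi = nn.Sequential(nn.Conv2d(inter_ch, 1, 1), nn.Sigmoid())

    def forward(self, g, x):
        a = self.psi(torch.relu(self.wg(g) + self.wx(x)))  # spatial map in [0, 1]
        return x * a  # suppress irrelevant regions of the skip features

class AttentionUNetGenerator(nn.Module):
    """Maps an MRI slice to a synthetic PET slice (assumed single-channel)."""
    def __init__(self, in_ch=1, out_ch=1, base=32):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.att2 = AttentionGate(base * 2, base * 2, base)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.att1 = AttentionGate(base, base, base // 2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, out_ch, 1)

    def forward(self, mri):
        e1 = self.enc1(mri)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.up2(b)
        d2 = self.dec2(torch.cat([self.att2(d2, e2), d2], dim=1))
        d1 = self.up1(d2)
        d1 = self.dec1(torch.cat([self.att1(d1, e1), d1], dim=1))
        return torch.tanh(self.head(d1))  # synthetic PET in [-1, 1]
```

Trained inside a CycleGAN (an adversarial loss plus a cycle-consistency loss against a reverse PET-to-MRI generator), the gates learn spatial maps that emphasize the regions the generator finds hardest to synthesize.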
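Likewise, the abstract names deep non-negative matrix factorization as the fusion mechanism of contribution (2) without giving its formulation. Below is a greedy, layer-wise sketch using scikit-learn: the MRI and PET feature matrices are stacked and repeatedly factorized, so the final factor acts as a shared representation that must explain both modalities at once. The function name, ranks, and stacking scheme are assumptions for illustration, and the paper’s unified model additionally couples the factorization with the diagnosis objective, which this sketch omits.

```python
import numpy as np
from sklearn.decomposition import NMF

def deep_nmf_fusion(x_mri, x_pet, ranks=(64, 16), seed=0):
    """Two-layer NMF over stacked modality features: X ~= W1 @ W2 @ H.

    x_mri, x_pet: non-negative feature matrices of shape (n_features, n_subjects).
    Stacking the rows couples the factorizations, so the final factor H must
    explain both modalities jointly -- a simple stand-in for learning their
    latent complementary structure.
    """
    h = np.vstack([x_mri, x_pet])          # (f_mri + f_pet, n_subjects)
    for r in ranks:                        # greedy layer-wise factorization
        model = NMF(n_components=r, init="random", max_iter=500, random_state=seed)
        model.fit_transform(h)             # basis W for this layer (not reused here)
        h = model.components_              # (r, n_subjects) non-negative activations
    return h.T                             # one fused feature vector per subject
```

The returned per-subject features can then be fed to a standard classifier, with each generated PET image inheriting the diagnostic label of the subject’s real MRI as described above.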
(3) Traditional auxiliary diagnosis methods use 2D slices of MRI and PET images, which limits diagnostic accuracy. This paper instead uses the 3D MRI and PET volumes of the same subject for auxiliary AD diagnosis (a minimal 3D classifier sketch closes this section). 3D volumes provide more comprehensive disease information, make it easier to mine the latent complementary features between MRI and PET images, and help improve diagnostic accuracy.

A series of experiments on the public ADNI benchmark data set shows that the proposed methods perform well on both PET image generation and auxiliary AD diagnosis.
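Contribution (3) hinges on feeding whole 3D volumes, rather than 2D slices, to the diagnosis model. The sketch below is a minimal 3D CNN that takes co-registered MRI and PET volumes as two input channels; channel concatenation is only a simple baseline standing in for the deep-NMF fusion above, and all layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class TwoChannel3DCNN(nn.Module):
    """Minimal 3D CNN that classifies a subject from paired volumes.

    MRI and PET volumes (assumed co-registered to the same grid) are stacked
    as two input channels, so early 3D convolutions can mix information across
    modalities and full spatial context rather than operating on 2D slices.
    """
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(2, 16, 3, padding=1), nn.BatchNorm3d(16), nn.ReLU(inplace=True),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.BatchNorm3d(32), nn.ReLU(inplace=True),
            nn.MaxPool3d(2),
            nn.Conv3d(32, 64, 3, padding=1), nn.BatchNorm3d(64), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),  # global pooling keeps the head input-size agnostic
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, mri, pet):
        x = torch.cat([mri, pet], dim=1)  # (B, 2, D, H, W)
        return self.classifier(self.features(x).flatten(1))

# Example with dummy 64^3 volumes (real or generated PET can be supplied):
# logits = TwoChannel3DCNN()(torch.randn(1, 1, 64, 64, 64), torch.randn(1, 1, 64, 64, 64))
```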