Multimodal fusion refers to the joint analysis of multiple datasets that provide different views of the same task. In general, it can extract more information than any single dataset analyzed on its own. Single-modality biometric identification alone cannot meet many practical needs, and fused features from multiple modalities can reveal information that is missed in single-modality imaging analysis. Resting-state functional and structural magnetic resonance images have been shown to be beneficial for the study of brain pathology, mainly because of the complementary spatiotemporal resolution of these neuroimaging modalities. This paper implements single-modality feature extraction and multimodal fusion of resting-state functional and structural magnetic resonance images, and uses multiple datasets to test the generalization ability of the model. The specific research contents are as follows:

(1) To address the problem that differences in the time domain do not appear synchronously in the spatial domain, this paper applies machine learning methods such as independent component analysis, sliding-window correlation, and k-means clustering to extract temporal features from resting-state functional magnetic resonance images (a sketch of this pipeline is given below). The results show that the selected temporal features yield good classification performance. Further analysis of the temporal information reveals significant brain-network differences between groups: normal males and females, autism patients and normal controls, and methamphetamine abstainers and normal controls.

(2) Traditional processing of structural magnetic resonance images involves a heavy workload and a complex pipeline, and requires researchers to have considerable prior knowledge. In this paper, independent component analysis, a data-driven method, is instead used to extract loading coefficients from structural magnetic resonance images (see the second sketch below). After a classifier verifies the reliability of the loading coefficients, brain-network differences are analyzed further.

(3) For multimodal biometric identification, an appropriate fusion method can make full use of the deeper information shared among the modal features. To verify that multimodal data has the advantage of information complementarity, this paper uses a low-coupling adversarial auto-encoder network as the fusion model and decomposes the fused feature distribution into common and unique features, which represent the shared and complementary information between modalities, respectively (see the third sketch below). A classifier network is then used to evaluate the classification performance of the fused features. Compared with single-modality features, the multimodal features achieve higher classification accuracy, which also shows that the fused features carry more discriminative information.

In summary, this paper uses machine learning and deep learning methods to extract features and maps the classification results back to brain networks, which can be used for medical analysis and diagnosis. Potentially relevant features have been identified, further advancing research on the operating mechanisms and pathological features of the human brain.
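The temporal-feature pipeline described in (1) can be summarized in a short sketch. The following is a minimal illustration using scikit-learn, assuming preprocessed BOLD data as a (timepoints × voxels) array; the component count, window length, and number of states are placeholder values, not the settings used in this work.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import FastICA

def dynamic_connectivity_states(bold, n_components=20, win_len=30,
                                step=1, n_states=5):
    """bold: (n_timepoints, n_voxels) preprocessed rs-fMRI data for one subject."""
    # 1. ICA: decompose the BOLD signal into component time courses.
    ica = FastICA(n_components=n_components, random_state=0)
    time_courses = ica.fit_transform(bold)           # (n_timepoints, n_components)

    # 2. Sliding-window correlation: one connectivity matrix per window,
    #    kept as the upper triangle of the (symmetric) correlation matrix.
    windows = []
    for start in range(0, len(time_courses) - win_len + 1, step):
        corr = np.corrcoef(time_courses[start:start + win_len].T)
        windows.append(corr[np.triu_indices(n_components, k=1)])
    windows = np.asarray(windows)                    # (n_windows, n_pairs)

    # 3. k-means: cluster the windowed matrices into recurring brain states.
    km = KMeans(n_clusters=n_states, n_init=10, random_state=0)
    state_labels = km.fit_predict(windows)
    return state_labels, km.cluster_centers_
```

The per-window state labels, and statistics derived from them such as dwell times or transition counts, can then serve as the temporal features fed to a classifier.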
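The loading-coefficient extraction in (2) corresponds to a source-based-morphometry-style decomposition: ICA is run across the subjects' vectorized gray-matter maps so that the estimated sources are spatial maps and the mixing matrix provides one loading per subject per component. The sketch below is illustrative only; the array shapes, component count, and the linear SVM used to check the loadings are assumptions, not the thesis configuration.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def smri_loadings(gm_maps, n_components=10):
    """gm_maps: (n_subjects, n_voxels) vectorized gray-matter volumes."""
    # ICA with voxels as samples and subjects as features: the estimated
    # sources are spatial component maps, and the mixing matrix holds one
    # loading coefficient per subject per component.
    ica = FastICA(n_components=n_components, random_state=0)
    ica.fit(gm_maps.T)            # transposed to (n_voxels, n_subjects)
    return ica.mixing_            # (n_subjects, n_components) loadings

# Hypothetical reliability check of the loadings with a simple classifier:
# X, y = smri_loadings(gm_maps), group_labels
# print(cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean())
```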
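For the fusion model in (3), a minimal PyTorch sketch of the common/unique decomposition with an adversarial alignment term is given below. Every module name, layer size, and the simple averaging of the common codes are illustrative assumptions rather than the thesis architecture, and the auto-encoder's reconstruction decoders are omitted for brevity; the sketch only shows the structure: per-modality encoders that split features into a common part, aligned across modalities by fooling a discriminator, and a unique part, with a classifier on the concatenated result.

```python
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    """Encodes one modality into a 'common' and a 'unique' latent code."""
    def __init__(self, in_dim, common_dim=32, unique_dim=32):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.to_common = nn.Linear(128, common_dim)   # shared across modalities
        self.to_unique = nn.Linear(128, unique_dim)   # modality-specific

    def forward(self, x):
        h = self.backbone(x)
        return self.to_common(h), self.to_unique(h)

# Illustrative input sizes: dynamic-connectivity features for fMRI,
# ICA loading coefficients for sMRI (both placeholder dimensions).
enc_fmri = ModalityEncoder(in_dim=190)
enc_smri = ModalityEncoder(in_dim=10)

# The discriminator tries to tell which modality a common code came from;
# training the encoders to fool it (an adversarial loss) pushes the two
# modalities' common codes toward the same distribution.
disc = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))

# Classifier on the fused vector: the shared part plus both unique parts.
clf = nn.Sequential(nn.Linear(32 * 3, 64), nn.ReLU(), nn.Linear(64, 2))

def fuse_and_classify(x_fmri, x_smri):
    c_f, u_f = enc_fmri(x_fmri)
    c_s, u_s = enc_smri(x_smri)
    common = (c_f + c_s) / 2                       # shared representation
    fused = torch.cat([common, u_f, u_s], dim=1)   # shared + complementary
    return clf(fused)
```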