Autism Spectrum Disorder (ASD) is a complex neurodevelopmental disorder. Its clinical diagnosis relies mainly on rating scales and clinicians' assessment of patient behavior, a process that is time-consuming and lacks objectivity. Although traditional machine learning methods can identify ASD from MRI data, they require the centralized collection of large amounts of private patient data for model training, which risks leaking patients' privacy. This study therefore proposes using Federated Learning to identify ASD: distributed neural network models are trained without centrally collecting ASD data, while still achieving good classification performance. The main contributions are as follows:

(1) For the multi-site ASD recognition problem, this study proposes an ASD classification method named FedCPK (Federated Distillation Combining Consensus and Personalized Knowledge). FedCPK trains distributed models under a client-server architecture, so that each client's data is stored locally and never shared; this avoids the large-scale data sharing required by traditional machine learning methods and reduces the risk of privacy leakage. To counter the performance degradation caused by data heterogeneity in multi-site scenarios, FedCPK performs model fusion through knowledge distillation so that the server-side model learns consensus knowledge, while the Batch Normalization (BN) layer parameters of each client model are kept local so that the client models learn personalized knowledge. Combining the two kinds of knowledge alleviates the problems caused by data heterogeneity. Five-fold cross-validation experiments on the ABIDE dataset show that FedCPK outperforms six existing methods.

(2) This study further optimizes the knowledge distillation process in FedCPK and proposes an ASD classification method named FedDFK (Data-Free Knowledge Distillation Federated Learning). FedDFK trains a generator model on the server side; the generator learns the overall data distribution of the clients and produces augmented data that is consistent with the ensemble of client model predictions, so this augmented data can replace a real distillation dataset to complete knowledge distillation. FedDFK thus removes the dependence on a distillation dataset, broadens the applicability of the method, and further strengthens the privacy protection of client data. The classification performance of the method is verified on the ABIDE dataset through comparison with three centralized machine learning ASD classification methods and three Federated Learning methods.
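As an illustration of the consensus-plus-personalized idea in FedCPK, the following PyTorch-style sketch shows one way a client could adopt the aggregated server model while keeping its own BatchNorm parameters and running statistics local. This is only a minimal sketch of the BN-personalization step under assumed names and model structure; it is not the thesis implementation.

```python
import torch.nn as nn


def load_global_keep_local_bn(client_model: nn.Module,
                              global_state: dict) -> None:
    """Overwrite client parameters with the server (consensus) model,
    except BatchNorm parameters and statistics, which stay personalized.

    Illustrative sketch only; key matching assumes standard PyTorch
    naming for BatchNorm modules.
    """
    local_state = client_model.state_dict()

    # Collect the state-dict keys belonging to BatchNorm layers.
    bn_keys = {
        name + suffix
        for name, module in client_model.named_modules()
        if isinstance(module, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d))
        for suffix in (".weight", ".bias", ".running_mean",
                       ".running_var", ".num_batches_tracked")
    }

    # Take BN entries from the local model, everything else from the server.
    merged = {
        key: (local_state[key] if key in bn_keys else global_state[key])
        for key in local_state
    }
    client_model.load_state_dict(merged)
```

In this sketch every non-BN parameter is replaced by the consensus model, while BN weights, biases, and running statistics remain client-specific, which is one common way to retain personalized knowledge under data heterogeneity.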
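The data-free distillation loop in FedDFK can likewise be pictured with a simplified server-side sketch: a conditional generator is trained to produce synthetic inputs on which the client ensemble agrees, and the server model is then distilled to match the ensemble's averaged predictions on those inputs. The loss terms, the conditional generator interface generator(z, y), and the optimizer setup below are illustrative assumptions, not the exact FedDFK objective.

```python
import torch
import torch.nn.functional as F


def datafree_distillation_round(generator, server_model, client_models,
                                g_opt, s_opt, steps=100,
                                latent_dim=64, batch_size=32, num_classes=2):
    """One server-side round of data-free knowledge distillation (sketch)."""
    device = next(server_model.parameters()).device
    for m in client_models:          # client models act as fixed teachers
        m.eval()

    for _ in range(steps):
        z = torch.randn(batch_size, latent_dim, device=device)
        y = torch.randint(0, num_classes, (batch_size,), device=device)

        # Generator step: synthesize inputs the client ensemble is confident on.
        g_opt.zero_grad()
        x_syn = generator(z, y)
        ensemble_logits = torch.stack(
            [m(x_syn) for m in client_models]).mean(dim=0)
        g_loss = F.cross_entropy(ensemble_logits, y)
        g_loss.backward()
        g_opt.step()

        # Student step: distill the ensemble's predictions into the server model.
        s_opt.zero_grad()
        x_syn = generator(z, y).detach()
        with torch.no_grad():
            teacher_prob = torch.stack(
                [F.softmax(m(x_syn), dim=1) for m in client_models]).mean(dim=0)
        student_log_prob = F.log_softmax(server_model(x_syn), dim=1)
        kd_loss = F.kl_div(student_log_prob, teacher_prob,
                           reduction="batchmean")
        kd_loss.backward()
        s_opt.step()
```

Because only generator-produced synthetic data is used, the server needs neither client data nor a separate distillation dataset, which is the property the abstract attributes to FedDFK.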