
EEG Classification Model Combining Metric Learning And Autoencoders

Posted on: 2024-04-01
Degree: Master
Type: Thesis
Country: China
Candidate: Y Y Chen
Full Text: PDF
GTID: 2530307100489154
Subject: Electronic information
Abstract/Summary:
Electroencephalogram-based brain-computer interfaces (EEG-BCI) are widely applicable in medicine, entertainment, the military, and other fields. Classifying EEG data is one of the key steps in EEG-BCI applications, and using deep learning to analyze and classify EEG data is currently an active research direction. This thesis explores a multi-task learning (MTL) strategy to improve the classification performance of deep-learning-based EEG classification models, and proposes an EEG classification model that combines metric learning and autoencoders.

First, a multi-task learning strategy is used to improve the classification performance of the deep-learning-based EEG classifier, in three steps: (1) constructing a depthwise separable convolutional neural network (DSCNN) as the base model; (2) introducing deep metric learning into the DSCNN as an auxiliary task, helping the model learn more discriminative features and thereby improving classification performance; (3) building on the model from the second step, further introducing an autoencoder structure, with encoding and reconstruction as an auxiliary task, to strengthen the model's ability to learn compressed representations of the data and further improve classification performance. The result is an EEG classification model based on metric learning and autoencoders (MTLNN). Experiments show that, with fixed task-loss weights, MTLNN improves classification performance over DSCNN.

Second, a dynamic weight-adjustment strategy is investigated to reduce the training difficulty of MTLNN and further improve its classification performance. With fixed task-loss weights, MTLNN suffers from imbalanced losses across tasks, which makes the model difficult to train and leaves classification performance unsatisfactory. This problem is addressed in two steps: (1) introducing the gradient normalization algorithm (GradNorm) into MTLNN to obtain GradNorm-MTLNN (GMTLNN), which alleviates the training difficulty; (2) improving the GradNorm algorithm to address two of its shortcomings, namely vanishing gradients caused by overly small task weights, and large single-batch loss fluctuations that make its gradient-norm metrics unreliable, yielding MTLNN with improved GradNorm (IGMTLNN). k-fold comparison experiments show that in within-subject experiments on the P300 and BCI4_2A datasets, IGMTLNN improves average accuracy by 1.21% and 3.48%, respectively, over the best competing method, demonstrating that the proposed EEG classification model based on metric learning and autoencoders is effective.
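The DSCNN base model is built from depthwise separable convolutions, which factor an ordinary convolution into a per-channel (depthwise) stage followed by a 1×1 (pointwise) stage that mixes channels. The thesis does not list the network's exact layers; the following is a minimal pure-Python sketch of the 1-D operation itself, with illustrative kernels and mixing weights:

```python
def depthwise_separable_conv1d(x, depth_kernels, point_weights):
    """Depthwise separable 1-D convolution on a multi-channel signal.

    x             : list of input channels, each a list of samples
    depth_kernels : one kernel per input channel (depthwise stage)
    point_weights : one row of per-channel mixing weights per output
                    channel (pointwise, i.e. 1x1, stage)
    """
    # Depthwise stage: each channel is convolved with its own kernel
    # ("valid" mode, no padding); channels are not mixed here.
    depth_out = []
    for ch, k in zip(x, depth_kernels):
        n = len(ch) - len(k) + 1
        depth_out.append([sum(ch[i + j] * k[j] for j in range(len(k)))
                          for i in range(n)])
    # Pointwise stage: a 1x1 convolution mixes the channels at each
    # time step, producing one output channel per weight row.
    steps = len(depth_out[0])
    return [[sum(w * depth_out[c][t] for c, w in enumerate(row))
             for t in range(steps)]
            for row in point_weights]

# Two input channels, one output channel (all values illustrative).
out = depthwise_separable_conv1d([[1, 2, 3, 4], [0, 1, 0, 1]],
                                 [[1, 1], [1, -1]],
                                 [[1, 1]])
# -> [[2, 6, 6]]
```

Compared with a full convolution, the factorization needs far fewer parameters, which is attractive for small EEG datasets.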
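With fixed task-loss weights, the three objectives (classification, metric learning, reconstruction) are combined into a single training loss by a weighted sum. A minimal sketch follows; the weight values and example loss values are illustrative, not taken from the thesis:

```python
def multitask_loss(cls_loss, metric_loss, recon_loss,
                   w_cls=1.0, w_metric=0.3, w_recon=0.1):
    """Fixed-weight sum of the main classification loss and the two
    auxiliary losses (deep metric learning, autoencoder reconstruction).
    The default weights are hypothetical."""
    return w_cls * cls_loss + w_metric * metric_loss + w_recon * recon_loss

# Example per-batch losses (illustrative values):
total = multitask_loss(0.9, 0.5, 2.0)  # -> 1.25
```

Because the weights are fixed for the whole training run, a badly chosen ratio lets one task's loss dominate the gradient, which is exactly the imbalance that motivates the dynamic adjustment below.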
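The GradNorm idea is to tune the task weights during training so that each task's gradient norm moves toward a common target scaled by that task's relative training rate. The sketch below is a simplified pure-Python illustration, not the thesis's implementation: the proportional update rule and all constants are assumptions, and the two tweaks (a lower bound on weights, and loss smoothing) are plausible readings of the improved GradNorm described above, whose exact form the abstract does not give.

```python
def smooth_loss(prev_avg, batch_loss, beta=0.9):
    """Exponential moving average of a task loss; damping single-batch
    fluctuations makes the loss ratios below more trustworthy."""
    return beta * prev_avg + (1 - beta) * batch_loss

def gradnorm_step(weights, grad_norms, loss_ratios,
                  alpha=1.5, lr=0.1, w_min=0.05):
    """One GradNorm-style dynamic weight update.

    weights     : current task weights w_i
    grad_norms  : per-task gradient norms G_i of the weighted losses
    loss_ratios : per-task training rates L_i(t) / L_i(0)
    """
    n = len(weights)
    mean_g = sum(grad_norms) / n
    mean_r = sum(loss_ratios) / n
    new_w = []
    for w, g, r in zip(weights, grad_norms, loss_ratios):
        target = mean_g * (r / mean_r) ** alpha        # desired grad norm
        w = w - lr * (g - target) / max(mean_g, 1e-8)  # push G_i toward target
        new_w.append(max(w, w_min))                    # floor avoids vanishing grads
    scale = n / sum(new_w)                             # keep sum of weights = n
    return [w * scale for w in new_w]

# Three tasks with equal training rates: the task with the smallest
# gradient norm gets more weight, the largest gets less.
w = gradnorm_step([1.0, 1.0, 1.0], [2.0, 1.0, 3.0], [1.0, 1.0, 1.0])
# -> approximately [1.0, 1.05, 0.95]
```

The renormalization so that the weights sum to the number of tasks keeps the overall loss scale stable, while the floor `w_min` prevents any task's gradient from vanishing outright.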
Keywords/Search Tags:electroencephalography, brain-computer interface, multi-task learning, gradient normalization, deep learning