
Application Of Deep Learning Attention Mechanism In EEG Classification

Posted on: 2022-03-09  Degree: Master  Type: Thesis
Country: China  Candidate: J Y Sun  Full Text: PDF
GTID: 2480306494986439  Subject: Neurobiology
Abstract/Summary:
Commonly used deep learning models for electroencephalogram (EEG) classification include convolutional neural networks, recurrent neural networks, deep belief networks, and fusion networks. The attention mechanism in the Transformer model is superior to these models in computing correlations between features of long sequences and in model visualization and interpretability, but no prior research has used this mechanism to build EEG classification models. Based on the characteristics of EEG signals, this study uses the Transformer attention mechanism to construct seven models and analyzes their performance and visualization results.

This research first constructed Transformer models based on the attention module in the spatial and temporal dimensions separately. The spatial model calculates attention weights between channels based on each channel's time series, and the temporal model calculates attention weights between time points based on the channel features at each time point. To address the large amount of computation caused by long EEG sequences and the weak ability of the model to extract local features, pooling layers and convolution layers were added: pooling and convolution models were constructed separately in the spatial and temporal domains to improve performance. Finally, a fusion model was constructed by integrating spatial and temporal features.

These seven models were tested on the public Motor Imagery dataset and compared with previous studies. Using 3 s data, the spatial convolution model achieved the highest accuracy in 2-class classification (83.31%), while the fusion model achieved the highest accuracies in 3- and 4-class classification (74.44% and 64.22%, respectively); these results are 0.88%, 2.11%, and 1.06% higher than the state of the art. Using 6 s data, the spatial convolution model achieved the highest accuracies (87.80%, 78.98%, and 68.54%, respectively); the accuracies in 3- and 4-class classification are
2.37% and 2.81% higher than the state of the art.

Model visualization was also carried out in the study. As the number of attention modules increases, the focus area of the model changes from local to global and the extracted features become more comprehensive, which is more conducive to classification. Different heads of the attention module assign different weights to each electrode; most of the weight is concentrated over the motor cortex, which is consistent with biological findings and supports the validity of the model.

Based on the above models, this study also explores the impact of the number of attention modules and of different positional encoding methods on classification accuracy. As the number of attention modules increases, accuracy first rises and then plateaus, so the number of modules needs to be set reasonably when constructing the model. Positional encoding provides position information to the otherwise position-insensitive attention module, but different encoding methods make little difference to the result. Finally, a new model structure is proposed that adds a trainable vector to the inputs and uses it as the classification feature. Its classification accuracy is similar to that of the previous models, which demonstrates the classification ability of this type of model. These studies lay a foundation for model construction and application in brain-computer interfaces.
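As a rough illustration of the spatial attention model and the trainable classification vector described above, the following NumPy sketch treats each channel's time series as one token, prepends a CLS-style vector, and computes a channel-by-channel attention map. The shapes (22 channels, 3 s at 250 Hz, 64-dimensional tokens) and the random projection matrices are illustrative assumptions, not the thesis's actual architecture; in a real model the projections and the classification vector would be learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed layout: 22-channel EEG, 3 s at 250 Hz, 64-d model (hypothetical).
n_channels, n_times, d_model = 22, 750, 64

# Stand-ins for learned weights (random here, trained in the real model).
W_in = rng.standard_normal((n_times, d_model)) * 0.01
W_q = rng.standard_normal((d_model, d_model)) * 0.1
W_k = rng.standard_normal((d_model, d_model)) * 0.1
W_v = rng.standard_normal((d_model, d_model)) * 0.1

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_attention(eeg):
    """eeg: (n_channels, n_times) -> classification feature and attention map."""
    tokens = eeg @ W_in                    # each channel's time series -> one token
    cls = np.zeros((1, d_model))           # trainable classification vector (learned in practice)
    x = np.vstack([cls, tokens])           # (n_channels + 1, d_model)
    q, k, v = x @ W_q, x @ W_k, x @ W_v
    # Scaled dot-product attention: weights between channels (plus the CLS token).
    weights = softmax(q @ k.T / np.sqrt(d_model))
    out = weights @ v
    return out[0], weights                 # out[0] serves as the classification feature

eeg = rng.standard_normal((n_channels, n_times))
feat, weights = spatial_attention(eeg)
print(feat.shape, weights.shape)  # (64,) (23, 23)
```

Visualizing the rows of `weights` over a scalp layout is what lets this family of models show which electrodes (e.g., over the motor cortex) the network attends to.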
Keywords/Search Tags:EEG classification, attention mechanism, Transformer model, motor imagery