A brain-computer interface (BCI) establishes a direct communication channel between the brain and external devices without relying on the peripheral nerves and muscles. With the continuous development of electronic information technology and computer technology, BCI technology has promising applications in the military, medical, and artificial intelligence fields, as well as in the field of emotion recognition studied in this thesis. Emotion recognition is a technology in which a computer analyzes and processes signals collected by sensors to infer a person's emotional state. Unlike traditional emotion recognition methods based on facial expressions or speech, electroencephalogram (EEG) signals are difficult to disguise, so emotion recognition based on EEG signals is more authentic and reliable.

However, EEG acquisition is time-consuming and costly, making it difficult to obtain a large amount of data from the same subject, and the EEG signal is non-stationary and non-ergodic. Traditional machine-learning-based EEG recognition algorithms discard the spatial information between electrodes and require the training data and test data to be identically distributed in the feature space. These problems limit the performance of EEG emotion recognition algorithms and hinder practical application. In view of the above problems, this thesis attempts to improve the accuracy of emotional EEG recognition from the perspective of deep learning. The research contents of this thesis are as follows.

First, this thesis improves the preprocessing of EEG signals and designs a corresponding deep convolutional neural network. Four features commonly used in the BCI field are analyzed, and the differential entropy feature is selected as the basis for subsequent research. To preserve the spatial information between electrodes, this thesis uses a polar coordinate projection. After the electrode distribution is obtained by projection, the EEG signal is converted into a two-dimensional spatial representation using cubic interpolation. To fully exploit the spatial information contained in this two-dimensional representation and extract efficient spatial features of the brain, a convolutional neural network is designed according to the characteristics of the representation. Single-subject emotion classification experiments on the SEED emotion EEG dataset show that the introduced spatial information and the convolutional neural network designed in this thesis effectively improve single-subject emotion classification accuracy.

Second, to address the problem that the features of training data and test data are distributed differently in practice, this thesis designs a deep transfer learning framework based on the convolutional neural network. During training, the distribution discrepancy between training samples and test samples is constrained, so that the network can act on a test set whose distribution differs from that of the training set. Two discrepancy loss functions are used: a correlation alignment loss designed according to the correlation alignment (CORAL) algorithm, and a multi-kernel maximum mean discrepancy (MK-MMD) loss. Cross-subject and cross-time EEG emotion classification experiments were designed. Experimental results show that the proposed method effectively reduces the discrepancy between training data and test data, and improves the performance of cross-subject and cross-time EEG emotion classification.
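For a band-filtered EEG segment that is approximately Gaussian, the differential entropy feature selected above has a closed form, 0.5·ln(2πeσ²), where σ² is the segment variance. A minimal NumPy sketch (the function name and toy segment are illustrative, not from the thesis):

```python
import numpy as np

def differential_entropy(segment):
    """Differential entropy of a band-filtered EEG segment.

    Under a Gaussian assumption, DE = 0.5 * ln(2 * pi * e * sigma^2),
    where sigma^2 is the variance of the segment.
    """
    variance = np.var(segment)
    return 0.5 * np.log(2 * np.pi * np.e * variance)

# A unit-variance toy segment gives DE = 0.5 * ln(2*pi*e) ~= 1.4189
segment = np.array([1.0, -1.0, 1.0, -1.0])
print(differential_entropy(segment))
```

In practice this is computed per electrode and per frequency band, yielding one DE value per channel to place into the spatial representation.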
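The projection-and-interpolation step can be sketched as follows: given 2D electrode coordinates obtained from a polar projection of the cap positions, the per-electrode DE values are resampled onto a regular grid with cubic interpolation. The coordinates and grid size below are made-up placeholders rather than the SEED montage, and `scipy.interpolate.griddata` stands in for whatever interpolation routine the thesis actually uses:

```python
import numpy as np
from scipy.interpolate import griddata

# Hypothetical 2D electrode positions after polar projection (not the real montage)
coords = np.array([
    [0.0, 0.9], [-0.7, 0.4], [0.7, 0.4],
    [0.0, 0.0], [-0.6, -0.5], [0.6, -0.5], [0.0, -0.9],
])
de_values = np.array([0.2, 0.5, 0.4, 0.9, 0.3, 0.6, 0.1])  # one DE value per electrode

# Resample the scattered DE values onto a 32x32 grid with cubic interpolation
grid_x, grid_y = np.mgrid[-1.0:1.0:32j, -1.0:1.0:32j]
image = griddata(coords, de_values, (grid_x, grid_y), method="cubic", fill_value=0.0)
print(image.shape)  # 32x32 spatial representation fed to the CNN
```

Points outside the electrodes' convex hull are filled with a constant here; stacking one such image per frequency band gives a multi-channel input for the convolutional network.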
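The correlation alignment loss constrains second-order statistics: it penalizes the squared Frobenius distance between the covariance matrices of a source (training) and a target (test) feature batch. A NumPy sketch of the standard CORAL formulation (batch shapes are illustrative; in the thesis this term is applied to network features during training):

```python
import numpy as np

def coral_loss(source, target):
    """CORAL loss between two (n_samples, n_features) feature batches.

    ||C_s - C_t||_F^2 / (4 d^2), where C_s and C_t are the feature
    covariance matrices and d is the feature dimension.
    """
    d = source.shape[1]
    c_s = np.cov(source, rowvar=False)
    c_t = np.cov(target, rowvar=False)
    return np.sum((c_s - c_t) ** 2) / (4.0 * d * d)

rng = np.random.default_rng(0)
src = rng.normal(size=(64, 8))
tgt = rng.normal(loc=0.5, size=(64, 8))
print(coral_loss(src, src))  # identical batches -> 0.0
```

Adding this term to the classification loss pushes the network to produce features whose covariance structure matches across the two domains.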
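The multi-kernel maximum mean discrepancy loss compares the two domains in a reproducing kernel Hilbert space, summing a (biased) MMD² estimate over several Gaussian kernel bandwidths. A minimal NumPy sketch; the bandwidth set is an arbitrary choice here, not the one used in the thesis:

```python
import numpy as np

def _rbf_kernel(a, b, sigma):
    # Pairwise squared distances, then a Gaussian kernel
    sq = np.sum(a**2, axis=1)[:, None] + np.sum(b**2, axis=1)[None, :] - 2.0 * a @ b.T
    return np.exp(-sq / (2.0 * sigma**2))

def mk_mmd(source, target, sigmas=(1.0, 2.0, 4.0)):
    """Biased multi-kernel MMD^2 between two (n, d) feature batches."""
    loss = 0.0
    for s in sigmas:
        loss += (_rbf_kernel(source, source, s).mean()
                 + _rbf_kernel(target, target, s).mean()
                 - 2.0 * _rbf_kernel(source, target, s).mean())
    return loss

rng = np.random.default_rng(1)
x = rng.normal(size=(32, 8))
y = rng.normal(loc=1.0, size=(32, 8))
print(mk_mmd(x, x))  # identical batches -> ~0
```

Like the CORAL term, this discrepancy is minimized jointly with the classification loss, aligning the full feature distributions of training and test data rather than only their covariances.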