Objective: Emotion is a mental process involved in feeling, perception, thought, and behavior, and it is critical for the survival and development of individuals. Many mental disorders and psychosomatic diseases are associated with maladaptive emotional responses and ineffective emotion regulation strategies; understanding the mechanisms underlying emotion and modulating emotions effectively are therefore of great significance. With the development of artificial intelligence and human-computer interfaces, affective computing has emerged. Emotion recognition, which refers to inferring emotional states from changes in physiological signals, is an elementary part of affective computing. It has great potential for application in psychiatry, for example, monitoring the affective symptoms of patients with depressive disorder or helping patients with autism spectrum disorder understand others' emotions. Emotion recognition is also the foundation of real-time emotion regulation systems. We sought to establish an affective dataset comprising physiological responses to positive and negative emotions, to explore electroencephalogram (EEG) features that can differentiate emotions, and, using machine learning methods, to develop an emotion recognition model based on four electrophysiological signals.

Methods: Forty-nine healthy participants were recruited. Four emotions (neutrality, happiness, sadness, and fear) were elicited with eight videos from the Chinese Emotional Video System (CEVS). Meanwhile, EEG, facial electromyogram (EMG), electrocardiogram (ECG), and galvanic skin response (GSR) were recorded during a baseline state (resting) and the emotional state (watching the emotional videos). Immediately after each video, participants rated the intensity of happiness, sadness, and fear; the valence, arousal, and dominance of the elicited emotion; and how much they liked and were familiar with the video. The collected data were preprocessed to reduce the
influence of individual variation and to eliminate noise interference, after which features were extracted. Differences in EEG midline band power were analyzed, principal components of the EEG features were selected, and a neural network was adopted to classify the three emotions.

Results: 1. The videos intended to induce happiness, sadness, and fear all achieved hit rates above 90%, and the mean intensity scores of the target emotions were all above 5 (moderate intensity). 2. Midline theta, alpha, and beta band power was significantly lower during the happy video than during the sad video, and significantly lower during the happy video than during the fearful video; mean midline alpha and beta power was significantly higher during the sad video than during the fearful video. 3. The accuracies of three-way emotion classification based on EEG alone and on all four signals were 62.48% and 66.67%, respectively. 4. The gender difference in target emotion intensity ratings was statistically insignificant, but the EEG-based ternary emotion classification reached accuracies of 53.22% in male and 68.54% in female participants.

Conclusion: 1. The current study successfully induced the target emotions and established a physiological dataset for positive and negative emotions. 2. The average EEG power of the theta, alpha, and beta bands can distinguish happiness from sadness and happiness from fear, and the average power of the alpha and beta bands can distinguish sadness from fear; EEG midline power may therefore be a useful feature for emotion recognition. 3. Emotion recognition based on the four physiological signals in this dataset reached a classification accuracy of 66.67%. 4. Emotions may be easier to elicit in females than in males, and females may show stronger emotional responses. Consistently, the EEG-based ternary classification accuracy was higher in female than in male
participants.
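The midline band-power features described above can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the sampling rate, channel set (e.g. Fz, Cz, Pz), and band edges are conventional assumptions, and Welch's method is one common way to estimate band power.

```python
# Hedged sketch: midline EEG band-power features (theta/alpha/beta).
# FS, the channel layout, and the band edges are assumptions, not the study's values.
import numpy as np
from scipy.signal import welch

FS = 250  # assumed sampling rate in Hz
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # conventional edges

def band_powers(eeg, fs=FS):
    """eeg: (n_channels, n_samples) array from midline electrodes.
    Returns mean power per band, averaged over channels and band bins."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2, axis=-1)
    feats = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        feats[name] = psd[:, mask].mean()
    return feats

# toy usage: 3 midline channels, 10 s of synthetic noise
rng = np.random.default_rng(0)
features = band_powers(rng.standard_normal((3, 10 * FS)))
print(features)
```

In a real pipeline these per-band values would be computed per trial, for the baseline and emotional states, before any normalization or component selection.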
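The ternary classification step could look roughly like the sketch below. The data here are synthetic placeholders, and the network size, scaler, and cross-validation scheme are illustrative choices, not the configuration the study actually used.

```python
# Hedged sketch: three-way emotion classification with a small neural network.
# X, y, and the architecture are synthetic/assumed, not the study's data or model.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = rng.standard_normal((150, 12))   # e.g. band-power features from EEG/EMG/ECG/GSR
y = rng.integers(0, 3, size=150)     # 0=happy, 1=sad, 2=fearful (synthetic labels)

clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
)
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validation accuracy
print(scores.mean())
```

With random labels the accuracy hovers around chance (~33%); the abstract's 62-67% figures reflect real structure in the recorded signals.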