
Research On Emotion Recognition Using Eye Movement Signal In A Video Learning Environment

Posted on: 2021-07-03    Degree: Master    Type: Thesis
Country: China    Candidate: S X Liu    Full Text: PDF
GTID: 2518306473964549    Subject: Master of Engineering
Abstract/Summary:
In the online learning environment, the separation of teachers and students in time and space leads to a general lack of emotional interaction. Learners' learning outcomes are affected by their emotional state: a positive emotional state promotes learning, while a negative emotional state reduces learning efficiency. If learners' emotional states can be identified accurately in the online learning environment, emotional support can be given when negative emotions appear, which effectively improves learners' learning outcomes and learning experience. Therefore, in the online video learning environment, it is particularly necessary to study a method that identifies learners' emotional states accurately and without interference. This thesis studies emotion recognition based on eye movement information in the video learning environment. Around this theme, the following work was done:

(1) Designed and implemented an emotion and eye movement data acquisition scheme and collected the required experimental data. Eye movement information was obtained with an eye tracker and divided into time windows of different sizes to explore the influence of window size on the results. The data were preprocessed to remove outliers and missing values. For the deep learning experiments, the data set was expanded by adding noise and flipping.

(2) Constructed classifiers for the eye movement data with machine learning algorithms. The data were first segmented with time windows of 5, 10 and 15 seconds. By analyzing the correlation between eye movement information and emotional state, the most relevant eye movement indicators were identified, namely fixation, saccade, blink and pupil diameter, from which 27 eye movement features were extracted and labeled. Support vector machine, random forest and naive Bayes classifiers were trained on the data; the experiments found that with the 10-second time window, the support vector machine achieved 72% classification accuracy. A sketch of this feature-based pipeline is given below.

(3) Designed an improved network, FCNNs, based on stacking sub-network blocks to classify the eye movement data. Each sub-network consists of four parallel channels. The first channel is a max-pooling layer followed by a convolutional layer, which extracts low-frequency features of the image; the second channel is an average-pooling layer followed by a convolutional layer, which compresses the representation and reduces model parameters; the third channel is a single convolutional layer; and the fourth channel is two convolutional layers, which also reduces model parameters. The output of the fourth channel is fused with the output of the compression channel by a concat operation, combining shallow and deep features, and the four channel outputs are then fused by another concat operation into a sub-network block that can be stacked. A max-pooling layer is added after each sub-network block to form one layer, and a 5-layer network with this structure is designed; a sketch of such a block is shown below.

(4) Transformed the numerical eye movement data into images across modalities and augmented the data by adding noise and flipping. Neural networks of different depths were used for comparison, including AlexNet, VGG-16, GoogLeNet and ResNet-34. The experimental results show that the classification accuracy of the proposed FCNNs network reaches 91.62% with the 5-second time window. The cross-modal conversion is also sketched below.
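
The following is a minimal, illustrative sketch of the feature-based pipeline in (2): segmenting a recording into fixed time windows, computing per-window statistics, and training a support vector machine. The sampling rate, the synthetic data, and the simple statistics used here are assumptions for illustration; the thesis's actual 27 features and emotion labels are not reproduced.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

WINDOW_SECONDS = 10     # the window size reported to work best in (2)
SAMPLE_RATE_HZ = 60     # assumed eye-tracker sampling rate

def window_features(samples):
    """Summary statistics for one window of raw samples.

    `samples` is an (n, 2) array of [pupil_diameter, gaze_speed] values,
    a stand-in for the fixation/saccade/blink/pupil features in the thesis.
    """
    return np.concatenate([samples.mean(axis=0),
                           samples.std(axis=0),
                           samples.max(axis=0)])

def segment(recording, window_labels):
    """Split a recording into non-overlapping windows and featurize each."""
    win = WINDOW_SECONDS * SAMPLE_RATE_HZ
    n_windows = len(recording) // win
    X = np.stack([window_features(recording[i * win:(i + 1) * win])
                  for i in range(n_windows)])
    return X, window_labels[:n_windows]

# Synthetic demo: one 10-minute recording with binary emotion labels.
rng = np.random.default_rng(0)
recording = rng.normal(size=(10 * 60 * SAMPLE_RATE_HZ, 2))
labels = rng.integers(0, 2, size=60)

X, y = segment(recording, labels)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print("held-out window accuracy:", clf.score(X_te, y_te))
```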
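
The four-channel sub-network block in (3) could be sketched as follows. The deep learning framework (PyTorch here), channel widths, and kernel sizes are not given in the abstract and are therefore assumptions; the sketch only illustrates the parallel pooling/convolution branches, the concat fusion, and the stacking of five blocks, not the thesis's exact architecture.

```python
import torch
import torch.nn as nn

class SubNetBlock(nn.Module):
    def __init__(self, in_ch, branch_ch=16):
        super().__init__()
        # Channel 1: max pooling + convolution (low-frequency features).
        self.ch1 = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                 nn.Conv2d(in_ch, branch_ch, 3, padding=1))
        # Channel 2: average pooling + convolution (compression branch).
        self.ch2 = nn.Sequential(nn.AvgPool2d(3, stride=1, padding=1),
                                 nn.Conv2d(in_ch, branch_ch, 1))
        # Channel 3: a single convolution.
        self.ch3 = nn.Conv2d(in_ch, branch_ch, 3, padding=1)
        # Channel 4: two stacked convolutions (deeper features).
        self.ch4 = nn.Sequential(nn.Conv2d(in_ch, branch_ch, 3, padding=1),
                                 nn.ReLU(inplace=True),
                                 nn.Conv2d(branch_ch, branch_ch, 3, padding=1))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # Concatenate the four branch outputs along the channel axis,
        # fusing shallow and deep features into one stackable block output.
        out = torch.cat([self.ch1(x), self.ch2(x), self.ch3(x), self.ch4(x)], dim=1)
        return self.act(out)

def build_network(num_classes=2, in_ch=3, branch_ch=16):
    # Five stacked blocks, each followed by max pooling, as in the abstract.
    layers, ch = [], in_ch
    for _ in range(5):
        layers += [SubNetBlock(ch, branch_ch), nn.MaxPool2d(2)]
        ch = branch_ch * 4
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(ch, num_classes)]
    return nn.Sequential(*layers)

net = build_network()
print(net(torch.randn(1, 3, 64, 64)).shape)   # torch.Size([1, 2])
```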
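
The cross-modal conversion in (4) is sketched below under similar caveats: the abstract does not state how the numerical features are mapped to pixels, so the tiling scheme, image size, and noise level here are illustrative assumptions, shown only to make the noise-and-flip augmentation concrete.

```python
import numpy as np

def features_to_image(features, size=32):
    """Normalize a 1-D feature vector to [0, 255] and tile it into a size x size image."""
    f = np.asarray(features, dtype=np.float32)
    f = (f - f.min()) / (f.max() - f.min() + 1e-8)   # scale to [0, 1]
    tiled = np.resize(f, (size, size))               # repeat values to fill the grid
    return (tiled * 255).astype(np.uint8)

def augment(image, rng, noise_std=8.0):
    """Yield noisy and flipped variants of one image to expand the data set."""
    noisy = np.clip(image + rng.normal(0, noise_std, image.shape), 0, 255)
    yield noisy.astype(np.uint8)
    yield np.fliplr(image)
    yield np.flipud(image)

rng = np.random.default_rng(0)
vec = rng.normal(size=27)                 # e.g. one 27-dimensional feature window
img = features_to_image(vec)
augmented = list(augment(img, rng))
print(img.shape, len(augmented))          # (32, 32) 3
```
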
Keywords/Search Tags:Online Learning, Eye Tracking, Machine Learning, Deep Learning, Convolutional Neural Networks