
Research On Physiological Perception System And Emotion Recognition For Virtual Reality Environment

Posted on: 2024-02-11 | Degree: Doctor | Type: Dissertation
Country: China | Candidate: C T Wan | Full Text: PDF
GTID: 1520307079950759 | Subject: Control Science and Engineering
Abstract/Summary:
With the development of Virtual Reality (VR) technology, recognizing emotions by using VR as the emotion-induction material together with physiological data and machine learning methods has attracted great attention from industry and become a hot research topic. Moreover, emotion is a high-level cognitive process, and how to effectively mine emotion-related information from multi-modal physiological signals and improve emotion recognition has become a shared topic in computer science, bio-engineering and cognitive neuropsychology. On this basis, this dissertation explores several key issues in emotion recognition based on multi-modal physiological signals in the VR environment. The main research contents are as follows:

(1) Since current VR Head-Mounted Displays (HMDs) lack a physiological perception function and therefore cannot support research on VR emotion recognition, this dissertation proposes a multi-modal physiological signal acquisition method for VR HMDs. The method synchronously acquires 2-channel Electroencephalograph (EEG) signals and 3-channel peripheral physiological (PPS) signals by deploying electrodes and sensors on the contact area between the HMD and the wearer's face. The emotional physiological perception HMD designed with this method is integrated, low-power and high-performance. In addition to displaying real-time physiological waveforms, the accompanying human-computer interaction interface provides an experimental paradigm setting interface and emotion recognition modeling functions for VR emotion induction, thereby supporting the whole process from emotion induction to multi-modal physiological signal
acquisition and VR physiology-emotion modeling.

(2) When physiological signals are acquired with the emotional physiological perception HMD, the wearer's body movements, head swings, eye blinks and other irregular motions leave gaps between the Photoplethysmographic (PPG) sensors and the skin, so the PPG signal is corrupted by strong motion artifacts. To solve this problem, this dissertation proposes two algorithms that eliminate motion artifacts in PPG signals and estimate pulse rate with peak tracking and verification methods. The first algorithm (WTD-PRLS) combines the advantages of wavelet threshold denoising and parallel Recursive Least Squares (RLS) adaptive filtering, and significantly enhances the spectral peaks corresponding to the true pulse rate through logic fusion. The second algorithm (DRS-RLS) builds on WTD-PRLS with an RLS adaptive filtering scheme based on dynamic reference signals: in each time window, the acceleration component most strongly correlated with the PPG signal is dynamically selected as the reference signal for RLS adaptive filtering, so as to eliminate motion artifacts in the PPG signal. To verify the algorithms in practical applications, this dissertation uses the emotional physiological perception HMD to acquire 2-channel PPG signals, tri-axial acceleration signals and an ECG signal on the forehead, and constructs a forehead PPG exercise dataset in a VR scene. Verification on both the self-built and public datasets shows that WTD-PRLS and DRS-RLS achieve lower pulse rate estimation error and shorter running time.

(3) Given the current lack of emotion recognition datasets based on multi-modal physiological signals in the VR environment, this dissertation first designs and, through evaluation, selects 13 VR scenes that reliably induce the target emotions, and then records the multi-modal physiological signals of 30 subjects under the designed emotion
induction experimental paradigm, finally forming a well-curated VR physiological-emotion dataset. This dataset provides solid data support for subsequent emotion recognition based on multi-modal physiological signals in the VR environment.

(4) This dissertation then studies emotion recognition based on multi-modal physiological signals in the VR environment. First, a VR physiological-emotion recognition model is proposed. Through preprocessing, feature extraction and feature standardization of the multi-modal physiological signals, and through selection of the time window and classification algorithm, the model achieves binary classification accuracies of 87.59% (valence) and 88.31% (arousal), and a four-class (valence-arousal) accuracy of 82.76%. Second, the dissertation selects an optimal feature subset with two methods: the emotion recognition rate of each single feature, and correlation matrix analysis between features. The optimized model achieves binary classification accuracies of 89.86% (valence) and 90.11% (arousal), and a four-class accuracy of 84.93%. These results show that the proposed VR physiology-emotion model reaches high emotion recognition accuracy with fewer physiological signal channels, which provides a novel solution for applying wearable devices to emotion recognition. In addition, the dissertation explores the redundancy of EEG channels and the enhancement offered by peripheral physiological signals. The results show that it is feasible to replace whole-brain EEG channels with a subset of EEG channels for emotion recognition, and to enhance the 2-channel EEG signals from the hairless forehead region with 3-channel peripheral physiological signals. Finally, to address the domain knowledge requirements and time-consuming feature extraction of manual feature engineering, this dissertation proposes a deep learning model for physiological-emotion recognition, EPDCANet, which integrates a convolutional
neural network with a dense co-attention mechanism. The model extracts features from the 2-channel EEG signals with EEGNet and from the 3-channel PPS signals with PPSNet, and then extracts co-attention features between the two feature types through dense co-attention for emotion classification. The results show that, compared with existing deep learning models in the literature, EPDCANet achieves higher emotion recognition accuracy on both the binary (valence and arousal) and four-class classification tasks.
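The dynamic-reference-selection idea behind DRS-RLS can be illustrated with a minimal numpy sketch. This is not the dissertation's implementation: the filter order, forgetting factor and window handling are illustrative assumptions, and the function names (`rls_filter`, `drs_rls_window`) are hypothetical. The sketch shows the two steps the abstract describes: per-window selection of the acceleration axis most correlated with the PPG signal, then RLS adaptive filtering with that axis as the reference to cancel the motion-correlated component.

```python
import numpy as np

def rls_filter(d, x, order=8, lam=0.999, delta=0.01):
    """Standard RLS adaptive filter: remove from d the component that can be
    predicted linearly from the reference x. Returns the a-priori error
    e = d - w.x, i.e. d with the reference-correlated artifact cancelled."""
    n = len(d)
    w = np.zeros(order)             # FIR weight vector
    P = np.eye(order) / delta       # inverse correlation matrix estimate
    e = np.zeros(n)
    xbuf = np.zeros(order)          # tapped delay line of the reference
    for i in range(n):
        xbuf = np.roll(xbuf, 1)
        xbuf[0] = x[i]
        k = P @ xbuf / (lam + xbuf @ P @ xbuf)   # gain vector
        e[i] = d[i] - w @ xbuf                   # a-priori error
        w = w + k * e[i]                         # weight update
        P = (P - np.outer(k, xbuf @ P)) / lam    # inverse-correlation update
    return e

def drs_rls_window(ppg, acc):
    """One time window of DRS-style filtering. acc is (n, 3) tri-axial
    acceleration; the axis most correlated (in absolute value) with the
    PPG window is selected as the RLS reference signal."""
    corrs = [abs(np.corrcoef(ppg, acc[:, j])[0, 1]) for j in range(3)]
    ref = acc[:, int(np.argmax(corrs))]
    return rls_filter(ppg, ref)
```

In a full pipeline this would run window by window, so the reference axis can change as the dominant motion direction changes, which is the point of the dynamic selection.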
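The cross-modal fusion step of a co-attention model can also be sketched in a few lines of numpy. This is only a generic scaled-dot-product co-attention between two feature sequences, under the assumption that each branch (EEGNet for EEG, PPSNet for PPS) has already produced feature maps of a common width; the shapes and the final pooling are illustrative, not EPDCANet's actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    z = x - x.max(axis=axis, keepdims=True)
    ez = np.exp(z)
    return ez / ez.sum(axis=axis, keepdims=True)

def co_attention(E, P):
    """E: (Te, d) features from the EEG branch; P: (Tp, d) features from
    the PPS branch. Each modality attends over the other via a shared
    affinity matrix, and the attended summaries are concatenated into a
    single vector for the emotion classifier head."""
    d = E.shape[1]
    A = E @ P.T / np.sqrt(d)            # (Te, Tp) affinity between modalities
    P_for_E = softmax(A, axis=1) @ P    # each EEG step attends over PPS steps
    E_for_P = softmax(A.T, axis=1) @ E  # each PPS step attends over EEG steps
    fused = np.concatenate([(E + P_for_E).mean(axis=0),
                            (P + E_for_P).mean(axis=0)])
    return fused                        # (2d,) fused representation
```

In a trained network the affinity would involve learned projections and the fused vector would feed a dense classification layer; the sketch only shows how attention lets each modality's features be re-weighted by the other's.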
Keywords/Search Tags:Emotion Recognition, Virtual Reality, Multi-modal Physiological Signals, Motion Artifacts, Adaptive Filtering