
Research Of Multi-modal Emotion Recognition Based On Physiological Signal

Posted on: 2020-12-22
Degree: Master
Type: Thesis
Country: China
Candidate: W P Zhao
Full Text: PDF
GTID: 2428330578471474
Subject: Computer application technology
Abstract/Summary:
Affective computing is a bridge to natural human-computer interaction, and emotion recognition is a crucial step in affective computing. According to how emotions are expressed, emotion recognition can be divided into recognition based on verbal behavior and recognition based on non-verbal behavior. For verbal behavior, researchers mostly analyze a user's emotional state from facial expressions and speech. For non-verbal behavior, physiological signals also carry a large amount of emotional information and are not easily distorted by personal factors; for example, many people deliberately hide their negative emotions, yet their physiological responses remain. Emotion recognition based on non-verbal information has therefore received increasing attention.

Emotion recognition based on non-verbal information can be further divided into single-modal and multi-modal emotion recognition. Because multi-modal emotion recognition can exploit multiple signals to identify a user's emotion from several aspects, its results are more objective and accurate, and it has become a focus of research. Among the modalities, EEG signals reflect emotional changes in the user's central nervous system, while peripheral physiological signals reflect emotional responses of the autonomic nervous system, so these two kinds of signals are widely used. In practice, however, EEG signals require professional equipment, and their acquisition is difficult and expensive. For this reason, EEG signals are better suited as auxiliary information for improving recognition performance.

This thesis therefore uses EEG signals to assist peripheral physiological signals in multi-modal emotion recognition, based on canonical correlation analysis (CCA), discriminative canonical correlation analysis (DCCA), kernelized discriminative canonical correlation analysis (KDCCA), and deep discriminative canonical correlation analysis (DDCCA). In the training phase, features are first extracted from the peripheral physiological signals and the EEG signals; using the EEG features as auxiliary information, a new emotionally discriminative space is constructed with these correlation analysis techniques, and machine learning methods are then used to build the emotion model. In the testing phase, only the peripheral physiological signals are used.

Two public datasets, DEAP and DECAF, are used in the experiments. The first part covers dataset preprocessing and feature extraction, and the second part covers model training and analysis of the experimental results. Statistical analysis of multiple experimental results on the two datasets shows that using EEG features as auxiliary information for peripheral physiological signals via DDCCA achieves better recognition performance than several other feature fusion techniques, with consistent results across repeated experiments. Compared with the previous method, accuracy increases by 9.31% and the F1 score increases by 0.1467. The proposed method achieves better emotion recognition performance and can handle the nonlinear problems encountered in real-life emotion recognition.
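For reference, the classical CCA objective underlying these methods (standard CCA, not the discriminative variants specific to this thesis) finds projections of the two views that are maximally correlated:

    \max_{w_x,\,w_y} \rho = \frac{w_x^\top C_{xy} w_y}{\sqrt{(w_x^\top C_{xx} w_x)\,(w_y^\top C_{yy} w_y)}}

where X would be the peripheral features, Y the EEG features, and C_{xx}, C_{yy}, C_{xy} the within-view and cross-view covariance matrices. As their names suggest, DCCA adds class-discriminative constraints, KDCCA kernelizes the projections, and DDCCA replaces the linear maps with deep networks, but the thesis abstract does not spell out those formulations.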
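The following is a minimal sketch of the train/test asymmetry described above, assuming scikit-learn's CCA and an SVM classifier (both illustrative choices, not confirmed by the thesis) and assuming feature extraction has already produced arrays X_periph, X_eeg, and labels y:

    from sklearn.cross_decomposition import CCA
    from sklearn.svm import SVC

    def train(X_periph, X_eeg, y, n_components=10):
        """Training phase: EEG features serve only as auxiliary information."""
        # Hypothetical inputs: X_periph and X_eeg are (n_samples, n_features)
        # arrays from DEAP/DECAF preprocessing; y holds the emotion labels.
        cca = CCA(n_components=n_components)
        cca.fit(X_periph, X_eeg)                  # learn correlated projections
        Z_periph, _ = cca.transform(X_periph, X_eeg)
        clf = SVC(kernel="rbf")                   # illustrative classifier choice
        clf.fit(Z_periph, y)
        return cca, clf

    def predict(cca, clf, X_periph_test):
        """Testing phase: only peripheral physiological signals are available."""
        Z_test = cca.transform(X_periph_test)     # project without EEG
        return clf.predict(Z_test)

The discriminative, kernel, and deep variants (DCCA, KDCCA, DDCCA) would replace the CCA step while keeping the same asymmetry: both modalities at training time, peripheral signals only at test time.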
Keywords/Search Tags:Emotion Recognition, Canonical Correlation Analysis, Discriminative Canonical Correlation Analysis, Kernelized Discriminative Canonical Correlation Analysis, Deep Discriminative Canonical Correlation Analysis, Peripheral Physiological Signals