
Emotion Recognition Based On Fuzzy Integral Fusion Of Multi-modal Physiological Signal

Posted on: 2022-05-30  Degree: Master  Type: Thesis
Country: China  Candidate: J Gao  Full Text: PDF
GTID: 2518306491985739  Subject: Engineering · Software Engineering
Abstract/Summary:
At present, emotion recognition based on physiological signals has become a research hotspot. Because physiological signals are highly complex, non-stationary, and strongly affected by individual differences, emotion recognition models built on a single physiological modality tend to generalize poorly. Since human emotional responses and emotion regulation simultaneously trigger changes in multiple physiological processes, such as shortness of breath and skin sweating, fusing several types of physiological signals can improve both the performance and the generalization ability of emotion recognition models.

In the fusion modeling of multi-modal physiological signals, decision-level fusion has become one of the main strategies in the fusion hierarchy because it effectively exploits the complementarity between modalities, preserves the strengths of each single modality, and is insensitive to data synchronization. In decision-level fusion, integration is generally performed by assigning weights to the different modalities, and its effectiveness usually depends on solving two problems: (1) how to determine the optimal weights of the different modalities so as to maximize the benefit of multi-modal data; (2) how to set and adjust the weights to alleviate the inconsistency of per-sample weights caused by individual differences. To address these problems, this thesis explores static optimal weight setting and dynamic weight adjustment, and proposes three weight learning and setting methods based on the fuzzy integral. The main work and contributions of this thesis are as follows:

(1) To optimize the weights of the different modalities, a multi-modal fusion method that sets weights via the Choquet integral is proposed. The confusion matrix generated on the training samples is used to judge how well each modality performs, the fuzzy measure table of the fuzzy integral is used to encode these modal weights numerically, and the Choquet integral of the decision-profile (DP) matrix of a test sample with respect to the fuzzy measure table yields the final decision value for each emotion category. This method exploits the error-correcting ability of the fuzzy integral in signal fusion and assigns different weights to the modalities by comparing their performance on the samples, allowing the weights to be optimized more comprehensively and specifically.

(2) To address the inconsistency of modal weights across samples caused by individual differences, an adaptive weight setting method based on the Choquet integral is proposed. The DP matrix generated for a test sample is used to judge how much each modality contributes to the separability of that sample, with information entropy as the measure; the resulting values are written into the fuzzy measure table, and the Choquet integral of the DP matrix with respect to this table gives the decision value for each emotion category. From the perspective of modal separability, this method assigns higher weights to modalities with higher separability and predicts emotions adaptively, showing better flexibility in emotion prediction.
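To make the fusion procedure concrete, the following Python sketch shows Choquet-integral decision-level fusion of a per-sample decision-profile (DP) matrix. It is a minimal illustration under stated assumptions: the fuzzy measure is taken to be a Sugeno λ-measure built from per-modality densities, with static densities derived from training accuracy and adaptive densities derived from the entropy of the DP rows. The abstract does not give the exact construction of the thesis's fuzzy measure tables, so the density formulas and all names here (sugeno_lambda, choquet, fuse, entropy_density) are hypothetical.

```python
# Illustrative sketch of Choquet-integral decision-level fusion.
# The Sugeno lambda-measure and the density constructions below are
# assumptions for illustration, not the thesis's exact formulation.
import numpy as np
from scipy.optimize import brentq


def sugeno_lambda(densities):
    """Solve prod(1 + lam * g_i) = 1 + lam for the Sugeno lambda-measure."""
    g = np.asarray(densities, dtype=float)
    if abs(g.sum() - 1.0) < 1e-9:            # additive case: lambda = 0
        return 0.0
    f = lambda lam: np.prod(1.0 + lam * g) - (1.0 + lam)
    if g.sum() > 1.0:                         # root lies in (-1, 0)
        return brentq(f, -1.0 + 1e-9, -1e-9)
    return brentq(f, 1e-9, 1e9)               # root lies in (0, inf)


def choquet(values, densities, lam):
    """Choquet integral of per-source supports w.r.t. the lambda-measure."""
    order = np.argsort(values)[::-1]          # sources sorted by descending support
    h = np.asarray(values, dtype=float)[order]
    g = np.asarray(densities, dtype=float)[order]
    result, mu = 0.0, 0.0
    for i in range(len(h)):
        # mu(A_i) for A_i = top-i sources, built recursively for the lambda-measure
        mu = g[i] if i == 0 else mu + g[i] + lam * g[i] * mu
        h_next = h[i + 1] if i + 1 < len(h) else 0.0
        result += (h[i] - h_next) * mu
    return result


def fuse(dp, densities):
    """dp: (n_modalities, n_classes) decision-profile matrix for one sample.
    Returns the class with the largest fused (Choquet) support."""
    lam = sugeno_lambda(densities)
    scores = [choquet(dp[:, c], densities, lam) for c in range(dp.shape[1])]
    return int(np.argmax(scores)), scores


# Static weights (assumed): densities from per-modality training accuracy.
train_acc = np.array([0.72, 0.64, 0.58])               # e.g. EEG, GSR, respiration
static_density = train_acc / train_acc.sum() * 0.9     # hypothetical scaling, sum != 1


# Adaptive weights (assumed): densities from the entropy of each DP row.
def entropy_density(dp, eps=1e-12):
    p = dp / (dp.sum(axis=1, keepdims=True) + eps)
    h = -(p * np.log(p + eps)).sum(axis=1) / np.log(dp.shape[1])
    return np.clip(1.0 - h, eps, 1.0 - eps)   # low entropy -> high separability -> high weight


dp = np.array([[0.70, 0.30],                  # toy 3-modality, 2-class decision profile
               [0.40, 0.60],
               [0.55, 0.45]])
print(fuse(dp, static_density))
print(fuse(dp, entropy_density(dp)))
```

In this sketch, a more peaked (lower-entropy) DP row indicates higher separability for that modality and therefore receives a larger density, which mirrors the adaptive weighting idea of contribution (2); the accuracy-based densities mirror the static, confusion-matrix-driven weighting of contribution (1).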
(3) The proposed methods are verified experimentally on the public multi-modal emotion recognition datasets DEAP and DECAF. The results show that, compared with traditional decision-level fusion methods such as majority voting, average voting, and Bayesian fusion, the static, adaptive, and integrated fusion methods based on the fuzzy integral proposed in this thesis perform better in individual emotion recognition. On the DEAP dataset, the proposed methods improve recognition accuracy by 11%-15% on the Valence dimension of the emotion model and by 6%-12% on the Arousal dimension; on the DECAF dataset, they improve accuracy by 5%-18% on Valence and by 8%-12% on Arousal. Compared with recent decision-level fusion methods proposed by other research groups, the proposed methods also show performance advantages as well as better scalability and applicability.

In summary, the fuzzy integral fusion model built on the Choquet integral in this thesis accounts for both the complementary relationship between modalities and the differences between individuals' modalities. When facing multi-source, heterogeneous physiological data, the proposed approach offers better scalability and stability, and provides a more flexible and stable way to fuse multi-modal physiological signals for emotion recognition.
Keywords/Search Tags: Emotion Recognition, Multimodal Physiological Signals, Decision-level Fusion, Choquet Integral, Adaptive Weight