
Research And Application Of Emotion Recognition Algorithm Based On Multi-modal Behaviour Data

Posted on: 2024-09-11    Degree: Master    Type: Thesis
Country: China    Candidate: Y Z Jiang    Full Text: PDF
GTID: 2568307064496684    Subject: Engineering
Abstract/Summary:
Affective computing has shown great promise in fields such as medical rehabilitation and teaching management, and emotion recognition based on multi-modal behavioral data has attracted the attention of many researchers. With the rapid development of smartphones and wearable devices, high-precision sensors have made it easier for researchers to collect information about people and their surroundings, opening new directions for emotion recognition research. Although emotion recognition based on multi-modal sensor data has been explored with some success, many challenges remain in real-world settings.

First, most traditional emotion recognition models are trained on data collected in laboratory environments; in the real world, sensor data is often of poor quality due to equipment and environmental limitations, so models built on laboratory data are difficult to apply. Second, user heterogeneity in sensor data is a serious obstacle to deployment: because emotional patterns differ from person to person, a model trained on one user's data often cannot be applied to another. Third, emotion recognition requires users to label their own emotions, but privacy concerns prevent many people from sharing their labeled data, creating data islands; new users therefore face a "cold start" problem, unable to draw on the large-scale data of others to train their own models.

To address these three problems, this thesis makes the following contributions:

(1) To address sensor data sparsity, this thesis proposes a set-based multi-modal fusion model that performs emotion recognition on multi-modal behavior data collected by smartphones. The model effectively mitigates the impact of data sparsity on neural networks and achieves end-to-end emotion recognition (a minimal sketch follows this abstract).

(2) To address user heterogeneity, this thesis proposes a multi-task learning approach that combines a shared model component with user-specific components. Each user is treated as a separate classification task, allowing personalized training that reflects each user's unique emotional patterns (see the second sketch below).

(3) To address data privacy protection and the "cold start" problem, this thesis designs a smartphone emotion recognition application based on federated learning, which transmits model parameters instead of raw data to protect user privacy (see the third sketch below).
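
As an illustration of contribution (1), here is a minimal sketch of a set-based multi-modal fusion model: each available modality is encoded independently and the encodings are pooled with a permutation-invariant operation, so a record with missing modalities still yields a prediction. The modality names, dimensions, and layer sizes below are illustrative assumptions, not the architecture described in the thesis.

    import torch
    import torch.nn as nn

    class SetFusionEmotionNet(nn.Module):
        """Treats whichever modalities are present as an unordered set:
        each is encoded separately, the encodings are mean-pooled, and a
        shared head predicts the emotion class. A sparse record simply
        contributes a smaller set instead of breaking the network."""

        def __init__(self, modality_dims, hidden=64, num_emotions=4):
            super().__init__()
            self.encoders = nn.ModuleDict({
                name: nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
                for name, dim in modality_dims.items()
            })
            self.head = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                      nn.Linear(hidden, num_emotions))

        def forward(self, sample):
            # sample: dict of modality name -> feature tensor, containing
            # only the modalities actually recorded for this sample.
            encoded = [self.encoders[name](x) for name, x in sample.items()]
            pooled = torch.stack(encoded).mean(dim=0)  # permutation-invariant
            return self.head(pooled)

    # Hypothetical modalities; the GPS stream is absent from this sample.
    model = SetFusionEmotionNet({"accelerometer": 32, "gps": 8, "app_usage": 16})
    logits = model({"accelerometer": torch.randn(32), "app_usage": torch.randn(16)})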
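
As an illustration of contribution (2), the following sketch pairs a shared feature extractor with one lightweight head per user, treating each user as a separate classification task. The layer sizes and user IDs are assumptions for illustration only.

    import torch
    import torch.nn as nn

    class MultiTaskEmotionNet(nn.Module):
        """Shared layers learn patterns common to all users; each user's
        own head adapts to that user's individual emotional patterns."""

        def __init__(self, in_dim, hidden, num_emotions, user_ids):
            super().__init__()
            self.shared = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
            self.user_heads = nn.ModuleDict(
                {uid: nn.Linear(hidden, num_emotions) for uid in user_ids})

        def forward(self, x, user_id):
            return self.user_heads[user_id](self.shared(x))

    # Training alternates over users: the shared parameters receive gradients
    # from every task, while each head sees only its own user's data.
    model = MultiTaskEmotionNet(in_dim=64, hidden=32, num_emotions=4,
                                user_ids=["u01", "u02", "u03"])
    loss = nn.functional.cross_entropy(model(torch.randn(8, 64), "u01"),
                                       torch.randint(0, 4, (8,)))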
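
As an illustration of contribution (3), here is a sketch of one federated training round: clients train locally, and only model parameters travel to the server, never raw sensor data or emotion labels; a new user can then bootstrap past the "cold start" by downloading the aggregated global model. The abstract only states that parameter transmission replaces raw-data transmission, so the FedAvg-style weighted averaging and the training-loop details here are assumptions.

    import copy
    import torch
    import torch.nn as nn

    def federated_round(global_model, client_loaders, epochs=1, lr=0.01):
        """One round: each client trains a copy of the global model on its
        own local data; only the updated parameters are returned and then
        averaged, weighted by local dataset size. Assumes all parameters
        are floating-point tensors."""
        states, sizes = [], []
        for loader in client_loaders:
            local = copy.deepcopy(global_model)  # data never leaves the client
            opt = torch.optim.SGD(local.parameters(), lr=lr)
            for _ in range(epochs):
                for x, y in loader:
                    opt.zero_grad()
                    nn.functional.cross_entropy(local(x), y).backward()
                    opt.step()
            states.append(local.state_dict())
            sizes.append(sum(len(y) for _, y in loader))
        total = sum(sizes)
        avg = {k: sum(n / total * s[k] for s, n in zip(states, sizes))
               for k in states[0]}
        global_model.load_state_dict(avg)  # new users download this aggregate
        return global_model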
Keywords/Search Tags: Affective Computing, Emotion Recognition, Multi-modal Data, Feature Fusion, Multi-task Learning, Federated Learning, Sparse Data, Data Heterogeneity