
Research On Affective Computing Based On Human Behavior

Posted on: 2017-09-30
Degree: Doctor
Type: Dissertation
Country: China
Candidate: J L Liang
Full Text: PDF
GTID: 1318330515967068
Subject: Computer application technology
Abstract/Summary:
With the development of computer science and personalized human-computer interaction, affective computing has become increasingly important in human-computer interaction. Research on human-computer interaction based on understanding and expressing emotion has attracted wide attention across many fields. As the foundation of affective computing, the study of emotion recognition and understanding, which gives computers the ability to perceive emotion, is of great importance. Facial expressions and body language are the two main channels through which human beings express emotion, so both are used in emotion recognition research. In this paper, multimodal emotional expressions are analyzed and understood using newly proposed methods and models. The main work of this paper is the recognition and understanding of emotion-related behaviors. Furthermore, emotional intention understanding is investigated on the basis of basic emotional behavior recognition. The main studies and contributions are as follows:

1. Facial expression recognition is studied first, and an improved decision-forest model for facial expression recognition is proposed. Facial expressions are mainly conveyed by a few discriminative facial regions of interest. In this paper, we study the discriminative regions for facial expression recognition from video sequences. The goal of our method is to explore and exploit the discriminative regions for different facial expressions. For this purpose, we propose a Hidden Markov Model (HMM) Decision Forest (HMMDF). In this framework, each tree node is a discriminative classifier constructed by combining weighted HMMs. Motivated by the theory of "elimination by aspects" from psychology, the HMMs on each node are modeled separately for the facial regions that have discriminative power for facial expressions, and are further weighted adaptively. Extensive experiments validate the effectiveness of discriminative regions for different facial expressions, and the experimental results show that the proposed HMMDF framework yields dramatic improvements in facial expression recognition compared to existing methods.
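The abstract describes the HMMDF only at the architectural level, so the following is a minimal sketch of the node-level idea as we read it: at each tree node, one HMM is trained per discriminative facial region and per branch, the per-region log-likelihoods are combined with adaptively learned weights, and the node routes a sequence by the weighted score. The class structure, the use of the hmmlearn library, and names such as HMMDFNode and region_sequences are illustrative assumptions, not the dissertation's code.

```python
# Minimal sketch (not the dissertation's actual implementation) of a single
# HMMDF node: one GaussianHMM per (facial region, branch) pair, combined by
# adaptively learned region weights. Assumes: pip install hmmlearn numpy
import numpy as np
from hmmlearn import hmm

class HMMDFNode:
    """One decision-forest node: weighted per-region HMMs for a binary split."""

    def __init__(self, regions, n_states=3):
        self.regions = regions                      # e.g. ["eyes", "mouth"]
        # One HMM per (region, branch); branches 0/1 are the node's two classes.
        self.models = {(r, b): hmm.GaussianHMM(n_components=n_states)
                       for r in regions for b in (0, 1)}
        self.weights = {r: 1.0 / len(regions) for r in regions}

    def fit(self, sequences, labels):
        """sequences: {region: list of (T_i, d) feature arrays}; labels: 0/1."""
        for r in self.regions:
            for b in (0, 1):
                seqs = [s for s, y in zip(sequences[r], labels) if y == b]
                self.models[(r, b)].fit(np.vstack(seqs),
                                        lengths=[len(s) for s in seqs])
        self._adapt_weights(sequences, labels)

    def _adapt_weights(self, sequences, labels):
        """Weight each region by how well its HMM pair separates the branches."""
        acc = {}
        for r in self.regions:
            correct = sum(
                int((self.models[(r, 1)].score(s)
                     > self.models[(r, 0)].score(s)) == y)
                for s, y in zip(sequences[r], labels))
            acc[r] = correct / len(labels)
        total = sum(acc.values()) or 1.0
        self.weights = {r: a / total for r, a in acc.items()}

    def route(self, seq_by_region):
        """Route one video to branch 0 or 1 via weighted region log-likelihoods."""
        score = sum(
            self.weights[r] * (self.models[(r, 1)].score(seq_by_region[r])
                               - self.models[(r, 0)].score(seq_by_region[r]))
            for r in self.regions)
        return int(score > 0)
```

The adaptive weighting here is a simple accuracy-based heuristic chosen for clarity; the paper's actual weighting scheme may differ.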
2. Body language recognition for emotion recognition is studied as well. Affective interaction between human beings is analyzed, with a focus on recognizing human interactions related to emotion. On this basis, the problem of representing interaction features is addressed. We propose a two-layer feature description structure that represents spatio-temporal motion features and context features hierarchically. On the lower layer, local features for motion and interactive context are extracted separately. We first characterize local spatio-temporal trajectories as motion features; instead of hand-crafted features, a new hierarchical spatio-temporal trajectory coding model is presented to learn and represent the local spatio-temporal trajectories. To further exploit the spatial and temporal relationships in interactive activities, we then propose an interactive context descriptor that extracts local interactive contours from frames; these contours implicitly encode contextual spatial and temporal information. On the higher layer, semi-global features are built from the local features encoded on the lower layer, and a spatio-temporal segment clustering method is designed for feature extraction on this layer. This method takes the spatial relationships and temporal order of local features into account and creates mid-level motion features and mid-level context features.

3. On the basis of the above studies, further research on emotional intention expressed through facial expressions and body language is presented. The same behaviors often appear in different human interactions, yet the intentions behind them differ because the interaction context differs. Therefore, a new model is constructed for intention understanding during human interaction based on facial expressions and body language. In this model, the facial expressions of the two persons in an interaction serve as context for each other, and the interactive body movements are also incorporated. Based on these behavioral elements, intentions during human interactions are recognized (a hedged sketch of this fusion idea follows the abstract).

4. Furthermore, a multimodal behavior dataset is constructed to validate the method for emotional intention recognition based on facial expressions and body language. The dataset includes 283 video clips from 32 movies covering 4 types of emotional intention: celebrating, greeting, comforting, and thanking. In addition, the properties of the body movements and facial expressions in the video clips are annotated.

In summary, on the one hand this study addresses low-level affective computing through basic emotion recognition; on the other hand, further research on the semantic layer of affective computing is carried out on top of basic emotional expressions such as facial expressions and body movements. This paper aims to study emotion understanding at different layers, so as to give computers a stronger ability to understand emotion and to make human-computer interaction more harmonious.
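Contribution 3 is likewise described only architecturally. The sketch below shows one simple way to realize "each person's facial expression is context for the other": the intention classifier sees both persons' expression evidence together with interaction-motion features, fused into a single vector. The stub feature names (expr_a, expr_b, motion), the use of scikit-learn, and the choice of logistic regression are all illustrative assumptions, not the author's model.

```python
# Hedged sketch of the mutual-context fusion idea from contribution 3: the
# classifier is trained on both persons' expression scores plus interactive
# motion features, so each person's expression acts as context for the other.
# Assumes: pip install scikit-learn numpy; feature extractors are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# The four intention classes annotated in the dataset (contribution 4).
INTENTIONS = ["celebrating", "greeting", "comforting", "thanking"]

def fuse_features(expr_a, expr_b, motion):
    """Concatenate person A's and person B's expression score vectors with
    interaction-motion features into one multimodal feature vector."""
    return np.concatenate([expr_a, expr_b, motion])

def train_intention_model(expr_a_list, expr_b_list, motion_list, y):
    """Fit a multiclass classifier on fused vectors from annotated clips.
    y holds integer intention ids indexing into INTENTIONS."""
    X = np.stack([fuse_features(a, b, m)
                  for a, b, m in zip(expr_a_list, expr_b_list, motion_list)])
    return LogisticRegression(max_iter=1000).fit(X, y)

def predict_intention(clf, expr_a, expr_b, motion):
    """Predict the intention label for one interaction clip."""
    idx = clf.predict(fuse_features(expr_a, expr_b, motion)[None, :])[0]
    return INTENTIONS[idx]
```

A linear classifier is used only to keep the sketch self-contained; any multiclass model could stand in its place, and the key point is that the fused input couples the two persons' expressions with the interactive body movements.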
Keywords/Search Tags:Facial expression recognition, Human interaction recognition, Multimodal fusion, Affective computing, Human-Computer Interaction