
Interactive Emotion Model And Control Method Of Humanoid Robot With Facial Expressions

Posted on: 2017-05-28 | Degree: Doctor | Type: Dissertation
Country: China | Candidate: J Z Yan | Full Text: PDF
GTID: 1108330485450018 | Subject: Control Science and Engineering
Abstract/Summary:
Artificial intelligence (AI) is an important milestone in the development of science and technology, and the humanoid emotional robot in particular has long carried human fascination. To serve human beings well, a robot needs not only humanoid facial features but also realistic, anthropomorphic emotions for effective interaction. Building on the existing conditions of our laboratory, this thesis first studies the input of human emotion, i.e., acquiring the user's basic or complex emotional information. Second, we use Zipf's law to design the key aspects of interactive dialogue automatically and, following PageRank, propose a DialRank algorithm to build a three-dimensional intelligent question-answering dialogue system that completes the emotion conversion and evolution stage. Third, guided by the uncanny valley hypothesis and the Facial Action Coding System (FACS), we design and build a humanoid robot with facial expressions. Finally, we develop a behavioral-consistency cooperative control method that coordinates the robot's speech, actions, and emotions.

Generally speaking, human-robot emotional interaction can be divided into three parts: input of human emotion information, emotion conversion and evolution between human and robot, and output of robot emotional expression. These are the problems addressed in this thesis. The main work and contributions are as follows.

(1) For the input of human emotion, this thesis uses a domain adaptation algorithm to improve the recognition rate of human facial expressions. For facial emotion extraction in human-computer interaction, we did not rely on a conventional database; instead, based on human facial features, we chose samples spanning 76 years of age, divided into four stages: child, young, middle-aged, and old.
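As a toy sketch of why cross-age recognition motivates domain adaptation: below, synthetic features for a "target" age group are shifted relative to the "source" group, a nearest-centroid classifier stands in for the SVM, and a simple mean-alignment step stands in for adaptation. Everything here (the data, the classifier, the alignment step) is an illustrative assumption, not the thesis's DASVM.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_group(shift):
    """Two expression classes; `shift` models an age-related feature drift."""
    c0 = rng.normal([0.0, 0.0], 0.5, size=(100, 2)) + shift
    c1 = rng.normal([3.0, 3.0], 0.5, size=(100, 2)) + shift
    X = np.vstack([c0, c1])
    y = np.array([0] * 100 + [1] * 100)
    return X, y

Xs, ys = make_group(np.zeros(2))           # source age group
Xt, yt = make_group(np.array([2.5, 2.5]))  # target age group (shifted)

# Nearest-centroid classifier trained on the source group only.
centroids = np.stack([Xs[ys == c].mean(axis=0) for c in (0, 1)])

def predict(X):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

acc_raw = (predict(Xt) == yt).mean()

# A minimal adaptation step: align the target group's feature mean to the
# source mean before classifying (a crude first-order domain adaptation).
Xt_aligned = Xt - Xt.mean(axis=0) + Xs.mean(axis=0)
acc_adapted = (predict(Xt_aligned) == yt).mean()

print(f"cross-age accuracy without adaptation: {acc_raw:.2f}")
print(f"cross-age accuracy with mean alignment: {acc_adapted:.2f}")
```

With the distribution shift in place, the source-trained classifier degrades badly on the other age group, and even this crude alignment recovers most of the accuracy, which is the gap a method like DASVM targets.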
Because an SVM alone could not achieve a high recognition rate across ages, we propose an improved algorithm, DASVM, to raise the overall recognition rate. Finally, under the premise of high expression recognition, we use expression potential decomposition to establish an expression space over the six basic emotions; experiments verify this work.

(2) For emotion conversion and evolution: in traditional human-computer interaction, dialogue is often limited to pre-set databases, and users quickly lose interest. This thesis combines Zipf's law and the PageRank algorithm to propose an automatic emotional dialogue matching model, DialRank. Using Zipf's law, we calculate the correlation according to keyword counts and their order and obtain the dialogue association statement Em; we then run a DialRank iteration over the links between dialogue statements, which follow the "question - answer" structure of the library, and obtain a value DR for each statement. When the robot receives a human's expression information and question, it answers with existing, well-ranked statements from other users, which improves satisfaction, as confirmed by experiments.

(3) For the output of robot emotional expression: the robot's appearance and expressions occupy a prime position in human-computer interaction. Based on the uncanny valley hypothesis and FACS coding, we propose a facial expression display and control method, designing 15 basic control points to produce humanoid expressions.
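As a toy illustration of driving expressions through a small set of control points: each basic emotion activates a subset of the 15 points, and a complex emotion is a clipped weighted blend. The specific point indices and activation values below are invented placeholders, not the thesis's actual FACS-derived assignments.

```python
# 15 facial control points, indexed 0-14 (indices are hypothetical).
N_POINTS = 15

BASIC_EXPRESSIONS = {
    # emotion: {control_point_index: activation in [0, 1]}
    "happiness": {3: 0.9, 4: 0.9, 10: 0.6},
    "sadness":   {0: 0.7, 1: 0.7, 11: 0.5},
    "anger":     {0: 0.9, 2: 0.8, 12: 0.7},
    "fear":      {1: 0.8, 5: 0.7, 13: 0.6},
    "surprise":  {5: 1.0, 6: 1.0, 14: 0.8},
    "disgust":   {2: 0.6, 7: 0.8, 9: 0.5},
}

def blend(weights):
    """Blend basic expressions into a complex one: for each control point,
    sum the weighted activations and clip to the [0, 1] actuator range."""
    out = [0.0] * N_POINTS
    for emotion, w in weights.items():
        for point, a in BASIC_EXPRESSIONS[emotion].items():
            out[point] = min(1.0, out[point] + w * a)
    return out

# e.g. a "bittersweet" mix of happiness and sadness
vector = blend({"happiness": 0.6, "sadness": 0.4})
print(vector)
```

The resulting 15-element vector is what a control layer would translate into actuator commands; composing complex emotions from the six basic ones mirrors the blending idea described above.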
On this hardware we further build a control model for the robot's head movements, divided into an input layer, a responsive layer, a motion-cell layer, and an output layer; the model optimizes the assignment between expression control regions and control points, so that the head can show the six basic expressions as well as complex emotions composed from them. In tests, our robot head received high satisfaction ratings from the testers.

(4) In the smart home scenario, coordinating the robot's voice, motion control, and facial implementation is a major challenge. Drawing on the facial emotion recognition and expression control realized in the three parts above, we propose a behavioral-consistency cooperative control model for emotional interaction. After obtaining the human's emotion, we use the PLOSA algorithm to generate commands with different speeds, tones, timbres, and volumes, synthesize emotional speech sentences, and match actions in the control system. According to individual personality and emotional attenuation, the model is refined into a general model and a personal model, unifying voice, movement, and facial expressions in a trinity cooperative control model, which is verified by the corresponding experiments.
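The PageRank-style scoring behind DialRank, described in contribution (2), can be sketched as an iteration over a "question - answer" link graph. The toy graph, damping factor, and function name below are assumptions for illustration; the thesis's actual links come from its Zipf-law keyword correlation.

```python
# adjacency: statement -> statements it links to (a question links to its
# candidate answers; an answer links to follow-up questions)
links = {
    "q1": ["a1", "a2"],
    "q2": ["a2"],
    "a1": ["q2"],
    "a2": ["q1"],
}

def dialrank(links, damping=0.85, iters=50):
    """PageRank-style iteration: each statement distributes its score
    evenly over its outgoing links; a damping term keeps scores mixed."""
    nodes = list(links)
    dr = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for src, outs in links.items():
            for dst in outs:
                new[dst] += damping * dr[src] / len(outs)
        dr = new
    return dr

scores = dialrank(links)
# Statements with more (and better-ranked) incoming links get a higher DR;
# the robot would prefer the highest-scoring candidate answer.
best = max(scores, key=scores.get)
print(scores, best)
```

Here the answer linked from both questions accumulates the highest DR value, which is the behavior that lets the robot reuse well-connected existing answers instead of a fixed pre-set reply.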
Keywords/Search Tags: Human-robot emotional interaction, humanoid facial expression robot, uncanny valley, FACS, domain adaptation, Zipf law, cooperative control model