
Research On Facial Expression Recognition And Representation Method For Humanoid Robot

Posted on: 2017-06-03
Degree: Doctor
Type: Dissertation
Country: China
Candidate: Z Huang
GTID: 1318330512468665
Subject: Computer application technology

Abstract/Summary:
As intelligent machines, humanoid robots are expected not only to look like humans but also to possess human-like abilities for emotional perception and emotional expression. As is well known, facial expression is both the most important carrier of emotion perception and the most immediate, most visible channel for emotional expression. Research on facial expression recognition and representation methods for humanoid robots therefore has significant theoretical value for improving emotional interaction ability, and practical value for moving humanoid robots toward real-world use.

This thesis is organized around two themes: an emotionally perceptive brain and an expressive face. Several key issues are studied in depth, including multi-pose facial expression feature description, regional feature fusion, head pose estimation, sequential feature extraction, and facial expression representation. The main research contents and achievements are as follows:

(1) To endow a humanoid robot with an emotional brain for natural human-computer interaction, two multi-pose facial expression recognition methods based on regional feature fusion are proposed. First, for multi-pose feature extraction, a descriptor combining the active appearance model (AAM) with histograms of oriented gradients (HOG) is constructed: multi-pose templates improve the accuracy of feature-point localization, while regional HOG features better capture local expression details. Second, for regional feature fusion and classification, since expression cues concentrate around the eyebrows, eyes, mouth, and similar regions, a regional feature-level fusion method based on fuzzy c-means clustering is proposed on top of the regional multi-pose expression features.
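As a rough illustration of the fuzzy c-means step used in the feature-level fusion, the following is a minimal numpy sketch; the toy 2-D data and all variable names are illustrative assumptions, not the thesis's actual regional features:

```python
import numpy as np

def fuzzy_c_means(X, n_clusters=2, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means: returns cluster centers and the membership
    matrix U (n_samples x n_clusters), with each row of U summing to 1."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], n_clusters))
    U /= U.sum(axis=1, keepdims=True)          # valid fuzzy memberships
    for _ in range(n_iter):
        Um = U ** m                            # fuzzified memberships
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # distance of every sample to every center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)                  # avoid division by zero
        inv = d ** (-2.0 / (m - 1.0))          # standard FCM update
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

# toy example: two well-separated 2-D blobs standing in for regional features
X = np.vstack([np.random.default_rng(1).normal(0.0, 0.1, (20, 2)),
               np.random.default_rng(2).normal(5.0, 0.1, (20, 2))])
centers, U = fuzzy_c_means(X, n_clusters=2)
```

In the thesis's setting the rows of `X` would be regional expression feature vectors rather than toy points, and the soft memberships in `U` would weight each region's contribution in the fused feature.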
Moreover, a regional decision-level fusion strategy based on Dempster-Shafer evidence theory is proposed to weigh the credibility and support of each regional feature. Improving the expression classification rate through multi-pose feature extraction and regional fusion strategies is the innovation of this work in robot facial expression recognition.

(2) Beyond accurate expression perception, a facial expression representation method based on a single-frame image is proposed to give the humanoid robot an expressive face. First, a relevance vector machine (RVM) implements the nonlinear mapping from head features to pose-control servos, coordinating rigid head motion with non-rigid facial motion. Second, a forward kinematics model based on the energy-conservation principle is constructed to map the servo control space into the expression shape space, and single-frame expression representation is achieved by optimizing a weighted objective function. Accurate head pose estimation together with lifelike imitation of facial expressions is the innovation of this work in single-frame expression representation.

(3) To address the low similarity and servo-hopping problems that arise when the single-frame method is applied to multi-frame expression learning, a performance-driven multi-frame expression imitation algorithm is proposed. First, a servo sequence prediction model is built on a radial basis function (RBF) neural network. Second, the forward kinematics model and the servo sequence model are fused in the objective function optimization.
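The RBF mapping at the heart of the servo sequence model can be sketched with a plain Gaussian-kernel interpolation network; the 1-D feature, the sinusoidal stand-in trajectory, and the bandwidth value below are all illustrative assumptions, not the thesis's trained model:

```python
import numpy as np

def rbf_fit(X, y, gamma=50.0):
    """Exact-interpolation Gaussian RBF network: solve Phi @ w = y,
    where Phi[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-gamma * d2)
    # tiny ridge term keeps the solve numerically stable
    return np.linalg.solve(Phi + 1e-9 * np.eye(len(X)), y)

def rbf_predict(X_train, w, X_new, gamma=50.0):
    d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2) @ w

# toy mapping: a 1-D "expression feature" to a servo angle (degrees)
X = np.linspace(0.0, 1.0, 10)[:, None]
y = 30.0 * np.sin(2.0 * np.pi * X[:, 0])   # stand-in servo trajectory
w = rbf_fit(X, y)
pred = rbf_predict(X, w, X)
```

In the actual system the inputs would be per-frame expression features and the outputs multi-channel servo commands; the smooth radial basis functions are what damp the frame-to-frame servo hops the paragraph above describes.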
The fusion of the two models, which preserves the similarity of single-frame expression imitation while keeping continuous servo movements smooth, is the innovation of this work in multi-frame expression imitation.

(4) To further improve the spatio-temporal similarity and consistency of dynamic facial expression imitation, which is naturally a sequential process, an online expression migration method based on temporal features is proposed. First, from facial motion captured by a Kinect camera, low-level expression semantics extracted via the Laplace transform and high-level expression semantics constructed from expression action units together with expression deformation features jointly represent the spatio-temporal structure of dynamic facial expressions. Second, a novel inverse kinematics model based on a temporal recurrent neural network is constructed to map the dynamic expression sequence into a servo control sequence. The proposed inverse kinematics strategy integrates the forward kinematics model with the servo time-sequence model to ensure spatio-temporal consistency of dynamic expression imitation, and, by mapping the expression spatio-temporal semantic space directly into the servo control space, it simplifies the optimization process and improves the efficiency of expression migration. This is the innovation of this work in online expression migration.
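The sequence-to-servo mapping described above can be sketched as a forward pass of a simple Elman-style recurrent network; the random untrained weights and the toy dimensions (4 expression features, 8 hidden units, 3 servos) are purely illustrative assumptions and do not reflect the thesis's trained inverse kinematics model:

```python
import numpy as np

def elman_forward(X_seq, Wx, Wh, Wo, bh, bo):
    """One forward pass of an Elman-style recurrent net mapping a feature
    sequence (T x d_in) to a servo-command sequence (T x d_out). The hidden
    state h carries temporal context from one frame to the next."""
    T = X_seq.shape[0]
    h = np.zeros(Wh.shape[0])
    out = np.empty((T, Wo.shape[0]))
    for t in range(T):
        h = np.tanh(Wx @ X_seq[t] + Wh @ h + bh)   # recurrent state update
        out[t] = Wo @ h + bo                       # per-frame servo commands
    return out

# toy setup: 5 frames of 4 expression features -> 3 servo channels
rng = np.random.default_rng(0)
Wx, Wh = rng.normal(0, 0.3, (8, 4)), rng.normal(0, 0.3, (8, 8))
Wo, bh, bo = rng.normal(0, 0.3, (3, 8)), np.zeros(8), np.zeros(3)
servos = elman_forward(rng.normal(size=(5, 4)), Wx, Wh, Wo, bh, bo)
```

The point of the recurrence is exactly what the paragraph argues: each servo command depends on the whole expression history up to that frame, not on one frame in isolation, which is what gives the migration its spatio-temporal consistency.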
Keywords: Humanoid Robot, Expression Regional Fusion, Head Pose Estimation, Forward Kinematics Model, Single-frame Expression Representation, Multi-frame Expression Imitation, Online Expression Migration