
Research On Three-Dimensional Virtual Human Facial Expression Synthesis Techniques

Posted on: 2014-03-12
Degree: Master
Type: Thesis
Country: China
Candidate: Y. Wang
GTID: 2268330428978949
Subject: Computer application technology

Abstract/Summary:
Computer virtual reality is one of today's hot research topics. Avatars play a central role in virtual scenes, and virtual human technology is widely used in film and television production, game production, multimedia, e-commerce, video conferencing, video telephony, and other fields. A virtual human with facial emotional expressions enhances the realism of the virtual scene and the sense of immersion in human-computer interaction, so synthesizing three-dimensional virtual human facial expressions that are both realistic and easy to control has great research value. Virtual human expression synthesis comprises two parts: building the virtual face model and synthesizing the expressions.

The current mainstream approach uses a scanner to create the 3D face model and an actor's performance to drive expression synthesis. This yields a realistic expression model, but scanners are expensive, and the captured face model contains too much data for the expressions to be controlled easily, requiring substantial manual work. To address these problems, this project builds the three-dimensional face model without any scanning equipment, and adopts an expression synthesis method based on the MPEG-4 standard that is simple, uses a small amount of data, and is easy to control, in order to achieve virtual human facial animation.

First, we establish an appropriate 3D face model according to the expression synthesis scheme of the MPEG-4 standard. The 3D face models used in this project fall into two categories: the parent model for facial expression synthesis, used to establish the facial animation data, and the sub-models for facial expression synthesis, used to validate the feasibility and generality of the expression synthesis scheme.
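In the MPEG-4 scheme referenced above, animation is driven by facial animation parameters (FAPs): each FAP displaces a facial feature point along a fixed direction, scaled by a facial animation parameter unit (FAPU), while nearby vertices follow according to per-vertex influence weights. A minimal sketch of this idea, assuming a simplified weight-table representation (the function and parameter names are illustrative, not from the thesis):

```python
import numpy as np

def apply_fap(vertices, fap_value, fapu, direction, weights):
    """Displace mesh vertices under a single MPEG-4-style FAP.

    vertices  : (N, 3) array of mesh vertex positions
    fap_value : dimensionless FAP amplitude
    fapu      : facial animation parameter unit (metres per FAP unit)
    direction : unit vector along which the FAP acts
    weights   : {vertex_index: influence_weight} for affected vertices
    """
    displaced = vertices.copy()
    displacement = fap_value * fapu * np.asarray(direction, dtype=float)
    for idx, w in weights.items():
        # each affected vertex follows the feature point, scaled by its weight
        displaced[idx] += w * displacement
    return displaced
```

In a full system one such weight table per FAP forms the facial animation definition table (FAT), and the six basic expressions are preset combinations of FAP values.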
The project selects the Candide-3 model as the parent model, whose simple structure makes it easy to match against the MPEG-4 facial expression parameters, and builds one expression synthesis sub-model by applying Loop subdivision to the Candide-3 model. It also simplifies a detailed virtual face mesh exported from the Poser software to build a further sub-model, and for this purpose proposes a vertex-weighted edge-collapse mesh simplification algorithm.

Based on the MPEG-4 facial animation principles, the project implements a 3D virtual facial expression synthesis system. First, the system establishes the correspondence between the MPEG-4 facial expression parameters and the Candide-3 model. Next, in building the facial animation definition table, the thesis proposes an improved algorithm that ignores the influence of zero boundary values when calculating the coordinates of the FAP grid points of the three-dimensional face. Finally, the system synthesizes the six basic facial expressions, realizes expression changes driven by facial animation parameters, and uses the two different expression synthesis sub-models to verify the feasibility and generality of the optimized expression synthesis method.
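The vertex-weighted edge-collapse idea can be sketched as a single simplification step: each candidate edge gets a cost combining its length with the weights of its endpoints, so that edges between low-importance vertices collapse first. The cost function below (length times summed weights) and the midpoint merge are illustrative assumptions, not the thesis's exact formulation:

```python
import numpy as np

def collapse_cheapest_edge(vertices, edges, vertex_weight):
    """Perform one weighted edge-collapse step (illustrative sketch).

    vertices      : (N, 3) array of vertex positions
    edges         : list of (u, v) index pairs, u < v
    vertex_weight : per-vertex importance weights (higher = keep)
    Returns (new_vertices, new_edges, collapsed_pair).
    """
    def cost(edge):
        u, v = edge
        length = np.linalg.norm(vertices[u] - vertices[v])
        return length * (vertex_weight[u] + vertex_weight[v])

    u, v = min(edges, key=cost)           # cheapest edge to collapse
    new_vertices = vertices.copy()
    new_vertices[u] = 0.5 * (vertices[u] + vertices[v])  # merge v into u

    new_edges = set()
    for a, b in edges:                    # re-point edges from v to u
        a = u if a == v else a
        b = u if b == v else b
        if a != b:                        # drop the degenerate collapsed edge
            new_edges.add((min(a, b), max(a, b)))
    return new_vertices, sorted(new_edges), (u, v)
```

Repeating this step until a target vertex count is reached yields the simplified sub-model; in the thesis's setting the weights would be chosen so that MPEG-4 feature points survive simplification.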
Keywords/Search Tags: Expression synthesis, MPEG-4 facial animation standard, Facial expression parameter, Edge collapse simplification, Dual constraint mechanism