
Three Dimensional Facial Expression Synthesis Based On Scanning Data

Posted on: 2011-11-07
Degree: Doctor
Type: Dissertation
Country: China
Candidate: Y X Lin
Full Text: PDF
GTID: 1118330332478363
Subject: Computer Science and Technology
Abstract/Summary:
By combining the motions of 43 muscles, human beings can produce over 10,000 distinct facial expressions. Facial expression is therefore a primary channel, apart from language, for conveying emotion, and inferring a person's emotion from his or her facial expressions is an important problem in psychology. In the field of computer applications, people have long hoped to communicate with computers the way they do with other humans. Although this lofty goal has not yet been achieved, valuable research toward it has been carried out in computer animation, human-computer interaction (HCI), and computer security. Meanwhile, with the rapid development of three-dimensional scanners and computing hardware, three-dimensional facial expression synthesis and recognition based on scanning data has become a hot research area in computer graphics and computer vision.

In this thesis, we aim to establish a three-dimensional facial expression synthesis system based on scanning data and to study the key algorithms for facial expression data processing and synthesis. The thesis makes four major contributions:

1. Due to the limitations of scanning devices, captured three-dimensional facial expressions often suffer from un-scanned regions, as well as resolutions too low for some applications. To solve this problem, a novel surface reconstruction approach, called the "Dual-RBF surface", is proposed. By analogy with a parallel-plate capacitor, the implicit surface is regarded as the zero equipotential surface of a capacitor: the locations of paired electric charges are initialized from the input points; two greedy nonlinear optimization steps then refine the zero equipotential surface to make it more precise; and the implicit surface is visualized on the GPU, which outputs a high-resolution surface very quickly.

2. In order to synthesize realistic 3D facial expressions in a flexible and robust manner, a facial expression synthesis framework based on sparse coding is proposed, wherein expressions are effectively encoded by sparse coding, and new facial expressions can be produced by specifying coefficients or by providing new examples. Moreover, by partial projection, input expressions with missing regions and/or noise can be handled.

3. A new facial expression synthesis framework, called "joint learning", is proposed to adapt to the nonlinear distribution of human facial expressions. The joint learning framework is based on a nonlinear subspace algorithm; by introducing a projection constraint, it projects facial expressions with the same attribute to an identical low-dimensional representation. Through projection and un-projection operators, any basic facial expression can then be synthesized from a neutral face; moreover, facial expression retargeting and facial expression recovery can also be handled.

4. A prototype system for facial expression synthesis is developed to put the proposed algorithms into practice.
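The Dual-RBF formulation of contribution 1 is not detailed in this abstract, but the standard construction it builds on can be sketched: a surface is represented as the zero level set of an RBF interpolant fitted to on-surface constraints (f = 0) and off-surface constraints offset along the normals (f = ±d). The Gaussian kernel, the offset scheme, and all names below are illustrative assumptions, not the thesis's paired-charge method.

```python
import numpy as np

def fit_rbf_implicit(points, normals, offset=0.05, eps=0.1):
    """Fit f(x) = sum_i w_i * phi(|x - c_i|) with f = 0 on the surface.
    Off-surface constraints along the normals fix the sign of f
    (a standard stand-in for the thesis's paired-charge setup)."""
    centres = np.vstack([points,
                         points + offset * normals,   # outside: f = +offset
                         points - offset * normals])  # inside:  f = -offset
    values = np.concatenate([np.zeros(len(points)),
                             np.full(len(points), offset),
                             np.full(len(points), -offset)])
    # Gaussian kernel matrix over all constraint centres
    r = np.linalg.norm(centres[:, None, :] - centres[None, :, :], axis=-1)
    weights = np.linalg.solve(np.exp(-(r / eps) ** 2), values)
    return centres, weights

def eval_rbf(x, centres, weights, eps=0.1):
    """Evaluate f(x); the reconstructed surface is the set f(x) = 0."""
    r = np.linalg.norm(centres - np.asarray(x), axis=-1)
    return weights @ np.exp(-(r / eps) ** 2)
```

After fitting, f is negative inside the surface, zero on it, and positive outside, so the surface can be extracted by any iso-surfacing method (e.g. marching cubes), which is where a GPU evaluation of f pays off.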
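The partial projection of contribution 2 can likewise be sketched, assuming a dictionary D whose columns are exemplar expression vectors: the code is fitted using only the observed vertex coordinates, and the full expression is then reconstructed from the dictionary. The ISTA solver and the function names here are illustrative stand-ins; the thesis's actual optimizer is not specified in this abstract.

```python
import numpy as np

def sparse_code(D, x, lam=0.01, iters=1000):
    """Solve min_c 0.5*||x - D c||^2 + lam*||c||_1 by ISTA
    (iterative soft-thresholding), a basic sparse-coding solver."""
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
    c = np.zeros(D.shape[1])
    for _ in range(iters):
        g = c - D.T @ (D @ c - x) / L        # gradient step
        c = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return c

def complete_expression(D, x, observed, lam=0.01):
    """Partial projection: code only the observed coordinates (boolean
    mask), then reconstruct the full vector, filling the missing region."""
    c = sparse_code(D[observed], x[observed], lam)
    return D @ c
```

Because the code is sparse, a partially scanned or noisy expression is explained by a few exemplars, and multiplying back by the full dictionary yields plausible geometry in the un-scanned region.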
Keywords/Search Tags:Implicit surface, Dual-RBF, GPU, partial projection, facial expression recovery, facial expression synthesis, retargeting, sparse coding, joint learning