
Facial modeling on high resolution geometry and appearance data

Posted on: 2007-01-07 | Degree: Ph.D | Type: Thesis
University: State University of New York at Stony Brook | Candidate: Wang, Yang | Full Text: PDF
GTID: 2458390005482606 | Subject: Computer Science
Abstract/Summary:
The advent of new technologies that allow the capture of massive amounts of high resolution, high frame rate face data leads us to propose data-driven face models that describe the detailed appearance of static faces and track the subtle geometry changes that occur during expressions. However, since the dense samples in these 3D scans are not registered in object space, inter-frame correspondences cannot be established, which makes tracking facial features, estimating facial expression dynamics, and other analyses difficult.

To use such data for the temporal study of subtle expression dynamics, an efficient non-rigid 3D motion tracking algorithm is needed to establish inter-frame correspondences. In this dissertation, we present two new frameworks for high resolution, non-rigid, dense 3D point tracking. The first is a hierarchical scheme using a deformable generic face model: a generic face mesh is first deformed to fit the data at a coarse level; then, to capture highly local deformations, we use a variational algorithm for non-rigid shape registration that integrates an implicit shape representation with Free-Form Deformations (FFD). The second framework is a fully automatic tracking method using harmonic maps with interior feature correspondence constraints, which provides highly accurate facial expression tracking even in the presence of topology changes and large head motion. The novelty of this work is an algorithmic framework for 3D tracking that unifies the tracking of intensity and geometric features via harmonic maps with added feature correspondence constraints. Due to the strong implicit and explicit smoothness constraints imposed by both algorithms and the high resolution data, the resulting registration/deformation field is smooth and continuous.
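The FFD step in the first framework warps space through a lattice of control points whose displacements move any surface point embedded in the lattice. As a minimal illustration (not the dissertation's actual variational solver, which optimizes the control offsets against an implicit shape term), the following sketch applies a trilinear FFD to a point set; the function name and lattice parameterization are assumptions for this example.

```python
import numpy as np

def ffd_deform(points, lattice_origin, lattice_size, control_offsets):
    """Deform points with a trilinear free-form deformation (FFD).

    points          : (N, 3) surface points.
    lattice_origin  : (3,) minimum corner of the control lattice.
    lattice_size    : (3,) extent of the lattice along each axis.
    control_offsets : (nx, ny, nz, 3) displacement of each control point.
    """
    nx, ny, nz, _ = control_offsets.shape
    # Map points into lattice cell coordinates in [0, n-1].
    local = (points - lattice_origin) / lattice_size
    grid = local * (np.array([nx, ny, nz]) - 1)
    i0 = np.clip(np.floor(grid).astype(int), 0, [nx - 2, ny - 2, nz - 2])
    t = grid - i0  # fractional position inside each cell
    out = points.astype(float).copy()
    # Accumulate trilinearly weighted offsets from the cell's 8 corners.
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = (np.where(dx, t[:, 0], 1 - t[:, 0]) *
                     np.where(dy, t[:, 1], 1 - t[:, 1]) *
                     np.where(dz, t[:, 2], 1 - t[:, 2]))
                out += w[:, None] * control_offsets[
                    i0[:, 0] + dx, i0[:, 1] + dy, i0[:, 2] + dz]
    return out
```

Because the corner weights sum to one in each cell, a uniform set of control offsets translates every point rigidly, while differing offsets produce the smooth local deformations the registration exploits. Production FFD typically uses cubic B-spline weights rather than trilinear ones for higher-order smoothness.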
Both of our methods are validated through a series of experiments demonstrating their accuracy and efficiency.

Furthermore, to exploit the information provided by high resolution face data, we also propose face feature analysis based on high resolution face images (e.g., 1K×1K), in particular using the cues provided by fine facial detail for matching and identification. The main goal of our data analysis is to find features that are consistent across different scans of a single person yet vary among different persons. By performing an effective spatial analysis using several filters, we develop a new face feature analysis method based on partial face data. To evaluate the proposed method, we captured a small database of high resolution facial images and compared experimental results across different matching window sizes and resolutions.

Finally, synthesis and retargeting of facial expressions is central to facial animation and often involves significant manual work to achieve realistic expressions, owing to the difficulty of capturing high quality dynamic expression data. The availability of such data opens a number of research directions in face modeling. In this dissertation, we use the motion data generated by our tracking frameworks to synthesize new expressions as expression transfer from a source face to a target face.
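Once the tracking frameworks put source and target faces into dense per-vertex correspondence, the simplest form of expression transfer copies the source's neutral-to-expression displacement field onto the target. The sketch below shows only that baseline idea under the assumption of shared mesh topology; the function name is illustrative, and the dissertation's synthesis operates on richer tracked motion data than this.

```python
import numpy as np

def transfer_expression(src_neutral, src_expr, tgt_neutral):
    """Baseline expression transfer between faces in dense correspondence.

    src_neutral, src_expr, tgt_neutral : (V, 3) vertex arrays over the
    same topology. The source's per-vertex displacement is applied
    directly to the target's neutral geometry.
    """
    displacement = src_expr - src_neutral   # tracked motion of the source
    return tgt_neutral + displacement        # target wearing the expression
```

In practice the displacement field must be adapted to the target's shape and scale (e.g., by local frames or deformation gradients) rather than copied verbatim, but the correspondence established by the tracking stage is what makes any such transfer well defined.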
Keywords/Search Tags: High resolution, Data, Face, Facial, Tracking, New, Expression