This dissertation presents an integrated framework for face modeling, facial motion analysis, and facial motion synthesis. The framework systematically addresses three closely related research issues: (1) selecting a quantitative visual representation of facial deformation for face modeling and animation; (2) performing automatic facial motion analysis based on the same visual representation; and (3) modeling speech-to-facial coarticulation. The framework provides a guideline for methodically building a face modeling and animation system; its systematic nature is reflected in the links among its components, whose details are presented. Based on this framework, a face modeling and animation system, called the iFACE system, has been developed. The system provides functionalities for customizing a generic face model for an individual, text-driven face animation, off-line speech-driven face animation, and real-time speech-driven face animation.