
Real-Time Performance-Driven Facial Animation

Posted on: 2015-02-24
Degree: Master
Type: Thesis
Country: China
Candidate: C Sun
Full Text: PDF
GTID: 2268330425481450
Subject: Information and Communication Engineering
Abstract/Summary:
In recent years, performance-driven facial animation has been an active research area. It refers to the problem of mapping an actor's facial expressions onto a digital avatar so that the avatar's expressions are realistic and consistent with the input performance. It is widely used in video games, teleconferencing, virtual reality, human-computer interaction, medical cosmetology, and e-commerce. Building on a study of the key algorithms, this thesis designs and implements a real-time performance-driven facial animation system. The main contributions and innovations are:

1. A real-time performance-driven facial animation system. To the best of our knowledge, this is the first work to introduce a skeleton-based skinning model into a performance-driven facial animation system. The model is biomimetic; compared with other models, it is realistic and easy to drive. The system runs in real time, is easy to use (no training process is required before use), and is non-intrusive.

2. Face detection and tracking of the actor. Face detection is implemented with the Viola-Jones algorithm, and face tracking with the Mean Shift algorithm. To improve detection speed and robustness against complicated backgrounds, a pre-segmentation step based on the depth image is introduced.

3. Extraction of facial animation parameters. We propose an algorithm that takes full advantage of both the color and depth images. First, an Active Appearance Model (AAM) tracks the 2D facial key points in the color images. Then, constrained by these 2D key points, ICP computes the facial animation parameters of the Candide-3 model from the depth images. Combining color and depth images improves the accuracy and stability of the generated parameters.

4. Development of the prototype system. Based on the above research, we developed a real-time performance-driven facial animation system.
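The depth pre-segmentation and Mean Shift tracking described above can be sketched roughly as follows. This is a minimal NumPy illustration, not the thesis's implementation: the `near`/`far` depth thresholds and the use of a generic probability map (e.g. a skin-color back-projection) are assumptions for the example.

```python
import numpy as np

def depth_presegment(depth_mm, near=400, far=1500):
    """Mask keeping only pixels whose depth (in mm) lies in the actor's
    working range; background clutter outside this range is discarded.
    The near/far values here are illustrative, not the thesis's settings."""
    return (depth_mm >= near) & (depth_mm <= far)

def mean_shift(prob, window, max_iter=20):
    """Shift an (x, y, w, h) window toward the centroid of a probability
    map until it stops moving -- the core iteration of Mean Shift tracking."""
    x, y, w, h = window
    ys, xs = np.mgrid[0:h, 0:w]
    for _ in range(max_iter):
        patch = prob[y:y + h, x:x + w]
        total = patch.sum()
        if total == 0:
            break
        # Move the window center onto the centroid of the mass inside it.
        dx = int(round((xs * patch).sum() / total - (w - 1) / 2))
        dy = int(round((ys * patch).sum() / total - (h - 1) / 2))
        x = max(0, min(x + dx, prob.shape[1] - w))
        y = max(0, min(y + dy, prob.shape[0] - h))
        if dx == 0 and dy == 0:
            break
    return x, y, w, h
```

In use, the window would be re-seeded by Viola-Jones detection whenever tracking is lost, and `prob` would be computed only on depth-segmented pixels.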
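The ICP fitting in contribution 3 repeatedly alternates correspondence search with a rigid least-squares alignment. The inner alignment step, with correspondences assumed known, can be sketched with the standard SVD (Kabsch) solution; this is a generic illustration, not the thesis's exact formulation, which also carries the 2D key-point constraint.

```python
import numpy as np

def rigid_align(P, Q):
    """Best rigid transform (R, t) mapping point set P (n x 3) onto Q
    (n x 3) with known correspondences, minimizing sum ||R p + t - q||^2.
    This is the closed-form step inside each ICP iteration."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)            # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t
```

A full ICP loop would re-match each Candide-3 vertex to its nearest depth point and re-run this step until the alignment error converges.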
We used 3ds Max to render the skeleton-based skinning face model and a Kinect to capture the actor's RGB-D video. The algorithms above extract the facial animation parameters, which are then transmitted to 3ds Max over the Musical Instrument Digital Interface (MIDI) communication protocol to drive the model. Experimental results demonstrate that the digital avatar can mimic the performer's general facial expressions, and that the facial animation runs at 30 frames per second, matching the video capture rate.
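Sending animation parameters over MIDI means quantizing each parameter into a 3-byte Control Change message. The sketch below follows the standard MIDI 1.0 CC format; the mapping of one animation parameter per controller number and the [-1, 1] parameter range are assumptions for illustration, since the thesis does not spell them out.

```python
def encode_cc(channel, controller, value):
    """Pack a standard MIDI Control Change message: status byte 0xB0 ORed
    with the channel (0-15), then 7-bit controller number and 7-bit value."""
    assert 0 <= channel < 16 and 0 <= controller < 128 and 0 <= value < 128
    return bytes([0xB0 | channel, controller, value])

def param_to_cc_value(p):
    """Quantize an animation parameter, assumed to lie in [-1, 1],
    into MIDI's 7-bit 0-127 range."""
    p = max(-1.0, min(1.0, p))
    return int(round((p + 1.0) / 2.0 * 127))
```

Each frame, the system would emit one such message per Candide-3 animation parameter; on the 3ds Max side, a controller listens on the matching MIDI channel and applies the value to the skeleton.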
Keywords/Search Tags: performance-driven, facial animation, Candide-3, skeleton-based skinning face model, depth map