
Research On Facial Expression Animation Technology Based On Dirichlet Free Form Deformation Algorithm

Posted on: 2020-01-22    Degree: Master    Type: Thesis
Country: China    Candidate: H Ni    Full Text: PDF
GTID: 2428330620962253    Subject: Electronic Science and Technology

Abstract/Summary:
With the accelerating pace of machine intelligence, more and more service robots are entering thousands of households. People are no longer satisfied with the simple question-and-answer virtual robots of the past; multi-functional virtual robots that meet high expectations have become a research hotspot. When people communicate with a virtual robot, they hope to receive feedback from it, especially emotional feedback, and the most important carrier of emotional information is the facial expression. Therefore, giving a virtual robot a realistic appearance and enabling it to perform all kinds of facial movements as freely as a human being is an urgent problem to be solved. Specifically: first, how to accurately model and simulate the movements of the virtual human's lips, eyes, eyebrows, teeth, tongue and other parts; second, how to guarantee coordinated, mutually associated movement among these parts so that no motion disorder occurs; and third, how to ensure that the voice, the facial motion trajectory and the content to be expressed remain consistent while the three-dimensional virtual speaker is talking.

To address these problems, this thesis designs and implements a facial expression animation synthesis system based on the Dirichlet free-form deformation algorithm. The system consists of two parts. The first part uses facial motion capture to collect the facial motion data of real performers; it covers the collection, analysis and processing of facial motion data, so that the processed data can serve as the driving data of the facial expression animation synthesis system. The second part applies the DFFD (Dirichlet free-form deformation) algorithm to the deformation of a 3D virtual face: based on the DFFD algorithm, and taking advantage of the portability of the C++ language and the cross-platform nature of the OpenGL graphics library, a three-dimensional virtual facial expression animation synthesis system is realized. In addition, voice-driven lip animation is realized by using the output of an LSTM-RNN model, trained on a constructed 3D speaker audio-visual database, as the input of the facial expression animation synthesis system.
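For reference, in the standard natural-neighbor formulation of DFFD (stated here as background, not quoted from the thesis), a mesh point p is displaced by a blend of the control-point displacements weighted by its Sibson coordinates:

    p' = p + \sum_{i=1}^{n} u_i(p)\,\Delta c_i, \qquad \sum_{i=1}^{n} u_i(p) = 1,\; u_i(p) \ge 0,

where \Delta c_i is the displacement of the i-th control point and u_i(p) is the Sibson (natural-neighbor) coordinate of p with respect to that control point. Only control points that are natural neighbors of p contribute, which keeps the deformation local.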
The main research work of this thesis is as follows:
(1) The DFFD algorithm is designed and implemented in C++ and encapsulated in a class that provides a friendly calling interface (a minimal interface sketch is given at the end of this abstract). An improved weighted DFFD algorithm is proposed to control the deformation strength, which improves the deformation effect.
(2) The DFFD algorithm and the OpenGL graphics library are used to simulate the movements of the eyes, eyebrows, nose, lips, teeth and tongue. Taking the correlation between the movements of these regions into account, each part is driven synchronously by the motion data of real performers, so that all facial regions move in a coordinated, synchronized way and the animation appears more realistic.
(3) An LSTM-RNN model is used to learn the mapping between the input voice and the output lip motion trajectory, realizing speech-driven lip animation and adding voice as a further source of driving data for the system (the generic LSTM cell equations are given at the end of this abstract).
(4) An application experiment for the system was designed. Using the system's synthesized facial expression videos and the corresponding real human facial expression videos as stimulus material, the researcher can observe differences in the viewing patterns elicited by the two kinds of video, so that the system can be used in later experimental research.

After experimental comparison and subjective manual evaluation, the results show that the facial expression synthesis system in this thesis can realistically simulate the expression movements of real people when speaking, and that the system can well guarantee the consistency among the three-dimensional virtual speaker's visual appearance, voice and the content to be expressed during speech.
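As a purely illustrative aid for item (1), the sketch below shows how a weighted DFFD deformer class might expose its calling interface. It is not the thesis's code: the class name, method signatures and the weighting scheme (scaling each Sibson coordinate by a per-control-point strength) are assumptions, and the Sibson coordinates themselves are assumed to be precomputed per mesh vertex.

// Illustrative sketch only; names, signatures and the weighting scheme are assumptions.
#include <cstddef>
#include <utility>
#include <vector>

struct Vec3 { float x = 0.0f, y = 0.0f, z = 0.0f; };

class DffdDeformer {
public:
    // One entry per influencing control point: (control-point index, Sibson coordinate).
    using SibsonCoords = std::vector<std::pair<std::size_t, float>>;

    DffdDeformer(std::vector<Vec3> restControls, std::vector<float> strengths)
        : rest_(std::move(restControls)), strength_(std::move(strengths)) {}

    // Displace one mesh vertex p given the current control-point positions.
    // "Weighted DFFD" is modelled here by scaling each Sibson coordinate with
    // a per-control-point strength factor (an assumed form of the weighting).
    Vec3 deform(const Vec3& p, const SibsonCoords& coords,
                const std::vector<Vec3>& movedControls) const {
        Vec3 out = p;
        for (const auto& entry : coords) {
            const std::size_t i = entry.first;
            const float w = entry.second * strength_[i];  // weighted Sibson coordinate
            out.x += w * (movedControls[i].x - rest_[i].x);
            out.y += w * (movedControls[i].y - rest_[i].y);
            out.z += w * (movedControls[i].z - rest_[i].z);
        }
        return out;
    }

private:
    std::vector<Vec3> rest_;       // control points in the rest pose
    std::vector<float> strength_;  // per-control-point deformation strength
};

Driving the face then amounts to updating movedControls from each motion-capture (or speech-predicted) frame and calling deform for every mesh vertex.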
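The speech-to-lip mapping in item (3) relies on the standard LSTM cell. For reference, its generic update equations are given below; the thesis's exact network architecture, acoustic features and output parameterization are not specified in this abstract.

    i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i)
    f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f)
    o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o)
    \tilde{c}_t = \tanh(W_c x_t + U_c h_{t-1} + b_c)
    c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t
    h_t = o_t \odot \tanh(c_t)

Here x_t is the acoustic feature vector at frame t and h_t is the hidden state from which the lip motion parameters for that frame are regressed (for example through a linear output layer, an assumed detail).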
Keywords/Search Tags: Dirichlet free-form deformations, Weighted DFFD, Facial motion capture, Expression animation, LSTM-RNN