
Automatic Generation Of Facial Expressions In Mobile 3D Animation System

Posted on: 2020-10-07
Degree: Master
Type: Thesis
Country: China
Candidate: M M Zhao
Full Text: PDF
GTID: 2428330623956689
Subject: Computer Science and Technology
Abstract/Summary:
Automatic animation generation is a full-process, computer-aided approach to animation production proposed by Academician Lu Ruqian of the Institute of Mathematics of the Chinese Academy of Sciences, in which animation is generated from an understanding of a story. In 2008, Zhang Songmao, a researcher at the Chinese Academy of Sciences, built on this technology and proposed a mobile 3D animation automatic generation system, which converts the natural-language text of an SMS into an animation for transmission. The system now runs stably and has largely achieved its goals: it can construct a variety of plots and add the appropriate models, lighting, and special effects according to the content of a message. However, it does not yet handle facial expressions, even though expression is an essential element of animation. This thesis therefore explores the realization of facial expression animation within the mobile animation automatic generation system. The work comprises two parts.

First, an expression ontology library is designed and implemented. It contains an expression class and an action unit class. The expression class covers both emotional and non-emotional types: the former follows the wheel of emotions proposed by Plutchik, dividing emotions into eight categories, each containing types of different intensity, while the latter covers common facial activities. The action unit class takes the eyebrows, eyes, and mouth as the expression components and classifies the motion patterns of each part according to the action units (AUs) of Paul Ekman's Facial Action Coding System. The relationship between the expression class and the action unit class is established by analyzing the AU types of each part for every expression. In addition, the expression-related factors in the system (theme, template, and action type) are modeled as object properties, and axioms describe the relationships between the expression class and these related classes. To our knowledge, this is the first ontology library for animated character expressions; it contains 60 classes, 5 object properties, 91 instances, and 96 axioms, and supports expression planning for 26 themes and 69 templates.

Second, qualitative planning based on the expression ontology library and quantitative calculation of the expression animation are realized. In qualitative planning, the number and type of expressions and the combination of action units are determined from the content of the message, primarily in a theme-driven sequence assisted by action type and template. The quantitative implementation of the expression animation relies on fusion deformation (blend shapes) and keyframe animation, and comprises three parts: making the model's basic expressions, building facial controllers, and computing controller parameters and setting keyframes using fusion deformation. Parameter computation and keyframe setting are handled separately for hybrid controllers, different-intensity controllers, and composite-function controllers, so that expressions transition automatically and smoothly.

Experiments on open Chinese natural-language short messages evaluate the method from three aspects: feasibility, diversity, and watchability. Over 800 messages, the expression planning success rate for characters is 95.02%, indicating that the method is feasible. Counting the planning results for different themes and templates, together with 30 repeated runs on the same message, shows that the generated expression animation is rich in variety, with a diversity planning rate of about 70%. Finally, a questionnaire survey on animations generated from 20 messages found that 72.55% of index scores reached the upper-middle level, indicating that the animation meets the viewing needs of the audience.

This thesis is the first to propose and realize automatic facial expression generation within the mobile 3D animation automatic generation system, enabling characters' expressions to match their actions, convey the content of the message, and enhance the vividness of the animation. Future work includes expanding the expression ontology library and attempting to incorporate labeled behavior-driven methods into the existing quantitative implementation pipeline to obtain more realistic facial expression animation.
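The planning-then-keyframing pipeline described above can be sketched as follows. This is an illustrative sketch, not the thesis's actual implementation: the expression-to-AU mapping, controller names, and weight values are all hypothetical, chosen only to show how an expression could be decomposed into AU controllers whose blend-shape weights are keyframed and smoothly interpolated.

```python
# Hypothetical mapping from an emotion class to its AU combination,
# in the spirit of Ekman's Facial Action Coding System (FACS).
# Weights are illustrative peak intensities for each controller.
EXPRESSION_TO_AUS = {
    "joy":     {"AU6_cheek_raiser": 0.8, "AU12_lip_corner_puller": 1.0},
    "sadness": {"AU1_inner_brow_raiser": 0.7, "AU15_lip_corner_depressor": 0.9},
}

def plan_keyframes(expression, start, peak, end, intensity=1.0):
    """Return per-controller keyframes (time, weight) for one expression:
    ramp up from neutral, reach the peak, and return to neutral."""
    aus = EXPRESSION_TO_AUS[expression]
    return {
        au: [(start, 0.0), (peak, w * intensity), (end, 0.0)]
        for au, w in aus.items()
    }

def weight_at(keyframes, t):
    """Linearly interpolate a controller weight between keyframes,
    giving the smooth expression transition the abstract describes."""
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    for (t0, w0), (t1, w1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            alpha = (t - t0) / (t1 - t0)
            return w0 + alpha * (w1 - w0)
    return keyframes[-1][1]

# Plan a half-intensity "joy" expression over one second.
frames = plan_keyframes("joy", start=0.0, peak=0.5, end=1.0, intensity=0.5)
# Midway through the ramp-up, AU12 is at half of its (scaled) peak weight.
print(weight_at(frames["AU12_lip_corner_puller"], 0.25))
```

In a production rig, each AU controller would drive a blend-shape target on the face mesh, and the sampled weights would be written as keyframes in the animation system; the thesis additionally distinguishes hybrid, different-intensity, and composite-function controller types, which this sketch collapses into a single linear controller for brevity.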
Keywords/Search Tags:automatic animation generation, facial expression animation, expression ontology, fusion deformation, keyframe animation