
Research on Real-Time Expression-Driven 3D Face Model Control

Posted on: 2019-04-18    Degree: Master    Type: Thesis
Country: China    Candidate: M Y Huang    Full Text: PDF
GTID: 2428330548495944    Subject: Control engineering
Abstract/Summary:
At present, three-dimensional facial expression simulation is an unavoidable research topic in virtual reality and computer vision. In particular, with the rapid growth of entertainment industries such as 3D animated films and games, the demand for 3D expression models keeps increasing. Humans are highly intelligent animals with complex emotions, and facial expressions are the most significant carrier of those emotions: an expression conveys attitude, psychological state, and feeling to the outside world, and the more complex and natural the expression, the more information it can convey. Driving a 3D model with a performer's facial expressions has therefore become a trend in 3D expression animation modeling. Compared with traditional modeling methods, this approach offers clear advantages in efficiency, detail, and naturalness of expression.

Most current mainstream user-expression-driven methods rely on image acquisition devices that capture depth information, such as binocular (stereo) cameras or infrared ranging cameras; these are considerably more expensive than ordinary monocular cameras and far less widespread. This thesis designs an expression-animation driving system built specifically for a monocular camera. The system captures the user's expression changes in real time through a monocular camera and reproduces the user's facial expressions and head pose on a pre-constructed 3D model. Using only two-dimensional image information, without any face depth information, it preserves both the real-time performance and the realism of the expression simulation.

The main contributions of this study are as follows. First, face-feature preprocessing, divided into face detection and facial feature extraction; this stage supplies the feature points required by the subsequent expression-parameter and pose-parameter estimation algorithms. Second, an expression mapping model across different pose spaces is designed, and the concept of an expression threshold is proposed. The pose-space mapping model uniformly maps the user's key expression features under different head poses onto the frontal pose, which simplifies the estimation of expression parameters; on this basis, algorithms for estimating the user's expression parameters and pose parameters are designed. Finally, a face-model driving system based on the user's expression is built, consisting mainly of a template entry module, an expression decoupling module, and an expression synthesis module.
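The pose-space mapping described above, which brings key expression features under an arbitrary head pose back to the frontal pose, can be sketched with a least-squares similarity transform (Procrustes alignment) between corresponding 2D landmarks. This is only an illustrative stand-in: the abstract does not specify the thesis's actual mapping model, and the function name `map_to_frontal` and the similarity-transform assumption are both hypothetical.

```python
import numpy as np

def map_to_frontal(landmarks, frontal_template):
    """Map 2D facial landmarks observed under an arbitrary head pose
    onto a frontal-pose template via a least-squares similarity
    transform (Procrustes alignment, Umeyama-style).

    landmarks, frontal_template: (N, 2) arrays of corresponding points.
    Returns the pose-normalized landmarks as an (N, 2) array.
    """
    src = np.asarray(landmarks, dtype=float)
    dst = np.asarray(frontal_template, dtype=float)

    # Center both point sets on their centroids.
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - src_mean, dst - dst_mean

    # Optimal rotation from the SVD of the cross-covariance matrix.
    U, S, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:            # guard against reflections
        Vt[-1] *= -1
        S = S.copy()
        S[-1] *= -1
        R = (U @ Vt).T

    # Optimal isotropic scale for the centered source points.
    scale = S.sum() / (src_c ** 2).sum()

    # Apply scale and rotation, then translate onto the template.
    return scale * src_c @ R.T + dst_mean
```

For landmarks that genuinely differ from the template only by rotation, scale, and translation, this mapping recovers the frontal configuration exactly; residual differences after alignment are then attributable to expression rather than pose, which is what makes the subsequent expression-parameter estimation tractable.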
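The expression decoupling and synthesis modules can be illustrated with a standard linear blendshape model: a synthesized face is the neutral mesh plus a weighted sum of per-expression displacement fields, and decoupling recovers those weights from an observed mesh by least-squares projection. This is an assumed formulation, since the abstract does not state which expression model the thesis uses; all names below are hypothetical.

```python
import numpy as np

def synthesize_expression(neutral, blendshapes, weights):
    """Linear blendshape synthesis.

    neutral:     (V, 3) neutral-face vertex positions.
    blendshapes: (K, V, 3) displacement field of each expression basis.
    weights:     (K,) expression parameters, clipped to [0, 1].
    Returns the (V, 3) synthesized mesh.
    """
    w = np.clip(np.asarray(weights, dtype=float), 0.0, 1.0)
    # Weighted sum of displacement fields added to the neutral mesh.
    return neutral + np.tensordot(w, blendshapes, axes=1)

def estimate_weights(observed, neutral, blendshapes):
    """Recover expression parameters from an observed mesh by
    least-squares projection onto the blendshape basis (a simple
    stand-in for the thesis's expression-parameter estimation)."""
    B = blendshapes.reshape(len(blendshapes), -1).T   # (3V, K) basis matrix
    d = (observed - neutral).ravel()                  # observed displacement
    w, *_ = np.linalg.lstsq(B, d, rcond=None)
    return np.clip(w, 0.0, 1.0)
```

In this reading, the template entry module would capture `neutral` and the `blendshapes` basis once per user, the decoupling module would run `estimate_weights` on each frame's pose-normalized features, and the synthesis module would replay the weights on the 3D model via `synthesize_expression`.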
Keywords/Search Tags:3D expression simulation, key features of expression, face pose space mapping model, expression decoupling and synthesis, face pose