Avatar facial expression in virtual social spaces is one of the key technologies for conveying emotion and enabling effective social interaction in a virtual social system, and it is an active research topic in computer vision and computer graphics. As an important evolution of future social communication, webcam-based facial expression tracking and animation also has practical significance for the film and television, game animation, and network communication industries. To address the lack of feasible solutions for synchronized facial expressions in current commercial virtual social applications, this paper presents a virtual social system based on facial expression tracking and animation technology, focusing on facial expression modeling, expression coding, expression tracking and animation, and expression-voice data synchronization. The main research results are as follows.

(1) The Facial Action Coding System (FACS) is used to decompose complex facial expressions into mutually orthogonal attributes, and a multivariate linear regression algorithm maps the facial attributes onto a bilinear model whose two factors are identity and expression. The neutral expression mesh is then adjusted with depth maps captured by Kinect to generate 51 standardized facial expression models.

(2) Cascaded pose regression is adopted to train a dynamic expression model that infers expression coefficients from 2D video frames; the facial landmarks used in the regression are extracted by the supervised descent method rather than 2D cascaded pose regression, which yields better robustness and fault tolerance in facial tracking and animation.

(3) A multi-scale adaptive expression coding technique is proposed. It uses timestamps and an adaptively resized dynamic circular queue to synchronize expression animation with voice data in real time, and a QoS feedback mechanism to monitor changing, complex network conditions and strike a balance between real-time performance and richness of facial expressions.

The proposed dynamic expression modeling technology eliminates user-specific calibration and pre-processing, captures the user's facial expressions in real time, and reenacts them on a virtual character. Compared with traditional dynamic expression techniques based on cascaded pose regression, the algorithm is more robust and fault tolerant and can be widely deployed in consumer-level applications. Experimental results show that the proposed facial tracking and animation system is practical and feasible: it accurately captures the user's facial expression information, abstracts it into expression coefficients, and reproduces the animation on different virtual characters. The multi-scale adaptive expression coding technique copes well with varied and changing network conditions, producing highly realistic emotional cues in the virtual social system.
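To make the bilinear identity-expression model in (1) concrete, the following is a minimal sketch rather than the system's implementation: the toy dimensions, the randomly initialized core tensor, and the least-squares fit of identity weights to a neutral-expression depth scan are all illustrative assumptions.

```python
import numpy as np

# Toy dimensions: mesh vertices, identity basis size, expression basis size (assumed).
N_VERTS, N_ID, N_EXP = 500, 50, 51

# Stand-in for a learned bilinear core tensor; in the paper this would come from
# multivariate linear regression over a registered 3D face data set.
core = np.random.rand(3 * N_VERTS, N_ID, N_EXP)

def reconstruct(core, w_id, w_exp):
    """Contract the core tensor with identity and expression weights -> flattened mesh."""
    return np.einsum('vie,i,e->v', core, w_id, w_exp)

# Fit identity weights to the user's neutral face captured as a Kinect depth map
# (here a placeholder vector), holding the expression factor at "neutral".
w_exp_neutral = np.zeros(N_EXP)
w_exp_neutral[0] = 1.0                                  # index 0 assumed to be neutral
observed_neutral = np.random.rand(3 * N_VERTS)          # placeholder depth-derived mesh
A = np.einsum('vie,e->vi', core, w_exp_neutral)         # (3*N_VERTS, N_ID) design matrix
w_id, *_ = np.linalg.lstsq(A, observed_neutral, rcond=None)

# With identity fixed, instantiating each expression basis vector yields the
# user-adapted set of 51 standardized expression meshes.
expression_meshes = [reconstruct(core, w_id, np.eye(N_EXP)[k]) for k in range(N_EXP)]
```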
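The cascaded regression in (2) can be sketched as a stack of additive linear updates on the expression coefficients, driven by features computed from landmarks that an SDM-style detector would supply. The feature definition, stage count, and regressor weights below are placeholders for illustration, not the trained model.

```python
import numpy as np

class CascadedExpressionRegressor:
    """Sketch of a cascade: each stage applies a linear update delta = R @ features + b."""

    def __init__(self, stages):
        self.stages = stages                      # list of (R, b) pairs, assumed pre-trained

    def predict(self, landmarks, coeffs_init):
        coeffs = coeffs_init.copy()
        for R, b in self.stages:
            feats = self._features(landmarks, coeffs)
            coeffs = coeffs + R @ feats + b       # additive, SDM-style update
            coeffs = np.clip(coeffs, 0.0, 1.0)    # keep expression weights in [0, 1]
        return coeffs

    @staticmethod
    def _features(landmarks, coeffs):
        # Placeholder "shape-indexed" features: centered landmark coordinates
        # concatenated with the current coefficient estimate.
        centered = landmarks - landmarks.mean(axis=0)
        return np.concatenate([centered.ravel(), coeffs])

# Example with random stand-in regressors (68 2D landmarks, 51 expression coefficients).
rng = np.random.default_rng(0)
n_lmk, n_coef = 68, 51
feat_dim = 2 * n_lmk + n_coef
stages = [(0.01 * rng.standard_normal((n_coef, feat_dim)), np.zeros(n_coef)) for _ in range(4)]
coeffs = CascadedExpressionRegressor(stages).predict(rng.standard_normal((n_lmk, 2)),
                                                     np.zeros(n_coef))
```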
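For the expression-voice synchronization in (3), one plausible realization is a timestamped circular queue whose capacity is resized by QoS feedback; the jitter thresholds and resizing policy below are illustrative assumptions rather than the paper's parameters.

```python
import collections

class ExpressionSyncQueue:
    """Sketch: buffer timestamped expression frames and release them against the audio clock."""

    def __init__(self, capacity=30):
        self.capacity = capacity
        self.frames = collections.deque(maxlen=capacity)   # (timestamp_ms, expression coeffs)

    def push(self, timestamp_ms, coeffs):
        self.frames.append((timestamp_ms, coeffs))

    def pop_for_audio(self, audio_ts_ms):
        """Return the newest expression frame not later than the current audio timestamp."""
        frame = None
        while self.frames and self.frames[0][0] <= audio_ts_ms:
            frame = self.frames.popleft()
        return frame

    def adapt(self, jitter_ms):
        """QoS feedback (assumed heuristic): grow the buffer under high jitter for smoother
        playback, shrink it when the network is stable for lower latency."""
        if jitter_ms > 80 and self.capacity < 120:
            self.capacity *= 2
        elif jitter_ms < 20 and self.capacity > 15:
            self.capacity //= 2
        self.frames = collections.deque(self.frames, maxlen=self.capacity)
```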