
Character Motion Synthesis And Style Transfer Based On Deep Learning And Spatio-Temporal Constraint

Posted on: 2020-08-03    Degree: Master    Type: Thesis
Country: China    Candidate: D Hu    Full Text: PDF
GTID: 2428330590963147    Subject: Engineering
Abstract/Summary:
Character motion is an important component of animation, film, and game applications. However, human motion data are cumbersome to produce and have a low reuse rate, which makes stylized human motion costly to create in practice. We use motion capture data for human motion synthesis and style transfer, and also attempt to generate stylized human motion directly in video. The main research contents are as follows:

(1) Given the abstraction and complexity of human motion capture data, we establish a motion style transfer model that combines a restricted Boltzmann machine (RBM) with an auto-encoder, mapping raw motion capture data into a motion feature space for style transfer and synthesis. The encoding network maps high-dimensional motion capture data into a low-dimensional feature space, style transfer constraints are imposed in that feature space, and the transferred human motion is obtained by decoding.

(2) The above model can suffer from unnatural postures and poor adaptability to continuous motion. We address these problems with an efficient motion style transfer approach based on a deep auto-encoder and spatio-temporal feature constraints. First, according to the user's style transfer requirements, we decompose human motion into behavior motion and style motion. Second, we construct the transfer model by embedding historical motion frames within the deep auto-encoder. Finally, a Gram matrix is used to define the style transfer constraint and achieve style transfer.

(3) Because the abstractness of human motion style makes it difficult to define the transfer result precisely, we propose a style transfer model that combines a Markov random field (MRF) with a cycle constraint. First, motion capture data are mapped into the feature space by a convolutional encoding network. Then, the MRF establishes connections between different motions in the feature space, and the cycle constraint associates the transferred motion with the original motion data. Finally, we evaluate our end-to-end style transfer on motion capture datasets using the Inception Score, and build a simple application platform on the model for real-world use.

(4) As an attempt at human motion style transfer on image and video data, we propose a pixel-level transfer model based on conditional generative adversarial networks. The model uses a convolutional LSTM branch and a convolutional branch as two encoding networks to extract video features and image content, respectively. The combined features are then decoded frame by frame into human motion video. Finally, a Gram matrix imposes constraints on the encoding and decoding features to control the style transfer.
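Steps (2) and (4) both use a Gram matrix over encoded features as the style transfer constraint. The sketch below illustrates that idea in NumPy; the (channels, frames) feature shape, the normalization, and the mean-squared Gram difference are illustrative assumptions, not the thesis's exact formulation.

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (channels, frames) feature map, normalized by its size."""
    c, t = features.shape
    return features @ features.T / (c * t)

def style_loss(transfer_features, style_features):
    """Mean squared difference between the two Gram matrices."""
    g_a = gram_matrix(transfer_features)
    g_b = gram_matrix(style_features)
    return float(np.mean((g_a - g_b) ** 2))

# Toy example: 4 feature channels over 8 motion frames.
rng = np.random.default_rng(0)
behavior = rng.standard_normal((4, 8))
style = rng.standard_normal((4, 8))

print(style_loss(behavior, style) >= 0.0)  # True: the loss is non-negative
print(style_loss(style, style) == 0.0)     # True: identical features give zero loss
```

Because the Gram matrix captures correlations between feature channels rather than their temporal ordering, minimizing this loss pushes the transferred motion toward the style statistics while a separate content constraint preserves the behavior.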
Keywords/Search Tags: Motion Style Transfer, Auto-Encoder, Markov Random Fields, Generative Adversarial Network, Cycle Consistency