Choreography is usually carried out by professional choreographers, but advances in motion capture technology and artificial intelligence have made it possible for computers to choreograph to music. Computer choreography faces two main challenges: 1) how to obtain realistic and novel dance movements without relying on motion capture or manual production; 2) how to use appropriate music and motion features, together with matching algorithms, to improve the synchronization of music and dance. To address these two challenges, this thesis proposes a novel framework based on the Mixture Density Network (MDN) to synthesize dances that match a target piece of music. The framework comprises three steps: motion generation, motion screening, and feature matching. Compared with previous studies, the dances generated in this thesis are more coherent and realistic, and users' subjective scores indicate that the choreography results match the music better. The contributions of this thesis consist of the following three aspects.

Firstly, to make the dance movements generated by the model suitable for choreography with music, this thesis proposes a parameter control algorithm and a coherence-based motion screening algorithm to improve the consistency of the dance movements. During motion generation, the mean of the Gaussian component output by the MDN is taken as the skeleton position, and motion coherence is measured by the rate of change of joint velocity between adjacent frames. Experimental results show that, compared with the originally generated motions, the consistency of the screened motions is significantly improved.

Secondly, to improve the unity of music and motion, this thesis proposes a multi-level music-motion feature matching algorithm that combines global feature matching with local feature matching. First, the note density and Beats Per Minute (BPM) of the target music are computed from the constant-Q transform and matched with features such as motion speed and spatial extent; then local feature matching between music and motion segments is performed based on rhythm and intensity. Experimental results show that, with the multi-level feature matching algorithm, the speed and other characteristics of each motion segment in the final synthesized result are more uniform, and the overall choreography is more aesthetically pleasing.

Thirdly, this thesis analyses the whole process of computer choreography with music in depth and proposes a choreography framework, which provides a new approach to this problem. The framework consists of four modules: motion dataset construction, model training and motion generation, dance motion arrangement and synthesis, and 3D character animation visualization. To enhance the practicality of the framework, user control is introduced into the third module, so the choreography results can be influenced by setting thresholds on local skeletal speed and spatial features. With this framework, dance movements matching the target music can be generated automatically. Compared with music-motion mapping models based on deep neural networks, the framework offers stronger stability and generalization ability.
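
The coherence-based screening criterion described above can be illustrated with a minimal sketch. The function names, the fixed frame rate, and the threshold value below are illustrative assumptions rather than values taken from the thesis; only the idea of using the MDN Gaussian means as joint positions and rating a clip by how sharply joint velocities change between adjacent frames comes from the text.

    import numpy as np

    def coherence_score(joint_positions, fps=30):
        # joint_positions: array of shape (frames, joints, 3); assumed here to be
        # the means of the Gaussian components output by the MDN, used as the
        # skeleton positions of a generated clip.
        velocity = np.diff(joint_positions, axis=0) * fps   # per-frame joint velocity
        accel = np.diff(velocity, axis=0) * fps             # rate of velocity change
        # A smaller average velocity change between adjacent frames means smoother,
        # more coherent motion.
        return float(np.linalg.norm(accel, axis=-1).mean())

    def screen_motions(clips, threshold=8.0):
        # Keep only clips whose velocity-change rate stays below a threshold
        # (the value 8.0 is purely illustrative).
        return [clip for clip in clips if coherence_score(clip) <= threshold]

Similarly, the global music features used in the matching step (BPM and note density derived from a constant-Q transform) could be extracted roughly as follows; the use of librosa and this particular call sequence are assumptions for illustration, not the implementation in the thesis.

    import librosa
    import numpy as np

    def global_music_features(path):
        # Load the target music and compute a constant-Q spectrogram.
        y, sr = librosa.load(path)
        cqt_db = librosa.amplitude_to_db(np.abs(librosa.cqt(y, sr=sr)), ref=np.max)
        # Onset strength from the CQT, then estimate tempo (BPM) and note density.
        onset_env = librosa.onset.onset_strength(S=cqt_db, sr=sr)
        tempo, _ = librosa.beat.beat_track(onset_envelope=onset_env, sr=sr)
        onsets = librosa.onset.onset_detect(onset_envelope=onset_env, sr=sr)
        note_density = len(onsets) / (len(y) / sr)   # onsets per second
        return tempo, note_density

In such a scheme, the global features (tempo, note density) would steer the choice of motion speed and spatial extent, while local rhythm and intensity matching would operate segment by segment, as described in the second contribution.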