
Research On The Automatic Style Transfer Of MIDI Music Performance Based On Deep Learning

Posted on: 2021-02-25    Degree: Master    Type: Thesis
Country: China    Candidate: A D Dai    Full Text: PDF
GTID: 2428330611966415    Subject: Communication and Information System
Abstract/Summary:
Today, with the rise of artificial intelligence, computer music is developing rapidly. Music has two important aspects: composition and performance. Different genres of music convey different styles, and performers inject different rhythms and dynamics when playing, producing rich expressiveness. The development of image style transfer has opened up the exploration of music style transfer. The study of music style transfer can inspire music creators and assist music generation, and it is an important issue in many fields of artificial intelligence.

MIDI (Musical Instrument Digital Interface) data records the performance information of music. This thesis studies the transformation of music performance style: it takes velocity as the main feature of performance style, uses deep learning to predict the velocity changes in different styles of MIDI music, and automatically assigns the velocities of different performance styles to the same piece of music, so that a computer can produce performances close to those of a human musician. The main work and innovations of this thesis include:

(1) A quantitative representation of MIDI music content and performance style based on a note matrix and a velocity matrix is proposed. It overcomes the problem that the piano-roll representation cannot distinguish a single long note from multiple consecutive notes of the same pitch, and thus describes the music information more comprehensively. An implicit music style is extracted from the note matrix with an autoencoder, which better eliminates the potential influence of music content on performance style.

(2) A performance style transfer network based on recurrent and convolutional neural networks is proposed. A bi-directional recurrent neural network based on the GRU (Gated Recurrent Unit) extracts note feature vector sequences for different styles, and a one-dimensional convolutional neural network predicts the velocities of the extracted note feature vector sequences in a specific style, which better learns the velocity changes of different styles of MIDI music.

(3) A performance style transfer experiment is designed, together with a subjective and objective evaluation method. The objective evaluation proposes a style transfer intensity measure based on a velocity classifier; the subjective evaluation uses an online listening test in which music lovers are invited to judge whether the converted MIDI music was played by a human or a machine. In the results, the machine was correctly identified for only 46% of the 200 songs.
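As a rough illustration of the representation idea in contribution (1), the sketch below uses a hypothetical encoding (not the thesis's exact format): each time step stores an explicit attack/sustain flag per pitch, so that one sustained note and several re-struck notes of the same pitch, which look identical in a plain piano roll, stay distinguishable, while velocities are kept in a separate matrix.

```python
# Hypothetical note-matrix / velocity-matrix encoding (illustrative only).
# Each note is (onset_step, duration_steps, midi_pitch, velocity 0-127).
# A plain piano roll marks a pitch "on" per step, so one 4-step note and
# two back-to-back 2-step notes at the same pitch look the same.
# Encoding onsets separately (2 = attack, 1 = sustain, 0 = silence)
# removes that ambiguity.

N_STEPS, N_PITCHES = 8, 128

def to_matrices(notes):
    note_mat = [[0] * N_PITCHES for _ in range(N_STEPS)]
    vel_mat = [[0] * N_PITCHES for _ in range(N_STEPS)]
    for onset, dur, pitch, vel in notes:
        note_mat[onset][pitch] = 2               # attack frame
        for t in range(onset + 1, onset + dur):
            note_mat[t][pitch] = 1               # sustain frames
        vel_mat[onset][pitch] = vel              # performance style lives here
    return note_mat, vel_mat

# One long C4 (pitch 60) vs. two repeated C4s: identical piano rolls,
# different note matrices.
long_note, _ = to_matrices([(0, 4, 60, 90)])
repeated, _ = to_matrices([(0, 2, 60, 90), (2, 2, 60, 90)])

def pitch_column(mat, pitch=60, steps=4):
    return [row[pitch] for row in mat[:steps]]

print(pitch_column(long_note))  # [2, 1, 1, 1]
print(pitch_column(repeated))   # [2, 1, 2, 1]
```

With this separation, the note matrix carries the music content while the velocity matrix carries the performance style, which is the quantity the thesis's transfer network predicts.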
Keywords/Search Tags: MIDI, performance style transfer, velocity, deep learning