With the development of the Internet and multimedia technology in recent years, digital music has grown rapidly. Faced with the explosive growth of Internet music and its application scenarios, traditional data organization methods based on music titles and artist names have proved unable to meet all needs. Music tags are a set of keywords that describe high-level information in music; they are useful for organizing and retrieving music resources efficiently, and tagging has become a common and important way of organizing music. Music tags can be added to a music knowledge graph as attributes of songs and further applied in upper-level applications such as a dialog-based music search system built on the knowledge graph. This paper therefore studies music tagging based on music text. The main research work is as follows:

For music topic tagging, this paper designs a topic classification model. To overcome the limited semantic expressiveness of traditional static word vectors, the vector representations of song titles and lyrics are obtained through the pre-trained language model BERT (Bidirectional Encoder Representations from Transformers). Global information is extracted by applying an attention mechanism over the [CLS] vector of each Transformer layer. To address the small receptive field of a traditional CNN, IDCNN (Iterated Dilated Convolution), which enlarges the receptive field, is selected to further encode BERT's word vector matrix and obtain local information. The global and local information are concatenated at the feature fusion layer and fed into a fully connected layer for classification. Experiments show that this model outperforms other deep learning models based on static word vectors, with an average F1 score of 69%.

For music emotion tagging, this paper designs an emotion classification model. An emotion dictionary is helpful for indicating the emotional tendency of text. The model first generates emotion word vectors for song titles and lyrics from an emotion dictionary, and then combines them with the word vectors produced by BERT to obtain emotional information. Emotion classification relies on word order information, but the position embeddings in BERT are weak at capturing such order, so a BiGRU (Bidirectional Gated Recurrent Unit) is used to extract bidirectional contextual semantic information. Furthermore, an attention mechanism is used to weight the importance of different words for sentiment classification. Experiments show that adding emotion information and using the attention mechanism improves emotion classification performance, with an average F1 score of 72%.

Based on the above research, this paper designs and implements an automatic music tagging system that predicts music tags. The tags generated by the system can be added to the music knowledge graph as song attributes, serving as data support for a music company's online conversational music search system.
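To make the topic model's structure concrete, the following is a minimal PyTorch-style sketch, illustrative only and not the thesis's implementation; all module names, dimensions, and dilation rates are assumptions. It shows the two branches described above: attention over the per-layer [CLS] vectors for global information, iterated dilated convolutions over the token vectors for local information, and concatenation followed by a fully connected classifier.

    import torch
    import torch.nn as nn

    class TopicTagger(nn.Module):
        # Sketch of the topic model (assumed names/sizes, not the thesis code):
        # global branch = attention over per-layer [CLS] vectors,
        # local branch = iterated dilated convolutions over token vectors.
        def __init__(self, hidden=768, n_classes=10, dilations=(1, 1, 2)):
            super().__init__()
            self.layer_attn = nn.Linear(hidden, 1)  # scores each layer's [CLS]
            self.idcnn = nn.ModuleList(
                [nn.Conv1d(hidden, hidden, kernel_size=3, padding=d, dilation=d)
                 for d in dilations]  # padding=dilation keeps sequence length fixed
            )
            self.fc = nn.Linear(hidden * 2, n_classes)

        def forward(self, cls_per_layer, token_vecs):
            # cls_per_layer: (batch, n_layers, hidden), [CLS] from every BERT layer
            # token_vecs:    (batch, seq_len, hidden), last-layer token embeddings
            w = torch.softmax(self.layer_attn(cls_per_layer), dim=1)  # (B, L, 1)
            global_feat = (w * cls_per_layer).sum(dim=1)              # (B, H)

            x = token_vecs.transpose(1, 2)                            # (B, H, T)
            for conv in self.idcnn:
                x = torch.relu(conv(x))       # each pass widens the receptive field
            local_feat = x.max(dim=2).values  # (B, H), pool local features over time

            fused = torch.cat([global_feat, local_feat], dim=1)  # feature fusion
            return self.fc(fused)                                # class logits

The max-pooling over time and the specific dilation schedule are placeholder choices; the abstract specifies only that global and local features are fused before the fully connected layer.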
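Similarly, a minimal sketch of the emotion model under the same caveats (the emotion-dictionary lookup, vocabulary size, and hidden sizes are all assumed): dictionary-derived emotion embeddings are concatenated with BERT token vectors, encoded by a BiGRU, and pooled with word-level attention before classification.

    import torch
    import torch.nn as nn

    class EmotionTagger(nn.Module):
        # Sketch of the emotion model (assumed names/sizes, not the thesis code):
        # BERT vectors + emotion-dictionary embeddings -> BiGRU -> attention pooling.
        def __init__(self, bert_dim=768, emo_dim=50, emo_vocab=2000,
                     rnn_hidden=256, n_classes=4):
            super().__init__()
            self.emo_emb = nn.Embedding(emo_vocab, emo_dim)  # ids from the dictionary
            self.bigru = nn.GRU(bert_dim + emo_dim, rnn_hidden,
                                batch_first=True, bidirectional=True)
            self.attn = nn.Linear(rnn_hidden * 2, 1)
            self.fc = nn.Linear(rnn_hidden * 2, n_classes)

        def forward(self, bert_vecs, emo_ids):
            # bert_vecs: (batch, seq_len, bert_dim); emo_ids: (batch, seq_len)
            x = torch.cat([bert_vecs, self.emo_emb(emo_ids)], dim=-1)
            h, _ = self.bigru(x)                     # (B, T, 2*rnn_hidden), both directions
            w = torch.softmax(self.attn(h), dim=1)   # per-word importance weights
            sent = (w * h).sum(dim=1)                # attention-pooled sentence vector
            return self.fc(sent)                     # emotion class logits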