
Research On Music Emotion Classification Based On Audio And Lyrics

Posted on: 2016-01-31    Degree: Master    Type: Thesis
Country: China    Candidate: K Y Tao    Full Text: PDF
GTID: 2308330473965534    Subject: Signal and Information Processing
Abstract/Summary:
The explosive growth of digital music has made ours a true digital-music era, and effectively managing such large music collections has become a problem worth attention. Emotion is not only one of the most essential features of music but also a reflection of listeners' psychological response, so automatically identifying the emotion in music is of great significance for advancing artificial intelligence.

For the music emotion classification problem, a multi-modal classification method that combines audio and lyrics is proposed, to compensate for the shortcomings of single-modal methods that classify using audio features alone. Based on the selection of a music emotion model and the analysis and processing of music features, this paper discusses how to use lyrics, and how to combine audio and lyrics, for music emotion classification, and compares the classification performance of the multi-modal and single-modal approaches.

For lyrics-based music emotion classification, we propose an improved CHI feature selection algorithm that introduces three parameters (frequency, concentration, and distribution information) to regulate the CHI statistic, uses the TFIDF method for weight calculation, and applies LSA for a second round of dimensionality reduction. The experiments show that the accuracy of the traditional CHI feature selection method is 58.20%, that of the improved CHI method is 67.21%, and that of the improved CHI method combined with LSA is 69.68%; the third method thus achieves higher accuracy with lower dimensionality.

For multi-modal music emotion classification based on audio and lyrics, we propose an improved LFSM fusion method for the modality-fusion problem and compare several different fusion methods through a series of experiments. The results show that the improved method achieves the highest accuracy, 84.43%, verifying its feasibility and effectiveness.
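To make the improved CHI idea concrete, the sketch below computes a per-class CHI-square statistic over a tokenized lyrics corpus and scales it by three factors standing in for frequency, concentration, and distribution information. The exact weighting formulas are illustrative assumptions, not the thesis's actual definitions:

```python
import math
from collections import Counter, defaultdict

def improved_chi(docs, labels):
    """Score each (term, class) pair with a CHI statistic regulated by three
    factors: term frequency, class concentration, and in-class distribution.
    The three scaling factors here are plausible stand-ins, not the thesis's
    exact formulas. docs: list of token lists; labels: parallel class labels."""
    n = len(docs)
    df = Counter()                  # number of docs containing each term
    df_c = defaultdict(Counter)     # per-class doc counts for each term
    tf = Counter()                  # total term frequency across corpus
    for toks, y in zip(docs, labels):
        tf.update(toks)
        for t in set(toks):
            df[t] += 1
            df_c[y][t] += 1
    n_c = Counter(labels)           # docs per class
    scores = defaultdict(dict)
    for c in n_c:
        for t in df:
            a = df_c[c][t]          # docs in class c containing t
            b = df[t] - a           # docs in other classes containing t
            cc = n_c[c] - a         # docs in class c lacking t
            d = n - n_c[c] - b      # docs in other classes lacking t
            denom = (a + cc) * (b + d) * (a + b) * (cc + d)
            chi = n * (a * d - b * cc) ** 2 / denom if denom else 0.0
            freq = tf[t] / n        # frequency factor (assumed form)
            conc = a / df[t]        # concentration in class c (assumed form)
            dist = a / n_c[c]       # distribution over class docs (assumed form)
            scores[c][t] = chi * freq * conc * dist
    return scores
```

Terms that occur often, mostly inside one class, and across many of that class's documents are promoted; terms whose raw CHI is inflated by a few rare documents are suppressed, which is the stated motivation for regulating the CHI values.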
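The TFIDF weighting and LSA reduction steps can be sketched together as one small pipeline. This is a minimal pure-NumPy version (dense matrices, a smoothed IDF, and truncated SVD for LSA); a real lyrics system would use sparse matrices and a tuned number of latent dimensions:

```python
import numpy as np

def tfidf_lsa(docs, vocab, k=2):
    """Build a TF-IDF document-term matrix, then reduce it to k latent
    dimensions with truncated SVD (LSA). Minimal dense sketch; the smoothed
    IDF form is one common convention, assumed rather than taken from the thesis."""
    idx = {t: i for i, t in enumerate(vocab)}
    tf = np.zeros((len(docs), len(vocab)))
    for d, toks in enumerate(docs):
        for t in toks:
            if t in idx:
                tf[d, idx[t]] += 1
    df = (tf > 0).sum(axis=0)                       # document frequency per term
    idf = np.log((1 + len(docs)) / (1 + df)) + 1.0  # smoothed inverse doc frequency
    x = tf * idf                                    # TF-IDF weights
    # LSA: keep only the top-k singular directions of the weighted matrix
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    return u[:, :k] * s[:k]                         # docs embedded in k dimensions
```

The returned k-dimensional document vectors are what a downstream emotion classifier would consume, which is how the "quadratic dimension reduction" after feature selection lowers dimensionality without discarding co-occurrence structure.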
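The abstract does not spell out the LFSM fusion rule, so the following shows only the generic score-level late fusion it builds on: each modality's classifier outputs per-class posteriors, and a weighted sum decides the final label. The fixed scalar weight is an assumption; the improved method presumably tunes the weighting per modality or per class:

```python
import numpy as np

def late_fuse(p_audio, p_lyrics, w_audio=0.5):
    """Score-level late fusion of two classifiers' class posteriors.
    A single weighted sum is shown as a baseline sketch; the thesis's
    improved LFSM variant is not reproduced here."""
    p_audio = np.asarray(p_audio, dtype=float)
    p_lyrics = np.asarray(p_lyrics, dtype=float)
    fused = w_audio * p_audio + (1.0 - w_audio) * p_lyrics
    return fused.argmax(axis=-1), fused  # predicted class index, fused scores
```

For example, an audio model leaning toward "calm" and a lyrics model strongly indicating "sad" can be combined so the lyrics evidence dominates, which is exactly the complementarity the multi-modal approach exploits over audio-only classification.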
Keywords/Search Tags: music emotion classification, CHI feature selection algorithm, LSA, multi-modal fusion, LFSM