
Feature Based Informative Model For Music Recommendation

Posted on: 2013-01-28
Degree: Master
Type: Thesis
Country: China
Candidate: B Cheng
Full Text: PDF
GTID: 2218330362459268
Subject: Computer application technology
Abstract/Summary:
The explosive growth of available choices from content providers has given great prominence to recommendation systems. In the past years, recommendation systems have shown great potential to help users find interesting items in a large item space. In practical applications, recommendation systems can bring benefits to both customers and content providers. Taking Amazon as an example, recommending appealing items to customers may not only increase sales, but also improve the customer experience. Due to these great benefits to both users and content providers, recommendation systems have been actively researched since they were introduced.

For this year's KDD Cup Challenge, Yahoo! Labs released a large music rating dataset. The contest consists of two tracks. The first track is a rating prediction problem that aims at minimizing RMSE (Root Mean Square Error). It is similar to the famous Netflix Prize Challenge. The task of the second track is to discriminate the 3 songs rated highly by a user from the 3 songs she never rated. In this task, "rated highly" means a rating greater than or equal to 80. We tackle this problem as a top-n recommendation problem. That is, the three songs with the higher prediction scores are regarded as the user's favorite songs, while the other 3 songs are considered to be unrated.

In this paper, we use ranking-oriented SVD to solve this problem. A negative sampling technique is utilized to further improve prediction accuracy. Most importantly, we propose a feature-based informative model to incorporate different kinds of information into a single model. Ensembling many algorithms is a useful approach to improve overall performance, as proved by the winners of the Netflix Prize: different algorithms capture different information in the dataset, so they blend well. All the publicized results on KDD Cup Track 2 also adopt ensemble techniques to boost their final predictions. However, ensembles usually incur extra computation cost, so a single model with performance comparable to ensemble models is clearly preferable. Here we propose such a model. Different kinds of information, such as the taxonomy of items (more on this in Section 2.1), item neighborhoods, user-specific features and implicit feedback, are integrated into a single model. With this model, we achieve an error rate of 3.10% on the test set. This is the best result among all publicized single-predictor results on this task, even better than the performance of many ensemble models.
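To make the approach concrete, below is a minimal, illustrative sketch (not the thesis's actual implementation) of a feature-based prediction in this spirit, in Python with NumPy: a score combines latent user/item factors with an item bias, a taxonomy (artist) bias, and an implicit-feedback term, and the three highest-scoring candidate songs are taken as the user's highly rated ones. All names, toy dimensions, and random parameters here are assumptions; in the thesis the parameters would be learned by ranking-oriented SVD with negative sampling.

    import numpy as np

    rng = np.random.default_rng(0)
    n_users, n_items, n_factors = 100, 50, 8

    # Model parameters (randomly initialized for illustration only; they would
    # be learned from the rating data in the actual method).
    P = rng.normal(scale=0.1, size=(n_users, n_factors))   # user latent factors
    Q = rng.normal(scale=0.1, size=(n_items, n_factors))   # item latent factors
    item_bias = rng.normal(scale=0.1, size=n_items)
    artist_bias = rng.normal(scale=0.1, size=10)            # taxonomy feature: 10 toy artists
    item_artist = rng.integers(0, 10, size=n_items)         # item -> artist mapping

    def predict(u, i, implicit_items):
        """Feature-based score: latent factors plus item bias, taxonomy (artist)
        bias, and a simple implicit-feedback term from items the user interacted with."""
        implicit = Q[implicit_items].mean(axis=0) if len(implicit_items) else 0.0
        return (P[u] + implicit) @ Q[i] + item_bias[i] + artist_bias[item_artist[i]]

    # Track-2-style decision: of 6 candidate songs, take the 3 highest-scoring
    # ones as the user's highly rated songs and treat the rest as unrated.
    user = 0
    candidates = rng.choice(n_items, size=6, replace=False)
    implicit_items = rng.choice(n_items, size=5, replace=False)
    scores = np.array([predict(user, i, implicit_items) for i in candidates])
    top3 = candidates[np.argsort(scores)[-3:]]
    print("predicted highly rated songs:", top3)

The point of the sketch is only the structure of the scoring function: every additional information source (taxonomy, neighborhoods, user features, implicit feedback) enters as another additive feature term in a single model, rather than as a separate predictor to be ensembled.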
Keywords/Search Tags: Recommendation