
Multi-label Transfer Learning Based On The Difference In Samples

Posted on: 2017-10-01    Degree: Master    Type: Thesis
Country: China    Candidate: S B Yu    Full Text: PDF
GTID: 2348330503465374    Subject: Computer application technology
Abstract/Summary:
Multi-label learning is an important branch of machine learning and is widely applied in real life. Performance evaluation is an important indicator for measuring the quality of a classifier. Popular evaluation metrics for multi-label systems include hamming loss, one-error, coverage, ranking loss, and average precision. A multi-label system outputs classification results for test samples, but not the values of these evaluation metrics. Computing the metrics requires labeled test samples, and labeling test samples can be costly. Can we obtain the values of the evaluation metrics without labeling the test samples? To solve this problem, this paper presents estimation methods from different perspectives. One method is based on the difference in sample distributions; another is based on the difference in sample instances. The experimental results show that the difference between the training and test sample distributions has a good linear relation with the evaluation metrics of a multi-label system, and that the difference between training and test sample instances likewise has a good linear relation with the system's performance evaluation. We also estimate the metric values by combining the difference in sample distributions with the difference in sample instances. The experimental results show that all three methods are effective.

Transfer learning is a hotspot in machine learning and is increasingly widely used in real life. Negative transfer is an unavoidable topic in transfer learning: the effect of transfer learning depends on the similarity between the source domain and the target domain. When the similarity is small, the effect of transfer learning may be poor, and negative transfer may even occur; when the similarity is large, positive transfer tends to occur.
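Two of the evaluation metrics named above (hamming loss and one-error), together with one possible sample-distribution difference measure, can be sketched as follows. The abstract does not fix a formula for the distribution difference, so the mean-feature-distance used here is an illustrative assumption; the thesis then fits a linear relation between such a difference and the metric values.

```python
import numpy as np

def hamming_loss(y_true, y_pred):
    """Fraction of label slots predicted incorrectly, averaged over all slots."""
    return np.mean(y_true != y_pred)

def one_error(y_true, scores):
    """Fraction of samples whose top-ranked label is not a true label."""
    top = np.argmax(scores, axis=1)
    return np.mean([y_true[i, top[i]] == 0 for i in range(len(y_true))])

def distribution_difference(X_train, X_test):
    """An assumed sample-distribution gap: Euclidean distance between the
    feature means of the training and test sets (one of many possible measures)."""
    return np.linalg.norm(X_train.mean(axis=0) - X_test.mean(axis=0))

# Toy example: 2 samples, 3 labels.
y_true = np.array([[1, 0, 1], [0, 1, 0]])
y_pred = np.array([[1, 0, 0], [0, 1, 0]])
scores = np.array([[0.9, 0.1, 0.4], [0.2, 0.8, 0.1]])
print(hamming_loss(y_true, y_pred))  # one wrong slot out of six, ~0.167
print(one_error(y_true, scores))     # both top-ranked labels are true, 0.0
```

With a collection of (distribution difference, measured metric) pairs from labeled datasets, a linear fit over the difference would then let one estimate a metric value for an unlabeled test set, which is the idea the experiments above support.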
In this paper, we study the similarity between the source domain and the target domain from the perspectives of the difference in sample distributions and the difference in sample instances. The experimental results show that when the difference in sample distributions is small, positive transfer occurs; otherwise, negative transfer is likely. Likewise, when the difference in sample instances is small, positive transfer occurs; otherwise, negative transfer easily arises.

Comprehensive research combining multi-label learning and transfer learning is still relatively scarce. This paper improves the single-label transfer learning algorithm TrAdaBoost and applies it to multi-label learning. Experiments show that the improved TrAdaBoost performs well. We then study the relationship between the effect of multi-label transfer learning and the similarity of the source and target domains, again considering the differences in sample distributions and sample instances. The experiments show that this relationship is similar to that observed in single-label transfer learning.
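The core of TrAdaBoost is its per-round reweighting: misclassified source-domain samples are down-weighted while misclassified target-domain samples are up-weighted. A minimal sketch of one such round is below, with the 0/1 error of the original single-label algorithm replaced by a per-sample hamming loss in [0, 1] as one plausible multi-label adaptation; the thesis's actual improvement may differ in its details.

```python
import numpy as np

def tradaboost_weight_update(w, per_sample_loss, n_source, n_rounds):
    """One round of TrAdaBoost-style reweighting (multi-label sketch).

    w               -- current weights, source samples first, then target samples
    per_sample_loss -- weak learner's hamming loss on each sample, in [0, 1]
    n_source        -- number of source-domain samples
    n_rounds        -- total number of boosting rounds N
    """
    w = w / w.sum()
    target_w = w[n_source:]
    target_loss = per_sample_loss[n_source:]
    # Weighted error of the weak learner on the target domain only.
    eps = np.sum(target_w * target_loss) / target_w.sum()
    eps = min(eps, 0.49)                                    # keep beta_t well defined
    beta_t = eps / (1.0 - eps)                              # AdaBoost-like target base
    beta = 1.0 / (1.0 + np.sqrt(2.0 * np.log(n_source) / n_rounds))  # source decay
    w_new = w.copy()
    w_new[:n_source] *= beta ** per_sample_loss[:n_source]       # shrink bad source samples
    w_new[n_source:] *= beta_t ** (-per_sample_loss[n_source:])  # grow bad target samples
    return w_new / w_new.sum()

# Toy round: 3 source + 2 target samples; samples 0 and 3 were misclassified.
w = np.ones(5) / 5
loss = np.array([1.0, 0.0, 0.0, 1.0, 0.0])
w_new = tradaboost_weight_update(w, loss, n_source=3, n_rounds=10)
```

After the update, the misclassified source sample carries less weight than its correctly classified neighbors (it is presumed too dissimilar to the target domain to help), while the misclassified target sample carries more, focusing later rounds on it. This asymmetry is what lets the similarity between domains govern whether transfer helps or hurts.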
Keywords/Search Tags: Multi-label learning, sample distribution, sample instances, transfer learning, TrAdaBoost