
Research On Semi-Supervised Multi-Task Learning Based On Regularization

Posted on: 2020-07-19
Degree: Master
Type: Thesis
Country: China
Candidate: X K Jia
GTID: 2428330578451284
Subject: Systems analysis and integration
Abstract/Summary:
With the development of machine learning, semi-supervised learning and multi-task learning have attracted increasing attention from researchers. The two paradigms share a common purpose: whether the additional information comes from unlabeled sample data or from information shared among related tasks, it can be incorporated into supervised learning to improve the performance of supervised methods. However, most existing multi-task learning methods are supervised and therefore require a large amount of labeled sample data, while most semi-supervised learning methods address single-task problems and ignore the relatedness between tasks (sub-problems). Combining multi-task learning with semi-supervised learning therefore makes it possible not only to exploit both labeled and unlabeled sample data, but also to learn the information shared across the mixed sample data of related tasks, which can further improve the generalization performance of the model.

In a multi-task learning system, the key problem is how to mine the information shared among related tasks. In general, the shared-information representation in multi-task learning falls into two categories: parameter-based sharing and regularization-based sharing. This thesis analyzes algorithms built on these two representations, most of which are designed for supervised problems, and combines them with semi-supervised learning to design and verify two semi-supervised multi-task learning methods for settings with few labeled samples.

First, a semi-supervised multi-task learning method based on parameter sharing is proposed. It combines the least squares support vector machine with l0-norm regularization and multi-task learning. By assuming that the classification hyperplane function of each related task is composed of a common (shared) function and a task-specific (private) function, the model can better learn the parameter information shared among related tasks. To further exploit the unlabeled sample data, region-labeling and label-resetting strategies are used to update the category labels of the unlabeled samples.

Second, building on the modified multi-task least squares support vector machine as the classifier that mines parameter-sharing information among related tasks, the second method introduces an l2,1-norm regularization term. It makes full use of the labeled and unlabeled sample data of the related tasks, learns a sparse feature selection matrix for each task, and mines the structure shared by all tasks by imposing a global constraint on the feature selection matrices.

Finally, the feasibility and effectiveness of the two proposed semi-supervised multi-task learning methods are verified on the Dermatology, Automobile, and Vehicle Silhouettes data sets.
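The abstract does not state the objective functions explicitly; the following is a minimal sketch of the two formulations as they are commonly written, assuming a standard regularized multi-task least squares SVM with the parameter split w_t = w_0 + v_t and a standard l2,1-norm joint feature selection objective (the symbols w_0, v_t, b_t, W, lambda and gamma are illustrative and not taken from the thesis):

\[
\min_{w_0,\{v_t\},\{b_t\}}\ \frac{1}{2}\|w_0\|^2 \;+\; \frac{\lambda_1}{2}\sum_{t=1}^{T}\|v_t\|^2 \;+\; \frac{\gamma}{2}\sum_{t=1}^{T}\sum_{i=1}^{n_t}\Bigl(y_{ti}-(w_0+v_t)^{\top}x_{ti}-b_t\Bigr)^{2}
\]

Here w_0 captures the parameters shared by all T tasks and v_t the private deviation of task t; shrinking the v_t pulls the task hyperplanes toward a common one, which is the parameter-sharing idea described above.

\[
\min_{W\in\mathbb{R}^{d\times T}}\ \sum_{t=1}^{T}\mathcal{L}\bigl(W_{\cdot t};X_t,y_t\bigr)\;+\;\lambda_2\,\|W\|_{2,1},
\qquad
\|W\|_{2,1}=\sum_{j=1}^{d}\sqrt{\sum_{t=1}^{T}W_{jt}^{2}}
\]

The l2,1-norm penalizes entire rows of the stacked task weight matrix W, so features are switched on or off jointly across all tasks; this row sparsity is one common way of realizing the globally constrained, shared feature selection described in the second method.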
Keywords: Multi-task learning, Semi-supervised learning, Norm regularization, Feature selection, Shared representation