
RNN Based Multi-label And Multi-task Learning

Posted on: 2018-03-06
Degree: Master
Type: Thesis
Country: China
Candidate: L Wang
Full Text: PDF
GTID: 2428330596453945
Subject: Control Science and Engineering
Abstract/Summary:
Multi-label learning tasks are ubiquitous in real-world problems. In multi-label learning, the size of the label set is unknown a priori for each unseen instance. An intuitive approach to the multi-label problem is to decompose it into multiple independent binary classification problems. However, this kind of method ignores the correlations between the different labels of an instance, and the expressive power of such a system can be weak. Multi-task learning is also of significant research interest, as it usually performs better than single-task learning.

In this thesis, we develop two multi-label learning models, both based on RNNs. The first is an LSTM-based RNN multi-label classifier: it first learns a representation of the instance with an RNN, and this representation is then used to predict the label set. The second is an attention-based RNN multi-label classifier: it learns a representation of the instance, and a decoder then outputs one label at each time step. Owing to the recurrent structure, this kind of classifier learns the dependencies among labels effectively. The decoder also contains an attention mechanism that links the most relevant features to each label. Beyond these two models, we introduce a variant of the RNN whose repetitive module is replaced by a Kalman filter. Thanks to the Kalman filter's capabilities, it can filter out system noise, and using the innovation to update the system state lets information propagate effectively. Experimental results show that our models outperform existing methods.

We also develop a multi-task learning model based on common and special (task-specific) features. To learn these features we take two different approaches: first, we map each instance into two parts, one carrying the common information and the other the special information; second, we use an RNN to learn the common features. The common features then participate in the learning of every task. Experimental results show that our model captures the relations between different tasks effectively.
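To illustrate how the attention mechanism in the decoder links the most relevant features to each emitted label, here is a minimal dot-product attention sketch in pure Python. This is an assumption-laden simplification, not the thesis's actual model: the scoring function, dimensions, and function name `attention` are all illustrative.

```python
import math

def attention(decoder_state, encoder_states):
    """Dot-product attention sketch: score each encoder state against the
    current decoder state, softmax the scores into weights, and return the
    weighted sum of encoder states (the context vector used for the label)."""
    # Relevance score of each encoder state (dot product with decoder state)
    scores = [sum(d * e for d, e in zip(decoder_state, h))
              for h in encoder_states]
    # Numerically stable softmax turns scores into attention weights
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Context vector: encoder states weighted by their relevance
    dim = len(encoder_states[0])
    context = [sum(w * h[i] for w, h in zip(weights, encoder_states))
               for i in range(dim)]
    return weights, context
```

At each decoding time step the context vector would be combined with the decoder state to predict the next label, so different labels can attend to different parts of the input.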
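The Kalman-filter recurrent module described above can be sketched as a predict-then-correct cell, where the innovation (measurement minus prediction) carries the new information into the state. This is a minimal scalar sketch under assumed parameters (`A`, `H`, `Q`, `R` are illustrative constants, not values from the thesis).

```python
def kalman_cell(x, P, z, A=1.0, H=1.0, Q=1e-3, R=0.1):
    """One step of a scalar Kalman filter used as a recurrent cell.
    x: state estimate, P: estimate variance, z: new measurement."""
    # Predict the next state and its uncertainty
    x_pred = A * x
    P_pred = A * P * A + Q
    # Innovation: what the measurement says beyond the prediction
    innovation = z - H * x_pred
    # Kalman gain balances trust in the prediction vs. the measurement
    K = P_pred * H / (H * P_pred * H + R)
    # Correct: the innovation propagates the new information into the state
    x_new = x_pred + K * innovation
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

# Running the cell over a noisy constant signal filters the noise out:
x, P = 0.0, 1.0
for z in [1.1, 0.9, 1.05, 0.95, 1.0]:
    x, P = kalman_cell(x, P, z)
```

After a few steps the state estimate settles near the true value while the variance shrinks, which is the noise-filtering behaviour the abstract attributes to this module.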
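The common/special feature split for multi-task learning can be sketched as follows. This is a hypothetical simplification: the split point `common_dim`, the linear task heads, and the function name `multitask_scores` are all illustrative, and the thesis's actual model learns these features with an RNN rather than by slicing.

```python
def multitask_scores(x, common_dim, task_heads):
    """Split an instance's representation into a common part (shared by
    every task) and a special part, then let each task head score the
    concatenation of the two."""
    common, special = x[:common_dim], x[common_dim:]
    features = common + special  # common features participate in every task
    return {name: sum(w * f for w, f in zip(weights, features))
            for name, weights in task_heads.items()}
```

Because every task head sees the common features, patterns shared across tasks are learned once and reused, which is how such models capture inter-task relations.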
Keywords/Search Tags:Multi-label Learning, Multi-task Learning, RNN, LSTM, Kalman Filtering