
On Multi-task Learning Methods

Posted on: 2014-12-02
Degree: Doctor
Type: Dissertation
Country: China
Candidate: J Pu
Full Text: PDF
GTID: 1108330464964388
Subject: Computer application technology
Abstract/Summary:
Multi-task Learning (MTL) aims to improve generalization power by learning multiple tasks jointly and simultaneously. By exploring the hidden relationships among multiple tasks, many MTL algorithms have proved successful both in theory and in real applications. In general, task relationships are mostly modeled as task groupings and task outliers during the learning process. In this dissertation, we investigate MTL methods under the aforementioned task relationships.

First, we present a flexible multi-task learning framework to identify latent grouping structures under the agnostic task grouping setting, where the prior of the latent subspace is unknown to the learner. In particular, we relax the latent subspace to be full rank, while imposing group sparsity and orthogonality on the latent representation matrix of the target models. As a result, the target models still lie in a low-dimensional subspace spanned by the selected basis tasks, and the structure of the latent task subspace is fully determined by the data. The final learning process is formulated as a joint optimization over both the latent subspace and the target models. Besides the theoretical performance guarantee of our method, experimental results and comparisons with several competing approaches corroborate the efficiency and effectiveness of the proposed method.

Second, we adopt a generic formulation for robust MTL that considers both grouped tasks and outlier tasks. Instead of performing model decomposition to cope with the various structural elements, we directly impose a hybrid ℓ1,1/ℓ2,1-norm regularization term, yielding an unconstrained, non-smooth convex optimization problem. To derive efficient solutions for the generic MTL problem, we propose two algorithms with different emphases: the Iteratively Reweighted Least Squares (IRLS) method and the Accelerated Proximal Gradient (APG) method. A performance bound and extensive experiments demonstrate the effectiveness of our methods.

Finally, we formulate a generic MTL method for fine-grained visual categorization, which aims at classifying visual data at a subordinate level, e.g., identifying different species of birds. The training of each category classifier is treated as a single learning task, and multi-class categorization is formulated as a generic MTL problem that trains multiple classifiers simultaneously. To automatically discover both clusters of similar categories and outlier categories, we optimize the hybrid ℓ1,1/ℓ2,1-norm regularized classification problem. We show that the objective of our formulation can be solved by an iteratively reweighted ℓ2 optimization method (a minimal sketch of such a reweighting scheme is given below). Experimental results on two fine-grained visual categorization benchmark datasets validate the effectiveness of the proposed method.

All three proposed methods focus on the MTL paradigm. Compared to existing methods, they not only achieve better results in empirical studies but also come with theoretical guarantees.
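To make the reweighting idea concrete, the following is a minimal, illustrative IRLS-style sketch for a hybrid ℓ1,1/ℓ2,1-regularized multi-task least-squares problem. The squared loss, function name, initialization, and fixed iteration count are assumptions made for illustration; they are not taken from the dissertation's actual formulation or code.

```python
import numpy as np

def irls_hybrid_mtl(Xs, ys, lam1=0.1, lam2=0.1, n_iter=50, eps=1e-6):
    """Hypothetical IRLS sketch for
        min_W  0.5 * sum_t ||X_t w_t - y_t||^2
               + lam1 * ||W||_{1,1}    (entrywise sparsity: outlier entries)
               + lam2 * ||W||_{2,1}    (row sparsity: features shared by tasks)
    Xs, ys: per-task design matrices (n_t x d) and target vectors (n_t,).
    Returns the d x T coefficient matrix W (one column per task)."""
    d, T = Xs[0].shape[1], len(Xs)
    # Warm start from a lightly regularized per-task ridge fit, so the
    # initial reweighting is not degenerate at W = 0.
    W = np.column_stack([
        np.linalg.solve(X.T @ X + 1e-3 * np.eye(d), X.T @ y)
        for X, y in zip(Xs, ys)
    ])
    for _ in range(n_iter):
        # Majorize both non-smooth norms by weighted squared-l2 terms:
        # |w| is replaced by w^2 / (2|w_old|), and the row norm ||W_i||
        # by ||W_i||^2 / (2||W_i_old||), up to additive constants.
        D1 = lam1 / (np.abs(W) + eps)                  # entrywise weights, d x T
        D2 = lam2 / (np.linalg.norm(W, axis=1) + eps)  # shared row weights, (d,)
        for t in range(T):
            # Each subproblem is a weighted ridge regression per task:
            # (X_t^T X_t + diag(D1[:, t] + D2)) w_t = X_t^T y_t
            A = Xs[t].T @ Xs[t] + np.diag(D1[:, t] + D2)
            W[:, t] = np.linalg.solve(A, Xs[t].T @ ys[t])
    return W
```

Each pass thus decouples into one ridge solve per task: the entrywise weights D1 push individual outlier coefficients toward zero, while the shared row weights D2 couple the tasks and encourage jointly selected features. The eps smoothing keeps the weights finite when an entry or a row norm approaches zero.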
Keywords/Search Tags: Multi-task Learning, Group Sparse Coding, Proximal Gradient Method, Iteratively Reweighted Least Squares, Fine-Grained Visual Categorization