
Research On Task-Oriented Extension Of Gaussian Process Latent Variable Model

Posted on: 2019-03-18
Degree: Doctor
Type: Dissertation
Country: China
Candidate: P Li
Full Text: PDF
GTID: 1368330590966695
Subject: Software engineering
Abstract/Summary:
The Gaussian process (GP), as a flexible Bayesian non-parametric model, has been widely used in machine learning and achieves superior results. However, existing GP-based models, such as the Gaussian process regression model, the Gaussian process classification model, and the Gaussian process latent variable model (GPLVM), merely introduce the GP prior into the modeling while ignoring the characteristics of the task itself and the valuable information it carries, and therefore cannot meet the requirements of diverse machine learning tasks. To address this problem, this dissertation explores extensions of the Gaussian process and the Gaussian process latent variable model from the perspective of four concrete machine learning tasks: multi-task learning, metric learning, multi-view learning, and feature-correlation learning. The main contributions are as follows:

(1) We establish a hierarchical Gaussian process model for multi-task learning. Unlike existing GP-based multi-task learning models, this model has a two-layer structure: a shared feature learning layer and a task-correlation embedding layer. The former learns the features shared among multiple tasks; the latter explicitly embeds or learns the correlations between tasks. With this structure, our model not only embeds task correlations explicitly into the modeling but also avoids constructing and computing the cross-covariance matrix required by cross-covariance-based methods, leading to lower computational and memory complexity. Experimental results demonstrate its superiority in multi-task learning over recently proposed approaches.

(2) We propose a Bayesian non-parametric metric (similarity) learning model, GP-Metric. In the modeling process, we use a Gaussian process to extend the bilinear similarity into a non-parametric metric and thereby obtain a non-parametric metric learning method. Unlike parametric metric learning methods, ours effectively reduces the risk of over-fitting and avoids the limited flexibility of parametric models: the learned metric inherits the flexibility of the GP and gains the ability to capture nonlinear structure. Experimental results demonstrate the effectiveness of GP-Metric and its superior performance in dimensionality reduction and metric learning; a minimal sketch of the bilinear form that GP-Metric generalizes is given below.
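For concreteness, here is a minimal, self-contained sketch of the standard bilinear similarity s(x, y) = x^T M y with a positive semi-definite parameterization M = L L^T. This is only the parametric form that the abstract says GP-Metric extends, not the dissertation's actual model; the names (bilinear_similarity, L) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 5, 3                      # input dimension, rank of the metric
L = rng.standard_normal((d, r))  # M = L @ L.T is positive semi-definite

def bilinear_similarity(x, y, L):
    """Parametric bilinear similarity s(x, y) = x^T M y with M = L L^T."""
    return x @ (L @ L.T) @ y

x = rng.standard_normal(d)
y = rng.standard_normal(d)

# Equivalent view: an ordinary inner product after the linear map z = L^T x.
s1 = bilinear_similarity(x, y, L)
s2 = (L.T @ x) @ (L.T @ y)
assert np.isclose(s1, s2)
print(f"s(x, y) = {s1:.4f}")
```

Because s is linear in each argument, the learnable similarities are restricted to inner products under a linear map of the inputs; placing a GP prior over the similarity function lifts this restriction while retaining Bayesian regularization, which is the intuition behind the flexibility and over-fitting claims above.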
(3) We propose a shared Gaussian process latent variable model for incomplete multi-view clustering. To address the incomplete multi-view clustering problem, it jointly learns a set of intentionally aligned, representative auxiliary points in the individual views, which both compensate for the missing instances and implement a group-level constraint; this significantly improves the performance of the shared Gaussian process latent variable model on incomplete multi-view learning tasks. Experimental results demonstrate the superiority of our method in incomplete multi-view clustering. Furthermore, the model extends straightforwardly to cases with more than two views without adding any complexity to the formulation.

(4) We propose a feature-correlation-aware GPLVM. To effectively exploit feature-correlation information and improve feature learning, we introduce a set of extra latent variables into the original GPLVM to explicitly model the feature-description information, and we define a joint objective function that simultaneously learns the low-dimensional latent variables and the feature-description variables. The model applies to both unsupervised and supervised learning to improve dimensionality reduction, and experimental results show that in both scenarios the proposed models outperform their state-of-the-art counterparts.

In addition, we explore sparse Gaussian process methods for formulating the proposed extension models, and we use variational inference and stochastic variational inference, respectively, to optimize the models in (3) and (4) so that they scale to much larger datasets; a generic sketch of the sparse-approximation idea follows.
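The abstract only names sparse GPs and (stochastic) variational inference without giving the formulation, so the following is a generic sketch of the classic subset-of-regressors (SoR) sparse approximation with m inducing points, which replaces the O(n^3) cost of exact GP regression with O(n m^2). The inducing-point placement and all names here are illustrative assumptions, not the dissertation's method.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf(A, B, ls=1.0):
    """Squared-exponential kernel matrix k(A, B)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

# Toy 1-D regression data.
n, m, noise = 500, 20, 0.1
X = rng.uniform(-3, 3, (n, 1))
y = np.sin(2 * X[:, 0]) + noise * rng.standard_normal(n)

# Inducing inputs Z: here simply an even grid (a common, simple choice).
Z = np.linspace(-3, 3, m)[:, None]

Kuu = rbf(Z, Z) + 1e-8 * np.eye(m)  # m x m, with jitter for stability
Kuf = rbf(Z, X)                     # m x n

# SoR predictive mean at test points X*:
#   mu(X*) = K*u (Kuf Kfu + sigma^2 Kuu)^{-1} Kuf y
# Only an m x m linear system is solved, so the cost is O(n m^2), not O(n^3).
X_star = np.linspace(-3, 3, 200)[:, None]
A = Kuf @ Kuf.T + noise**2 * Kuu
mu_star = rbf(X_star, Z) @ np.linalg.solve(A, Kuf @ y)

print("max |deviation| from noiseless target:",
      np.abs(mu_star - np.sin(2 * X_star[:, 0])).max())
```

Variational sparse GPs (e.g., Titsias-style inducing-point methods) and their stochastic-gradient extensions share this O(n m^2) structure but additionally optimize the inducing inputs and a variational posterior, which is what makes mini-batch training on large datasets possible.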
Keywords/Search Tags: Gaussian process, latent variable model, feature learning, multi-task learning, metric learning, multi-view learning, feature correlation