
Research On Theories And Methods Of Shared Subspace Representation Learning For Multi-view Data

Posted on: 2022-01-19
Degree: Doctor
Type: Dissertation
Country: China
Candidate: M X Xu
Full Text: PDF
GTID: 1488306560989569
Subject: Signal and Information Processing
Abstract/Summary:
Against the background of the big data era, diversified means of data collection make the collected data increasingly complex and high-dimensional, with multi-description, multi-modal and multi-source characteristics. In machine learning, such data are generally referred to as multi-view data. The pervasive existence of multi-view data confronts traditional single-view machine learning theories and methods with new challenges, and research on theories and methods for multi-view learning has become a hot topic in machine learning. Multi-view data jointly characterize the same semantic objects and contain abundant complementary and consistent information. How to mine and utilize this complementary and consistent information to obtain a consistent representation of the semantic objects is a core issue that multi-view learning must urgently solve. Therefore, taking subspace learning as the main line, this thesis studies the fundamental theories and methods of multi-view representation learning, with the objectives of resolving the intrinsic correlations of multi-view data, the reliability of multiple views, the nonlinearity of data distributions, and the introduction of semantic association information. The main contributions and creative research results are as follows:

1) Toward obtaining a compact and discriminative shared representation for data associated with multiple views, we propose an l2,1-norm constrained Canonical Correlation Analysis method, L2,1-CCA. In L2,1-CCA, the l2,1 norm is employed to constrain the canonical loadings and to measure the correlation loss simultaneously, which facilitates exploiting the complementary and coherent information across multiple views. On the one hand, it endows the canonical loadings with the capacity for variable selection, improving the interpretability of the learned canonical variables; on the other hand, the learned
canonical common representation remains highly consistent with the most correlated canonical variables from each view of the data. Meanwhile, with the l2,1-norm constraint on the correlation loss, the proposed L2,1-CCA also gains a desired degree of insensitivity to noise and outliers. To solve the optimization problem, an efficient alternating optimization algorithm is developed and its convergence is analyzed theoretically. Extensive experimental results on several real datasets show the good performance of L2,1-CCA.

2) To address view redundancy, noisy views and view consistency in multi-view representation learning, we propose a multi-view low-rank consistent common representation method with view selectivity. By introducing an l0-norm sparsity regularizer acting as a view selector, the proposed method can automatically eliminate potentially negative views in shared representation learning and make the shared representation of multi-view data more reliable. By incorporating non-negative matrix factorization and low-rank learning, it can not only remove redundant information among multi-view data but also enable the subspace shared by the multiple views to fully cover the consistent and complementary information. Visualization results by t-SNE show that the multi-view low-rank consistent representation preserves the intrinsic structure of multi-view data well. The effectiveness of the proposed method is validated on multi-view data clustering and classification tasks.

3) To model nonlinear correlations among multi-view data, we propose a kernel dependence maximization subspace learning model. Different from most existing correlation-based cross-modal subspace learning methods, the proposed model maps cross-modal data into different Hilbert spaces of the same dimension through sparse projections and measures their correlation with the Hilbert-Schmidt Independence Criterion (HSIC), so as to realize correlation modeling of
heterogeneous cross-modal data. Moreover, by introducing the dependence between semantic labels and cross-modal data in Hilbert space, the learned shared subspace becomes more discriminative. To solve the optimization problem, an effective iterative optimization algorithm is designed and its convergence analysis is provided. Cross-modal retrieval results on publicly available datasets verify the effectiveness of the proposed model.

4) To introduce semantic correlation information into multi-view representation learning and thereby deepen the understanding of semantic objects, we propose a semantic-consistent subspace representation learning model and apply it to multi-label cross-modal retrieval. By introducing an HSIC-based regularization term, the model can not only make full use of the correlation information among multiple labels but also maintain the consistency of each modality. Moreover, based on semantic consistency projection, a middle-level consistency mapping is learned to bridge the semantic gap between the low-level feature space of each modality and the shared high-level semantic space, yielding a more discriminative shared subspace. To solve the proposed model, an effective alternating iterative optimization algorithm is designed. Experimental results on the NUS-WIDE and VOC2007 datasets show that introducing semantic association information can effectively improve the performance of multi-label cross-modal retrieval.
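Contribution 1 builds on classical Canonical Correlation Analysis. As background only, the following is a minimal NumPy sketch of classical two-view CCA via the whitened cross-covariance SVD; it is not the thesis's L2,1-constrained variant, and the function names (`cca`, `inv_sqrt`) and the ridge term `reg` are illustrative choices, not from the thesis.

```python
import numpy as np

def cca(X, Y, reg=1e-8):
    """Classical CCA: returns canonical correlations (descending)
    and the projection matrices for each view."""
    n = X.shape[0]
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # View covariances (with a small ridge for numerical stability)
    # and the cross-covariance.
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n

    def inv_sqrt(C):
        # Inverse matrix square root via eigendecomposition.
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    # Singular values of the whitened cross-covariance are the
    # canonical correlations.
    Wx, Wy = inv_sqrt(Cxx), inv_sqrt(Cyy)
    U, s, Vt = np.linalg.svd(Wx @ Cxy @ Wy)
    return s, Wx @ U, Wy @ Vt.T
```

When the two views share a low-dimensional latent signal, the leading canonical correlations approach 1; the l2,1 constraint in L2,1-CCA additionally sparsifies the rows of the projection matrices to select variables.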
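Contributions 3 and 4 measure cross-modal dependence with the Hilbert-Schmidt Independence Criterion. A minimal sketch of the standard empirical HSIC estimator, tr(KHLH)/(n-1)^2 with centering matrix H = I - 11^T/n, using RBF kernels; the median-heuristic bandwidth and function names are illustrative assumptions, not details from the thesis.

```python
import numpy as np

def rbf_kernel(X, sigma=None):
    # Pairwise squared distances -> Gaussian (RBF) kernel matrix.
    sq = np.sum(X**2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    if sigma is None:
        # Median heuristic for the bandwidth (an assumed default).
        sigma = np.sqrt(np.median(d2[d2 > 0]))
    return np.exp(-d2 / (2.0 * sigma**2))

def hsic(X, Y):
    """Empirical HSIC between samples X and Y (rows are paired
    observations). Larger values indicate stronger dependence."""
    n = X.shape[0]
    K, L = rbf_kernel(X), rbf_kernel(Y)
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```

The estimator is non-negative (tr(KHLH) = tr((HKH)(HLH)), a trace of a product of PSD matrices) and grows with dependence between the two samples, which is why maximizing it over projections yields correlated subspaces.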
Keywords/Search Tags: Multi-view learning, shared subspace learning, canonical correlation analysis, matrix factorization, Hilbert space, cross-modal retrieval