
Research On Multi-view Feature Learning Based On Non-negative Matrix Factorization

Posted on: 2020-06-09
Degree: Master
Type: Thesis
Country: China
Candidate: X R Qiu
Full Text: PDF
GTID: 2428330590996795
Subject: Software engineering
Abstract/Summary:
Multi-view learning is an effective way to handle multi-view heterogeneous data: it exploits the complementary information between different views to fuse heterogeneous features. Within multi-view learning, subspace learning is a hot research direction; it assumes that all views are generated from the same semantic subspace. Recently, subspace learning methods based on non-negative matrix factorization have been used to fuse heterogeneous features across multiple views, achieving dimensionality reduction and avoiding the "curse of dimensionality". However, these methods still face serious challenges on low-quality, noisy data. Aiming at noise reduction in multi-view subspace learning, we focus on learning the latent common subspace via non-negative matrix factorization and dual graph regularization.

First, we propose a novel semi-supervised method for data representation, dual graph-regularized multi-view feature learning (DGMFL). To reduce the effect of uncorrelated items on the common subspace shared by different views, our approach isolates view-specific features for each view. DGMFL then exploits the local geometrical structure to explore the data manifold. Meanwhile, it builds an intra-class affinity graph and an inter-class penalty graph from the labeled items to regularize the conceptual manifold, following the principle of "keeping close relatives near and great differences apart". In this way, DGMFL achieves a more comprehensive representation of the structure hidden in multi-view datasets.

Second, we propose a novel subspace learning model for multi-view data representation, Adaptive Dual Graph-regularized Multi-View Non-negative Feature Learning (ADMFL). We reduce the effect of unrelated features by separating out the view-specific features of each view. Furthermore, we use the geometric structure of both the data manifold and the feature manifold to model the distribution of data points in the common subspace, and we introduce a weight factor to balance the influence of each view. Finally, we keep the latent common representation sparse with an l1,2-norm penalty, which drives the unimportant features of each data point to zero while preventing all-zero columns in the common representation.

We evaluate both algorithms on real-world datasets and compare them with state-of-the-art multi-view learning algorithms. The experimental results show that the proposed algorithms perform better and are more robust than the comparison algorithms. They can therefore learn a common representation for multi-view data and achieve feature fusion for high-dimensional, low-quality multi-view data with noise.
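To make the core idea concrete, the following is a minimal NumPy sketch of dual graph-regularized multi-view NMF with multiplicative updates. It is not the thesis' actual DGMFL/ADMFL algorithms: the k-NN graph construction, the per-view feature graphs, and all parameter names (`lam`, `mu`, `r`) are illustrative assumptions, and the semi-supervised label graphs, the adaptive view weights, and the l1,2-norm term are omitted. Each view matrix X_v (features x samples) is factored as X_v ≈ U_v V^T with view-specific bases U_v and one shared sample representation V; the sample graph regularizes V and each view's feature graph regularizes U_v.

```python
import numpy as np

def knn_affinity(Z, k=5):
    """Symmetric 0/1 k-nearest-neighbour affinity over the rows of Z."""
    n = Z.shape[0]
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    W = np.zeros((n, n))
    for i in range(n):
        nn = np.argsort(d2[i])[1:k + 1]  # k nearest, skipping the point itself
        W[i, nn] = 1.0
    return np.maximum(W, W.T)

def dual_graph_mvnmf(Xs, r, lam=0.1, mu=0.1, iters=200, eps=1e-9, seed=0):
    """Sketch of dual graph-regularized multi-view NMF (illustrative, not DGMFL/ADMFL).
    Xs: list of non-negative (d_v, n) view matrices over the same n samples.
    Returns per-view bases Us (d_v, r) and the common representation V (n, r)."""
    rng = np.random.default_rng(seed)
    n = Xs[0].shape[1]
    V = rng.random((n, r))
    Us = [rng.random((X.shape[0], r)) for X in Xs]
    # sample (data-manifold) graph over concatenated views; feature graph per view
    Wd = knn_affinity(np.vstack(Xs).T)
    Dd = np.diag(Wd.sum(1))
    Wfs = [knn_affinity(X) for X in Xs]
    Dfs = [np.diag(W.sum(1)) for W in Wfs]
    for _ in range(iters):
        for v, X in enumerate(Xs):
            U = Us[v]
            # basis update with feature-graph regularization (keeps U non-negative)
            U *= (X @ V + mu * Wfs[v] @ U) / (U @ (V.T @ V) + mu * Dfs[v] @ U + eps)
        # shared-representation update pools all views, with sample-graph regularization
        num = sum(X.T @ Us[v] for v, X in enumerate(Xs)) + lam * Wd @ V
        den = sum(V @ (Us[v].T @ Us[v]) for v in range(len(Xs))) + lam * Dd @ V + eps
        V *= num / den
    return Us, V
```

Because the updates are multiplicative, every factor stays non-negative throughout, and the graph terms pull neighbouring samples (and neighbouring features) toward similar rows of V (and of U_v), which is the manifold-preserving effect the dual regularization is after.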
Keywords/Search Tags:Non-negative Matrix Factorization, Multi-view Learning, Graph Dual Regularization