
Compact Representation Learning For Multi-view Data

Posted on: 2020-07-19
Degree: Master
Type: Thesis
Country: China
Candidate: Y Q Liu
Full Text: PDF
GTID: 2518306518463204
Subject: Computer technology
Abstract/Summary:
With the development of information technology, real-world data often have multiple views; that is, the same object is described by multiple sources or modalities. For example, an object can be represented by visual images or by textual descriptions, and a natural image can be described jointly by color, shape, and texture features extracted separately. Since machine learning tasks rely heavily on the data representation, representation learning has been developed to reduce the difficulty and arbitrariness of manual feature design and to learn efficient representations automatically from data. In both supervised and unsupervised settings, learning compact representations of feature-rich multi-view data is an important research direction for improving the performance of data analysis.

This thesis focuses on compact representation learning for multi-view data. For both supervised and unsupervised tasks, we study how to exploit and balance two characteristics, consistency and complementarity, when fusing multiple features, as follows:

(1) Multi-metric-based multi-view representation learning. In the supervised setting, this thesis proposes a multi-view feature fusion method based on metric learning. The method uses an improved linear discriminant analysis to learn a view-specific metric function for each view, preserving its uniqueness. At the same time, the Hilbert-Schmidt Independence Criterion (HSIC) is used to maximize the correlation between different views, enforcing consistency among the projected representations in a reproducing kernel Hilbert space (RKHS). Experimental results show that the proposed method achieves better classification performance.

(2) Nested-autoencoder-based multi-view representation learning. In the unsupervised setting, this thesis proposes a model of nested autoencoder networks. On the one hand, the inner autoencoders extract a separate representation for each view. On the other hand, instead of mapping the views into a common subspace, the outer autoencoders reconstruct each view from the same input, i.e., a shared latent representation, which flexibly balances the consistency and complementarity of the multiple views. Experiments verify that the learned representation performs well in both classification and clustering tasks.
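The HSIC term used in method (1) to align views can be illustrated with its standard empirical estimator. The sketch below is a minimal, self-contained version (the RBF kernel and its bandwidth are illustrative choices; the thesis's improved-LDA metric learning is not reproduced here):

```python
import numpy as np

def rbf_kernel(X, sigma=1.0):
    # Gaussian kernel from pairwise squared Euclidean distances
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma ** 2))

def hsic(X, Y, sigma=1.0):
    """Empirical HSIC between two views of the same n samples:
    HSIC = tr(K H L H) / (n - 1)^2, with H the centering matrix."""
    n = X.shape[0]
    K = rbf_kernel(X, sigma)
    L = rbf_kernel(Y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```

Maximizing this quantity over the per-view projections pushes the projected representations to be statistically dependent in the RKHS, which is how consistency across views is encouraged.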
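The nesting in method (2) can be sketched structurally: inner autoencoders give each view its own code, and an outer autoencoder maps the concatenated codes to one shared latent that is decoded back to every view. The sketch below uses untrained random linear weights purely to show the data flow; all dimensions and names are illustrative, and the actual model uses trained nonlinear networks:

```python
import numpy as np

rng = np.random.default_rng(0)

def linear_ae(dim_in, dim_hid):
    # Random-weight linear encoder/decoder pair (untrained sketch)
    W_enc = rng.normal(scale=0.1, size=(dim_in, dim_hid))
    W_dec = rng.normal(scale=0.1, size=(dim_hid, dim_in))
    return W_enc, W_dec

# Two views of n samples with different dimensionality
n, d1, d2, h, z = 8, 10, 6, 4, 3
X1, X2 = rng.normal(size=(n, d1)), rng.normal(size=(n, d2))

# Inner autoencoders: one per view, keeping view-specific structure
enc1, dec1 = linear_ae(d1, h)
enc2, dec2 = linear_ae(d2, h)
H1, H2 = X1 @ enc1, X2 @ enc2                 # view-specific codes

# Outer autoencoder: one shared latent Z reconstructs BOTH inner codes,
# balancing consistency (same Z) and complementarity (separate decoders)
enc_out, _ = linear_ae(2 * h, z)
Z = np.concatenate([H1, H2], axis=1) @ enc_out
dec_out1 = rng.normal(scale=0.1, size=(z, h))
dec_out2 = rng.normal(scale=0.1, size=(z, h))
H1_hat, H2_hat = Z @ dec_out1, Z @ dec_out2   # reconstructed codes

# Each original view is recovered through its own inner decoder
X1_hat, X2_hat = H1_hat @ dec1, H2_hat @ dec2
```

Training would minimize the reconstruction errors of both the inner codes and the original views; the shared latent Z is then used as the compact multi-view representation for downstream classification or clustering.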
Keywords/Search Tags:Multi-view Representation Learning, Metric Learning, Autoencoder in Autoencoder, Compact Representation