
Study Of Multi-view Learning Algorithms With Applications Based On Shared Subspace Learning

Posted on: 2018-04-21
Degree: Master
Type: Thesis
Country: China
Candidate: Y Tan
GTID: 2348330536988520
Subject: Communication and Information System

Abstract/Summary:
With the rapid development of information technology, the digital data being produced and collected increasingly exhibit multi-view characteristics, and the information provided by any single view is incomplete and insufficient. To interpret an object more accurately, multi-view learning has emerged. Its basic idea is to fuse the different features of an object effectively, via metric learning, co-training, latent-variable subspace learning and similar techniques, so as to obtain a discriminative description of the object. Multi-view learning therefore offers a new way to address the "semantic gap" problem in image retrieval, video enrichment, document classification and related fields. Given the characteristics of such data, namely high dimensionality, multiple modalities, redundancy and the unpredictability of the latent structure, studying how to make full use of the latent relationships among multiple features to discover high-level semantic features has both theoretical significance and application value.

Multi-view learning based on shared subspace learning makes full use of the internal connections of the data and seeks a unified low-dimensional subspace across multiple high-dimensional feature spaces without changing the intrinsic structure of the original data, so that the original data can be interpreted more effectively; at the same time, it alleviates the curse of dimensionality to a certain extent.

To address the "semantic gap" problem in image classification and related tasks, this thesis follows the main line of "multi-view learning methods based on shared subspace learning and their applications". The dimensionality of the multi-view data is reduced via shared subspace learning based on matrix factorization, and, depending on whether prior knowledge is available, corresponding multi-view learning methods are designed. The work can be summarized in the following two aspects.

1) To address the problem that the uneven contributions of different features and the inadequate use of their inner relations lead to poor learning performance, this thesis proposes a multi-view shared subspace clustering algorithm based on non-negative matrix factorization (NMF). The algorithm maps the multi-view data into a low-dimensional subspace and imposes constraints on the low-dimensional representations, which makes the clustering results robust. More specifically, it constrains each pair of views via a co-regularization function to mine the complementary information among the views; meanwhile, since the different views of the same data point should be assigned to the same class, it requires the per-view low-dimensional representations and the shared representation to be as correlated as possible.
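As an illustration only, the following Python/NumPy sketch shows the general flavour of NMF-based multi-view clustering with a shared representation. It uses a simplified consensus regularizer (pulling every view's coefficient matrix toward a single shared matrix) rather than the pairwise co-regularization described above; the function name, the weight lam, the multiplicative update scheme and the final k-means step are illustrative assumptions, not the thesis's actual formulation. The views are assumed to be non-negative feature matrices over the same samples.

```python
import numpy as np
from sklearn.cluster import KMeans

def multiview_nmf_clustering(views, n_components, n_clusters,
                             lam=0.1, n_iter=200, eps=1e-10, seed=0):
    """Sketch: multi-view NMF with a shared (consensus) representation.

    views : list of non-negative arrays, each of shape (d_v, n), i.e. one
            feature matrix per view whose columns are the same n samples.
    Returns cluster labels computed on the shared representation.
    """
    rng = np.random.default_rng(seed)
    n = views[0].shape[1]
    # Per-view basis W_v (d_v x k) and coefficient matrix H_v (k x n).
    Ws = [rng.random((X.shape[0], n_components)) for X in views]
    Hs = [rng.random((n_components, n)) for X in views]
    H_star = np.mean(Hs, axis=0)          # shared low-dimensional representation

    for _ in range(n_iter):
        for v, X in enumerate(views):
            W, H = Ws[v], Hs[v]
            # Standard multiplicative update for the basis.
            W *= (X @ H.T) / (W @ H @ H.T + eps)
            # Coefficient update with a pull toward the shared representation:
            # minimizes ||X - W H||_F^2 + lam * ||H - H_star||_F^2 over H >= 0.
            H *= (W.T @ X + lam * H_star) / (W.T @ W @ H + lam * H + eps)
        # The shared representation is taken as the average of all views.
        H_star = np.mean(Hs, axis=0)

    # Cluster the samples (columns of H_star) in the shared subspace.
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(H_star.T)
    return labels, H_star
```

For example, two toy views X1 (50 x 100) and X2 (30 x 100) of the same 100 samples could be clustered with multiview_nmf_clustering([X1, X2], n_components=5, n_clusters=3).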
2) For the cases where the data carry label information in practical applications and where information is lost severely during the learning process, this thesis proposes a multi-view shared subspace classification algorithm based on the patch alignment framework (PAF). When designing the classifier, the label features are fully exploited to guide the learning process, and the relations among multiple labels are projected onto a low-dimensional subspace so that the underlying geometric structure can be analyzed via Discriminative Locality Alignment (DLA). To reduce the learning loss, the algorithm measures the reconstruction error with the correntropy-induced metric and describes the relationship between each view's low-dimensional representation and the shared low-dimensional representation with a similarity matrix. Moreover, the hinge losses of all data points are used to measure the classification error, which yields a classification hyperplane with strong discriminative ability.
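A minimal Python sketch of the two loss terms named above, the correntropy-induced metric for the reconstruction error and the hinge loss for the classification error, is given below. The function names, the kernel width sigma, the trade-off weight alpha and the way the two terms are combined are illustrative assumptions; the PAF/DLA construction and the similarity matrix between per-view and shared representations are not shown.

```python
import numpy as np

def correntropy_induced_metric(X, X_hat, sigma=1.0):
    """Correntropy-induced metric (CIM) between data X and reconstruction X_hat.

    Unlike the squared Frobenius norm, the Gaussian kernel saturates for large
    residuals, so outliers and badly corrupted entries contribute a bounded error.
    """
    residual_sq = (X - X_hat) ** 2
    return np.sqrt(np.mean(1.0 - np.exp(-residual_sq / (2.0 * sigma ** 2))))

def hinge_loss(H_star, w, b, y):
    """Average hinge loss of a linear classifier f(h) = w^T h + b evaluated on
    the shared representations H_star (k x n) with labels y in {-1, +1}."""
    margins = y * (w @ H_star + b)
    return np.mean(np.maximum(0.0, 1.0 - margins))

def classification_objective(X, X_hat, H_star, w, b, y, alpha=1.0, sigma=1.0):
    """Toy objective combining the two terms: CIM reconstruction error plus a
    weighted hinge classification error (an illustrative combination only)."""
    return (correntropy_induced_metric(X, X_hat, sigma)
            + alpha * hinge_loss(H_star, w, b, y))
```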
Keywords/Search Tags:Shared subspace learning, Multi-view learning, Matrix factorization, Clustering, Label feature, Classification, Geometry structure