
Multi-View Shared Representation Optimized For Multi-Label Learning

Posted on: 2024-08-22
Degree: Master
Type: Thesis
Country: China
Candidate: Y T Xu
GTID: 2568307127963799
Subject: Software engineering

Abstract/Summary:
In multi-view multi-label learning, features typically describe instances from several different perspectives, and each instance is associated with multiple labels. Because the views describe the data from different perspectives, the view features have different dimensions, i.e. the view feature space is heterogeneous. Traditional multi-view multi-label learning either combines the views into a single view and borrows a multi-label algorithm, or runs a separate multi-label algorithm on each view; the former ignores the private information of each view and the latter ignores the information shared across views. Subspace learning addresses this problem by learning a latent shared subspace from the view spaces, and mining richer shared information can effectively improve multi-view multi-label classification accuracy. The main research of this thesis is as follows:

1) In multi-view multi-label learning, subspace learning is often used to handle the heterogeneity among views, and the subspace is generally extracted by dimensionality reduction. The mapping from the subspace to the label space is therefore a mapping from a low-dimensional space to a high-dimensional one, which is prone to cross-dimensional problems. We instead turn prediction into an equidimensional mapping from one low-dimensional space to another. First, on the basis of the shared subspace learned from the multi-view multi-label data, a latent semantic matrix and a coefficient matrix are extracted from the label space. Second, the latent semantic space is constrained by both the shared subspace and the original label space. Finally, the shared subspace and the latent semantic matrix are fed into the MLRKELM classifier for learning. Based on this, the thesis proposes the Latent Semantic Learning based on Shared Subspace method (LSLSS); a minimal sketch of the pipeline is given below.
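The following sketch illustrates an LSLSS-style pipeline as described above; it is not the thesis implementation. A shared subspace S is learned from the views by alternating ridge-regularised least squares, the label matrix is factorised into a latent semantic matrix V and a coefficient matrix C, and plain ridge regression stands in for the MLRKELM classifier. All dimensions, regularisation weights, and the toy data are illustrative assumptions.

```python
# Minimal sketch of an LSLSS-style pipeline (illustrative assumptions only).
import numpy as np

rng = np.random.default_rng(0)

def learn_shared_subspace(views, k, n_iter=50, lam=1e-2):
    """Alternately update per-view projections P_v and a shared subspace S
    so that X_v @ P_v approximates S for every view."""
    n = views[0].shape[0]
    S = rng.standard_normal((n, k))
    for _ in range(n_iter):
        Ps = []
        for X in views:
            # P_v = argmin ||X P - S||^2 + lam ||P||^2  (ridge solution)
            P = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ S)
            Ps.append(P)
        # shared subspace = average of the view-specific embeddings
        S = np.mean([X @ P for X, P in zip(views, Ps)], axis=0)
    return S, Ps

def factorize_labels(Y, d, n_iter=50, lam=1e-2):
    """Y ~= V @ C: V is a low-dimensional latent semantic matrix, C a
    coefficient matrix, so prediction becomes an equidimensional mapping
    from the shared subspace to V rather than to the full label space."""
    n, _ = Y.shape
    V = rng.standard_normal((n, d))
    for _ in range(n_iter):
        C = np.linalg.solve(V.T @ V + lam * np.eye(d), V.T @ Y)
        V = np.linalg.solve(C @ C.T + lam * np.eye(d), C @ Y.T).T
    return V, C

# toy data: two views, 100 instances, 5 labels
X1, X2 = rng.standard_normal((100, 30)), rng.standard_normal((100, 20))
Y = (rng.random((100, 5)) > 0.5).astype(float)

S, _ = learn_shared_subspace([X1, X2], k=10)
V, C = factorize_labels(Y, d=4)

# equidimensional map S -> V (ridge regression stands in for the classifier)
W = np.linalg.solve(S.T @ S + 1e-2 * np.eye(S.shape[1]), S.T @ V)
scores = S @ W @ C                      # map back to the original label space
preds = (scores > 0.5).astype(int)
print(preds[:3])
```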
2) In multi-label learning, label-specific feature (LSF) learning assumes that each label is determined by inherent characteristics of its own. In multi-view multi-label learning, however, the heterogeneity of the feature space persists. Existing algorithms extract LSF for each view separately, so the label-specific features of the views are communicated inadequately and classification accuracy suffers. Subspace learning can extract a shared subspace of the views as a feature representation space that substitutes for the original view feature space, and the richer the information in the shared subspace, the better it represents the original view features. First, label groups are obtained via spectral clustering. The correlation between the label groups and the features is fully considered, and the set of view features relevant to each label group is obtained. Second, a feature representation space (global shared subspace) and local subspaces (local shared subspaces) are extracted from the original feature space and from the group-specific feature sets, respectively. Finally, the local subspaces are combined with the feature representation space for LSF learning. Based on the above analysis, we propose GLSSL (GLocal Shared Subspace Learning), a multi-view multi-label label-specific feature learning algorithm; a sketch of the idea is given after the concluding summary below.

In summary, this thesis addresses the cross-dimensional problem that can arise when learning multi-view subspaces by learning the latent semantics of the labels: the mapping from the shared subspace to the original label space is replaced by an equidimensional mapping to the latent semantic space of the labels, which improves multi-view classification performance. Considering, however, that the shared information captured by the latent semantics is not complete enough, we then discard the concept of latent semantics and obtain richer shared information by extracting local shared subspaces from the feature groups that are most relevant to the label groups.
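As referenced above, the following is a rough sketch of the GLSSL idea under stated assumptions: scikit-learn's SpectralClustering on a label co-occurrence affinity stands in for the label-grouping step, PCA stands in for the shared-subspace learner, and the correlation-based feature selection, subspace sizes, and toy data are illustrative choices rather than the thesis method.

```python
# Rough sketch of the GLSSL idea: label groups -> group-specific feature
# subsets -> global and local shared subspaces (illustrative assumptions only).
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X1, X2 = rng.standard_normal((100, 30)), rng.standard_normal((100, 20))
Y = (rng.random((100, 6)) > 0.5).astype(float)
X = np.hstack([X1, X2])                      # concatenated view features

# 1) Group the labels by spectral clustering on a co-occurrence similarity.
sim = np.nan_to_num((np.corrcoef(Y.T) + 1) / 2)   # shift to [0, 1] affinity
groups = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(sim)

# 2) Global shared subspace from all features (PCA as a stand-in learner).
S_global = PCA(n_components=10).fit_transform(X)

# 3) For each label group, keep the features most correlated with the group
#    and extract a local shared subspace from that feature subset.
reps = {}
for g in np.unique(groups):
    y_g = Y[:, groups == g].mean(axis=1)              # group "prototype" label
    corr = np.abs([np.corrcoef(X[:, j], y_g)[0, 1] for j in range(X.shape[1])])
    keep = np.argsort(np.nan_to_num(corr))[-15:]      # top-correlated features
    S_local = PCA(n_components=5).fit_transform(X[:, keep])
    # 4) Complement the local subspace with the global representation,
    #    giving the label-specific representation for this label group.
    reps[g] = np.hstack([S_global, S_local])

print({g: r.shape for g, r in reps.items()})
```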
Keywords/Search Tags:Multi-view learning, Multi-label learning, Latent semantic learning, Label-specific feature learning, Shared representation learning