With the rapid development of information acquisition and computer storage capacity, the amount of data is increasing quickly and its internal structure is becoming more complex. How to extract useful information from huge and complex data, and so create value for the production and life of today's society, is the main challenge of the era of big data. Clustering analysis is a basic tool in data mining; its main purpose is to divide a dataset into several clusters according to the similarity between samples. For the clustering of high-dimensional multi-view datasets, subspace-learning-based methods have attracted wide attention due to their good clustering performance. In recent years, many multi-view clustering methods based on subspace learning have been proposed, but these methods still have some disadvantages. This article employs adaptive latent representation, manifold regularization, block-diagonal regularization, and diversity constraints to address these disadvantages. Two multi-view clustering methods based on subspace learning are proposed, which can be summarized as follows:

Firstly, traditional multi-view clustering methods separate the processes of adaptive consistent representation learning and similarity matrix learning, so an accurate similarity matrix for a multi-view dataset cannot be obtained. Adaptive latent representation for multi-view subspace clustering is proposed to solve this problem. As a joint learning framework, this method fuses the two parts into a common objective function, which yields more compact clustering results. Manifold regularization is utilized so that the latent representation preserves the local geometric structure of each original view's data. The experimental results show that this method can effectively improve clustering accuracy.

Secondly, existing multi-view subspace clustering methods use a specific type of norm to constrain the noise or outlier matrix, which is not robust. Multi-view subspace clustering via K-block diagonal decomposition and HSIC is proposed to solve this problem. Assuming that the subspaces of the dataset are independent, the ideal adjacency matrix corresponding to each view should have a block-diagonal structure. The proposed method utilizes block-diagonal regularization to keep the subspaces independent and to reduce the influence of noise or outlier data. At the same time, it uses the Hilbert-Schmidt independence criterion (HSIC) to measure the difference between the views' self-expressiveness coefficient matrices, which can mine the complementary information of different views. The overall self-expressiveness coefficient matrices can then express the similarity between samples fully and accurately. Experimental results demonstrate the effectiveness of the proposed method.

The two methods proposed in this paper start from learning the consistency information and the complementary information of multiple views, respectively, and both use the theory of subspace learning to obtain the clustering results of the dataset. The experimental results show that the two proposed methods improve clustering performance effectively.
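Both proposed methods build on the self-expressiveness model of subspace clustering, in which each sample is reconstructed from the other samples and the resulting coefficient matrix induces an affinity for spectral clustering. The following minimal Python sketch illustrates only this underlying pipeline; the function name, the ridge-style regularizer, and the parameter `lam` are illustrative assumptions and do not reproduce the proposed objectives, which additionally include manifold, block-diagonal, and HSIC terms.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def subspace_cluster(X, n_clusters, lam=0.1):
    """Minimal self-expressiveness sketch: find Z with X ~= X Z.

    X: d x n data matrix (columns are samples). A closed-form
    ridge-regularized solution is used purely for illustration.
    """
    n = X.shape[1]
    G = X.T @ X
    # Z = (X^T X + lam I)^{-1} X^T X
    Z = np.linalg.solve(G + lam * np.eye(n), G)
    # Symmetric affinity matrix built from |Z|, then spectral clustering.
    W = 0.5 * (np.abs(Z) + np.abs(Z.T))
    labels = SpectralClustering(n_clusters=n_clusters,
                                affinity='precomputed').fit_predict(W)
    return labels
```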
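For the second method, the HSIC-based diversity term compares the coefficient matrices of different views. A common empirical form is HSIC(Z1, Z2) = (n-1)^{-2} tr(K1 H K2 H), where K1, K2 are kernel matrices over the samples and H is the centering matrix. The sketch below assumes linear (inner-product) kernels and the helper name `hsic`; the thesis may adopt a different kernel choice or normalization.

```python
import numpy as np

def hsic(Z1, Z2):
    """Empirical HSIC between two views' coefficient matrices.

    Z1, Z2: coefficient matrices whose columns correspond to the same
    n samples. Linear kernels over the columns are assumed here.
    """
    n = Z1.shape[1]
    K1 = Z1.T @ Z1                         # kernel matrix for view 1
    K2 = Z2.T @ Z2                         # kernel matrix for view 2
    H = np.eye(n) - np.ones((n, n)) / n    # centering matrix
    return np.trace(K1 @ H @ K2 @ H) / (n - 1) ** 2
```

Penalizing this quantity across view pairs discourages the views' representations from being redundant, which is one way the complementary information of different views can be exploited.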