
Research On Multi-view Learning Algorithms And Applications

Posted on: 2018-11-03    Degree: Doctor    Type: Dissertation
Country: China    Candidate: Y Q Wang    Full Text: PDF
GTID: 1368330569498442    Subject: Computer Science and Technology
Abstract/Summary:
Different views of an object describe different aspects of it. How to use the complementary and compatible information in these views to obtain the most profound understanding of the object is a hot topic in recent academic research. Such information-fusion methods are called multi-view learning (MVL) algorithms. Compared with single-view learning algorithms, MVL algorithms are more robust: the performance of a single-view learner declines sharply when the data are noisy, whereas MVL algorithms can mitigate this limitation. MVL algorithms have been widely applied in many areas, such as computer vision, bioinformatics, natural language processing, and medical image analysis, so research on MVL has important application value. However, how to design algorithms that select views and their corresponding weights remains an open question. This dissertation introduces several novel MVL algorithms. Its main work and innovations are summarized as follows:

(1) An efficient and effective convolutional auto-encoder extreme learning machine network for 3D feature learning is proposed. This dissertation proposes a fast 3D feature learning method, the convolutional auto-encoder extreme learning machine (CAE-ELM), which combines the advantages of convolutional neural networks, autoencoders, and the extreme learning machine (ELM). In addition, we define a novel architecture based on CAE-ELM that accepts two types of 3D shape representation, voxel data and signed-distance-field data, as inputs to extract the global and local features of 3D shapes. By integrating the advantages of these two views, our approach achieves state-of-the-art classification accuracy.

(2) A multiple kernel learning algorithm using hybrid kernel alignment maximization is proposed. To overcome the limitation that global kernel alignment suppresses the intra-class variation of samples, this dissertation proposes local kernel alignment, which preserves that variation. Nevertheless, local kernel alignment has its own disadvantages. To overcome the drawbacks of both types of kernel alignment, the dissertation designs a hybrid kernel alignment criterion and proposes a corresponding alternating optimization algorithm. Extensive experiments show that the hybrid kernel alignment achieves the best classification performance.

(3) A multiple kernel clustering framework with improved kernels is proposed. Existing multiple kernel clustering (MKC) algorithms depend heavily on the quality of predefined base kernels, which cannot be guaranteed in practical applications and may adversely affect clustering performance. To address this issue, the dissertation proposes a simple yet effective framework that adaptively improves the quality of the base kernels. Under this framework, it instantiates three MKC algorithms based on the widely used multiple kernel k-means clustering (MKKM), MKKM with matrix-induced regularization (MKKM-MR), and co-regularized multi-view spectral clustering (CRSC). It then designs corresponding algorithms with proved convergence to solve the resulting optimization problems. To the best of our knowledge, this framework is the first in the literature to bridge kernel adaptation and the clustering procedure, and it is readily extendable.

(4) An approximate large-scale multiple kernel k-means algorithm is proposed. Existing MKC algorithms cannot be applied to large-scale clustering tasks because (i) computing the base kernels is expensive, and (ii) the kernel matrices do not fit in memory. We propose an approximate algorithm that overcomes these issues and makes MKC applicable to large-scale settings. Specifically, our algorithm trains a deep neural network to regress the indicating matrix generated by an MKC algorithm on a small subset, uses the trained network to obtain an approximate indicating matrix for the whole data set, and finally performs k-means on the network's output. In this way, our algorithm avoids computing the full kernel matrices by mapping features directly to the indicating matrix, which dramatically decreases the memory requirement.
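The ELM component of contribution (1) admits a compact illustration: an ELM fixes a random hidden layer and solves for the output weights in closed form via a pseudoinverse, which is what makes CAE-ELM fast to train. The sketch below is not the dissertation's CAE-ELM architecture; it is a minimal generic ELM on invented toy data (two Gaussian blobs), with the hidden size and tanh activation chosen arbitrarily for illustration.

```python
import numpy as np

def elm_train(X, T, n_hidden=50, seed=0):
    """Extreme learning machine: random hidden layer, closed-form output weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights, never trained
    b = rng.normal(size=n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                        # random feature map
    beta = np.linalg.pinv(H) @ T                  # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# toy binary task: two well-separated blobs, one-hot targets
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 0.3, (50, 3)), rng.normal(1, 0.3, (50, 3))])
T = np.vstack([np.tile([1.0, 0.0], (50, 1)), np.tile([0.0, 1.0], (50, 1))])

W, b, beta = elm_train(X, T)
pred = elm_predict(X, W, b, beta).argmax(1)
acc = (pred == T.argmax(1)).mean()
```

Because only `beta` is learned, training reduces to one linear solve, which is the speed advantage CAE-ELM inherits.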
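The pipeline of contribution (4) can be sketched end to end: compute an indicating matrix on a small subset, fit a regressor from raw features to that matrix, apply it to all samples, and run k-means on the result. Everything below is a simplified stand-in on invented toy data: a single RBF kernel replaces the multiple-kernel step, the subset's indicating matrix is taken as the top eigenvectors of that kernel, and a closed-form linear least-squares fit stands in for the deep neural network.

```python
import numpy as np

rng = np.random.default_rng(1)
# toy data: two Gaussian blobs, 200 samples in 4 dimensions
X = np.vstack([rng.normal(-2, 1, (100, 4)), rng.normal(2, 1, (100, 4))])
k = 2

# step 1: indicating matrix on a small subset (top-k eigenvectors of an RBF kernel)
idx = rng.choice(len(X), 40, replace=False)
Xs = X[idx]
K = np.exp(-0.5 * np.sum((Xs[:, None] - Xs[None]) ** 2, -1) / Xs.shape[1])
_, vecs = np.linalg.eigh(K)
H_sub = vecs[:, -k:]

# step 2: regress H_sub from raw features (linear stand-in for the deep network)
Wmap = np.linalg.lstsq(Xs, H_sub, rcond=None)[0]

# step 3: approximate indicating matrix for ALL samples -- no full kernel needed
H = X @ Wmap

# step 4: plain k-means on the approximate indicating matrix
centers = H[rng.choice(len(H), k, replace=False)].copy()
for _ in range(50):
    labels = np.argmin(((H[:, None] - centers[None]) ** 2).sum(-1), 1)
    for j in range(k):
        pts = H[labels == j]
        if len(pts):                      # keep old center if a cluster empties
            centers[j] = pts.mean(0)
```

The memory saving is visible in step 3: the full n-by-n kernel is never formed; only the 40-by-40 subset kernel and the n-by-k matrix `H` are materialized.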
Keywords/Search Tags: Multi-view, Multi-kernel Learning, Clustering, Kernel Alignment, Large-scale, Extreme Learning Machine, Deep Neural Network