
Research On Robust Linear Subspace Learning Algorithm And Framework

Posted on: 2016-05-30
Degree: Doctor
Type: Dissertation
Country: China
Candidate: F J Zhong
Full Text: PDF
GTID: 1108330485988602
Subject: Computer application technology

Abstract/Summary:
To extract useful information from high-dimensional data, linear subspace learning methods are often used to reduce the dimensionality of such data. However, many current linear subspace learning algorithms are not robust to noise, outliers, or other disturbances, and thus lack reliability in practical application systems. This dissertation therefore focuses on improving the robustness of traditional linear subspace learning algorithms. It first analyzes the theoretical basis of linear subspace learning, then identifies the causes that impair robustness, and finally improves several linear subspace methods. Moreover, after reviewing related methods, the dissertation presents two general frameworks that provide a foundation for future research. The main work and innovations of this dissertation are summarized as follows:

(1) To improve robustness against outliers beyond LPP-L1, Chapter 2 presents a locality preserving projection algorithm based on the maximum correntropy criterion (MCC), named LPP-MCC. LPP-MCC adopts correntropy to measure the similarity between data points and formulates its objective function under MCC; the resulting problem is solved efficiently via an iterative half-quadratic optimization procedure. LPP-MCC has three important advantages: 1) it is more robust to outliers than conventional LPP formulations based on the L2-norm or the L1-norm; 2) its optimization procedure reduces to a simple, standard method; 3) it avoids the small sample size problem. Experimental results on both synthetic and real-world databases demonstrate that LPP-MCC is more robust against outliers than LPP-L2 and LPP-L1.

(2) Although LDA-R1 significantly improves the robustness of LDA-L2 against outliers, it takes too long to converge in a high-dimensional input space.
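The half-quadratic optimization behind the MCC objective of Chapter 2 can be illustrated on a toy robust-fitting problem. This is a minimal sketch assuming the standard Gaussian-kernel definition of correntropy; the slope-only model, the bandwidth `sigma`, and the fixed iteration count are illustrative choices, not the dissertation's LPP-MCC implementation:

```python
import numpy as np

def correntropy(a, b, sigma=1.0):
    # Sample estimate of correntropy: mean Gaussian kernel of the residuals
    return np.mean(np.exp(-(a - b) ** 2 / (2 * sigma ** 2)))

def mcc_fit_slope(x, y, sigma=1.0, n_iter=20):
    """Fit y ~ w*x by maximizing correntropy of the residuals.
    Half-quadratic trick: fix auxiliary weights p_i from the current
    residuals, then solve a weighted least squares problem; repeat."""
    w = 1.0
    for _ in range(n_iter):
        r = y - w * x
        p = np.exp(-r ** 2 / (2 * sigma ** 2))     # auxiliary weights
        w = np.sum(p * x * y) / np.sum(p * x * x)  # weighted LS update
    return w

# Clean line y = 2x with one gross outlier
x = np.arange(1.0, 11.0)
y = 2.0 * x
y[-1] = 100.0                          # outlier
w_mcc = mcc_fit_slope(x, y)            # stays near 2
w_ls = np.sum(x * y) / np.sum(x * x)   # ordinary LS, pulled off by the outlier
```

The outlier's residual gets an exponentially small weight, so each half-quadratic step is an ordinary weighted least squares over the inliers, which is why the procedure is "essentially a simple standard optimization method."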
Motivated by PCA-L1 and CSP-L1, Chapter 3 presents a linear discriminant analysis algorithm based on L1-norm maximization, named LDA-L1. LDA-L1 is a simple but effective robust variant of LDA that learns a set of locally optimal projection vectors by maximizing the ratio of the L1-norm-based between-class distance to the L1-norm-based within-class distance. Because the globally optimal solution of LDA-L1 is very difficult to find directly, a greedy search scheme based on an iterative procedure is provided to obtain an approximate solution. Experimental results on artificial datasets, standard classification datasets, and three high-dimensional image databases demonstrate that LDA-L1 is more robust against outliers than LDA-L2 and LDA-R1, while having a lower time cost than LDA-R1.

(3) As a linear dimensionality reduction technique based on manifold learning, the conventional discriminant locality preserving projection (DLPP-L2) is highly sensitive to outliers because its objective function relies on an L2-norm distance criterion. Motivated by methods based on L1-norm maximization, Chapter 4 proposes a robust DLPP variant based on L1-norm maximization (DLPP-L1), which learns a set of locally optimal projection vectors by maximizing the ratio of the L1-norm-based locality preserving between-class dispersion to the L1-norm-based locality preserving within-class dispersion. The solving procedure of DLPP-L1 is proven feasible and overcomes the small sample size problem. Experimental results on artificial datasets, the Binary Alphadigits dataset, the FERET face dataset, and the PolyU palmprint dataset demonstrate that DLPP-L1 is more robust than L2-norm-based DLPP.

(4) After a thorough discussion of several discriminant analysis methods, Chapter 5 proposes a general framework for discriminant analysis based on similarity measures.
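The greedy L1-norm maximization underlying LDA-L1 and DLPP-L1 builds on the sign-flipping iteration popularized by PCA-L1. The sketch below shows only that base iteration, finding one unit direction w that locally maximizes the L1 dispersion sum_i |w^T x_i|; the ratio objective and class structure of Chapters 3 and 4 are deliberately not reproduced here:

```python
import numpy as np

def pca_l1_direction(X, n_iter=100, seed=0):
    """Greedy L1-norm maximization (PCA-L1-style iteration).
    X has one sample per row, assumed centered. At each step, fix the
    signs of the current projections, then point w at the signed sum
    of the samples; the objective is non-decreasing under this update."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        s = np.sign(X @ w)
        s[s == 0] = 1.0                # convention: break ties upward
        w_new = X.T @ s
        w_new /= np.linalg.norm(w_new)
        if np.allclose(w_new, w):      # fixed point reached
            break
        w = w_new
    return w

# Toy data whose L1 dispersion is dominated by the first coordinate
x1 = np.linspace(-5.0, 5.0, 50)
X = np.column_stack([x1, 0.05 * np.cos(x1)])
w = pca_l1_direction(X)                # aligns with the first axis
```

Because each update only needs signs and a weighted sum, the iteration avoids the expensive eigendecompositions that make LDA-R1 slow in high-dimensional spaces, which is the efficiency claim made for LDA-L1 above.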
The framework shows that a discriminant analysis method consists of four aspects: the similarity measure metric; the data representation; the way the similarity is computed; and the formulation of the objective problem and its solution. This framework can not only describe many existing discriminant analysis methods but also be used to design new robust discriminant analysis algorithms. Accordingly, Chapter 5 presents a robust discriminant analysis algorithm based on the L2-norm and the L1-norm, named LDA-L2&L1, which adopts an L2-norm-based distance as the between-class similarity measure and an L1-norm-based distance as the within-class similarity measure. Experimental results demonstrate that LDA-L2&L1 is effective, and thereby indirectly confirm the effectiveness of the proposed general framework.

(5) To enhance the reliability of subspace learning methods when processing image data, Chapter 6 proposes a general framework for subspace learning from local texture patterns in the face recognition field, which combines texture description and subspace learning into an effective integrated scheme. Under the guidance of this framework, Chapter 6 presents a face recognition scheme based on robust subspace learning from ELDP. To gain robustness, the scheme adopts a robust texture operator named ELDP, an enhanced version of LDP. Experimental results on three face databases demonstrate that ELDP is more robust to slight noise than LDP while maintaining discriminability, and results on the CAS-PEAL-R1 face database indicate that the proposed scheme is effective and that the proposed general framework has practical reference value.
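ELDP builds on the Local Directional Pattern (LDP), which encodes each pixel by the directions of its strongest Kirsch edge responses. Below is a hedged sketch of the base LDP code for a single 3x3 patch; the `k=3` bit budget follows the common LDP convention, and the dissertation's ELDP enhancement is not reproduced here:

```python
import numpy as np

def kirsch_masks():
    """The 8 Kirsch compass masks, generated by rotating three 5s
    around the 3x3 border (center stays 0; each mask sums to zero)."""
    border = [(0, 0), (0, 1), (0, 2), (1, 2),
              (2, 2), (2, 1), (2, 0), (1, 0)]   # clockwise border walk
    vals = np.array([5, 5, 5, -3, -3, -3, -3, -3], dtype=float)
    masks = []
    for k in range(8):
        m = np.zeros((3, 3))
        for (i, j), v in zip(border, np.roll(vals, k)):
            m[i, j] = v
        masks.append(m)
    return masks

def ldp_code(patch, k=3):
    """LDP code of one 3x3 patch: convolve with all 8 Kirsch masks,
    keep the k strongest absolute responses, and set those bits."""
    resp = np.array([abs(np.sum(m * patch)) for m in kirsch_masks()])
    code = 0
    for b in np.argsort(resp)[-k:]:   # indices of the k top responses
        code |= 1 << int(b)
    return code

# Example: a strong vertical edge activates the east-facing masks
patch = np.array([[0., 0., 1.],
                  [0., 0., 1.],
                  [0., 0., 1.]])
code = ldp_code(patch)
```

Encoding the *ranking* of directional responses rather than raw intensities is what gives LDP-family operators their tolerance to slight noise: small perturbations rarely reorder the top-k responses, so the code, and hence the subspace learned from its histograms, stays stable.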
Keywords/Search Tags:linear subspace learning, robust, locality preserving, discriminant analysis, L1-norm, correntropy, similarity measure, local texture patterns