
Research On Linear Discriminant Analysis With Image Spatial Information

Posted on: 2015-03-21    Degree: Master    Type: Thesis
Country: China    Candidate: L L Niu    Full Text: PDF
GTID: 2298330422980988    Subject: Computer Science and Technology
Abstract/Summary:
Dimensionality reduction is a core component of a pattern recognition system; its essence is to map data from the original high-dimensional space into a low-dimensional space that reflects the structure relevant to classification. According to the form of the mapping, dimensionality reduction algorithms can be divided into linear and nonlinear methods. Among the linear methods, principal component analysis (PCA) and linear discriminant analysis (LDA) are the two best known and have been widely used for image dimensionality reduction (a minimal LDA sketch is given after this abstract). However, PCA and LDA usually treat an image as a high-dimensional vector, which destroys the original spatial structure of the image; as a result, this prior knowledge cannot be exploited and their performance cannot be improved further. Research on embedding spatial information into dimensionality reduction has therefore attracted much attention recently.

In this thesis, we take LDA as the carrier for a series of studies and obtain the following results:

1. We summarize the two existing types of methods for integrating spatial structure information into linear dimensionality reduction: spatially smooth subspace learning and spatial smoothing based on the Euclidean distance metric. The former exploits spatial information by regularizing the optimization objective, while the latter spatially smooths the Euclidean distance itself (see the IMED sketch below). Combining LDA with each of them yields two corresponding methods, spatially smooth linear discriminant analysis (SLDA) and image Euclidean distance discriminant analysis (IMEDA). We explore the relationship between these two kinds of LDA: we prove theoretically that SLDA is a special case of IMEDA when the mean of the data set is zero, and we compare SLDA and IMEDA experimentally on the Yale, AR, and FERET face datasets, analyzing how the parameters influence the performance of the algorithms.

2. The projection matrices of SLDA and IMEDA are obtained by optimizing the ratio of the average within-class and average between-class scatter. Inspired by WLDA, we improve both methods by replacing this objective: we maximize the minimal between-class scatter (separation) while maintaining an upper bound on the average within-class scatter (compactness), as formulated in the sketch below. The resulting algorithms, WSLDA and WIMEDA, make the distances between different classes as large as possible, which benefits classification. Experiments on the Yale, AR, and FERET face datasets validate that our approaches perform better.

3. We propose a general learning framework based on supervised dimensionality reduction. So far we have embedded LDA, locality preserving projection (LPP), neighborhood preserving embedding (NPE), and sparsity preserving projection (SPP) into this framework. Further research on the framework includes embedding more algorithms, deriving novel dimensionality reduction approaches, optimizing the existing algorithms, embedding spatial structure information, and introducing cost-sensitive learning.
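The baseline the abstract builds on is classical LDA applied to vectorized images. The following is a minimal Python sketch of that baseline, not the thesis' code; the function name, the small ridge term added to keep the within-class scatter invertible, and the eigen-solver choice are our assumptions.

```python
import numpy as np

def lda(X, y, k):
    """Classical LDA on vectorized images.

    X: (n_samples, d) array, one flattened image per row.
    y: (n_samples,) integer class labels.
    Returns a (d, k) projection matrix W.
    """
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    d = X.shape[1]
    mean_all = X.mean(axis=0)
    Sw = np.zeros((d, d))  # within-class scatter
    Sb = np.zeros((d, d))  # between-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean_all)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    # Solve Sb w = lambda Sw w; the ridge keeps Sw invertible when the
    # pixel count d exceeds the number of training images.
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw + 1e-6 * np.eye(d), Sb))
    order = np.argsort(-evals.real)
    return evecs[:, order[:k]].real
```

Projecting with `X @ lda(X, y, k)` gives the low-dimensional features; it is exactly this flattening of images into rows of X that discards the spatial structure the thesis aims to restore.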
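Contribution 1's distance-based branch relies on the image Euclidean distance (IMED), which weights pixel differences by how close the pixels sit on the image grid. The abstract gives no formulas, so the sketch below uses the common Gaussian-weighted form of IMED; the kernel, the default sigma, the G^{1/2} smoothing trick, and the helper names imed_metric and imed_smooth are all our assumptions.

```python
import numpy as np

def imed_metric(h, w, sigma=1.0):
    """Weighting matrix G for h-by-w images under the Gaussian form of
    IMED: G[i, j] decays with the grid distance between pixels i and j,
    so d(x, y)^2 = (x - y)^T G (x - y) tolerates small spatial shifts."""
    ys, xs = np.mgrid[0:h, 0:w]
    P = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    sq = ((P[:, None, :] - P[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)

def imed_smooth(X, h, w, sigma=1.0):
    """Multiply each vectorized image by G^{1/2}; after this transform the
    plain Euclidean distance equals IMED, so IMEDA can be realized as
    ordinary LDA on the smoothed rows."""
    G = imed_metric(h, w, sigma)
    evals, evecs = np.linalg.eigh(G)
    G_half = evecs @ np.diag(np.sqrt(np.clip(evals, 0.0, None))) @ evecs.T
    return X @ G_half
```

Since G is d-by-d for an image with d pixels, this direct construction only suits moderately sized images.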
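Contribution 2 describes a max-min objective that can be written as a constrained eigenvalue optimization. The formulation below is reconstructed from the abstract's wording and the WLDA idea it cites; the pairwise scatter notation and the bound kappa are our notation, not necessarily the thesis' exact program.

```latex
\max_{W}\ \min_{1 \le i < j \le c}\ \operatorname{tr}\!\left(W^{\top} S_b^{(ij)} W\right)
\quad \text{s.t.} \quad \operatorname{tr}\!\left(W^{\top} \bar{S}_w W\right) \le \kappa,
\qquad S_b^{(ij)} = (\mu_i - \mu_j)(\mu_i - \mu_j)^{\top},
```

where \mu_i is the mean of class i, \bar{S}_w is the average within-class scatter, and \kappa bounds the compactness. Maximizing the worst-separated pair, rather than the average separation, is what lets WSLDA and WIMEDA push every pair of classes apart.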
Keywords/Search Tags:Linear Discriminant Analysis, Dimensionality Reduction, Spatial Structure Information, Spatially Smooth, Average Scatter, Eigenvalue Optimization