Research On Discriminant Analysis Based On Worst-case Separation And Average-case Compactness

Posted on: 2015-07-14
Degree: Master
Type: Thesis
Country: China
Candidate: L L Yang
GTID: 2298330422480988
Subject: Computer Science and Technology

Abstract/Summary:
Linear discriminant analysis (LDA) is one of the best-known subspace representation algorithms and has been widely used in many fields of pattern recognition. The goal of LDA is to find the subspace in which the classes are separated as much as possible; its criterion is to maximize the ratio of the between-class scatter to the within-class scatter. To make LDA suitable for more complex situations, many extensions have been developed, e.g. regularized linear discriminant analysis (RLDA), nonparametric discriminant analysis (NDA), generalized discriminant analysis (GDA), and worst-case linear discriminant analysis (WLDA). This thesis focuses on the definition of scatter used in LDA and, on that basis, proposes a new framework of discriminant analysis (DA). It also applies this framework to low-resolution image recognition to obtain better recognition accuracy. The major contributions of this thesis are as follows.

Two recent DA techniques, Minimal Distance Maximization (MDM) and worst-case LDA (WLDA), seek projections by optimizing worst-case scatters. Specifically, MDM maximizes the worst-case between-class scatter, and WLDA maximizes the ratio of the worst-case between-class scatter to the worst-case within-class scatter. From the viewpoint of how scatter is defined, LDA, MDM, and WLDA occupy three points of a DA coordinate system (Fig. 1(a)) indexed by (between-class, within-class). This thesis examines the remaining points of this coordinate system and develops a new LDA framework called WSAC. It resides at the point (worst, average) and is therefore realized by maximizing the ratio of the worst-case projected between-class scatter to the average-case projected within-class scatter. Its solution is obtained by relaxing the trace ratio optimization into a distance metric learning problem. Comparative experiments demonstrate its effectiveness. In addition, DA counterparts that exploit the local geometry of the data or the kernel trick can likewise be embedded into this framework and solved in the same way.

Low resolution is an important issue in real-world image recognition. The performance of traditional recognition algorithms, e.g. LDA/PCA, drops drastically because discriminant information is lost relative to high-resolution or super-resolution images. To address this problem, many methods based on coupled projections have been proposed in recent years; they learn two different sets of projections, one for high-resolution images and one for low-resolution images. For example, SDA (Simultaneous Discriminant Analysis) obtains its projections by maximizing the average between-class scatter while minimizing the average within-class scatter. Like LDA, however, SDA cannot guarantee that the projected classes are well separated, especially classes that are close to each other. This thesis therefore proposes a novel discriminant analysis method that seeks the optimal projections by maximizing the minimum distance between pairwise classes. Experiments on several image datasets verify the effectiveness of the proposed methods.
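To make the coordinate system concrete, the criteria implied by the abstract can be written out. The formalization below is reconstructed from the prose, with S_b^{ij} = (m_i - m_j)(m_i - m_j)^T the between-class scatter of the class pair (i, j), S_w^i the within-class scatter of class i, S_w the pooled (average-case) within-class scatter, and W the sought projection; the exact notation and normalization in the thesis may differ.

```latex
% (average, average): classical LDA trace ratio
\max_{W}\; \frac{\operatorname{tr}(W^{\top} S_b W)}{\operatorname{tr}(W^{\top} S_w W)},
\qquad S_b = \textstyle\sum_{i<j} S_b^{ij},\quad S_w = \textstyle\sum_{i} S_w^{i}

% (worst, worst): WLDA
\max_{W}\; \frac{\min_{i \neq j}\operatorname{tr}(W^{\top} S_b^{ij} W)}
                {\max_{i}\operatorname{tr}(W^{\top} S_w^{i} W)}

% (worst, average): the proposed WSAC
\max_{W}\; \frac{\min_{i \neq j}\operatorname{tr}(W^{\top} S_b^{ij} W)}
                {\operatorname{tr}(W^{\top} S_w W)}

% One plausible reading of the coupled-resolution objective (second
% contribution): two projections, P_H for high- and P_L for low-resolution
% images, separating the worst pair of projected class means.
\max_{P_H,\,P_L}\; \min_{i \neq j}\,
  \bigl\| P_H^{\top}\mu_i^{H} - P_L^{\top}\mu_j^{L} \bigr\|^{2}
```

Because the min in the numerator makes the trace ratio non-smooth, the thesis relaxes it into a distance metric learning problem. That relaxation is not reproduced here; the sketch below is a minimal NumPy heuristic for the same (worst, average) objective, alternating a soft-min re-weighting of the pairwise between-class scatters with a generalized-eigenvector solve. The function name wsac_projection, the temperature tau, and the update scheme are illustrative assumptions, not algorithms given in the thesis.

```python
import numpy as np

def wsac_projection(X, y, dim, n_iter=20, tau=1.0, reg=1e-6):
    """Heuristic sketch of the (worst, average) criterion: emphasize the
    worst-separated class pairs (soft-min weights) in the between-class
    scatter while keeping the ordinary pooled within-class scatter."""
    classes = np.unique(y)
    d = X.shape[1]
    Sw = reg * np.eye(d)            # pooled (average-case) within-class scatter
    means = {}
    for c in classes:
        Xc = X[y == c]
        means[c] = Xc.mean(axis=0)
        Sw += (Xc - means[c]).T @ (Xc - means[c])

    # Rank-one between-class scatter S_b^{ij} for every class pair.
    diffs = [means[a] - means[b]
             for i, a in enumerate(classes) for b in classes[i + 1:]]
    Sb_pairs = [np.outer(v, v) for v in diffs]

    weights = np.full(len(Sb_pairs), 1.0 / len(Sb_pairs))  # uniform start
    for _ in range(n_iter):
        Sb = sum(w * S for w, S in zip(weights, Sb_pairs))
        # Trace-ratio surrogate: leading eigenvectors of Sw^{-1} Sb.
        vals, vecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
        W = np.real(vecs[:, np.argsort(-vals.real)[:dim]])
        # Soft-min re-weighting: the closest projected pairs receive the
        # largest weight in the next between-class scatter.
        seps = np.array([np.trace(W.T @ S @ W) for S in Sb_pairs])
        weights = np.exp(-(seps - seps.min()) / tau)
        weights /= weights.sum()
    return W
```

For example, W = wsac_projection(X, y, dim=2) yields a projection X @ W in which the two closest classes carry the most weight; with uniform weights the first iterate coincides with an ordinary LDA-style eigen-solve, so the heuristic degrades gracefully when all classes are already well separated.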
Keywords/Search Tags: Dimension Reduction, Subspace Representation, Discriminant Analysis Learning, Worst-case Separation, Average-case Compactness, Coupled Resolution