In this paper, we focus on the two-class discrimination problem and chiefly study two types of linear discriminant analysis: the principal component classifier (PCC) and Fisher linear discriminant analysis (FLDA).

PCC takes the normal vector of a hyperplane as the projection direction, chosen so that the algebraic sum of all samples' projections onto it is maximized, so that samples of one class can be well separated from the other by this hyperplane. Although PCC's resistance to a single outlier is slightly better than that of the support vector machine (SVM), this has not been verified for more than one outlier. Furthermore, it seems unreasonable in practice that PCC treats outliers and noisy samples as being as important as normal samples. In view of these drawbacks, we design and implement a set of more robust classifiers by introducing k-NN and weighting strategies into the discriminating criterion, thereby enhancing the robustness and generalization of PCC.

FLDA, in contrast, can obtain only one optimal discriminant vector by maximizing the Fisher criterion, because the rank of the between-class scatter matrix is at most 1 for a binary-class problem. It is this point that prevents us from searching for further discriminant directions to boost the recognition performance of FLDA. To break through this limitation, we propose multi-feature FLDA (MFLDA), which only replaces the original between-class scatter with a new scatter measure while retaining the analytical simplicity of FLDA. Moreover, its recognition performance on unseen samples, i.e., its generalization, surpasses that of the original FLDA classifier and even outperforms SVM in some cases.

Experiments on toy problems and real-world datasets demonstrate the superiority of the proposed methods.
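To make the FLDA baseline referred to above concrete, the following is a minimal sketch of standard two-class Fisher discriminant analysis, which yields the single discriminant vector mentioned in the discussion. The function name `fisher_direction`, the ridge term `reg`, and the toy data are illustrative assumptions; the sketch does not implement the paper's MFLDA, whose modified between-class scatter is introduced by this paper.

```python
import numpy as np

def fisher_direction(X1, X2, reg=1e-6):
    """Standard two-class FLDA: find w maximizing the Fisher criterion
    J(w) = (w^T S_b w) / (w^T S_w w); the optimum is S_w^{-1}(m1 - m2)."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    # Pooled within-class scatter; a small ridge term keeps S_w invertible.
    Sw = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)
    Sw += reg * np.eye(Sw.shape[0])
    w = np.linalg.solve(Sw, m1 - m2)
    return w / np.linalg.norm(w)

# Toy two-class data: project onto w and threshold at the midpoint of the
# projected class means.
rng = np.random.default_rng(0)
X1 = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(50, 2))
X2 = rng.normal(loc=[3.0, 3.0], scale=1.0, size=(50, 2))
w = fisher_direction(X1, X2)
threshold = 0.5 * (X1.mean(axis=0) @ w + X2.mean(axis=0) @ w)

x_new = np.array([0.5, 0.2])            # a point near the first class
pred = 1 if x_new @ w > threshold else 2
print("direction:", w, "threshold:", threshold, "predicted class:", pred)
```

Because the between-class scatter for two classes has rank at most 1, this procedure can return only one useful direction, which is precisely the limitation MFLDA is designed to overcome.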