In recent years, a new data acquisition technique, Compressed Sensing, has become a research focus because it breaks the Nyquist sampling-rate limit. The theory attempts to achieve dimensionality reduction of a sampled signal with the fewest possible observations, without losing information of the original signal, so that the signal can be recovered from the projections. Because Compressed Sensing can store high-dimensional image information with less data, it is widely applicable to high-dimensional image processing. For example, the Sparsity Preserving Projection algorithm is applied in feature extraction and the Sparse Representation Classifier is used in recognition; both are successful applications of Compressed Sensing in graphics, image processing, and pattern recognition.

While reducing dimensionality, the Sparsity Preserving Projection algorithm preserves structure effectively: it keeps the sparse reconstruction relationships between samples. However, it is essentially a global linear dimensionality-reduction method and cannot cope with the large differences among samples of the same class. Moreover, as an unsupervised method, it cannot exploit the inherent advantages of supervised dimensionality reduction. Therefore, this paper proposes a locally supervised dimensionality-reduction method: the Locality Sparsity Preserving Projection algorithm with pairwise constraints.
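The recovery-from-projections idea described above can be illustrated numerically. The following is a minimal sketch, not the paper's method: a sparse signal is measured with a random Gaussian matrix, and an L1-regularized least-squares estimate is recovered with the Iterative Shrinkage-Thresholding Algorithm (ISTA). All function names, dimensions, and parameter values here are illustrative assumptions.

```python
import numpy as np

def ista(A, y, lam=0.01, n_iter=500):
    """Solve min 0.5*||Ax - y||^2 + lam*||x||_1 by iterative soft thresholding."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L        # gradient step on the smooth term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
n, m, k = 50, 25, 3                          # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m) # random measurement matrix, m << n
y = A @ x_true                               # compressed observations
x_hat = ista(A, y)
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

Although the system has only 25 equations for 50 unknowns, the sparsity prior makes the recovery well-posed, which is the premise the abstract refers to.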
In the training process, the nearest-neighbor samples, rather than all training samples, are used to compute the sparse relationships. This not only keeps the local structural features among samples during dimensionality reduction, but also improves the efficiency of the algorithm, since the time complexity is greatly reduced. A supervision mechanism based on pairwise constraints is introduced to increase the weights of same-class neighbors and decrease the weights of heterogeneous neighbors, which effectively avoids misclassification caused by small differences between heterogeneous samples.

The Sparse Representation Classifier is based on the assumption that a test sample can be linearly represented by all training samples, and that only the training samples of the same class have large representation coefficients, while the other coefficients are approximately zero. In practical applications, however, the representation coefficients cannot be perfectly sparse because of inevitable error, so we compute the sparse reconstruction error for each class; the class with the smallest error is taken as the class of the test sample. In addition, this paper discusses the effect of using different reconstruction algorithms in the Sparse Representation Classifier. Because the Sparse Representation Classifier does not account for the differing importance of different regions, this paper finally proposes an improved scheme in which a face image is divided into blocks with respective weights, a sparse representation classifier is applied to each block, and the sample's class is determined by voting over the block-level results.

The effectiveness and feasibility of the algorithms proposed in this paper are verified by face recognition experiments on the ORL and YALE databases, from the perspectives of recognition rate and runtime.
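The class-residual decision rule of the Sparse Representation Classifier described above can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's implementation: the sparse coefficients are obtained with a simple ISTA solver, and the predicted class is the one whose training samples yield the smallest reconstruction residual. All names, the toy data, and parameter values are illustrative assumptions.

```python
import numpy as np

def sparse_code(D, y, lam=0.1, n_iter=300):
    """Sparse coding min 0.5*||D a - y||^2 + lam*||a||_1 via ISTA."""
    L = np.linalg.norm(D, 2) ** 2
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = a - D.T @ (D @ a - y) / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return a

def src_classify(D, labels, y, lam=0.1):
    """Pick the class whose training samples reconstruct y with the
    smallest residual, using only that class's coefficients."""
    a = sparse_code(D, y, lam)
    residuals = {}
    for c in np.unique(labels):
        a_c = np.where(labels == c, a, 0.0)   # zero out other classes
        residuals[c] = np.linalg.norm(y - D @ a_c)
    return min(residuals, key=residuals.get)

# Toy data: columns of D are training samples from two well-separated classes.
rng = np.random.default_rng(1)
mu0 = np.concatenate([2.0 * np.ones(10), np.zeros(10)])
mu1 = np.concatenate([np.zeros(10), 2.0 * np.ones(10)])
D = np.column_stack([mu0 + 0.1 * rng.standard_normal(20) for _ in range(5)]
                    + [mu1 + 0.1 * rng.standard_normal(20) for _ in range(5)])
labels = np.array([0] * 5 + [1] * 5)
y = mu0 + 0.1 * rng.standard_normal(20)       # a new class-0 sample
pred = src_classify(D, labels, y)
```

The block-voting variant proposed in the paper would apply `src_classify` to each image block and take a (weighted) majority vote over the per-block predictions.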