
Research On Image Classification Based On Discriminative Learning Of Symmetric Positive Definite Matrices

Posted on: 2017-11-17  Degree: Master  Type: Thesis
Country: China  Candidate: S Zhong  Full Text: PDF
GTID: 2348330488459906  Subject: Electronic and communication engineering
Abstract/Summary:
Image classification has attracted a great deal of attention and has become one of the hottest topics in computer vision due to its wide range of applications and its challenges. Image modeling is a basic and important problem in image classification, and a robust image modeling method can greatly improve classification performance. Among image modeling methods, those based on symmetric positive definite (SPD) matrices have achieved excellent performance in many image classification tasks, because they naturally fuse a variety of image cues and are very robust to noise. However, the space of SPD matrices is a Riemannian manifold, so learning algorithms designed for Euclidean spaces cannot be employed directly, which makes discriminative learning on the space of SPD matrices very challenging. The main task of this paper is to study discriminative learning of SPD matrices and to design effective and efficient classification methods.

Covariance descriptors, as SPD matrices, have been successfully applied in many image classification tasks, and discriminative learning methods on covariance descriptors have also been widely studied. Although covariance descriptors have a strong ability to represent images, they still have some limitations. First, covariance descriptors discard the mean information of the image features, yet the mean information is helpful for modeling images. To introduce the mean information, this paper proposes the Gaussian descriptor to model images. However, the manifold of Gaussian descriptors is quite different from that of covariance descriptors, and discriminative learning on the Gaussian manifold remains an open issue. To solve this problem, this paper first analyzes the Riemannian manifold of Gaussians and introduces a new embedding method that maps the Gaussian manifold into the space of SPD matrices. Because the space of real SPD matrices is not a Euclidean space, discriminative learning methods designed for Euclidean spaces still cannot be used directly. This paper therefore presents three discriminative learning methods based on the log-Euclidean metric: large-margin learning based on the log-Euclidean metric, linear discriminative learning based on the log-Euclidean metric, and canonical correlation analysis based on the log-Euclidean metric. These methods first exploit the log-Euclidean metric to map the real SPD matrices from the Riemannian manifold to a Euclidean space, and then perform discriminative learning in that Euclidean space (a minimal sketch is given below). The proposed methods not only preserve the geometric structure of the Gaussian descriptor, but also make discriminative learning on the Gaussian manifold very efficient.

Besides the loss of mean information, covariance descriptors have two other limitations: a covariance descriptor is singular when the feature dimension is larger than the number of samples, and it can only model linear relationships between features and cannot capture the nonlinear relationships between them. Both situations commonly arise in real-world applications. To overcome these limitations, this paper introduces kernel matrices to model images. Image representations based on kernel matrices not only avoid the singularity of SPD matrices, but also model nonlinear relationships between image features.
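To make the pipeline above concrete, the following is a minimal illustrative sketch (not the thesis's exact formulation) of a Gaussian descriptor embedded as an SPD matrix and then mapped to a Euclidean vector via the matrix logarithm, after which any ordinary classifier (large-margin, linear discriminant, CCA, etc.) can be trained. The per-pixel feature extraction, the particular Gaussian-to-SPD embedding, and all function names are assumptions for illustration only.

```python
# Hedged sketch: Gaussian descriptor -> SPD embedding -> log-Euclidean vector.
# The embedding N(mu, Sigma) -> [[Sigma + mu*mu^T, mu], [mu^T, 1]] is one common
# choice from the literature; the thesis's exact embedding may differ.
import numpy as np

def gaussian_spd_descriptor(features, eps=1e-6):
    """features: (n_pixels, d) per-pixel feature vectors of one image.
    Returns a (d+1, d+1) SPD matrix embedding the sample Gaussian N(mu, Sigma)."""
    mu = features.mean(axis=0)
    sigma = np.cov(features, rowvar=False) + eps * np.eye(features.shape[1])
    d = mu.size
    spd = np.empty((d + 1, d + 1))
    spd[:d, :d] = sigma + np.outer(mu, mu)
    spd[:d, d] = mu
    spd[d, :d] = mu
    spd[d, d] = 1.0
    return spd

def log_euclidean_vector(spd):
    """Matrix logarithm via eigendecomposition, then vectorization of the upper
    triangle (off-diagonal entries scaled by sqrt(2) so the Euclidean norm of
    the vector equals the Frobenius norm of log(spd))."""
    w, V = np.linalg.eigh(spd)
    L = V @ np.diag(np.log(w)) @ V.T
    iu = np.triu_indices(L.shape[0], k=1)
    return np.concatenate([np.diag(L), np.sqrt(2.0) * L[iu]])

# Toy usage: two "images" of random 5-dimensional pixel features; the resulting
# vectors can be fed to any Euclidean learner (SVM, LDA, CCA, ...).
rng = np.random.default_rng(0)
vectors = np.stack([log_euclidean_vector(gaussian_spd_descriptor(rng.normal(size=(500, 5))))
                    for _ in range(2)])
print(vectors.shape)  # (2, 21): 21 = (5+1)*(5+2)/2 entries per image
```

Under the log-Euclidean metric, the Euclidean distance between such vectors equals the log-Euclidean distance ||log(X) - log(Y)||_F between the corresponding SPD matrices, which is why standard Euclidean learners remain applicable.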
To study the nonlinear relationships between features, this paper presents five kinds of kernel matrices to model images. Because kernel matrices are real symmetric positive definite, the three proposed discriminative learning methods based on the log-Euclidean metric can be applied to them directly. To the best of our knowledge, neither the use of these five kinds of kernel matrices for image representation nor discriminative learning on kernel matrices has been studied in the previous literature.

This paper evaluates and analyzes the proposed methods in a variety of image classification tasks (i.e., texture classification, image set classification, and face recognition) on five widely used benchmarks: UIUC, FMD, ETH80, FERET, and YTC. The experimental results show that the classification accuracies of the Gaussian descriptor and the kernel-matrix-based representations are higher than those of covariance descriptors, and that the discriminative learning methods can greatly improve classification accuracy. Meanwhile, the proposed discriminative learning reduces the dimension of the image representations, which not only improves efficiency but also reduces storage cost. In addition, the proposed methods obtain better performance than their counterparts and achieve state-of-the-art results on many databases.
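For the kernel-matrix representation described above, the sketch below illustrates the general idea under an assumed RBF kernel between feature dimensions; the abstract does not specify the five kernels used in the thesis, so this particular kernel, its bandwidth heuristic, and the function name are assumptions.

```python
# Hedged sketch: a d x d kernel matrix between feature dimensions as an image
# representation. Unlike a covariance matrix, it stays non-singular even when
# the feature dimension d exceeds the number of pixels, and a nonlinear kernel
# captures nonlinear dependencies between feature dimensions.
import numpy as np

def rbf_kernel_matrix(features, gamma=None, eps=1e-6):
    """features: (n_pixels, d). Returns a (d, d) SPD matrix whose (i, j) entry
    compares feature dimensions i and j across all pixels with an RBF kernel."""
    X = features - features.mean(axis=0)          # center each feature dimension
    cols = X.T                                    # (d, n_pixels): one row per dimension
    sq_dists = ((cols[:, None, :] - cols[None, :, :]) ** 2).sum(axis=-1)
    if gamma is None:                             # simple median-distance bandwidth heuristic
        gamma = 1.0 / (np.median(sq_dists[sq_dists > 0]) + eps)
    K = np.exp(-gamma * sq_dists)
    return K + eps * np.eye(K.shape[0])           # small ridge keeps K strictly positive definite

# Because K is SPD, the same log-Euclidean vectorization sketched earlier can be
# applied before training any Euclidean classifier.
rng = np.random.default_rng(0)
K = rbf_kernel_matrix(rng.normal(size=(500, 5)))
print(K.shape)  # (5, 5)
```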
Keywords/Search Tags:Image Classification, Gaussian Descriptor, Kernel Matrix Representation, Discriminative Learning