
Feature Dimensionality Reduction And One-class Classifier With Applications To Radar Target Recognition

Posted on: 2021-03-30
Degree: Doctor
Type: Dissertation
Country: China
Candidate: W Zhang
Full Text: PDF
GTID: 1488306311971619
Subject: Signal and Information Processing
Abstract/Summary:
With the rise of artificial intelligence and the improvement of hardware, pattern recognition has attracted increasing attention from experts and scholars. A complete pattern recognition system generally includes five steps: data acquisition, data preprocessing, feature extraction, feature dimensionality reduction and classifier design. Feature dimensionality reduction maps features from the original space to a meaningful low-dimensional subspace via a linear or nonlinear transformation, which not only reduces time and space complexity but also alleviates the "curse of dimensionality" and the small-sample problem. A one-class classifier tries to separate samples of the target class of interest from all other samples. Since only target samples are available in the training step, one-class classification solves the problem that a two-class classifier cannot be trained when samples of one class are missing. As an application of pattern recognition in the radar field, radar automatic target recognition (RATR) has been widely developed and applied in both civil and military fields. In this dissertation, the relevant theory and technology of pattern recognition are studied from two aspects: feature dimensionality reduction and one-class classifier design. The main contents of this dissertation are summarized as follows:

1. Dimensionality reduction is an important preprocessing step for high-dimensional data classification. Some traditional linear dimensionality reduction methods, e.g., linear discriminant analysis, cannot handle complex nonlinear data. Although nonlinear dimensionality reduction methods such as locally linear embedding have been proposed, they cannot ensure the between-class separability of the transformed data, which seriously affects classification accuracy. To solve this problem, a novel dimensionality reduction method is proposed based on the mixture of factor analysis (MFA) model and distance metric learning. In this method, MFA divides the original high-dimensional data into several clusters, which partitions the complex nonlinearly separable space into several locally linearly separable subspaces, and transforms the high-dimensional data of each cluster into a low-dimensional subspace via a factor analysis model. Simultaneously, a distance metric constraint (DMC), i.e., minimizing the Euclidean distance between transformed samples of the same class while maximizing the Euclidean distance between transformed samples of different classes, is imposed on each cluster to ensure between-class separability. Moreover, the log-likelihood function of the MFA model and the DMC loss function are jointly optimized. Experiments on synthetic data, benchmark datasets and measured radar data show that the proposed method outperforms related methods.
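To make the idea in item 1 concrete, the following is a minimal, decoupled sketch in Python. It is not the dissertation's algorithm: the dissertation jointly optimizes the MFA log-likelihood and the DMC loss, whereas this sketch first partitions the data with a Gaussian mixture and then, as a surrogate for the DMC, solves a Fisher-style generalized eigenproblem in each cluster. The dataset, the helper `dmc_projection`, and all parameter values are illustrative assumptions.

```python
# Decoupled sketch of "cluster locally, then reduce dimension under a
# class-separability constraint". Illustrative only: the dissertation
# optimizes the MFA likelihood and the DMC loss jointly, not in stages.
import numpy as np
from scipy.linalg import eigh
from sklearn.datasets import make_moons
from sklearn.mixture import GaussianMixture

X, y = make_moons(n_samples=400, noise=0.08, random_state=0)

# Stage 1: partition the nonlinear data into locally linear clusters
# (a stand-in for the MFA clustering step).
gmm = GaussianMixture(n_components=4, random_state=0).fit(X)
labels = gmm.predict(X)

# Stage 2: in each cluster, learn a linear map W that shrinks
# within-class distances and stretches between-class distances,
# i.e., a Fisher-style surrogate for the DMC loss (hypothetical helper).
def dmc_projection(Xc, yc, dim=1, reg=1e-6):
    mu = Xc.mean(axis=0)
    d = Xc.shape[1]
    Sw, Sb = np.zeros((d, d)), np.zeros((d, d))
    for c in np.unique(yc):
        Xk = Xc[yc == c]
        mk = Xk.mean(axis=0)
        Sw += (Xk - mk).T @ (Xk - mk)               # within-class scatter
        Sb += len(Xk) * np.outer(mk - mu, mk - mu)  # between-class scatter
    # Generalized eigenproblem Sb w = lambda (Sw + reg I) w; the top
    # eigenvectors maximize between-class over within-class scatter.
    vals, vecs = eigh(Sb, Sw + reg * np.eye(d))
    return vecs[:, np.argsort(vals)[::-1][:dim]]

local_embeddings = {}
for k in range(gmm.n_components):
    idx = labels == k
    if len(np.unique(y[idx])) > 1:          # need both classes for Sb
        W = dmc_projection(X[idx], y[idx])
        local_embeddings[k] = X[idx] @ W    # local low-dimensional features
```

Maximizing between-class scatter relative to within-class scatter is one standard linear realization of "same-class distances small, different-class distances large", so each cluster ends up with its own separability-aware subspace, loosely mirroring the role the DMC plays in the proposed method.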
2. As effective tools for one-class classification, the one-class support vector machine (OC-SVM) and support vector data description (SVDD) have attracted much attention. However, the performance of both methods is sensitive to the value of the Gaussian kernel parameter. To overcome this disadvantage, this dissertation proposes a method for automatically learning the Gaussian kernel parameter of OC-SVM and SVDD. For a suitable Gaussian kernel parameter, the distance between samples and the classification boundary satisfies a certain geometric relationship in OC-SVM and SVDD: edge samples of the input space are mapped to the region close to the boundary and are more likely to become support vectors (SVs), while interior samples are mapped to the region far away from the boundary and rarely become SVs. In this method, an information entropy is defined for each sample in the Gaussian kernel space to measure its distance to the classification boundary; the larger the information entropy, the closer the sample is to the boundary. The optimal Gaussian kernel parameter is then learned automatically for OC-SVM and SVDD by minimizing the information entropy of interior samples while maximizing the information entropy of edge samples. Experimental results on synthetic datasets, benchmark datasets and real synthetic aperture radar (SAR) datasets demonstrate the effectiveness of the method.

3. Some traditional one-class methods train a classifier on all input data and ignore the underlying structure of the data, which degrades classification performance when the data distribution is complex. To overcome this problem, an ensemble max-margin one-class classifier (En-MMOCC) is proposed. In En-MMOCC, the input data are partitioned into several clusters with a Dirichlet process mixture (DPM), and a modified OC-SVM is learned in each cluster (a simplified sketch of this cluster-then-classify idea is given after item 4 below). Specifically, the clustering procedure and the modified OC-SVMs are learned jointly in a unified Bayesian framework to guarantee the consistency of the clustering and the linear separability within each cluster. Experimental results on benchmark datasets and real SAR data demonstrate that the proposed method achieves better classification performance and is more robust to model parameters than conventional methods.

4. Feature selection can reduce feature redundancy and improve classification performance by selecting the features relevant to classification. To enhance feature separability and reduce feature redundancy in En-MMOCC, an ensemble Beta process max-margin one-class classifier (En-BPMMOCC) is proposed. In En-BPMMOCC, a feature selection factor with a Beta process (Bernoulli-Beta) prior is added to the model. Owing to the sparsity of the Bernoulli-Beta prior, most elements of the feature selection factor are zero, which reduces the feature redundancy of the transformed samples. In addition, the feature selection factor and the classification parameters of the model are optimized jointly, so that the most separable features are selected and the classification performance is enhanced. Experimental results on synthetic data, benchmark data and real SAR data show that, compared with En-MMOCC, the proposed method improves classification results by adding the feature selection factor.
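To make the En-MMOCC idea in item 3 concrete, here is a rough two-stage sketch in Python using scikit-learn. It is an approximation under stated assumptions, not the dissertation's model: En-MMOCC learns the DPM clustering and the modified OC-SVMs jointly in one Bayesian framework, whereas this sketch fits a truncated Dirichlet process mixture first and then trains a standard OC-SVM per cluster. The toy data and all parameter values are illustrative.

```python
# Two-stage stand-in for the En-MMOCC idea: illustrative only, since the
# dissertation couples clustering and classification in one Bayesian model.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Toy one-class training set drawn from two well-separated modes.
X_train = np.vstack([rng.normal(0.0, 0.3, (150, 2)),
                     rng.normal(3.0, 0.3, (150, 2))])

# Stage 1: a truncated Dirichlet process mixture infers the effective
# number of clusters from the data.
dpm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(X_train)
labels = dpm.predict(X_train)

# Stage 2: one standard OC-SVM per non-empty cluster (the dissertation
# uses a modified max-margin one-class classifier instead).
experts = {k: OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
              .fit(X_train[labels == k])
           for k in np.unique(labels)}

def ensemble_predict(X_test):
    """Route each point to the expert of its most probable non-empty
    cluster; returns +1 (target) or -1 (outlier) per point."""
    resp = dpm.predict_proba(X_test)        # cluster responsibilities
    keys = np.array(sorted(experts))        # non-empty clusters only
    assign = keys[np.argmax(resp[:, keys], axis=1)]
    return np.array([experts[k].predict(x[None, :])[0]
                     for k, x in zip(assign, X_test)])

print(ensemble_predict(np.array([[0.1, -0.2], [3.1, 2.9], [10.0, 10.0]])))
```

Routing each test sample to its most probable cluster's expert mimics the ensemble decision; the dissertation's joint Bayesian inference instead couples the cluster assignments with the classifier margins, which is what guarantees per-cluster linear separability.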
Keywords/Search Tags: Pattern recognition, radar automatic target recognition (RATR), feature dimensionality reduction, one-class classifier, mixture of factor analysis model (MFA), distance metric learning, Gaussian kernel parameter