
Feature selection methods for support vector machines for two or more classes, with applications to the analysis of Alzheimer's disease and its onset with MRI brain image processing

Posted on: 2011-03-25
Degree: Ph.D.
Type: Dissertation
University: The Pennsylvania State University
Candidate: Aksu, Yaman
Full Text: PDF
GTID: 1448390002950376
Subject: Engineering
Abstract/Summary:
Feature selection for classification in high-dimensional spaces can improve generalization, reduce classifier complexity, and identify important, discriminating feature "markers". For support vector machine (SVM) classification, a widely used technique is Recursive Feature Elimination (RFE). We demonstrate that RFE is not consistent with margin maximization, which is central to the SVM learning approach. We therefore propose explicit margin-based feature elimination (MFE) for SVMs and show both improved margin and improved generalization compared with RFE. Moreover, for the case of a nonlinear kernel, we show that RFE assumes the squared 2-norm of the weight vector is strictly decreasing as features are eliminated. We demonstrate that this does not hold for the Gaussian kernel and, consequently, that RFE may give poor results in this case. We show that MFE for nonlinear kernels gives better margin and generalization. We also present an extension that achieves further margin gains by optimizing only two degrees of freedom, the hyperplane's intercept and its squared 2-norm, with the orientation of the weight vector held fixed. Finally, we introduce an extension that allows margin slackness. We compare against several alternatives, including RFE and a linear programming method that embeds feature selection within the classifier design. On high-dimensional gene microarray data sets, UC Irvine repository data sets, and Alzheimer's disease brain image data, the MFE methods give promising results. We then develop several MFE-based feature elimination methods for the case of more than two classes (the "multiclass" case); compared with RFE-based multiclass feature elimination, our MFE-based methods again consistently achieve better generalization performance. In summary, we identify difficulties with the well-known RFE method, especially in the kernel case; we develop novel margin-based feature selection methods for linear and kernel-based two-class and multiclass SVM discriminant functions, addressing both separable and nonseparable settings; and we provide an objective experimental comparison of several feature selection methods, which also evaluates the consistency between a classifier's margin and its generalization accuracy.

We then apply our SVM classification and MFE methods to the challenging problem of predicting the onset of Alzheimer's disease (AD), focusing on predicting conversion from Mild Cognitive Impairment (MCI) to AD using only a single, first-visit MRI for each person, with the aim of early diagnosis. In addition, we apply MFE to select brain regions as disease "biomarkers". For these aims, for the pre-classification image data preparation step, we co-develop an MRI brain image processing pipeline system named STAMPS and develop a related system with additional capabilities named STAMPYS. These systems use external, standard MRI brain image processing tools and generate output image types particularly suitable for detecting (and encoding) the brain atrophy associated with Alzheimer's disease. We identify and remedy some basic MRI image processing problems caused by limitations of the external tools used in STAMPS; in particular, we introduce our basic fv (fill ventricle) algorithm for ventricle segmentation of cerebrospinal fluid (CSF). For prediction of conversion to AD in MCI patients, we demonstrate that our early diagnosis system achieves higher accuracy than similar recently published methods.
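The contrast between RFE and a margin-based elimination criterion can be illustrated with a small sketch. The Python code below is a simplified illustration, not the dissertation's MFE algorithm: it compares RFE's smallest-|w_i| rule with a rule that removes the feature whose removal leaves the largest margin (approximated here as 1/||w||, retraining a linear SVM per candidate purely for clarity). The synthetic data set, scikit-learn usage, and all parameter choices are assumptions made for the example.

# Simplified, illustrative sketch (not the dissertation's exact MFE algorithm):
# contrast RFE's smallest-|w_i| elimination rule with a margin-based rule.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=200, n_features=20, n_informative=5,
                           random_state=0)

def margin_of_fit(X, y):
    """Train a linear SVM; return its weight vector and margin proxy 1/||w||."""
    clf = LinearSVC(C=1.0, max_iter=10000).fit(X, y)
    w = clf.coef_.ravel()
    return w, 1.0 / np.linalg.norm(w)

active = list(range(X.shape[1]))      # indices of features still retained
while len(active) > 5:                # eliminate down to 5 features
    w, _ = margin_of_fit(X[:, active], y)

    # RFE-style criterion: eliminate the feature with the smallest |w_i|.
    rfe_choice = active[int(np.argmin(np.abs(w)))]

    # Margin-based criterion: eliminate the feature whose removal yields the
    # largest margin for the reduced classifier.
    candidate_margins = []
    for j in range(len(active)):
        reduced = active[:j] + active[j + 1:]
        _, m = margin_of_fit(X[:, reduced], y)
        candidate_margins.append(m)
    margin_choice = active[int(np.argmax(candidate_margins))]

    print(f"RFE would drop feature {rfe_choice}; margin-based rule drops {margin_choice}")
    active.remove(margin_choice)      # follow the margin-based choice

print("Retained features:", sorted(active))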
Keywords/Search Tags: MRI brain image, Feature selection, Methods, Alzheimer's disease, Support vector, RFE, Generalization, MFE