The evaluation of competing classifiers

Posted on: 2001-08-20
Degree: Ph.D
Type: Dissertation
University: Air Force Institute of Technology
Candidate: Alsing, Stephen Gregory
Full Text: PDF
GTID: 1465390014958668
Subject: Operations Research
Abstract/Summary:
Evaluation procedures are developed using two differing worldviews of the classifier comparison problem. Classifiers, or pattern recognition algorithms, are used in a wide range of military and medical applications. Specific examples include Automatic Target Recognition, in which a computer processes radar returns in an attempt to discriminate viable targets, such as tanks, from ground clutter, and Computer-Assisted Diagnosis of mammograms, with the express purpose of early identification of breast cancers. Two new methodologies are developed for evaluating competing classifiers. The first method is based on a commonly used evaluation tool, the receiver operating characteristic (ROC) curve. A proof of convergence of these ROC curves with respect to increasing sample size is provided. This ROC convergence theorem is important because it provides the basis for a framework for comparing ROC curves and, hence, for comparing classifiers. The second method uses a statistical procedure called a "multinomial selection procedure," which has not previously been applied in the pattern recognition community to the problem of evaluating competing algorithms. Both methodologies are applied in two broad classes of applications: automatic target recognition and pilot workload classification. Based on these applications, interpretations of the proposed performance measures under these two new methods are discussed.
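As an illustrative sketch only (not the dissertation's actual procedure), an empirical ROC curve can be traced by sweeping a decision threshold over classifier scores and recording (false-positive rate, true-positive rate) pairs; the area under the curve then gives one scalar basis for comparing two competing classifiers. The classifier names and score values below are hypothetical.

```python
def roc_points(scores, labels):
    """Empirical ROC (FPR, TPR) points for binary labels (1 = target)."""
    pos = sum(labels)
    neg = len(labels) - pos
    # Sort samples by descending score; each score acts as a threshold.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    points, tp, fp = [(0.0, 0.0)], 0, 0
    for i in order:
        if labels[i] == 1:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    return points

def auc(points):
    """Area under the ROC curve via the trapezoidal rule."""
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

# Hypothetical scores from two competing classifiers on the same samples.
labels = [1, 1, 1, 0, 0, 0, 1, 0]
clf_a  = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.65, 0.2]
clf_b  = [0.6, 0.9, 0.4, 0.7, 0.5, 0.3, 0.8, 0.1]
print(auc(roc_points(clf_a, labels)))  # 1.0: every target outranks every non-target
print(auc(roc_points(clf_b, labels)))  # 0.8125
```

Comparing single AUC values is only a crude stand-in for the dissertation's contribution, which concerns convergence of the ROC curves themselves as sample size grows.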
Keywords/Search Tags: Classifiers, ROC, Competing, Recognition