
Visual learning from small training datasets

Posted on: 2006-08-20
Degree: Ph.D.
Type: Thesis
University: University of California, Santa Cruz
Candidate: Shi, Xiaojin
Full Text: PDF
GTID: 2459390008956538
Subject: Engineering
Abstract/Summary:
The goal of an image classification system is to automatically assign an image patch to a predefined class. Such systems are important in many applications, such as robot navigation and image database retrieval. Our work focuses on the design of image classification systems when only finite training data are available.

In the first part of this thesis, we present a new bias/variance decomposition. Compared to the existing decompositions in the literature, our expression is more intuitive and easier to evaluate.

In the second part of the thesis, we focus on classifier design. Beyond analyzing the Bayes rates of Bayesian and Bayes fusion classifiers, we compare their performance when only finite training samples are available. We illustrate the small-sample effect with demonstration experiments, and further explain it using the bias/variance theory developed in the first part of the thesis.

The third part of the thesis addresses practical issues in feature design, both theoretically and empirically. Theoretically, based on an analysis of the relationship between invariant and non-invariant features using the Lie group approach, we propose a new feature, the "randomized invariant," which bridges the gap between invariant and non-invariant features and removes the binary choice in feature selection. Given the abundance of candidate randomized invariant features, we further propose a combination algorithm that automatically selects significant features from the training data. This algorithm showed stable performance in our experiments.

Empirically, we consider the design of two popular visual features: color and texture.
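The small-sample effect described above can be illustrated with the classical squared-loss bias/variance identity (the thesis presents a new decomposition; this sketch shows only the standard squared-loss version, with illustrative function names and parameters): fitting a simple model to many small noisy training sets, the test error averaged over training sets splits exactly into squared bias plus variance.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_fn(x):
    # noise-free target; a line cannot fit it exactly, so bias is nonzero
    return np.sin(x)

def fit_poly(x, y, deg):
    # least-squares polynomial fit, returned as a prediction function
    coeffs = np.polyfit(x, y, deg)
    return lambda xs: np.polyval(coeffs, xs)

x_test = np.linspace(0, np.pi, 50)
n_sets, n_train, noise_sd, deg = 200, 10, 0.1, 1

# train one classifier-like predictor per small training set
preds = np.empty((n_sets, x_test.size))
for i in range(n_sets):
    x_tr = rng.uniform(0, np.pi, n_train)
    y_tr = true_fn(x_tr) + rng.normal(0, noise_sd, n_train)
    preds[i] = fit_poly(x_tr, y_tr, deg)(x_test)

mean_pred = preds.mean(axis=0)
bias_sq = np.mean((mean_pred - true_fn(x_test)) ** 2)
variance = np.mean(preds.var(axis=0))
total = np.mean((preds - true_fn(x_test)) ** 2)
# for squared loss on a noise-free target: total = bias_sq + variance
```

Shrinking `n_train` inflates the variance term while the bias term stays roughly fixed, which is one way to visualize why small training sets hurt complex classifiers most.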
For color features, based on a comparison of different color models, we investigate the performance of maximum likelihood (ML) and AdaBoost classifiers built on invariant, non-invariant, and randomized invariant features.

For texture features, we propose a treatment of rotational invariance based on steerable filter banks. We analytically derive invariant operators for second- and third-order steerable texture features, and evaluate the performance of ML and AdaBoost classifiers built on the invariant, non-invariant, and randomized invariant features on three real-world image datasets.
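As a concrete illustration of rotation-invariant operators from a second-order steerable basis (a sketch in the spirit of, but not reproducing, the thesis's derivation; names and parameters are illustrative): second-order Gaussian derivatives are steerable, and two well-known combinations of their responses, the Laplacian and a squared anisotropy term, are invariant under image rotation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def rotation_invariant_2nd_order(img, sigma=2.0):
    # second-order Gaussian derivative responses (a steerable basis)
    gxx = gaussian_filter(img, sigma, order=(0, 2))  # d^2/dx^2 (along columns)
    gyy = gaussian_filter(img, sigma, order=(2, 0))  # d^2/dy^2 (along rows)
    gxy = gaussian_filter(img, sigma, order=(1, 1))  # mixed derivative
    laplacian = gxx + gyy                            # rotation invariant
    anisotropy = (gxx - gyy) ** 2 + 4 * gxy ** 2     # rotation invariant
    return laplacian, anisotropy
```

Rotating the input image simply rotates these response maps, so per-pixel texture features built from them do not depend on the patch orientation; this is the kind of property the thesis's invariant operators provide for texture classification.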
Keywords/Search Tags: Features, Image, Invariant, Training, Small, Performance