
Visual Encoding: Neural Evidence And Visual Recognition

Posted on: 2014-02-03    Degree: Doctor    Type: Dissertation
Country: China    Candidate: X Li    Full Text: PDF
GTID: 1108330503956638    Subject: Pattern Recognition and Intelligent Systems
Abstract/Summary:
Visual priors are stable statistical regularities of natural visual signals and play important roles in visual representation, perception, and recognition. Through neuroscience experiments together with statistical modeling and analysis, we investigated the neural encoding and statistical representation of visual priors. Inspired by these results, and by simulating the underlying encoding mechanism, we proposed visual feature learning approaches for recognition and analyzed their generalization ability. The main contributions of this work are fourfold.

1. Showed that visual priors and signals are encoded in neural connectivities. By recording the activities of disparity neurons in the V1 cortex of awake monkeys, we observed that functional connectivity depends on the configural orientation of the spatial receptive fields of two neurons: functional connectivities with horizontal or vertical configural orientations are stronger than those with other configural orientations, a phenomenon called the cardinal effect. This connectivity pattern can be predicted by the correlation pattern of visual signals at two spatial points. These results suggest that visual priors are encoded in the functional connectivities between neurons and facilitate visual perception. The experiments also showed that neural connectivities, and the visual priors they encode, can be modeled by probabilistic generative models.

2. Proposed posterior divergence feature mapping to exploit generative information and visual priors. The fact that neural connectivities and visual priors can be modeled by generative models indicates that the random variables and model parameters can encode visual priors and represent visual signals. Inspired by this observation, we proposed the posterior divergence algorithm, which exploits generative information to represent visual signals by mapping random variables and model parameters into features.
It comprises three types of measures: how a sample affects the model parameters; how well a sample fits the model; and the uncertainty in the fitting. These measures correspond, respectively, to the effect of visual signals on neural connectivities, to neural decoding, and to the uncertainty in neural decoding. The proposed method is robust across different settings and achieved state-of-the-art performance on several tasks.

3. Proposed sufficient statistics feature mapping and its discriminative learning approach. In probabilistic generative models, sufficient statistics summarize the information of samples and distributions. Inspired by this observation, we proposed a simple yet effective method: sufficient statistics feature mapping. The proposed feature mapping takes the form of the expectation of sufficient statistics, without involving model parameters. Further, we proposed a discriminative learning method for the feature mapping by constraining the classification margin. Benefiting from the simple form of the feature mapping, the learning rules are simple, easy to implement, computationally efficient, and scale straightforwardly to large tasks.

4. Theoretical analysis of the generalization ability of the proposed feature mappings. The two feature mappings above can be applied to visual recognition tasks. By means of PAC-Bayes theory, we derived generalization error bounds for these feature mappings under the settings of separated learning and joint learning. It can be proven that these feature mappings with linear classifiers are at least as good as the naive Bayes classifier, and that under some specific settings their generalization errors can be lower than those of classifiers trained on the raw samples. Further, we extended the above deterministic feature mappings to stochastic versions and derived their generalization error bounds for semi-supervised learning.
As a demonstration, we also derived a joint learning approach. The proposed methods were combined with several probabilistic generative models and applied to a number of visual recognition tasks. The experimental results show that the proposed methods are highly competitive with state-of-the-art methods.
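To make the sufficient statistics feature mapping of contribution 3 concrete, here is a minimal sketch for one possible choice of generative model: a diagonal-covariance Gaussian mixture with fixed parameters. The model choice, function name, and feature layout are illustrative assumptions, not the dissertation's implementation; the sketch only shows the general idea of mapping a sample to expectations of sufficient statistics (posterior responsibilities and responsibility-weighted first-order statistics) without exposing the model parameters themselves.

```python
import numpy as np

def gmm_sufficient_stats_features(x, means, variances, weights):
    """Map a 1-D sample x to expected sufficient statistics of a
    diagonal-covariance Gaussian mixture: the posterior
    responsibilities gamma_k, and gamma_k * x for each component k."""
    K = len(weights)
    log_probs = np.empty(K)
    for k in range(K):
        diff = x - means[k]
        # log of weight_k * N(x | mean_k, diag(variances_k))
        log_probs[k] = (np.log(weights[k])
                        - 0.5 * np.sum(np.log(2.0 * np.pi * variances[k]))
                        - 0.5 * np.sum(diff ** 2 / variances[k]))
    log_probs -= log_probs.max()      # subtract max for numerical stability
    gamma = np.exp(log_probs)
    gamma /= gamma.sum()              # posterior responsibilities, sum to 1
    # Feature vector: [gamma_1..gamma_K, gamma_1 * x, ..., gamma_K * x]
    return np.concatenate([gamma] + [g * x for g in gamma])

# Toy usage with two fixed components in 2-D
means = [np.array([0.0, 0.0]), np.array([3.0, 3.0])]
variances = [np.ones(2), np.ones(2)]
weights = [0.5, 0.5]
features = gmm_sufficient_stats_features(np.array([0.1, -0.2]),
                                         means, variances, weights)
```

The resulting vector could then be fed to a linear classifier; in this sketch its dimension is K + K*d for K components and d input dimensions.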
Keywords/Search Tags: visual prior, neural interaction, probabilistic generative model, feature mapping, recognition, error analysis