
Computational models of high-level visual perception and recognition

Posted on: 2003-08-27
Degree: Ph.D
Type: Thesis
University: University of California, San Diego
Candidate: Dailey, Matthew Nelson
Full Text: PDF
GTID: 2468390011486875
Subject: Computer Science
Abstract/Summary:
How does the brain represent and process visual stimuli for recognition? How can we build artificial systems that see? This thesis tackles these questions with computational models of psychological data on high-level visual tasks such as face recognition and facial expression recognition.

We first explore scenarios for the development of the so-called "fusiform face area" (FFA), a brain region thought to be specialized for face recognition. We introduce a simple competitive learning mechanism to model the participation of multiple brain regions in a classification task. We find that if the model FFA region is "seeded" with low spatial frequency input and the overarching task is face individuation as opposed to basic-level face classification, the region reliably specializes for face processing. We thus show that the FFA could be the result of competitive learning and task constraints rather than an innate "module."

We then explore memory for faces by modeling data from a psychological experiment in which subjects appear to form false memories for blended pairs of studied faces. We provide a simple computational account of these errors that does not rely on an explicit blending mechanism.

In a third study, we use a simple computational model to explain psychological data on emotional facial expression recognition. In the data, human categorization of facial expressions sometimes appears discrete (like the colors in a rainbow) and other times continuous. We show how these seemingly contradictory effects emerge naturally at different levels of our model.

In a final study, we explore differences in the way Japanese and U.S. subjects interpret facial expressions using the same computational model. We first present a cross-cultural emotion judgment study showing that Japanese and U.S. subjects differ substantially in their attribution of emotional intensity to facial expressions. We then show how the computational model allows separate analysis of the experimental and cultural factors contributing to subjects' judgments. We find that differing response bias is a more important factor than differing prior experience in predicting cross-cultural differences in behavior.
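The abstract does not give implementation details, but the competitive learning idea it describes (multiple modules compete for a classification task, the better-performing module receives the learning signal, and a gate comes to favor the module that consistently wins) can be illustrated with a minimal sketch. Everything below is hypothetical: the data, module count, and learning rates are illustrative stand-ins, not the thesis's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def competitive_gating_demo(n_samples=200, n_features=8, n_classes=2):
    """Two toy linear 'modules' compete to solve one classification task.

    The module with the smaller error on each input 'wins' that input:
    it alone receives the weight update, and a soft gate drifts toward
    whichever module wins most often -- so one module specializes.
    """
    # Two competing modules, each a small linear classifier.
    W = [rng.normal(scale=0.1, size=(n_classes, n_features)) for _ in range(2)]
    gate = np.ones(2) / 2          # soft "responsibility" of each module
    lr, gate_lr = 0.5, 0.1

    # Synthetic data: class is determined by the sign of the first feature.
    X = rng.normal(size=(n_samples, n_features))
    y = (X[:, 0] > 0).astype(int)

    for x, t in zip(X, y):
        target = np.eye(n_classes)[t]
        errs = [np.sum((target - softmax(Wm @ x)) ** 2) for Wm in W]
        winner = int(np.argmin(errs))          # best-fitting module wins
        p = softmax(W[winner] @ x)
        # Winner-take-all credit assignment: only the winner learns.
        W[winner] += lr * np.outer(target - p, x)
        # Gate drifts toward the consistently better module.
        gate[winner] += gate_lr * (1.0 - gate[winner])
        gate[1 - winner] *= (1.0 - gate_lr)
        gate /= gate.sum()
    return gate, W

gate, W = competitive_gating_demo()
```

Because the winning module improves on the inputs it wins, it tends to keep winning, and the gate concentrates on it; this is the rich-get-richer dynamic by which, in the thesis's account, an FFA-like region could come to specialize without being an innate module.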
Keywords/Search Tags: Recognition, Computational model, Visual