
How come when you turn your head I still know who you are: Evidence from computational simulations and human behavior

Posted on: 1997-10-17
Degree: Ph.D
Type: Dissertation
University: The University of Texas at Dallas
Candidate: Valentin, Dominique
GTID: 1461390014480074
Subject: Psychology

Abstract/Summary:
As a face rotates in depth, its retinal projection changes drastically. Yet human observers have very little difficulty recognizing familiar faces from most viewpoints. In recent years, the complexity of this task has come to be appreciated largely through computational models. A review of these models is presented first. The models are classified along three main dimensions: information representation, architecture, and task performed. Four main types of models emerge from these dimensions: (1) autoassociative memories, (2) backpropagation networks, (3) filter models, and (4) cognitive models. The simplest of these models, the autoassociative memory, is then used in conjunction with human experiments to explore the problem of recognizing faces from new orientations. The simulations and experiments are organized around three aspects of this problem.

First, the ability of an autoassociative memory to generalize from single views of faces is compared with that of human observers. Results show that the performance of both human observers and the autoassociative memory decreases significantly after a 30-degree rotation in depth. Beyond this point the autoassociative memory is at chance level (d′ = 0), whereas human observers remain above chance (d′ = 1). It is proposed that up to 30 degrees human subjects use a global matching strategy somewhat similar to that of the autoassociative memory, but that beyond this point they rely on localized information visible from many orientations.

Second, the ability of an autoassociative memory, trained to reconstruct multiple views of a set of faces, to generalize to new views of those faces is evaluated. A first series of simulations shows that the autoassociative memory spontaneously dissociates two kinds of perceptual information: orientation versus identity.
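To make the autoassociative-memory idea concrete, the following is a minimal sketch of a linear autoassociator of the kind the abstract refers to: stored face views are combined by Hebbian (outer-product) learning, and familiarity with a probe is scored by how well the memory reconstructs it. The image size, number of faces, and random "face" vectors here are invented stand-ins, not the dissertation's stimuli.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels = 64 * 64  # hypothetical flattened image size

# Stand-ins for learned face views: each column is one face image,
# flattened to a pixel vector and normalized to unit length.
faces = rng.standard_normal((n_pixels, 10))
faces /= np.linalg.norm(faces, axis=0)

# Hebbian storage: the weight matrix is the sum of outer products
# of the stored views (W = sum_k f_k f_k^T).
W = faces @ faces.T

def familiarity(probe):
    """Cosine between a probe and its reconstruction W @ probe.

    A probe that lies in the span of the stored views is reconstructed
    almost perfectly (cosine near 1); an unrelated probe is not.
    """
    probe = probe / np.linalg.norm(probe)
    recon = W @ probe
    return float(probe @ recon / np.linalg.norm(recon))

# A stored view is highly familiar; random noise is not.
print(familiarity(faces[:, 0]))                     # near 1.0
print(familiarity(rng.standard_normal(n_pixels)))   # much lower
```

Thresholding such a familiarity score is one simple way an old/new recognition decision (and hence a d′) can be read out of the model; performance then degrades as a probe view rotates away from the stored views.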
A second series of simulations shows that a limited set of 2D pixel-based images supports a correct recognition rate of 80% for new view angles. These results indicate that a 3D invariant representation is not necessary for a computational model to recognize faces from new orientations.

Third, the patterns of results obtained when different views, or combinations of views, serve as the internal representation of a two-stage identification network, consisting of an autoassociative memory followed by an RBF network, are compared. Results show that (1) optimal performance is obtained when a frontal and a profile view are used as the network's internal representation, and (2) all of the representations tested produce a 3/4-view advantage similar to that generally described for human subjects. Thus, although 3/4 views yield better recognition than other views, they need not be stored in memory to produce this advantage.
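The second stage of the identification network can be sketched as a Gaussian radial-basis-function layer: each stored view acts as an RBF center, and a probe is assigned the identity of the most strongly activated center. Again, the dimensions, the synthetic "views," and the noise level are illustrative assumptions, not the dissertation's actual network.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_faces = 64 * 64, 5

# Stand-ins for the stored views serving as RBF centers
# (e.g. one view per identity), each normalized to unit length.
views = rng.standard_normal((n_pixels, n_faces))
views /= np.linalg.norm(views, axis=0)

def rbf_identify(probe, centers, sigma=1.0):
    """Gaussian RBF layer over stored views.

    Each unit's activation falls off with the distance between the
    probe and its center; the answer is the best-matching identity.
    """
    probe = probe / np.linalg.norm(probe)
    dists = np.linalg.norm(centers - probe[:, None], axis=0)
    activations = np.exp(-(dists ** 2) / (2 * sigma ** 2))
    return int(np.argmax(activations))

# A slightly perturbed "new view" of face 2 still activates center 2
# most strongly (perturbation kept small relative to the signal).
new_view = views[:, 2] + 0.005 * rng.standard_normal(n_pixels)
print(rbf_identify(new_view, views))  # → 2
```

In the dissertation's two-stage scheme the RBF layer operates on the autoassociative memory's representation rather than on raw pixels as here; swapping in the first stage's output would only change what `centers` and `probe` contain.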
Keywords/Search Tags: Human, Memory, Views, Simulations, Computational