
A neural model of visually-guided navigation and object tracking in a cluttered world: Computing ego and object motion in a model of the primate magnocellular pathway

Posted on: 2010-03-28
Degree: Ph.D
Type: Thesis
University: Boston University
Candidate: Browning, Neil Andrew
Full Text: PDF
GTID: 2448390002477465
Subject: Biology
Abstract/Summary:
Visually guided navigation through a cluttered natural scene is a challenging problem that animals and humans accomplish with ease. This thesis introduces the Visually-guided Steering, Tracking, Avoidance and Route Selection (ViSTARS) model, which proposes how primates use motion information to segment objects and determine heading, or direction of travel, for goal approach and obstacle avoidance in response to video inputs from real and virtual environments. The model produces trajectories similar to those of human navigators. It does so by describing processes performed by neurons in several areas of the primate magnocellular pathway, from the retina through V1, MT and MST. In particular, ViSTARS predicts how computationally complementary processes in cortical areas MT−/MSTv and MT+/MSTd compute object motion for tracking and self-motion for navigation, respectively.

The model retina responds to transients in the input stream. Model V1 generates a local speed and direction estimate; this local motion estimate is ambiguous due to the neural aperture problem. Model MT+ interacts with MSTd via an attentive feedback loop to compute accurate heading estimates in MSTd that quantitatively simulate properties of human heading estimation data. The model estimates heading to within 1.5° in random-dot or photo-realistically rendered scenes, and to within 3° in video streams sampled while driving in real-world environments. Simulated camera or eye rotations of less than 1° per second do not affect model performance, but faster simulated rotation rates degrade performance, as they do in humans. Model MT− computes ON-center OFF-surround differential motion signals and interacts with MSTv via an attentive feedback loop to compute accurate estimates of the speed, direction and position of moving objects. This object information is combined with heading information to produce steering decisions in which goals behave like attractors and obstacles behave like repellers.
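The "goals as attractors, obstacles as repellers" steering law described above can be sketched as a simple angular update rule. This is an illustrative toy, not the thesis's actual implementation: the gains `k_goal` and `k_obs` and the Gaussian falloff width `sigma` are assumed parameters chosen for the example.

```python
import math

def steer(heading, goal, obstacles, k_goal=1.0, k_obs=0.8, sigma=0.5):
    """One steering update (angles in radians, relative to straight ahead).

    The goal acts as an attractor: the turn is proportional to the angular
    error between the current heading and the goal direction. Each obstacle
    acts as a repeller: it pushes the heading away from itself, with a
    Gaussian falloff so obstacles near the current heading repel strongest.
    Gains and falloff width are illustrative assumptions, not thesis values.
    """
    turn = k_goal * (goal - heading)                  # attraction toward goal
    for obs in obstacles:
        diff = heading - obs
        side = 1.0 if diff >= 0 else -1.0             # turn away from obstacle
        turn += k_obs * side * math.exp(-diff ** 2 / (2 * sigma ** 2))
    return heading + turn

# Goal straight ahead, obstacle slightly to the left: the update turns
# the heading rightward, away from the obstacle.
new_heading = steer(0.0, 0.0, [-0.2])
```

With no obstacles the rule simply relaxes the heading toward the goal direction, which is the attractor behavior the abstract describes.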
These steering decisions lead to navigational trajectories that closely match human performance. ViSTARS demonstrates that processing in the primate magnocellular pathway can provide sufficient information for human-like navigation, even with low-resolution, noisy inputs.
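The ON-center OFF-surround differential-motion signal attributed to model MT− can be illustrated with a minimal one-dimensional sketch, assuming a scalar flow field: each location's motion estimate minus the average motion of its neighbors, so uniform background flow cancels and an independently moving object stands out.

```python
def differential_motion(flow, radius=1):
    """ON-center OFF-surround over a 1-D list of local speed estimates:
    subtract the mean motion of the surrounding cells from each center cell.
    Uniform background flow cancels; motion that differs from the background
    survives. (A 1-D toy; the model operates on 2-D motion vector fields.)
    """
    out = []
    for i in range(len(flow)):
        surround = [flow[j]
                    for j in range(max(0, i - radius),
                                   min(len(flow), i + radius + 1))
                    if j != i]
        out.append(flow[i] - sum(surround) / len(surround))
    return out

# Uniform background flow of 1.0 with a faster-moving object at index 2:
signal = differential_motion([1.0, 1.0, 3.0, 1.0, 1.0])
# The object's location yields the strongest differential-motion response.
```

The background locations produce weak or zero responses while the object location dominates, which is the segmentation-by-relative-motion idea the abstract describes.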
Keywords/Search Tags: Primate magnocellular, Model, Navigation, Motion, Object, Human, Tracking, Information