
Neural mechanisms of multisensory cue integration for self-motion perception

Posted on: 2010-06-21
Degree: Ph.D
Type: Dissertation
University: Washington University in St. Louis
Candidate: Fetsch, Christopher Robert
Full Text: PDF
GTID: 1448390002489386
Subject: Biology
Abstract/Summary:
The information received through our senses is inherently probabilistic, and one of the main tasks faced by the brain is to construct an accurate representation of the world in spite of this uncertainty. This problem is particularly relevant when considering the integration of multiple sensory cues, since the uncertainty associated with each cue can vary rapidly and unpredictably. Recent psychophysical studies have shown that human observers combine cues by weighting them in proportion to their reliability, consistent with statistically optimal schemes derived from Bayesian probability theory. The neural basis of cue re-weighting remains unknown, in part due to the lack of a suitable animal model system for simultaneous behavioral and neurophysiological measurements during cue integration. We have established such a paradigm in monkeys using a visual-vestibular self-motion (heading) discrimination task. We found that monkeys can dynamically re-weight cues according to their reliability in a near-optimal fashion, the first such demonstration in a nonhuman animal. This paradigm has allowed ongoing studies to search for specific neural correlates of cue re-weighting at the single-cell level. Preliminary results suggest that neurons in area MSTd exhibit dynamic cue re-weighting with changes in reliability, analogous to the monkeys' behavior. These results will further our understanding of the neural representation of sensory uncertainty, and will provide the first direct evidence of a neural implementation of Bayesian inference in multisensory processing.

This dissertation also describes separate studies that addressed two ancillary but important questions about sensory cue integration. First, in what spatial reference frame(s) are heading signals represented in MSTd? Vestibular afferents signal motion in a head-centered frame, whereas the early visual system encodes motion in an eye-centered frame.
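The statistically optimal combination rule mentioned above can be sketched numerically. In the standard Bayesian account, each cue is weighted by its reliability (inverse variance), so the noisier cue contributes less to the combined estimate. The function name and the example numbers below are illustrative, not taken from the dissertation:

```python
import numpy as np

def combine_cues(mu_vis, sigma_vis, mu_ves, sigma_ves):
    """Maximum-likelihood fusion of two independent Gaussian cues.

    Each cue's weight is proportional to its reliability (1/variance),
    and the combined estimate is at least as reliable as either cue alone.
    """
    r_vis = 1.0 / sigma_vis**2            # reliability of the visual cue
    r_ves = 1.0 / sigma_ves**2            # reliability of the vestibular cue
    w_vis = r_vis / (r_vis + r_ves)       # normalized weights sum to 1
    w_ves = r_ves / (r_vis + r_ves)
    mu = w_vis * mu_vis + w_ves * mu_ves  # reliability-weighted mean
    sigma = np.sqrt(1.0 / (r_vis + r_ves))
    return mu, sigma

# Hypothetical heading estimates (degrees): a sharp visual cue pulls the
# combined estimate toward itself when the vestibular cue is noisier.
mu, sigma = combine_cues(mu_vis=2.0, sigma_vis=1.0, mu_ves=6.0, sigma_ves=2.0)
```

With these numbers the visual cue gets weight 0.8 and the vestibular cue 0.2, so the combined heading sits much closer to the visual estimate; degrading the visual cue's reliability would shift the weights, which is the behavioral re-weighting the monkeys exhibited.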
We found that the reference frames of visual and vestibular signals remained distinct within MSTd, but computational modeling showed that such a representation could still optimally represent and combine these signals. Second, what is the temporal structure of MSTd vestibular responses? Visual motion signals in the brain primarily encode stimulus velocity, whereas vestibular otolith afferents encode acceleration. We found that this temporal incongruity is resolved at the level of MSTd, as vestibular signals also encode velocity in this region.
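The temporal transformation described above amounts to integrating an acceleration signal over time to recover velocity. A minimal numerical sketch, using an illustrative Gaussian velocity profile of the kind typical for motion-platform stimuli (the specific values are mine, not from the dissertation):

```python
import numpy as np

# Illustrative stimulus: a smooth 2-second translation with a Gaussian
# velocity profile peaking at t = 1 s.
t = np.linspace(0.0, 2.0, 2001)                          # time (s)
dt = t[1] - t[0]
velocity = np.exp(-((t - 1.0) ** 2) / (2 * 0.25 ** 2))   # stimulus velocity
acceleration = np.gradient(velocity, dt)                 # what otolith afferents encode

# Temporally integrating the acceleration signal recovers the velocity
# profile, i.e. the transformation attributed to the vestibular pathway
# between otolith afferents and MSTd.
recovered = np.cumsum(acceleration) * dt
```

Here `recovered` closely tracks `velocity` (up to small numerical-integration error), showing how a downstream area receiving acceleration input could come to carry a velocity code.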
Keywords/Search Tags: Cue, Neural, MSTd, Vestibular, Signals, Sensory, Motion