
Brain Mechanisms of REST Reference-Based Audiovisual Integration

Posted on: 2011-02-13  Degree: Master  Type: Thesis
Country: China  Candidate: Q Zhang  Full Text: PDF
GTID: 2204360308467247  Subject: Biomedical engineering
Abstract/Summary:
In the real world, much of the information we encounter is multisensory. In everyday life, multisensory information arrives from different times and places and impinges on our perceptual system, so our performance often depends critically on our ability to attend to and integrate features from multisensory objects. Despite its importance, scientific research on the human psychological processes of multisensory integration remains limited. Studying multisensory integration can therefore help us further explore the brain's cognitive processes of integration. Here, we investigated the neural processes of audiovisual multisensory integration using the cue-target paradigm as the experimental task and REST (Reference Electrode Standardization Technique) as the EEG re-referencing method. We also applied source localization to identify the activated brain areas. Our major work and findings are listed below:

1. We compared three EEG re-references, namely REST, the average reference, and the linked-ears reference, as applied to audiovisual multisensory integration experiments. We found that REST had a relative advantage over the average and linked-ears references in these experiments.

2. In our audiovisual integration experiments, the SOA between cue and target was relatively long (1500 ms). When the audiovisual targets appeared on both sides of the two spatial locations, participants responded more slowly when the cue and target occupied the same spatial location than when they occupied opposite locations. When the audiovisual targets appeared on one side of the two spatial locations, responses to audiovisual targets with an auditory cue at the same spatial location did not differ significantly from those with a visual cue at the same spatial location.
Moreover, compared with audiovisual targets located on one side of the spatial locations, response times were longer when the audiovisual targets were located on both sides.

3. In general, the ERPs to multisensory stimuli consisted of an occipital P1 component, a frontal-central N1 component, an N1 component over posterior occipital-parietal areas, and a P3 component over posterior parietal areas. For both early and late ERP components, when the audiovisual targets appeared on both sides of the spatial locations, a target at the same location as its corresponding cue evoked smaller amplitudes than a target at the opposite location; in contrast, when the audiovisual targets appeared on one side of the two spatial locations, amplitudes did not differ between targets at the location of the auditory cue and targets at the location of the visual cue. This may be because, for audiovisual targets at the bilateral spatial locations, stimuli entirely different from the bilateral cue required a more costly shift of spatial attention, whereas audiovisual stimuli on one side of the two spatial locations required less of an attentional shift from the cue.

4. Whether the visual and auditory target stimuli were located in the same or different spatial locations, the visual and auditory modalities integrated together. The integration areas were the temporoparietal junction areas.
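To make the re-referencing comparison in point 1 concrete, the following is a minimal sketch (in Python/NumPy, not the thesis's own code) of the common-average reference, one of the baselines compared against REST. REST itself additionally requires a head-model lead field to approximate a reference at infinity and is not implemented here; the function name and synthetic data are illustrative assumptions.

```python
import numpy as np

def average_rereference(eeg):
    """Re-reference EEG data (channels x samples) to the common average.

    Subtracts the instantaneous mean across channels from every channel,
    which is the 'average reference' baseline compared against REST.
    """
    return eeg - eeg.mean(axis=0, keepdims=True)

# Hypothetical example: 4 channels, 5 time samples of synthetic data
rng = np.random.default_rng(0)
data = rng.normal(size=(4, 5))
reref = average_rereference(data)

# After average re-referencing, each time point sums to (numerically) zero
print(np.allclose(reref.sum(axis=0), 0.0))  # True
```

The linked-ears reference mentioned in point 1 would instead subtract the mean of the two earlobe channels, and REST would transform the data through a lead-field-based operator; both change which reference the recorded potentials are expressed against, which is why the choice can alter measured audiovisual-integration effects.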
Keywords/Search Tags:Audiovisual Multisensory Integration, ERP, Cue-Target paradigm, REST, LORETA