
Cognitive Neural Mechanisms Of The Interplay Between Audiovisual Integration And Attention Among College Students

Posted on: 2022-09-17
Degree: Doctor
Type: Dissertation
Country: China
Candidate: S Zhao
Full Text: PDF
GTID: 1527306344482134
Subject: Higher Education
Abstract/Summary:
Effectively integrating information from the visual and auditory modalities in class is a prerequisite for college students to grasp complicated knowledge, and the complexity of knowledge in higher education also impels them to focus their attention in class. Previous studies have suggested subtle bidirectional influences between audiovisual integration and attention. Thus, investigating the cognitive and neural mechanisms of the interplay between audiovisual integration and attention among college students can contribute, both theoretically and practically, to the teaching process in higher education.

The auditory boost of the visual attentional blink is a compelling phenomenon exemplifying the beneficial effect of audiovisual integration on attention: if two successive targets (T1 and T2) embedded in a rapid stream of visual stimuli are temporally close (200-500 ms apart), observers often fail to discriminate T2 because of the attentional engagement triggered by T1; interestingly, this attentional blink can be substantially reduced by presenting a task-irrelevant sound synchronously with T2. However, because existing studies typically employed meaningless sounds (e.g., pure tones) rather than natural, real-life sounds (e.g., car horns) as auditory cues, it is currently unknown whether and when the auditory benefit on the attentional blink is modulated by semantic congruency between T2 and the coincident sound. On the other hand, as an attention-contingent audiovisual integration phenomenon, the cross-modal spread of attention refers to the finding that the gain of visual selective attention can spread to the task-irrelevant auditory modality, either through a stimulus-driven binding process or via a representation-driven priming process. Although space-based and object-based visual selective attention have been considered prerequisites for the stimulus-driven and representation-driven cross-modal spread of attention, respectively, it is currently unclear whether the two types of cross-modal attentional spreading per se occur automatically once these prerequisites are met.

Using event-related potential (ERP) recordings, Experiment 1 first explored the effect of audiovisual semantic congruency on the auditory boost of the visual attentional blink, and its electrophysiological time course, when the sound and T2 co-occurred in both time and space. The behavioral results showed a larger improvement of T2 discrimination for the congruent than for the incongruent sound during the blink interval. The ERP results revealed that both the congruent and incongruent sounds elicited an occipitally distributed cross-modal component, the N195 (192-228 ms after T2 onset), whose amplitude was larger for correct than for incorrect trials irrespective of semantic congruency. Subsequently, the incongruent sound evoked a greater parietally distributed cross-modal component, the N440 (424-448 ms), than the congruent sound, and this N440 effect was evident only on incorrect trials. These findings indicate that the auditory boost of the visual attentional blink operates at an early, visual-discrimination stage of processing, whereas the semantic congruency effect arises at a late, semantic-integration stage.

Experiment 2 introduced new conditions in which the sound could be presented either 200 ms before or 100 ms after T2 onset, in order to investigate further whether the auditory boost of the visual attentional blink and the semantic congruency effect depend on the temporal co-occurrence of the sound and T2. Both the behavioral and ERP results supported this hypothesis, thereby demonstrating not only that a transient-induced, modality-nonspecific alerting effect cannot account for the auditory boost of the visual attentional blink, but also that audiovisual temporal synchrony takes priority over semantic congruency in this phenomenon. Moreover, the occipitally distributed cross-modal P195 component (also 192-228 ms) was smaller for the incongruent than for the congruent sound, suggesting that the semantic congruency effect can also arise at the early visual-discrimination stage.

In Experiment 3, T2 was randomly presented in the left or right rapid visual stream while the sound was always delivered centrally, in order to test whether the auditory boost of the visual attentional blink and the semantic congruency effect generalize to a situation in which T2 is spatially unpredictable and the sound is uninformative about T2's location. Again, both the behavioral and ERP results supported this hypothesis, thereby improving the ecological validity of the findings. Furthermore, the lateralization of the cross-modal P195 component (i.e., larger amplitude over the hemisphere contralateral to T2's location) occurred only on correct trials in the congruent condition, whereas the N2pc component (190-240 ms), which reflects the allocation of visuospatial attention, was enhanced in a compensatory manner on correct trials in the incongruent condition. These results again suggest that the semantic congruency effect can operate at the early visual-discrimination stage. Based on the differences in experimental design between Experiments 2 and 3 versus Experiment 1, as well as findings from previous studies, the present study proposes that the processing locus of the semantic congruency effect may depend on the status of attention to auditory information.

Experiments 4-6 investigated the effects of multiple forms of attentional resources on the cross-modal spread of attention, and their electrophysiological timing, by recording ERPs. Taking advantage of the visual attentional blink paradigm, under which space-based and object-based visual attention remain intact, Experiment 4 first explored whether the representation-driven spread of attention is inhibited by a lack of post-perceptual attentional resources. Experiment 5, with an optimized paradigm, further examined whether the stimulus-driven spread of attention is limited when post-perceptual attentional resources are inadequate. The ERP results from Experiments 4 and 5 revealed that not only the representation-driven but also the stimulus-driven auditory Nd components (both 300-400 ms after sound onset) were completely suppressed during the attentional blink interval. These findings demonstrate that sufficient post-perceptual attentional resources are a shared prerequisite for the two types of cross-modal attentional spreading; thus, neither occurs automatically. In addition, Experiment 5 found that outside the blink interval the stimulus-driven process was independent of, whereas the representation-driven process was dependent on, audiovisual semantic congruency, thereby validating the dual-mechanism model of the cross-modal spread of attention. Lastly, Experiment 6 used a sustained visuospatial attention paradigm to test the influence of space-based visual selective attention on the representation-driven and stimulus-driven spread of attention, respectively. The ERP results revealed that the representation-driven spread of attention was unaffected by space-selective visual attention but was modulated by the co-occurrence of visual stimuli, indicating that it may not only be relatively independent of, but may also benefit in an all-or-none manner from, visual selection of the presented target object. In contrast, the stimulus-driven attentional spreading was modulated by, but not entirely dependent on, space-selective visual attention, suggesting that attention to the visual modality per se may be the proximate trigger for its occurrence.

The present evidence has the following implications for teaching in higher education. On the one hand, when teaching complicated knowledge in class, teachers should use semantically congruent auditory cues to enhance college students' attention to visual information on the blackboard or PPT slides, while ensuring temporal synchrony, but not adhering too strictly to spatial co-occurrence, between the auditory cues and the visual information. On the other hand, teachers should also extend the time intervals between key knowledge points when class time permits, in order to avoid inadequate attentional resources at the working-memory stage, thereby ensuring that college students' visual attention to the teaching content can spread smoothly to the teacher's speech and thus preparing the best ground for learning in class.
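The trial structure described above (an RSVP stream in which T2 follows T1 within the 200-500 ms blink window, optionally accompanied by a sound synchronous with T2) can be sketched in a minimal simulation. All parameter values and names below (stimulus onset asynchrony, stream length, target positions) are illustrative assumptions chosen only to mirror the design, not the dissertation's actual stimulus parameters.

```python
import random

SOA_MS = 100  # assumed stimulus onset asynchrony between successive RSVP items


def make_trial(lag_items, sound, congruent=None, rng=None):
    """Return a list of (onset_ms, event) tuples for one hypothetical RSVP trial.

    lag_items: number of RSVP positions separating T1 and T2
               (e.g., lag 2-5 puts T2 inside the 200-500 ms blink window).
    sound:     whether a task-irrelevant sound accompanies T2.
    congruent: if a sound is present, whether it is semantically
               congruent with T2.
    """
    rng = rng or random.Random(0)
    stream_len = 15
    t1_pos = rng.randint(3, 5)      # T1 appears early in the stream
    t2_pos = t1_pos + lag_items     # T2 follows T1 by `lag_items` items
    events = []
    for i in range(stream_len):
        onset = i * SOA_MS
        if i == t1_pos:
            events.append((onset, "T1"))
        elif i == t2_pos:
            events.append((onset, "T2"))
            if sound:
                label = "sound_congruent" if congruent else "sound_incongruent"
                events.append((onset, label))  # sound synchronous with T2
        else:
            events.append((onset, "distractor"))
    return events


def t1_t2_interval_ms(events):
    """T1-T2 onset asynchrony; 200-500 ms falls inside the blink window."""
    onsets = {name: t for t, name in events if name in ("T1", "T2")}
    return onsets["T2"] - onsets["T1"]
```

For example, `make_trial(lag_items=3, sound=True, congruent=True)` yields a trial whose T1-T2 interval is 300 ms, inside the blink window, with the congruent sound sharing T2's onset; Experiment 2's asynchronous conditions would correspond to shifting the sound's onset by -200 or +100 ms relative to T2.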
Keywords/Search Tags:college students, audiovisual cross-modal integration, attention, ERPs, higher education