
Multi-sensory Modulation Of Degraded Visual Object Identification By Semantically Congruent Sound

Posted on: 2020-11-04    Degree: Doctor    Type: Dissertation
Country: China    Candidate: L Lu    Full Text: PDF
GTID: 1488306131967939    Subject: Computer application technology
Abstract/Summary:
To perceive our multisensory environment effectively, the human brain integrates information from multiple sources into a coherent percept. Multisensory integration facilitates the detection, identification, and categorization of objects and events. In some aspects of object identification, such as identifying objects in complex scenes, extracting semantics, and handling adversarial examples, multisensory integration in the human brain outperforms computers. Neurophysiological and functional imaging studies in human and nonhuman primates have revealed multisensory interactions in a widespread system encompassing sensory cortex, higher-order association cortices, and frontal brain areas. However, little is known about the neural mechanisms underlying multisensory integration for identifying degraded visual objects in complex scenes.

The present study designed a visual object identification experiment in complex scenes. Participants were presented with audio-visual stimuli in three modalities: auditory only (A), degraded visual only (V_d), and auditory with degraded visual presented simultaneously (AV_d). Functional magnetic resonance imaging (fMRI) was performed to assess the cognitive processing mechanisms and the characteristics of multisensory integration. The contributions and innovations are summarized as follows:

Firstly, conjunction analysis and the 'max criterion' rule were used to investigate integrative properties. Superadditive interactions were found in the visual association cortex, and subadditive interactions were observed in the superior temporal sulcus/superior temporal gyrus (STS/STG). These results demonstrate that the visual association cortex and STS/STG are involved in the integration of auditory and degraded visual information. In addition, the pattern classification results imply that semantically congruent sounds may facilitate identification of degraded images in both coarse and fine category groups. Importantly, when naturalistic visual stimuli were further subdivided, facilitation through auditory modulation exhibited category selectivity.

Secondly, we constructed an effective connectivity network with three audiovisual integration cortical nodes (the visual association cortex, STS, and Heschl's gyrus) to investigate the neuromodulation between sensory modalities. Dynamic causal modeling (DCM) was then used to infer effective connectivity between these regions. The results revealed that auditory stimulation increased connectivity from Heschl's gyrus to the visual association cortex and from STS to the visual association cortex, suggesting that the visual association cortex is modulated not only via feedback and top-down connections from higher-order convergence areas, but also via lateral feedforward connectivity from auditory cortex. These findings support interconnected models of cross-modal information integration.

Thirdly, we constructed a functional connectivity network to investigate network features and information processing mechanisms. The functional connectivity results showed that the prefrontal cortex, STS, and lateral occipital complex are core nodes of the multisensory network. In addition, only the STS was positively correlated with the sensory network nodes, suggesting that the multisensory network is organized in a hierarchical manner. These results conform to the distributed-plus-hub model.
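As an illustration of the 'max criterion' and superadditivity tests mentioned above, the sketch below checks, for each voxel, whether the AV_d response exceeds the larger unisensory response (max criterion), exceeds the sum of the unisensory responses (superadditive), or falls below that sum while still passing the max criterion (subadditive). The beta values, voxel counts, and function name are hypothetical; this is a minimal sketch of the criterion, not the analysis pipeline used in the dissertation.

```python
import numpy as np

def classify_integration(beta_a, beta_vd, beta_avd):
    """Classify each voxel's audiovisual interaction profile.

    beta_a, beta_vd, beta_avd: 1-D arrays of GLM beta estimates (one per voxel)
    for the auditory-only (A), degraded-visual-only (V_d), and combined (AV_d)
    conditions. All numbers in the example below are hypothetical.
    """
    unisensory_max = np.maximum(beta_a, beta_vd)
    unisensory_sum = beta_a + beta_vd

    meets_max_criterion = beta_avd > unisensory_max   # AV_d > max(A, V_d)
    superadditive = beta_avd > unisensory_sum         # AV_d > A + V_d
    subadditive = meets_max_criterion & (beta_avd < unisensory_sum)

    return meets_max_criterion, superadditive, subadditive

# Toy example with three hypothetical voxels.
a   = np.array([0.8, 0.5, 0.3])
vd  = np.array([0.4, 0.6, 0.7])
avd = np.array([1.5, 0.9, 0.8])
print(classify_integration(a, vd, avd))
```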
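The three-node DCM described above can be summarized by its connectivity matrices. The sketch below lays out hypothetical intrinsic (A), modulatory (B), and driving-input (C) matrices for Heschl's gyrus, STS, and the visual association cortex, with the auditory input modulating the HG-to-VAC and STS-to-VAC connections reported in the results. The node ordering, the rows-as-target convention, and the binary values are assumptions for illustration only, not the estimated model.

```python
import numpy as np

# Assumed node order: 0 = Heschl's gyrus (HG), 1 = STS, 2 = visual association cortex (VAC)
nodes = ["HG", "STS", "VAC"]

# Intrinsic connectivity A: rows = target, columns = source (1 = connection allowed).
A = np.array([
    [1, 1, 0],   # HG  <- HG, STS
    [1, 1, 1],   # STS <- HG, STS, VAC
    [1, 1, 1],   # VAC <- HG, STS, VAC
])

# Modulation B by the auditory stimulus: the reported effect is increased
# HG -> VAC (lateral feedforward) and STS -> VAC (feedback) coupling.
B = np.zeros((3, 3), dtype=int)
B[2, 0] = 1   # VAC <- HG, modulated by sound
B[2, 1] = 1   # VAC <- STS, modulated by sound

# Driving inputs C: auditory stimulation enters HG, degraded visual input enters VAC.
C = np.array([
    [1, 0],   # HG  <- auditory input
    [0, 0],   # STS
    [0, 1],   # VAC <- degraded visual input
])
```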
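For the hub analysis described above, a common way to identify core nodes is to correlate region-averaged time series and rank regions by their weighted degree. The sketch below is a minimal, generic version of that idea; the correlation threshold, region set, and function name are assumptions and do not reproduce the dissertation's actual network analysis.

```python
import numpy as np

def hub_strength(timeseries, threshold=0.3):
    """Rank candidate hub regions from region-averaged BOLD time series.

    timeseries: array of shape (n_timepoints, n_regions).
    Returns each region's weighted degree (sum of supra-threshold correlations),
    a simple proxy for 'core node' status. The threshold value is an assumption.
    """
    corr = np.corrcoef(timeseries.T)          # region-by-region correlation matrix
    np.fill_diagonal(corr, 0.0)               # ignore self-connections
    corr[np.abs(corr) < threshold] = 0.0      # keep only stronger edges
    return corr.sum(axis=1)

# Toy example: 200 timepoints, 5 hypothetical regions (e.g. PFC, STS, LOC, A1, V1).
rng = np.random.default_rng(0)
ts = rng.standard_normal((200, 5))
print(hub_strength(ts))
```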
In conclusion, our study investigated the characteristics of functional segregation in multisensory interaction regions, especially facilitation and superadditive interactions in sensory cortex. Our network results revealed that this multisensory information processing mechanism is in agreement with interconnected hierarchical processing models. These results offer a new perspective and have great practical significance for computer vision.
Keywords/Search Tags: Multi-sensory, Facilitation, Superadditive interaction, Feedback, Feedforward, Interconnected model