Crossmodal associations modulate multisensory spatial integration.
Attention, Perception, & Psychophysics (IF 1.7). Pub Date: 2020-07-05. DOI: 10.3758/s13414-020-02083-2
Jonathan Tong, Lux Li, Patrick Bruns, Brigitte Röder

According to the Bayesian framework of multisensory integration, audiovisual stimuli associated with a stronger prior belief that they share a common cause (i.e., causal prior) are predicted to result in a greater degree of perceptual binding and therefore greater audiovisual integration. In the present psychophysical study, we systematically manipulated the causal prior while keeping sensory evidence constant. We paired auditory and visual stimuli during an association phase to be spatiotemporally either congruent or incongruent, with the goal of driving the causal prior in opposite directions for different audiovisual pairs. Following this association phase, every pairwise combination of the auditory and visual stimuli was tested in a typical ventriloquism-effect (VE) paradigm. The size of the VE (i.e., the shift of auditory localization towards the spatially discrepant visual stimulus) indicated the degree of multisensory integration. Results showed that exposure to an audiovisual pairing as spatiotemporally congruent compared to incongruent resulted in a larger subsequent VE (Experiment 1). This effect was further confirmed in a second VE paradigm, where the congruent and the incongruent visual stimuli flanked the auditory stimulus, and a VE in the direction of the congruent visual stimulus was shown (Experiment 2). Since the unisensory reliabilities for the auditory or visual components did not change after the association phase, the observed effects are likely due to changes in multisensory binding by association learning. As suggested by Bayesian theories of multisensory processing, our findings support the existence of crossmodal causal priors that are flexibly shaped by experience in a changing world.
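The causal-prior mechanism described above can be made concrete with a minimal sketch of the standard Bayesian causal-inference model of multisensory perception (in the style of Körding et al., 2007). This is not the authors' own analysis code; the parameter values and the Gaussian spatial prior are illustrative assumptions. The sketch shows how raising the causal prior `p_common` (the belief that the auditory and visual cues share a common cause) increases the shift of the auditory estimate toward the visual stimulus, i.e., produces a larger ventriloquism effect even when the unisensory reliabilities are held constant.

```python
import math

def bci_auditory_estimate(x_a, x_v, sigma_a, sigma_v, p_common,
                          mu_p=0.0, sigma_p=20.0):
    """Model-averaged auditory location estimate under Bayesian causal
    inference. x_a, x_v: noisy auditory and visual observations (deg);
    sigma_a, sigma_v: their noise SDs; p_common: the causal prior p(C=1);
    (mu_p, sigma_p): an assumed Gaussian prior over source locations."""
    va, vv, vp = sigma_a ** 2, sigma_v ** 2, sigma_p ** 2

    # Likelihood of the observation pair under one common cause (C=1).
    denom1 = va * vv + va * vp + vv * vp
    like_c1 = math.exp(-0.5 * ((x_a - x_v) ** 2 * vp
                               + (x_a - mu_p) ** 2 * vv
                               + (x_v - mu_p) ** 2 * va) / denom1) \
        / (2 * math.pi * math.sqrt(denom1))

    # Likelihood under two independent causes (C=2).
    like_c2 = math.exp(-0.5 * ((x_a - mu_p) ** 2 / (va + vp)
                               + (x_v - mu_p) ** 2 / (vv + vp))) \
        / (2 * math.pi * math.sqrt((va + vp) * (vv + vp)))

    # Posterior probability of a common cause, combining the causal
    # prior with the sensory evidence (spatial disparity).
    post_c1 = (like_c1 * p_common
               / (like_c1 * p_common + like_c2 * (1 - p_common)))

    # Optimal estimates under each causal structure:
    # fused (reliability-weighted) if C=1, auditory-alone if C=2.
    s_fused = ((x_a / va + x_v / vv + mu_p / vp)
               / (1 / va + 1 / vv + 1 / vp))
    s_alone = (x_a / va + mu_p / vp) / (1 / va + 1 / vp)

    # Model averaging: weight each estimate by its posterior probability.
    return post_c1 * s_fused + (1 - post_c1) * s_alone

# A stronger causal prior yields a larger ventriloquism shift toward
# the visual stimulus (auditory cue at -5 deg, visual cue at +5 deg):
ve_weak = bci_auditory_estimate(-5.0, 5.0, 10.0, 2.0, p_common=0.1) - (-5.0)
ve_strong = bci_auditory_estimate(-5.0, 5.0, 10.0, 2.0, p_common=0.9) - (-5.0)
```

In this sketch, the association phase of the study would correspond to changing only `p_common` between audiovisual pairs, leaving `sigma_a` and `sigma_v` untouched, which is exactly the dissociation the experiments report: the VE changes while unisensory reliabilities do not.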




Updated: 2020-07-05