Speech and non-speech measures of audiovisual integration are not correlated
Attention, Perception, & Psychophysics (IF 1.7). Pub Date: 2022-05-24. DOI: 10.3758/s13414-022-02517-z
Jonathan M. P. Wilbiks, Violet A. Brown, Julia F. Strand

Many natural events generate both visual and auditory signals, and humans are remarkably adept at integrating information from those sources. However, individuals appear to differ markedly in their ability or propensity to combine what they hear with what they see. Individual differences in audiovisual integration have been established using a range of materials, including speech stimuli (seeing and hearing a talker) and simpler audiovisual stimuli (seeing flashes of light combined with tones). Although there are multiple tasks in the literature that are referred to as "measures of audiovisual integration," the tasks differ widely with respect to both the type of stimuli used (speech versus non-speech) and the nature of the tasks themselves (e.g., some tasks use conflicting auditory and visual stimuli whereas others use congruent stimuli). It is not clear whether these varied tasks are actually measuring the same underlying construct: audiovisual integration. This study tested the relationships among four commonly used measures of audiovisual integration, two of which use speech stimuli (susceptibility to the McGurk effect and a measure of audiovisual benefit), and two of which use non-speech stimuli (the sound-induced flash illusion and audiovisual integration capacity). We replicated previous work showing large individual differences in each measure but found no significant correlations among any of the measures. These results suggest that tasks commonly referred to as measures of audiovisual integration may be tapping into different parts of the same process, or into different constructs entirely.



Updated: 2022-05-25