Crossmodal Phase Reset and Evoked Responses Provide Complementary Mechanisms for the Influence of Visual Speech in Auditory Cortex
Journal of Neuroscience (IF 5.3), Pub Date: 2020-10-28, DOI: 10.1523/jneurosci.0555-20.2020
Pierre Mégevand 1, 2, 3 , Manuel R Mercier 4, 5, 6 , David M Groppe 1, 2, 7 , Elana Zion Golumbic 8 , Nima Mesgarani 9 , Michael S Beauchamp 10 , Charles E Schroeder 11, 12 , Ashesh D Mehta 2, 13
Natural conversation is multisensory: when we can see the speaker's face, visual speech cues improve our comprehension. The neuronal mechanisms underlying this phenomenon remain unclear. The two main alternatives are visually mediated phase modulation of neuronal oscillations (excitability fluctuations) in auditory neurons and visual input-evoked responses in auditory neurons. Investigating this question using naturalistic audiovisual speech with intracranial recordings in humans of both sexes, we find evidence for both mechanisms. Remarkably, auditory cortical neurons track the temporal dynamics of purely visual speech using the phase of their slow oscillations and phase-related modulations in broadband high-frequency activity. Consistent with known perceptual enhancement effects, the visual phase reset amplifies the cortical representation of concomitant auditory speech. In contrast to this, and in line with earlier reports, visual input reduces the amplitude of evoked responses to concomitant auditory input. We interpret the combination of improved phase tracking and reduced response amplitude as evidence for more efficient and reliable stimulus processing in the presence of congruent auditory and visual speech inputs.

SIGNIFICANCE STATEMENT Watching the speaker can facilitate our understanding of what is being said. The mechanisms responsible for this influence of visual cues on the processing of speech remain incompletely understood. We studied these mechanisms by recording the electrical activity of the human brain through electrodes implanted surgically inside the brain. We found that visual inputs can operate by directly activating auditory cortical areas, and also indirectly by modulating the strength of cortical responses to auditory input. Our results help to understand the mechanisms by which the brain merges auditory and visual speech into a unitary perception.


Updated: 2020-10-30