Crossmodal Phase Reset and Evoked Responses Provide Complementary Mechanisms for the Influence of Visual Speech in Auditory Cortex
Journal of Neuroscience (IF 4.4), Pub Date: 2020-10-28, DOI: 10.1523/jneurosci.0555-20.2020
Pierre Mégevand 1, 2, 3 , Manuel R Mercier 4, 5, 6 , David M Groppe 1, 2, 7 , Elana Zion Golumbic 8 , Nima Mesgarani 9 , Michael S Beauchamp 10 , Charles E Schroeder 11, 12 , Ashesh D Mehta 2, 13
Natural conversation is multisensory: when we can see the speaker's face, visual speech cues improve our comprehension. The neuronal mechanisms underlying this phenomenon remain unclear. The two main alternatives are visually mediated phase modulation of neuronal oscillations (excitability fluctuations) in auditory neurons and visual input-evoked responses in auditory neurons. Investigating this question using naturalistic audiovisual speech with intracranial recordings in humans of both sexes, we find evidence for both mechanisms. Remarkably, auditory cortical neurons track the temporal dynamics of purely visual speech using the phase of their slow oscillations and phase-related modulations in broadband high-frequency activity. Consistent with known perceptual enhancement effects, the visual phase reset amplifies the cortical representation of concomitant auditory speech. In contrast to this, and in line with earlier reports, visual input reduces the amplitude of evoked responses to concomitant auditory input. We interpret the combination of improved phase tracking and reduced response amplitude as evidence for more efficient and reliable stimulus processing in the presence of congruent auditory and visual speech inputs.

SIGNIFICANCE STATEMENT Watching the speaker can facilitate our understanding of what is being said. The mechanisms responsible for this influence of visual cues on the processing of speech remain incompletely understood. We studied these mechanisms by recording the electrical activity of the human brain through electrodes implanted surgically inside the brain. We found that visual inputs can operate by directly activating auditory cortical areas, and also indirectly by modulating the strength of cortical responses to auditory input. Our results help to understand the mechanisms by which the brain merges auditory and visual speech into a unitary perception.



Updated: 2020-10-30