Responses to Visual Speech in Human Posterior Superior Temporal Gyrus Examined with iEEG Deconvolution
Journal of Neuroscience (IF 5.3), Pub Date: 2020-09-02, DOI: 10.1523/jneurosci.0279-20.2020
Brian A. Metzger, John F. Magnotti, Zhengjia Wang, Elizabeth Nesbitt, Patrick J. Karas, Daniel Yoshor, Michael S. Beauchamp

Experimentalists studying multisensory integration compare neural responses to multisensory stimuli with responses to the component modalities presented in isolation. This procedure is problematic for multisensory speech perception since audiovisual speech and auditory-only speech are easily intelligible but visual-only speech is not. To overcome this confound, we developed intracranial electroencephalography (iEEG) deconvolution. Individual stimuli always contained both auditory and visual speech, but jittering the onset asynchrony between the modalities allowed the time course of each unisensory response, and the interaction between them, to be estimated independently. We applied this procedure to electrodes implanted in human epilepsy patients (both male and female) over the posterior superior temporal gyrus (pSTG), a brain area known to be important for speech perception. iEEG deconvolution revealed sustained positive responses to visual-only speech and larger, phasic responses to auditory-only speech. Confirming results from scalp EEG, responses to audiovisual speech were weaker than responses to auditory-only speech, demonstrating a subadditive multisensory neural computation. Leveraging the spatial resolution of iEEG, we extended these results to show that subadditivity is most pronounced in more posterior aspects of the pSTG. Across electrodes, subadditivity correlated with visual responsiveness, supporting a model in which visual speech enhances the efficiency of auditory speech processing in pSTG. The ability to separate neural processes may make iEEG deconvolution useful for studying a variety of complex cognitive and perceptual tasks.
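
To make the deconvolution design concrete, the following is a minimal Python sketch of the idea described above, not the authors' actual analysis pipeline. Every simulated trial contains both a visual and an auditory event, but the auditory onset is jittered relative to the visual onset; this decorrelates the lagged-indicator (FIR) regressors of the two modalities and lets ordinary least squares recover each unisensory response kernel from audiovisual-only stimuli. All parameters, kernel shapes, and names (fs, fir_design, and so on) are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
fs = 100                       # sampling rate in Hz (assumed)
n_trials = 60
trial_len = 3 * fs             # 3 s per trial (assumed)
kern_len = 1 * fs              # estimate 1 s response kernels

def fir_design(onsets, n_samples, kern_len):
    # Lagged indicator (FIR) regressors: one column per post-onset lag.
    X = np.zeros((n_samples, kern_len))
    for t0 in onsets:
        for lag in range(kern_len):
            if t0 + lag < n_samples:
                X[t0 + lag, lag] += 1.0
    return X

# Ground-truth kernels mimicking the abstract: a sustained visual
# response and a larger, phasic auditory response.
t = np.arange(kern_len)
true_vis = 0.5 * np.exp(-t / (0.4 * fs))
true_aud = 1.5 * np.sin(np.pi * t / (0.2 * fs)) * (t < 0.2 * fs)

n = n_trials * trial_len
y = np.zeros(n)
vis_onsets, aud_onsets = [], []
for i in range(n_trials):
    v = i * trial_len + fs // 2                 # visual onset
    a = v + int(rng.integers(0, fs // 2))       # jittered auditory onset
    vis_onsets.append(v)
    aud_onsets.append(a)
    y[v:v + kern_len] += true_vis
    y[a:a + kern_len] += true_aud
y += 0.2 * rng.standard_normal(n)               # measurement noise

# Because the audiovisual asynchrony varies across trials, the two FIR
# blocks are not collinear and both kernels can be estimated jointly.
X = np.hstack([fir_design(vis_onsets, n, kern_len),
               fir_design(aud_onsets, n, kern_len)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
vis_hat, aud_hat = beta[:kern_len], beta[kern_len:]

In the same framework, a third FIR block time-locked to the period of audiovisual overlap could estimate the multisensory interaction; the subadditivity reported for pSTG would then appear as negative weights in that block, and its relationship to the magnitude of the visual kernel could be examined across electrodes.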

SIGNIFICANCE STATEMENT Understanding speech is one of the most important human abilities. Speech perception uses information from both the auditory and visual modalities. It has been difficult to study neural responses to visual speech because visual-only speech, unlike auditory-only and audiovisual speech, is difficult or impossible to comprehend. We used intracranial electroencephalography (iEEG) deconvolution to overcome this obstacle. We found that visual speech evokes a positive response in the human posterior superior temporal gyrus, enhancing the efficiency of auditory speech processing.




Updated: 2020-09-02