Neural Correlates of Phonetic Adaptation as Induced by Lexical and Audiovisual Context.
Journal of Cognitive Neuroscience (IF 3.1), Pub Date: 2020-07-14, DOI: 10.1162/jocn_a_01608
Shruti Ullas, Lars Hausfeld, Anne Cutler, Frank Eisner, Elia Formisano

When speech perception is difficult, one way listeners adjust is by reconfiguring phoneme category boundaries, drawing on contextual information. Both lexical knowledge and lipreading cues are used in this way, but it remains unknown whether these two differing forms of perceptual learning are similar at a neural level. This study compared phoneme boundary adjustments driven by lexical or audiovisual cues, using ultra-high-field 7-T fMRI. During imaging, participants heard exposure stimuli and test stimuli. Exposure stimuli for lexical retuning were audio recordings of words, and those for audiovisual recalibration were audio–video recordings of lip movements during utterances of pseudowords. Test stimuli were ambiguous phonetic strings presented without context, and listeners reported which phoneme they heard. Reports reflected phoneme biases in the preceding exposure blocks (e.g., more reported /p/ after /p/-biased exposure). Analysis of the corresponding brain responses indicated that both forms of cue use were associated with a network of activity across the temporal cortex as well as parietal, insular, and motor areas. Audiovisual recalibration also elicited significant occipital cortex activity despite the lack of visual stimuli. Activity levels in several ROIs also covaried with the strength of audiovisual recalibration, with greater activity accompanying larger recalibration shifts. Similar activation patterns appeared for lexical retuning, but here no significant ROIs were identified. Audiovisual and lexical forms of perceptual learning thus induce largely similar brain response patterns. However, audiovisual recalibration involves additional visual cortex contributions, suggesting that previously acquired visual information (on lip movements) is retrieved and deployed to disambiguate auditory perception.




Updated: 2020-08-20