Nevertheless, it persists: Dimension-based statistical learning and normalization of speech impact different levels of perceptual processing.
Cognition (IF 4.011) Pub Date: 2020-06-05, DOI: 10.1016/j.cognition.2020.104328
Matthew Lehet, Lori L. Holt

Speech is notoriously variable, with no simple mapping from acoustics to linguistically meaningful units like words and phonemes. Empirical research on this theoretically central issue establishes at least two classes of perceptual phenomena that accommodate acoustic variability: normalization and perceptual learning. Intriguingly, perceptual learning is supported by learning across acoustic variability, whereas normalization is thought to counteract acoustic variability, leaving open questions about how these two phenomena might interact. Here, we examine the joint impact of normalization and perceptual learning on how acoustic dimensions map to vowel categories. As listeners categorized nonwords as setch or satch, they experienced a shift in short-term distributional regularities across the vowels' acoustic dimensions. Introduction of this 'artificial accent' resulted in a shift in the contribution of vowel duration to categorization. Although this dimension-based statistical learning altered the influence of vowel duration on vowel categorization, the duration of these very same vowels nonetheless maintained a consistent influence on categorization of a subsequent consonant via duration contrast, a form of normalization. Thus, vowel duration had a duplex role, consistent with normalization and perceptual learning operating at distinct levels of the processing hierarchy. We posit that whereas normalization operates across auditory dimensions, dimension-based statistical learning impacts the connection weights between auditory dimensions and phonetic categories.




Updated: 2020-06-05