Cross-modal transfer of talker-identity learning
Attention, Perception, & Psychophysics (IF 1.7). Pub Date: 2020-10-20. DOI: 10.3758/s13414-020-02141-9
Dominique Simmons, Josh Dorsi, James W. Dias, Lawrence D. Rosenblum

A speech signal carries information about meaning and about the talker conveying that meaning. It is now known that these two dimensions are related. There is evidence that gaining experience with a particular talker in one modality not only facilitates better phonetic perception in that modality, but also transfers across modalities to allow better phonetic perception in the other. This finding suggests that experience with a talker provides familiarity with some amodal properties of their articulation, such that the experience can be shared across modalities. The present study investigates whether experience with talker-specific articulatory information can also support cross-modal talker learning. In Experiment 1, we show that participants can learn to identify ten novel talkers from point-light and sinewave speech, expanding on prior work. Point-light and sinewave speech also supported similar talker-identification accuracies, and similar patterns of talker confusions were found across stimulus types. Experiment 2 showed that these stimuli could also support cross-modal talker matching, further expanding on prior work. Finally, in Experiment 3, we show that learning to identify talkers in one modality (visual-only point-light speech) facilitates learning of those same talkers in another modality (auditory-only sinewave speech). These results suggest that some of the information for talker identity takes a modality-independent form.


