Learning words in space and time: Contrasting models of the suspicious coincidence effect
Cognition (IF 4.011), Pub Date: 2021-02-01, DOI: 10.1016/j.cognition.2020.104576
Gavin W. Jenkins, Larissa K. Samuelson, Will Penny, John P. Spencer
In their 2007b Psychological Review paper, Xu and Tenenbaum found that early word learning follows the classic logic of the “suspicious coincidence effect:” when presented with a novel name (‘fep’) and three identical exemplars (three Labradors), word learners generalized novel names more narrowly than when presented with a single exemplar (one Labrador). Xu and Tenenbaum predicted the suspicious coincidence effect based on a Bayesian model of word learning and demonstrated that no other theory captured this effect. Recent empirical studies have revealed, however, that the effect is influenced by factors seemingly outside the purview of the Bayesian account. A process-based perspective correctly predicted that when exemplars are shown sequentially, the effect is eliminated or reversed (Spencer, Perone, Smith, & Samuelson, 2011). Here, we present a new, formal account of the suspicious coincidence effect using a generalization of a Dynamic Neural Field (DNF) model of word learning. The DNF model captures both the original finding and its reversal with sequential presentation. We compare the DNF model's performance with that of a more flexible version of the Bayesian model that allows both strong and weak sampling assumptions. Model comparison results show that the dynamic field account provides a better fit to the empirical data. We discuss the implications of the DNF model with respect to broader contrasts between Bayesian and process-level models.
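The suspicious coincidence effect follows from the "size principle" in Xu and Tenenbaum's Bayesian account: under strong sampling, the likelihood of observing n exemplars consistent with a hypothesis h is proportional to (1/|h|)^n, so additional identical exemplars increasingly favor the narrowest consistent hypothesis. A minimal sketch of this computation, using hypothetical priors and extension sizes (not the authors' fitted model):

```python
# Sketch of the size principle behind the suspicious coincidence effect.
# All numbers below are illustrative assumptions, not fitted parameters.
# Under strong sampling, P(data | h) = (1 / |h|)^n for n exemplars drawn
# from hypothesis h, so three identical Labradors make broad hypotheses
# ("dog", "animal") a suspicious coincidence relative to "labrador".

def posterior(hypotheses, n_exemplars):
    """Posterior over hypotheses after n identical consistent exemplars.

    hypotheses: dict mapping name -> (prior, extension size |h|).
    Returns a dict mapping name -> normalized posterior probability.
    """
    scores = {
        name: prior * (1.0 / size) ** n_exemplars
        for name, (prior, size) in hypotheses.items()
    }
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

# Hypothetical taxonomy: uniform priors, assumed extension sizes.
hyps = {
    "labrador": (1 / 3, 2),   # subordinate: narrow extension
    "dog":      (1 / 3, 10),  # basic level
    "animal":   (1 / 3, 40),  # superordinate: broad extension
}

one = posterior(hyps, n_exemplars=1)
three = posterior(hyps, n_exemplars=3)

# The narrow hypothesis dominates far more after three identical
# exemplars than after one -- the suspicious coincidence effect.
assert three["labrador"] > one["labrador"]
```

Note that this likelihood-based sharpening is exactly what sequential presentation disrupts in the empirical data: the static Bayesian computation is indifferent to when exemplars arrive, whereas the process-level DNF account is not.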




Updated: 2021-02-01