Successfully learning non-adjacent dependencies in a continuous artificial language stream
Cognitive Psychology (IF 3.0), Pub Date: 2019-09-01, DOI: 10.1016/j.cogpsych.2019.101223
Felix Hao Wang, Jason Zevin, Toben H. Mintz

Much of the statistical learning literature has focused on adjacent dependency learning, showing that learners are capable of extracting adjacent statistics from continuous language streams. In contrast, studies of non-adjacent dependency learning have yielded mixed results, with some showing success and others failure. We review the literature on non-adjacent dependency learning and examine various theories proposed to account for these results, including the proposed necessity of pauses in the learning stream, and proposals that adjacent and non-adjacent dependency learning compete, such that high variability of the middle elements is beneficial to learning. Here we challenge those accounts by showing successful learning of non-adjacent dependencies under conditions that are inconsistent with the predictions of previous theories. We show that non-adjacent dependencies are learnable without pauses at dependency edges in a variety of artificial language designs. Moreover, we find no evidence of a relationship between non-adjacent dependency learning and the robustness of adjacent statistics. We demonstrate that our two-step statistical learning model can account for all of our non-adjacent dependency learning results, and provides a unified learning account of adjacent and non-adjacent dependency learning. Finally, we discuss the theoretical implications of our findings for natural language acquisition, and argue that the dependency learning process can be a precursor to other language acquisition tasks that are vital to natural language acquisition.
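To make the contrast between adjacent and non-adjacent statistics concrete, the sketch below is a minimal illustration (not the authors' model or stimuli): it builds a toy continuous stream from hypothetical A_X_B frames with variable middle elements, then compares the adjacent (lag-1) and non-adjacent (lag-2) transitional probabilities a learner could in principle track.

```python
# Illustrative sketch only: adjacent vs. non-adjacent transitional probabilities
# over a toy continuous artificial language stream (no pauses). The frame and
# middle-element items (a1, b1, x1, ...) are hypothetical placeholders.
import random
from collections import Counter

random.seed(0)

# Hypothetical A_X_B frames with variable middle elements.
frames = [("a1", "b1"), ("a2", "b2"), ("a3", "b3")]
middles = ["x1", "x2", "x3", "x4"]

# Build a continuous stream of 300 trisyllabic words, concatenated without pauses.
stream = []
for _ in range(300):
    a, b = random.choice(frames)
    stream.extend([a, random.choice(middles), b])

# Co-occurrence counts at lag 1 (adjacent) and lag 2 (non-adjacent).
adjacent = Counter(zip(stream, stream[1:]))
nonadjacent = Counter(zip(stream, stream[2:]))
unigrams = Counter(stream)

def transitional_prob(counts, first, second):
    """Estimate P(second | first) from co-occurrence counts."""
    return counts[(first, second)] / unigrams[first]

# The non-adjacent dependency a1 ... b1 is fully predictive, while adjacent
# transitions out of a1 are diluted across the variable middle elements.
print("P(b1 | a1, lag 2) =", round(transitional_prob(nonadjacent, "a1", "b1"), 2))
print("P(x1 | a1, lag 1) =", round(transitional_prob(adjacent, "a1", "x1"), 2))
```

Run as-is, this prints a lag-2 probability near 1.0 and a lag-1 probability near 0.25, illustrating why high variability of middle elements has been proposed to favor non-adjacent dependency learning; the paper's own two-step model is not reproduced here.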

Updated: 2019-09-01