Tuning in to non-adjacencies: Exposure to learnable patterns supports discovering otherwise difficult structures.
Cognition Pub Date : 2020-07-02 , DOI: 10.1016/j.cognition.2020.104283
Martin Zettersten, Christine E. Potter, Jenny R. Saffran
Non-adjacent dependencies are ubiquitous in language, but difficult to learn in artificial language experiments in the lab. Previous research suggests that non-adjacent dependencies are more learnable given structural support in the input - for instance, in the presence of high variability between dependent items. However, not all non-adjacent dependencies occur in supportive contexts. How are such regularities learned? One possibility is that learning one set of non-adjacent dependencies can highlight similar structures in subsequent input, facilitating the acquisition of new non-adjacent dependencies that are otherwise difficult to learn. In three experiments, we show that prior exposure to learnable non-adjacent dependencies - i.e., dependencies presented in a learning context that has been shown to facilitate discovery - improves learning of novel non-adjacent regularities that are typically not detected. These findings demonstrate how the discovery of complex linguistic structures can build on past learning in supportive contexts.
