Is Regularization Uniform across Linguistic Levels? Comparing Learning and Production of Unconditioned Probabilistic Variation in Morphology and Word Order
Language Learning and Development (IF 1.5) Pub Date: 2021-02-19, DOI: 10.1080/15475441.2021.1876697
Carmen Saldana, Kenny Smith, Simon Kirby, Jennifer Culbertson

ABSTRACT

Languages exhibit variation at all linguistic levels, from phonology, to the lexicon, to syntax. Importantly, that variation tends to be (at least partially) conditioned on some aspect of the social or linguistic context. When variation is unconditioned, language learners regularize it – removing some or all variants, or conditioning variant use on context. Previous studies using artificial language learning experiments have documented regularizing behavior in the learning of lexical, morphological, and syntactic variation. These studies implicitly assume that regularization reflects uniform mechanisms and processes across linguistic levels. However, studies on natural language learning and pidgin/creole formation suggest that morphological and syntactic variation may be treated differently. In particular, there is evidence that morphological variation may be more susceptible to regularization. Here we provide the first systematic comparison of the strength of regularization across these two linguistic levels. In line with previous studies, we find that the presence of a favored variant can induce different degrees of regularization. However, when input languages are carefully matched – with comparable initial variability, and no variant-specific biases – regularization can be comparable across morphology and word order. This is the case regardless of whether the task is explicitly communicative. Overall, our findings suggest an overarching regularizing mechanism at work, with apparent differences among levels likely due to differences in inherent complexity or variant-specific biases. Differences between production and encoding in our tasks further suggest this overarching mechanism is driven by production.




Updated: 2021-04-26