Gradual syntactic triggering: The gradient parameter hypothesis
Language Acquisition (IF 1.600), Pub Date: 2020-10-07, DOI: 10.1080/10489223.2020.1803329
Katherine Howitt, Soumik Dey, William Gregory Sakas

ABSTRACT

In this article, we propose a reconceptualization of the principles and parameters (P&P) framework. We argue that, in lieu of discrete parameter values, a parameter value exists on a gradient plane that encodes a learner's confidence that a particular parametric structure licenses the utterances in the learner's linguistic input. Crucially, this gradient parameter hypothesis obviates the need for default parameter values. Default parameter values can be put to effective use from the perspective of linguistic learnability, but they lack empirical and theoretical consistency. We present findings from a computational implementation of a gradient P&P learner. The findings suggest that the gradient parameter hypothesis provides the basis for a viable alternative to existing computational models of language acquisition in the classic P&P paradigm. We close with a brief discussion of how a gradient parameter space offers a path to addressing shortcomings that have been attributed to the P&P framework.
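
To make the proposal concrete, the sketch below is a minimal illustration, not the paper's actual model: the parameter names, the toy licensing check, and the learning rate are all illustrative assumptions. It shows one way a learner could maintain gradient weights in place of discrete parameter values, sampling a grammar from its current weights and, whenever that grammar licenses the input, nudging each weight toward the sampled setting. No default values are needed beyond a neutral starting point.

```python
import random

# Minimal sketch of a gradient P&P learner, NOT the authors' implementation.
# Assumptions (all illustrative): two binary parameters ("V2",
# "null_subject"), a toy licensing check, and a small learning rate GAMMA.
# Each parameter carries a gradient weight in [0, 1] encoding the learner's
# confidence that the value 1 licenses the input; weights start at a
# neutral 0.5, so no default parameter values are required.

GAMMA = 0.02  # learning rate (assumed value)


def sample_grammar(weights):
    """Sample a discrete grammar: each parameter is set to 1 with
    probability equal to its current gradient weight."""
    return {p: 1 if random.random() < w else 0 for p, w in weights.items()}


def licenses(grammar, utterance):
    """Toy stand-in for parsing: the grammar licenses the utterance iff
    it matches every parameter value the utterance demands (None means
    the utterance is uninformative about that parameter)."""
    return all(v is None or grammar[p] == v for p, v in utterance.items())


def reward(weights, grammar):
    """On a successful parse, nudge each weight toward the setting used
    in the sampled grammar (a linear reward-style update)."""
    for p in weights:
        weights[p] += GAMMA * (grammar[p] - weights[p])


# Hypothetical target language: V2 = 1, null_subject = 0; some utterances
# are ambiguous with respect to null_subject.
corpus = [
    {"V2": 1, "null_subject": 0},
    {"V2": 1, "null_subject": None},
]

weights = {"V2": 0.5, "null_subject": 0.5}
for _ in range(5000):
    grammar = sample_grammar(weights)
    if licenses(grammar, random.choice(corpus)):
        reward(weights, grammar)

print(weights)  # V2 should drift toward 1.0, null_subject toward 0.0
```

In a scheme of this kind the weights themselves are the learner's hypothesis: acquisition is gradual movement along the gradient rather than a discrete switch between parameter values.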




Updated: 2020-10-07