Production without rules: Using an instance memory model to exploit structure in natural language
Journal of Memory and Language (IF 4.3). Pub Date: 2020-12-01. DOI: 10.1016/j.jml.2020.104165
Brendan T. Johns , Randall K. Jamieson , Matthew J.C. Crump , Michael N. Jones , D.J.K. Mewhort

Abstract: Recent research in the artificial grammar learning literature has shown that a simple instance model of memory can account for a wide variety of artificial grammar results (Jamieson & Mewhort, 2009, 2010, 2011), indicating that language processing may have more in common with episodic memory than previously thought. These results have been used to develop new instance models of natural language processing, including a model of sentence comprehension (Johns & Jones, 2015) and semantic memory (Jamieson, Avery, Johns, & Jones, 2018). The foundations of the models lie in the storage and retrieval of episodic traces of linguistic experience. The current research extends the idea to account for natural language sentence production. We show that the structure of language itself provides sufficient information to generate syntactically correct sentences, even with no higher-level information (such as knowledge of grammatical classes) available to the model. Additionally, we demonstrate that the model can account for a variety of effects from the structural priming literature (e.g., Bock, 1986). This work provides insight into the highly structured nature of natural language and shows that instance memory models are a powerful class of model for exploiting this structure. Additionally, it demonstrates the utility of using the formalisms developed in episodic memory research to understand performance in other domains, such as language processing.
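The retrieval mechanism at the heart of these instance models can be illustrated with a minimal sketch. This is not the paper's implementation; it is a MINERVA 2-style retrieval scheme (Hintzman, 1984) of the kind the Jamieson and Mewhort models build on, where every stored trace is activated in parallel by its similarity to a probe and the "echo" aggregates the activated traces. The trace vectors, their sizes, and the cubing exponent below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative episodic memory: each row is one stored trace (standing in
# for a vector encoding of an experienced word sequence). Random ternary
# vectors substitute for real linguistic encodings.
n_traces, n_features = 100, 20
memory = rng.choice([-1.0, 0.0, 1.0], size=(n_traces, n_features))

def echo(probe, memory, tau=3):
    """MINERVA 2-style retrieval: activate every trace in proportion to
    its (cubed) similarity to the probe, then return the
    activation-weighted sum of all traces (the echo)."""
    sims = memory @ probe / memory.shape[1]     # similarity of each trace
    acts = np.sign(sims) * np.abs(sims) ** tau  # nonlinear activation
    return acts @ memory                        # aggregate echo

# Probing with a stored trace: the echo should resemble that trace far
# more than an unrelated one, since the cubed similarity concentrates
# the retrieval on the best-matching instances.
probe = memory[0]
e = echo(probe, memory)
```

The nonlinear activation (here, cubing the similarity) is what lets retrieval sharpen toward the most similar stored instances; in a production setting, the echo over traces that match the sentence context would then favor continuations consistent with previously experienced word sequences.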

Updated: 2020-12-01