Givenness Hierarchy Theoretic Cognitive Status Filtering
arXiv - CS - Robotics Pub Date : 2020-05-22 , DOI: arxiv-2005.11267
Poulomi Pal, Lixiao Zhu, Andrea Golden-Lasher, Akshay Swaminathan, Tom Williams

For language-capable interactive robots to be effectively introduced into human society, they must be able to naturally and efficiently communicate about the objects, locations, and people found in human environments. An important aspect of natural language communication is the use of pronouns. According to the linguistic theory of the Givenness Hierarchy (GH), humans use pronouns due to implicit assumptions about the cognitive statuses their referents have in the minds of their conversational partners. In previous work, Williams et al. presented the first computational implementation of the full GH for the purpose of robot language understanding, leveraging a set of rules informed by the GH literature. However, that approach was designed specifically for language understanding, oriented around GH-inspired memory structures used to assess which entities are candidate referents given a particular cognitive status. In contrast, language generation requires a model in which cognitive status can be assessed for a given entity. We present and compare two such models of cognitive status: a rule-based Finite State Machine model directly informed by the GH literature, and a Cognitive Status Filter designed to more flexibly handle uncertainty. The models are demonstrated and evaluated using a silver-standard English subset of the OFAI Multimodal Task Description Corpus.

Updated: 2020-05-25