Learning Actions from Natural Language Instructions Using an On-World Embodied Cognitive Architecture
Frontiers in Neurorobotics ( IF 3.1 ) Pub Date : 2021-04-08 , DOI: 10.3389/fnbot.2021.626380
Ioanna Giorgi , Angelo Cangelosi , Giovanni L. Masala

Endowing robots with the ability to view the world the way humans do, to understand natural language, and to learn novel semantic meanings when they are deployed in the physical world is a compelling problem. Another significant aspect is linking language to action in artificial agents, in particular for utterances involving abstract words. In this work, we propose a novel methodology, using a brain-inspired architecture, to model an appropriate mapping of language onto the percepts and internal motor representations of humanoid robots. This research presents the first robotic instantiation of a complex architecture based on Baddeley's Working Memory (WM) model. Our proposed method enables a scalable knowledge representation of verbal and nonverbal signals in the cognitive architecture, which supports incremental open-ended learning. Human spoken utterances about the workspace and the task are combined with the internal knowledge map of the robot to achieve task accomplishment goals. We train the robot to understand instructions involving higher-order (abstract) linguistic concepts of developmental complexity, which cannot be directly grounded in the physical world and are not pre-defined in the robot's static self-representation. Our proposed interactive learning method allows flexible run-time acquisition of novel linguistic forms and real-world information, without training the cognitive model anew. Hence, the robot can adapt to new workspaces that include novel objects and task outcomes. We assess the potential of the proposed methodology in verification experiments with a humanoid robot. The obtained results suggest robust capabilities of the model to link language bi-directionally with the physical environment and to solve a variety of manipulation tasks, starting with limited knowledge and gradually learning from run-time interaction with the tutor, beyond the pre-trained stage.
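The abstract's central claim of run-time acquisition of novel word-action mappings without retraining can be illustrated with a deliberately simplified sketch. This is not the authors' architecture: the class, method names, and action primitives below are our own illustrative assumptions, showing only the general idea of a lexicon that grows incrementally during interaction with a tutor.

```python
# Toy illustration (not the paper's model): a minimal lexicon mapping
# instruction verbs to action primitives, extensible at run time
# without re-fitting the existing entries.

class InstructionLexicon:
    """Maps instruction verbs to callable action primitives."""

    def __init__(self):
        # Pre-trained stage: a small seed vocabulary.
        self.actions = {
            "grasp": lambda obj: f"closing gripper on {obj}",
            "lift": lambda obj: f"raising {obj}",
        }

    def execute(self, verb, obj):
        """Run a known action; return None for unknown verbs (ask the tutor)."""
        action = self.actions.get(verb)
        return None if action is None else action(obj)

    def learn(self, verb, primitive):
        """Run-time acquisition: add a novel linguistic form incrementally."""
        self.actions[verb] = primitive


lex = InstructionLexicon()
print(lex.execute("grasp", "ball"))  # -> closing gripper on ball
print(lex.execute("give", "ball"))   # -> None (unknown verb, tutor needed)
lex.learn("give", lambda obj: f"handing over {obj}")
print(lex.execute("give", "ball"))   # -> handing over ball
```

The design choice this sketch mirrors is that learning is additive: new entries extend the knowledge map rather than triggering a retraining pass over the whole model.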

Updated: 2021-04-08