A neural network model for learning 3D object representations through haptic exploration
Frontiers in Neurorobotics ( IF 2.6 ) Pub Date : 2021-02-23 , DOI: 10.3389/fnbot.2021.639001
Xiaogang Yan , Steven Mills , Alistair Knott

Humans initially learn about objects through the sense of touch, in a process called haptic exploration. In this paper, we present a neural network model of this learning process. The model implements two key assumptions. The first is that haptic exploration can be thought of as a type of navigation, in which the exploring hand plays the role of an autonomous agent and the explored object is this agent's local environment. In this scheme, the agent's movements are registered in the coordinate system of the hand, through slip sensors on the palm and fingers. Our second assumption is that the learning process rests heavily on a simple model of sequence learning, in which frequently encountered sequences of hand movements are encoded declaratively, as chunks. The geometry of the object being explored places constraints on possible movement sequences: our proposal is that representations of possible, or frequently attested, sequences implicitly encode the shape of the explored object, along with its haptic affordances. We evaluate our model in two ways. First, we assess how much information about the hand's actual location is conveyed by its internal representations of movement sequences. Second, we assess how effective the model's representations are in a reinforcement learning task, in which the agent must learn to reach a given location on an explored object. Both metrics validate the basic claims of the model. We also show that the model learns better if objects are asymmetrical or contain tactile landmarks, or if the navigating hand is articulated, which further constrains the movement sequences supported by the explored object.
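The core idea — that the statistics of movement sequences permitted by an object's geometry implicitly encode its shape — can be illustrated with a minimal, hypothetical sketch (not the paper's actual model, which is a neural network): objects are toy graphs of surface locations, a random "exploring hand" emits the moves the geometry allows, and frequently encountered move n-grams play the role of chunks. All names and the two toy objects below are illustrative assumptions.

```python
import random
from collections import Counter

def explore(adjacency, start, steps, rng):
    """Random haptic 'walk': at each step pick a move the object's
    geometry allows, and record the move label executed."""
    seq, node = [], start
    for _ in range(steps):
        move, nxt = rng.choice(sorted(adjacency[node].items()))
        seq.append(move)
        node = nxt
    return seq

def chunk_counts(seq, n=3):
    """Frequently encountered n-grams of moves stand in for chunks."""
    return Counter(tuple(seq[i:i + n]) for i in range(len(seq) - n + 1))

# Two toy 'objects': a closed ring (symmetric) and an open bar whose
# endpoints act like tactile landmarks that forbid some moves.
def ring(k):
    return {i: {"R": (i + 1) % k, "L": (i - 1) % k} for i in range(k)}

def bar(k):
    adj = {i: {} for i in range(k)}
    for i in range(k - 1):
        adj[i]["R"] = i + 1
        adj[i + 1]["L"] = i
    return adj

rng = random.Random(0)
ring_chunks = chunk_counts(explore(ring(8), 0, 2000, rng))
bar_chunks = chunk_counts(explore(bar(8), 0, 2000, rng))

# On the ring, moves are unconstrained everywhere; on the bar, the
# end-stops force the walk to bounce back, so the two objects yield
# different chunk distributions: shape is implicit in the statistics.
```

The same intuition motivates the paper's finding that asymmetrical objects and tactile landmarks help learning: the more the geometry constrains the walk, the more informative the chunk statistics become about location and shape.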

Updated: 2021-03-17