The visual development of hand-centered receptive fields in a neural network model of the primate visual system trained with experimentally recorded human gaze changes
Network: Computation in Neural Systems (IF 1.1), Pub Date: 2016-01-02, DOI: 10.1080/0954898x.2016.1187311
Juan M. Galeazzi, Joaquín Navajas, Bedeho M. W. Mender, Rodrigo Quian Quiroga, Loredana Minini, Simon M. Stringer

ABSTRACT

Neurons have been found in the primate brain that respond to objects in specific locations in hand-centered coordinates. A key theoretical challenge is to explain how such hand-centered neuronal responses may develop through visual experience. In this paper we show how hand-centered visual receptive fields can develop using an artificial neural network model, VisNet, of the primate visual system when driven by gaze changes recorded from human test subjects as they completed a jigsaw. A camera mounted on the head captured images of the hand and jigsaw, while eye movements were recorded using an eye-tracking device. This combination of data allowed us to reconstruct the retinal images seen as humans undertook the jigsaw task. These retinal images were then fed into the neural network model during self-organization of its synaptic connectivity using a biologically plausible trace learning rule. A trace learning mechanism encourages neurons in the model to learn to respond to input images that tend to occur in close temporal proximity. In the data recorded from human subjects, we found that the participant’s gaze often shifted through a sequence of locations around a fixed spatial configuration of the hand and one of the jigsaw pieces. In this case, trace learning should bind these retinal images together onto the same subset of output neurons. The simulation results consequently confirmed that some cells learned to respond selectively to the hand and a jigsaw piece in a fixed spatial configuration across different retinal views.
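The abstract refers to a trace learning rule but does not restate it. As a minimal sketch, the form commonly used in the VisNet literature updates each weight with a decaying average (the "trace") of the postsynaptic activity, ȳ(t) = (1 − η) y(t) + η ȳ(t − 1), followed by a Hebbian-style change Δw = α ȳ(t) x(t). The function name, parameter values and explicit weight renormalisation below are illustrative assumptions, not details taken from this paper.

```python
import numpy as np

def trace_learning_update(w, x, y, y_trace, alpha=0.1, eta=0.8):
    """One update of a trace learning rule for a single output neuron.

    w       : synaptic weight vector onto the neuron
    x       : current presynaptic input vector (e.g. a transformed retinal image)
    y       : current postsynaptic firing rate
    y_trace : decaying average ("trace") of the neuron's recent firing
    alpha   : learning rate (illustrative value)
    eta     : trace decay parameter in [0, 1) (illustrative value)
    """
    # Update the memory trace of postsynaptic activity over recent time steps.
    y_trace = (1.0 - eta) * y + eta * y_trace

    # Hebbian-style weight change driven by the trace rather than the
    # instantaneous firing rate, so inputs that occur in close temporal
    # proximity strengthen connections onto the same neuron.
    w = w + alpha * y_trace * x

    # Renormalise the weight vector to keep it bounded.
    norm = np.linalg.norm(w)
    if norm > 0:
        w = w / norm

    return w, y_trace
```

Because the trace carries activity across successive fixations, the sequence of retinal views produced by gaze shifts around one fixed hand and jigsaw-piece configuration tends to drive, and therefore strengthen, the same output neurons, which is the binding effect the simulations test.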

Updated: 2016-01-02