Stable readout of observed actions from format-dependent activity of monkey's anterior intraparietal neurons.
Proceedings of the National Academy of Sciences of the United States of America (IF 11.1) Pub Date: 2020-07-14, DOI: 10.1073/pnas.2007018117
Marco Lanzilotto, Monica Maranesi, Alessandro Livi, Carolina Giulia Ferroni, Guy A Orban, Luca Bonini

Humans accurately identify observed actions despite large dynamic changes in their retinal images and a variety of visual presentation formats. A large network of brain regions in primates participates in the processing of others’ actions, with the anterior intraparietal area (AIP) playing a major role in routing information about observed manipulative actions (OMAs) to the other nodes of the network. This study investigated whether the AIP also contributes to invariant coding of OMAs across different visual formats. We recorded AIP neuronal activity from two macaques while they observed videos portraying seven manipulative actions (drag, drop, grasp, push, roll, rotate, squeeze) in four visual formats. Each format resulted from the combination of two of the actor’s body postures (standing, sitting) and two viewpoints (lateral, frontal). Out of 297 recorded units, 38% were OMA-selective in at least one format. A robust population code for viewpoint and the actor’s body posture emerged shortly after stimulus presentation, followed by OMA selectivity. Although we found no fully invariant OMA-selective neuron, we discovered a population code that allowed us to classify action exemplars irrespective of the visual format. This code depends on a multiplicative mixing of signals about OMA identity and visual format, evidenced in particular by a set of units that maintained relatively stable OMA selectivity across formats despite considerable rescaling of their firing rates with the visual specificities of each format. These findings suggest that the AIP integrates format-dependent information and the visual features of others’ actions, leading to a stable readout of observed manipulative action identity.
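The cross-format readout described in the abstract can be illustrated with a small decoding sketch. The snippet below is a minimal illustration, not the authors' analysis pipeline: it simulates a population whose units rescale an action-tuning profile by a format-dependent gain (the "multiplicative mixing" the abstract refers to), then asks whether a linear classifier trained on three visual formats generalizes to the held-out format. All unit counts, tuning distributions, and gain ranges are assumptions chosen for illustration only.

```python
# Minimal sketch (not the authors' code) of cross-format decoding of observed
# manipulative actions (OMAs) from a population with multiplicative format gains.
# Numbers and tuning model are illustrative assumptions, not data from the study.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_units, n_actions, n_formats, n_trials = 100, 7, 4, 20  # 7 OMAs, 4 visual formats

# Each unit has an action-tuning profile; each format rescales that profile
# by a unit-specific multiplicative gain.
tuning = rng.gamma(shape=2.0, scale=5.0, size=(n_units, n_actions))  # mean rates (Hz)
gain = rng.uniform(0.5, 1.5, size=(n_units, n_formats))              # format-dependent gain

X, y_action, y_format = [], [], []
for f in range(n_formats):
    for a in range(n_actions):
        mean_rate = tuning[:, a] * gain[:, f]
        trials = rng.poisson(mean_rate, size=(n_trials, n_units))    # trial spike counts
        X.append(trials)
        y_action += [a] * n_trials
        y_format += [f] * n_trials
X = np.vstack(X)
y_action, y_format = np.array(y_action), np.array(y_format)

# Leave-one-format-out generalization: train the action decoder on three formats,
# test on the held-out format, and average accuracy over the four splits.
accs = []
for held_out in range(n_formats):
    train, test = y_format != held_out, y_format == held_out
    clf = make_pipeline(StandardScaler(), LinearSVC(max_iter=10000))
    clf.fit(X[train], y_action[train])
    accs.append(clf.score(X[test], y_action[test]))

print(f"cross-format decoding accuracy: {np.mean(accs):.2f} (chance = {1 / n_actions:.2f})")
```

Leave-one-format-out cross-validation is a standard way to quantify format-invariant readout: if action identity can be decoded from trials of a format the classifier never saw, the population carries a code for OMA identity that survives the format-dependent rescaling of individual units.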



Updated: 2020-07-14