Language guided machine action
arXiv - CS - Artificial Intelligence. Pub Date: 2020-11-23, DOI: arxiv-2011.11400
Feng Qi

Here we build a hierarchical modular network called Language Guided Machine Action (LGMA), whose modules process information streams in a way that mimics the human cortical network, allowing it to achieve multiple general tasks such as language-guided action, intention decomposition, and mental simulation before action execution. LGMA contains three main systems: (1) a primary sensory system that processes multimodal sensory information from vision, language, and sensorimotor input; (2) an association system that involves Wernicke and Broca modules to comprehend and synthesize language, a BA14/40 module to translate between sensorimotor and language representations, a midTemporal module to convert between language and vision, and a superior parietal lobe module to integrate the attended visual object and the arm state into a cognitive map for future spatial actions. The pre-supplementary motor area (pre-SMA) converts a high-level intention into a sequence of atomic actions, while the SMA integrates these atomic actions, the current arm state, and the attended object state into a sensorimotor vector that applies the corresponding torques to the arm via the arm's premotor and primary motor modules, thereby fulfilling the intention. (3) A high-level executive system contains the PFC, which performs explicit inference and guides voluntary action based on language, while the BG serves as the habitual action control center.
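To make the pre-SMA → SMA → motor pathway described above concrete, the following Python sketch outlines one possible way such modules could be wired together. This is a minimal illustration only: all class names, method signatures, and placeholder values are assumptions for exposition and are not taken from any published LGMA implementation.

```python
# Hypothetical sketch of the LGMA control flow described in the abstract.
# Module names follow the paper's terminology; every interface here is an assumption.
from dataclasses import dataclass
from typing import List


@dataclass
class ArmState:
    joint_angles: List[float]
    joint_velocities: List[float]


@dataclass
class AtomicAction:
    name: str             # e.g. "reach" or "grasp" (illustrative only)
    target: List[float]   # target position in the cognitive map


class PreSMA:
    """Decomposes a high-level intention into a sequence of atomic actions."""
    def decompose(self, intention: str) -> List[AtomicAction]:
        # Placeholder: a learned policy or symbolic planner would live here.
        return [AtomicAction(name="reach", target=[0.3, 0.1, 0.2])]


class SMA:
    """Integrates an atomic action with arm and object state into a sensorimotor vector."""
    def integrate(self, action: AtomicAction, arm: ArmState,
                  object_state: List[float]) -> List[float]:
        # Placeholder: in the paper this vector drives premotor/primary motor modules.
        return action.target + arm.joint_angles + object_state


class MotorCortex:
    """Stand-in for premotor + primary motor: maps the sensorimotor vector to joint torques."""
    def torques(self, sensorimotor_vector: List[float], n_joints: int = 3) -> List[float]:
        return [0.0] * n_joints  # placeholder torque per joint


def execute_intention(intention: str, arm: ArmState, object_state: List[float]) -> None:
    pre_sma, sma, motor = PreSMA(), SMA(), MotorCortex()
    for atomic in pre_sma.decompose(intention):          # intention decomposition
        vec = sma.integrate(atomic, arm, object_state)   # sensorimotor integration
        tau = motor.torques(vec)                         # torque command to the arm
        print(atomic.name, tau)


execute_intention("put the red block on the green block",
                  ArmState([0.0, 0.5, -0.3], [0.0, 0.0, 0.0]),
                  object_state=[0.3, 0.1, 0.2])
```

The point of the sketch is only the division of labor: intention decomposition (pre-SMA), sensorimotor integration (SMA), and torque generation (motor modules) are kept as separate, composable components, mirroring the hierarchical modularity the abstract attributes to the cortical network.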

Updated: 2020-11-25