Describing Upper-Body Motions Based on Labanotation for Learning-from-Observation Robots
International Journal of Computer Vision ( IF 11.6 ) Pub Date : 2018-10-05 , DOI: 10.1007/s11263-018-1123-1
Katsushi Ikeuchi , Zhaoyuan Ma , Zengqiang Yan , Shunsuke Kudoh , Minako Nakamura

We have been developing a paradigm that we call learning-from-observation, in which a robot automatically acquires a program to conduct a series of operations, i.e., understands what to do, by observing humans performing the same operations. A simple mimicking method that repeats exact joint angles or exact end-effector trajectories does not work well because of the kinematic and dynamic differences between a human and a robot. The proposed method therefore employs intermediate symbolic representations, referred to as tasks, to conceptually represent what to do as derived from observation. These tasks are subsequently mapped to appropriate robot operations depending on the robot hardware. In the present work, task models for upper-body operations of humanoid robots are presented, designed on the basis of Labanotation. Given a series of human operations, we first analyze the upper-body motions and extract certain fixed poses from key frames. These key poses are translated into tasks represented by Labanotation symbols. A robot then performs the operations corresponding to those task models. Because tasks based on Labanotation are independent of robot hardware, different robots can share the same observation module; only task-mapping modules specific to each robot's hardware are required. The system was implemented, and demonstrations showed that three different robots can automatically mimic human upper-body operations with a satisfactory level of resemblance.
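To make the symbolic step concrete, a key pose can be reduced to a Labanotation-style symbol by quantizing a limb's direction vector into one of eight horizontal directions and one of three levels (high, middle, low). The sketch below is illustrative only, assuming a simple coordinate convention (x: right, y: forward, z: up) and hypothetical quantization thresholds; it is not the authors' implementation.

```python
import math

# Eight Labanotation horizontal directions, ordered by azimuth from "forward".
DIRECTIONS = ["forward", "right-forward", "right", "right-backward",
              "backward", "left-backward", "left", "left-forward"]

def to_laban_symbol(dx, dy, dz):
    """Map a unit limb-direction vector to a (direction, level) pair
    in the spirit of Labanotation. Thresholds here are assumptions."""
    # Elevation angle above the horizontal plane selects the level.
    elev = math.degrees(math.asin(max(-1.0, min(1.0, dz))))
    if elev > 30:
        level = "high"
    elif elev < -30:
        level = "low"
    else:
        level = "middle"
    # Azimuth in the horizontal plane (0 deg = forward) selects one
    # of the eight directions, each covering a 45-degree sector.
    az = math.degrees(math.atan2(dx, dy)) % 360
    direction = DIRECTIONS[int(((az + 22.5) % 360) // 45)]
    return direction, level

print(to_laban_symbol(0.0, 1.0, 0.0))  # arm extended forward, horizontal
print(to_laban_symbol(1.0, 0.0, 0.0))  # arm extended to the right, horizontal
```

Because the symbol, not the raw joint trajectory, is what gets stored, a task-mapping module for each robot only needs to realize "right arm: forward, middle" with its own kinematics, which is what makes the observation module hardware-independent.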
