MoGaze: A Dataset of Full-Body Motions that Includes Workspace Geometry and Eye-Gaze
IEEE Robotics and Automation Letters ( IF 5.2 ) Pub Date : 2021-04-01 , DOI: 10.1109/lra.2020.3043167
Philipp Kratzer , Simon Bihlmaier , Niteesh Balachandra Midlagajni , Rohit Prakash , Marc Toussaint , Jim Mainprice

As robots become more present in open human environments, it will become crucial for robotic systems to understand and predict human motion. Such capabilities depend heavily on the quality and availability of motion capture data. However, existing datasets of full-body motion rarely include 1) long sequences of manipulation tasks, 2) the 3D model of the workspace geometry, and 3) eye-gaze, all of which are important when a robot needs to predict the movements of humans in close proximity. Hence, in this letter, we present a novel dataset of full-body motion for everyday manipulation tasks that includes all of the above. The motion data was captured using a traditional motion capture system based on reflective markers. We additionally captured eye-gaze using a wearable pupil-tracking device. As we show in experiments, the dataset can be used for the design and evaluation of full-body motion prediction algorithms. Furthermore, our experiments show that eye-gaze is a powerful predictor of human intent. The dataset comprises 180 min of motion capture data covering 1627 pick-and-place actions. It is available at https://humans-to-robots-motion.github.io/mogaze/ and is planned to be extended to collaborative tasks with two humans in the near future.
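Because the dataset combines marker-based motion capture with a wearable gaze tracker, the two streams are recorded on different clocks and rates and must be aligned before gaze can be used as a predictor of intent. A minimal sketch of one common approach, nearest-timestamp matching with NumPy, is shown below; the sample rates and array names are illustrative assumptions, not the actual MoGaze file format.

```python
import numpy as np

def align_gaze_to_mocap(mocap_ts, gaze_ts, gaze_dirs):
    """For each mocap frame, pick the gaze sample closest in time.

    mocap_ts:  (N,) sorted timestamps of motion-capture frames
    gaze_ts:   (M,) sorted timestamps of gaze samples
    gaze_dirs: (M, 3) gaze direction vectors, one per gaze sample
    """
    # Index of the first gaze timestamp >= each mocap timestamp.
    idx = np.searchsorted(gaze_ts, mocap_ts)
    idx = np.clip(idx, 1, len(gaze_ts) - 1)
    left = gaze_ts[idx - 1]
    right = gaze_ts[idx]
    # Step back one index where the earlier sample is closer in time.
    idx -= (mocap_ts - left) < (right - mocap_ts)
    return gaze_dirs[idx]

# Hypothetical clocks: 120 Hz mocap, 200 Hz gaze tracker.
mocap_ts = np.arange(0.0, 1.0, 1 / 120)
gaze_ts = np.arange(0.0, 1.0, 1 / 200)
gaze_dirs = np.tile([0.0, 0.0, 1.0], (len(gaze_ts), 1))

aligned = align_gaze_to_mocap(mocap_ts, gaze_ts, gaze_dirs)
print(aligned.shape)  # one gaze direction per mocap frame: (120, 3)
```

This one-nearest-sample alignment is the simplest option; for smoother signals one could instead interpolate gaze directions between the two neighboring samples.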
