AIR-Act2Act: Human–human interaction dataset for teaching non-verbal social behaviors to robots
The International Journal of Robotics Research (IF 7.5), Pub Date: 2021-01-28, DOI: 10.1177/0278364921990671
Woo-Ri Ko, Minsu Jang, Jaeyeon Lee, Jaehong Kim

To better interact with users, a social robot should understand the users' behavior, infer their intention, and respond appropriately. Machine learning is one way of implementing robot intelligence: it provides the ability to learn and improve from experience automatically, instead of the robot being explicitly told what to do. Social skills can also be learned by watching human–human interaction videos. However, human–human interaction datasets from which to learn the interactions that occur in various situations are relatively scarce. Moreover, we aim to use service robots in the elderly care domain, yet no interaction dataset has been collected for this domain. For this reason, we introduce a human–human interaction dataset for teaching non-verbal social behaviors to robots. It is the only interaction dataset in which elderly people have participated as performers. We recruited 100 elderly people and 2 college students to perform 10 interactions in an indoor environment. The entire dataset contains 5,000 interaction samples, each of which includes depth maps, body indexes, and 3D skeletal data captured with three Microsoft Kinect v2 sensors. In addition, we provide the joint angles of a humanoid NAO robot, converted from the human behaviors that robots need to learn. The dataset and useful Python scripts are available for download at https://github.com/ai4r/AIR-Act2Act. They can be used not only to teach social skills to robots but also to benchmark action recognition algorithms.
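The conversion from human behavior to NAO joint angles that the abstract mentions is, at its core, a geometric retargeting problem. As a minimal sketch (assuming a generic shoulder/elbow/wrist joint layout; this is not the AIR-Act2Act scripts' actual API or file format), the Python below shows how one robot joint angle might be derived from the 3D skeletal coordinates that Kinect v2 provides:

```python
# Hypothetical sketch: deriving one joint angle (the elbow) from 3D skeleton
# coordinates, the basic geometric step in retargeting human poses to a robot.
# Joint names and values are illustrative, not the AIR-Act2Act data format.
import numpy as np

def joint_angle(a: np.ndarray, b: np.ndarray, c: np.ndarray) -> float:
    """Interior angle at joint b (radians) formed by segments b->a and b->c."""
    u = a - b
    v = c - b
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Example 3D positions (meters) for one arm.
shoulder = np.array([0.0, 0.3, 1.4])
elbow    = np.array([0.0, 0.3, 1.1])
wrist    = np.array([0.2, 0.3, 0.9])

# NAO's elbow roll is roughly 0 for a straight arm, so the retargeted value
# is pi minus the interior angle (sign conventions and joint limits omitted).
elbow_roll = np.pi - joint_angle(shoulder, elbow, wrist)
print(f"Elbow roll: {np.degrees(elbow_roll):.1f} deg")
```

Retargeting a full pose would repeat this geometry for each NAO joint and clamp the results to the robot's joint-angle limits.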




Updated: 2021-01-29