Multimodal Human-Human-Robot Interactions (MHHRI) Dataset for Studying Personality and Engagement
IEEE Transactions on Affective Computing ( IF 9.6 ) Pub Date : 2019-10-01 , DOI: 10.1109/taffc.2017.2737019
Oya Celiktutan , Efstratios Skordos , Hatice Gunes

In this paper, we introduce a novel dataset, the Multimodal Human-Human-Robot Interactions (MHHRI) dataset, with the aim of studying personality simultaneously in human-human interactions (HHI) and human-robot interactions (HRI), as well as its relationship with engagement. Multimodal data were collected during a controlled interaction study comprising dyadic interactions between two human participants and triadic interactions between two human participants and a robot, in which the interactants asked each other a set of personal questions. Interactions were recorded using two static and two dynamic cameras as well as two biosensors, and meta-data were collected by having participants fill in two types of questionnaires: one for assessing their own personality traits and their perceived engagement with their partners (self labels), and one for assessing the personality traits of the other participants in the study (acquaintance labels). As a proof of concept, we present baseline results for personality and engagement classification. Our results show that (i) trends in personality classification performance remain the same with respect to the self and the acquaintance labels across the HHI and HRI settings; (ii) for extroversion, the acquaintance labels yield better results than the self labels; and (iii) in general, multimodality yields better performance for the classification of personality traits.
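The finding that multimodality helps personality classification rests on combining per-modality descriptors (e.g. visual and physiological) before classifying. The sketch below is a minimal, hypothetical illustration of feature-level (early) fusion with a nearest-centroid classifier on synthetic data; the feature names, label semantics, and classifier are illustrative assumptions, not the paper's actual pipeline, which the abstract does not specify.

```python
import random

random.seed(0)


def make_sample(label):
    # Hypothetical data: a "high extroversion" label (1) shifts the
    # per-modality feature means upward; values are purely synthetic.
    base = 1.0 if label == 1 else 0.0
    visual = [random.gauss(base, 0.5) for _ in range(4)]  # e.g. head-pose stats
    physio = [random.gauss(base, 0.5) for _ in range(2)]  # e.g. skin-conductance stats
    return visual, physio, label


def fuse(visual, physio):
    # Feature-level (early) fusion: concatenate the per-modality descriptors.
    return visual + physio


def centroid(vectors):
    # Component-wise mean of a list of equal-length vectors.
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]


def dist(a, b):
    # Euclidean distance between two vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5


train = [make_sample(i % 2) for i in range(40)]
test = [make_sample(i % 2) for i in range(20)]

fused_train = [(fuse(v, p), y) for v, p, y in train]
c0 = centroid([x for x, y in fused_train if y == 0])
c1 = centroid([x for x, y in fused_train if y == 1])

correct = 0
for v, p, y in test:
    x = fuse(v, p)
    pred = 0 if dist(x, c0) < dist(x, c1) else 1
    correct += pred == y
print(f"fused accuracy: {correct / len(test):.2f}")
```

Running either modality alone through the same classifier and comparing accuracies is the usual way such a fusion baseline is evaluated; the dataset's actual baselines would substitute real MHHRI features and labels for the synthetic ones here.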
