The MatchNMingle dataset: a novel multi-sensor resource for the analysis of social interactions and group dynamics in-the-wild during free-standing conversations and speed dates
IEEE Transactions on Affective Computing ( IF 11.2 ) Pub Date : 2018-01-01 , DOI: 10.1109/taffc.2018.2848914
Laura Cabrera-Quiros , Andrew Demetriou , Ekin Gedik , Leander van der Meij , Hayley Hung

We present MatchNMingle, a novel multimodal/multisensor dataset for the analysis of free-standing conversational groups and speed dates in the wild. MatchNMingle leverages wearable devices and overhead cameras to record the social interactions of 92 people during real-life speed dates, followed by a cocktail party. To our knowledge, MatchNMingle has the largest number of participants, the longest recording time, and the largest set of manual annotations for social actions available in this context in a real-life scenario. It consists of 2 hours of data from wearable acceleration, binary proximity, video, audio, personality surveys, frontal pictures, and speed-date responses. Participants' positions and group formations were manually annotated, as were social actions (e.g., speaking, hand gestures) for 30 minutes at 20 fps, making it the first dataset to incorporate the annotation of such cues in this context. We present an empirical analysis of the performance of crowdsourcing workers against trained annotators in simple and complex annotation tasks, finding that, although efficient for simple tasks, using crowdsourcing workers for more complex tasks such as social action annotation led to additional overhead and poor inter-annotator agreement compared to trained annotators (differences of up to 0.4 in Fleiss' kappa coefficients). We also provide example experiments showing how MatchNMingle can be used.
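The inter-annotator agreement measure cited above, Fleiss' kappa, compares the observed agreement among multiple raters against the agreement expected by chance. As an illustration (not taken from the paper), the following sketch computes it from a subjects-by-categories count matrix; the toy labels and values are hypothetical:

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for a subjects-by-categories count matrix.

    counts[i][j] = number of raters who assigned subject i to category j;
    every row must sum to the same number of raters n.
    """
    N = len(counts)                       # number of subjects (e.g. video frames)
    n = sum(counts[0])                    # raters per subject
    total = N * n
    k = len(counts[0])                    # number of categories
    # proportion of all assignments falling into each category
    p = [sum(row[j] for row in counts) / total for j in range(k)]
    # per-subject agreement: agreeing rater pairs out of all rater pairs
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts]
    P_bar = sum(P_i) / N                  # mean observed agreement
    P_e = sum(pj * pj for pj in p)        # expected chance agreement
    return (P_bar - P_e) / (1 - P_e)

# Hypothetical toy example: 3 annotators label 4 frames as speaking / not speaking.
ratings = [[3, 0], [0, 3], [2, 1], [1, 2]]
kappa = fleiss_kappa(ratings)             # ≈ 0.333
```

A kappa difference of 0.4 on this scale is substantial: values near 0 indicate chance-level agreement, while values near 1 indicate near-perfect agreement.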

Updated: 2018-01-01