GestureMap: Supporting Visual Analytics and Quantitative Analysis of Motion Elicitation Data by Learning 2D Embeddings
arXiv - CS - Human-Computer Interaction. Pub Date: 2021-03-01, DOI: arxiv-2103.00912
Hai Dang, Daniel Buschek

This paper presents GestureMap, a visual analytics tool for gesture elicitation which directly visualises the space of gestures. Concretely, a Variational Autoencoder embeds gestures recorded as 3D skeletons on an interactive 2D map. GestureMap further integrates three computational capabilities to connect exploration to quantitative measures: Leveraging DTW Barycenter Averaging (DBA), we compute average gestures to 1) represent gesture groups at a glance; 2) compute a new consensus measure (variance around average gesture); and 3) cluster gestures with k-means. We evaluate GestureMap and its concepts with eight experts and an in-depth analysis of published data. Our findings show how GestureMap facilitates exploring large datasets and helps researchers to gain a visual understanding of elicited gesture spaces. It further opens new directions, such as comparing elicitations across studies. We discuss implications for elicitation studies and research, and opportunities to extend our approach to additional tasks in gesture elicitation.
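
The DBA-based analyses described in the abstract can be illustrated with a short sketch. The following is a minimal, hypothetical example (not the authors' implementation) using the tslearn library; it assumes gestures have been preprocessed into equal-length sequences of flattened 3D joint coordinates. The DBA barycenter stands in for the average gesture, the consensus measure is taken as the mean squared DTW distance to that average (variance around the average gesture), and DTW-based k-means clusters the gestures.

# Minimal sketch (not the paper's implementation) of the DBA-based analyses,
# using the tslearn library. Gestures are assumed to be preprocessed into
# equal-length sequences of flattened 3D joint coordinates,
# shape (n_gestures, n_frames, n_joints * 3).
import numpy as np
from tslearn.barycenters import dtw_barycenter_averaging
from tslearn.clustering import TimeSeriesKMeans
from tslearn.metrics import dtw


def average_gesture(gestures: np.ndarray) -> np.ndarray:
    """DBA average of a group of gestures (represents the group at a glance)."""
    return dtw_barycenter_averaging(gestures, max_iter=30)


def consensus(gestures: np.ndarray) -> float:
    """Consensus as variance around the average gesture:
    mean squared DTW distance of each gesture to the DBA barycenter."""
    barycenter = average_gesture(gestures)
    return float(np.mean([dtw(g, barycenter) ** 2 for g in gestures]))


def cluster_gestures(gestures: np.ndarray, k: int = 5) -> np.ndarray:
    """k-means over gestures with a DTW metric; returns one cluster label per gesture."""
    km = TimeSeriesKMeans(n_clusters=k, metric="dtw", random_state=0)
    return km.fit_predict(gestures)


if __name__ == "__main__":
    # Toy data standing in for recorded 3D-skeleton gestures:
    # 20 gestures, 40 frames each, 25 joints x 3 coordinates.
    rng = np.random.default_rng(0)
    gestures = rng.normal(size=(20, 40, 75))
    print("consensus:", consensus(gestures))
    print("clusters:", cluster_gestures(gestures, k=3))

The 2D map itself comes from a Variational Autoencoder learned on the same skeleton sequences; the sketch above covers only the averaging, consensus, and clustering steps.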

Last updated: 2021-03-02