GestOnHMD: Enabling Gesture-based Interaction on Low-cost VR Head-Mounted Display
IEEE Transactions on Visualization and Computer Graphics (IF 4.7), Pub Date: 2021-03-22, DOI: 10.1109/tvcg.2021.3067689
Taizhou Chen, Lantian Xu, Xianshan Xu, Kening Zhu

Low-cost virtual-reality (VR) head-mounted displays (HMDs) that integrate a smartphone have brought immersive VR to the masses and increased its ubiquity. However, these systems are often limited by their poor interactivity. In this paper, we present GestOnHMD, a gesture-based interaction technique and gesture-classification pipeline that leverages the stereo microphones in a commodity smartphone to detect tapping and scratching gestures on the front, left, and right surfaces of a mobile VR headset. Taking Google Cardboard as the target headset, we first conducted a gesture-elicitation study to generate 150 user-defined gestures, 50 on each surface. We then selected 15, 9, and 9 gestures for the front, left, and right surfaces respectively, based on user preferences and signal detectability. We constructed a data set containing the acoustic signals of 18 users performing these on-surface gestures, and trained a deep-learning classification pipeline for gesture detection and recognition. Lastly, with a real-time demonstration of GestOnHMD, we conducted a series of online participatory-design sessions to collect a set of user-defined gesture-referent mappings for applications that could potentially benefit from GestOnHMD.
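The abstract does not specify the details of the deep-learning pipeline, but the core idea — turning stereo-microphone recordings of on-surface taps and scratches into spectral features and classifying them — can be sketched minimally. The sketch below is illustrative only: it substitutes a hypothetical hand-rolled spectrogram feature extractor and a nearest-centroid classifier for the paper's trained deep network, and the left/right energy ratio stands in for how stereo channels could help distinguish which headset surface was touched.

```python
import numpy as np

def spectrogram(signal, frame=256, hop=128):
    """Magnitude STFT of one audio channel using Hann-windowed frames."""
    win = np.hanning(frame)
    n = 1 + (len(signal) - frame) // hop
    frames = np.stack([signal[i * hop : i * hop + frame] * win for i in range(n)])
    return np.abs(np.fft.rfft(frames, axis=1))  # shape: (n_frames, n_bins)

def features(left, right):
    """Time-pooled spectra of both channels, plus the left/right energy
    ratio (a crude stereo cue for which surface was touched)."""
    sl, sr = spectrogram(left), spectrogram(right)
    ratio = sl.sum() / (sr.sum() + 1e-9)
    return np.concatenate([sl.mean(axis=0), sr.mean(axis=0), [ratio]])

class NearestCentroid:
    """Toy stand-in for the paper's deep classifier: one centroid per
    gesture class in feature space, nearest centroid wins."""
    def fit(self, X, y):
        self.labels = sorted(set(y))
        self.centroids = {
            c: np.mean([x for x, t in zip(X, y) if t == c], axis=0)
            for c in self.labels
        }
        return self
    def predict(self, x):
        return min(self.labels,
                   key=lambda c: np.linalg.norm(x - self.centroids[c]))
```

In practice, a tap produces a short broadband impulse while a scratch produces a sustained noise burst, so even these pooled spectra separate the two gesture families; the paper's pipeline would instead feed full spectrogram frames to a trained network.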

Updated: 2021-04-16