Effects of Depth Information on Visual Target Identification Task Performance in Shared Gaze Environments.
IEEE Transactions on Visualization and Computer Graphics (IF 5.2), Pub Date: 2020-02-13, DOI: 10.1109/tvcg.2020.2973054
Austin Erickson, Nahal Norouzi, Kangsoo Kim, Joseph J. LaViola, Gerd Bruder, Gregory F. Welch

Human gaze awareness is important for social and collaborative interactions. Recent technological advances in augmented reality (AR) displays and sensors provide us with the means to extend collaborative spaces with real-time dynamic AR indicators of one's gaze, for example via three-dimensional cursors or rays emanating from a partner's head. However, such gaze cues are only as useful as the quality of the underlying gaze estimation and the accuracy of the display mechanism. Depending on the type of visualization and the characteristics of the errors, AR gaze cues could either enhance or interfere with collaboration. In this paper, we present two human-subject studies in which we investigate the influence of angular and depth errors, target distance, and the type of gaze visualization on participants' performance and subjective evaluation during a collaborative task with a virtual human partner, in which participants identified targets within a dynamically walking crowd. First, our results show that there is a significant difference in performance between the two gaze visualizations, ray and cursor, in conditions with simulated angular and depth errors: the ray visualization provided significantly faster response times and fewer errors compared to the cursor visualization. Second, our results show that under optimal conditions, among four different gaze visualization methods, a ray without depth information provides the worst performance and is rated lowest, while a combination of a ray and cursor with depth information is rated highest. We discuss the subjective and objective performance thresholds and provide guidelines for practitioners in this field.
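To make the manipulated factors concrete, the sketch below shows one plausible way to construct the two gaze cues discussed in the abstract, a ray and a depth-based 3D cursor, and to inject simulated angular and depth error into them. This is a minimal illustration written for this summary, not the authors' implementation; the function name, parameters, and error model are assumptions.

```python
import numpy as np

def perturbed_gaze_cue(head_pos, gaze_dir, true_depth,
                       angular_err_deg=0.0, depth_err_m=0.0, rng=None):
    """Illustrative sketch (not the paper's code): build a gaze ray and a
    depth cursor with simulated angular and depth error.

    head_pos   : (3,) head/eye position in world coordinates (meters)
    gaze_dir   : (3,) gaze direction (need not be unit length)
    true_depth : distance along the true gaze ray to the target (meters)
    """
    rng = rng or np.random.default_rng()
    d = np.asarray(gaze_dir, dtype=float)
    d /= np.linalg.norm(d)

    # Angular error: rotate the gaze direction by angular_err_deg about a
    # random axis perpendicular to it (Rodrigues formula; the k.(k.v) term
    # vanishes because the axis is orthogonal to the direction).
    axis = np.cross(d, rng.normal(size=3))
    axis /= np.linalg.norm(axis)
    theta = np.radians(angular_err_deg)
    d_err = d * np.cos(theta) + np.cross(axis, d) * np.sin(theta)

    # Depth error: offset the estimated target distance along the ray.
    depth_est = max(true_depth + depth_err_m, 0.0)

    origin = np.asarray(head_pos, dtype=float)
    ray = (origin, d_err)                    # ray cue: origin + direction only
    cursor = origin + depth_est * d_err      # cursor cue: explicit 3D point
    return ray, cursor
```

Under this toy model, the ray cue conveys only direction (and so is insensitive to depth error), whereas the cursor cue places a point in space and therefore reflects both angular and depth error, which is one way to think about why the two visualizations can respond differently to the simulated error conditions.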

Updated: 2020-04-22