FixationNet: Forecasting Eye Fixations in Task-Oriented Virtual Environments
IEEE Transactions on Visualization and Computer Graphics (IF 4.7), Pub Date: 2021-03-22, DOI: 10.1109/tvcg.2021.3067779
Zhiming Hu, Andreas Bulling, Sheng Li, Guoping Wang
Human visual attention in immersive virtual reality (VR) is key for many important applications, such as content design, gaze-contingent rendering, or gaze-based interaction. However, prior works typically focused on free-viewing conditions that have limited relevance for practical applications. We first collect eye tracking data of 27 participants performing a visual search task in four immersive VR environments. Based on this dataset, we provide a comprehensive analysis of the collected data and reveal correlations between users' eye fixations and other factors, i.e., users' historical gaze positions, task-related objects, saliency information of the VR content, and users' head rotation velocities. Based on this analysis, we propose FixationNet, a novel learning-based model to forecast users' near-future eye fixations in VR. We evaluate the performance of our model in free-viewing and task-oriented settings and show that it outperforms the state of the art by a large margin of 19.8% (from a mean error of 2.93° to 2.35°) in free-viewing and of 15.1% (from 2.05° to 1.74°) in task-oriented situations. As such, our work provides new insights into task-oriented attention in virtual environments and guides future work on this important topic in VR research.
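The reported gains are relative reductions in mean angular error between predicted and ground-truth gaze directions. The following minimal sketch shows how such numbers are typically computed; the function names and the use of 3D unit gaze vectors are illustrative assumptions, not the paper's actual evaluation code.

```python
import math

def angular_error_deg(pred, gt):
    # Angle in degrees between two 3D unit gaze-direction vectors
    # (hypothetical helper; the paper reports errors in degrees).
    dot = sum(p * g for p, g in zip(pred, gt))
    dot = max(-1.0, min(1.0, dot))  # clamp against floating-point drift
    return math.degrees(math.acos(dot))

def relative_improvement(baseline_err, new_err):
    # Relative error reduction, as used for the reported percentages.
    return (baseline_err - new_err) / baseline_err

# Reproducing the abstract's relative gains from its mean errors:
print(round(relative_improvement(2.93, 2.35) * 100, 1))  # free-viewing: 19.8
print(round(relative_improvement(2.05, 1.74) * 100, 1))  # task-oriented: 15.1
```

For example, two gaze directions 2° apart yield `angular_error_deg` ≈ 2.0; averaging this quantity over all predictions gives the mean errors quoted above.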

Updated: 2021-04-16