EyeTAP: Introducing a multimodal gaze-based technique using voice inputs with a comparative analysis of selection techniques
International Journal of Human-Computer Studies (IF 5.3), Pub Date: 2021-06-02, DOI: 10.1016/j.ijhcs.2021.102676
Mohsen Parisay , Charalambos Poullis , Marta Kersten-Oertel

One of the main challenges of gaze-based interaction is distinguishing normal eye function from a deliberate interaction with the computer system, a problem commonly referred to as the ‘Midas touch’. In this paper we propose EyeTAP (Eye tracking point-and-select by Targeted Acoustic Pulse), a contact-free multimodal interaction method for point-and-select tasks. We evaluated the prototype in four user studies with 33 participants and found that EyeTAP is applicable in the presence of ambient noise, yields faster movement and task completion times, and imposes a lower cognitive workload than voice recognition. In addition, although EyeTAP did not generally outperform the dwell-time method, it did achieve a lower error rate than dwell-time in one of our experiments. Our study shows that EyeTAP would be useful for users whose physical movements are restricted or not possible due to a disability, or in scenarios where contact-free interaction is necessary. Furthermore, EyeTAP imposes no specific requirements on user interface design and can therefore be easily integrated into existing systems.
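The abstract does not detail the selection logic, but the ‘Midas touch’ contrast it draws can be illustrated with a hypothetical sketch (all names, thresholds, and data structures below are assumptions, not the authors' implementation): dwell-time selection fires whenever the gaze rests on a target long enough, whereas an EyeTAP-style method uses gaze only for pointing and requires an explicit acoustic pulse to confirm.

```python
from dataclasses import dataclass

DWELL_THRESHOLD_MS = 800  # assumed dwell threshold for illustration


@dataclass
class GazeSample:
    target_id: str    # UI element currently under the gaze point
    timestamp_ms: int


def dwell_time_select(samples, threshold_ms=DWELL_THRESHOLD_MS):
    """Dwell-time selection: a target is selected once the gaze stays
    on it continuously for at least `threshold_ms`. Note this fires
    even for unintentional fixations (the Midas touch problem)."""
    current, start = None, None
    for s in samples:
        if s.target_id != current:
            current, start = s.target_id, s.timestamp_ms
        elif s.timestamp_ms - start >= threshold_ms:
            return current
    return None


def pulse_select(samples, pulse_timestamps_ms):
    """EyeTAP-style selection (simplified sketch): gaze only points;
    a detected acoustic pulse confirms whichever target was fixated
    at the moment of the pulse. No pulse, no selection."""
    if not pulse_timestamps_ms:
        return None
    pulse = pulse_timestamps_ms[0]
    fixated = None
    for s in samples:
        if s.timestamp_ms <= pulse:
            fixated = s.target_id
    return fixated


# A fixation on "A" triggers dwell selection by itself, but the
# pulse-based variant selects only when a pulse is actually detected.
samples = [GazeSample("A", 0), GazeSample("A", 400),
           GazeSample("A", 900), GazeSample("B", 1000)]
print(dwell_time_select(samples))        # selects "A" without any explicit command
print(pulse_select(samples, []))         # None: gaze alone never selects
print(pulse_select(samples, [950]))      # "A": pulse confirms the fixated target
```

The design point the sketch makes is the one the abstract claims for EyeTAP: decoupling pointing (gaze) from confirmation (an acoustic pulse) removes accidental selections at the cost of requiring a second input channel.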




Updated: 2021-06-18