Omnis Prædictio: Estimating the full spectrum of human performance with stroke gestures
International Journal of Human-Computer Studies (IF 5.4) Pub Date: 2020-05-12, DOI: 10.1016/j.ijhcs.2020.102466
Luis A. Leiva , Radu-Daniel Vatavu , Daniel Martín-Albo , Réjean Plamondon

Designing effective, usable, and widely adoptable stroke gesture commands for graphical user interfaces is a challenging task that traditionally involves multiple iterative rounds of prototyping, implementation, and follow-up user studies and controlled experiments for evaluation, verification, and validation. An alternative approach is to employ theoretical models of human performance, which can provide practitioners with insightful information right from the earliest stages of user interface design. However, very few aspects of the large spectrum of human performance with stroke gesture input have been investigated and modeled so far, leaving researchers and practitioners of gesture-based user interface design with a very narrow range of predictable measures of human performance, mostly focused on estimating production time, and with extremely few cases delivering accompanying software tools to assist modeling. We address this problem by introducing “Omnis Prædictio” (“Omnis” for short), a generic technique and companion web tool that provides accurate user-independent estimations of any numerical stroke gesture feature, including custom features specified in code. Our experimental results on three public datasets show that our model estimations correlate on average rs > 0.9 with ground-truth data. Omnis also enables researchers and practitioners to understand human performance with stroke gestures on many levels and, consequently, raises the bar for human performance models and estimation techniques for stroke gesture input.
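As a rough illustration only (a hypothetical sketch, not the paper's method or the Omnis tool's actual API), the Python snippet below shows what a numerical stroke gesture feature "specified in code" might look like: the path length of a stroke sampled as (x, y) points, followed by a Spearman rank-correlation check of the kind the rs metric above presumably refers to. The function name, data values, and use of scipy are assumptions made for the example.

import math
from scipy.stats import spearmanr

def path_length(points):
    # Total Euclidean length of a stroke gesture given as a sequence of (x, y) points.
    return sum(math.hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

# Illustrative numbers only: estimated feature values vs. ground-truth measurements.
estimated = [412.3, 198.7, 305.1, 520.9]
measured = [405.0, 210.2, 298.4, 515.6]
rs, p_value = spearmanr(estimated, measured)  # rank correlation between estimates and ground truth
print(f"Spearman rs = {rs:.2f}")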



