Comparing and Combining Interaction Data and Eye-tracking Data for the Real-time Prediction of User Cognitive Abilities in Visualization Tasks
ACM Transactions on Interactive Intelligent Systems (IF 3.6). Pub Date: 2020-05-31. DOI: 10.1145/3301400
Cristina Conati, Sébastien Lallé, Md Abed Rahman, Dereck Toker

Previous work has shown that some user cognitive abilities relevant to processing information visualizations can be predicted from eye-tracking data. This type of user modeling is important for devising visualizations that can detect a user's abilities and adapt to them during the interaction. In this article, we extend previous user modeling work by investigating, for the first time, interaction data as an alternative source for predicting cognitive abilities during visualization processing when collecting eye-tracking data is not feasible. We present an extensive comparison of user models based solely on eye-tracking data, solely on interaction data, and on a combination of the two. Although eye-tracking data generate the most accurate predictions, our results show that interaction data can still outperform a majority-class baseline, meaning that adaptation for interactive visualizations could be driven by interaction data alone when eye tracking is not feasible. Furthermore, we found that interaction data can predict several cognitive abilities with better accuracy than eye-tracking data at the very beginning of the task, a result that is valuable for delivering adaptation early in the task. We also extend previous work by examining the value of multimodal classifiers that combine interaction data and eye-tracking data, with promising results for some of our target cognitive abilities. Next, we build on previous work by extending both the types of visualizations considered and the set of cognitive abilities that can be predicted from either eye-tracking or interaction data. Finally, we evaluate how noise in gaze data impacts prediction accuracy and find that retaining rather noisy gaze datapoints can yield predictions equal to or even better than those obtained by discarding them, a novel and important contribution for devising adaptive visualizations in real-world settings, where eye-tracking data are typically noisier than in the laboratory.
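To make the abstract's comparison concrete, below is a minimal sketch of how such an evaluation could be set up. This is not the authors' actual pipeline: the classifier choice (a random forest), the data (synthetic), and the feature names are all hypothetical. It illustrates only the general scheme the abstract describes: training classifiers on eye-tracking features, on interaction features, and on their multimodal combination, and comparing each against a majority-class baseline.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.dummy import DummyClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_users = 100
X_gaze = rng.normal(size=(n_users, 12))        # hypothetical gaze features, e.g., fixation counts/durations
X_interaction = rng.normal(size=(n_users, 8))  # hypothetical interaction features, e.g., click counts, hover times
y = rng.integers(0, 2, size=n_users)           # binarized cognitive ability label (low/high)

feature_sets = {
    "eye-tracking": X_gaze,
    "interaction": X_interaction,
    "multimodal": np.hstack([X_gaze, X_interaction]),  # simple feature-level fusion
}

for name, X in feature_sets.items():
    clf_acc = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5).mean()
    # Majority-class baseline: always predict the most frequent label
    base_acc = cross_val_score(DummyClassifier(strategy="most_frequent"), X, y, cv=5).mean()
    print(f"{name}: classifier={clf_acc:.2f}, majority-class baseline={base_acc:.2f}")

On real data, a model whose cross-validated accuracy exceeds the majority-class baseline is the minimal evidence, per the abstract, that a data source could usefully drive adaptation.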

Updated: 2020-05-31