VisTA: Integrating Machine Intelligence with Visualization to Support the Investigation of Think-Aloud Sessions.
IEEE Transactions on Visualization and Computer Graphics ( IF 4.7 ) Pub Date : 2019-08-24 , DOI: 10.1109/tvcg.2019.2934797
Mingming Fan , Ke Wu , Jian Zhao , Yue Li , Winter Wei , Khai N. Truong

Think-aloud protocols are widely used by user experience (UX) practitioners in usability testing to uncover issues in user interface design. Analyzing large amounts of recorded think-aloud sessions is often arduous, and due to time and resource constraints few UX practitioners have the opportunity to get a second perspective during their analysis. Inspired by recent research showing that subtle verbalization and speech patterns tend to occur when users encounter usability problems, we take a first step toward designing and evaluating an intelligent visual analytics tool that leverages such patterns to identify usability problem encounters and present them to UX practitioners to assist their analysis. We first conducted and recorded think-aloud sessions, then extracted textual and acoustic features from the recordings and trained machine learning (ML) models to detect problem encounters. Next, we iteratively designed and developed a visual analytics tool, VisTA, which enables dynamic investigation of think-aloud sessions with a timeline visualization of ML predictions and input features. We conducted a between-subjects laboratory study with 30 UX professionals to compare three conditions: VisTA, VisTASimple (no visualization of the ML's input features), and Baseline (no ML information at all). The findings show that UX professionals identified more problem encounters with VisTA than with Baseline by leveraging the problem visualization as an overview, as anticipations, and as anchors, and the feature visualization as a means to understand what the ML considers and omits. Our findings also provide insights into how participants treated the ML, dealt with (dis)agreement with it, and reviewed the videos (i.e., play, pause, and rewind).
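The pipeline the abstract describes, per-segment textual and acoustic features fed to a classifier whose predictions are shown on a timeline, can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' trained model: the feature names (`speech_rate`, `pause_before`, `hedge_words`) and the rule-based scoring stand-in are assumptions chosen only to show the data flow from features to timeline intervals.

```python
# Hypothetical sketch of the VisTA-style pipeline shape:
# per-segment features -> problem-encounter predictions -> merged
# timeline intervals suitable for a timeline visualization.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Segment:
    start: float            # seconds into the session
    end: float
    speech_rate: float      # words per second (temporal/acoustic feature)
    pause_before: float     # silence preceding the segment, in seconds
    hedge_words: int        # count of hedges like "hmm", "I guess" (textual)

def predict_problem(seg: Segment) -> bool:
    """Toy stand-in for the trained ML classifier: flag slow, hesitant speech."""
    score = 0
    score += seg.speech_rate < 1.5
    score += seg.pause_before > 1.0
    score += seg.hedge_words >= 2
    return score >= 2

def to_timeline(segments: List[Segment]) -> List[Tuple[float, float]]:
    """Merge adjacent flagged segments into intervals for a timeline view."""
    intervals: List[Tuple[float, float]] = []
    for seg in segments:
        if not predict_problem(seg):
            continue
        if intervals and seg.start <= intervals[-1][1]:
            # Extend the previous interval when flagged segments are contiguous.
            intervals[-1] = (intervals[-1][0], seg.end)
        else:
            intervals.append((seg.start, seg.end))
    return intervals

session = [
    Segment(0.0, 5.0, 2.8, 0.2, 0),    # fluent speech
    Segment(5.0, 9.0, 1.2, 1.4, 2),    # hesitant: likely problem encounter
    Segment(9.0, 14.0, 1.0, 0.8, 3),   # still hesitant, merges with previous
    Segment(14.0, 20.0, 2.5, 0.3, 0),  # fluent again
]
print(to_timeline(session))  # [(5.0, 14.0)]
```

In the actual system the rule-based `predict_problem` would be replaced by the trained model's per-segment output, and both the predictions and the underlying feature values would be rendered on the session timeline for the practitioner to inspect.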

Last updated: 2019-11-01