Learning Contextualized User Preferences for Co-Adaptive Guidance in Mixed-Initiative Topic Model Refinement
Computer Graphics Forum (IF 2.7) | Pub Date: 2021-06-29 | DOI: 10.1111/cgf.14301
F. Sperrle, H. Schäfer, D. Keim, M. El-Assady

Mixed-initiative visual analytics systems support collaborative human-machine decision-making processes. However, many multi-objective optimization tasks, such as topic model refinement, are highly subjective and context-dependent. Hence, systems need to adapt their optimization suggestions throughout the interactive refinement process to provide efficient guidance. To tackle this challenge, we present a technique for learning context-dependent user preferences and demonstrate its applicability to topic model refinement. We deploy agents with distinct associated optimization strategies that compete for the user's acceptance of their suggestions. To decide when to provide guidance, each agent maintains an intelligible, rule-based classifier over context vectorizations that captures the development of quality metrics between distinct analysis states. By observing implicit and explicit user feedback, agents learn in which contexts to provide their specific guidance operation. An agent in topic model refinement might, for example, learn to react to declining model coherence by suggesting to split a topic. Our results confirm that the rules learned by agents capture contextual user preferences. Further, we show that the learned rules are transferable between similar datasets, avoiding common cold-start problems and enabling a continuous refinement of agents across corpora.
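The abstract's mechanism can be illustrated with a minimal sketch: each agent owns one refinement operation and an intelligible, rule-based classifier over context vectors (deltas of quality metrics between analysis states), and accept/reject feedback adjusts per-rule trust. All class names, fields, and thresholds below are hypothetical, chosen only to mirror the described behavior; they are not the paper's implementation.

```python
# Hypothetical sketch: an agent suggests its operation when a trusted rule
# fires on the context, and updates rule trust from user feedback.
from dataclasses import dataclass, field

@dataclass
class Rule:
    metric: str          # quality metric the rule inspects, e.g. "coherence"
    threshold: float     # fire when the metric's delta falls below this value
    weight: float = 0.5  # learned trust in the rule, in [0, 1]

    def fires(self, context: dict) -> bool:
        return context.get(self.metric, 0.0) < self.threshold

@dataclass
class Agent:
    operation: str                      # guidance operation, e.g. "split-topic"
    rules: list = field(default_factory=list)
    learning_rate: float = 0.1

    def suggest(self, context: dict, min_weight: float = 0.5):
        """Offer this agent's operation if a trusted rule fires on the context."""
        for rule in self.rules:
            if rule.fires(context) and rule.weight >= min_weight:
                return self.operation
        return None

    def feedback(self, context: dict, accepted: bool):
        """Reinforce or penalize every rule that fired in this context."""
        for rule in self.rules:
            if rule.fires(context):
                target = 1.0 if accepted else 0.0
                rule.weight += self.learning_rate * (target - rule.weight)

# Example from the abstract: an agent learns to react to declining coherence
# by suggesting a topic split.
splitter = Agent("split-topic", rules=[Rule(metric="coherence", threshold=0.0)])
context = {"coherence": -0.15}       # coherence dropped between analysis states
print(splitter.suggest(context))     # prints "split-topic"
splitter.feedback(context, accepted=True)  # acceptance raises the rule's weight
```

Because the learned state lives entirely in the rule weights, rules trained on one corpus could be carried over to a similar dataset to seed a new session, which is the transfer behavior the abstract reports.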

Updated: 2021-06-29