A survey of surveys on the use of visualization for interpreting machine learning models
Information Visualization ( IF 2.3 ) Pub Date : 2020-03-19 , DOI: 10.1177/1473871620904671
Angelos Chatzimparmpas, Rafael M. Martins, Ilir Jusufi, Andreas Kerren

Research in machine learning has become very popular in recent years, with many types of models proposed to comprehend and predict patterns and trends in data originating from different domains. As these models become more and more complex, it also becomes harder for users to assess and trust their results, since their internal operations are mostly hidden in black boxes. The interpretation of machine learning models is currently a hot topic in the information visualization community, with results showing that insights from machine learning models can lead to better predictions and improve the trustworthiness of the results. Due to this, multiple (and extensive) survey articles have been published recently, trying to summarize the high number of original research papers published on the topic. But there is not always a clear definition of what these surveys cover, what the overlap between them is, which types of machine learning models they deal with, or what exactly readers will find in each of them. In this article, we present a meta-analysis (i.e. a "survey of surveys") of manually collected survey papers that refer to the visual interpretation of machine learning models, including the papers discussed in the selected surveys. The aim of our article is to serve both as a detailed summary and as a guide through this survey ecosystem by acquiring, cataloging, and presenting fundamental knowledge of the state of the art and research opportunities in the area. Our results confirm the increasing trend of interpreting machine learning with visualizations in the past years, and show that visualization can assist in, for example, the online training of deep learning models and in enhancing trust in machine learning. However, the question of exactly how this assistance should take place is still considered an open challenge for the visualization community.

Updated: 2020-03-19