When classification accuracy is not enough: Explaining news credibility assessment
Information Processing & Management (IF 7.4), Pub Date: 2021-06-12, DOI: 10.1016/j.ipm.2021.102653
Piotr Przybyła, Axel J. Soto

The dubious credibility of online news has become a major problem, with negative consequences for both readers and society as a whole. Despite several efforts to develop automatic methods for measuring the credibility of news stories, little previous work has focused on providing explanations that go beyond a black-box decision or score. In this work, we use two machine learning approaches to compute a credibility score for any given news story: one is a linear method trained on stylometric features, and the other is a recurrent neural network. Our goal is to study whether we can explain the rationale behind these automatic methods and improve a reader’s confidence in their credibility assessment. We therefore first adapted the classifiers to the constraints of a browser extension, so that text can be analysed while browsing online news. We also propose a set of interactive visualisations that explain to the user the rationale behind the automatic credibility assessment. We evaluated the adapted methods by means of standard machine learning performance metrics and through two user studies. The adapted neural classifier showed better performance on the test data than the stylometric classifier, although the latter appeared to be easier for participants to interpret. Users were also significantly more accurate in their assessments after interacting with the tool, as well as more confident in their decisions.
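To make the described setup concrete, below is a minimal sketch, not the authors' implementation, of how a linear credibility classifier over simple stylometric features could produce both a score and per-feature contributions of the kind an explanatory visualisation might surface. The feature set, the toy training texts and labels, and the scikit-learn LogisticRegression configuration are all illustrative assumptions.

# Minimal sketch (not the authors' method): a linear credibility classifier
# over simple stylometric features, plus per-feature contributions that
# could back an explanation view. Features and labels are illustrative.
import re
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["avg_word_len", "exclamations", "all_caps_words", "first_person"]

def stylometric_features(text: str) -> np.ndarray:
    words = re.findall(r"[A-Za-z']+", text)
    n = max(len(words), 1)
    return np.array([
        sum(len(w) for w in words) / n,                                 # average word length
        text.count("!"),                                                # exclamation marks
        sum(w.isupper() and len(w) > 1 for w in words),                 # ALL-CAPS words
        sum(w.lower() in {"i", "we", "my", "our"} for w in words) / n,  # first-person rate
    ])

# Toy training data: 1 = credible, 0 = non-credible (placeholder labels).
texts = ["Officials confirmed the figures on Tuesday.",
         "SHOCKING!!! You will NOT believe what THEY did!!!"]
labels = [1, 0]
X = np.vstack([stylometric_features(t) for t in texts])
clf = LogisticRegression().fit(X, labels)

def explain(text: str):
    x = stylometric_features(text)
    score = clf.predict_proba([x])[0, 1]   # credibility score in [0, 1]
    contrib = clf.coef_[0] * x             # per-feature contribution to the logit
    return score, dict(zip(FEATURES, contrib))

print(explain("BREAKING!!! We EXPOSE the TRUTH they hide!"))

The per-feature contributions (coefficient times feature value) are what make a linear stylometric model comparatively easy to present to readers, which is consistent with the abstract's observation that participants found the stylometric classifier easier to interpret than the neural one.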




Updated: 2021-06-13