Transparency and accountability in AI decision support: Explaining and visualizing convolutional neural networks for text information
Decision Support Systems (IF 7.5) | Pub Date: 2020-04-13 | DOI: 10.1016/j.dss.2020.113302
Buomsoo Kim , Jinsoo Park , Jihae Suh

Proliferating applications of deep learning, along with the prevalence of large-scale text datasets, have revolutionized the natural language processing (NLP) field, driving its recent explosive growth. Nevertheless, it has been argued that state-of-the-art studies focus excessively on outperforming existing models on quantitative benchmarks, i.e., on playing "the Kaggle game." Hence, the field requires more effort in solving new problems and proposing novel approaches and architectures. We claim that one promising and constructive effort is to design transparent and accountable artificial intelligence (AI) systems for text analytics. By doing so, we can enhance the applicability and problem-solving capacity of the system for real-world decision support. It is widely accepted that deep learning models demonstrate remarkable performance compared to existing algorithms. However, they are often criticized for being less interpretable, i.e., for being a "black box." In such cases, users tend to hesitate to utilize them for decision-making, especially in crucial tasks. Such complexity obstructs the transparency and accountability of the overall system, potentially hindering the deployment of decision support systems powered by AI. Furthermore, recent regulations place greater emphasis on fairness and transparency in algorithms, making explanations compulsory rather than voluntary. Thus, to enhance the transparency and accountability of decision support systems while preserving the capacity to model complex text data, we propose the Explaining and Visualizing Convolutional neural networks for Text information (EVCT) framework. By adopting and improving upon cutting-edge methods in NLP and image processing, the EVCT framework provides a human-interpretable solution to the problem of text classification while minimizing information loss. Experimental results with large-scale, real-world datasets show that EVCT performs comparably to benchmark models, including widely used deep learning models. In addition, we provide instances of human-interpretable and relevant visualized explanations obtained by applying EVCT to the datasets, along with possible applications for real-world decision support.
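The abstract does not describe the EVCT architecture itself, but the general idea it names, a convolutional text classifier whose predictions are explained with visualization techniques borrowed from image processing, can be illustrated with a minimal sketch. The following Python/PyTorch code is an assumption-laden illustration, not the authors' implementation: it pairs a simple 1D CNN text classifier with a gradient-times-input saliency map over tokens, one common way to highlight which words drive a prediction. All class names, hyperparameters, and the choice of saliency method are hypothetical.

```python
# Illustrative sketch only -- NOT the authors' EVCT implementation.
# A 1D CNN text classifier plus a gradient-based token saliency map,
# in the spirit of explanation methods adapted from image processing.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvTextClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, num_filters=64,
                 kernel_sizes=(3, 4, 5), num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, num_filters, k, padding=k // 2)
            for k in kernel_sizes
        )
        self.fc = nn.Linear(num_filters * len(kernel_sizes), num_classes)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len)
        emb = self.embedding(token_ids)           # (batch, seq_len, embed_dim)
        emb.retain_grad()                         # keep gradients for saliency
        x = emb.transpose(1, 2)                   # (batch, embed_dim, seq_len)
        feats = [F.relu(conv(x)).max(dim=2).values for conv in self.convs]
        logits = self.fc(torch.cat(feats, dim=1))
        return logits, emb

def token_saliency(model, token_ids, target_class):
    """Gradient-times-input saliency: per-token contribution to the
    score of `target_class` (one simple, widely used explanation)."""
    model.eval()
    logits, emb = model(token_ids)
    logits[:, target_class].sum().backward()
    # Aggregate over the embedding dimension to get one score per token.
    return (emb.grad * emb).sum(dim=2).abs()      # (batch, seq_len)

if __name__ == "__main__":
    model = ConvTextClassifier(vocab_size=10000)
    tokens = torch.randint(0, 10000, (1, 20))
    scores = token_saliency(model, tokens, target_class=1)
    print(scores)  # higher score = token deemed more relevant to the prediction
```

In a decision-support setting, such per-token scores would typically be rendered as a heatmap over the original document so that a human reviewer can verify which phrases drove the classification.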




Updated: 2020-04-13