A holistic approach to interpretability in financial lending: Models, visualizations, and summary-explanations
Decision Support Systems ( IF 6.7 ) Pub Date : 2021-07-15 , DOI: 10.1016/j.dss.2021.113647
Chaofan Chen, Kangcheng Lin, Cynthia Rudin, Yaron Shaposhnik, Sijia Wang, Tong Wang

Lending decisions are usually made with proprietary models that provide minimally acceptable explanations to users. In a future world without such secrecy, what decision support tools would one want to use for justified lending decisions? This question is timely, since the economy has dramatically shifted due to a pandemic, and a massive number of new loans will be necessary in the short term. We propose a framework for such decisions, including a globally interpretable machine learning model, an interactive visualization of it, and several types of summaries and explanations for any given decision. The machine learning model is a two-layer additive risk model, which resembles a two-layer neural network, but is decomposable into subscales. In this model, each node in the first (hidden) layer represents a meaningful subscale model, and all of the nonlinearities are transparent. Our online visualization tool allows exploration of this model, showing precisely how it came to its conclusion. We provide three types of explanations that are simpler than, but consistent with, the global model: case-based reasoning explanations that use neighboring past cases, a set of features that were the most important for the model's prediction, and summary-explanations that provide a customized sparse explanation for any particular lending decision made by the model. Our framework earned the FICO recognition award for the Explainable Machine Learning Challenge, which was the first public challenge in the domain of explainable machine learning.
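The structure of a two-layer additive risk model as described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the feature names, subscale groupings, weights, and the choice of a logistic nonlinearity are all hypothetical stand-ins for what the paper learns from data.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical subscale groupings and weights, for illustration only.
SUBSCALES = {
    "delinquency": {"num_delinquencies": 0.8, "months_since_delinquency": -0.3},
    "credit_usage": {"revolving_utilization": 1.1, "num_open_accounts": 0.2},
}
SECOND_LAYER_WEIGHTS = {"delinquency": 1.5, "credit_usage": 0.9}
BIAS = -1.0

def subscale_score(applicant, weights):
    """First (hidden) layer: each node is a transparent linear subscale model."""
    linear = sum(w * applicant[f] for f, w in weights.items())
    return sigmoid(linear)  # the nonlinearity is explicit and inspectable

def risk(applicant):
    """Second layer: an additive combination of the subscale scores."""
    total = BIAS + sum(
        SECOND_LAYER_WEIGHTS[name] * subscale_score(applicant, weights)
        for name, weights in SUBSCALES.items()
    )
    return sigmoid(total)

applicant = {
    "num_delinquencies": 2, "months_since_delinquency": 6,
    "revolving_utilization": 0.7, "num_open_accounts": 5,
}
print(round(risk(applicant), 3))
```

Because every hidden node is itself a small linear model with a named meaning, the prediction decomposes exactly into per-subscale contributions, which is what makes the model globally interpretable rather than post-hoc explained.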




Updated: 2021-07-15