Transparency, auditability, and explainability of machine learning models in credit scoring
Journal of the Operational Research Society ( IF 3.6 ) Pub Date : 2021-06-21 , DOI: 10.1080/01605682.2021.1922098
Michael Bücker 1 , Gero Szepannek 2 , Alicja Gosiewska 3 , Przemyslaw Biecek 3, 4
Abstract

A major requirement for credit scoring models is to provide maximally accurate risk predictions. Additionally, regulators demand that these models be transparent and auditable. Thus, in credit scoring, very simple predictive models such as logistic regression or decision trees are still widely used, and the superior predictive power of modern machine learning algorithms cannot be fully leveraged. Significant potential is therefore missed, leading to higher reserves or more credit defaults. This article works out the different dimensions that have to be considered to make credit scoring models understandable and presents a framework for making “black box” machine learning models transparent, auditable, and explainable. Following this framework, we present an overview of techniques, demonstrate how they can be applied in credit scoring, and show how the results compare to the interpretability of scorecards. A real-world case study shows that a comparable degree of interpretability can be achieved while machine learning techniques retain their ability to improve predictive power.
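The trade-off the abstract describes can be illustrated with a minimal sketch (not the paper's own code or data): train a transparent logistic-regression baseline and a "black box" gradient-boosting model on synthetic stand-in credit data, then apply one model-agnostic explanation technique, permutation feature importance, to the black box. The dataset, feature count, and model choices here are illustrative assumptions.

```python
# Illustrative sketch only: comparing a scorecard-style logistic regression
# with a black-box model, then explaining the black box model-agnostically.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a credit scoring dataset (features might represent
# income, loan amount, credit history length, etc. in a real application).
X, y = make_classification(n_samples=2000, n_features=8, n_informative=5,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Transparent baseline: logistic regression, the classical basis of scorecards.
scorecard = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Black-box challenger: gradient boosting.
gbm = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Model-agnostic explanation: permutation importance ranks features by how
# much shuffling each one degrades the black box's held-out performance.
result = permutation_importance(gbm, X_test, y_test, n_repeats=10,
                                random_state=0)
ranking = np.argsort(result.importances_mean)[::-1]
print("Feature ranking (most important first):", ranking)
print("Scorecard accuracy:", round(scorecard.score(X_test, y_test), 3))
print("GBM accuracy:", round(gbm.score(X_test, y_test), 3))
```

Because permutation importance treats the model purely as a predict function, the same audit step applies unchanged to the scorecard and to the black box, which is the kind of comparable interpretability the case study examines.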




Updated: 2021-06-21