Are you sure? Prediction revision in automated decision‐making
Expert Systems (IF 3.3) Pub Date: 2020-06-12, DOI: 10.1111/exsy.12577
Nadia Burkart, Sebastian Robert, Marco F. Huber

With the rapid improvements in machine learning and deep learning, the number of decisions made by automated decision support systems (DSS) will increase. Besides the accuracy of predictions, their explainability is becoming more important. These algorithms can construct complex mathematical prediction models, which makes their predictions opaque to users and raises the need to equip the algorithms with explanations. To examine how users trust an automated DSS, we conducted an experiment. Our research aim is to examine how participants supported by a DSS revise their initial prediction under four different approaches (treatments) in a between-subjects design study. The four treatments differ in the degree of explainability provided for understanding the system's predictions: first, an interpretable regression model; second, a Random Forest (considered a black box, BB); third, the BB with a local explanation; and fourth, the BB with a global explanation. We observed that all participants improved their predictions after receiving the system's advice, whether it came from a complete BB or from a BB with an explanation. The major finding was that interpretable models were not incorporated into the decision process more than BB models or BB models with explanations.
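The four treatments can be illustrated with a minimal sketch. The paper does not disclose its implementation, so the following assumes scikit-learn and SHAP on synthetic stand-in data: a linear regression whose coefficients act as the interpretable model, a Random Forest as the BB, SHAP values as a local explanation, and feature importances as a global explanation.

```python
# Illustrative sketch of the four treatment conditions (assumed libraries
# and data; the paper does not specify its exact tooling).
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

# Hypothetical stand-in for the study's prediction task.
X, y = make_regression(n_samples=500, n_features=5, noise=10.0, random_state=0)

# Treatment 1: interpretable regression model; its coefficients are
# directly readable as the explanation.
lin = LinearRegression().fit(X, y)
print("Interpretable model coefficients:", lin.coef_)

# Treatments 2-4: a Random Forest treated as a black box (BB).
bb = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
print("BB advice for one instance:", bb.predict(X[:1]))

# Treatment 3: local explanation, i.e., per-instance feature attributions
# (here via SHAP's TreeExplainer, an assumed choice).
explainer = shap.TreeExplainer(bb)
print("Local attributions:", explainer.shap_values(X[:1]))

# Treatment 4: global explanation, i.e., model-wide feature importances.
print("Global feature importances:", bb.feature_importances_)
```

The point of varying these conditions is that each treatment exposes participants to the same advice but with a different amount of insight into how it was produced.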
