A new approach to training more interpretable model with additional segmentation
Pattern Recognition Letters (IF 3.9), Pub Date: 2021-10-06, DOI: 10.1016/j.patrec.2021.10.003
Sunguk Shin, Youngjoon Kim, Ji Won Yoon

It is not straightforward to understand how complicated deep learning models work because they are essentially black boxes. To address this problem, various approaches have been developed to provide interpretability and have been applied to black-box deep learning models. However, traditional interpretable machine learning only helps us to understand models that have already been trained. Therefore, if a model is not properly trained, interpretable machine learning clearly will not work well.
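
The abstract stops at the motivation, but the title points toward building interpretability into training via an additional segmentation signal rather than relying only on post-hoc explanation of an already-trained model. The sketch below is not the authors' implementation; it only illustrates, under that assumption, what jointly training a classifier with an auxiliary segmentation head could look like in PyTorch. The `JointClassifierSegmenter` module, the `joint_loss` function, and the `seg_weight` parameter are illustrative names, not details from the paper.

```python
# Minimal sketch (assumed setup, not the paper's method): a shared backbone with
# an image-level classification head and an auxiliary pixel-level segmentation head,
# trained with a weighted sum of the two losses.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointClassifierSegmenter(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Shared convolutional backbone.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        # Classification head: global average pooling + linear layer.
        self.classifier = nn.Linear(32, num_classes)
        # Auxiliary segmentation head: 1x1 conv producing a per-pixel foreground logit.
        self.seg_head = nn.Conv2d(32, 1, kernel_size=1)

    def forward(self, x):
        feats = self.backbone(x)
        logits = self.classifier(feats.mean(dim=(2, 3)))  # image-level prediction
        seg_logits = self.seg_head(feats)                  # pixel-level prediction
        return logits, seg_logits

def joint_loss(logits, seg_logits, labels, masks, seg_weight: float = 0.5):
    """Classification loss plus a weighted auxiliary segmentation loss."""
    cls_loss = F.cross_entropy(logits, labels)
    seg_loss = F.binary_cross_entropy_with_logits(seg_logits, masks)
    return cls_loss + seg_weight * seg_loss

# Toy training step with random data, just to show the shapes involved.
model = JointClassifierSegmenter()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
images = torch.randn(4, 3, 32, 32)
labels = torch.randint(0, 10, (4,))
masks = torch.randint(0, 2, (4, 1, 32, 32)).float()  # extra segmentation supervision
logits, seg_logits = model(images)
loss = joint_loss(logits, seg_logits, labels, masks)
loss.backward()
opt.step()
```

In a setup like this, the segmentation head forces the shared features to localize the evidence for the prediction during training, so an explanation does not have to be recovered from the black box afterwards.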

Updated: 2021-10-06