Factual and Counterfactual Explanations in Fuzzy Classification Trees
IEEE Transactions on Fuzzy Systems (IF 10.7), Pub Date: 6-1-2022, DOI: 10.1109/tfuzz.2022.3179582
Guillermo Fernandez, Juan A. Aledo, Jose A. Gamez, Jose M. Puerta

Classification algorithms have recently acquired great popularity due to their ability to generate models capable of solving highly complex problems. In particular, black-box models offer the best results, since they benefit greatly from the enormous amount of available data to learn increasingly accurate models. However, their main disadvantage compared to simpler algorithms, e.g., decision trees, is the loss of interpretability for both the model and the individual classifications, which may become a major drawback given the growing number of applications where providing an explanation is advisable or even compulsory. A well-accepted practice is to build an explainable model that mimics the behavior of the (more complex) classifier in the neighborhood of the instance to be explained. Nonetheless, generating explanations in such white-box models is not trivial either, which has motivated intense research. It is common to generate two types of explanations, factual explanations and counterfactual explanations, which complement each other to justify why the instance has been classified into a certain class. In this work, we propose the definition of factual and counterfactual explanations in the framework of fuzzy decision trees, where multiple branches can be fired at once. Our proposal is centered around factual explanations that can contain more than a single rule, in contrast to the current standard, which is limited to considering a single rule as a factual explanation. Moreover, we introduce the notion of a robust factual explanation. Finally, we provide procedures to obtain counterfactual explanations both from the instance and from a factual explanation.
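The idea of several fired branches, and of a factual explanation needing more than one rule, can be made concrete with a small sketch. This is not the paper's algorithm: the rules, the triangular memberships, the min t-norm, the sum aggregation, and the greedy selection criterion below are all illustrative assumptions.

```python
# A minimal sketch (not the paper's algorithm) of a fuzzy rule-based
# classifier in which several rules fire at once for one instance, and
# of a factual explanation that may need more than a single rule.
# All rules, features, and membership parameters below are illustrative.

def tri(x, a, b, c):
    """Triangular membership function with support (a, c) and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Each rule: an antecedent (feature -> triangular params) and a class label.
RULES = [
    ({"temp": (0, 10, 20)}, "mild"),
    ({"temp": (5, 12, 19)}, "warm"),
    ({"temp": (6, 13, 20)}, "warm"),
]

def firing_degrees(x):
    """Degree to which each rule fires for x (min t-norm over antecedents)."""
    return [(min(tri(x[f], *p) for f, p in ant.items()), label)
            for ant, label in RULES]

def classify(x):
    """Aggregate firing degrees per class (sum) and predict the maximum."""
    degs = firing_degrees(x)
    score = {}
    for d, label in degs:
        score[label] = score.get(label, 0.0) + d
    pred = max(score, key=score.get)
    return pred, degs, score

def factual_explanation(x):
    """Greedy sketch: collect fired rules of the predicted class, in
    decreasing degree, until their summed degree beats every rival class.
    This selection criterion is an illustrative assumption."""
    pred, degs, score = classify(x)
    rival = max((s for c, s in score.items() if c != pred), default=0.0)
    fired = sorted(((d, i) for i, (d, lab) in enumerate(degs)
                    if lab == pred and d > 0), reverse=True)
    expl, acc = [], 0.0
    for d, i in fired:
        expl.append(i)
        acc += d
        if acc > rival:
            break
    return pred, expl
```

For instance, with `temp = 9` all three rules fire at once: the "mild" rule at degree 0.9 and the two "warm" rules at roughly 0.57 and 0.43. The prediction is "warm", yet no single warm rule outweighs the mild rule, so under this criterion the factual explanation must contain both warm rules — the multi-rule situation the abstract refers to.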

Updated: 2024-08-22