Interpretable Artificial Intelligence through the Lens of Feature Interaction
arXiv - CS - Machine Learning. Pub Date: 2021-03-01. DOI: arxiv-2103.03103. Michael Tsang, James Enouen, Yan Liu
Interpretation of deep learning models is a very challenging problem because
of their large number of parameters, complex connections between nodes, and
unintelligible feature representations. Despite this, many view
interpretability as a key solution to trustworthiness, fairness, and safety,
especially as deep learning is applied to more critical decision tasks like
credit approval, job screening, and recidivism prediction. There is an
abundance of good research providing interpretability to deep learning models;
however, many of the commonly used methods do not consider a phenomenon called
"feature interaction." This work first explains the historical and modern
importance of feature interactions and then surveys the modern interpretability
methods which do explicitly consider feature interactions. This survey aims to
bring to light the importance of feature interactions in the larger context of
machine learning interpretability, especially in a modern context where deep
learning models heavily rely on feature interactions.
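To make the phenomenon concrete, here is a minimal sketch (not taken from the paper) of why methods that score features independently can miss interactions. It uses a hypothetical pure-interaction function f(x1, x2) = x1 * x2 over inputs in {-1, +1}: each feature's marginal (main) effect averages to zero, yet the pairwise interaction fully determines the output.

```python
# Minimal illustration (assumed example, not from the surveyed methods):
# for f(x1, x2) = x1 * x2, single-feature effects vanish on average,
# while the second-order (interaction) effect is nonzero.

def f(x1, x2):
    return x1 * x2

# Main effect of x1: average change in f when flipping x1, over x2 values.
main_x1 = sum(f(1, x2) - f(-1, x2) for x2 in (-1, 1)) / 2

# Main effect of x2, defined symmetrically.
main_x2 = sum(f(x1, 1) - f(x1, -1) for x1 in (-1, 1)) / 2

# Interaction effect: second-order finite difference across both features.
interaction = f(1, 1) - f(1, -1) - f(-1, 1) + f(-1, -1)

print(main_x1, main_x2, interaction)  # → 0.0 0.0 4
```

Because both main effects cancel, an attribution method that considers features one at a time would assign no importance to either input here, even though together they determine f exactly; this is the gap the surveyed interaction-aware methods aim to close.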
Updated: 2021-03-05