White-box Induction From SVM Models: Explainable AI with Logic Programming
Theory and Practice of Logic Programming ( IF 1.4 ) Pub Date : 2020-09-21 , DOI: 10.1017/s1471068420000356
Farhad Shakerin, Gopal Gupta

We focus on the problem of inducing logic programs that explain models learned by the support vector machine (SVM) algorithm. Top-down sequential covering inductive logic programming (ILP) algorithms (e.g., FOIL) perform hill-climbing search guided by information-theoretic heuristics. A major issue with this class of algorithms is that they get stuck in local optima. Our new approach instead replaces this data-dependent hill-climbing search with a model-dependent search: a globally optimal SVM model is trained first; the algorithm then treats the support vectors as the most influential data points in the model and induces a clause covering each support vector and the points most similar to it. Instead of defining a fixed hypothesis search space, our algorithm uses SHAP, an example-specific explanation method from explainable AI, to determine a relevant set of features. The resulting algorithm captures the SVM model's underlying logic and outperforms other ILP algorithms in terms of the number of induced clauses and classification evaluation metrics.
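The pipeline the abstract describes — train an SVM, pick a support vector as a seed, select relevant features, and induce a clause covering the seed and similar points — can be sketched as follows. This is a minimal illustrative sketch, not the authors' SHAP-FOIL implementation: the toy dataset, the Pegasos-style linear SVM trainer, and the use of weight magnitude |w| as a stand-in for SHAP feature relevance are all assumptions made for the example.

```python
import random

# Toy linearly separable data: the class is decided mainly by feature 0.
X = [[2.0, 0.1], [1.8, 0.4], [2.2, 0.2], [1.9, 0.8],      # positives
     [-2.0, 0.3], [-1.7, 0.6], [-2.1, 0.1], [-1.9, 0.5]]  # negatives
y = [1, 1, 1, 1, -1, -1, -1, -1]

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Pegasos-style subgradient training of a linear SVM (w, b)."""
    rng = random.Random(seed)
    w, b, t = [0.0] * len(X[0]), 0.0, 0
    for _ in range(epochs):
        for i in rng.sample(range(len(X)), len(X)):
            t += 1
            eta = 1.0 / (lam * t)
            score = sum(wj * xj for wj, xj in zip(w, X[i])) + b
            if y[i] * score < 1:  # margin violation: hinge-loss step
                w = [(1 - eta * lam) * wj + eta * y[i] * xj
                     for wj, xj in zip(w, X[i])]
                b += eta * y[i]
            else:                 # regularization-only step
                w = [(1 - eta * lam) * wj for wj in w]
    return w, b

def support_vector_indices(X, w, b, eps=0.1):
    """Points on or inside the margin (|w.x + b| <= 1 + eps)."""
    scores = [sum(wj * xj for wj, xj in zip(w, x)) + b for x in X]
    sv = [i for i, s in enumerate(scores) if abs(s) <= 1 + eps]
    if not sv:  # fall back to the point closest to the decision boundary
        sv = [min(range(len(X)), key=lambda i: abs(scores[i]))]
    return sv

def induce_clause(x_seed, w, top_k=1, tol=0.5):
    """Induce threshold conditions on the most relevant features around a
    seed support vector. Here |w| plays the role SHAP plays in the paper:
    picking which features the clause should mention."""
    feats = sorted(range(len(w)), key=lambda f: -abs(w[f]))[:top_k]
    return [(f, '>=', x_seed[f] - tol) if w[f] > 0
            else (f, '<=', x_seed[f] + tol) for f in feats]

def covers(clause, x):
    return all(x[f] >= v if op == '>=' else x[f] <= v
               for f, op, v in clause)

w, b = train_linear_svm(X, y)
sv = support_vector_indices(X, w, b)
# Prefer a positive seed; otherwise use any support vector.
seed_idx = next((i for i in sv if y[i] == 1), sv[0])
clause = induce_clause(X[seed_idx], w)
print("clause:", clause)
print("covers seed:", covers(clause, X[seed_idx]))
```

By construction the induced clause always covers its seed support vector; in the paper's full algorithm, sequential covering would then remove the covered examples and repeat from the next uncovered support vector.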
