Learning from interpretation transition using differentiable logic programming semantics
Machine Learning (IF 4.3) | Pub Date: 2021-09-14 | DOI: 10.1007/s10994-021-06058-8
Kun Gao, Hanpin Wang, Yongzhi Cao, Katsumi Inoue

The combination of learning and reasoning is an essential and challenging topic in neuro-symbolic research. Differentiable inductive logic programming is a technique for learning a symbolic knowledge representation from complete, mislabeled, or incomplete observed facts using neural networks. In this paper, we propose a novel differentiable inductive logic programming system called differentiable learning from interpretation transition (D-LFIT), which learns logic programs through the proposed embeddings of logic programs, neural networks, optimization algorithms, and an adapted algebraic method for computing logic program semantics. The proposed model has several desirable characteristics, including a small number of parameters, the ability to generate logic programs in a curriculum-learning setting, and linear time complexity for extracting logic programs from the trained neural networks. The well-known bottom clause propositionalization algorithm is incorporated when the proposed system learns from relational datasets. We compare our model with NN-LFIT, which extracts propositional logic rules from trained neural networks; the highly accurate rule learner RIPPER; the purely symbolic LFIT system LF1T; and CILP++, which integrates neural networks with the propositionalization method to handle first-order logic knowledge. From the experimental results, we conclude that D-LFIT yields accuracy comparable to the baselines when given complete, incomplete, and mislabeled data. Our experimental results indicate that D-LFIT not only learns symbolic logic programs quickly and precisely but also performs robustly when processing mislabeled and incomplete datasets.
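The abstract's "adapted algebraic method to compute the logic program semantics" refers to the idea of encoding a propositional program as a matrix so that the immediate-consequence operator becomes a (differentiable) linear map. The sketch below is an illustration of that general idea, not the paper's actual implementation: the program, matrix encoding (1/k weights for a conjunctive body of k atoms, firing threshold 1), and the sigmoid relaxation `tp_soft` are assumptions chosen for clarity.

```python
import numpy as np

# Atoms: p, q, r. Example program, one rule per head:
#   p <- q        (body {q})
#   q <- p, r     (conjunctive body {p, r}, each weighted 1/2)
#   r <- r        (body {r})
M = np.array([
    [0.0, 1.0, 0.0],
    [0.5, 0.0, 0.5],
    [0.0, 0.0, 1.0],
])

def tp_hard(v):
    # Immediate-consequence operator: a head fires exactly when the
    # weighted body sum reaches 1, i.e. every body atom is true.
    return (M @ v >= 1.0 - 1e-9).astype(float)

def tp_soft(v, tau=8.0):
    # Differentiable surrogate: a steep sigmoid centered at 0.75
    # replaces the hard threshold, so gradients can flow to M.
    return 1.0 / (1.0 + np.exp(-tau * (M @ v - 0.75)))

# One interpretation transition: from {q, r}, the next state is {p, r}.
v0 = np.array([0.0, 1.0, 1.0])
print(tp_hard(v0))   # [1. 0. 1.]
```

Iterating `tp_hard` traces the state transitions that LFIT-style systems learn from; replacing it with `tp_soft` and treating the entries of `M` as trainable parameters is what makes the semantics computation amenable to gradient-based optimization.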




Updated: 2021-09-15