Learning Reasoning Strategies in End-to-End Differentiable Proving
arXiv - CS - Neural and Evolutionary Computing Pub Date : 2020-07-13 , DOI: arxiv-2007.06477
Pasquale Minervini, Sebastian Riedel, Pontus Stenetorp, Edward Grefenstette, Tim Rocktäschel

Attempts to render deep learning models interpretable, data-efficient, and robust have seen some success through hybridisation with rule-based systems, for example, in Neural Theorem Provers (NTPs). These neuro-symbolic models can induce interpretable rules and learn representations from data via back-propagation, while providing logical explanations for their predictions. However, they are restricted by their computational complexity, as they need to consider all possible proof paths for explaining a goal, thus rendering them unfit for large-scale applications. We present Conditional Theorem Provers (CTPs), an extension to NTPs that learns an optimal rule selection strategy via gradient-based optimisation. We show that CTPs are scalable and yield state-of-the-art results on the CLUTRR dataset, which tests systematic generalisation of neural models by learning to reason over smaller graphs and evaluating on larger ones. Finally, CTPs show better link prediction results on standard benchmarks in comparison with other neural-symbolic models, while being explainable. All source code and datasets are available online, at https://github.com/uclnlp/ctp.
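The abstract contrasts NTPs, which branch over every rule in the knowledge base at each proof step, with CTPs, which generate a small set of goal-conditioned rules to try via a trainable select module. The sketch below illustrates that idea only schematically; the module names, the linear parameterisation, the Gaussian matching kernel, and all dimensions are hypothetical simplifications, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
EMB = 8  # embedding dimension (hypothetical, for illustration only)

class SelectModule:
    """Goal-conditioned rule generator: here a plain linear map
    (a hypothetical parameterisation) from a goal embedding to the
    embedding of one candidate sub-goal to try next."""
    def __init__(self, emb_dim, rng):
        self.W = rng.normal(scale=0.1, size=(emb_dim, emb_dim))

    def __call__(self, goal_emb):
        return self.W @ goal_emb

def prove_depth1(goal_emb, selectors, fact_embs):
    """Score a goal by letting each selector propose one sub-goal and
    soft-matching it against known facts with a Gaussian kernel.
    Cost is O(#selectors * #facts): the proof tree branches only over
    the few generated rules, not over every rule in the knowledge base."""
    best = 0.0
    for select in selectors:
        subgoal = select(goal_emb)
        for fact in fact_embs:
            score = float(np.exp(-np.sum((subgoal - fact) ** 2)))
            best = max(best, score)
    return best

# Two selectors stand in for a learned rule-selection strategy; in a CTP
# their weights would be trained end-to-end by back-propagating the
# proof score, since every operation above is differentiable.
selectors = [SelectModule(EMB, rng) for _ in range(2)]
facts = [rng.normal(size=EMB) for _ in range(5)]
goal = rng.normal(size=EMB)
score = prove_depth1(goal, selectors, facts)
```

Because the final score depends smoothly on the selector weights, gradient-based optimisation can shape which rules get proposed for which goals, which is the scalability argument made above.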

Updated: 2020-08-25