Logical Neural Networks
arXiv - CS - Logic in Computer Science Pub Date : 2020-06-23 , DOI: arxiv-2006.13155
Ryan Riegel, Alexander Gray, Francois Luus, Naweed Khan, Ndivhuwo Makondo, Ismail Yunus Akhalwaya, Haifeng Qian, Ronald Fagin, Francisco Barahona, Udit Sharma, Shajith Ikbal, Hima Karanam, Sumit Neelam, Ankita Likhyani, Santosh Srivastava

We propose a novel framework seamlessly providing key properties of both neural nets (learning) and symbolic logic (knowledge and reasoning). Every neuron has a meaning as a component of a formula in a weighted real-valued logic, yielding a highly interpretable disentangled representation. Inference is omnidirectional rather than focused on predefined target variables, and corresponds to logical reasoning, including classical first-order logic theorem proving as a special case. The model is end-to-end differentiable, and learning minimizes a novel loss function capturing logical contradiction, yielding resilience to inconsistent knowledge. It also enables the open-world assumption by maintaining bounds on truth values which can have probabilistic semantics, yielding resilience to incomplete knowledge.
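To make the abstract's key ideas concrete, here is a minimal sketch of one such neuron: a conjunction in a weighted real-valued (Łukasiewicz-style) logic that propagates [lower, upper] truth-value bounds, together with a contradiction penalty that is positive only when the bounds cross. The function names, the bias parameter `beta`, and the exact clamping scheme are illustrative assumptions, not the paper's definitive formulation.

```python
# Illustrative sketch (not the authors' reference implementation):
# a weighted real-valued AND neuron over [lower, upper] truth bounds.

def weighted_and(bounds, weights, beta=1.0):
    """Weighted Lukasiewicz-style AND over truth-value bounds.

    bounds:  list of (lower, upper) pairs, each in [0, 1]
    weights: nonnegative per-operand importance weights (assumed names)
    beta:    bias controlling the neuron's threshold (assumed name)
    Returns the neuron's own (lower, upper) truth bounds.
    """
    def clamp(x):
        return max(0.0, min(1.0, x))

    # Each operand pulls the result down in proportion to its falsity.
    lower = clamp(beta - sum(w * (1.0 - l) for (l, _), w in zip(bounds, weights)))
    upper = clamp(beta - sum(w * (1.0 - u) for (_, u), w in zip(bounds, weights)))
    return lower, upper


def contradiction_loss(lower, upper):
    """Penalty capturing logical contradiction: positive only when
    the lower bound exceeds the upper bound."""
    return max(0.0, lower - upper)
```

With unit weights this reduces to classical conjunction on crisp inputs: two true operands yield bounds (1.0, 1.0), while one false operand yields (0.0, 0.0). Keeping separate lower and upper bounds is what supports the open-world assumption: an unknown proposition is simply (0.0, 1.0), and the loss activates only when learned knowledge forces the bounds into contradiction.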

Updated: 2020-06-24