Neural Proof Nets
arXiv - CS - Logic in Computer Science. Pub Date: 2020-09-26, DOI: arxiv-2009.12702
Konstantinos Kogkalidis, Michael Moortgat, Richard Moot

Linear logic and the linear λ-calculus have a long-standing tradition in the study of natural language form and meaning. Among the proof calculi of linear logic, proof nets are of particular interest, offering an attractive geometric representation of derivations that is unburdened by the bureaucratic complications of conventional proof-theoretic formats. Building on recent advances in set-theoretic learning, we propose a neural variant of proof nets based on Sinkhorn networks, which allows us to recast parsing as the problem of extracting syntactic primitives and permuting them into alignment. Our methodology induces a batch-efficient, end-to-end differentiable architecture that actualizes a formally grounded yet highly efficient neuro-symbolic parser. We test our approach on ÆThel, a dataset of type-logical derivations for written Dutch, where it manages to correctly transcribe raw text sentences into proofs and terms of the linear λ-calculus with an accuracy of up to 70%.
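To make the "permuting them into alignment" step concrete: Sinkhorn networks relax discrete permutation matrices into doubly-stochastic matrices by repeatedly normalizing the rows and columns of an exponentiated score matrix, which keeps the alignment step differentiable. The sketch below is a generic, minimal illustration of that operator, not the authors' implementation; the function name, the temperature parameter tau, and the iteration count are illustrative assumptions.

```python
import numpy as np

def sinkhorn(scores: np.ndarray, n_iters: int = 20, tau: float = 1.0) -> np.ndarray:
    """Approximate a doubly-stochastic matrix from a square score matrix by
    alternately normalizing rows and columns of exp(scores / tau) in log space.
    As tau decreases, the output approaches a hard permutation matrix."""
    log_p = scores / tau
    for _ in range(n_iters):
        # normalize rows (log-space subtraction of the row log-sum-exp)
        log_p = log_p - np.logaddexp.reduce(log_p, axis=1, keepdims=True)
        # normalize columns
        log_p = log_p - np.logaddexp.reduce(log_p, axis=0, keepdims=True)
    return np.exp(log_p)

# Toy usage: softly align three items with three candidate targets.
rng = np.random.default_rng(0)
scores = rng.normal(size=(3, 3))
p = sinkhorn(scores, n_iters=50, tau=0.1)
print(p.round(2))          # rows/columns each sum to ~1
print(p.argmax(axis=1))    # hard alignment read off per row
```

In a proof-net setting, such a (relaxed) permutation would score how negative atomic formulas link to positive ones, so the whole pipeline from raw text to axiom links can be trained end to end.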

Updated: 2020-09-29