Evaluating Relaxations of Logic for Neural Networks: A Comprehensive Study
arXiv - CS - Artificial Intelligence. Pub Date: 2021-07-28. DOI: arXiv:2107.13646. Authors: Mattia Medina Grespan, Ashim Gupta, Vivek Srikumar
Symbolic knowledge can provide crucial inductive bias for training neural
models, especially in low data regimes. A successful strategy for incorporating
such knowledge involves relaxing logical statements into sub-differentiable
losses for optimization. In this paper, we study the question of how best to
relax logical expressions that represent labeled examples and knowledge about a
problem; we focus on sub-differentiable t-norm relaxations of logic. We present
theoretical and empirical criteria for characterizing which relaxation would
perform best in various scenarios. In our theoretical study driven by the goal
of preserving tautologies, the Lukasiewicz t-norm performs best. However, in
our empirical analysis on the text chunking and digit recognition tasks, the
product t-norm achieves best predictive performance. We analyze this apparent
discrepancy, and conclude with a list of best practices for defining loss
functions via logic.
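To make the abstract's comparison concrete, here is a minimal sketch (not the authors' code) of the two t-norm relaxations it names. Each maps logical connectives to [0, 1]-valued functions that can serve as differentiable losses; the tautology check at the end illustrates why the theoretical criterion of preserving tautologies favors the Lukasiewicz t-norm, even though the paper finds the product t-norm wins empirically.

```python
def neg(a):
    # Standard fuzzy negation, shared by both relaxations.
    return 1.0 - a

def product_and(a, b):
    # Product t-norm: conjunction as multiplication.
    return a * b

def product_or(a, b):
    # Product t-conorm (probabilistic sum): relaxed disjunction.
    return a + b - a * b

def lukasiewicz_and(a, b):
    # Lukasiewicz t-norm: truncated addition.
    return max(0.0, a + b - 1.0)

def lukasiewicz_or(a, b):
    # Lukasiewicz t-conorm (bounded sum): relaxed disjunction.
    return min(1.0, a + b)

def loss(truth_value):
    # A relaxed statement becomes a loss by penalizing its
    # distance from full truth (1.0).
    return 1.0 - truth_value

# The law of excluded middle, a OR (NOT a), is a classical tautology.
# Under Lukasiewicz it evaluates to exactly 1 for every a, so it
# contributes zero loss; under the product t-conorm it falls below 1
# for any a strictly between 0 and 1.
a = 0.5
assert lukasiewicz_or(a, neg(a)) == 1.0   # tautology preserved
assert product_or(a, neg(a)) == 0.75      # tautology not preserved
```

The exact loss construction and connective choices in the paper may differ; this sketch only fixes the algebraic contrast between the two t-norms.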
Updated: 2021-07-30