Breaking Neural Reasoning Architectures With Metamorphic Relation-Based Adversarial Examples
IEEE Transactions on Neural Networks and Learning Systems (IF 10.2). Pub Date: 2021-04-23. DOI: 10.1109/tnnls.2021.3072166
Alvin Chan, Lei Ma, Felix Juefei-Xu, Yew-Soon Ong, Xiaofei Xie, Minhui Xue, Yang Liu

The ability to read, reason, and infer lies at the heart of neural reasoning architectures. After all, the ability to perform logical reasoning over language remains a coveted goal of Artificial Intelligence. To this end, models such as the Turing-complete differentiable neural computer (DNC) boast of real logical reasoning capabilities, along with the ability to reason beyond simple surface-level matching. In this brief, we propose the first probe into DNC’s logical reasoning capabilities with a focus on text-based question answering (QA). More concretely, we propose a conceptually simple but effective adversarial attack based on metamorphic relations. Our proposed adversarial attack reduces DNCs’ state-of-the-art accuracy from 100% to 1.5% in the worst case, exposing weaknesses and susceptibilities in modern neural reasoning architectures. We further empirically explore possibilities to defend against such attacks and demonstrate the utility of our adversarial framework as a simple scalable method to improve model adversarial robustness.
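The abstract does not spell out the specific metamorphic relations used in the attack, so the sketch below only illustrates the general idea under one assumed relation: inserting sentences that are irrelevant to the question leaves the correct answer unchanged, so any change in the model's prediction marks an adversarial example. The `predict` callable and `distractor_pool` are hypothetical placeholders for a trained DNC-style QA model and a source of question-irrelevant sentences; they are not part of the paper's artifacts.

```python
import random

def metamorphic_attack(predict, story, question, distractor_pool,
                       n_inserts=1, seed=0):
    """Probe a QA model with an answer-preserving (metamorphic) perturbation.

    Assumed relation: splicing question-irrelevant distractor sentences into
    the story does not change the ground-truth answer. If the model's
    prediction flips, the perturbed story is flagged as adversarial.

    predict: callable(story_sentences, question) -> answer string
             (stands in for a trained DNC or other neural reasoning model).
    """
    rng = random.Random(seed)
    original_answer = predict(story, question)

    # Build a perturbed story by inserting distractor sentences at random
    # positions; the distractors come from unrelated stories, so the correct
    # answer is unchanged by construction.
    perturbed = list(story)
    for _ in range(n_inserts):
        pos = rng.randrange(len(perturbed) + 1)
        perturbed.insert(pos, rng.choice(distractor_pool))

    perturbed_answer = predict(perturbed, question)
    is_adversarial = perturbed_answer != original_answer
    return perturbed, perturbed_answer, is_adversarial
```

Running this check over a bAbI-style test set and counting flipped predictions gives one simple way to estimate the accuracy drop the paper reports; the same perturbed stories could also be folded back into training as a basic adversarial-robustness defence, in the spirit of the framework the authors describe.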

Updated: 2021-04-23