ReluDiff: Differential Verification of Deep Neural Networks
arXiv - CS - Software Engineering. Pub Date: 2020-01-10, DOI: arxiv-2001.03662
Brandon Paulsen, Jingbo Wang, Chao Wang

As deep neural networks are increasingly deployed in practice, their efficiency has become an important issue. While there are compression techniques for reducing a network's size, energy consumption, and computational requirements, they only demonstrate empirically that accuracy is preserved; they lack formal guarantees about the compressed network, e.g., in the presence of adversarial examples. Existing verification techniques such as Reluplex, ReluVal, and DeepPoly provide formal guarantees, but they are designed for analyzing a single network rather than the relationship between two networks. To fill this gap, we develop a new method for differential verification of two closely related networks. Our method consists of a fast but approximate forward interval-analysis pass, followed by a backward pass that iteratively refines the approximation until the desired property is verified. It has two main innovations. During the forward pass, we exploit the structural and behavioral similarities of the two networks to more accurately bound the difference between their output neurons. During the backward pass, we leverage gradient differences to compute the most beneficial refinement more accurately. Our experiments show that, compared to state-of-the-art verification tools, our method achieves orders-of-magnitude speedups and proves many more properties.
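The two passes described in the abstract can be illustrated concretely. The NumPy sketch below is our own simplified reading of the idea, not the authors' implementation: it assumes fully connected ReLU networks with identical shapes, omits biases, uses a coarse three-case ReLU rule in place of the paper's finer case analysis, and all function names are hypothetical.

```python
import numpy as np

def interval_matmul(W, l, u):
    """Propagate the interval vector [l, u] through the linear map W."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ l + Wn @ u, Wp @ u + Wn @ l

def forward_diff_intervals(weights1, weights2, x_l, x_u):
    """Simplified differential forward pass over the input box [x_l, x_u].

    weights1/weights2: per-layer weight matrices of two same-shape ReLU
    networks (biases omitted for brevity). Returns interval bounds on
    f2(x) - f1(x)."""
    l1, u1 = x_l.copy(), x_u.copy()      # bounds on network 1's activations
    l2, u2 = x_l.copy(), x_u.copy()      # bounds on network 2's activations
    dl = np.zeros_like(x_l)              # same input => zero initial difference
    du = np.zeros_like(x_u)
    for i, (W1, W2) in enumerate(zip(weights1, weights2)):
        zl1, zu1 = interval_matmul(W1, l1, u1)   # per-network pre-activations
        zl2, zu2 = interval_matmul(W2, l2, u2)
        # Key idea: z2 - z1 = W1 (a2 - a1) + (W2 - W1) a2, which is much
        # tighter than subtracting independently computed bounds when the
        # two networks are closely related (small W2 - W1).
        dzl_a, dzu_a = interval_matmul(W1, dl, du)
        dzl_b, dzu_b = interval_matmul(W2 - W1, l2, u2)
        dzl, dzu = dzl_a + dzl_b, dzu_a + dzu_b
        if i == len(weights1) - 1:
            return dzl, dzu              # no ReLU on the output layer
        # Coarse ReLU case split: both stably active -> difference passes
        # through; both stably inactive -> difference is exactly 0;
        # otherwise intersect a Lipschitz bound with plain subtraction.
        act = (zl1 >= 0) & (zl2 >= 0)
        inact = (zu1 <= 0) & (zu2 <= 0)
        l1, u1 = np.maximum(zl1, 0.0), np.maximum(zu1, 0.0)
        l2, u2 = np.maximum(zl2, 0.0), np.maximum(zu2, 0.0)
        lip_l, lip_u = np.minimum(dzl, 0.0), np.maximum(dzu, 0.0)
        dl = np.where(act, dzl, np.where(inact, 0.0, np.maximum(lip_l, l2 - u1)))
        du = np.where(act, dzu, np.where(inact, 0.0, np.minimum(lip_u, u2 - l1)))
    return dl, du
```

The backward refinement can likewise be approximated by a plain bisection loop. Again, this is only a sketch: the paper ranks input dimensions by gradient differences, whereas this version falls back to splitting the widest dimension.

```python
def verify_diff(weights1, weights2, x_l, x_u, eps, budget=10_000):
    """Try to prove |f2(x) - f1(x)| <= eps on the whole input box by
    splitting sub-boxes whose difference bounds are still too loose."""
    work = [(x_l, x_u)]
    while work:
        if budget == 0:
            return False                 # inconclusive within the budget
        budget -= 1
        l, u = work.pop()
        dl, du = forward_diff_intervals(weights1, weights2, l, u)
        if dl.min() >= -eps and du.max() <= eps:
            continue                     # property holds on this sub-box
        k = int(np.argmax(u - l))        # split heuristic (paper: gradients)
        mid = 0.5 * (l[k] + u[k])
        u_left, l_right = u.copy(), l.copy()
        u_left[k], l_right[k] = mid, mid
        work.extend([(l, u_left), (l_right, u)])
    return True
```

With weights2 obtained by, say, quantizing weights1, a call such as verify_diff(ws1, ws2, x_l, x_u, eps) would bound the output perturbation introduced by compression over the given input box, in the spirit of the differential verification the abstract describes.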

Updated: 2020-01-31