Reluplex: a calculus for reasoning about deep neural networks
Formal Methods in System Design (IF 0.7) Pub Date: 2021-07-01, DOI: 10.1007/s10703-021-00363-7
Guy Katz, Clark Barrett, David L. Dill, Kyle Julian, Mykel J. Kochenderfer

Deep neural networks have emerged as a widely used and effective means for tackling complex, real-world problems. However, a major obstacle in applying them to safety-critical systems is the great difficulty in providing formal guarantees about their behavior. We present a novel, scalable, and efficient technique for verifying properties of deep neural networks (or providing counter-examples). The technique is based on the simplex method, extended to handle the non-convex Rectified Linear Unit (ReLU) activation function, which is a crucial ingredient in many modern neural networks. The verification procedure tackles neural networks as a whole, without making any simplifying assumptions. We evaluated our technique on a prototype deep neural network implementation of the next-generation airborne collision avoidance system for unmanned aircraft (ACAS Xu). Results show that our technique can successfully prove properties of networks that are an order of magnitude larger than the largest networks that could be verified previously.
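For intuition about what "extending simplex to handle ReLU" means, here is a minimal sketch in Python using SciPy's linprog. It shows not the Reluplex calculus itself but the naive eager case-splitting baseline it improves on: each ReLU constraint y = max(0, x) is split into its "active" (y = x, x >= 0) and "inactive" (y = 0, x <= 0) linear phases, and every phase combination is checked with an off-the-shelf LP solver. The toy network, input bounds, and safety property below are illustrative assumptions, not taken from the paper.

```python
# Naive eager case-splitting for ReLU verification (illustrative sketch).
# Toy network (assumed for illustration): z = 2*x - 1, y = ReLU(z),
# output = y, with input x in [0, 1].
# Property to refute: "output never exceeds 0.8", i.e. search for a
# counter-example with y >= 0.8.
from itertools import product
from scipy.optimize import linprog

# Variable order in all constraint rows: [x, z, y].
def check_case(active):
    A_eq = [[2.0, -1.0, 0.0]]           # z = 2x - 1  ->  2x - z = 1
    b_eq = [1.0]
    A_ub, b_ub = [], []
    if active:                           # active phase: y = z, z >= 0
        A_eq.append([0.0, 1.0, -1.0]); b_eq.append(0.0)
        A_ub.append([0.0, -1.0, 0.0]); b_ub.append(0.0)
    else:                                # inactive phase: y = 0, z <= 0
        A_eq.append([0.0, 0.0, 1.0]);  b_eq.append(0.0)
        A_ub.append([0.0, 1.0, 0.0]);  b_ub.append(0.0)
    A_ub.append([0.0, 0.0, -1.0]); b_ub.append(-0.8)   # y >= 0.8
    bounds = [(0.0, 1.0), (None, None), (None, None)]  # x bounded, z, y free
    # Zero objective: we only care whether the LP is feasible.
    res = linprog([0.0, 0.0, 0.0], A_ub=A_ub, b_ub=b_ub,
                  A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x if res.success else None

# Enumerate all 2^k phase combinations (here k = 1 ReLU).
for phases in product([True, False], repeat=1):
    cex = check_case(*phases)
    if cex is not None:
        print("counter-example (x, z, y):", cex)  # e.g. x = 0.9 violates it
        break
else:
    print("property holds: no phase combination is feasible")
```

Reluplex avoids this exponential eager enumeration: it extends the simplex calculus so that ReLU constraints may be temporarily violated during the search and repaired by pivot-like update rules, splitting on a ReLU's phase only as a last resort.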



Updated: 2021-07-02