Advances in verification of ReLU neural networks
Journal of Global Optimization ( IF 1.8 ) Pub Date : 2020-10-27 , DOI: 10.1007/s10898-020-00949-1
Ansgar Rössig , Milena Petkovic

We consider the problem of verifying linear properties of neural networks. Despite their success in many classification and prediction tasks, neural networks may return unexpected results for certain inputs. This is highly problematic with respect to the application of neural networks for safety-critical tasks, e.g. in autonomous driving. We provide an overview of algorithmic approaches that aim to provide formal guarantees on the behaviour of neural networks. Moreover, we present new theoretical results with respect to the approximation of ReLU neural networks. Furthermore, we implement a solver for the verification of ReLU neural networks which combines mixed integer programming with specialized branching and approximation techniques. To evaluate its performance, we conduct an extensive computational study on test instances based on the ACAS Xu system and the MNIST handwritten digit data set. The results indicate that our approach is very competitive: it outperforms the solvers of Bunel et al. (in: Bengio, Wallach, Larochelle, Grauman, Cesa-Bianchi, Garnett (eds) Advances in neural information processing systems (NIPS 2018), 2018) and Reluplex (Katz et al. in: Computer aided verification—29th international conference, CAV 2017, Heidelberg, Germany, July 24–28, 2017, Proceedings, 2017). In comparison to the solvers ReluVal (Wang et al. in: 27th USENIX security symposium (USENIX Security 18), USENIX Association, Baltimore, 2018a) and Neurify (Wang et al. in: 32nd Conference on neural information processing systems (NIPS), Montreal, 2018b), the number of necessary branchings is much smaller. Our solver is publicly available and able to solve the verification problem for instances which do not have independent bounds for each input neuron.
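To illustrate the kind of approximation step such MIP-based verifiers rely on, the sketch below (not the authors' solver; the helper name and network shapes are illustrative assumptions) propagates interval bounds through a ReLU network. Per-neuron bounds of this kind are a standard prerequisite for building a big-M mixed-integer encoding of the ReLU constraints.

```python
# Minimal sketch of interval bound propagation for a fully connected
# ReLU network. This is an assumed illustration, not the paper's solver.
import numpy as np

def interval_bounds(weights, biases, lo, up):
    """Propagate elementwise input bounds [lo, up] through an MLP with
    ReLU activations on hidden layers; return pre-activation bounds
    (l, u) for every layer."""
    bounds = []
    for i, (W, b) in enumerate(zip(weights, biases)):
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        # The lower bound pairs positive weights with lo and negative
        # weights with up; the upper bound is the mirror image.
        new_lo = W_pos @ lo + W_neg @ up + b
        new_up = W_pos @ up + W_neg @ lo + b
        bounds.append((new_lo, new_up))
        if i < len(weights) - 1:  # apply ReLU on hidden layers only
            lo, up = np.maximum(new_lo, 0.0), np.maximum(new_up, 0.0)
    return bounds
```

Given bounds l <= x <= u for a pre-activation x, the standard big-M encoding of y = max(0, x) with a binary variable z is: y >= 0, y >= x, y <= u*z, y <= x - l*(1 - z); tighter bounds from the propagation above directly shrink the big-M constants and hence the MIP relaxation.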



Updated: 2020-10-30