Reachable sets of classifiers and regression models: (non-)robustness analysis and robust training
Machine Learning (IF 7.5), Pub Date: 2021-04-28, DOI: 10.1007/s10994-021-05973-0
Anna-Kathrin Kopetzki, Stephan Günnemann

Neural networks achieve outstanding accuracy in classification and regression tasks. However, understanding their behavior remains an open challenge and raises questions about the robustness, explainability and reliability of their predictions. We answer these questions by computing reachable sets of neural networks, i.e. sets of outputs resulting from continuous sets of inputs. We provide two efficient approaches that lead to over- and under-approximations of the reachable set. This principle is highly versatile, as we show. First, we use it to analyze and enhance the robustness properties of both classifiers and regression models, in contrast to existing works, which mainly focus on classification. Specifically, we verify (non-)robustness, propose a robust training procedure, and show that our approach outperforms adversarial attacks as well as state-of-the-art methods for verifying classifiers under non-norm-bound perturbations. Second, we provide techniques to distinguish between reliable and unreliable predictions for unlabeled inputs, to quantify the influence of each feature on a prediction, and to compute a feature ranking.
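To make the idea of an over-approximated reachable set concrete, the sketch below uses plain interval bound propagation, a classic technique and not necessarily the approach proposed in the paper, to bound the outputs of a small ReLU network over an L-infinity box of inputs. The network weights, the input point, the radius eps, and the function names are illustrative assumptions, not taken from the paper.

import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate the box [lo, hi] through the affine map x -> W @ x + b."""
    center = (lo + hi) / 2.0
    radius = (hi - lo) / 2.0
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius      # worst-case spread of the box
    return new_center - new_radius, new_center + new_radius

def reachable_box(lo, hi, layers):
    """Over-approximate the reachable set of a ReLU network by an output box."""
    for i, (W, b) in enumerate(layers):
        lo, hi = interval_affine(lo, hi, W, b)
        if i < len(layers) - 1:          # ReLU on hidden layers; it is monotone,
            lo = np.maximum(lo, 0.0)     # so clamping both bounds is exact per layer
            hi = np.maximum(hi, 0.0)
    return lo, hi

# Toy 2-4-2 network with made-up weights; eps is an L-infinity input radius.
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(4, 2)), np.zeros(4)),
          (rng.normal(size=(2, 4)), np.zeros(2))]
x = np.array([0.5, -0.2])
eps = 0.05
lo, hi = reachable_box(x - eps, x + eps, layers)
# If even the worst case inside the output box keeps logit 0 above logit 1,
# the class-0 prediction is certifiably robust on this input box.
print("output box:", lo, hi)
print("certified robust for class 0:", bool(lo[0] > hi[1]))

Because the output box contains the true reachable set, it can certify robustness but may be loose; an under-approximation (for instance, the outputs of sampled inputs from the box) can conversely certify non-robustness by exhibiting a reachable output that flips the prediction.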



Updated: 2021-04-29