Predicted Robustness as QoS for Deep Neural Network Models
Journal of Computer Science and Technology ( IF 1.2 ) Pub Date : 2020-09-30 , DOI: 10.1007/s11390-020-0482-6
Yue-Huan Wang , Ze-Nan Li , Jing-Wei Xu , Ping Yu , Taolue Chen , Xiao-Xing Ma

The adoption of deep neural network (DNN) models as integral parts of real-world software systems necessitates explicit consideration of their quality of service (QoS). It is well known that DNN models are prone to adversarial attacks, and thus it is vitally important to be aware of how robust a model's prediction is for a given input instance. A fragile prediction, even one with high confidence, is not trustworthy in light of the possibility of adversarial attacks. We propose that DNN models should produce a robustness value as an additional QoS indicator, alongside the confidence value, for each prediction they make. Existing approaches to robustness computation are based on adversarial search, which is usually too expensive to be exercised in real time. In this paper, we propose to predict, rather than compute, the robustness measure for each input instance. Specifically, our approach inspects the outputs of the neurons of the target model and trains another DNN model to predict the robustness. We focus on convolutional neural network (CNN) models in the current research. Experiments show that our approach is accurate, with only 10%–34% additional error compared with offline heavy-weight robustness analysis. It also significantly outperforms several alternative methods. We further validate the effectiveness of the approach when it is applied to detecting adversarial attacks and out-of-distribution inputs. Our approach demonstrates performance better than, or at least comparable to, state-of-the-art techniques.
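The core idea described above — inspecting a target model's neuron activations and training a second model to predict a per-input robustness value — can be illustrated with a minimal sketch. This is not the paper's implementation: the tiny random "target model", the synthetic robustness labels, and the linear predictor below are all stand-ins (the paper trains a DNN predictor on labels obtained from offline adversarial search).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "target model": a tiny two-layer network with fixed random weights.
W1 = rng.normal(size=(8, 16)); b1 = np.zeros(16)
W2 = rng.normal(size=(16, 3)); b2 = np.zeros(3)

def target_forward(x):
    """Return (logits, hidden activations) for an input batch x."""
    h = np.maximum(x @ W1 + b1, 0.0)   # ReLU hidden layer: the neurons we inspect
    return h @ W2 + b2, h

# Synthetic training set: inputs plus robustness labels. In the paper these
# labels would come from an offline, heavyweight adversarial search.
X = rng.normal(size=(200, 8))
_, H = target_forward(X)
true_w = rng.normal(size=16)
y = 1.0 / (1.0 + np.exp(-(H @ true_w)))   # synthetic robustness values in (0, 1)

# Robustness predictor: here just a linear model on the neuron activations,
# trained by gradient descent on squared error (a stand-in for the second DNN).
w = np.zeros(16); b = 0.0
lr = 0.01
for _ in range(2000):
    pred = H @ w + b
    grad = pred - y
    w -= lr * (H.T @ grad) / len(y)
    b -= lr * grad.mean()

# At serving time, each prediction is returned together with its predicted
# robustness, as an additional QoS indicator alongside the confidence value.
x_new = rng.normal(size=(1, 8))
logits, h_new = target_forward(x_new)
robustness = float(h_new @ w + b)
```

The key point the sketch captures is the cost trade-off: the expensive adversarial search is run once, offline, to produce labels, while the online cost per input is a single cheap forward pass through the predictor.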

Updated: 2020-09-30