Increasing the Confidence of Deep Neural Networks by Coverage Analysis
IEEE Transactions on Software Engineering (IF 6.5), Pub Date: 2022-03-30, DOI: 10.1109/tse.2022.3163682
Giulio Rossolini, Alessandro Biondi, Giorgio Buttazzo

The strong performance of machine learning algorithms and deep neural networks on several perception and control tasks is pushing industry to adopt such technologies in safety-critical applications, such as autonomous robots and self-driving vehicles. At present, however, several issues need to be solved to make deep learning methods more trustworthy, predictable, safe, and secure against adversarial attacks. Although several methods have been proposed to improve the trustworthiness of deep neural networks, most of them are tailored to specific classes of adversarial examples and hence fail to detect other corner cases or unsafe inputs that deviate heavily from the training samples. This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance model robustness against different unsafe inputs. In particular, four coverage analysis methods are proposed and tested in the architecture to evaluate multiple detection logics. Experimental results show that the proposed approach is effective at detecting both powerful adversarial examples and out-of-distribution inputs, while introducing limited extra execution time and memory overhead.
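To make the coverage idea concrete, the following is a minimal PyTorch sketch of one plausible coverage-style runtime monitor. It is an illustration under assumptions, not the paper's actual architecture or any of its four proposed methods: per-neuron activation ranges are profiled on trusted training data, and a test input is flagged when too large a fraction of its activations falls outside those ranges. The CoverageMonitor class, the monitored layer names, and the 5% threshold are all hypothetical choices made for this example.

    # Hypothetical coverage-style monitor (illustrative sketch, not the
    # authors' implementation): profile per-neuron activation ranges on
    # trusted data, then flag inputs with many out-of-range activations.
    import torch
    import torch.nn as nn

    class CoverageMonitor:
        def __init__(self, model: nn.Module, layer_names):
            self.model = model.eval()
            self.ranges = {}       # layer name -> (min, max) per neuron
            self.activations = {}  # activations captured on the last forward pass
            for name, module in model.named_modules():
                if name in layer_names:
                    module.register_forward_hook(self._make_hook(name))

        def _make_hook(self, name):
            def hook(module, inputs, output):
                # Flatten to (batch, neurons) so each column is one "neuron".
                self.activations[name] = output.detach().flatten(1)
            return hook

        @torch.no_grad()
        def profile(self, loader):
            """Record per-neuron min/max activations over trusted data."""
            for x, _ in loader:
                self.model(x)
                for name, act in self.activations.items():
                    lo, hi = act.min(0).values, act.max(0).values
                    if name not in self.ranges:
                        self.ranges[name] = (lo, hi)
                    else:
                        plo, phi = self.ranges[name]
                        self.ranges[name] = (torch.minimum(plo, lo),
                                             torch.maximum(phi, hi))

        @torch.no_grad()
        def check(self, x, threshold=0.05):
            """Flag x as unsafe if too many activations fall out of range."""
            self.model(x)
            out_of_range, total = 0, 0
            for name, (lo, hi) in self.ranges.items():
                act = self.activations[name]
                out_of_range += ((act < lo) | (act > hi)).sum().item()
                total += act.numel()
            return out_of_range / max(total, 1) > threshold

Hypothetical usage: monitor = CoverageMonitor(model, {"layer3", "fc"}); monitor.profile(train_loader); then monitor.check(x) returns True for inputs whose activation pattern was never covered during profiling. Profiling costs one pass over the trusted data, and the runtime check only adds comparisons against stored bounds, consistent with the lightweight, low-overhead goal stated in the abstract.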
