Exploiting Vulnerabilities in Deep Neural Networks: Adversarial and Fault-Injection Attacks
arXiv - CS - Cryptography and Security. Pub Date: 2021-05-05, DOI: arxiv-2105.03251
Faiq Khalid, Muhammad Abdullah Hanif, Muhammad Shafique

From tiny pacemaker chips to aircraft collision-avoidance systems, state-of-the-art Cyber-Physical Systems (CPS) increasingly rely on Deep Neural Networks (DNNs). However, as various studies have concluded, DNNs are highly susceptible to security threats, including adversarial attacks. In this paper, we first discuss the different vulnerabilities that can be exploited to mount security attacks on neural network-based systems. We then provide an overview of existing adversarial and fault-injection-based attacks on DNNs. We also present a brief analysis highlighting the challenges in the practical implementation of adversarial attacks. Finally, we discuss prospective ways to develop robust DNN-based systems that are resilient to adversarial and fault-injection attacks.
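For readers unfamiliar with the two attack classes named in the abstract, two minimal, hypothetical sketches follow. They illustrate the general ideas only, not the specific formulations surveyed in the paper. The first is the Fast Gradient Sign Method (FGSM), a classic adversarial attack; the model, inputs, and labels here are placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Craft an adversarial example by stepping along the sign of the input gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Shift each input element by +/-epsilon in the direction that increases the loss.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep inputs in a valid [0, 1] range
```

The second sketch emulates a fault-injection attack in software: flipping a single bit in the stored representation of a model parameter, as a hardware fault (e.g., a Rowhammer-style bit flip) might. The struct round-trip converts between a Python float and its 32-bit IEEE-754 bit pattern.

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit in the IEEE-754 representation of a 32-bit float."""
    bits = struct.unpack("<I", struct.pack("<f", value))[0]
    return struct.unpack("<f", struct.pack("<I", bits ^ (1 << bit)))[0]

# Flipping a high exponent bit of a single weight changes its magnitude
# drastically: flip_bit(0.5, 30) yields roughly 1.7e38, which is typically
# enough to corrupt a DNN's output on its own.
```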

Updated: 2021-05-10