A New Lightweight In Situ Adversarial Sample Detector for Edge Deep Neural Network
IEEE Journal on Emerging and Selected Topics in Circuits and Systems (IF 3.7) Pub Date: 2021-04-27, DOI: 10.1109/jetcas.2021.3076101
Si Wang, Wenye Liu, Chip-Hong Chang

The flourishing of the Internet of Things (IoT) has rekindled on-premise computing to allow data to be analyzed closer to the source. To support edge Artificial Intelligence (AI), hardware accelerators, open-source AI model compilers and commercially available toolkits have evolved to facilitate the development and deployment of applications with AI at their core. This paradigm shift in deep learning computation does not, however, reduce the vulnerability of deep neural networks (DNNs) to adversarial attacks, but it introduces a difficult catch-up for existing defenses. This is because existing methodologies rely mainly on off-line analysis to detect adversarial inputs, under the assumption that the deep learning model is implemented on a 32-bit floating-point graphics processing unit (GPU) instance. In this paper, we propose a new hardware-oriented approach for in-situ detection of adversarial inputs fed through a spatial DNN accelerator architecture or a third-party DNN Intellectual Property (IP) core implemented on the edge. Our method exploits controlled glitch injection into the clock signal of the DNN accelerator to maximize the information gain for discriminating adversarial inputs from benign ones. A light gradient boosting machine (lightGBM) detector is constructed by analyzing the prediction probabilities of the unmutated and mutated models and the label-change inconsistency between the adversarial and benign samples in the training dataset. With negligibly small hardware overhead, the glitch injection circuit and the trained lightGBM detector can be easily implemented alongside the deep learning model on a Xilinx ZU9EG chip. The effectiveness of the proposed detector is validated against four state-of-the-art adversarial attacks on two different types and scales of DNN models, VGG16 and ResNet50, for a thousand-class visual object recognition application. The results show a significant increase in true positive rate and a substantial reduction in false positive rate against the Fast Gradient Sign Method (FGSM), Iterative FGSM (I-FGSM), C&W and universal perturbation attacks, compared with modern software-oriented adversarial sample detection methods.
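To make the detection pipeline concrete, the Python sketch below illustrates the general idea only: a lightGBM classifier is trained on features derived from the softmax outputs of an unmutated inference pass and a clock-glitch-mutated inference pass of the same input. The feature set, the helper detection_features, and the randomly generated placeholder arrays are illustrative assumptions, not the authors' implementation or data; in the paper, the two probability vectors come from the DNN accelerator with and without controlled glitch injection.

import numpy as np
import lightgbm as lgb

def detection_features(p_clean: np.ndarray, p_glitched: np.ndarray) -> np.ndarray:
    # Build per-sample features from the softmax outputs of the unmutated (p_clean)
    # and glitch-mutated (p_glitched) runs; each array has shape (N, num_classes).
    top_clean = p_clean.max(axis=1)                      # confidence of unmutated prediction
    top_glitched = p_glitched.max(axis=1)                # confidence of mutated prediction
    label_flip = (p_clean.argmax(axis=1) != p_glitched.argmax(axis=1)).astype(float)
    kl = np.sum(p_clean * (np.log(p_clean + 1e-12) - np.log(p_glitched + 1e-12)), axis=1)
    return np.stack([top_clean, top_glitched, top_clean - top_glitched, label_flip, kl], axis=1)

# Placeholder data standing in for probabilities collected from benign and
# adversarial (e.g., FGSM, I-FGSM, C&W, universal perturbation) inputs;
# y = 1 marks an adversarial sample, y = 0 a benign one.
rng = np.random.default_rng(0)
p_clean = rng.dirichlet(np.ones(1000), size=2000)
p_glitched = rng.dirichlet(np.ones(1000), size=2000)
y = rng.integers(0, 2, size=2000)

X = detection_features(p_clean, p_glitched)
detector = lgb.LGBMClassifier(n_estimators=200, max_depth=4, learning_rate=0.05)
detector.fit(X, y)
scores = detector.predict_proba(X)[:, 1]   # probability that an input is adversarial

Because the model mutation is produced in hardware by the glitch injection circuit, such a detector only needs two inference passes per input and a small gradient-boosted classifier at the output, which is consistent with the negligible overhead reported for the Xilinx ZU9EG implementation.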

Updated: 2021-04-27