Stealthy and Robust Glitch Injection Attack on Deep Learning Accelerator for Target With Variational Viewpoint
IEEE Transactions on Information Forensics and Security (IF 6.3) Pub Date: 2020-12-23, DOI: 10.1109/tifs.2020.3046858
Wenye Liu , Chip-Hong Chang , Fan Zhang

Deep neural network (DNN) accelerators overcome the power and memory walls for executing neural-net models locally on edge-computing devices to support sophisticated AI applications. The advocacy of the "model once, run optimized anywhere" paradigm introduces a potential new security threat to edge intelligence that is methodologically different from the well-known adversarial examples. Existing adversarial examples modify the input samples presented to an AI application, either digitally or physically, to cause a misclassification. Nevertheless, these input-based perturbations are neither robust nor surreptitious on multi-view targets. To generate a good adversarial example that misclassifies a real-world target under varying viewing angle, lighting and distance, a substantial number of samples of the target is required to extract the rare anomalies that can cross the decision boundary, and the feasible perturbations are large and visually perceptible. In this paper, we propose a new glitch injection attack on DNN accelerators that is capable of misclassifying a target under variational viewpoints. The glitches injected into the computation clock signal induce transitory but disruptive errors in the intermediate results of the multiply-and-accumulate (MAC) operations. The attack pattern for each target of interest consists of sparse instantaneous glitches, which can be derived from just one sample of the target. Two modes of attack patterns are derived, and their effectiveness is demonstrated on four representative ImageNet models implemented on the Deep-learning Processing Unit (DPU) of an edge FPGA with its DNN development toolchain. The attack success rates are evaluated on 118 objects under 61 diverse sensing conditions, comprising 25 viewing angles (−60° to 60°), 24 illumination directions and 12 color temperatures. In the covert mode, the success rates of our attack exceed those of existing stealthy adversarial examples by more than 16.3%, with only two glitches injected into the tens of thousands to a million cycles of one complete inference. In the robust mode, the attack success rates on all four DNNs are above 96.2%, with an average glitch intensity of 1.4% and a maximum glitch intensity of 10.2%.
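For intuition on the fault mechanism summarized above, the Python sketch below models how a single clock glitch that violates the timing of a MAC unit can corrupt one intermediate partial sum. It was written for this summary: the fixed-point bit widths, the truncation fault model and the glitched-cycle index are all illustrative assumptions, not the authors' implementation or the DPU's measured behaviour.

```python
import numpy as np

def mac_with_glitch(weights, activations, glitch_cycles=(), frac_bits=8, acc_bits=24):
    """Fixed-point MAC accumulation with a simple clock-glitch fault model.

    On a glitched cycle the accumulator is assumed to latch before the
    adder's carry chain settles, so only the low-order `acc_bits` of the
    partial sum survive (an illustrative timing-fault abstraction, not
    the exact DPU behaviour characterised in the paper).
    """
    acc = 0
    for cycle, (w, a) in enumerate(zip(weights, activations)):
        product = int(round(w * (1 << frac_bits))) * int(round(a * (1 << frac_bits)))
        acc += product
        if cycle in glitch_cycles:
            acc &= (1 << acc_bits) - 1          # truncate: high-order bits are lost
            if acc >= 1 << (acc_bits - 1):      # reinterpret as signed two's complement
                acc -= 1 << acc_bits
    return acc / float(1 << (2 * frac_bits))

rng = np.random.default_rng(0)
w = rng.normal(size=256)   # one filter's weights
x = rng.normal(size=256)   # one receptive-field patch of activations

clean = mac_with_glitch(w, x)
faulty = mac_with_glitch(w, x, glitch_cycles={100})  # a single sparse glitch
print(f"clean={clean:+.4f}  faulty={faulty:+.4f}  error={faulty - clean:+.4f}")
```

Because a corrupted partial sum propagates through every downstream layer, even one such transient error can shift a logit across the decision boundary, which suggests why a sparse pattern of only two glitches per inference can suffice in the covert mode.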
