Preventing DNN Model IP Theft via Hardware Obfuscation
IEEE Journal on Emerging and Selected Topics in Circuits and Systems (IF 4.6), Pub Date: 2021-04-28, DOI: 10.1109/jetcas.2021.3076151
Brunno F. Goldstein, Vinay C. Patil, Victor C. Ferreira, Alexandre S. Nery, Felipe M. G. Franca, Sandip Kundu

Training accurate deep learning (DL) models requires large amounts of training data, significant effort in labeling that data, considerable computing resources, and substantial domain expertise. In short, these models are expensive to develop. Hence, protecting them, as valuable stores of intellectual property (IP), against model stealing/cloning attacks is of paramount importance. Today’s mobile processors feature Neural Processing Units (NPUs) to accelerate the execution of DL models. DL models executing on NPUs are vulnerable to hyperparameter extraction via side-channel attacks and to model parameter theft via bus-monitoring attacks. This paper presents a novel solution to defend against DL IP theft in NPUs during model distribution and deployment/execution via a lightweight, keyed model obfuscation scheme. Unauthorized use of such models results in inaccurate classification. In addition, we present an ideal end-to-end deep learning trusted system composed of: 1) model distribution via a hardware root of trust and a public-key infrastructure (PKI) and 2) model execution via low-latency memory encryption. We demonstrate that our proposed obfuscation solution achieves its IP protection objectives without requiring specialized training or sacrificing the model’s accuracy. In addition, the proposed obfuscation mechanism preserves the output class distribution while degrading the model’s accuracy for unauthorized parties, concealing any evidence of a hacked model.
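To make the idea concrete, below is a minimal sketch of one way a keyed weight obfuscation could work: each layer’s weights are shuffled by a pseudorandom permutation derived from a secret key. The abstract does not disclose the paper’s actual transform, so the per-layer permutation, the SHA-256 key derivation, and all function names here are illustrative assumptions; the sketch only mirrors the stated properties — the correct key restores the weights exactly (no retraining, no accuracy loss), while any other key leaves a model that still emits a plausible class distribution but classifies inaccurately.

```python
# Illustrative sketch only: the paper's real obfuscation function is not
# specified in this abstract. A secret key drives a per-layer pseudorandom
# permutation of the weights; only the key holder can invert it.
import hashlib
import numpy as np

def _layer_rng(key: bytes, layer_name: str) -> np.random.Generator:
    # Derive a per-layer seed from the key so every layer gets its own permutation.
    digest = hashlib.sha256(key + layer_name.encode()).digest()
    return np.random.default_rng(int.from_bytes(digest[:8], "little"))

def obfuscate(weights: dict, key: bytes) -> dict:
    # Shuffle each layer's flattened weights with a key-derived permutation.
    out = {}
    for name, w in weights.items():
        perm = _layer_rng(key, name).permutation(w.size)
        out[name] = w.ravel()[perm].reshape(w.shape)
    return out

def deobfuscate(weights: dict, key: bytes) -> dict:
    # Rebuild the same permutation from the key and apply its inverse.
    out = {}
    for name, w in weights.items():
        perm = _layer_rng(key, name).permutation(w.size)
        inv = np.empty_like(perm)
        inv[perm] = np.arange(w.size)  # invert the permutation
        out[name] = w.ravel()[inv].reshape(w.shape)
    return out

# Round-trip with the correct key restores the weights bit-exactly;
# a wrong key yields a structurally valid but inaccurate model.
key = b"device-unique-secret"  # hypothetical key provisioned per device
W = {"conv1": np.random.randn(3, 3, 16).astype(np.float32)}
assert np.array_equal(deobfuscate(obfuscate(W, key), key)["conv1"], W["conv1"])
```

In a deployment like the one the abstract describes, the key would be provisioned to the NPU through the hardware root of trust and PKI, and the deobfuscated weights would be confined to encrypted memory during execution.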

Updated: 2021-06-15