Enabling Secure NVM-based in-Memory Neural Network Computing by Sparse Fast Gradient Encryption
IEEE Transactions on Computers (IF 3.7) | Pub Date: 2020-11-01 | DOI: 10.1109/tc.2020.3017870
Yi Cai , Xiaoming Chen , Lu Tian , Yu Wang , Huazhong Yang

Neural network (NN) computing is energy-consuming on traditional computing systems, owing to the inherent memory-wall bottleneck of the von Neumann architecture and the approaching end of Moore's Law. Non-volatile memories (NVMs) have been demonstrated as promising alternatives for constructing computing-in-memory (CIM) systems to accelerate NN computing. However, NVM-based NN computing systems are vulnerable to confidentiality attacks because the weight parameters persist in memory when the system is powered off, enabling an adversary with physical access to extract the well-trained NN models. The goal of this article is to find a solution for thwarting such confidentiality attacks. We define and model the weight encryption problem. Then we propose an effective framework, containing a sparse fast gradient encryption (SFGE) method and a runtime encryption scheduling (RES) scheme, to guarantee the confidentiality of NN models with negligible performance overhead. Moreover, we improve the SFGE method by incrementally generating the encryption keys. Additionally, we provide variants of the encryption method to better fit quantized models and various mapping strategies. The experiments demonstrate that by encrypting only an extremely small proportion of the weights (e.g., 20 weights per layer in ResNet-101), the NN models can be strictly protected.
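To make the core idea concrete, below is a minimal, hypothetical sketch of sparse fast-gradient-style weight encryption, assuming a PyTorch setting: a handful of the most gradient-sensitive weights per layer are perturbed along the gradient sign (FGSM-style) so the stored model becomes unusable, and the (index, perturbation) pairs act as the secret key that restores the weights at runtime. The function names, the choice of k and epsilon, and the key handling are illustrative assumptions, not the authors' actual SFGE algorithm or RES scheduling scheme.

```python
import torch
import torch.nn as nn

def encrypt_layer(weight, grad, k=20, epsilon=0.5):
    """Perturb the k most gradient-sensitive weights along the gradient sign
    (FGSM-style) and return the (indices, perturbations) pair that serves as
    this layer's secret key."""
    flat_grad = grad.reshape(-1)
    idx = torch.topk(flat_grad.abs(), k).indices      # most sensitive positions
    delta = epsilon * flat_grad[idx].sign()           # fast-gradient step
    with torch.no_grad():
        weight.view(-1)[idx] += delta                 # in-place "encryption"
    return idx, delta

def decrypt_layer(weight, key):
    """Subtract the stored perturbations to restore the original weights exactly."""
    idx, delta = key
    with torch.no_grad():
        weight.view(-1)[idx] -= delta

# Toy usage: a small model and a loss whose gradients drive key selection.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
x, y = torch.randn(32, 8), torch.randint(0, 4, (32,))
nn.functional.cross_entropy(model(x), y).backward()   # populate .grad

weights = [p for p in model.parameters() if p.dim() == 2]
keys = [encrypt_layer(w, w.grad, k=5, epsilon=0.5) for w in weights]
# `keys` would live in secure storage; the NVM array holds only the perturbed
# weights, whose inference accuracy should collapse to near-random.
for w, key in zip(weights, keys):
    decrypt_layer(w, key)                              # restore before legitimate use
```

The sketch only illustrates why encrypting a few highly sensitive weights can suffice: the perturbation direction is chosen to maximize loss, so even a sparse key renders the stored model useless without decryption.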

Updated: 2020-11-01