Guessing Outputs of Dynamically Pruned CNNs Using Memory Access Patterns
IEEE Computer Architecture Letters (IF 2.3), Pub Date: 2021-08-04, DOI: 10.1109/lca.2021.3101505
Benjamin Wu , Trishita Tiwari , G. Edward Suh , Aaron B. Wagner

Dynamic activation pruning of convolutional neural networks (CNNs) is a class of techniques that reduce both runtime and memory usage in CNN implementations by skipping unnecessary or low-impact computations in convolutional layers. However, since dynamic pruning produces a different sequence of memory accesses depending on the input to the CNN, it potentially opens the door to inference-phase side-channel attacks that may leak private data with each input. We demonstrate a memory-based attack that infers a dynamically pruned CNN’s outputs, evaluated across several victim CNN models and datasets. We find that an attacker can train their own machine learning model to guess victim image classifications from the victim’s memory access patterns with accuracy significantly better than random chance. Moreover, unlike previous related work, our attack: 1) continually leaks user data for each input, and 2) does not require adversarial presence during victim training.
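The mechanism behind the attack can be illustrated with a toy sketch (not the paper's implementation): dynamic pruning drops activations below some threshold, so the set of memory locations the layer touches becomes input-dependent, and an attacker who observes only that access pattern can train a classifier on it. The threshold, feature-map size, synthetic data generator, and nearest-centroid attacker model below are all illustrative assumptions.

```python
# Toy illustration of the attack idea: dynamic activation pruning makes the
# set of memory accesses depend on the input, so access patterns alone can
# reveal the input's class. All parameters here are hypothetical.
import random

THRESHOLD = 0.5  # hypothetical pruning threshold


def pruned_layer_accesses(activations):
    """Indices of activations that survive pruning -- a stand-in for the
    memory locations a dynamically pruned conv layer would actually touch."""
    return {i for i, a in enumerate(activations) if a >= THRESHOLD}


def make_sample(label, rng):
    """Synthetic 'image': class 0 strongly activates the low half of a
    16-element feature map, class 1 the high half (purely illustrative)."""
    acts = [rng.uniform(0.0, 0.4) for _ in range(16)]
    hot = range(0, 8) if label == 0 else range(8, 16)
    for i in hot:
        acts[i] = rng.uniform(0.6, 1.0)
    return acts


rng = random.Random(0)

# "Victim" side: the attacker observes only the access-pattern sets.
train = [(pruned_layer_accesses(make_sample(lbl, rng)), lbl)
         for lbl in (0, 1) for _ in range(50)]


# Attacker model: nearest-centroid over binary access-pattern vectors.
def centroid(patterns):
    counts = [0.0] * 16
    for p in patterns:
        for i in p:
            counts[i] += 1
    return [c / len(patterns) for c in counts]


cent = {lbl: centroid([p for p, l in train if l == lbl]) for lbl in (0, 1)}


def guess(pattern):
    vec = [1.0 if i in pattern else 0.0 for i in range(16)]
    dist = lambda c: sum((v - cv) ** 2 for v, cv in zip(vec, c))
    return min((0, 1), key=lambda lbl: dist(cent[lbl]))


test = [(pruned_layer_accesses(make_sample(lbl, rng)), lbl)
        for lbl in (0, 1) for _ in range(50)]
accuracy = sum(guess(p) == l for p, l in test) / len(test)
print(f"attack accuracy on synthetic data: {accuracy:.2f}")
```

On this contrived, perfectly separable data the attacker recovers the class every time; the paper's point is that on real CNNs and datasets the same side channel still yields accuracy significantly above random chance.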

Updated: 2021-08-13