Mind control attack: Undermining deep learning with GPU memory exploitation
Computers & Security (IF 4.8) Pub Date: 2020-11-19, DOI: 10.1016/j.cose.2020.102115
Sang-Ok Park, Ohmin Kwon, Yonggon Kim, Sang Kil Cha, Hyunsoo Yoon

Modern deep learning frameworks rely heavily on GPUs to accelerate computation. However, the security implications of GPU device memory exploitation for deep learning frameworks have been largely neglected. In this paper, we argue that GPU device memory manipulation is a novel attack vector against deep learning systems. We present an attack that leverages this vector, degrading prediction accuracy until the model's outputs are no better than random guessing. To the best of our knowledge, we are the first to demonstrate a practical attack that directly exploits deep learning frameworks through GPU memory manipulation. We confirm that our attack works on three popular deep learning frameworks running on CUDA: TensorFlow, CNTK, and Caffe. Finally, we propose potential defense mechanisms against the attack and discuss broader concerns around GPU memory safety.
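The abstract does not describe the attack's mechanism, so the sketch below is only an illustration of the underlying premise: if an attacker can write to the GPU device memory holding a model's parameters, the model's predictions collapse toward random guessing. It assumes the attacker has already located a victim weight buffer in device memory (the difficult step the paper addresses, not reproduced here); the names corrupt_weights, victim_weights, and N are hypothetical, and the buffer is allocated in-process purely for demonstration.

// Illustrative CUDA sketch (compile with nvcc). Not the paper's attack:
// it only shows why overwriting a weight tensor in GPU device memory
// destroys the learned function it encodes.
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical kernel standing in for an attacker overwriting weights
// with arbitrary values; a real attack would write from outside the
// victim's process or framework.
__global__ void corrupt_weights(float *weights, int n, unsigned int seed) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        // Cheap hash-based perturbation in [-1, 1]; erases any learned structure.
        unsigned int h = (seed ^ (unsigned int)i) * 2654435761u;
        weights[i] = ((float)(h & 0xFFFF) / 65535.0f) * 2.0f - 1.0f;
    }
}

int main() {
    const int N = 1 << 16;              // placeholder size for one weight tensor
    float *victim_weights = nullptr;
    cudaMalloc(&victim_weights, N * sizeof(float));
    cudaMemset(victim_weights, 0, N * sizeof(float));  // placeholder for trained weights

    // "Attack": overwrite the tensor in device memory. Any forward pass
    // that subsequently reads these weights produces essentially random outputs.
    corrupt_weights<<<(N + 255) / 256, 256>>>(victim_weights, N, 0xdeadbeefu);
    cudaDeviceSynchronize();

    float sample[4];
    cudaMemcpy(sample, victim_weights, sizeof(sample), cudaMemcpyDeviceToHost);
    printf("corrupted weights: %f %f %f %f\n", sample[0], sample[1], sample[2], sample[3]);

    cudaFree(victim_weights);
    return 0;
}

Because the corruption happens entirely in device memory, the framework's host-side code and the model files on disk remain untouched, which is what makes this class of manipulation hard to notice from the CPU side.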



Updated: 2020-12-27