Rethinking Non-idealities in Memristive Crossbars for Adversarial Robustness in Neural Networks
arXiv - CS - Emerging Technologies Pub Date : 2020-08-25 , DOI: arxiv-2008.11298
Abhiroop Bhattacharjee and Priyadarshini Panda

\textit{Deep Neural Networks} (DNNs) have been shown to be vulnerable to adversarial attacks. With the growing need to enable intelligence on embedded devices in this \textit{Internet of Things} (IoT) era, secure hardware implementation of DNNs has become imperative. Memristive crossbars, which perform \textit{Matrix-Vector-Multiplications} (MVMs) efficiently, are used to realize DNNs in hardware. However, crossbar non-idealities have traditionally been viewed as a drawback, since the errors they introduce into MVMs degrade DNN accuracy. Several software-based adversarial defenses have been proposed in the past to make DNNs adversarially robust. However, no previous work has demonstrated any advantage conferred by the non-idealities present in analog crossbars in terms of adversarial robustness. In this work, we show that the intrinsic hardware variations manifested through crossbar non-idealities confer adversarial robustness on the mapped DNNs without any additional optimization. We evaluate the resilience of state-of-the-art DNNs (VGG8 \& VGG16 networks) on benchmark datasets (CIFAR-10 \& CIFAR-100) across various crossbar sizes against both hardware and software adversarial attacks. We find that crossbar non-idealities yield greater adversarial robustness ($>10-20\%$) in DNNs than baseline software DNNs. We further compare our approach against other state-of-the-art efficiency-driven adversarial defenses and find that it performs significantly well in reducing adversarial losses.
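The core idea above — that non-ideal crossbars perturb the effective weights during an MVM — can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's actual model: it assumes non-idealities can be approximated as multiplicative Gaussian noise on the programmed conductances, and the matrix size, noise level `sigma`, and function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

def ideal_mvm(weights, x):
    # Ideal matrix-vector multiplication: y = W x
    return weights @ x

def nonideal_mvm(weights, x, sigma=0.1):
    # Crossbar non-idealities (e.g., device-to-device conductance
    # variations) modeled here as multiplicative Gaussian noise on the
    # programmed weights -- a simplified stand-in for circuit-level
    # crossbar models, not the paper's actual simulation setup.
    noisy_weights = weights * (1.0 + sigma * rng.standard_normal(weights.shape))
    return noisy_weights @ x

W = rng.standard_normal((64, 64))  # hypothetical 64x64 crossbar
x = rng.standard_normal(64)        # input voltage vector

y_ideal = ideal_mvm(W, x)
y_noisy = nonideal_mvm(W, x)
rel_err = np.linalg.norm(y_noisy - y_ideal) / np.linalg.norm(y_ideal)
print(f"relative MVM error from non-idealities: {rel_err:.3f}")
```

The same perturbation that causes this error on clean inputs also distorts the gradients an attacker relies on, which is the intuition behind the robustness claim.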

Updated: 2020-08-27