Adversarial Perturbation Attacks on ML-based CAD
ACM Transactions on Design Automation of Electronic Systems (IF 2.2), Pub Date: 2020-08-21, DOI: 10.1145/3408288
Kang Liu, Haoyu Yang, Yuzhe Ma, Benjamin Tan, Bei Yu, Evangeline F. Y. Young, Ramesh Karri, Siddharth Garg

There is substantial interest in the use of machine learning (ML)-based techniques throughout the electronic computer-aided design (CAD) flow, particularly those based on deep learning. However, while deep learning methods have surpassed state-of-the-art performance in several applications, they have exhibited intrinsic susceptibility to adversarial perturbations—small but deliberate alterations to the input of a neural network, precipitating incorrect predictions. In this article, we seek to investigate whether adversarial perturbations pose risks to ML-based CAD tools, and if so, how these risks can be mitigated. To this end, we use a motivating case study of lithographic hotspot detection, for which convolutional neural networks (CNN) have shown great promise. In this context, we show the first adversarial perturbation attacks on state-of-the-art CNN-based hotspot detectors; specifically, we show that small (on average 0.5% modified area), functionality preserving, and design-constraint-satisfying changes to a layout can nonetheless trick a CNN-based hotspot detector into predicting the modified layout as hotspot free (with up to 99.7% success in finding perturbations that flip a detector’s output prediction, based on a given set of attack constraints). We propose an adversarial retraining strategy to improve the robustness of CNN-based hotspot detection and show that this strategy significantly improves robustness (by a factor of ~3) against adversarial attacks without compromising classification accuracy.
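To make the attack idea concrete, below is a minimal, hypothetical sketch of a greedy perturbation search in Python; it is not the paper's actual algorithm. The `model` callable (standing in for a trained CNN hotspot detector), the binary `clip` array, and the `candidate_sites` list (standing in for design-rule-satisfying, functionality-preserving layout edits) are all assumptions introduced for illustration.

```python
import numpy as np

def greedy_perturbation_attack(model, clip, candidate_sites, max_edits=20):
    """Greedily toggle allowed layout sites until the detector's prediction
    flips from 'hotspot' to 'non-hotspot' or the edit budget runs out.

    model           -- callable mapping a 2D binary array to P(hotspot) in [0, 1]
    clip            -- 2D numpy array of 0/1 values encoding a layout clip
    candidate_sites -- (row, col) positions that may be modified; a stand-in
                       for design-rule-satisfying, functionality-preserving edits
    """
    perturbed = clip.copy()
    edits = []
    for _ in range(max_edits):
        score = model(perturbed)
        if score < 0.5:                      # detector already reports non-hotspot
            return perturbed, edits, True
        best_site, best_score = None, score
        for site in candidate_sites:
            trial = perturbed.copy()
            trial[site] = 1 - trial[site]    # toggle one allowed site
            s = model(trial)
            if s < best_score:               # edit that most lowers the hotspot score
                best_site, best_score = site, s
        if best_site is None:                # no single edit helps; give up
            break
        perturbed[best_site] = 1 - perturbed[best_site]
        edits.append(best_site)
    return perturbed, edits, bool(model(perturbed) < 0.5)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    weights = rng.normal(size=(8, 8))        # toy stand-in for a trained detector

    def toy_detector(clip):
        # logistic score over a linear filter; NOT a real hotspot detector
        return 1.0 / (1.0 + np.exp(-(weights * clip).sum()))

    clip = (rng.random((8, 8)) > 0.5).astype(float)
    sites = [(r, c) for r in range(8) for c in range(8)]
    adv, edits, flipped = greedy_perturbation_attack(toy_detector, clip, sites)
    print(f"prediction flipped: {flipped} with {len(edits)} edits")
```

The retraining defense can be sketched in the same hedged spirit: adversarial variants of true hotspot clips that fool the current model are folded back into the training set with their ground-truth label, and the model is refit. The `attack_fn` and `fit_fn` hooks below are hypothetical placeholders (for example, the search above wrapped with its constraint set, and whatever training routine the detector uses); the paper's actual procedure may differ in how examples are generated and weighted.

```python
def adversarial_retraining(model, clips, labels, attack_fn, fit_fn, rounds=1):
    """Augment the training set with adversarial examples and refit the model.

    model     -- current detector, a callable returning P(hotspot)
    clips     -- list of 2D numpy arrays (layout clips)
    labels    -- list of 0/1 ground-truth labels (1 = hotspot)
    attack_fn -- attack_fn(model, clip) -> (adv_clip, edits, flipped)
    fit_fn    -- fit_fn(clips, labels) -> retrained model
    """
    for _ in range(rounds):
        aug_clips, aug_labels = list(clips), list(labels)
        for clip, label in zip(clips, labels):
            if label != 1:
                continue                       # only true hotspots can be "hidden"
            adv, _, flipped = attack_fn(model, clip)
            if flipped:                        # model was fooled: add the hard example
                aug_clips.append(adv)
                aug_labels.append(1)           # keep the ground-truth label
        model = fit_fn(aug_clips, aug_labels)  # retrain on the augmented data
    return model
```

For example, `attack_fn` could be `lambda m, c: greedy_perturbation_attack(m, c, sites)` with a fixed constraint set `sites`.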

Updated: 2020-08-21