Generate Usable Adversarial Examples via Simulating Additional Light Sources
Neural Processing Letters (IF 2.6), Pub Date: 2022-09-19, DOI: 10.1007/s11063-022-11024-z
Chen Xi, Guo Wei, Zhang Fan, Du Jiayu

Deep neural networks have been shown to be critically vulnerable to adversarial attacks. This has led to a proliferation of methods that generate adversarial examples from different perspectives, and the examples they produce are rapidly becoming harder to perceive, faster to generate, and more effective at attacking. Inspired by the cyberspace attack process, this paper analyzes adversarial attacks from the perspective of the attack path and finds that meaningless noise perturbations make these adversarial examples efficient but difficult for an attacker to apply in practice. This paper instead generates adversarial examples from the original realistic features of the image: the deep convolutional network is deceived by simulating the addition of tiny light sources that produce subtle feature effects on the image. The generated adversarial perturbations are no longer meaningless noise, which makes the approach a theoretically promising avenue for applications. Experiments demonstrate that the generated adversarial examples still achieve good attack results against deep convolutional networks and can be applied to black-box attacks.
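The abstract does not specify the authors' light-source model, but the idea lends itself to a short sketch. The following is a minimal illustration, not the paper's implementation: a hypothetical Gaussian "light spot" is added to an image, and a black-box random search over its position, size, and intensity looks for parameters that flip a pretrained classifier's prediction. The spot model, the parameter ranges, and the use of torchvision's ResNet-50 as the target network are all assumptions made for illustration.

# Sketch only: Gaussian light-spot model and random search are assumptions,
# not the method published in the paper.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

def add_light_source(img, cx, cy, radius, intensity):
    """Add a soft Gaussian 'light spot' to an HxWx3 float image in [0, 1]."""
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    spot = intensity * np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * radius ** 2))
    return np.clip(img + spot[..., None], 0.0, 1.0)

# Target classifier; queried as a black box (outputs only, no gradients).
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
preprocess = T.Compose([T.ToTensor(),
                        T.Normalize(mean=[0.485, 0.456, 0.406],
                                    std=[0.229, 0.224, 0.225])])

def predict(img):
    x = preprocess(Image.fromarray((img * 255).astype(np.uint8))).unsqueeze(0)
    with torch.no_grad():
        return model(x).argmax(1).item()

img = np.asarray(Image.open("input.jpg").convert("RGB").resize((224, 224))) / 255.0
orig_label = predict(img)

# Random search over light-source parameters, keeping the light subtle so the
# perturbation stays a plausible physical feature rather than meaningless noise.
rng = np.random.default_rng(0)
for _ in range(500):
    cx, cy = rng.uniform(0, 224, size=2)
    radius = rng.uniform(5, 40)
    intensity = rng.uniform(0.05, 0.3)
    adv = add_light_source(img, cx, cy, radius, intensity)
    if predict(adv) != orig_label:
        print("Adversarial light source found:", cx, cy, radius, intensity)
        break

Because the search only queries the model's predictions, the same loop works in the black-box setting the abstract mentions; a gradient-based variant could optimize the spot parameters directly when the target model is accessible.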



Updated: 2022-09-20