Perceptual quality-preserving black-box attack against deep learning image classifiers
Pattern Recognition Letters (IF 5.1), Pub Date: 2021-04-20, DOI: 10.1016/j.patrec.2021.03.033
Diego Gragnaniello, Francesco Marra, Luisa Verdoliva, Giovanni Poggi

Deep neural networks provide unprecedented performance in all image classification problems, including biometric recognition systems, key elements in all smart city environments. Recent studies, however, have shown their vulnerability to adversarial attacks, spawning intense research in this field. To improve system security, new countermeasures and stronger attacks are proposed every day. On the attacker’s side, there is growing interest in the realistic black-box scenario, in which the attacker has no access to the network parameters. The problem is to design efficient attacks that mislead the neural network without compromising image quality. In this work, we propose to perform the black-box attack along a high-saliency, low-distortion path, so as to improve both attack efficiency and the perceptual quality of the image. Experiments on real-world systems prove the effectiveness of the proposed approach on both benchmark tasks and actual biometric applications.
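To make the idea of a "high-saliency and low-distortion path" concrete, below is a minimal, self-contained sketch of a score-based black-box attack in that spirit: random SimBA-style probes are masked by a saliency map (so the perturbation concentrates where it is least visible) and clipped to a small L-infinity budget. The toy linear classifier, the gradient-magnitude saliency proxy, and all parameter values are illustrative assumptions, not the authors' actual algorithm.

```python
# Sketch of a saliency-constrained, query-only (black-box) attack.
# Assumptions: classify_scores is a toy stand-in for the target model,
# saliency_map is a cheap edge-based proxy for a real saliency estimator.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((10, 32 * 32 * 3))        # stand-in linear "classifier"

def classify_scores(image):
    """Black-box oracle: returns class scores only, no gradients."""
    return W @ image.ravel()

def saliency_map(image):
    """Cheap saliency proxy: local gradient magnitude, normalized to [0, 1].
    Distortion hidden in visually busy regions is less noticeable."""
    g = np.abs(np.diff(image, axis=0, prepend=image[:1]))
    g += np.abs(np.diff(image, axis=1, prepend=image[:, :1]))
    return g / (g.max() + 1e-12)

def black_box_attack(image, true_label, eps=8 / 255, step=2 / 255, max_queries=2000):
    mask = saliency_map(image)                     # confine changes to salient pixels
    adv = image.copy()
    best = classify_scores(adv)[true_label]
    for _ in range(max_queries):
        delta = step * np.sign(rng.standard_normal(image.shape)) * mask
        for cand in (adv + delta, adv - delta):    # SimBA-style +/- probe
            cand = np.clip(cand, image - eps, image + eps)  # distortion budget
            cand = np.clip(cand, 0.0, 1.0)
            s = classify_scores(cand)
            if s[true_label] < best:               # keep probe if it lowers the
                adv, best = cand, s[true_label]    # true-class score
                break
        if np.argmax(classify_scores(adv)) != true_label:
            break                                  # misclassified: attack succeeded
    return adv

x = rng.random((32, 32, 3))
x_adv = black_box_attack(x, true_label=int(np.argmax(classify_scores(x))))
print("max perturbation:", np.abs(x_adv - x).max())
```

The saliency mask is what ties the two goals together: it steers queries toward pixels where distortion is both effective and hard to perceive, so the same budget yields fewer wasted queries and a visually cleaner adversarial image.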



Updated: 2021-05-06