Black-box adversarial attacks using Evolution Strategies
arXiv - CS - Neural and Evolutionary Computing. Pub Date: 2021-04-30, DOI: arXiv-2104.15064
Hao Qiu, Leonardo Lucio Custode, Giovanni Iacca

In the last decade, deep neural networks have proven to be very powerful in computer vision tasks, starting a revolution in the computer vision and machine learning fields. However, deep neural networks are usually not robust to perturbations of the input data. In fact, several studies showed that slightly changing the content of an image can cause a dramatic decrease in the accuracy of the attacked neural network. Many methods for generating adversarial samples rely on gradients, which are usually not available to an attacker in real-world scenarios. In contrast to this class of attacks, another class of adversarial attacks, called black-box adversarial attacks, has emerged; these do not use gradient information and are therefore better suited to real-world attack scenarios. In this work, we compare three well-known evolution strategies on the generation of black-box adversarial attacks for image classification tasks. While our results show that the attacked neural networks can, in most cases, be easily fooled by all the algorithms under comparison, they also show that some black-box optimization algorithms may be better in "harder" setups, both in terms of attack success rate and efficiency (i.e., number of queries).
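To illustrate how a query-based attack of this kind works in general, the sketch below runs a simple (1+1)-Evolution Strategy that perturbs an image using only the classifier's output probabilities, with no gradient access. This is a minimal illustration, not the authors' setup: the paper compares three well-known ES variants, whereas here the (1+1)-ES, the 1/5th-rule-style step-size adaptation, the hyperparameters, and the toy linear `predict_fn` are all illustrative assumptions.

```python
# Minimal sketch of an untargeted, query-only black-box attack driven by a
# (1+1)-Evolution Strategy. Illustrative only: the paper compares three
# well-known ES variants; this simplified loop just shows the general idea.
import numpy as np

def one_plus_one_es_attack(predict_fn, image, true_label,
                           eps=0.1, sigma=0.05, max_queries=1000, seed=None):
    """Perturb `image` (values in [0, 1]) so the classifier no longer predicts
    `true_label`, using only model queries (no gradients)."""
    rng = np.random.default_rng(seed)
    delta = np.zeros_like(image)                 # current (parent) perturbation
    best_loss = predict_fn(image)[true_label]    # probability we try to minimize
    queries = 1
    while queries < max_queries:
        # Offspring: Gaussian mutation, kept inside an L_inf ball of radius eps.
        cand = np.clip(delta + sigma * rng.standard_normal(image.shape), -eps, eps)
        adv = np.clip(image + cand, 0.0, 1.0)
        probs = predict_fn(adv)
        queries += 1
        if probs.argmax() != true_label:         # misclassified: attack succeeded
            return adv, queries, True
        if probs[true_label] < best_loss:        # accept improving offspring
            delta, best_loss = cand, probs[true_label]
            sigma *= 1.1                         # 1/5th-rule-style adaptation
        else:
            sigma *= 0.97
    return np.clip(image + delta, 0.0, 1.0), queries, False

if __name__ == "__main__":
    # Toy stand-in classifier: softmax over a fixed random linear map, 10 classes.
    rng = np.random.default_rng(0)
    W = rng.standard_normal((10, 32 * 32 * 3))

    def predict_fn(x):
        z = W @ x.ravel()
        e = np.exp(z - z.max())
        return e / e.sum()

    x = rng.random((32, 32, 3))                  # a fake 32x32 RGB "image"
    y = int(predict_fn(x).argmax())
    adv, n_queries, ok = one_plus_one_es_attack(predict_fn, x, y, seed=1)
    print(f"success={ok}, queries={n_queries}, "
          f"new label={int(predict_fn(adv).argmax())}")
```

The only interface the attack needs is a function returning class probabilities for a candidate image, which is what makes the approach black-box; swapping in a real model and a different ES (e.g., a population-based variant) changes only the inner loop.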

Updated: 2021-05-03