Black-box Attack against Handwritten Signature Verification with Region-restricted Adversarial Perturbations
Pattern Recognition (IF 7.5), Pub Date: 2021-03-01, DOI: 10.1016/j.patcog.2020.107689
Haoyang Li, Heng Li, Hansong Zhang, Wei Yuan

Abstract Handwritten signature verification verifies the identity of individuals by recognizing their signatures. Adversarial examples can induce misclassification and therefore pose a severe threat to signature verification. A variety of adversarial example attacks have been developed for image classification, but they are of limited use against signature verification for two main reasons. First, adversarial perturbations are likely to fall on the background of signature images, making them perceptible to human eyes. Second, perfect knowledge of the signature verification system is rarely available to attackers. How to generate effective yet stealthy signature adversarial examples therefore remains an open problem. To shed light on this challenging problem, we propose the first black-box adversarial example attack against handwritten signature verification. Our method relies on two key designs. First, its perturbations are intentionally restricted to the foreground (i.e., the strokes) of signature images, which reduces the risk of being noticed by humans. Second, a gradient-free method achieves the desired perturbations by iteratively updating their positions and optimizing their intensity. Extensive experiments confirm three advantages of our method. First, the adversarial perturbations it generates are almost invisible, whereas those produced by existing methods are clearly noticeable. Second, it defeats the state-of-the-art signature verification method with a success rate of 92.1%. Last, it circumvents the background-cleaning defense, even though this defense disables almost all existing adversarial example attacks on signature verification.
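To make the two key designs concrete, the sketch below illustrates one plausible way to combine them: a binary stroke mask keeps perturbations on the signature foreground, and a simple query-based random search (a generic gradient-free strategy, standing in for the paper's unspecified optimizer) iteratively re-positions perturbed pixels and adjusts their intensity. The `score_fn` verifier interface, the ink threshold, and all parameter values are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch only: a query-based, gradient-free attack that keeps
# perturbations on signature strokes. `score_fn` is a hypothetical black-box
# verifier returning a similarity score (lower = more likely to be rejected
# or matched to the attacker's target, depending on the attack goal).
import numpy as np

def stroke_mask(img, ink_threshold=128):
    """Foreground mask: dark (ink) pixels of a grayscale signature image."""
    return img < ink_threshold

def region_restricted_attack(img, score_fn, n_iters=500, n_pixels=20,
                             max_intensity=40, seed=None):
    """Random search: each step perturbs a few stroke pixels' intensities and
    keeps the change only if the verifier's score decreases."""
    rng = np.random.default_rng(seed)
    stroke_idx = np.argwhere(stroke_mask(img))   # candidate (row, col) positions
    adv = img.astype(np.float64).copy()
    best_score = score_fn(adv)

    for _ in range(n_iters):
        candidate = adv.copy()
        # Pick a handful of stroke pixels and perturb their intensity.
        picks = stroke_idx[rng.choice(len(stroke_idx), size=n_pixels, replace=False)]
        deltas = rng.uniform(-max_intensity, max_intensity, size=n_pixels)
        candidate[picks[:, 0], picks[:, 1]] = np.clip(
            candidate[picks[:, 0], picks[:, 1]] + deltas, 0, 255)
        score = score_fn(candidate)
        if score < best_score:                   # accept only improving moves
            adv, best_score = candidate, score
    return adv.astype(np.uint8), best_score
```

Because every change is confined to pixels already covered by ink, the perturbation rides on the strokes rather than the clean background, which is why a background-cleaning defense would not remove it.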

Updated: 2021-03-01