Adversarial Examples for Hamming Space Search
IEEE Transactions on Cybernetics (IF 9.4) Pub Date: 12-11-2018, DOI: 10.1109/tcyb.2018.2882908
Erkun Yang , Tongliang Liu , Cheng Deng , Dacheng Tao

Due to its strong representation learning ability and its facilitation of joint learning for representations and hash codes, deep learning-to-hash has achieved promising results and is becoming increasingly popular for large-scale approximate nearest neighbor search. However, recent studies highlight the vulnerability of deep image classifiers to adversarial examples, which also raises serious security concerns for deep retrieval systems. Accordingly, to study the robustness of modern deep hashing models to adversarial perturbations, we propose hash adversary generation (HAG), a novel method of crafting adversarial examples for Hamming space search. The main goal of HAG is to generate imperceptibly perturbed examples as queries whose nearest neighbors, as retrieved by a targeted hashing model, are semantically irrelevant to the original queries. Extensive experiments show that HAG can successfully craft adversarial examples with small perturbations that mislead targeted hashing models. The transferability of these perturbations under a variety of settings is also verified. Moreover, by combining heterogeneous perturbations, we further provide a simple yet effective method of constructing adversarial examples for black-box attacks.
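To make the attack objective concrete, the following sketch illustrates the core idea behind crafting such a query: push the (relaxed) hash code of a perturbed input as far as possible in Hamming distance from its original code, while keeping the perturbation within a small L-infinity budget. This is a minimal toy illustration, not the paper's HAG algorithm: the linear hashing model, the tanh surrogate for the sign function, and all hyperparameters below are assumptions chosen for readability.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear hashing model standing in for a deep network (hypothetical):
# a 32-dim input is mapped to a 16-bit binary code via sign(W @ x).
W = rng.normal(size=(16, 32))

def hash_code(x):
    return np.sign(W @ x)

def hamming_attack(x, eps=0.15, steps=100, lr=0.02):
    """PGD-style sketch: minimize agreement between the original code and
    the tanh-relaxed code of the perturbed input, under an L-inf budget.

    The actual HAG objective and optimization are more elaborate; this only
    demonstrates the goal of maximizing Hamming distance imperceptibly.
    """
    target = hash_code(x)                  # binary code to move away from
    delta = np.zeros_like(x)
    for _ in range(steps):
        z = np.tanh(W @ (x + delta))       # differentiable surrogate for sign()
        # loss = target . z ; gradient of the loss w.r.t. delta:
        grad = W.T @ (target * (1.0 - z ** 2))
        delta -= lr * np.sign(grad)        # descend to reduce code agreement
        delta = np.clip(delta, -eps, eps)  # keep the perturbation small
    return x + delta

x = rng.normal(size=32)
x_adv = hamming_attack(x)
flipped = int(np.sum(hash_code(x) != hash_code(x_adv)))  # bits flipped
```

In a real deep hashing attack the gradient would come from backpropagation through the network rather than the closed-form expression used here, but the structure (relaxed codes, a distance-based loss, and a clipped perturbation budget) is the same.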

Updated: 2024-08-22