Threat of Adversarial Attacks on Deep Learning in Computer Vision: Survey II
arXiv - CS - Cryptography and Security. Pub Date: 2021-08-01, DOI: arxiv-2108.00401
Naveed Akhtar, Ajmal Mian, Navid Kardan, Mubarak Shah

Deep Learning (DL) is the most widely used tool in the contemporary field of computer vision. Its ability to accurately solve complex problems is employed in vision research to learn deep neural models for a variety of tasks, including security-critical applications. However, it is now known that DL is vulnerable to adversarial attacks that can manipulate its predictions by introducing visually imperceptible perturbations in images and videos. Since the discovery of this phenomenon in 2013 [1], it has attracted significant attention from researchers in multiple sub-fields of machine intelligence. In [2], we reviewed the contributions made by the computer vision community to adversarial attacks on deep learning (and their defenses) up to the beginning of 2018. Many of those contributions have inspired new directions in this area, which has matured significantly since the first-generation methods. Hence, as a sequel to [2], this literature review focuses on the advances in this area since 2018. To ensure authenticity, we mainly consider peer-reviewed contributions published in the reputable venues of computer vision and machine learning research. Besides a comprehensive literature review, the article also provides concise definitions of technical terminologies for non-experts in this domain. Finally, this article discusses the challenges and future outlook of this direction based on the literature reviewed herein and in [2].
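The perturbation mechanism the abstract refers to can be illustrated with the classic Fast Gradient Sign Method (FGSM), one of the first-generation attacks covered in [2]. The sketch below is illustrative only, not a method from this survey; the PyTorch model, random stand-in input, and epsilon value are all assumptions.

    # Minimal FGSM sketch: nudge the input in the direction that
    # increases the model's loss, with the step bounded by epsilon.
    import torch
    import torch.nn.functional as F
    import torchvision.models as models

    def fgsm_attack(model, image, label, epsilon=8 / 255):
        """Return a visually similar image that raises the loss on `label`."""
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # Sign of the gradient gives the max-loss direction under an
        # L-infinity budget of epsilon per pixel.
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0, 1).detach()

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
    x = torch.rand(1, 3, 224, 224)      # stand-in for a real input image
    y = model(x).argmax(dim=1)          # model's original prediction
    x_adv = fgsm_attack(model, x, y)
    print("prediction changed:", bool(model(x_adv).argmax(dim=1) != y))

Whether a single FGSM step actually flips the prediction depends on the model and epsilon; the point is that the perturbation epsilon * grad.sign() stays small enough to be visually imperceptible while directly targeting the model's decision.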

Updated: 2021-08-03