
Adversarial examples: attacks and defenses in the physical world

  • Original Article
  • Published in: International Journal of Machine Learning and Cybernetics

Abstract

Deep learning has become an important branch of artificial intelligence. However, researchers have found that deep neural networks, the core algorithms behind deep learning, are vulnerable to adversarial examples: inputs to which small, carefully crafted perturbations have been added so that the model produces erroneous results with high confidence. Such examples pose serious security risks to deep-learning-based systems. Furthermore, adversarial examples exist not only in the digital world but also in the physical world. This paper presents a comprehensive overview of adversarial attacks and defenses in the physical world. First, we review work that successfully generates adversarial examples in the digital world and analyze the challenges these attacks face in real environments. Then, we compare and summarize work on adversarial examples for image classification, object detection, and speech recognition tasks. In addition, we summarize feasible defense strategies. Finally, building on the reviewed work, we propose potential research directions for the attack and defense of adversarial examples in the physical world.
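
As background, the best-known digital-world attack of the kind this survey reviews is the fast gradient sign method (FGSM) of Goodfellow et al., which perturbs an input in the direction of the sign of the loss gradient. The following is a minimal sketch assuming a PyTorch image classifier; the function name and the epsilon value are illustrative choices, not taken from the paper.

    import torch
    import torch.nn.functional as F

    def fgsm_example(model, x, y, epsilon=0.03):
        """Craft an adversarial example with FGSM (illustrative sketch).

        model:   a PyTorch classifier returning logits (assumed, not from the paper)
        x:       input image tensor with values in [0, 1], e.g. shape [1, 3, H, W]
        y:       ground-truth label tensor, e.g. shape [1]
        epsilon: maximum per-pixel perturbation magnitude (illustrative value)
        """
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)   # classification loss on the clean input
        loss.backward()                       # gradient of the loss w.r.t. the input pixels
        # Step each pixel by epsilon in the direction that increases the loss,
        # then clip back to the valid image range.
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

Physical-world attacks such as printed patches, stickers, or perturbed road signs must additionally survive printing, viewpoint, and lighting changes, which is why digital-world formulations like the one above are only a starting point for the attacks surveyed here.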



Acknowledgements

This work was supported by the National Natural Science Foundation of China (62002074).

Author information

Correspondence to Teng Huang or Hongyang Yan.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Ren, H., Huang, T. & Yan, H. Adversarial examples: attacks and defenses in the physical world. Int. J. Mach. Learn. & Cyber. 12, 3325–3336 (2021). https://doi.org/10.1007/s13042-020-01242-z

