
Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain

Published: 25 May 2021

Abstract

In recent years, machine learning algorithms, and more specifically deep learning algorithms, have been widely used in many fields, including cyber security. However, machine learning systems are vulnerable to adversarial attacks, and this limits the application of machine learning, especially in non-stationary, adversarial environments, such as the cyber security domain, where actual adversaries (e.g., malware developers) exist. This article comprehensively summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques and illuminates the risks they pose. First, the adversarial attack methods are characterized based on their stage of occurrence and the attacker's goals and capabilities. Then, we categorize the applications of adversarial attack and defense methods in the cyber security domain. Finally, we highlight some characteristics identified in recent research and discuss the impact of recent advancements in other adversarial learning domains on future research directions in the cyber security domain. To the best of our knowledge, this work is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain, map them in a unified taxonomy, and use the taxonomy to highlight future research directions.
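
To make the threat model concrete, below is a minimal sketch of the fast gradient sign method (FGSM), a canonical evasion attack in which the attacker perturbs an input along the sign of the loss gradient so that a trained classifier misclassifies it. The sketch is illustrative only and is not taken from the article; the model, input tensor, labels, and epsilon budget are hypothetical placeholders.

    # Illustrative FGSM evasion-attack sketch (assumed example, not from the article).
    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, x, y_true, epsilon=0.1):
        # Compute the classification loss and its gradient with respect to the input.
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y_true)
        loss.backward()
        # Take one epsilon-bounded step along the gradient sign; the perturbed
        # input is more likely to be misclassified by the trained model.
        return (x_adv + epsilon * x_adv.grad.sign()).detach()

In the cyber security domain, such a feature-space perturbation is only the first step: the perturbed feature vector must also be mapped back to a functional artifact (e.g., a working executable), which is part of the end-to-end challenge this article discusses.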




        Published in

          ACM Computing Surveys, Volume 54, Issue 5
          June 2022, 719 pages
          ISSN: 0360-0300
          EISSN: 1557-7341
          DOI: 10.1145/3467690

          Copyright © 2021 ACM

          Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

          Publisher

          Association for Computing Machinery

          New York, NY, United States

          Publication History

          • Published: 25 May 2021
          • Revised: 1 February 2021
          • Accepted: 1 February 2021
          • Received: 1 July 2020


          Qualifiers

          • research-article
          • Research
          • Refereed
