Abstract
In recent years, machine learning algorithms, and more specifically deep learning algorithms, have been widely used in many fields, including cyber security. However, machine learning systems are vulnerable to adversarial attacks, and this limits the application of machine learning, especially in non-stationary, adversarial environments, such as the cyber security domain, where actual adversaries (e.g., malware developers) exist. This article comprehensively summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques and illuminates the risks they pose. First, the adversarial attack methods are characterized based on their stage of occurrence and the attacker's goals and capabilities. Then, we categorize the applications of adversarial attack and defense methods in the cyber security domain. Finally, we highlight some characteristics identified in recent research and discuss the impact of recent advancements in other adversarial learning domains on future research directions in the cyber security domain. To the best of our knowledge, this work is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain, map them in a unified taxonomy, and use the taxonomy to highlight future research directions.
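To make the notion of an adversarial evasion attack concrete, the following minimal sketch (not taken from the article) applies a perturbation in the style of the fast gradient sign method of Goodfellow et al. to a toy linear "detector." Note that in the cyber security domain, unlike image classification, such feature-space perturbations must additionally be mapped back to a functional artifact (e.g., a working executable), which is the end-to-end challenge the abstract refers to. The detector weights and inputs below are invented for illustration.

```python
# Illustrative FGSM-style evasion attack on a toy linear classifier.
# This is a generic sketch, not the method of any specific cited paper.
import numpy as np

def predict_proba(w, b, x):
    """Sigmoid output of a linear classifier: P(malicious | x)."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm(w, b, x, y, eps):
    """One FGSM step: x' = x + eps * sign(dLoss/dx).
    For sigmoid + cross-entropy, the input gradient is (p - y) * w."""
    p = predict_proba(w, b, x)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

# Toy detector: flags inputs whose weighted feature sum is high.
w = np.ones(4)
b = -2.0
x = np.array([0.6, 0.6, 0.6, 0.6])     # classified "malicious" (p > 0.5)
x_adv = fgsm(w, b, x, y=1.0, eps=0.3)  # push toward the "benign" side

print(predict_proba(w, b, x) > 0.5)      # True  (detected)
print(predict_proba(w, b, x_adv) > 0.5)  # False (evades the toy model)
```

Because the gradient of the true-label loss points away from the correct decision, even this single signed step of size `eps` crosses the decision boundary; real attacks in the surveyed literature must additionally respect domain constraints (e.g., preserving malware functionality).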
Supplemental Material
Available for Download
Supplemental movie, appendix, image, and software files for "Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain"