
  • Perspective

Towards quantum enhanced adversarial robustness in machine learning

Abstract

Machine learning algorithms are powerful tools for data-driven tasks such as image classification and feature detection. However, their vulnerability to adversarial examples—input samples manipulated to fool the algorithm—remains a serious challenge. The integration of machine learning with quantum computing has the potential to yield tools offering not only better accuracy and computational efficiency, but also superior robustness against adversarial attacks. Indeed, recent work has employed quantum-mechanical phenomena to defend against adversarial attacks, spurring the rapid development of the field of quantum adversarial machine learning (QAML) and potentially yielding a new source of quantum advantage. Despite promising early results, there remain challenges in building robust real-world QAML tools. In this Perspective, we discuss recent progress in QAML and identify key challenges. We also suggest future research directions that could determine the route to practicality for QAML approaches as quantum computing hardware scales up and noise levels are reduced.
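
To make the notion of an adversarial example concrete, the sketch below — illustrative only, and not drawn from this Perspective — applies a fast gradient sign method (FGSM)-style perturbation to a toy linear classifier using plain NumPy. The fixed weights, the perturbation budget epsilon and the fgsm_attack helper are hypothetical choices made for the example.

```python
# Illustrative sketch only (not from the paper): an FGSM-style adversarial
# perturbation of a toy linear classifier, using plain NumPy.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "trained" binary classifier: logistic regression with fixed weights.
w = rng.normal(size=16)
b = 0.1

def predict_prob(x):
    """Probability assigned to class 1 for input x."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

def fgsm_attack(x, y, epsilon):
    """Fast gradient sign method: move x by epsilon in the direction that
    increases the cross-entropy loss, x_adv = x + epsilon * sign(dL/dx).
    For logistic regression the input gradient has the closed form (p - y) * w."""
    grad = (predict_prob(x) - y) * w
    return x + epsilon * np.sign(grad)

x = rng.normal(size=16)
y = 1 if predict_prob(x) > 0.5 else 0   # use the model's own label as the class to attack

x_adv = fgsm_attack(x, y, epsilon=0.3)
print("clean prediction:      ", round(float(predict_prob(x)), 3))
print("adversarial prediction:", round(float(predict_prob(x_adv)), 3))
```

Even for this trivial model, a small, sign-aligned perturbation can sharply change the predicted probability; this is the behaviour that the quantum classifiers and QAML defences discussed in this Perspective aim to detect or resist.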


Fig. 1: Adversarial ML.
Fig. 2: Attacking and defending quantum classifiers.
Fig. 3: Quantum adversarial ML framework.
Fig. 4: Fault-tolerant implementation.



Acknowledgements

We acknowledge useful discussions with S. Gicev and G. Mooney. M.T.W. acknowledges the support of the Australian Government Research Training Program Scholarship. S.M.E. is in part supported by Australian Research Council (ARC) Discovery Early Career Researcher Award (DECRA) DE220100680. We acknowledge the support from Australian Army Research through the Quantum Technology Challenge (QTC22). The computational resources were provided by the National Computational Infrastructure (NCI) and the Pawsey Supercomputing Centre through the National Computational Merit Allocation Scheme (NCMAS).

Author information

Contributions

M.U. conceived and supervised the project. M.T.W. and M.U. wrote the manuscript, with input from all authors.

Corresponding authors

Correspondence to Maxwell T. West or Muhammad Usman.

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Machine Intelligence thanks Dong-Ling Deng, Christa Zoufal and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

West, M. T., Tsang, S.-L., Low, J. S. et al. Towards quantum enhanced adversarial robustness in machine learning. Nat. Mach. Intell. 5, 581–589 (2023). https://doi.org/10.1038/s42256-023-00661-1
