Universal Adversarial Examples and Perturbations for Quantum Classifiers
National Science Review (IF 20.6) Pub Date: 2021-07-15, DOI: 10.1093/nsr/nwab130
Weiyuan Gong 1, Dong-Ling Deng 1,2

Quantum machine learning explores the interplay between machine learning and quantum physics, which may lead to unprecedented perspectives for both fields. In fact, recent works have shown strong evidence that quantum computers could outperform classical computers in solving certain notable machine learning tasks. Yet, quantum learning systems may also suffer from a vulnerability problem: adding a tiny, carefully crafted perturbation to the legitimate input data can cause the system to make incorrect predictions at a notably high confidence level. In this paper, we study the universality of adversarial examples and perturbations for quantum classifiers. Through concrete examples involving classifications of real-life images and quantum phases of matter, we show that there exist universal adversarial examples that can fool a set of different quantum classifiers. We prove that for a set of k classifiers, each receiving input data of n qubits, an $O(\frac{\ln k}{2^n})$ increase of the perturbation strength is enough to ensure a moderate universal adversarial risk. In addition, for a given quantum classifier we show that there exist universal adversarial perturbations, which can be added to different legitimate samples to turn them into adversarial examples for the classifier. Our results reveal the universality of adversarial attacks on quantum machine learning systems, which would be crucial for practical applications of both near-term and future quantum technologies to machine learning problems.
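To make the kind of attack described above concrete, the following is a minimal sketch, not taken from the paper, of an FGSM-style adversarial perturbation against a toy single-qubit variational classifier. The angle encoding, the "trained" parameter theta, the finite-difference gradient, and all names are illustrative assumptions; the paper's constructions target multi-qubit classifiers and universal (classifier- or sample-independent) perturbations.

# Minimal NumPy sketch (illustrative, not the paper's code): an FGSM-style
# perturbation of a classical input x fed to a toy single-qubit classifier.
import numpy as np

def ry(a):
    # Single-qubit rotation about the Y axis.
    return np.array([[np.cos(a / 2), -np.sin(a / 2)],
                     [np.sin(a / 2),  np.cos(a / 2)]])

Z = np.diag([1.0, -1.0])
theta = 0.3                      # assumed "trained" classifier parameter

def expect_z(x):
    # Angle-encode the datum x, apply the classifier unitary, return <Z>.
    psi = ry(theta) @ ry(x) @ np.array([1.0, 0.0])
    return float(psi @ Z @ psi)

x = 0.2                          # legitimate sample, classified by sign(<Z>)
label = np.sign(expect_z(x))

# FGSM-style step: move x so that label * <Z> decreases, i.e. toward misclassification.
eps = 0.05
grad = (expect_z(x + 1e-5) - expect_z(x - 1e-5)) / 2e-5   # finite-difference d<Z>/dx
x_adv = x - eps * label * np.sign(grad)

print("clean <Z> =", expect_z(x), " adversarial <Z> =", expect_z(x_adv))

In the universal setting studied in the paper, a single perturbation of this kind would instead be optimized jointly against a whole set of k classifiers, or added to many different legitimate samples, rather than tailored to one classifier and one input as in this sketch.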

Updated: 2021-07-15