Experimental quantum adversarial learning with programmable superconducting qubits
arXiv - CS - Machine Learning. Pub Date: 2022-04-04. DOI: arxiv-2204.01738. Authors: Wenhui Ren, Weikang Li, Shibo Xu, Ke Wang, Wenjie Jiang, Feitong Jin, Xuhao Zhu, Jiachen Chen, Zixuan Song, Pengfei Zhang, Hang Dong, Xu Zhang, Jinfeng Deng, Yu Gao, Chuanyu Zhang, Yaozu Wu, Bing Zhang, Qiujiang Guo, Hekang Li, Zhen Wang, Jacob Biamonte, Chao Song, Dong-Ling Deng, H. Wang
Quantum computing promises to enhance machine learning and artificial
intelligence. Different quantum algorithms have been proposed to improve a wide
spectrum of machine learning tasks. Yet, recent theoretical works show that,
similar to traditional classifiers based on deep classical neural networks,
quantum classifiers suffer from a vulnerability problem: adding tiny,
carefully crafted perturbations to legitimate data samples can mislead
them into incorrect predictions at a notably high confidence level. This
poses serious problems for future quantum machine learning applications in
safety- and security-critical scenarios. Here, we report the first experimental
demonstration of quantum adversarial learning with programmable superconducting
qubits. We train quantum classifiers, which are built upon variational quantum
circuits consisting of ten transmon qubits featuring average lifetimes of 150
$\mu$s, and average fidelities of simultaneous single- and two-qubit gates
above 99.94% and 99.4% respectively, with both real-life images (e.g., medical
magnetic resonance imaging scans) and quantum data. We demonstrate that these
well-trained classifiers (with testing accuracy up to 99%) can be practically
deceived by small adversarial perturbations, whereas an adversarial training
process would significantly enhance their robustness to such perturbations. Our
results reveal experimentally a crucial vulnerability aspect of quantum
learning systems under adversarial scenarios and demonstrate an effective
defense strategy against adversarial attacks, which provides a valuable guide
for quantum artificial intelligence applications on both near-term and future
quantum devices.
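The attack demonstrated in the experiment can be illustrated, in a classical analogue, by a gradient-sign (FGSM-style) perturbation. The sketch below is hypothetical: the linear logistic classifier, its weights, and the input sample are illustrative stand-ins, not the ten-qubit variational quantum circuits used in the paper. It shows the core mechanism — a perturbation bounded by a small epsilon, aligned with the sign of the loss gradient with respect to the input, flipping the classifier's prediction:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical linear classifier standing in for the trained model;
# the weights are illustrative, not taken from the experiment.
w = np.array([1.0, -2.0, 0.5])

def predict_proba(x):
    return sigmoid(w @ x)

# A legitimate sample, classified (by assumption, correctly) as class 1.
x = np.array([0.3, -0.1, 0.2])

# Gradient of the cross-entropy loss for label y = 1 with respect to
# the input: dL/dx = (p - y) * w for the logistic model.
p = predict_proba(x)
grad_x = (p - 1.0) * w

# FGSM-style attack: take a small step in the sign of the input gradient,
# so every component of the perturbation is bounded by eps.
eps = 0.2
x_adv = x + eps * np.sign(grad_x)

print(predict_proba(x) > 0.5)      # original prediction: class 1
print(predict_proba(x_adv) > 0.5)  # adversarial prediction: class 0
```

Adversarial training, the defense the paper demonstrates, would augment each training step with such perturbed samples so the classifier learns to keep its decision stable inside the eps-ball around each legitimate input.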
Updated: 2022-04-04