Defence against adversarial attacks using classical and quantum-enhanced Boltzmann machines
(This work is dedicated to the memory of Peter Wittek.)
Machine Learning: Science and Technology (IF 6.3). Pub Date: 2021-07-13. DOI: 10.1088/2632-2153/abf834
Aidan Kehoe, Peter Wittek, Yanbo Xue, Alejandro Pozas-Kerstjens

We provide a robust defence against adversarial attacks on discriminative algorithms. Neural networks are naturally vulnerable to small, tailored perturbations of the input data that lead to wrong predictions. In contrast, generative models attempt to learn the distribution underlying a dataset, making them inherently more robust to small perturbations. We use Boltzmann machines as attack-resistant classifiers, and compare them against standard state-of-the-art adversarial defences. We find that Boltzmann machines yield improvements ranging from 5% to 72% against attacks on the MNIST dataset. We furthermore complement the training with quantum-enhanced sampling from the D-Wave 2000Q annealer, finding results comparable with classical techniques and with marginal improvements in some cases. These results underline the relevance of probabilistic methods in constructing neural networks and highlight a novel scenario of practical relevance where quantum computers, even with limited hardware capabilities, could provide advantages over classical computers.
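
The central idea, using a generative model for discrimination, can be illustrated with a minimal sketch. The snippet below is an illustration under assumed hyperparameters, not the authors' implementation: it trains one Bernoulli restricted Boltzmann machine per MNIST class with scikit-learn and labels a sample by the class whose RBM assigns it the highest pseudo-likelihood score.

```python
# Minimal sketch: generative RBMs used as a classifier on MNIST.
# Hyperparameters and subsampling are illustrative assumptions.
import numpy as np
from sklearn.datasets import fetch_openml
from sklearn.neural_network import BernoulliRBM

X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
X = (X / 255.0 > 0.5).astype(np.float64)  # binarize pixels for Bernoulli units
y = y.astype(int)

# One RBM per digit class, trained only on that class's images.
rbms = {}
for c in range(10):
    rbm = BernoulliRBM(n_components=64, learning_rate=0.05,
                       n_iter=10, random_state=0)
    rbm.fit(X[y == c][:2000])  # subsample to keep the sketch fast
    rbms[c] = rbm

def classify(samples):
    # score_samples returns a pseudo-likelihood proxy; higher means the
    # sample looks more typical of the distribution that RBM has learned.
    scores = np.stack([rbms[c].score_samples(samples) for c in range(10)])
    return scores.argmax(axis=0)

print(classify(X[:5]), y[:5])
```

Because the decision depends on how well a sample fits each learned data distribution, rather than on a thin discriminative boundary, small input perturbations move the scores only slightly, which is the intuition behind the robustness claim.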



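The quantum-enhanced variant replaces the Gibbs-chain sampling in the negative phase of Boltzmann-machine training with samples drawn from a quantum annealer. Below is a hedged sketch of that sampling step, assuming the D-Wave Ocean SDK and a configured API token; the mapping of the machine's weights to Ising biases h and couplings J is a standard construction whose derivation is omitted here, and is not necessarily the paper's exact pipeline.

```python
# Hedged sketch: draw negative-phase samples from a D-Wave annealer instead
# of running a Gibbs chain. Requires dwave-ocean-sdk and an API token.
# h (linear biases) and J (pairwise couplings) are assumed to be derived
# from the Boltzmann machine's current weights.
from dwave.system import DWaveSampler, EmbeddingComposite

def annealer_samples(h, J, num_reads=100):
    # EmbeddingComposite maps the logical Ising problem onto the annealer's
    # physical qubit graph; each read yields one low-energy configuration.
    sampler = EmbeddingComposite(DWaveSampler())
    result = sampler.sample_ising(h, J, num_reads=num_reads)
    return result.record.sample  # spin configurations in {-1, +1}
```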

Updated: 2021-07-13