Quantum learning Boolean linear functions w.r.t. product distributions
Quantum Information Processing (IF 2.5) Pub Date: 2020-04-20, DOI: 10.1007/s11128-020-02661-1
Matthias C. Caro

The problem of learning Boolean linear functions from quantum examples w.r.t. the uniform distribution can be solved on a quantum computer using the Bernstein–Vazirani algorithm (Bernstein and Vazirani, in: Kosaraju (ed) Proceedings of the twenty-fifth annual ACM symposium on theory of computing, ACM, New York, 1993. https://doi.org/10.1145/167088.167097). A similar strategy can be applied in the case of noisy quantum training data, as was observed in Grilo et al. (Learning with errors is easy with quantum samples, 2017). However, extensions of these learning algorithms beyond the uniform distribution have not yet been studied. We employ the biased quantum Fourier transform introduced in Kanade et al. (Learning DNFs under product distributions via \(\mu\)-biased quantum Fourier sampling, 2018) to develop efficient quantum algorithms for learning Boolean linear functions on n bits from quantum examples w.r.t. a biased product distribution. Our first procedure is applicable to any bias except full bias and requires \(\mathcal{O}(\ln(n))\) quantum examples. The number of quantum examples used by our second algorithm is independent of n, but the strategy is applicable only for small bias. Moreover, we show that the second procedure is stable w.r.t. noisy training data and w.r.t. faulty quantum gates. This also enables us to solve a version of the learning problem in which the underlying distribution is not known in advance. Finally, we prove lower bounds on the classical and quantum sample complexities of the learning problem. Whereas classically, \(\varOmega(n)\) examples are necessary independently of the bias, we are able to establish a quantum sample complexity lower bound of \(\varOmega(\ln(n))\) only under an assumption of large bias. Nevertheless, this allows for a discussion of the performance of our suggested learning algorithms w.r.t. sample complexity. With our analysis, we contribute to a more quantitative understanding of the power and limitations of quantum training data for learning classical functions.
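For readers unfamiliar with the uniform-distribution baseline the abstract refers to, the following minimal NumPy sketch simulates the Bernstein–Vazirani algorithm (in its phase-oracle form) recovering the coefficient string s of the linear function f(x) = s·x mod 2 from a single quantum example. This illustrates only the uniform-distribution case, not the paper's biased-product-distribution algorithms; the function name and the statevector encoding are illustrative choices, not the paper's notation.

```python
import numpy as np

def bernstein_vazirani(s):
    """Recover the hidden string s of f(x) = s.x mod 2 from one run of
    the Bernstein-Vazirani circuit, simulated as a statevector."""
    n = len(s)
    N = 2 ** n
    # Step 1: uniform superposition over all n-bit inputs.
    state = np.full(N, 1.0 / np.sqrt(N))
    # Step 2: phase oracle, multiplying the amplitude of |x> by (-1)^(s.x).
    for x in range(N):
        parity = sum(s[i] & ((x >> i) & 1) for i in range(n)) % 2
        if parity:
            state[x] = -state[x]
    # Step 3: n-fold Hadamard (Walsh-Hadamard transform).
    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
    Hn = np.array([[1.0]])
    for _ in range(n):
        Hn = np.kron(Hn, H)
    state = Hn @ state
    # Step 4: the state is now exactly |s>, so measurement is deterministic.
    outcome = int(np.argmax(np.abs(state)))
    return [(outcome >> i) & 1 for i in range(n)]

# Example: the hidden string is recovered from a single quantum example.
assert bernstein_vazirani([1, 0, 1, 1]) == [1, 0, 1, 1]
```

The classical simulation above costs time exponential in n, but the quantum circuit it mimics uses only n + 1 qubits and a single oracle query; the paper's contribution is to generalize this one-query behavior to \(\mu\)-biased product distributions via the biased quantum Fourier transform.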
