Abstract
Generative modeling using samples drawn from a probability distribution constitutes a powerful approach to unsupervised machine learning. Quantum mechanical systems can produce probability distributions that exhibit quantum correlations, which are difficult to capture using classical models. We show theoretically that such quantum-inspired correlations provide a powerful resource for generative modeling. In particular, we provide an unconditional proof of separation in expressive power between a class of widely used generative models, known as Bayesian networks, and its minimal quantum-inspired extension. We show that this expressivity enhancement is associated with quantum nonlocality and quantum contextuality. Furthermore, we numerically test this separation on standard machine-learning data sets and show that it holds for practical problems. The possibility of quantum-inspired enhancement demonstrated in this work not only sheds light on the design of useful quantum machine-learning protocols but also provides inspiration to draw on ideas from quantum foundations to improve purely classical algorithms.
- Received 23 January 2021
- Revised 17 February 2022
- Accepted 17 March 2022
DOI:https://doi.org/10.1103/PhysRevX.12.021037
Published by the American Physical Society under the terms of the Creative Commons Attribution 4.0 International license. Further distribution of this work must maintain attribution to the author(s) and the published article’s title, journal citation, and DOI.
Popular Summary
Since the inception of machine learning, the field has been linked with that of physics; many of the most successful machine-learning models are inspired by physical systems. Recently, there has been growing interest in how quantum physics can enhance machine-learning algorithms, and several numerical simulations and proof-of-principle experiments have suggested this possibility. However, little is understood about the origin of any potential advantage. Here, we use concepts from foundational quantum research to directly show which salient features of quantum physics can give rise to better-performing machine-learning algorithms.
As a concrete example, we focus on a widely used class of classical machine-learning models known as Bayesian networks. These models are known to have a computationally equivalent formulation as Bayesian quantum circuits. We make a minimal extension of Bayesian quantum circuits by adding a single quantum operation that draws inspiration from the concept of "basis enhancement" in quantum measurement theory. We show that these "quantum-inspired" models, although only minimally extended, are more expressive than the original models in the sense of computational linguistics. We are able to directly link this expressivity enhancement to quantum nonlocality and contextuality, which are among the most fundamental and counterintuitive concepts of quantum theory. We demonstrate that these foundational quantum ideas not only explain the power of those models simulatable on classical computers but are also potential sources of advantage for models that can be run only on quantum computers. Furthermore, we show that the enhancements are present in real-world settings by numerically testing them on various standard machine-learning tasks.
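For readers unfamiliar with the classical baseline, a Bayesian network factorizes a joint distribution along a directed acyclic graph, and generative modeling amounts to sampling nodes in topological order. The following minimal sketch (not taken from the paper; the two-node network and its probability tables are illustrative assumptions) shows how such a model produces samples:

```python
import random

# Illustrative two-node Bayesian network A -> B with binary variables.
# The joint distribution factorizes as P(A, B) = P(A) * P(B | A),
# and sampling proceeds in the topological order of the graph: A, then B.

P_A = {0: 0.6, 1: 0.4}                       # prior P(A)
P_B_GIVEN_A = {0: {0: 0.9, 1: 0.1},          # conditional table P(B | A)
               1: {0: 0.2, 1: 0.8}}

def sample(rng=random):
    """Draw one joint sample (a, b) from the network."""
    a = 0 if rng.random() < P_A[0] else 1
    b = 0 if rng.random() < P_B_GIVEN_A[a][0] else 1
    return a, b

def joint_probability(a, b):
    """Exact probability of the outcome (a, b) under the factorization."""
    return P_A[a] * P_B_GIVEN_A[a][b]
```

The quantum-inspired extension studied in the paper replaces this classical circuit picture with a Bayesian quantum circuit plus one additional (basis-enhancing) operation; the sketch above only illustrates the classical model class being extended.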
This work opens a new research direction: drawing on ideas from quantum foundations to understand and design improved machine-learning algorithms for simulating quantum mechanical systems.