1 Introduction

As the challenge of scaling traditional transistor-based Central Processing Unit (CPU) technology continues to increase, experimental physicists and high-tech companies have begun to explore radically different computational technologies, such as quantum computers [14, 41, 62], quantum annealers [43, 45] and coherent Ising machines [40, 47, 59]. The goal of all of these technologies is to leverage the dynamical evolution of a physical system to perform a computation that is challenging to emulate using traditional CPU technology, the most notable example being the simulation of quantum physics [29]. Despite their entirely disparate physical implementations, optimization of quadratic functions over binary variables (e.g., the Quadratic Unconstrained Binary Optimization (QUBO) and Ising models [13]) has emerged as a challenging computational task that a wide variety of novel hardware platforms can address. As these technologies mature, it may be possible for this specialized hardware to rapidly solve challenging combinatorial problems, such as Max-Cut [38] or Max-Clique [53], and preliminary studies have suggested that some classes of Constraint Satisfaction Problems can be effectively encoded in such devices because of their combinatorial structure [8, 9, 67, 72].

At this time, understanding the computational advantage that these hardware platforms may bring to established optimization algorithms remains an open question. For example, it is unclear if the primary benefit will be dramatically reduced runtimes due to highly specialized hardware implementations [31, 76, 77] or if the behavior of the underlying analog computational model will bring intrinsic algorithmic advantages [3, 26]. A compelling example is gate-based quantum computation (QC), where a significant body of theoretical work has found key computational advantages that exploit quantum properties [18, 34, 71]. Indeed, such advantages have recently been demonstrated on quantum computing hardware for the first time [5]. Highlighting similar advantages on other computational platforms, both in theory and in practice, remains a central challenge for novel physics-inspired computing models [36, 46, 51].

Focusing on quantum annealing (QA), this work provides new insights on the properties of this computing model and identifies problem structures where it can provide a computational advantage over a broad range of established solution methods. The central contribution of this work is the analysis of three crafted optimization problems (i.e., Biased Ferromagnets, Frustrated Biased Ferromagnets, and Corrupted Biased Ferromagnets) that are challenging for established optimization approaches but are easy for QA hardware, such as D-Wave’s 2000Q platform. This result suggests that there are classes of optimization problems where QA can effectively identify global solution structure while established heuristics struggle to escape local minima. Two auxiliary contributions that resulted from this pursuit are the identification of the Corrupted Biased Ferromagnet problem, which appears to be a useful benchmark problem beyond this particular study, and the demonstration of, to the best of our knowledge, the most significant performance gains of a quantum annealing platform over established state-of-the-art alternatives.

This work begins with a brief introduction to both the mathematical foundations of the Ising model, Section 2, and quantum annealing, Section 3. It then reviews a variety of algorithms that can be used to solve such models in Section 4. The primary result of the paper is presented via carefully designed structure detection experiments in Section 5. Open challenges relating to developing hybrid algorithms are discussed in Section 6, and Section 7 concludes the paper.

2 A brief introduction to Ising models

This section introduces the notation of the paper and provides a brief introduction to Ising models, a core mathematical abstraction of QA. The Ising model refers to the class of graphical models where the nodes, \({\mathcal {N}} = \left \{1,\dots , N\right \}\), represent spin variables (i.e., \(\sigma _{i} \in \{-1,1\} ~\forall i \in {\mathcal {N}}\)), and the edges, \({\mathcal {E}} \subseteq {\mathcal {N}} \times {\mathcal {N}}\), represent pairwise interactions of spin variables (i.e., \(\sigma _{i} \sigma _{j} ~\forall i,j \in {\mathcal {E}}\)). A local field \(\boldsymbol {h}_{i} ~\forall i \in {\mathcal {N}}\) is specified for each node, and an interaction strength \(\boldsymbol {J}_{ij} ~\forall i,j \in {\mathcal {E}}\) is specified for each edge. The energy of the Ising model is then defined as:

$$ \begin{array}{@{}rcl@{}} E(\sigma) &= \underset{i,j \in {\mathcal{E}}}{\sum} \boldsymbol{J}_{ij} \sigma_{i} \sigma_{j} + \underset{i \in {\mathcal{N}}}{\sum} \boldsymbol{h}_{i} \sigma_{i} \end{array} $$
(1)

Originally introduced in statistical physics as a model for describing phase transitions in ferromagnetic materials [32], the Ising model is currently used in numerous and diverse application fields such as neuroscience [39, 68], bio-polymers [63], gene regulatory networks [55], image segmentation [64], statistical learning [52, 74, 75], and sociology [25].

This work focuses on finding the lowest possible energy of the Ising model, known as a ground state, that is, finding the globally optimal solution of the following discrete optimization problem:

$$ \begin{array}{@{}rcl@{}} && \min: E(\sigma)\\ && \text{s.t.: } \sigma_{i} \in \{-1, 1\} ~\forall i \in {\mathcal{N}} \end{array} $$
(2)

The coupling parameters of Ising models are categorized into two groups based on their sign: the ferromagnetic interactions Jij < 0, which encourage neighboring spins to take the same value, i.e., σiσj = 1, and anti-ferromagnetic interactions Jij > 0, which encourage neighboring spins to take opposite values, i.e., σiσj = − 1.
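The energy function (1) and the sign convention above can be made concrete with a short sketch. The names `energy`, `J`, and `h` below are illustrative choices, not from any particular library:

```python
# A minimal sketch of evaluating the Ising energy in Eq. (1).

def energy(J, h, sigma):
    """J: dict {(i, j): J_ij}, h: dict {i: h_i}, sigma: dict {i: -1 or +1}."""
    e = sum(Jij * sigma[i] * sigma[j] for (i, j), Jij in J.items())
    return e + sum(hi * sigma[i] for i, hi in h.items())

# A single ferromagnetic coupling (J_01 = -1) favors aligned spins.
J, h = {(0, 1): -1.0}, {0: 0.0, 1: 0.0}
assert energy(J, h, {0: 1, 1: 1}) == -1.0   # aligned spins attain the minimum
assert energy(J, h, {0: 1, 1: -1}) == 1.0   # opposed spins are penalized
```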

Frustration

The notion of frustration is central to the study of Ising models and refers to any instance of (2) where the optimal solution does not achieve the minimum of all local interactions [19]. Namely, the optimal solution of a frustrated Ising model, \(\sigma^{*}\), satisfies the following property:

$$ \begin{array}{@{}rcl@{}} E(\sigma^{*}) > \underset{i,j \in {\mathcal{E}}}{\sum} - |\boldsymbol{J}_{ij}| - \underset{i \in {\mathcal{N}}}{\sum} |\boldsymbol{h}_{i}| \end{array} $$
(3)
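A brute-force check on the canonical frustrated instance, an anti-ferromagnetic triangle, illustrates the strict inequality (3). This is an illustrative sketch; the instance and helper names are our own:

```python
from itertools import product

def energy(J, h, sigma):
    e = sum(Jij * sigma[i] * sigma[j] for (i, j), Jij in J.items())
    return e + sum(hi * sigma[i] for i, hi in h.items())

# An anti-ferromagnetic triangle: no assignment can satisfy all three
# J_ij > 0 interactions simultaneously, so the model is frustrated.
J = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 1.0}
h = {0: 0.0, 1: 0.0, 2: 0.0}

best = min(energy(J, h, dict(enumerate(s))) for s in product((-1, 1), repeat=3))
bound = -sum(abs(v) for v in J.values()) - sum(abs(v) for v in h.values())
# best = -1.0 while the interaction-wise bound is -3.0, so (3) holds strictly.
assert best > bound
```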

Gauge Transformations

A valuable property of the Ising model is the gauge transformation, which characterizes an equivalence class of Ising models. Consider the optimal solution of Ising model S, σs. One can construct a new Ising model T where the optimal solution is the target state σt by applying the following parameter transformation:

$$ \begin{array}{@{}rcl@{}} \boldsymbol{J}^{t}_{ij} &=& \boldsymbol{J}_{ij}^{s} {\boldsymbol{\sigma}_{i}^{s}} {\boldsymbol{\sigma}_{j}^{s}} {\boldsymbol{\sigma}_{i}^{t}} {\boldsymbol{\sigma}_{j}^{t}} ~\forall i,j \in {\mathcal{E}} \end{array} $$
(4a)
$$ \begin{array}{@{}rcl@{}} {\boldsymbol{h}_{i}^{t}} &=& {\boldsymbol{h}_{i}^{s}} {\boldsymbol{\sigma}_{i}^{s}} {\boldsymbol{\sigma}_{i}^{t}} ~\forall i \in {\mathcal{N}} \end{array} $$
(4b)

This S-to-T manipulation is referred to as a gauge transformation. Using this property, one can consider the class of Ising models where the optimal solution is \(\sigma _{i} = -1 ~\forall i \in {\mathcal {N}}\) or any arbitrary vector of − 1,1 values without loss of generality.
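The transformation (4a)-(4b) amounts to the relabeling \(\sigma_i \mapsto \sigma_i \sigma_i^s \sigma_i^t\), so it preserves the energy spectrum. A small sketch (instance values and names are illustrative):

```python
from itertools import product

def energy(J, h, sigma):
    e = sum(Jij * sigma[i] * sigma[j] for (i, j), Jij in J.items())
    return e + sum(hi * sigma[i] for i, hi in h.items())

def gauge_transform(J, h, sig_s, sig_t):
    """Map model S (optimum sig_s) to model T (optimum sig_t), per (4a)-(4b)."""
    Jt = {(i, j): Jij * sig_s[i] * sig_s[j] * sig_t[i] * sig_t[j]
          for (i, j), Jij in J.items()}
    ht = {i: hi * sig_s[i] * sig_t[i] for i, hi in h.items()}
    return Jt, ht

J, h = {(0, 1): -1.0, (1, 2): 0.5}, {0: -1.0, 1: 0.0, 2: 0.25}
sig_s = min((dict(enumerate(s)) for s in product((-1, 1), repeat=3)),
            key=lambda s: energy(J, h, s))   # optimum of S by brute force
sig_t = {0: -1, 1: -1, 2: -1}                # desired optimum of T
Jt, ht = gauge_transform(J, h, sig_s, sig_t)

# The target state of T attains the same optimal energy as sig_s in S.
assert energy(Jt, ht, sig_t) == energy(J, h, sig_s)
```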

Classes of Ising Models

Ising models are often categorized by the properties of their optimal solutions, with two notable categories being Ferromagnets (FM) and Spin glasses. Ferromagnetic Ising models are unfrustrated models possessing one or two optimal solutions. The traditional FM model is obtained by setting Jij = − 1,hi = 0. The optimal solutions have a structure with all spins pointing in the same direction, i.e., σi = 1 or σi = − 1, which mimics the behavior of physical magnets at low temperatures. In contrast to FMs, Spin glasses are highly frustrated systems that exhibit an intricate geometry of optimal solutions that tend to take the form of a hierarchy of isosceles sets [61]. Spin glasses are challenging for greedy and local search algorithms [7] due to the nature of their energy landscape [24, 60]. A typical Spin glass instance can be generated using random interaction graphs with P(Jij = − 1) = 0.5,P(Jij = 1) = 0.5, and hi = 0.

Bijection of Ising and Boolean Optimization

It is valuable to observe that there is a bijection between Ising optimization (i.e., σ ∈{− 1,1}) and Boolean optimization (i.e., x ∈{0,1}). The transformation of σ-to-x is given by:

$$ \begin{array}{@{}rcl@{}} \sigma_{i} &=& 2x_{i} - 1 ~\forall i \in {\mathcal{N}} \end{array} $$
(5a)
$$ \begin{array}{@{}rcl@{}} \sigma_{i}\sigma_{j} &=& 4x_{i}x_{j} - 2x_{i} - 2x_{j} + 1 ~\forall i,j \in {\mathcal{E}} \end{array} $$
(5b)

and the inverse x-to-σ is given by:

$$ \begin{array}{@{}rcl@{}} x_{i} &=& \frac{\sigma_{i} + 1}{2} ~\forall i \in {\mathcal{N}} \end{array} $$
(6a)
$$ \begin{array}{@{}rcl@{}} x_{i} x_{j} &=& \frac{\sigma_{i} \sigma_{j} + \sigma_{i} + \sigma_{j} + 1}{4} ~\forall i,j \in {\mathcal{E}} \end{array} $$
(6b)

Consequently, any results from solving Ising models are also immediately applicable to the class of optimization problems referred to as Pseudo-Boolean Optimization or Quadratic Unconstrained Binary Optimization (QUBO):

$$ \begin{array}{@{}rcl@{}} && \min: \underset{i,j \in {\mathcal{E}}}{\sum} \boldsymbol{c}_{ij} x_{i} x_{j} + \underset{i \in {\mathcal{N}}}{\sum} \boldsymbol{c}_{i} x_{i} + \boldsymbol{c} \\ && \text{s.t.: } x_{i} \in \{0, 1\} ~\forall i \in {\mathcal{N}} \end{array} $$
(7)
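Substituting (5a)-(5b) into (1) yields the QUBO coefficients of (7) in closed form; a brute-force check confirms the two objectives agree on every configuration. This is a sketch with illustrative names and instance values:

```python
from itertools import product

def energy(J, h, sigma):
    e = sum(Jij * sigma[i] * sigma[j] for (i, j), Jij in J.items())
    return e + sum(hi * sigma[i] for i, hi in h.items())

def ising_to_qubo(J, h):
    """Substitute sigma_i = 2 x_i - 1 (Eqs. (5a)-(5b)) into Eq. (1),
    collecting terms into the QUBO coefficients of Eq. (7)."""
    c_quad = {(i, j): 4.0 * Jij for (i, j), Jij in J.items()}
    c_lin = {i: 2.0 * hi for i, hi in h.items()}
    for (i, j), Jij in J.items():
        c_lin[i] -= 2.0 * Jij   # cross terms from 4 x_i x_j - 2 x_i - 2 x_j + 1
        c_lin[j] -= 2.0 * Jij
    c_const = sum(J.values()) - sum(h.values())
    return c_quad, c_lin, c_const

def qubo_value(c_quad, c_lin, c_const, x):
    return (sum(c * x[i] * x[j] for (i, j), c in c_quad.items())
            + sum(c * x[i] for i, c in c_lin.items()) + c_const)

J, h = {(0, 1): -1.0, (1, 2): 0.5}, {0: -1.0, 1: 0.0, 2: 0.25}
c_quad, c_lin, c_const = ising_to_qubo(J, h)
for s in product((-1, 1), repeat=3):
    sigma = dict(enumerate(s))
    x = {i: (v + 1) // 2 for i, v in sigma.items()}   # Eq. (6a)
    assert qubo_value(c_quad, c_lin, c_const, x) == energy(J, h, sigma)
```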

In contrast to gate-based QC, which is Turing complete, QA specializes in optimizing Ising models. The next section provides a brief introduction to how quantum mechanics is leveraged by QA to perform Ising model optimization.

3 Foundations of quantum annealing

Quantum annealing is an analog computing technique for minimizing discrete or continuous functions that takes advantage of the exotic properties of quantum systems. This technique is particularly well-suited for finding optimal solutions of Ising models and has drawn significant interest due to hardware realizations via controllable quantum dynamical systems [43]. Quantum annealing is composed of two key elements: leveraging quantum state to lift the minimization problem into an exponentially larger space, and slowly interpolating (i.e., annealing) between an initial easy problem and the target problem. The quantum lifting begins by introducing for each spin σi ∈{− 1,1} a \(2^{N} \times 2^{N}\) dimensional matrix \(\widehat {\sigma }_{i}\) expressible as a Kronecker product of N matrices of dimension 2 × 2:

$$ \begin{array}{@{}rcl@{}} \widehat{\sigma}_{i} = \underbrace{\left( \begin{array}{ll} 1 & 0 \\ 0 & 1 \end{array}\right) \mathop{\otimes} {\cdots} \mathop{\otimes} \left( \begin{array}{ll} 1 & 0 \\ 0 & 1 \end{array}\right)}_{\text{1 to $i-1$}} \mathop{\otimes} \underbrace{\left( \begin{array}{ll} 1 & ~ ~ ~ 0 \\ 0 & -1 \end{array}\right)}_{\text{$i^{\text{th}}$ term}} \mathop{\otimes} \underbrace{\left( \begin{array}{ll} 1 & 0 \\ 0 & 1 \end{array}\right) \mathop{\otimes} {\cdots} \mathop{\otimes} \left( \begin{array}{ll} 1 & 0 \\ 0 & 1 \end{array}\right)}_{\text{$i+1$ to \textit{N}}} \end{array} $$
(8)

In this lifted representation, the value of a spin σi is identified with the two possible eigenvalues 1 and − 1 of the matrix \(\widehat {\sigma }_{i}\). The quantum counterpart of the energy function defined in (1) is the \(2^{N} \times 2^{N}\) matrix obtained by substituting spins with the \(\widehat {\sigma }\) matrices in the algebraic expression of the energy:

$$ \begin{array}{@{}rcl@{}} & \widehat{E} = \underset{i,j \in {\mathcal{E}}}{\sum} \boldsymbol{J}_{ij} \widehat{\sigma}_{i} \widehat{\sigma}_{j} + \underset{i \in {\mathcal{N}}}{\sum} \boldsymbol{h}_{i} \widehat{\sigma}_{i} \end{array} $$
(9)

Notice that the eigenvalues of the matrix in (9) are the \(2^{N}\) possible energy values obtained by evaluating the energy E(σ) from (1) for all possible configurations of spins. This implies that finding the lowest eigenvalue of \(\widehat {E}\) is tantamount to solving the minimization problem in (2). This lifting is clearly impractical in the classical computing context as it transforms a minimization problem over \(2^{N}\) configurations into computing the minimum eigenvalue of a \(2^{N} \times 2^{N}\) matrix. The key motivation for this approach is that it is possible to construct quantum systems with only N quantum bits that attempt to find the minimum eigenvalue of this matrix.
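Because every \(\widehat{\sigma}_i\) in (8) is diagonal, the matrix \(\widehat{E}\) in (9) is diagonal as well, and its eigenvalues can be read off the diagonal. The sketch below exploits this to check, on a tiny instance, that the spectrum of \(\widehat{E}\) enumerates the classical energies E(σ); all names are illustrative:

```python
from itertools import product

def sigma_hat_diag(i, N):
    """Diagonal of the Kronecker product in Eq. (8): entry b is +1 when
    bit i of the basis index b is 0, and -1 otherwise."""
    return [1 if not (b >> (N - 1 - i)) & 1 else -1 for b in range(2 ** N)]

def E_hat_diag(J, h, N):
    """Diagonal of the 2^N x 2^N energy matrix in Eq. (9)."""
    diag = [0.0] * (2 ** N)
    for (i, j), Jij in J.items():
        si, sj = sigma_hat_diag(i, N), sigma_hat_diag(j, N)
        for b in range(2 ** N):
            diag[b] += Jij * si[b] * sj[b]
    for i, hi in h.items():
        si = sigma_hat_diag(i, N)
        for b in range(2 ** N):
            diag[b] += hi * si[b]
    return diag

def energy(J, h, sigma):
    e = sum(Jij * sigma[i] * sigma[j] for (i, j), Jij in J.items())
    return e + sum(hi * sigma[i] for i, hi in h.items())

J, h = {(0, 1): -1.0, (1, 2): 0.5}, {0: -1.0, 1: 0.0, 2: 0.25}
eigs = sorted(E_hat_diag(J, h, 3))
brute = sorted(energy(J, h, dict(enumerate(s))) for s in product((-1, 1), repeat=3))
assert eigs == brute   # lowest eigenvalue equals the ground-state energy
```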

The annealing process provides a way of steering a quantum system into the a priori unknown eigenvector that minimizes the energy of (9) [28, 45]. The core idea is to initialize the quantum system at the minimal eigenvector of a simple energy matrix \(\widehat {E}_{0}\), for which an explicit formula is known. After the system is initialized, the energy matrix is interpolated from the easy problem to the target problem slowly over time. Specifically, the energy matrix at a point during the anneal is given by \(\widehat {E}_{a}({\varGamma }) = (1-{\varGamma })\widehat {E}_{0} + {\varGamma } \widehat {E}\), with Γ varying from 0 to 1. When the anneal is complete, Γ = 1 and the interactions in the quantum system are described by the target energy matrix. The annealing time is the physical time taken by the system to evolve from Γ = 0 to Γ = 1. For suitable starting energy matrices \(\widehat {E}_{0}\) and a sufficiently slow annealing time, theoretical results have demonstrated that a quantum system continuously remains at the minimal eigenvector of the interpolating matrix \(\widehat {E}_{a}({\varGamma })\) [3] and therefore achieves the minimum energy (i.e., a global optimum) of the target problem. Realizing this optimality result in practice has proven difficult due to corruption of the quantum system from the external environment. Nevertheless, quantum annealing can serve as a heuristic for finding high-quality solutions to Ising models, i.e., (2).

3.1 Quantum annealing hardware

Interest in the QA model is due in large part to D-Wave Systems, which has developed the first commercially available QA hardware platform [43]. Given the computational challenges of classically simulating QA, this novel computing device represents the only viable method for studying QA at non-trivial scales, e.g., problems with more than 1000 qubits [11, 22]. At the most basic level, the D-Wave platform allows the user to program an Ising model by providing the parameters J,h in (1) and returns a collection of variable assignments from multiple annealing runs, which reflect optimal or near-optimal solutions to the input problem.

This seemingly simple interface is, however, hindered by a variety of constraints imposed by D-Wave’s 2000Q hardware implementation. The most notable hardware restriction is the Chimera connectivity graph depicted in Fig. 1, where each edge indicates if the hardware supports a coupling term Jij between a pair of qubits i and j. This sparse graph is a stark contrast to traditional quadratic optimization tools, where it is assumed that every pair of variables can interact.

Fig. 1
figure 1

A 2-by-2 Chimera graph illustrating the variable product limitations of D-Wave’s 2000Q processor

The second notable hardware restriction is a limited coefficient programming range. On the D-Wave 2000Q platform the parameters are constrained within the continuous parameter ranges of − 1 ≤Jij ≤ 1 and − 2 ≤hi ≤ 2. At first glance these ranges may not appear to be problematic because the energy function (1) can be rescaled into the hardware’s operating range without any loss of generality. However, operational realities of analog computing devices make the parameter values critically important to the overall performance of the hardware. These challenges include: persistent coefficient biases, which are an artifact of hardware slowly drifting out of calibration between re-calibration cycles; programming biases, which introduce some minor errors in the J,h values that were requested; and environmental noise, which disrupts the quantum behavior of the hardware and results in a reduction of solution quality. Overall, these hardware constraints have made the identification of QA-based performance gains notoriously challenging [16, 42, 54, 58, 65].

Despite the practical challenges in using D-Wave’s hardware platform, extensive experiments have suggested that QA can outperform some established local search methods (e.g., simulated annealing) on carefully designed Ising models [4, 22, 49]. However, demonstrating an unquestionable computational advantage over state-of-the-art methods on contrived and practical problems remains an open challenge.

4 Methods for ising model optimization

The focus of this work is to compare and contrast the behavior of QA to a broad range of established optimization algorithms. To that end, this work considers three core algorithmic categories: (1) complete search methods from the mathematical programming community; (2) local search methods developed by the statistical physics community; and (3) quantum annealing as realized by D-Wave’s hardware platform. The comparison includes both state-of-the-art solution methods from the D-Wave benchmarking literature (e.g., Hamze-Freitas-Selby [69], Integer Linear Programming [16]) and simple straw-man approaches (e.g., Greedy, Glauber Dynamics [33], Min-Sum [30, 60]) to highlight the solution quality of minimalist optimization approaches. This section provides high-level descriptions of the algorithms; implementation details are available as open-source software [17, 69].

4.1 Complete search

Unconstrained Boolean optimization, as in (7), has been the subject of mathematical programming research for several decades [10, 12]. This work considers the two most canonical formulations based on Integer Quadratic Programming and Integer Linear Programming.

Integer Quadratic Programming (IQP)

This formulation consists of using black-box commercial optimization tools to solve (7) directly. This model was leveraged in some of the first QA benchmarking studies [58] and received some criticism [66]. However, the results presented here suggest that this model has become more competitive due to the steady progress of commercial optimization solvers.

Integer Linear Programming (ILP)

This formulation is a slight variation of the IQP model where the variable products xixj are lifted into a new variable xij and constraints are added to capture the conjunction xij = xixj as follows:

$$ \begin{array}{@{}rcl@{}} && \min: \underset{i,j \in {\mathcal{E}}}{\sum} \boldsymbol{c}_{ij} x_{ij} + \underset{i \in {\mathcal{N}}}{\sum} \boldsymbol{c}_{i} x_{i} + \boldsymbol{c} \end{array} $$
(10a)
$$ \begin{array}{@{}rcl@{}} && \text{s.t.: }\\ && x_{ij} \geq x_{i} + x_{j} - 1, ~x_{ij} \leq x_{i}, ~x_{ij} \leq x_{j} ~\forall i,j \in {\mathcal{E}} \end{array} $$
(10b)
$$ \begin{array}{@{}rcl@{}} && x_{i} \in \{0, 1\} ~\forall i \in {\mathcal{N}}, ~x_{ij} \in \{0, 1\} ~\forall i,j \in {\mathcal{E}} \end{array} $$

This formulation was also leveraged in some of the first QA benchmarking studies [20, 66], and [10] suggests it is the best formulation for sparse graphs, as is the case with the D-Wave Chimera graph. However, this work indicates that IQP solvers have improved sufficiently that this conclusion should be revisited.
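The linearization in (10b) is exact over binary variables: for each pair (xi, xj), the three inequalities leave xij = xixj as the only feasible value, which a four-case enumeration confirms. A minimal sketch (function name is our own):

```python
from itertools import product

def feasible_xij(xi, xj):
    """Values of x_ij permitted by the constraints in Eq. (10b)."""
    return [xij for xij in (0, 1)
            if xij >= xi + xj - 1 and xij <= xi and xij <= xj]

# For every binary pair, the only feasible x_ij is the product x_i * x_j.
for xi, xj in product((0, 1), repeat=2):
    assert feasible_xij(xi, xj) == [xi * xj]
```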

4.2 Local search

Although complete search algorithms are helpful in the validation of QA hardware [6, 16], it is broadly accepted that local search algorithms are the most appropriate point of computational comparison to QA methods [1]. Given that a comprehensive enumeration of local search methods would be a monumental undertaking, this work focuses on representatives from four distinct algorithmic categories including greedy, message passing, Markov Chain Monte Carlo, and large neighborhood search.

Greedy (GRD)

The first heuristic algorithm considered by this work is a Steepest Coordinate Descent (SCD) greedy approach. This algorithm assigns the variables one-by-one, always taking the assignment that minimizes the objective value. Specifically, the SCD approach begins with unassigned values, i.e., \(\sigma _{i} = 0 ~\forall i \in {\mathcal {N}}\), and then repeatedly applies the following assignment rule until all of the variables have been assigned a value of − 1 or 1:

$$ \begin{array}{@{}rcl@{}} i, v &=& \underset{i \in {\mathcal{N}}, v \in \{-1, 1\}}{\text{argmin}} E(\sigma_{1}, \ldots, \sigma_{i-1}, v, \sigma_{i+1}, \ldots,\sigma_{N}) \end{array} $$
(11a)
$$ \begin{array}{@{}rcl@{}} \sigma_{i} &=& v \end{array} $$
(11b)

In each application, ties in the argmin are broken at random, giving rise to a potentially stochastic outcome of the heuristic. Once all of the variables have been assigned, the algorithm is repeated until a runtime limit is reached and only the best solution found is returned. Although this approach is very simple, it can be effective in Ising models with minimal amounts of frustration.
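One pass of the SCD rule (11a)-(11b) can be sketched as follows. Holding unassigned spins at 0 means an assignment's energy change depends only on the fields and on already-assigned neighbors; names and the instance in the example are illustrative:

```python
import random

def scd_pass(J, h, rng=random):
    """One Steepest Coordinate Descent pass, per Eqs. (11a)-(11b)."""
    sigma = {i: 0 for i in h}   # 0 marks an unassigned spin
    adj = {i: [] for i in h}
    for (i, j), Jij in J.items():
        adj[i].append((j, Jij))
        adj[j].append((i, Jij))
    while any(v == 0 for v in sigma.values()):
        candidates = []
        for i in (k for k, v in sigma.items() if v == 0):
            for v in (-1, 1):
                # Energy change of assigning sigma_i = v; unassigned
                # neighbors contribute nothing since sigma[j] == 0.
                delta = h[i] * v + sum(Jij * v * sigma[j] for j, Jij in adj[i])
                candidates.append((delta, i, v))
        lo = min(d for d, _, _ in candidates)
        _, i, v = rng.choice([c for c in candidates if c[0] == lo])  # random tie-break
        sigma[i] = v
    return sigma

# On a ferromagnetic chain with a single bias, SCD recovers the optimum.
J = {(0, 1): -1.0, (1, 2): -1.0}
h = {0: -0.1, 1: 0.0, 2: 0.0}
assert scd_pass(J, h) == {0: 1, 1: 1, 2: 1}
```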

Message Passing (MP)

The second algorithm considered by this work is a message-based Min-Sum (MS) algorithm [30, 60], which is an adaptation of the celebrated Belief Propagation algorithm for solving minimization problems on networks. A key property of the MS approach is its ability to identify the global minimum of cost functions with a tree dependency structure between the variables; i.e., if no cycles are formed by the interactions in \(\mathcal {E}\). In the more general case of loopy dependency structures [60], MS provides a heuristic minimization method. It is nevertheless a popular technique favored in communication systems for its low computational cost and notable performance on random tree-like networks [73].

For the optimization model considered here, as in (2), the MS messages, \(\epsilon _{i \rightarrow j}\), are computed iteratively along directed edges \(i \rightarrow j\) and \(j \rightarrow i\) for each edge \((i,j)\in \mathcal {E}\), according to the Min-Sum equations:

$$ \begin{array}{@{}rcl@{}} {\epsilon}_{i \rightarrow j}^{t+1} = \text{SSL}(2\boldsymbol{J}_{ij},2\boldsymbol{h}_{i} + \underset{k \in \mathcal{E}(i) \setminus j}{\sum}{\epsilon}_{k \rightarrow i}^{t} ) \end{array} $$
(12a)
$$ \begin{array}{@{}rcl@{}} \text{SSL}(x,y) = \min(x,y)-\min(-x,y) -x \end{array} $$
(12b)

Here, \(\mathcal {E}(i) \setminus j\) denotes the neighbors of i excluding j, and SSL denotes the Symmetric Saturated Linear transfer function. Once a fixed point of (12a) is obtained or a prescribed runtime limit is reached, the MS algorithm outputs a configuration based on the following formula:

$$ \begin{array}{@{}rcl@{}} \sigma_{i} = - \text{sign}\left( 2\boldsymbol{h}_{i} + \underset{k \in \mathcal{E}(i)}{\sum}\epsilon_{k \rightarrow i} \right) \end{array} $$
(13)

By convention, if the argument of the sign function is 0, a value of 1 or − 1 is assigned randomly with equal probability.
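The recursion can be sketched with explicit two-valued messages \(\epsilon_{i \rightarrow j}(\sigma_j)\), of which the scalar SSL form above is a compact reparameterization. This is a bare-bones sketch with illustrative names; the damping and convergence checks used in practical implementations are omitted, and on loopy graphs it is only a heuristic, as the text notes:

```python
import random

def min_sum(J, h, iters=50, seed=0):
    """Min-Sum sketch: parallel message updates, then the decision rule."""
    rng = random.Random(seed)
    adj = {i: {} for i in h}
    for (i, j), Jij in J.items():
        adj[i][j] = Jij
        adj[j][i] = Jij
    # eps[(i, j)][s] = cost message from i to j when sigma_j = s.
    eps = {(i, j): {-1: 0.0, 1: 0.0} for i in adj for j in adj[i]}
    for _ in range(iters):
        eps = {(i, j): {sj: min(adj[i][j] * si * sj + h[i] * si
                                + sum(eps[(k, i)][si] for k in adj[i] if k != j)
                                for si in (-1, 1))
                        for sj in (-1, 1)}
               for (i, j) in eps}
    sigma = {}
    for i in adj:
        cost = {s: h[i] * s + sum(eps[(k, i)][s] for k in adj[i]) for s in (-1, 1)}
        if cost[1] < cost[-1]:
            sigma[i] = 1
        elif cost[1] > cost[-1]:
            sigma[i] = -1
        else:
            sigma[i] = rng.choice([-1, 1])   # tie broken at random, as in the text
    return sigma

# On a tree (here, a chain) Min-Sum recovers the global minimum.
J = {(0, 1): -1.0, (1, 2): -1.0}
h = {0: -0.5, 1: 0.0, 2: 0.0}
assert min_sum(J, h) == {0: 1, 1: 1, 2: 1}
```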

Markov Chain Monte Carlo (MCMC)

MCMC algorithms include a wide range of methods to generate samples from complex probability distributions. A natural Markov Chain for the Ising model is given by Glauber dynamics, where the value of each variable is updated according to its conditional probability distribution. Glauber dynamics is often used as a method for producing samples from Ising models at finite temperature [33]. This work considers the so-called Zero Temperature Glauber Dynamics (GD) algorithm, which is the optimization variant of the Glauber dynamics sampling method, and which is also used in physics as a simple model for describing avalanche phenomena in magnetic materials [23]. From the optimization perspective, this approach is a single-variable greedy local search algorithm.

A step t of the GD algorithm consists of checking each variable \(i\in \mathcal {N}\) in a random order and comparing the objective cost of the current configuration σt to the configuration with the variable \({{\sigma }_{i}^{t}}\) flipped. If the objective value is lower in the flipped configuration, i.e., \(E(\underline {\sigma }^{t}) > E({{\sigma }_{1}^{t}},\ldots ,-{{\sigma }_{i}^{t}},\ldots ,{{\sigma }_{N}^{t}})\), then the flipped configuration is selected as the new current configuration \(\underline {\sigma }^{t+1} = ({{\sigma }_{1}^{t}},\ldots ,-{{\sigma }_{i}^{t}},\ldots ,{{\sigma }_{N}^{t}})\). When the objective difference is 0, the previous or new configuration is selected randomly with equal probability. If, after visiting all of the variables, no single-variable flip can improve the current assignment, then the configuration is identified as a local minimum and the algorithm is restarted with a new randomly generated configuration. This process is repeated until a runtime limit is reached.
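A single zero-temperature sweep can be sketched as below; the flip criterion uses the local energy change \(\Delta = -2\sigma_i (h_i + \sum_j J_{ij}\sigma_j)\), which equals the objective difference of flipping spin i. Names are illustrative, and the random-restart outer loop from the text is left as a comment:

```python
import random

def gd_sweep(J, h, sigma, rng):
    """One zero-temperature Glauber sweep: visit spins in random order and
    accept any flip that lowers the energy (ties accepted with prob. 1/2).
    Returns True if any strictly improving flip was made."""
    adj = {i: [] for i in h}
    for (i, j), Jij in J.items():
        adj[i].append((j, Jij))
        adj[j].append((i, Jij))
    improved = False
    order = list(sigma)
    rng.shuffle(order)
    for i in order:
        # Energy change of flipping sigma_i -> -sigma_i.
        delta = -2 * sigma[i] * (h[i] + sum(Jij * sigma[j] for j, Jij in adj[i]))
        if delta < 0 or (delta == 0 and rng.random() < 0.5):
            sigma[i] = -sigma[i]
            improved = improved or delta < 0
    return improved

rng = random.Random(0)
J, h = {(0, 1): -1.0}, {0: -0.5, 1: 0.0}
sigma = {0: 1, 1: -1}
while gd_sweep(J, h, sigma, rng):
    pass   # a full GD solver would restart from a random configuration here
# The sweep settles in one of the two minima of this tiny instance.
assert sigma in ({0: 1, 1: 1}, {0: -1, 1: -1})
```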

Large Neighborhood Search (LNS)

The state-of-the-art meta-heuristic for benchmarking D-Wave-based QA algorithms is the Hamze-Freitas-Selby (HFS) algorithm [37, 70]. The core idea of this algorithm is to extract low treewidth subgraphs of the given Ising model and then use dynamic programming to quickly compute the optimal configuration of these subgraphs. This extract and optimize process is repeated until a specified time limit is reached. This approach has demonstrated remarkable results in a variety of benchmarking studies [16, 44, 48, 49, 65]. The notable success of this solver can be attributed to three key factors. First, it is highly specialized to solving Ising models on the Chimera graphs (i.e., Fig. 1), a topological structure that is particularly amenable to low treewidth subgraphs. Second, it leverages integer arithmetic instead of floating point, which provides a significant performance improvement but also leads to notable precision limits. Third, the baseline implementation is a highly optimized C code [69], which runs at near-ideal performance.

4.3 Quantum annealing

Extending the theoretical overview from Section 3, the following implementation details are required to leverage the D-Wave 2000Q platform as a reliable optimization tool. The QA algorithm considered here consists of programming the Ising model of interest, repeating the annealing process some number of times (i.e., num_reads), and returning the lowest energy solution found among all of those replicates. No correction or solution polishing is applied in this solver. By varying the number of reads considered (e.g., from 10 to 10,000), the solution quality and total runtime of the QA algorithm increase. It is important to highlight that the D-Wave platform provides a wide variety of parameters to control the annealing process (e.g., annealing time, qubit offsets, custom annealing schedules, etc.). In the interest of simplicity and reproducibility, this work does not leverage any of those advanced features, and it is likely that the results presented here would be further improved by careful utilization of those additional capabilities [2, 50, 56].

Note that all of the problems considered in this work have been generated to meet the implementation requirements discussed in Section 3.1 for a specific D-Wave chip deployed at Los Alamos National Laboratory. Consequently, no problem transformations are required to run the instances on the target hardware platform. Most notably, no embedding or rescaling is required. This approach is standard practice in QA evaluation studies and the arguments for it are discussed at length in [15, 16].

5 Structure detection experiments

This section presents the primary result of this work. Specifically, it analyzes three crafted optimization problems of increasing complexity—the Biased Ferromagnet, Frustrated Biased Ferromagnet, and Corrupted Biased Ferromagnet—all of which highlight the potential for QA to quickly identify the global structural properties of these problems. The algorithm performance analysis focuses on two key metrics: solution quality over time (i.e., a performance profile) and the minimum Hamming distance to any optimal solution over time. The Hamming distance metric is particularly informative in this study as the problems have been designed to have local minima that are very close to the global optimum in terms of objective value, but are very distant in terms of Hamming distance. The core finding is that QA produces solutions that are close to global optimality, both in terms of objective value and Hamming distance.

Problem generation

All problems considered in this work are defined by simple probabilistic graphical models and are generated on a specific D-Wave hardware graph. To avoid bias towards one particular random instance, 100 instances are generated and the mean over this collection of instances is presented. Additionally, a random gauge transformation is applied to every instance to obfuscate the optimal solution and mitigate artifacts from the choice of initial condition in each solution approach.

Computation Environment

The CPU-based algorithms are run on HPE ProLiant XL170r servers with dual Intel 2.10GHz CPUs and 128GB memory. Gurobi 9.0 [35] was used for solving the Integer Programming (ILP/IQP) formulations. All of the algorithms were configured to only leverage one thread and the reported runtime reflects the wall clock time of each solver’s core routine and does not include pre-processing or post-processing of the problem data.

The QA computation is conducted on a D-Wave 2000Q quantum annealer deployed at Los Alamos National Laboratory. This computer has a 16-by-16 Chimera cell topology with random omissions; in total, it has 2032 spins (i.e., \(\mathcal {N}\)) and 5924 couplers (i.e., \(\mathcal {E}\)). The hardware is configured to execute 10 to 10,000 annealing runs using a 5-microsecond annealing time per run and a random gauge transformation every 100 runs, to mitigate the various sources of bias in the problem encoding. The reported runtime of the QA hardware reflects the amount of on-chip time used; it does not include the overhead of communication or scheduling of the computation, which takes about one to two seconds. Given a sufficient engineering effort to reduce overheads, on-chip time would be the dominating runtime factor.

5.1 The biased ferromagnet

$$ \begin{array}{@{}rcl@{}} \boldsymbol{J}_{ij} &=& -1.00 ~\forall i,j \in {\mathcal{E}};\\ P(\boldsymbol{h}_{i} &=& 0.00) = 0.990 , P(\boldsymbol{h}_{i} = -1.00) = 0.010 ~\forall i \in {\mathcal{N}} \end{array} $$
(BFM)

Inspired by the Ferromagnet model, this study begins with the Biased Ferromagnet (BFM) model—a toy problem for building intuition about a type of structure that QA can exploit. Notice that this model has no frustration and has a few linear terms that bias it to prefer σi = 1 as the global optimal solution. W.h.p. σi = 1 is the unique optimal solution, and the assignment of σi = − 1 is a local minimum that is sub-optimal by \(0.02 \cdot |{\mathcal {N}}|\) in expectation and has a maximal Hamming distance of \(|{\mathcal {N}}|\). The local minimum is an attractive solution because it is nearly optimal; however, it is hard for a local search solver to escape from it due to its Hamming distance from the true global minimum. This instance presents two key algorithmic challenges: first, one must effectively detect the global structure (i.e., all the variables should take the same value); second, one must correctly discriminate between the two nearly optimal solutions that are very distant from one another.
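Sampling a BFM instance and comparing the two all-aligned assignments makes the energy gap concrete. This sketch uses a path graph as a stand-in for the Chimera hardware graph, and all names are illustrative:

```python
import random

def generate_bfm(nodes, edges, seed=0):
    """Sample a BFM instance: all couplings -1.0, and each field 0.0
    with probability 0.99 or -1.0 with probability 0.01."""
    rng = random.Random(seed)
    J = {e: -1.0 for e in edges}
    h = {i: (-1.0 if rng.random() < 0.01 else 0.0) for i in nodes}
    return J, h

def energy(J, h, sigma):
    e = sum(Jij * sigma[i] * sigma[j] for (i, j), Jij in J.items())
    return e + sum(hi * sigma[i] for i, hi in h.items())

nodes = range(500)
edges = [(i, i + 1) for i in range(499)]   # a path stands in for Chimera here
J, h = generate_bfm(nodes, edges, seed=7)

up = energy(J, h, {i: 1 for i in nodes})     # global structure: all +1
down = energy(J, h, {i: -1 for i in nodes})  # competing local minimum
gap = down - up   # twice the number of biased fields; 0.02 * |N| in expectation
assert gap == -2 * sum(h.values()) and gap >= 0
```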

Figure 2 presents the results of running all of the algorithms from Section 4 on the BFM model. The key observations are as follows:

  • Both the greedy (i.e., SCD) and relaxation-based solvers (i.e., IQP/ILP/MS) correctly identify this problem’s structure and quickly converge on the globally optimal solution (Fig. 2, top-right).

  • Neighborhood-based local search methods (e.g., GD) tend to get stuck in the local minimum of this problem. Even advanced local search methods (e.g., HFS) may miss the global optimum in rare cases (Fig. 2, top).

  • The Hamming distance analysis indicates that QA has a high probability (i.e., greater than 0.9) of finding the exact global optimal solution (Fig. 2, bottom-right). This explains why just 20 runs are sufficient for QA to find the optimal solution w.h.p. (Fig. 2, top-right).

A key observation from this toy problem is that making a continuous relaxation of the problem (e.g., IQP/ILP/MS) can help algorithms detect global structure and avoid local minima that present challenges for neighborhood-based local search methods (e.g., GD/LNS). QA has comparable performance to these relaxation-based methods, both in terms of solution quality and runtime, and does appear to detect the global structure of the BFM problem class.
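As a rough illustration of why a continuous relaxation can expose this global structure (a simplified sketch, not the actual IQP/ILP/MS solvers), consider projected gradient descent on the box relaxation σ ∈ [−1, 1] of a small BFM-like instance: a single biased spin pulls its neighbors, the couplings propagate the pull across the lattice, and rounding then recovers the all-(+1) optimum. The 4x4 grid and single bias term are assumptions of this sketch.

```python
# Box relaxation sigma_v in [-1, 1] of a BFM-like instance: J = -1 on a
# 4x4 grid and a single linear bias h = -1 on one corner spin.
n = 4
nodes = [(r, c) for r in range(n) for c in range(n)]
nbrs = {v: [] for v in nodes}
for r in range(n):
    for c in range(n):
        for dr, dc in ((0, 1), (1, 0)):
            if r + dr < n and c + dc < n:
                nbrs[(r, c)].append((r + dr, c + dc))
                nbrs[(r + dr, c + dc)].append((r, c))
h = {v: 0.0 for v in nodes}
h[(0, 0)] = -1.0

s = {v: 0.0 for v in nodes}  # start at the center of the box
lr = 0.1
for _ in range(300):
    # Gradient of E(s) = sum_ij -s_i s_j + sum_v h_v s_v.
    grad = {v: sum(-s[u] for u in nbrs[v]) + h[v] for v in nodes}
    s = {v: max(-1.0, min(1.0, s[v] - lr * grad[v])) for v in nodes}

# Rounding the relaxed solution recovers the global structure.
rounded = {v: (1 if s[v] >= 0 else -1) for v in nodes}
assert all(x == 1 for x in rounded.values())
```

Because every update only pushes variables toward +1 here, the dynamics are monotone; a single-flip local search started at σ = −1 has no such escape route.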

Fig. 2

Performance profile (top) and Hamming Distance (bottom) analysis for the Biased Ferromagnet instance

However encouraging these results are, the BFM problem is a straw man that is trivial for five of the seven solution methods considered here. The next experiment introduces frustration into the BFM problem to understand how that impacts problem difficulty for the solution methods considered.

5.2 The frustrated biased ferromagnet

$$ \begin{array}{@{}rcl@{}} \boldsymbol{J}_{ij} &=& -1.00 ~\forall i,j \in {\mathcal{E}} \\ P(\boldsymbol{h}_{i} = 0.00) &=& 0.970, \quad P(\boldsymbol{h}_{i} = -1.00) = 0.020, \quad P(\boldsymbol{h}_{i} = 1.00) = 0.010 ~\forall i \in {\mathcal{N}} \end{array} $$
(FBFM)

The next step considers a slightly more challenging problem called the Frustrated Biased Ferromagnet (FBFM), which is a specific case of the random field Ising model [21] and similar in spirit to the Clause Problems considered in [57]. The FBFM deviates from the BFM by introducing frustration among the linear terms of the problem. Notice that on average 2% of the decision variables locally prefer σi = 1 while 1% prefer σi = − 1. Throughout the optimization process these two competing preferences must be resolved, leading to frustration. W.h.p. this model has the same unique global optimal solution as the BFM, which occurs when σi = 1. The opposite assignment of σi = − 1 remains a local minimum that is sub-optimal by \(0.02 \cdot |{\mathcal {N}}|\) in expectation and sits at the maximal Hamming distance of \(|{\mathcal {N}}|\). By design, the energy difference of these two extreme assignments is consistent with BFM, to keep the two problem classes as similar as possible.
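The claimed expected gap can be checked directly: the couplings contribute identically to the two extreme assignments, so the gap between σ = −1 and σ = +1 reduces to −2Σᵢhᵢ, with expectation 2(0.020 − 0.010)|N| = 0.02·|N|. A quick Monte Carlo sketch using the FBFM field distribution above:

```python
import random

rng = random.Random(7)
n = 100_000

def sample_field():
    """h_i = -1 w.p. 0.020, +1 w.p. 0.010, 0 otherwise (FBFM definition)."""
    u = rng.random()
    return -1.0 if u < 0.02 else (1.0 if u < 0.03 else 0.0)

h = [sample_field() for _ in range(n)]
# E(all -1) - E(all +1): the J terms cancel, leaving -2 * sum(h_i).
gap_per_spin = -2 * sum(h) / n
assert abs(gap_per_spin - 0.02) < 0.005  # concentrates near 0.02 per spin
```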

Figure 3 presents the same performance analysis for the FBFM model. The key observations are as follows:

  • When compared to BFM, FBFM presents an increased challenge for the simple greedy (i.e., SCD) and local search (i.e., GD/MS) algorithms.

  • Although the SCD algorithm is worse than HFS in terms of objective quality, it is comparable or better in terms of Hamming distance (Fig. 3, bottom-left). This highlights how these two metrics capture different properties of the underlying algorithms.

  • The results of QA and the relaxation-based solvers (i.e., IQP/ILP), are nearly identical to the BFM case, suggesting that this type of frustration does not present a significant challenge for these solution approaches.

These results suggest that frustration in the linear terms alone (i.e., h) is not sufficient for building optimization tasks that are non-trivial for a wide variety of general purpose solution methods. In the next study, frustration in the quadratic terms (i.e., J) is incorporated to increase the difficulty for the relaxation-based solution methods.

Fig. 3

Performance profile (top) and Hamming Distance (bottom) analysis for the Frustrated Biased Ferromagnet instance

5.3 The corrupted biased ferromagnet

$$ \begin{array}{@{}rcl@{}} P(\boldsymbol{J}_{ij} = -1.00) &=& 0.625, \quad P(\boldsymbol{J}_{ij} = 0.20) = 0.375 ~\forall i,j \in {\mathcal{E}} \\ P(\boldsymbol{h}_{i} = 0.00) &=& 0.970, \quad P(\boldsymbol{h}_{i} = -1.00) = 0.020, \quad P(\boldsymbol{h}_{i} = 1.00) = 0.010 ~\forall i \in {\mathcal{N}} \end{array} $$
(CBFM)

The inspiration for this instance is to leverage insights from the theory of spin glasses to build more computationally challenging problems. The core idea is to carefully corrupt the ferromagnetic problem structure with frustrating anti-ferromagnetic links that obfuscate the ferromagnetic properties without completely destroying them. A parameter sweep of different corruption values yields the Corrupted Biased FerroMagnet (CBFM) model, which retains the global structure that σi = 1 is a near globally optimal solution w.h.p., while obfuscating this property with misleading anti-ferromagnetic links and frustrated local fields.
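One way to see that the corruption obfuscates the ferromagnetic structure without destroying it is that the expected coupler value stays negative, 0.625·(−1.00) + 0.375·0.20 = −0.55, so the instance remains net ferromagnetic on average. A sampling sketch using the CBFM coupler distribution above:

```python
import random

rng = random.Random(11)

def sample_coupler():
    """J_ij = -1 w.p. 0.625, +0.2 w.p. 0.375 (CBFM definition)."""
    return -1.0 if rng.random() < 0.625 else 0.2

m = 100_000
mean_J = sum(sample_coupler() for _ in range(m)) / m
# Net ferromagnetic on average, despite 37.5% anti-ferromagnetic links.
assert abs(mean_J - (-0.55)) < 0.01
```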

Figure 4 presents a similar performance analysis for the CBFM model. The key observations are as follows:

  • In contrast to the BFM and FBFM cases, solvers that leverage continuous relaxations, such as IQP and ILP, do not immediately identify this problem’s structure and can take between 50 and 700 seconds to identify the globally optimal solution (Fig. 4, top-left).

  • The advanced local search method (i.e., HFS) consistently converges to a global optimum (Fig. 4, top-right), which does not always occur in the BFM and FBFM cases.

  • Although the MS algorithm is notably worse than GD in terms of objective quality, it is notably better in terms of Hamming distance. This further indicates how these two metrics capture different properties of the underlying algorithms (Fig. 4, bottom-left).

  • Although this instance presents more of a challenge for QA than BFM and FBFM, QA still finds the global minimum with high probability; 500 to 1,000 runs are sufficient to find a near-optimal solution in all cases. This is 10 to 100 times faster than the next-best algorithm, HFS (Fig. 4, top-right).

  • The Hamming distance analysis suggests that the success of the QA approach stems from its significant probability (i.e., greater than 0.12) of returning a solution that has a Hamming distance of less than 1% from the global optimal solution (Fig. 4, bottom-right).

The overarching trend of this study is that QA is successful in detecting the global structure of the BFM, FBFM, and CBFM instances (i.e., low Hamming distance to optimal, w.h.p.). Furthermore, it can do so notably faster than all of the other algorithms considered here. This suggests that, in this class of problems, QA brings a unique value that is not captured by the other algorithms considered. Similar to how the relaxation methods succeed at the BFM and FBFM instances, we hypothesize that the success of QA on the CBFM instance is driven by the solution search occurring in a smooth high-dimensional continuous space, as discussed in Section 3. In this instance class, QA may also benefit from so-called finite-range tunnelling effects, which allow QA to change the state of multiple variables simultaneously (i.e., global moves) [22, 27]. Regardless of the underlying cause, QA’s performance on the CBFM instance is particularly notable and worthy of further investigation.
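The benefit of such global moves can be illustrated even on the simple BFM structure (a toy 4x4 grid with a single bias term, an assumption of this sketch): from the σ = −1 local minimum every single-spin flip is uphill, while flipping all spins at once lowers the energy.

```python
# Single-flip vs. global moves from the sigma = -1 local minimum of a
# small BFM-like instance (J = -1 on a 4x4 grid, one field h = -1).
n = 4
nodes = [(r, c) for r in range(n) for c in range(n)]
edges = ([((r, c), (r, c + 1)) for r in range(n) for c in range(n - 1)]
         + [((r, c), (r + 1, c)) for r in range(n - 1) for c in range(n)])
h = {v: 0.0 for v in nodes}
h[(0, 0)] = -1.0

def energy(s):
    return (sum(-s[i] * s[j] for i, j in edges)
            + sum(h[v] * s[v] for v in nodes))

down = {v: -1 for v in nodes}
e_down = energy(down)

# Every single-spin flip from the local minimum increases the energy ...
for v in nodes:
    flipped = dict(down)
    flipped[v] = 1
    assert energy(flipped) > e_down

# ... while the global move of flipping all spins at once lowers it.
up = {v: 1 for v in nodes}
assert energy(up) < e_down
```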

Fig. 4

Performance profile (top) and Hamming Distance (bottom) analysis for the Corrupted Biased Ferromagnet instance

5.4 Bias structure variants

As part of the design process, uniform-field variants of the problems proposed herein were also considered. These variants featured weaker and more uniformly distributed bias terms. Specifically, the term P(hi = − 1.00) = 0.010 was replaced with P(hi = − 0.01) = 1.000. Upon continued analysis, it was observed that the stronger and less-uniform bias terms resulted in more challenging cases for all of the solution methods considered, and hence, were selected as the preferred design for the problems proposed by this work. In the interest of completeness, Appendix A provides a detailed analysis of the uniform-field variants of the BFM, FBFM, and CBFM instances to illustrate how this problem variant impacts the performance of the solution methods considered here.

5.5 A comparison to other instance classes

The CBFM problem was designed to have specific structural properties that are beneficial to the QA approach. It is important to note that not all instance classes have such an advantageous structure. This point is highlighted in Fig. 5, which compares three landmark problem classes from the QA benchmarking literature: Weak-Strong Cluster Networks (WSCN) [22], Frustrated Cluster Loops with Gadgets (FCLG) [4], and Random Couplers and Fields (RANF-1) [16, 20]. These results show that D-Wave’s current 2000Q hardware platform can be outperformed by local and complete search methods on some classes of problems. However, it is valuable to observe that these previously proposed instance classes are either relatively easy for local search algorithms (i.e., WSCN and RANF-1) or relatively easy for complete search algorithms (i.e., WSCN and FCLG), neither of which is an ideal property for conducting benchmarking studies. To the best of our knowledge, the proposed CBFM problem is the first instance class that presents a notable computational challenge for both local search and complete search algorithms.

Fig. 5

Performance profiles of other problem classes from the literature

6 Quantum annealing as a primal heuristic

QA’s notable ability to find high-quality solutions to the CBFM problem suggests the development of hybrid algorithms that leverage QA for finding upper bounds within a complete search method that can also provide global optimality proofs. A simple version of such an approach was developed, where 1000 runs of QA were used to warm-start the IQP solver with a high-quality initial solution. The results of this hybrid approach are presented in Fig. 6. The IQP solver clearly benefits from the warm-start on short time scales. However, it does not lead to a notable reduction in the time to produce the optimality proof. This suggests that a state-of-the-art hybrid complete search solver needs to combine QA for finding upper bounds with more sophisticated lower-bounding techniques, such as those presented in [6, 44].
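The warm-start mechanism can be sketched in miniature with a toy branch-and-bound solver (a naive bound and a hand-made frustrated chain, not the IQP solver or the CBFM instances used in the experiments): a good incumbent tightens pruning from the first node, yet the search must still close the lower bound to complete the optimality proof.

```python
def ising_energy(h, J, s):
    """E(s) = sum_ij J_ij s_i s_j + sum_i h_i s_i for s in {-1, +1}^n."""
    return (sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
            + sum(hi * si for hi, si in zip(h, s)))

def branch_and_bound(h, J, incumbent=float("inf")):
    """Depth-first search over spin assignments. The lower bound on a
    partial assignment is its fixed energy plus -|J_ij| (resp. -|h_i|)
    for every term still involving a free spin -- the best any
    completion could possibly achieve."""
    best, nodes = incumbent, 0

    def bound(prefix):
        k, e = len(prefix), 0.0
        for (i, j), Jij in J.items():
            e += Jij * prefix[i] * prefix[j] if i < k and j < k else -abs(Jij)
        for i, hi in enumerate(h):
            e += hi * prefix[i] if i < k else -abs(hi)
        return e

    def rec(prefix):
        nonlocal best, nodes
        nodes += 1
        if len(prefix) == len(h):
            best = min(best, ising_energy(h, J, prefix))
            return
        if bound(prefix) >= best:
            return  # no completion can beat the incumbent
        rec(prefix + [1])
        rec(prefix + [-1])

    rec([])
    return best, nodes

# A tiny frustrated chain; the "QA" incumbent is played here by the
# all-ones assignment, standing in for a hardware-produced warm start.
n = 10
h = [0.0] * n
h[0] = -1.0
J = {(i, i + 1): (-1.0 if i % 3 else 0.2) for i in range(n - 1)}

warm = ising_energy(h, J, [1] * n)
cold_best, cold_nodes = branch_and_bound(h, J)
warm_best, warm_nodes = branch_and_bound(h, J, incumbent=warm)
assert cold_best == warm_best    # same optimum and same proof of it,
assert warm_nodes <= cold_nodes  # reached while exploring fewer nodes
```

In this sketch the warm start only reduces the node count; the proof still requires exhausting the bound, mirroring why better lower-bounding techniques are needed alongside QA upper bounds.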

Fig. 6

Performance profile of Warm-Starting IQP with QA solutions

7 Conclusion

This work explored how quantum annealing hardware might be able to support heuristic algorithms in finding high-quality solutions to challenging combinatorial optimization problems. A careful analysis of quantum annealing’s performance on the Biased Ferromagnet, Frustrated Biased Ferromagnet, and Corrupted Biased Ferromagnet problems with more than 2,000 decision variables suggests that this approach is capable of quickly identifying the structure of the optimal solution to these problems, while a variety of local and complete search algorithms struggle to identify this structure. This result suggests that integrating quantum annealing into meta-heuristic algorithms could yield unique variable assignments and increase the discovery of high-quality solutions.

Although demonstration of a runtime advantage was not the focus of this work, the success of quantum annealing on the Corrupted Biased Ferromagnet problem compared to other solution methods is a promising outcome for QA and warrants further investigation. An in-depth theoretical study of the Corrupted Biased Ferromagnet case could provide deeper insights into the structural properties that quantum annealing is exploiting in this problem and would provide additional insights into the classes of problems that have the best chance to demonstrate an unquestionable computational advantage for quantum annealing hardware. It is important to highlight that while the research community is currently searching for an unquestionable computational advantage for quantum annealing hardware by any means necessary, significant additional research will be required to bridge the gap between contrived hardware-specific optimization tasks and practical optimization applications.