During the last two years, the Google Quantum AI team has made progress in understanding the physics governing quantum annealers. We recently applied these new insights to construct proof-of-principle optimization problems and programmed these into the D-Wave 2X quantum annealer that Google operates jointly with NASA. The problems were designed to demonstrate that quantum annealing can offer runtime advantages for hard optimization problems characterized by rugged energy landscapes.
We found that for problem instances involving nearly 1000 binary variables, quantum annealing significantly outperforms its classical counterpart, simulated annealing: it is more than 100 million times faster than simulated annealing running on a single core. We also compared the quantum hardware to Quantum Monte Carlo, a method that emulates the behavior of quantum systems but runs on conventional processors. While the two methods scale comparably with problem size, they are again separated by a large prefactor, sometimes as high as 100 million.
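To make the classical baseline concrete, here is a minimal single-core simulated annealing sketch for an Ising energy function. The function names, the linear temperature schedule, and all parameters are our own illustrative choices, not Google's benchmark implementation:

```python
import math
import random

def ising_energy(s, h, J):
    """E(s) = sum_i h_i s_i + sum_{(i,j)} J_ij s_i s_j, with spins s_i = +/-1."""
    e = sum(hi * si for hi, si in zip(h, s))
    e += sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
    return e

def simulated_annealing(h, J, sweeps=2000, t_hot=5.0, t_cold=0.05, seed=0):
    """Single-spin-flip Metropolis dynamics with a linear cooling schedule."""
    rng = random.Random(seed)
    n = len(h)
    s = [rng.choice((-1, 1)) for _ in range(n)]
    # Precompute each spin's neighbors for fast local-field evaluation.
    nbrs = {i: [] for i in range(n)}
    for (i, j), Jij in J.items():
        nbrs[i].append((j, Jij))
        nbrs[j].append((i, Jij))
    e = ising_energy(s, h, J)
    best_s, best_e = s[:], e
    for sweep in range(sweeps):
        t = t_hot + (t_cold - t_hot) * sweep / (sweeps - 1)
        for i in range(n):
            # Energy change from flipping spin i.
            field = h[i] + sum(Jij * s[j] for j, Jij in nbrs[i])
            de = -2.0 * s[i] * field
            if de <= 0 or rng.random() < math.exp(-de / t):
                s[i] = -s[i]
                e += de
                if e < best_e:
                    best_s, best_e = s[:], e
    return best_s, best_e
```

Each move flips a single spin, so crossing an energy barrier of height ΔE requires thermal activation with probability e^(-ΔE/T); this is precisely why tall barriers are costly for simulated annealing, whereas tunneling can pass through narrow ones.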
While these results are intriguing and very encouraging, there is more work ahead to turn quantum-enhanced optimization into a practical technology. The design of next-generation annealers must facilitate the embedding of problems of practical relevance. For instance, we would like to increase the density and control precision of the connections between the qubits as well as their coherence. Another enhancement we wish to engineer is support for representing not only quadratic optimization but higher-order optimization as well. This requires that not only pairs of qubits but also larger sets of qubits can interact directly.
Google’s quantum hardware group is working on these improvements which will make it easier for users to input hard optimization problems. For higher-order optimization problems, rugged energy landscapes will become typical. Problems with such landscapes stand to benefit from quantum optimization because quantum tunneling makes it easier to traverse tall and narrow energy barriers.
Quantum annealing (QA) has been proposed as a quantum enhanced optimization heuristic exploiting tunneling. Here, we demonstrate how finite range tunneling can provide considerable computational advantage. For a crafted problem designed to have tall and narrow energy barriers separating local minima, the D-Wave 2X quantum annealer achieves significant runtime advantages relative to Simulated Annealing (SA). For instances with 945 variables this results in a time-to-99%-success-probability that is ∼ 100 million times faster than SA running on a single processor core. We also compared physical QA with Quantum Monte Carlo (QMC), an algorithm that emulates quantum tunneling on classical processors. We observe a substantial constant overhead against physical QA: D-Wave 2X runs up to ∼ 100 million times faster than an optimized implementation of QMC on a single core. To investigate whether finite range tunneling will also confer an advantage for problems of practical interest, we conduct numerical studies on binary optimization problems that cannot yet be represented on quantum hardware. For random instances of the number partitioning problem, we find numerically that QMC, as well as other algorithms designed to simulate QA, scale better than SA and better than the best known classical algorithms for this problem. We discuss the implications of these findings for the design of next generation quantum annealers.
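To make the number partitioning problem mentioned above concrete: it asks whether a set of numbers {a_i} can be split into two subsets with equal sums, and it maps to minimizing the Ising cost H(s) = (Σ_i a_i s_i)², where the spin s_i = ±1 records which subset a_i joins. A minimal brute-force sketch (names are our own; real random instances are far too large for enumeration):

```python
from itertools import product

def partition_energy(a, s):
    """Ising cost for number partitioning: H(s) = (sum_i a_i s_i)**2,
    the squared difference between the two subset sums."""
    return sum(ai * si for ai, si in zip(a, s)) ** 2

def best_partition(a):
    """Brute-force search over all 2^n spin assignments (small n only)."""
    best = min(product((-1, 1), repeat=len(a)),
               key=lambda s: partition_energy(a, s))
    return best, partition_energy(a, best)
```

For a = [4, 5, 6, 7, 8] this finds a perfect partition ({4, 5, 6} against {7, 8}, energy 0). Random instances of exactly this cost function are where the numerical studies above report QMC-style algorithms scaling better than SA.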
D-Wave processors are definitely quantum
Whether computational quantum tunneling has indeed been realized in D-Wave processors has been a subject of substantial debate. This debate has now been settled in the affirmative by a sequence of publications demonstrating that quantum resources are present and functional in the processors. Indeed, these studies examined the performance of the D-Wave device on problems where eight-qubit cotunneling events were employed in a functional manner to reach low-lying energy solutions.
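To illustrate the setting in which such cotunneling events matter, here is a simplified eight-spin weak-strong cluster pair, with fields and couplings chosen by us for illustration rather than taken from the hardware experiments. The "strong" cluster is pinned by its local field; the "weak" cluster's field points the wrong way, creating a false minimum that can only be escaped by flipping all four weak spins collectively:

```python
from itertools import product

# Toy weak-strong cluster pair (illustrative parameters, not hardware values).
# Spins 0-3: "strong" cluster, local field -1.0 (prefers s = +1).
# Spins 4-7: "weak" cluster, local field +0.44 (prefers s = -1).
H = [-1.0] * 4 + [0.44] * 4
J = {}
for cluster in ((0, 1, 2, 3), (4, 5, 6, 7)):
    for i in cluster:
        for j in cluster:
            if i < j:
                J[(i, j)] = -1.0          # ferromagnetic within each cluster
for i in range(4):
    J[(i, i + 4)] = -1.0                  # ferromagnetic links between clusters

def cluster_energy(s):
    e = sum(h * si for h, si in zip(H, s))
    e += sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
    return e

def is_local_minimum(s):
    """True if every single-spin flip strictly raises the energy."""
    e0 = cluster_energy(s)
    for i in range(len(s)):
        t = list(s)
        t[i] = -t[i]
        if cluster_energy(t) <= e0:
            return False
    return True

ground = min(product((-1, 1), repeat=8), key=cluster_energy)
trapped = (1, 1, 1, 1, -1, -1, -1, -1)    # weak cluster follows its own field
```

Here `trapped` is a local minimum: no single-spin flip lowers its energy, yet it sits well above the global minimum (all spins up) and differs from it on all four weak-cluster spins at once. That is the tall, narrow barrier that a collective tunneling event can cross and single-flip dynamics cannot.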
A Remark on Scaling
Certain quantum algorithms have asymptotically better scaling than the best known classical algorithms. While such asymptotic behavior can be of tremendous value, this is not always the case; moreover, there can be substantial value in quantum algorithms which do not show asymptotically better scaling than classical approaches. The first reason for this is that current quantum hardware is restricted to rather modest problem sizes of fewer than about 1000 qubits. At the same time, when performing numerical simulations of quantum dynamics, in particular when doing open-system calculations, we are often limited to problem sizes smaller than 100 qubits. Extrapolating from such finite-size findings can be misleading, as it is often difficult to determine whether a computational approach has reached its asymptotic behavior.
When forecasting the future promise of a given hardware design, there is a tendency to focus on how runtime scales with the number of qubits. However, this perspective is not necessarily helpful. For instance, when looking at runtime as a function of qubit number, one may conclude that the measurements reported here indicate that QMC and physical annealing have a comparable slope and scale similarly, and that therefore the upside of physical annealing is bounded.
Improving temperature and precision control and having richer connectivity can have more impact than just scaling up qubit numbers
However, the large and practically important prefactor depends on a number of factors such as temperature. Furthermore, we expect future hardware to have substantially better control precision, substantially richer connectivity graphs and dramatically improved T1 and T2 times. With such changes, next generation annealers may drastically increase the constant separation between algorithms, leading to very different performance from generation to generation.
To illustrate how dramatic this effect can be: when we ran smaller instances of the weak-strong cluster networks on the older D-Wave Vesuvius chips, we predicted that at 1000 variables D-Wave would be 10,000 times faster than SA. In fact, we observed a speedup of more than a factor of 100 million.
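For context, speedup figures like these compare time-to-99%-success: if a single anneal of duration t succeeds with probability p, the device is rerun until the cumulative probability of having seen the answer reaches 99%. A small sketch of this standard bookkeeping (the function name is our own):

```python
import math

def time_to_target(t_run, p_success, target=0.99):
    """Wall-clock time to reach the target cumulative success probability
    by repeating independent runs, each succeeding with p_success."""
    if p_success >= target:
        return t_run
    # After R runs, the failure probability is (1 - p)^R; solve for R.
    reps = math.ceil(math.log(1 - target) / math.log(1 - p_success))
    return reps * t_run

# Example: a 20-microsecond anneal succeeding 0.1% of the time needs
# ceil(ln(0.01) / ln(0.999)) = 4603 repetitions to reach 99%.
```

Comparing two solvers by the ratio of their time-to-target captures both the per-run duration and the per-run success probability in a single number.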
This is because certain noise parameters were improved and the new dilution refrigerator operates at a lower temperature. Similarly, we suspect that a number of previous attempts to extrapolate D-Wave runtimes to 1000 qubits will turn out to be of limited use in forecasting the performance of future devices. For this reason, the current study focuses on runtime ratios that were actually measured on the largest instances solvable using the current device, rather than on extrapolations of asymptotic behavior which may not be relevant once we have devices which can attempt larger problems.
For higher-order optimization problems, rugged energy landscapes will become typical. As we saw in our experiments with the D-Wave 2X, problems with such landscapes stand to benefit from quantum optimization because quantum tunneling makes it easier to traverse tall and narrow energy barriers. Therefore, we expect that quantum annealing might also deliver runtime advantages for problems of practical interest, such as K-th order binary optimization with large K.
More work is needed to turn quantum-enhanced optimization into a practical technology. The design of next-generation annealers must facilitate the embedding of problems of practical relevance. For instance, we would like to increase the density and control precision of the connections between the qubits as well as their coherence. Another enhancement we wish to engineer is support for representing not only quadratic optimization but higher-order optimization as well. This requires that not only pairs of qubits but also larger sets of qubits can interact directly. Providing such improvements will make it easier for end users to input hard optimization problems.
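To illustrate why direct higher-order couplings matter: on a pairwise-only annealer, a cubic term must first be reduced to quadratic form with an auxiliary variable, consuming extra qubits and coupler precision. Here is a sketch of the standard Rosenberg-style reduction in 0/1 (QUBO) variables; the names are ours, and the ±1 Ising form differs only by a change of variables:

```python
def cubic(x1, x2, x3):
    """Third-order 0/1 monomial that pairwise hardware cannot couple directly."""
    return x1 * x2 * x3

def quadratic_gadget(x1, x2, x3, w):
    """Quadratic substitute: the auxiliary bit w stands in for the product
    x1*x2, and the penalty term forces w = x1*x2 at any minimum over w."""
    penalty = x1 * x2 - 2 * (x1 + x2) * w + 3 * w
    return w * x3 + 2 * penalty

# Example for (x1, x2, x3) = (1, 1, 1):
# w = 0 gives 0 + 2*1 = 2, w = 1 gives 1 + 2*0 = 1, so the minimum is 1,
# matching the cubic term.
```

Every cubic term costs one extra qubit this way, and higher orders cost more; hardware that couples larger sets of qubits directly would sidestep this overhead entirely.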
The work presented here focused on the computational resource that is experimentally most accessible for a quantum annealer: finite range tunneling. However, this picture is far from complete. A coherent annealer could accelerate the transition through saddle points, an issue that slows down the training of deep neural networks, for reasons similar to those that make a quantum walk spread faster than a classical random walk. It could also dramatically accelerate sampling from a probability distribution via the mechanism of many-body delocalization. The computational value of such physics still needs to be properly understood and modeled.
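The walk comparison can be made concrete with a toy model: a discrete-time Hadamard quantum walk on a line spreads ballistically, with a standard deviation growing linearly in the number of steps, while a classical random walk spreads diffusively, as √steps. A self-contained sketch (this illustrates walk spreading only, not the annealer itself):

```python
import math

def hadamard_walk_std(steps):
    """Standard deviation of the position distribution of a discrete-time
    Hadamard quantum walk on the line, started at the origin."""
    n = 2 * steps + 1            # positions -steps .. +steps
    up = [0j] * n                # amplitudes with coin state "up"
    dn = [0j] * n                # amplitudes with coin state "down"
    # Symmetric initial coin state, so the walk spreads without left/right bias.
    up[steps] = 1 / math.sqrt(2)
    dn[steps] = 1j / math.sqrt(2)
    r = 1 / math.sqrt(2)
    for _ in range(steps):
        new_up, new_dn = [0j] * n, [0j] * n
        for x in range(n):
            a, b = up[x], dn[x]
            if a == 0 and b == 0:
                continue
            # Hadamard coin flip, then a coin-dependent shift:
            # the "up" component moves right, the "down" component left.
            if x + 1 < n:
                new_up[x + 1] += r * (a + b)
            if x - 1 >= 0:
                new_dn[x - 1] += r * (a - b)
        up, dn = new_up, new_dn
    probs = [abs(a) ** 2 + abs(b) ** 2 for a, b in zip(up, dn)]
    mean = sum(p * (x - steps) for x, p in enumerate(probs))
    var = sum(p * (x - steps - mean) ** 2 for x, p in enumerate(probs))
    return math.sqrt(var)
```

Already after a few dozen steps the quantum walk's spread is well above the classical √steps, and the gap widens linearly with the number of steps.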